Dr. Obot: Manning the Machine

 

How AI could help doctors improve healthcare.

 
Illustration by Rosa ter Kuile for Are We Europe
 

If Grey’s Anatomy convinced you that it takes good-looking men and women in white coats to save lives, think again. While another McDreamy is busy charming Meredith Grey in the corridors of Seattle Grace Mercy West Hospital on the show’s latest season, the nature of the life-saving business we call medicine has changed drastically. A quick Google search is likely to reveal that the best doctor in the world is no longer a person wearing a white coat. In fact, it may not be a person at all, but an algorithm.

In 2016, Denmark-based start-up Corti put this to the test. To save lives, Corti has built an AI system that listens in on calls made to medical emergency dispatchers in Denmark and tries to detect whether the caller is describing a cardiac arrest. In one case, when a woman called the emergency service to report that her husband had fallen from the roof, the dispatcher began instructing her on how to care for him, assuming the man had probably broken his back. But Corti, which was eavesdropping on the call, predicted a case of cardiac arrest instead. According to the start-up’s CEO, Andreas Cleve, the AI system had identified a rattling background noise: the sound of the man gasping for air.

“The patient was gasping for air because his heart wasn’t beating, and the AI recognized the pattern. It turned out that the man had fallen off the roof because of a cardiac arrest,” said Cleve in an interview with the business magazine Fast Company. Because Corti was still in its test phase, the system was not authorized to send an alert to the dispatcher, and it was too late to save the man’s life.

However, since 2016, the start-up has been helping more medical dispatchers in Copenhagen make critical decisions about people in distress. According to a report from the World Economic Forum, Corti detects cardiac arrests 95% of the time, while (human) medical dispatchers in Copenhagen identify cardiac arrests over the phone only about 73% of the time.

“Corti also works well in recognizing several languages including English, French, Italian, and Danish. We are working on using these aspects of the model to detect if a dispatcher has missed important questions, so Corti can prompt them to ask and to also diagnose other diseases,” said Tycho Tax, Corti’s machine learning researcher.

For medical practitioners and companies alike, this raises the question: to what extent is a doctor’s purpose redefined by these technologies? Would we live in a safer and healthier world if life-and-death decisions were entirely handed over to artificial intelligence?

The potential of artificial intelligence in healthcare

Complex algorithms like Corti’s are increasingly being used in the healthcare sector to detect diseases, improve organ transplantation, assist radiologists with medical imaging, and make clinical administration more efficient.

One of the most common applications of AI in healthcare is machine learning, in which a computer learns to perform a specific task from data rather than from explicit instructions. Eric Horvitz, director of Microsoft Research Labs and one of the pioneers of AI in healthcare, found a way to use machine learning to alert a person to the possibility that they have Parkinson’s by analyzing large amounts of data on their search history and monitoring motor movements like mouse clicks and keystrokes. In a paper published in npj Digital Medicine in 2018, Horvitz and his colleague Ryen White found that, although Parkinson’s could be detected without the data on cursor movements, detection was more effective when it was included.

The power of machine learning is also evident in medical imaging. Ariella Shoham, vice president of marketing at Aidoc, an Israeli company that develops AI software to assist radiologists around the world, says radiologists often fail to spot critical information on scans due to time pressure. “Imagine a case where about ten CT scans are lying on a radiologist’s table and he doesn’t know which one is the most urgent, so he reads them on a first-come, first-served basis. But if one of these scans belongs to a person suffering from a very bad brain bleed, our software instantly alerts the radiologist, pointing out which scan is the most critical and the exact location where the bleeding is occurring. This saves many lives and has all been carried out through machine learning,” she says.

According to Janine Khuc, a data scientist at Pacmed, an Amsterdam-based company that develops software to aid doctors in clinical decisions, most of the data that enables a machine to “learn” comes from patients’ medical records: their age, gender and the symptoms they experienced while ill. “This data is just waiting to be used, and a machine can process it at a much faster rate than the human brain ever could, in turn helping doctors personalize medical treatment and helping hospitals manage readmissions,” she says.

Machine learning has also helped the pharmaceutical industry conduct research more efficiently and optimize its supply chains. Jonathan Wilkins, marketing director at the industrial automation equipment supplier EU Automation, explains how a Japanese study conducted by the Graduate School of Pharmaceutical Sciences at the University of Tokyo used machine learning to better predict the seizure-inducing adverse effects of drugs during preclinical development. “Data was collected on seizure-like neuronal activity from sections of brain tissue perfused with preclinical drugs. Using machine learning technology, the researchers were then able to identify the drugs that were likely to induce seizures in patients,” he said.

With machines venturing into areas of healthcare that human intelligence cannot reach, the next generation of doctors and medical dispatchers will likely be trained to work with artificial intelligence. But would that mean dehumanizing healthcare, with patients treated as mere diseases or just another medical file? Or could the use of AI in healthcare actually help doctors pay more attention to their patients by reducing their workload?

Machine versus doctor?

Dr. Amit Sastry, a surgeon at Palmetto General Hospital in Florida, U.S., has developed a blissful relationship with a robot in his operating room. With the robot docked above his patient and him sitting at a console a few feet away, Dr. Sastry controls the robot as if it were a video game, or “surgery with chopsticks,” as he describes it. “To me the robot is amazing, since it allows me to work with excellent surgical techniques that I could still apply in the traditional method, but with smaller, more precise incisions,” he says.

He asserts that he is still in full control of the robot. Moreover, its increased efficiency leaves him with more time to interact with his patients, who are surprisingly open to the idea of being cured by a robot. “Patients are not skeptical at all and they often think it sounds cool. Most patients I deal with have cancer and their real concern is to make sure that the tumor comes out. So they are okay with a robot being attached to them if it ultimately means that they are cured.”

Such surgical robots are programmed with “weak” artificial intelligence, which enables them to perform one particular task together with a human. This type of collaborative robotics increases efficiency and supports effective medical treatment. But could the use of such AI also affect a doctor’s decision-making process? After surgery, for example, a patient is usually kept under observation. During this period, many hospitals have begun using AI cameras that act as “bedsitters,” constantly watching the patient. If the camera spots any unusual activity, such as heavy breathing, a sudden seizure, or a patient tugging on their oxygen tubes, an automated alert is immediately sent to the doctor on call. The camera might say the patient needs to be treated, but the doctor might want to keep the patient under observation for a longer period of time.

Trisha Huddar, a speech-language pathologist at St. Mary Medical Center in California, U.S., says that because care from AI tools is bounded and linear, while human care is flexible, the ultimate decision remains in the hands of the human doctor. “Their analysis is based on several factors, like their own experiences handling such cases in the past, medical ethics and the patient’s wishes. The patient’s post-surgery condition is just one part of the decision-making process. Besides, these cameras have automated codes to send out alerts, but they are still constantly controlled by the people who developed them,” she says.

Moreover, with doctors acting as the link between a patient’s medical, social and financial commitments in a hospital, an empathetic human hand will always remain important, especially when it comes to communicating with the patient’s family. Corti’s machine learning researcher Tycho Tax agrees, and thinks the future of AI will never be a case of machine versus doctor. “We should use technology where it makes sense, while respecting human autonomy and realizing that humans have qualities technology should never want to replace,” he says.

However, even as healthcare adopts new forms of technology where possible, concerns over data privacy violations and the slow standardization of AI tools still stand in the way of cooperation between AI and doctors. While many medical professionals in the U.S. work with the latest AI tools, in hospitals and clinics across Europe, Asia, Africa, and Latin America, artificial intelligence is often absent altogether.

AI trends in Europe

A study by the European arm of the Healthcare Information and Management Systems Society found that, as of 2017, the European eHealth community still considered the U.S. the most advanced country in the use of AI in healthcare. The study stated that about 84% of health professionals in Europe were either unaware of AI tools or simply did not use any, with the exception of the Nordics (including Estonia) and the Netherlands. Overall, a lack of product maturity and a lack of trust among medical staff were cited as the reasons for the low uptake of AI tools throughout Europe.

“I think there is a large variation in the level of digitization between European countries, and the trust in using AI in healthcare might depend on that. Moreover, doctors are very well-educated in assessing medical information, but less so in assessing the mathematics behind an algorithm. A new way of guaranteeing the safety and clinical value of algorithms in the future would be helpful,” says Willem Herter, Pacmed’s director.

Another challenge is the lack of standardization of AI tools across Europe. According to Arjan Sammani, a Ph.D. candidate investigating AI applications in healthcare, “There are so many new tools being developed but none are validated and standardized. For example, if I say my patient has hypertension, my colleague in another country has to define it the same way. So all the data and risk scores for a disease have to be the same. But because it is so much work, people don’t do it and we have so many new machine learning risk scores generated, which creates a lot of chaos.”

Then there is the issue of data privacy. Implemented in May 2018, the EU’s General Data Protection Regulation treats a patient’s health data as sensitive personal data and generally prohibits processing it for research unless the patient gives consent.

“In Europe, the GDPR looks at informed consent and limits the uses of data for various purposes. But the real issue is that ‘consent’ is quite weak even in the best-case scenario today. ‘Boilerplate’ consent forms and blanket agreements that barely anyone reads are the backbone of this kind of health research,” says Vidushi Marda, an Indian lawyer working on AI matters at Article 19, a U.K.-based human rights organization.

But while data privacy laws protect patients against the nefarious effects of AI, who is protecting the doctors? For medical professionals’ trust in AI tools to grow, they might need some guarantee that these tools aren’t out to replace them. Dr. Quentin Defrenet, a resident in psychiatry at a hospital in France, says it will take time for a doctor’s human touch and a machine’s automated diagnosis to work together seamlessly. “It’s a matter of technological progress, funding, training, and general acceptance by society. The ideal situation would be an AI tool that could be adapted to each specific case a doctor has to deal with,” he says.

From the looks of it, AI won’t replace physicians, but it will personalize healthcare and help doctors diagnose diseases more accurately. So, at the end of the day, even if a Google search suggests that the world’s best doctor is an algorithm, Google will also tell you that a headache and a red nose mean you have a chronic illness and a week to live. Looking past the hype, the future of AI in healthcare is bright.


 

This article appears in Are We Europe #5: Code of Conscience

