Reasons to Invite AI to Your Healthcare Team

AI Can't Do Everything

In "What AI Still Doesn't Know How to Do" (Wall Street Journal), Alison Gopnik describes artificial intelligence as the latest in a long line of communication methods, which she calls "cultural technologies." To help allay concerns stoked by recent claims that AI programs might be sentient, she explains that AI falls short in innovation contests with young children: a child can imagine an outcome and invent a way to pursue it, activities a machine cannot initiate on its own.

In addition to imagining and inventing, I would add that AI also falls short on judgment. Humans need to decide whether to use the outputs of computer programs, and how to use them ethically. Computers can't do that either… unless we tell them how.

For example, in cancer screening and other diagnostics, programs designed to read radiology images with machine learning can work wonders by looking for patterns and comparing a patient's radiology images to the millions of other images used to train such programs. But the programs are not sufficient on their own; we rely on clinicians to make a judgment about a diagnosis.

A couple of years ago, I took a course in the MIT Sloan Executive Program in General Management called Artificial Intelligence: Implications for Business Strategy, which I highly recommend.* Professor Thomas Malone and the faculty shared a theme that went something like this: instead of fearing that humans are being replaced by machines, we need to apply the strengths of each when we work together. That's an important message for all of us, healthcare leaders and health consumers alike. When we work human-with-human or human-with-machine, we can accomplish more.

AI Can Help Us Achieve Our Goals

Partnering with AI, rather than fearing it, offers the chance to make healthcare more effective and more efficient.

For example, an AI tool from Bayesian Health and The Johns Hopkins University can identify sepsis early and save lives by alerting physicians. Trained on data from medical records and clinical notes, the system reduced sepsis hospital deaths by 18.2%, Stat News reported, covering three real-world studies published yesterday. "One of the most effective ways of improving outcomes is early detection and giving the right treatments in a timely way," according to Suchi Saria, founder of Bayesian Health, which licensed the technology from Johns Hopkins Ventures.

Similarly, in cancer, AI can help with speed and accuracy. In what was a breakthrough at the time, a 2019 New York University Langone Medical Center study showed that a combination of artificial intelligence and radiologists was more successful at breast cancer diagnosis than either the human or the machine alone.

In that NYU study, the winning team consisted of two partners: a human, trained through thousands of hours in classrooms, hospitals, and outpatient centers; and a machine, trained on millions of previous images and their corresponding diagnoses. Given the shortage of radiologists, that partnership sounds especially helpful. The machine can churn through volumes of data at a speed humans cannot match; humans can interpret the results with judgment that machines lack, at least for now.
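The human-plus-machine idea above can be sketched in a few lines of code. This is a hypothetical illustration, not the NYU study's actual method: the function name, weights, and thresholds are all invented for the example. The idea is simply that two probability estimates, one human and one machine, can be blended, and that a large disagreement between them can be routed back to a human for a second look.

```python
# Hypothetical sketch: blending a radiologist's assessment with a
# model's score for a screening case. NOT the NYU study's method;
# all names, weights, and thresholds here are illustrative assumptions.

def combined_assessment(radiologist_prob: float,
                        model_prob: float,
                        weight: float = 0.5,
                        review_gap: float = 0.4) -> dict:
    """Blend two probability estimates and flag large disagreements.

    radiologist_prob: clinician's estimated probability of malignancy (0-1)
    model_prob:       model's estimated probability of malignancy (0-1)
    weight:           how much weight to give the model (0-1)
    review_gap:       disagreement size that triggers a second human read
    """
    blended = (1 - weight) * radiologist_prob + weight * model_prob
    needs_review = abs(radiologist_prob - model_prob) >= review_gap
    return {"blended_prob": blended, "needs_human_review": needs_review}

# Agreement: both readers see a low-risk case, so no extra review.
print(combined_assessment(0.10, 0.15))
# Disagreement: the machine flags something the human did not,
# so the case is routed back for a second human read.
print(combined_assessment(0.10, 0.80))
```

The design choice worth noticing is the disagreement flag: rather than letting either partner overrule the other silently, the sketch keeps a human in the loop exactly where human judgment matters most.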

In addition to relying on humans for judgment, we need physicians to engage with patients and their loved ones to convey sensitive information in, well, a human way, with a level of EQ that machines can't communicate; at best, the empathy would get lost in translation if they tried.

So What Are The Barriers?

Many in the public, private, and nonprofit sectors know early diagnosis can save lives. As an example, one nonprofit where I volunteer, the Brem Foundation to Defeat Breast Cancer, has as its mission to maximize women’s chances of finding early, curable breast cancer. They focus on education, access and advocacy as means to achieve the goal.

We can have a common goal of early diagnosis and apply multiple means to that end. There is not a single solution to improving healthcare. As we develop machine learning and other AI tools to help with screening, diagnosis, treatment, and follow-up, individuals and organizations in the healthcare system should welcome these tools to the care team.

Other than fear, what are the barriers to AI adoption? As noted earlier, one challenge is deciding when to use these tools and how to use them ethically. Another barrier is the quality of the data set. As Professor Gopnik noted in her article, AI programs are only as effective as the data we load into them. In other words, if the machine is trained on deficient data, it won’t succeed. 

Think of a different type of screening where this is the case: screening job candidates. If Human Resources hiring programs rely on data sets that are not sufficiently sensitive to the biases in our society, including biases we're not aware we hold, then those AI tools are not well suited to their task. When relied on, they may sacrifice quality and equity for speed and quantity. Likewise, training sets in healthcare need to thoughtfully correct for, rather than reflect, disparities in health. Real-world evidence (RWE) at least offers insight into the treatment of people who are excluded from clinical trials. But even RWE data reflect care delivered in a system with inherent biases in areas like access to and selection of treatment.

Like the child besting the computer, it will take humans, not machines, to imagine a health system with improved outcomes, higher quality, lower costs, greater access, and more equitable care. It will take humans plus machines to get us there faster.

(*Thanks to my instructors Raghu Bala and Nicholas Simigiannis for their teaching and guidance in that MIT Management course.)
