Over the past decade, significant investments have been made in companies creating artificial intelligence (AI) applications for health and healthcare. But while there have been successes, particularly in medical imaging, the industry is notorious for not yet living up to its potential: think IBM Watson.
The slow adoption of AI in healthcare stems from the fact that healthcare AI sits on the border between two major industries: healthcare and technology. And as at any border between two nations, there are significant differences on each side.
During my career, I have spent time on each side. Now, as CEO of a company working at this frontier, I have developed a deeper understanding of the differences that create barriers to mutual innovation. For AI for health to realize its potential, healthcare and tech companies need to keep the big picture in mind: healthcare is about saving lives.
Some of the barriers between healthcare and technology are quite tangible. Healthcare is heavily regulated; technology is not. Technology makes extensive use of open-source software and libraries; healthcare tends to use proprietary software. But these differences are more like which side of the road you drive on or which currency you use: they make crossing a border inefficient, but they are ultimately resolvable.
It is the cultural differences that can be much more difficult to manage.
An important cultural difference is how each side weighs average benefits against individual harms when evaluating innovation. In technology, machine learning algorithms typically optimize for average benefit. The healthcare industry, on the other hand, pays more attention to individual harms, not wanting innovation to come at the cost of worse outcomes for even a few patients. The challenge of technology and healthcare collaboration arises not because one side is wrong, but because both sides are right.
Crossing the barriers
The complexity of these cultural differences is behind a lesson that Cornerstone AI, the company I run, recently learned. Our largest customer has health data on more than 30 million patients that needed to be cleaned algorithmically. The customer is certainly interested in aggregate metrics, such as net error reduction and net increase in complete data, which capture the overall value of the data as a whole. But the customer is also interested in ensuring that no data is damaged by the process, even for just one of those 30 million patients. As a result, the AI software we built had to meet both standards, a higher bar that took much longer to clear.
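That dual standard can be made concrete with a small sketch. The function and the per-patient error counts below are hypothetical, invented purely for illustration; this is not Cornerstone AI's actual pipeline or metrics, just a minimal way to express "improve on average, harm no one":

```python
import numpy as np

def passes_both_standards(errors_before, errors_after):
    """Check a data-cleaning run against two standards:
    an aggregate one (net error reduction across all records)
    and an individual one (no single patient's record gets worse).

    errors_before / errors_after: per-patient error counts,
    one entry per patient record.
    """
    errors_before = np.asarray(errors_before)
    errors_after = np.asarray(errors_after)

    # Technology-style metric: net improvement across the whole dataset.
    net_improvement = errors_before.sum() - errors_after.sum()

    # Healthcare-style constraint: no individual patient is harmed.
    harmed = np.flatnonzero(errors_after > errors_before)

    return bool(net_improvement > 0) and harmed.size == 0

# A run that helps on average but damages one record fails the bar:
print(passes_both_standards([3, 2, 5], [1, 0, 6]))  # False: third patient got worse
print(passes_both_standards([3, 2, 5], [1, 0, 4]))  # True: net gain, nobody harmed
```

The first call would pass a purely average-based test (10 total errors down to 7), which is exactly why the individual-harm check has to be a separate, non-negotiable condition rather than a term folded into the aggregate score.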
Cultural differences extend to how each side views software automation and algorithmic decision-making. On the technology side, having a person review the prediction every time the algorithm runs can be considered a bad, non-scalable business model. On the healthcare side, having a doctor review every algorithmic diagnosis can be considered good medical practice. Closing this gap is essential for the growth of healthcare AI. For example, technology companies can adopt the principles of clinical trials in reporting AI results, which would provide more confidence in the underlying algorithms. And healthcare can follow technology’s lead in recognizing that cloud-based and open-source software are not incompatible with data security and privacy.
Here is a personal example. When my daughter was a baby, she had a cold and developed a fever. She ended up in the hospital with a diagnosis of meningitis. There are two types of meningitis: viral, which is usually mild, and bacterial, which can be very serious. Distinguishing between the two takes a few days, which felt like a lifetime to her parents. Doctors recommended starting intravenous antibiotics immediately because, although antibiotics do nothing against viruses and come with potential side effects, the risks of untreated bacterial meningitis were greater.
As data scientists, my wife and I asked what the odds were that our daughter had viral meningitis versus bacterial meningitis. The answer we got was 50-50, so we decided to continue with the treatment. But then we spent hours researching on PubMed and found a published model that could estimate that probability. We manually calculated the model’s prediction for her case and found that it estimated a 98% chance of viral meningitis versus only 2% for bacterial. We breathed a sigh of relief.
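To give a sense of what "manually calculating the model's prediction" involves, here is a sketch of evaluating a logistic-regression-style clinical prediction model by hand. The coefficients and feature names are made up for illustration and are not the published model from this story; they are simply chosen so the toy example lands near the same 98%:

```python
import math

def viral_meningitis_probability(coefficients, intercept, features):
    """Evaluate a logistic-regression-style prediction model by hand:
    log-odds = intercept + sum(coefficient * feature value),
    probability = 1 / (1 + exp(-log_odds))."""
    log_odds = intercept + sum(
        coefficients[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical coefficients, for illustration only -- NOT the
# actual published model referenced in the story.
coefs = {"csf_wbc_low": 1.8, "no_seizure": 1.2, "age_under_2": 0.9}
patient = {"csf_wbc_low": 1, "no_seizure": 1, "age_under_2": 1}

p = viral_meningitis_probability(coefs, intercept=0.0, features=patient)
print(f"Estimated probability of viral meningitis: {p:.0%}")  # 98% with these made-up numbers
```

The arithmetic is simple enough to do with pen and paper, which is the point: the barrier was never computation, it was that the model lived in a journal article instead of the hospital's records system.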
If this model had been integrated into the hospital’s medical records system and instantly available for the doctor to review with us, we would have been much more reassured. We probably would have continued the treatment anyway, because 2% means something very different when it’s your little girl than it does in an academic calculation, but others in a similar situation might not.
Decisions ultimately need a doctor, parent, or other human being to balance new AI information with the specifics of each situation. Personalized prediction should be accessible to everyone, not just those who have two nerdy parents who can devote time to this research.
Moving towards AI for health
I’m sharing stories from my company and my daughter to illustrate the complexity of what happens when AI models cross the border between technology and healthcare and see the “First, do no harm” sign.
Along with the promise of AI for health comes the humility that those of us who develop health algorithms have learned: algorithms are only as good as the data that goes into them and the humans who interpret their results.
The good news is that there is more momentum than ever in AI for health. Health data is emerging from proprietary, siloed systems to serve as input to machine learning algorithms. The tangible barriers between technology and healthcare are falling. Cultural barriers will take longer, but companies focused on AI for health can help overcome them too, with a healthy respect for what humans and AI each uniquely contribute. As with cultures generally, when the people who create products at this intersection respect and celebrate the contributions of each side, they will be able to fully realize the promise of AI for health.
Michael Elashoff is the CEO and co-founder of Cornerstone AI.