Author: Chris Tyne | Date: November 3, 2016

Robot Overlords in Healthcare? Not Likely. And Not Us.

Random and Relevant: Thoughts on AI and Why healthfinch Isn't "Going There" Right Now
by Chris Tyne, VP Product

Before we welcome our new robot overlords, we need to understand what they will be doing. Artificial intelligence and machine learning are on the rise and have shown real promise in a variety of industries. Technology is evolving, and fast. But our understanding of AI, and of its potential impact on our lives, is not evolving at the same pace. Generally, we humans still fear intelligent electronics, and it's not hard to figure out why. One bad encounter with Siri, and we get a little wigged out.

It is human nature to distrust things we do not understand, and that distrust only deepens when we are not given a suitable explanation. Without a proper understanding of how a computer is acting on our behalf, there will always be a level of anxiety surrounding the outcome. It is vital that we trust the electronics around us to carry out their functions reliably and predictably.

The anxiety around AI is amplified when it comes to healthcare. There are great companies with interesting ideas about how to use AI to help patients, yet adoption remains very low. Healthcare has a very small margin for error and essentially no room for ambiguity, for good and obvious reasons. When I talk to physicians about these new technologies, the biggest concern I hear is "How can I know the computer is right?" This is a fundamental problem with machine learning and artificial intelligence: the algorithms are constantly changing and growing, and there is rarely a clear explanation of why one answer was chosen over another.

Watson is a prime example of artificial intelligence and machine learning in the healthcare industry, and it is doing great things as a decision support tool. Watson takes in data and provides recommendations, along with confidence scores and supporting evidence, to aid many aspects of healthcare. But while it is well known that Watson will not always be right, it is far less clear why the system comes to a given conclusion.

You may remember that a few years back, IBM pitted Watson against some of the most famous Jeopardy! competitors. Watson did a downright amazing job through most of the game and easily beat the human contestants. But when it came to the final clue, under the category "U.S. Cities" ("Its largest airport was named for a World War II hero; its second largest, for a World War II battle"), Watson responded with "What is Toronto?"

Obviously, Toronto is a major Canadian city, and there is no U.S. city of Toronto with such airports. But the problem is not that the answer was wrong; the real problem is that even the IBM team could not explain why it happened.


Giving healthcare providers transparency into what their system is doing is vital to the adoption of, and trust in, automation in the electronic medical record. It is essential that doctors know the rules that govern the care of their patients so they can rest easy as the machines start working for them.

I think AI does have a place in healthcare, eventually, but not until we have full visibility into the decision-making process and its recommendations. I don't think physicians will trust something they can't "see" when they are ultimately responsible for patient outcomes. That's why our product team is not pursuing AI solutions for decision support at this time. Instead, we're designing protocols that are powered by clinical intelligence and can be customized to the needs of the provider or health system. There is always full visibility into the recommendation.
