Most healthcare organizations want to use AI for operational efficiencies. Framed that narrowly, they miss the opportunity to deploy it strategically. No wonder senior management and other staff are often skeptical.
I want to put forward a practical outline on how to think about healthcare innovation (including AI) strategically. Not just as “tech” to throw at a problem. Not as a shiny buzzword that means nothing.
MIT Technology Review and GE Healthcare published “The AI Effect,” a survey of 900 healthcare professionals, to examine the changes taking place as the industry adopts more artificial intelligence.
Let’s look at what the report says, then dig deeper to find more constructive ways to resolve some of the big challenges.
The primary case the report puts forward is that AI “is making health care more human.” It presents AI as a force set to disrupt healthcare, cataloguing the progress it has already made and the sweeping changes it will continue to bring to the industry as a whole.
Despite this, the real data and stories don’t always paint a rosy picture. The most public example is, of course, IBM Watson. Reporting on its implementations shows repeatedly that the return on such a solution was underwhelming. (See here, here, and here.)
I believe there is a structured way of approaching this problem to get the greatest benefits from AI in a healthcare setting.
I also believe that one-size-fits-all is not the right approach as you think about AI in your specific situation.
Let’s go back to the report. Some key survey results taken from the report are as follows.
- 79% will increase their budgets for AI applications.
- 72% of respondents to this survey show interest¹ in implementing AI.
- 74% of healthcare institutions are developing or planning² to develop AI application algorithms.
- 93% agree that AI has improved the speed and accuracy with which patient data is analyzed and shared.³
These numbers need greater context. Let’s try to understand the deeper meaning of these numbers.
¹ Interest is one of those nebulous words that mean nothing. Coming from a publication focused on technological innovation, “interest” is of course meant to suggest that 72 percent of leaders ARE implementing AI. But anyone who has ever sat across a table trying to close a sale understands that “interesting” can often be double-speak for “we’ll let others do AI in healthcare first because we’ve got more important things to do right now.”
² Planning is another word that means little. Organizations “plan to” become the leader in their field. Very few manage to do so. It is similar to my “planning to” become like Bruce Wayne.
³ By how much? What are the quantifiable benefits here? This kind of abstract language is what makes deciphering a lot of these reports so frustrating.
This tension plays out in the report’s next section, which covers the challenges.
- Less than half (~45%) of respondents believe that AI has helped increase time for consultations and for performing surgery and other procedures.
- In contrast, more than half of respondents planning to deploy AI raise concerns about adoption by medical professionals, support from top management, and technical support.
- Integrating AI applications into existing systems is challenging for 57% of respondents.
The one silver lining here is that 79% indicate that AI has helped avert healthcare worker burnout. But while some of these numbers are promising, they reveal a distinct lack of strategic thinking about the benefits of AI in a healthcare setting.
The report briefly summarizes all the challenges in a short section as follows:
“Among those was skepticism about the provable benefit and overall cost of AI as top factors hindering its adoption. Hospital administration is generally more skeptical than medical staff. Another hurdle is the disruptive impact that AI has on existing processes; a third is the difficulty of integrating AI applications into existing systems.”
The Juice is in the Details
The juice is in the details. What do you invest in? How do you deploy it, and through what process? How do you think about it in a structured yet holistic manner? Is AI even appropriate, or is there lower-hanging fruit available?
I want to propose a five-step process for medical professionals to think about Artificial Intelligence within their healthcare organization. This is about elevating the case for AI in healthcare as a matter of sound business strategy.
The Five-Step Process to Build Strategic Healthcare AI & Innovation Cases
First, Start with Organizational Challenges & Values
Ask yourself, what are the key challenges facing the organization right now? Is it doctor burnout and turnover? Is it frivolous lawsuits? Is it long patient wait times? Is it increasing costs? If so, where?
In short, which issues are having a strategic impact on the organization itself?
Compare and contrast these with the organization’s principles, values, and priorities. Perhaps long wait times are not so important to stakeholders if you compete on being a low-cost provider. Perhaps a perfect diagnosis is not as important as giving people care that is “good enough” (no, not every case is a life-and-death situation, and sometimes good enough is, well…good enough to let the patient’s body heal itself).
A deliberate approach to healthcare innovation in your organization starts with asking: what do the people we serve need? As leaders and decision-makers, this means understanding the needs of patients as well as the needs of providers themselves.
This is not about “healthcare innovation”; it is about “healthcare innovation for you and your organization.”
Second, Understand Root Causes
It is surprising how few organizations in healthcare settings properly map out the root causes of a problem or situation. Solving the top-layer problem can leave a deeper issue to fester and metastasize. Getting to the root of an issue is key.
I talk about root cause analysis in my Principles of Organizational Breakthroughs: A Practical Guide for Leaders, which you can download by signing up for the Clarity Weekly newsletter.
In brief, get to the root causes of the organization’s issues and priorities. Understand both the quantitative and qualitative manifestations of these root causes.
For example, Dushyant Sahani from the University of Washington Medical Center makes the case for using AI for smarter scheduling so that doctors can see more patients every day.
But perhaps the issue isn’t the scheduling of doctors’ time at all; perhaps it is under-staffing of doctors in the first place. In that case, “smart scheduling” is likely to cause greater burnout.
For such an issue, you can collect real data on what it costs to hire a doctor and what it costs the organization when one burns out. Comparing those figures against the benefits of AI will create a stronger case and lead to a better outcome.
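To make this concrete, here is a minimal back-of-envelope sketch in Python of the kind of comparison I mean. Every figure below is a hypothetical placeholder, not data from the report; substitute your organization’s own numbers. The structure of the comparison, not the values, is the point.

```python
# A rough sketch of the hiring-cost vs. burnout-cost comparison.
# All numbers below are hypothetical placeholders, not real data.

def annual_burnout_cost(physicians: int, turnover_rate: float,
                        replacement_cost: float) -> float:
    """Expected yearly cost of physician turnover driven by burnout."""
    return physicians * turnover_rate * replacement_cost

# Assumed inputs -- replace with your organization's data:
physicians = 120            # physicians on staff
turnover_rate = 0.07        # share leaving per year due to burnout
replacement_cost = 500_000  # recruiting, onboarding, lost revenue per departure

baseline = annual_burnout_cost(physicians, turnover_rate, replacement_cost)

# Suppose a proposed AI tool is projected to cut burnout-driven
# turnover by 15% and costs $400k per year -- again, assumptions.
ai_annual_cost = 400_000
with_ai = annual_burnout_cost(physicians, turnover_rate * 0.85, replacement_cost)

net_benefit = baseline - with_ai - ai_annual_cost
print(f"Baseline turnover cost: ${baseline:,.0f}")     # $4,200,000
print(f"Net annual AI benefit:  ${net_benefit:,.0f}")  # $230,000
```

Even a crude model like this forces the conversation onto the root cause (turnover driven by burnout) rather than onto the technology itself.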
Third, Create Options to Solve Root Causes
Is AI the right solution to solve these underlying root causes? If so, proceed further with understanding how to do it. If not, you have to use good old-fashioned Human Intelligence before deferring to anything Artificial.
Perhaps software is the right solution. Perhaps it isn’t.
To extend the example further: if the root cause is doctor burnout, an AI-powered smart-scheduling tool won’t help. What would be the most effective ways of dealing with doctor burnout? Would it be reducing patient workload? Reducing the paperwork? A better match between patients and a doctor’s interests and qualifications? Or something else entirely, such as allowing for greater doctor involvement in management?
Each of these questions points to a different solution. Some powered by Artificial Intelligence and some powered by Human Intelligence.
Fourth, Understand Second-Order Consequences
This is where organizations have been failing. As mentioned above, integrating AI applications into existing systems is challenging for 57% of respondents.
As a medical professional leader, it is your responsibility to think through the ripple effects of implementing such a change throughout the organization and understand the soft costs and cultural challenges ahead of time.
You need to create action plans and contingency plans for when these second-order consequences play out in your organization. When they surface, what will you do?
In the case of doctor burnout, say you choose to introduce AI tools to reduce paperwork. How will non-medical professionals interpret this change? How much training time would be required to onboard everyone onto the new platform? Does it affect other staff in the organization who have to interpret this data? How so? How accurate is it?
This is the time to involve all stakeholders in the discussion and foresee issues.
This is sometimes overlooked.
Dr. Rachael Callcut says: “It’s challenging working to move the field forward in a transformative way with artificial intelligence. There has to be alignment in vision, commitment to exploration, and mutual excitement. Everyone involved needs to be willing to push forward into a sometimes unproven space. If we are afraid to fail on a project and thus, don’t take it on, the opportunity to change the future will pass us by.”
The very reason this happens is that leaders in one silo don’t involve everyone. It is hard to have mutual excitement about a change you feel is being pushed down your throat. Involving stakeholders early in the process goes a long way.
An aside: I often notice a negative or adversarial force at play between medical professionals (who are used to giving advice for a living) and the staff, administrators, and others who have to run the organization. For any decision and its subsequent implementation to succeed, medical professionals need to let go of “doctor knows best,” while administrators need to engage with doctors earlier in the process and help them think through these second-order consequences more thoroughly.
Finally, Decide & Execute
Only after going through this exercise can you make an informed decision about AI in your healthcare organization. It also makes the internal sell much easier if you have involved different stakeholders in finding the root causes of the organization’s major concerns and in mapping out the second-order consequences of the options on the table.
What do you think?
There are major innovations possible in healthcare in the coming decade. But we need a systematic and strategic way of thinking about them.
What nuances, caveats, or other points would you bring to make the discussion of healthcare more strategic, systematic, and practical?