[Header image: a clear, frozen globe of ice hanging from a fir branch in a snowy forest, a visual metaphor for the transparency explainable AI should offer clinicians and patients.]

The Upside of Proprietary AI Healthcare Solutions: Trust

Why internal and external stakeholders respond positively to bespoke AI solutions

In previous articles, we’ve discussed how proprietary AI healthcare solutions can build trust with internal stakeholders. Unlike premade solutions, customized healthcare AI can be built with input from the clinicians, administrators and other users of the system, creating buy-in by incorporating feedback specific to your organization into the AI solution.


This trust carries into the patient-provider relationship because providers understand the machinery behind AI suggestions and can deliver those insights to the patient. This process falls under the umbrella of “explainable AI,” and in this blog we’ll dive deeper into how explainability is the bedrock of successful AI healthcare solutions.


What is explainable AI?

While implementation can vary from patient to patient and clinician to clinician, a 2023 literature review provides a useful working definition of explainable AI in healthcare:


Explainable AI refers to the capability of AI systems to provide human-understandable explanations for their decisions and recommendations. It enables clinicians and patients to comprehend the factors that contribute to an AI model’s output, fostering trust and facilitating informed decision-making. The importance of explainability in healthcare lies in enhancing patient safety, improving clinical decision-making, enabling effective collaboration between clinicians and AI, and ensuring ethical and regulatory compliance (Alam et al., 2023).


From this definition, we can break down explainable AI into two important parts: explainability as it relates to patients, and explainability as it relates to clinicians.


Explainability for Patients

Patient trust in AI is highly variable, shaped by culture, race and previous medical experiences. In a study categorizing individuals as having high or low trust in the medical system, those with low trust in human diagnoses regarded AI as similarly untrustworthy, while high-trust individuals regarded AI as less trustworthy than human diagnosis and care (Lee & Rich, 2021).


In addition to this initial mistrust, the study found that patients who received a diagnosis in partnership with AI varied in how much more information they wanted after the initial diagnosis. Some wanted more detail, complete with complex medical terminology; others wanted to move on to their care plan quickly. Doctors also noticed large differences between urban and rural populations in how much information patients wanted from the AI, as well as differences in preferences by level of education.


With these findings in mind, explainable AI becomes a crucial part of any bespoke AI healthcare solution. Patients come to their appointments with different preconceived notions of the healthcare system as well as different informational needs. While the exact combination of these factors is unique to each individual, one benefit of creating a solution specific to a healthcare organization is that population-level factors can inform what “explainable AI” means at scale in that setting.


Explainability for Clinicians

Just as transparency needs differ from patient to patient, clinicians vary in their expectations of an AI healthcare end product. Buy-in during the creation of a bespoke AI solution lets clinicians feel like stakeholders in the end product, and there are common functionalities and features that most clinicians will seek in an ideal solution.


The difference between explainability for clinicians and for patients is an example of domain-specific AI. Domain specificity can refer to industries at large, such as healthcare AI versus banking AI, or to role and knowledge base, in this case patients versus clinicians. Clinicians typically arrive with a few hypotheses about the cause of an illness, so their version of the AI needs to expose transparent reasoning about how patient tests feed into its diagnostic conclusions.


A doctor doesn’t need to understand every logic tree branching through the AI algorithm to generate the data on their screen; they simply need to understand enough about the final decision to notice if there has been an error in the AI’s logic. In contrast, a patient may not need to know the process of elimination behind the final decision, but may want to know what evidence was used to reach the conclusion.
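To make this distinction concrete, here is a minimal sketch of how a single prediction could be surfaced two ways. Everything in it is hypothetical: the toy risk model, feature names, weights and baselines are invented for illustration and are not drawn from the studies cited here. The pattern is the point: the clinician view exposes ranked score contributions so an error in the logic stands out, while the patient view summarizes the evidence in plain language.

```python
# Illustrative sketch: one prediction, two role-specific explanations.
# The model, feature names, weights and baselines below are hypothetical
# examples for this blog, not a real diagnostic tool.
import math

# A toy linear risk model: one weight per input feature, plus a bias term.
WEIGHTS = {
    "resting_heart_rate": 0.04,   # per beat/min above baseline
    "systolic_bp": 0.02,          # per mmHg above baseline
    "troponin_elevated": 1.5,     # lab flag, 0 or 1
}
BASELINES = {"resting_heart_rate": 70, "systolic_bp": 120, "troponin_elevated": 0}
BIAS = -2.0

def predict_with_contributions(patient):
    """Return a risk probability plus each feature's contribution to the score."""
    contributions = {
        name: WEIGHTS[name] * (patient[name] - BASELINES[name])
        for name in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    return probability, contributions

def clinician_view(probability, contributions):
    # Clinicians see the ranked numeric contributions, so an implausible
    # driver (an error in the logic) is easy to spot.
    lines = [f"Risk estimate: {probability:.0%}"]
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        lines.append(f"  {name}: {value:+.2f} toward score")
    return "\n".join(lines)

def patient_view(probability, contributions):
    # Patients see which evidence mattered, in plain language, without
    # the underlying arithmetic.
    drivers = [name.replace("_", " ") for name, v in contributions.items() if v > 0]
    return (f"Your estimated risk is {probability:.0%}. "
            f"This was based mainly on: {', '.join(drivers) or 'no elevated findings'}.")

patient = {"resting_heart_rate": 92, "systolic_bp": 138, "troponin_elevated": 1}
prob, contribs = predict_with_contributions(patient)
print(clinician_view(prob, contribs))
print(patient_view(prob, contribs))
```

The same prediction backs both outputs; only the framing changes, which is exactly the kind of role-specific design a bespoke solution can tune.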


Explainable AI in customized healthcare solutions

One of the key ways to involve clinicians as trusted stakeholders in designing a bespoke healthcare solution is to develop a clear understanding of what transparency means to your staff.


For example, a facility specializing in complex diagnoses, such as immunology or oncology, may need a more detailed, visible decision matrix for doctors to check against before deciding on patient treatment. At an orthopedic facility, the AI’s focus may instead need to be on accurately identified imaging findings that streamline diagnoses.


Unlike out-of-the-box solutions, customized healthcare AI can be adapted to your facility’s or organization’s needs over time. If the AI is consistently producing a misdiagnosis or other mistake, clinicians need transparency to see where its logic is breaking down. And transparency alone isn’t enough: if your team doesn’t have access to the root logic of your AI algorithm, they won’t be able to make the adjustments needed to fix future diagnoses.


Similarly, a customized AI healthcare solution allows your team to incorporate patient feedback on explainability and transparency and to put patient comfort and care first. Clinicians who can see inside the “black box” behind their AI make better decisions that they can explain to patients. Patients who understand their care plan are more likely to follow it.


If a customized AI healthcare solution seems like the right answer to your transparency needs, our firm specializes in helping organizations get the most out of AI with tailored tools. Reach out today to learn more.


References

Alam, M. N., Kaur, M., & Kabir, M. S. (2023). Explainable AI in healthcare: Enhancing transparency and trust upon legal and ethical consideration. International Research Journal of Engineering and Technology (IRJET). https://www.academia.edu/104494802/Explainable_AI_in_Healthcare_Enhancing_Transparency_and_Trust_upon_Legal_and_Ethical_Consideration


Lee, M. K., & Rich, K. (2021). Who is included in human perceptions of AI? Trust and perceived fairness around healthcare AI and cultural mistrust. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). https://www.academia.edu/51024230/Who_Is_Included_in_Human_Perceptions_of_AI_Trust_and_Perceived_Fairness_around_Healthcare_AI_and_Cultural_Mistrust


Pawar, U., O’Shea, D., Rea, S., & O’Reilly, R. (2020). Explainable AI in healthcare. Proceedings of Cyber Science. https://www.academia.edu/43933192/Explainable_AI_in_Healthcare