
Specific Risks with AI in Integrated Risk Management in Healthcare

Unlike a marketing company that can essentially plug in an off-the-shelf AI solution and hit the ground running, a healthcare organization must genuinely evaluate the risks of implementing AI in processes like integrated risk management (IRM).

Because the outcomes of these risks can drastically affect patient health and safety, it’s important for decision makers to understand why out-of-the-box AI solutions can fall short in healthcare AI risk management and how proprietary AI can mitigate some of these risks.

According to a recent article in The Journal of AI, some of the risks of AI in hospital IRM include misinterpretation of results in diagnostic tests, ethical concerns around patient privacy and consent, and reduced human oversight and accountability (Božić, 2023).

In the article, Božić further points to the need for close collaboration between those with “domain knowledge,” i.e., an in-depth understanding of the healthcare setting in which the AI will be used, and those with the technical knowledge to build the AI solutions themselves.

Another study found that clinical data sets can vary from hospital to hospital based on how a specific team collects and classifies data, a phenomenon called “hospital-specific biases” (Muley et al., 2023). That data is then used to train an AI tool, and if a hospital’s data does not correspond to the original data set used to train the AI healthcare solution, the usefulness and accuracy of that solution drop drastically.
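To make that failure mode concrete, here is a minimal, purely illustrative Python sketch (synthetic data only, not drawn from Muley et al.): a simple model is trained on records coded the way a hypothetical Hospital A records them, then evaluated on the same underlying clinical signal coded under a hypothetical Hospital B’s conventions. The make_hospital_data helper and every number in it are invented for illustration.

```python
# Illustrative only: synthetic data and a hypothetical helper, with numbers
# chosen to exaggerate the effect of hospital-specific coding differences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_hospital_data(n, offset, scale):
    """Simulate one lab value per patient under site-specific coding.

    'offset' and 'scale' stand in for how a particular team collects,
    calibrates, and codes the same underlying clinical signal.
    """
    raw = rng.normal(size=(n, 1))           # underlying clinical signal
    labels = (raw[:, 0] > 0).astype(int)    # same diagnostic rule everywhere
    coded = raw * scale + offset            # hospital-specific units/coding
    return coded, labels

# Hospital A: the data the vendor's model was trained on
X_a, y_a = make_hospital_data(1000, offset=0.0, scale=1.0)
# Hospital B: same medicine, different collection and coding conventions
X_b, y_b = make_hospital_data(1000, offset=5.0, scale=3.0)

model = LogisticRegression().fit(X_a, y_a)
print("Accuracy at Hospital A:", accuracy_score(y_a, model.predict(X_a)))  # high
print("Accuracy at Hospital B:", accuracy_score(y_b, model.predict(X_b)))  # much lower
```

Even in this toy setup, accuracy drops sharply at Hospital B simply because the inputs are coded differently, which is the same failure mode an off-the-shelf tool faces when a hospital’s data diverges from the data the tool was trained on.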

These are essential sticking points for anyone looking to an outside vendor for an AI healthcare solution. The AI risk management inherent in a pre-built system is not built on collaboration between healthcare providers and the programmers who created the data-processing models underneath the layers of technology.

Additionally, variation in how data is collected from hospital to hospital can affect how accurately an AI healthcare solution performs when its learning algorithms were not trained on proprietary data sets.

Therefore, rather than being an aid to IRM processes in a clinical setting, the AI solution itself can pose a risk by carrying large blind spots for trends specific to healthcare settings, by weighing data inaccurately, or by offering recommendations to clinicians that are hard to understand or completely inaccurate.

How Proprietary Solutions Can Simplify AI Risk Management in Healthcare

In a symposium on implementing AI in healthcare, the keynote speakers agreed on several necessary ingredients for the smooth implementation of AI in healthcare settings: contextualization, lifecycle planning, and stakeholder involvement (Drysdale, 2020).

When implementing proprietary software, a healthcare organization is able to incorporate these principles for successful healthcare AI implementation directly into their AI solution.

Contextualization is the process of integrating the AI solution into the current operational and cultural framework of a healthcare setting. Most healthcare data is encounter-specific (Stanfill & Marc, 2019), meaning that most healthcare providers classify and code data according to their internal systems. A proprietary system addresses this reality by being built specifically for a healthcare system’s existing data sets and workflows.

Lifecycle planning, or the practice of assigning responsibility for updating and refining an AI healthcare solution over time, can also be addressed with a proprietary system. This is a major AI risk management advantage: rather than trusting an AI solution provider to keep adapting to the healthcare landscape and shifting data patterns over time, a healthcare provider can mitigate this risk directly with personnel who are intimately familiar with its systems.

Stakeholder involvement is perhaps the most crucial advantage of crafting a proprietary AI healthcare solution and implementing sound AI risk management. A solution can be built with specific buy-in and feedback from the clinicians, executives, and daily users who will be responsible for using it in real time to improve patient care and healthcare practices. Rather than relying on a programmer’s idea of ideal healthcare operations, your team can craft an AI solution that’s pointed directly at the data you deem most important and focused on solving problems specific to your healthcare context.

AI Risk Management Is in Your Hands

Above all, this blog aims to show you that there are options besides simply plugging in someone else’s AI solution, and that doing so may actually put your healthcare system at risk.

The flexibility and customization inherent in a proprietary healthcare AI solution can help your team ensure you’re doing the very best for your patients by mitigating the risks inherent in AI while also using AI to make life easier for your physicians and improve patient outcomes.

If you have more questions about implementing AI risk management in your healthcare AI solution, we’d love to talk.

References

Božić, V. (2023). Integrated Risk Management and Artificial Intelligence in Hospital. Journal of AI, 7(1), 63-80. https://doi.org/10.61969/jai.1329224

Drysdale, E. (2020, March 24). Implementing AI in Healthcare. ErikDrysdale.com. Retrieved August 11, 2024, from https://www.erikdrysdale.com/figures/implementing-ai-in-healthcare.pdf

Muley, A., Muzumdar, P., Kurian, G., & Prasad Basyal, G. (2023). Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework. Asian Journal of Medicine and Health, 21(10), 276-291. https://doi.org/10.9734/AJMAH/2023/v21i10903

Stanfill, M., & Marc, D. (2019). Health Information Management: Implications of Artificial Intelligence on Healthcare Data and Information Management. Yearbook of Medical Informatics, 28(01). https://doi.org/10.1055/s-0039-1677913