How responsible AI can boost patient outcomes

Lisa Jarrett, an artificial intelligence expert at PointClickCare, previews her HIMSS24 session that will dive into AI topics including transparency, fairness and explainability.
By Bill Siwicki
11:48 AM

Leo and his best friend Lisa Jarrett, senior director, AI and data platform, at PointClickCare

Photo: Lisa Jarrett

Hospitals and health systems need to understand how to balance the many new opportunities artificial intelligence brings for improving patient outcomes with the imperative to deliver AI-enabled products responsibly.

There are ethical and regulatory considerations alongside data privacy. And there are principles of what is known as "responsible AI" that are in active use in products today.

Lisa Jarrett, senior director, AI and data platform, at PointClickCare, will discuss all of these issues in an educational session at the HIMSS24 Global Conference & Exhibition entitled "Responsible AI to Improve Patient Outcomes."

Transparency and fairness

With the extraordinary promise of AI comes an equally enormous imperative to use AI in ways that augment clinicians' and caregivers' work with transparency and fairness, Jarrett said.

"As we weigh the opportunities for AI’s use, we also need to evaluate and design in ethics from the earliest plan through customer use and ongoing management and measurement," she explained. "In healthcare, we need to incorporate the core values of responsible AI and go even farther to consider the diverse ecosystem of patients, care environments, caregivers and clinicians that will either use AI features directly or be impacted by those features.

"To ensure successful use and positive impact, active partnership with clinicians and users to learn their questions and feedback on how AI impacts their daily activities is critical," she continued. "Health IT leaders need to understand how responsible AI principles come into play across the ecosystem of users and health delivery environments to ensure that critical questions are answered from the start and through the lifecycle to support effective adoption."

Legislation and regulation for AI are emerging, and industry groups are developing and sharing principles for responsible AI in clinical decision support.

Required responsible AI practices

"Diverse perspectives exist across clinicians, delivery environments, etc., about what required responsible AI practices should be," Jarrett said. "The recent HHS ONC HTI-1 provision for algorithm transparency offers more detailed guidance for AI uses in healthcare. HHS outlined a framework called FAVES (Fairness, Appropriateness, Validity, Effectiveness and Safety).

"This is a practical and meaningful framework to ensure a consistent, baseline set of information about algorithms used to support their decision making," she continued. "The approach PointClickCare takes builds on these principles, engaging early and often with the clinicians who will be users to integrate their questions and concerns into the product."

This is critical to ensuring that predictions will be received positively and to understand how to build customer trust, she added.

"As an example, for the development of a predictive return-to-hospital algorithm that’s active in both Pacman and Performance Insights, users including case managers, nurses and medical directors reviewed content and established a human baseline against which to compare algorithmic predictions and derive accuracy metrics," Jarrett noted.

"There is no one size fits all; unique considerations apply to primary and edge use cases, and different personas have varying perspectives and concerns," she continued. "Responsible AI values give a starting point for the design, training and deployment of algorithms. Product teams need to start with a framework and then dive deeper and adapt based on the use case and users."

Data security and privacy

"Explainability" and transparency about the data used in algorithm development and evaluation are required, alongside data security and privacy, to earn the trust of users in hospitals and health systems, she added.

An important takeaway for attendees of Jarrett's session: For IT leaders, evaluating the responsible AI practices behind AI-driven or AI-enabled systems is as important as evaluating the quality of those systems themselves, she said.

"On behalf of their users, whether it be clinicians or caregivers, they need to look for and ask questions about the explainability of algorithms, how they’re developed, and how the product incorporates feedback and adaptation into ongoing monitoring and management," she explained. "These questions, and the availability of responsible AI information from the product to answer them, are critical to evaluate, particularly as hospitals and health systems grow their portfolio of AI-enabled tools.

"Health IT leaders are critical to ensuring a visible responsible AI supply chain, which should be considered just as important as proof of a trusted security software supply chain," she continued. "Understanding and acceptance at the user level to know what’s behind the curtain is a prerequisite for effective adoption and use. Health IT leaders know their users, their use cases, and the thresholds that their users will or won’t accept for trust."

Transformational opportunities with AI

On another front, clinicians are fundamental both to identifying transformational opportunities with AI and to helping raise the bar on responsible AI in clinical decision support, Jarrett said of further topics in her HIMSS24 session.

"PointClickCare's experience with predictive algorithms is that there’s a wide range of acceptance or skepticism within the same persona and that it's imperative to incorporate sufficient volume to develop a solid baseline, then revisit and adjust based on changes," she said.

"This proactive process by the product developer is one part of what health IT leaders should look for as they evaluate AI-enabled solutions," she continued. "Only with clinical collaboration and direct engagement throughout AI product development can we both reach for the stars and make sure there’s an unobstructed view in the telescope."

The session, "Responsible AI to Improve Patient Outcomes," is scheduled for March 12, from 10:30-11:30 a.m. in room W208C at HIMSS24 in Orlando. Learn more and register.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
