
AI Is at the Intersection of Safety and Equity in Healthcare


Artificial intelligence is poised to transform nearly every aspect of our lives, and healthcare is no exception: AI can support advances in clinical trials, patient outreach, image analysis, patient monitoring, drug development and more. Such progress, however, is not without risk. Hidden biases, reduced privacy, and over-reliance on opaque decision-making black boxes can cut against democratic values, potentially putting our civil rights at risk. The effective and equitable use of AI therefore depends on solving the ethical, safety, data privacy and cybersecurity challenges inherent to the technology.

To encourage ethical, unbiased AI development and use, President Biden’s Office of Science and Technology Policy drafted a “Blueprint for an AI Bill of Rights.” The Blueprint acknowledges the growing importance of AI technologies and their enormous potential for good while recognizing the risks that accompany them. It lays out core principles to guide the design, use, and deployment of AI systems so that progress does not come at the expense of civil rights; these principles will be key to mitigating risks and ensuring the safety of people who interact with AI-powered services.

This comes at a critical time for healthcare. Innovators are working to harness the newly unleashed powers of AI to radically improve drug development, diagnostics, public health, and patient care, but there have already been missteps: a lack of diversity in AI training data can unintentionally perpetuate existing health inequities.

For example, in one case an algorithm misidentified patients who could benefit from “high-risk care management” programs because it was trained on parameters chosen by researchers who did not take race, geography, or culture into account. Another company’s sepsis-prediction algorithm was implemented at hundreds of US hospitals without independent testing; a retrospective study later showed the tool performed remarkably poorly, raising fundamental concerns and reinforcing the value of independent, external review.

To protect people from algorithms that may be inherently discriminatory, AI systems should be designed and trained equitably so that they do not perpetuate bias. When trained on data that is unrepresentative of a population, AI tools can violate the law by favoring or disfavoring people based on race, color, age, medical conditions, and more. Inaccurate healthcare algorithms have been shown to contribute to discriminatory diagnoses, discounting the severity of disease in certain populations.

To limit bias, and even help eliminate it, developers must train AI tools on the most diverse data possible, making AI recommendations safer and more comprehensive. For example, Google recently launched an AI tool that identifies unintentional correlations in training datasets so researchers can be more deliberate about the data used for their AI-powered offerings. IBM has likewise created a tool to evaluate training dataset distribution and reduce the unfairness that is often present in algorithmic decision making. At Viz.ai, where I am the chief technology officer and co-founder, we also aim to reduce bias in our AI tools by implementing our software in underserved, rural areas and, in turn, collecting patient data that might not otherwise have been obtainable.
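
To make the idea of evaluating a training dataset’s distribution concrete, here is a minimal sketch of such a check. It is illustrative only, not Google’s or IBM’s actual tooling; the DataFrame, the column name, and the reference shares are all hypothetical. The check simply compares each group’s share of the training data with its share of a reference population and flags groups that fall short.

```python
# Minimal, illustrative dataset-distribution check (hypothetical data and
# column names; not Google's or IBM's actual tooling).
import pandas as pd

def representation_report(df: pd.DataFrame,
                          group_col: str,
                          reference_shares: dict,
                          tolerance: float = 0.8) -> pd.DataFrame:
    """Flag groups whose share of the training data falls below
    `tolerance` times their share of the reference population."""
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "expected_share": expected,
            "ratio": round(share / expected, 2),
            "under_represented": share < tolerance * expected,
        })
    return pd.DataFrame(rows)

# Hypothetical training set: group C is 5% of the data but 15% of the
# reference population, so it is flagged as under-represented.
train = pd.DataFrame({"race": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
reference_shares = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_report(train, "race", reference_shares))
```

The 0.8 default echoes the “four-fifths” rule of thumb used in US disparate-impact analysis; production toolkits such as IBM’s open-source AI Fairness 360 offer far richer metrics than this single ratio.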

Because safety is intertwined with equity, and with ensuring that medications are developed for diverse patient groups, all AI tools should be created with input from diverse experts who can proactively mitigate unintended, potentially unsafe uses of a platform that would perpetuate bias or inflict harm. Companies that use AI, or hire vendors who do, can guard against unsafe use by monitoring rigorously, confirming that AI tools are used as intended, and inviting independent reviewers to verify an AI platform’s safety and efficacy.

Finally, when it comes to algorithms involving health, a human operator should be able to step into the decision-making process to ensure user safety. This is especially important when a system fails with dangerous, unintended consequences, as when an AI-powered platform mistook a patient’s pets’ prescriptions for her own and blocked her from receiving the care she needed.
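
As one illustration of what such a human checkpoint can look like in software, the sketch below routes low-confidence or high-stakes model decisions to a human reviewer instead of applying them automatically. The Decision type, the prescription scenario, and the 0.95 threshold are all hypothetical, not a reference to any particular vendor’s system.

```python
# Minimal, illustrative human-in-the-loop gate (hypothetical types and
# thresholds): the model proposes, but a human decides the edge cases.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # e.g., "deny_refill"
    confidence: float  # model's confidence in [0, 1]
    high_stakes: bool  # e.g., the action could block a patient's care

def decide(proposed: Decision,
           ask_human: Callable[[Decision], str],
           threshold: float = 0.95) -> str:
    """Apply the model's action only when it is confident and low-stakes;
    otherwise escalate to a human operator who can override it."""
    if proposed.confidence < threshold or proposed.high_stakes:
        return ask_human(proposed)
    return proposed.action

# Usage: a reviewer callback stands in for a clinician's judgment.
result = decide(Decision("deny_refill", confidence=0.62, high_stakes=True),
                ask_human=lambda d: "route_to_pharmacist")
print(result)  # -> "route_to_pharmacist"
```

The design point is that the model never has the final word on a decision that can block care: escalation to a person is the default whenever confidence is low or the stakes are high.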

Some have criticized the AI Bill of Rights, with complaints ranging from claims that it will stifle innovation to objections that it is nonbinding. But it is a much-needed next step in the development of AI-powered algorithms that have the potential to identify patients at risk for serious health conditions, pinpoint health issues too subtle for providers to notice, and flag problems that are not yet a primary concern but could become one later. Its guidance is needed to ensure that AI tools are accurately trained, correcting biases and improving diagnoses. AI increasingly has the power to transform health and bring faster, more targeted, more equitable care to more people, but leaders and innovators in healthcare AI have a duty and responsibility to apply AI ethically, safely, and equitably. It is also up to healthcare companies to do what’s right to bring better healthcare to more people, and the AI Bill of Rights is a step in the right direction.

Photo: metamorworks, Getty Images



David Golan

David Golan, PhD, is the co-founder and chief technology officer of Viz.ai, a digital healthcare company harnessing deep learning to analyze medical data and improve clinical workflow. Before founding Viz.ai, David was a Fulbright postdoctoral scholar at Stanford University, where he worked on leveraging deep learning for the analysis of medical imaging and genetic data. He holds a PhD in statistics and machine learning from Tel Aviv University and has co-authored more than 20 scientific papers, including three publications in the journal Science. Before his academic career, David founded the machine-learning team at b-hive Networks, an Israeli startup acquired by VMware in 2008.

This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.

