To find success with AI, health IT leaders must understand its recent evolution

Even as it embraces generative AI, the healthcare industry must guard against technology outpacing its practical uses, says one physician and CIO.
By Bill Siwicki

Dr. Bala Hota, senior vice president and CIO at Tendo (Photo: Dr. Bala Hota)

Generative artificial intelligence and large language models are reshaping the healthcare landscape as quickly as they arrived. And CIOs and other health IT leaders at hospitals and health systems must fully grasp these technologies before putting them to use.

One real-world application of AI that's key for provider organizations to understand: the use of AI-powered language models in doctor-patient communication. 

These models have been shown to produce valid, empathetic responses to patients, making it easier to manage difficult interactions. But many challenges must be overcome before the broader set of AI applications can move forward.

For example, one challenge is ensuring regulatory compliance, patient safety and clinical efficacy when using AI tools.

Dr. Bala Hota is senior vice president and CIO at Tendo, a healthcare software company that works in artificial intelligence. We interviewed him to discuss understanding generative AI and large language models, leveraging LLMs in healthcare, real-world applications of genAI, and the challenges and ethical concerns involved.

Q. CIOs and other IT leaders at hospitals and health systems must understand generative AI before deploying it. What are a few things about genAI that you feel are most important for these leaders to grasp?

A. It's important for CIOs and IT leaders to understand that genAI is just one aspect of the broader digital transformation required in the industry, and it's essential to understand the fundamental evolution AI has undergone in recent years.

Data generation, augmentation and anomaly detection can significantly accelerate decision-making within an organization. However, generative AI cannot replace human judgment and interaction. Instead, it acts as a supplement that can enhance productivity.

The semantic component of large language models dramatically reduces the time an organization's teams spend cleansing and presenting data, allowing them to operate at the top of their license and focus on strategic tasks. Any use of AI must be accompanied by adequate security, compliance and common-sense approaches to protecting and distributing data. The industry must guard against technology outpacing its practical uses.

Q. How can hospitals and health systems best leverage large language models today?

A. The use of AI is gaining importance in the healthcare industry because it can help hospitals and health systems streamline decision-making, enhance efficiency and improve patient outcomes. AI has a wide range of applications, from simplifying data to interacting with patients, and each can have a significant impact on care delivery.

A significant benefit of AI in healthcare is improving the effectiveness of treatment planning. Ambient voice technology can be used to enhance the use of electronic health records. Currently, AI scribes are being implemented to aid in medical documentation. This allows physicians to focus on patients while AI takes care of the documentation process, improving efficiency and accuracy.

In addition, hospitals and health systems can use AI's predictive modeling capabilities to risk-stratify patients, identifying patients who are at high or increasing risk and determining the best course of action.
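
To make the risk-stratification idea concrete, here is a minimal sketch, assuming synthetic data and hypothetical features (age, chronic condition count, prior admissions); a real model would be trained and validated on clinical data before informing any care decision.

```python
# Illustrative risk stratification with a predictive model on synthetic data.
# Features, labels and thresholds are hypothetical, not a clinical method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in features: age, chronic condition count, admissions in the past year
X = np.column_stack([
    rng.integers(18, 90, 500),
    rng.integers(0, 6, 500),
    rng.integers(0, 4, 500),
])
# Synthetic "high-risk" label loosely tied to comorbidity and prior utilization
y = (X[:, 1] + X[:, 2] + rng.normal(0, 1, 500) > 4).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) patient and bucket them into a risk tier
risk = model.predict_proba([[72, 3, 2]])[0, 1]
tier = "high" if risk > 0.7 else "rising" if risk > 0.4 else "low"
print(f"predicted risk: {risk:.2f} -> {tier} tier")
```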

In fact, AI's cluster detection capabilities are being increasingly used in research and clinical care to identify patients with similar characteristics and determine the typical course of clinical action for them. This can also enable virtual or simulated clinical trials to determine the most effective treatment courses and measure their efficacy.
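
A sketch of the cluster-detection idea, again on synthetic data: the features (age, BMI, HbA1c) and the number of clusters are illustrative assumptions, not a validated clinical approach.

```python
# Illustrative grouping of patients with similar characteristics via k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic patient features: age, BMI, HbA1c (assumed for the sketch)
patients = np.column_stack([
    rng.normal(60, 15, 300),
    rng.normal(28, 5, 300),
    rng.normal(6.5, 1.2, 300),
])

scaled = StandardScaler().fit_transform(patients)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# Each cluster can then be reviewed for its typical course of clinical action
for cluster_id in range(4):
    print(f"cluster {cluster_id}: {int((labels == cluster_id).sum())} patients")
```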

Q. What are some real-world applications of AI you think point the way for the rest of the industry?

A. One real-world application of AI that points the way is the use of AI-powered language models in doctor-patient communication. These models have been shown to produce valid, empathetic responses to patients, making it easier to manage difficult interactions.

This application of AI can greatly improve patient care by providing quicker and more efficient triage of patient messages based on the severity of the patient's condition and the content of the message.
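
A toy sketch of severity-based message routing follows; in practice a language model or trained classifier would assign severity, so the keyword rules below are only a placeholder to make the routing logic concrete.

```python
# Illustrative triage routing; the keyword lists stand in for a real classifier.
URGENT_TERMS = {"chest pain", "shortness of breath", "bleeding", "suicidal"}
ROUTINE_TERMS = {"refill", "appointment", "billing", "paperwork"}

def triage(message: str) -> str:
    text = message.lower()
    if any(term in text for term in URGENT_TERMS):
        return "urgent: route to on-call clinician"
    if any(term in text for term in ROUTINE_TERMS):
        return "routine: route to front desk or automated workflow"
    return "standard: route to care-team inbox for same-day review"

print(triage("I've had chest pain since this morning"))
print(triage("Can I get a refill on my lisinopril?"))
```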

Additionally, AI can be used for better risk stratification at the time of treatment. This can help healthcare providers work at the top of their license by making better use of resources. By accurately identifying patients who require more intensive care, providers can allocate their resources more effectively and improve overall patient outcomes.

AI can also automate interactions with patients to scale communication and increase engagement. It is being used to reach out to patients with reminders and follow-ups, leading to improved outcomes. And by identifying patients in need of more high-touch care, AI can help overcome barriers such as clinical inertia and poor adherence.

Q. What are the challenges and ethical concerns of AI you feel healthcare provider organizations must tackle?

A. One challenge with AI implementation in healthcare is ensuring regulatory compliance, patient safety and clinical efficacy when using AI tools. While clinical trials are the standard for new treatments, there is a debate on whether AI tools should follow the same approach. Some argue that mandatory FDA approval of algorithms is necessary to ensure patient protection.

Another concern is the risk of data breaches and compromised patient privacy. Large language models trained on protected data can potentially leak source data, which poses a significant threat to patient privacy. Healthcare organizations must find ways to protect patient data and prevent breaches to maintain trust and confidentiality.

Bias in training data is also a critical challenge that needs to be addressed. Better methods for detecting and mitigating bias in training data must be introduced so that the resulting models are not biased. It is crucial to develop training and academic approaches that enable better model development and incorporate equity into all aspects of healthcare.

To address these challenges and ethical concerns, healthcare provider organizations must focus on developing data sets that accurately model healthcare data while ensuring anonymity and de-identification.
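
A minimal sketch of the de-identification step, assuming a few regular-expression patterns for common identifiers; real de-identification (for example, under HIPAA Safe Harbor) covers many more identifier types and is typically done with validated tooling rather than a handful of regexes.

```python
# Illustrative redaction of obvious identifiers from free-text notes.
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(note: str) -> str:
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

print(deidentify("Seen on 03/14/2023, call 555-867-5309 or jdoe@example.com."))
```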

They should also explore approaches for decentralized data, models and trials, using federated, large-scale data while protecting privacy. Additionally, partnerships between healthcare providers, health systems and technology companies must be established to bring AI tools into practice in a safe and thoughtful manner.
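
One way to picture the federated approach: each site trains on its own data, and only model weights, not patient records, are shared and averaged centrally. The sketch below assumes a simple linear model and synthetic data; production systems add secure aggregation and other privacy protections.

```python
# Minimal federated-averaging sketch: local training per site, central weight averaging.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local gradient-descent steps for a least-squares linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(42)
true_w = np.array([0.5, -1.0, 2.0])
sites = []
for _ in range(3):  # three hospitals; data never leaves each site
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(0, 0.1, 200)
    sites.append((X, y))

global_w = np.zeros(3)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # only weights are aggregated

print("federated estimate:", np.round(global_w, 2))
```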

By addressing these challenges, healthcare organizations can harness the potential of AI while upholding patient safety, privacy and fairness.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
