Effects of the EU AI Act Visible After 2026

The final negotiation round between the EU Parliament, the EU Council, and the EU Commission in December sealed the EU AI Act. However, the official document has not been published yet, and questions still hang in the air about the final rules.
By Tjasa Zajc


2023 was the year marked by AI. With the rise of general-purpose large language models such as ChatGPT, Claude, and Bard, AI became tangible and accessible to millions of people. There was no shortage of excitement, but also no shortage of appeals to regulate this powerful technology.

State of AI Regulation in Healthcare

The EU AI Act was first drafted in 2021, long before general-purpose large language models became widely available. In December 2023, a consensus on the legislation seemed to have been reached. Across the pond in the US, under the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Department of Health and Human Services (HHS) must establish an AI Task Force, which is expected to develop a regulatory action plan for predictive and generative AI-enabled technologies in healthcare in 2024.

Regulation is always a step behind the industry, and setting rules for healthcare isn't possible without input from the tech industry and the medical community. More and more associations seem to be forming every day. There's the US Coalition for Health AI (CHAI™), a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. It has proposed setting up dedicated assurance labs that would allow health systems, as well as tool developers and vendors, to submit their AI solutions for evaluation. The New England Journal of Medicine launched NEJM AI, a monthly, online-only publication for evaluating applications of artificial intelligence in clinical medicine.

The Global Agency for Responsible AI in Health strives to harmonize global healthcare standards as defined by the WHO. The organization wants to connect regulatory bodies from around the world and create early warning systems that would notify the network if unintended effects of AI were detected anywhere in the world. The ideas are there, but it will take time for them to take shape.

AI Regulation in the EU Will Be Felt After 2026

The official EU AI legislative document is expected early in 2024. Until then, experts remain cautious and critical, especially with regard to the regulation of open-source AI and proprietary models. “Our information about the EU AI regulation stems from leaks in a negotiation process that was marked by opacity and a lack of democratic oversight. Right now, anti-open-source lobbyists are trying to mess with the Act after the deal, all because they're upset about the rumor that open-science-based AI R&D might be exempt from regulations. I and most other AI experts advocate that regulations should only come into play at the application level. Going beyond that would suppress scientific freedom and undermine the European advantage,” says Bart de Witte, Founder of the HIPPO AI Foundation, which advocates for the development of open-source AI to prevent the consolidation of power in the hands of global tech corporations.

The EU AI Act could still be disrupted by countries like France and Germany, which prefer self-regulation and want changes to the legislation. Ricardo Baptista Leite, CEO of the Global Agency for Responsible AI in Health, sees this as problematic: “In highly sensitive areas like health, I don't believe this is necessarily the best approach. It depends on the application of the AI, but at the end of the day, these countries are opening up a whole can of worms of discussions, which can undermine the process moving forward.”

The upcoming European elections may also steer the conversation, as shifts in political power could influence the act's implementation. “We have the country-level approach that, with the rise of extreme nationalisms, could lead to very, very poor policies that can undermine all the efforts of trying to find some level of harmonization. Harmonization is critical to ensure that these technologies can be used in a safe manner, but also in a way that makes sure we're making the most of the technology's potential,” says Ricardo Baptista Leite. He explains that while the AI Act is a step forward, its real impact on industries, particularly healthcare, won't be felt until around 2026. The next two years will be crucial for balancing the risks and harnessing the benefits AI brings to health systems and patient care.

How Can Industry Scale?

The Global Agency for Responsible AI in Health will advocate for applying the same principles used for medicines approval and health technology assessment to AI in healthcare. In the future, the Agency plans to create a comprehensive repository of AI solutions: an online public database focused on health AI that will serve as a global showcase of AI technologies validated by various countries.

More of these topics will be discussed at HIMSS24 Europe, which will feature a dedicated AI track.
 
