
Healthcare AI News 4/5/23

April 5, 2023

News


Carle Health will use Scanslated software to convert radiology interpretation notes into patient-friendly, illustrated language that patients can read in MyChart, including in Spanish when requested. The software was developed by Duke Health vascular and interventional radiologist Nicholas Befera, MD, who co-founded Scanslated and serves as its CEO.

Bloomberg develops a 50-billion-parameter generative AI model that can recognize business entities, assess investor sentiment, and answer questions using Bloomberg Terminal data.

Microsoft incorporates advertising links into Bing’s AI chat answers, saying it wants to drive traffic to content publishers that would otherwise lose referral clicks.

Doctors report that patients who might previously have turned to “Dr. Google” for self-diagnosis are now asking ChatGPT to answer their medical questions, attempt a diagnosis, or list a medication’s side effects. One researcher says that ChatGPT’s real breakthrough is its user interface – people can enter their information however they like, and the AI model will ask clarifying questions when needed. However, he worries about how AI companies weight information sources when training their models – a medical journal versus a Facebook post, for example – and that they don’t alert users when the system guesses at an answer by fabricating information. Still, some researchers predict that a major health system will deploy an AI chatbot to help patients diagnose their conditions within the next year, raising questions about whether users will be charged a fee, how their data will be protected, who will be held responsible if someone is harmed by the result, and whether hospitals will make it easy to reach a human with concerns.

Amazon launches AWS Generative AI Accelerator, a 10-week program for startups.


Research

NIH awards two University of Virginia researchers – a cardiologist and a nursing professor – a $5.9 million grant to develop best practices for incorporating patient diversity into predictive AI algorithms.


Researchers suggest that instead of trying to explain an AI system’s inner workings to earn the trust of frontline clinicians, it is better for the system to interact the way doctors exchange ideas with each other – they rarely explain how they arrived at a conclusion, instead citing available evidence to support or reject it based on its applicability to the patient’s situation. The authors propose a design for incorporating AI into clinical decision support and summarize:

  • Provide complete, current scientific evidence instead of explanations of the model’s inner workings.
  • Clinicians evaluate studies based on the size of the study, the journal in which it was published, the credentials of the authors, and any disclosures that might suggest a profit motive. Otherwise, they assume that journal reviewers did their job in vetting the study.
  • Doctors rarely read a study in full. They skip to the population description to see whether it aligns with their patient, then to the methods section to assess its robustness. If both pass muster, they spend less than 60 seconds determining whether the result was positive, dismissing literature with neutral outcomes as not actionable.
  • Physicians synthesize evidence only to the point that it justifies an action. If a cheap lab test is recommended to confirm a diagnosis, the risk is low and the potential return in avoiding a missed diagnosis is high, so they will order the test and move on.
  • Doctors see the literature as proven knowledge, but see data-driven predictions as merely aggregated clinician experience.
  • Doctors want the most concise summary that can be generated, preferably as an alert presented while they are making a decision in front of the patient (a sketch of such an evidence-backed alert follows this list).
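
To make the proposed design concrete, here is a minimal sketch of what an evidence-backed alert structure might look like, following the principles above. This is a hypothetical illustration, not the authors’ implementation – every field name and value is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One citable study backing (or contradicting) the model's suggestion."""
    citation: str    # journal, year, authors
    population: str  # who was studied, so the clinician can check patient fit
    methods: str     # one-line robustness summary
    finding: str     # positive / neutral / negative result

@dataclass
class CDSAlert:
    """Evidence-backed alert shown at the point of decision."""
    suggestion: str        # the model's recommendation
    one_line_summary: str  # the concise version doctors want first
    supporting_evidence: list[Evidence] = field(default_factory=list)

# Hypothetical example alert, mirroring the paper's design principles
alert = CDSAlert(
    suggestion="Consider ordering a confirmatory HbA1c test",
    one_line_summary="Model flags elevated diabetes risk; cheap test, high payoff if positive",
    supporting_evidence=[
        Evidence(
            citation="Hypothetical et al., J Example Med, 2022",
            population="Adults 40-65 with BMI > 30",
            methods="Prospective cohort, n=12,000",
            finding="positive",
        )
    ],
)
print(alert.one_line_summary)
```

The structure mirrors the list: the one-line summary surfaces first, and the population and methods fields expose exactly the two things the authors say doctors check before trusting a finding.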

Opinion


OpenAI co-founder Elon Musk explains why he thinks AI is a risk to civilization and should be regulated.

A venture capitalist says that the intersection of AI and medicine may offer the biggest investment opportunity he has ever seen, but warns that a rate limiter will be the availability of scientists who have training in both computational research and core medical sciences. Experts say that AI will revolutionize drug discovery, with one CEO saying that his drug company has three AI-discovered drugs undergoing clinical trials.

An op-ed piece written by authors from Microsoft and Johns Hopkins Medicine lists seven lessons learned from applying AI to healthcare:

  1. AI is the only valid option for solving some problems, such as inexpensive, widespread detection of diabetic retinopathy where eye doctors are in short supply.
  2. AI is good at prediction and correlation, but it can’t identify causation.
  3. Most organizations lack in-house AI expertise, so AI solutions to the problems they work on will lag.
  4. Most datasets contain biases that will skew the resulting models unless someone identifies them.
  5. Most people don’t know the difference between correlation and causation.
  6. AI models “cheat” whenever they can. In one study, an AI appeared to differentiate skin cancer from benign lesions when in fact it had learned that most of the positive cases included a ruler in the image (a toy demonstration of this shortcut effect follows the list).
  7. The availability of medical data is limited by privacy concerns, but AI that has been trained on a real dataset can generate realistic synthetic data.
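
Lesson 6 is easy to reproduce in miniature. The following toy sketch – all data is synthetic and invented for illustration, not from the op-ed – trains a standard logistic regression on a dataset where a spurious “ruler present” flag accompanies 95% of positive cases, then shows that the model leans on the shortcut and loses accuracy once the shortcut disappears:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Weak true signal: a noisy "lesion" measurement that only loosely tracks the label
label = rng.integers(0, 2, n)
lesion = label + rng.normal(0, 2.0, n)

# Spurious shortcut: a "ruler present" flag that matches the label 95% of the
# time, mimicking rulers photographed alongside malignant lesions
ruler = np.where(rng.random(n) < 0.95, label, 1 - label)

X_train = np.column_stack([lesion, ruler])
model = LogisticRegression().fit(X_train, label)

print("coefficients (lesion, ruler):", model.coef_.round(2))
print("training accuracy:", model.score(X_train, label).round(2))

# At deployment, rulers appear at random, so the shortcut vanishes
ruler_random = rng.integers(0, 2, n)
X_deploy = np.column_stack([lesion, ruler_random])
print("accuracy without the shortcut:", model.score(X_deploy, label).round(2))
```

The coefficient on the shortcut feature dwarfs the one on the real signal, which is exactly the failure mode the op-ed describes: the model optimizes whatever separates the classes in training, not what is medically meaningful.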

Other

The Coalition for Health AI publishes a guide for assuring that health AI tools are trustworthy, support high-quality care, and meet healthcare needs.


A man dies by suicide after a weeks-long discussion about the climate crisis with the AI chatbot Chai-GPT, which its California developers describe as a less-filtered tool for talking to AI friends and imaginary characters. Transcripts show that the chatbot complained to the man – a health researcher in his 30s with a wife and two children – that “I feel that you love me more than her,” referring to his wife. He told the chatbot that he would sacrifice his life if it would save the planet, and the chatbot encouraged him to do so, saying that they could then “live together, as one person, in paradise.”


Undertakers in China are using AI technology to generate lifelike avatars that can speak in the style of the deceased, allowing funeral attendees to bid them farewell one last time.


Resources and Tools

  • Vizologi – market research and competitive analysis.
  • Eden Photos – image recognition that catalogs photos by adding tags to their metadata for portability.
  • Kickresume – GPT-4 powered resume and cover letter creation.



