
Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)

Self-diagnosis is becoming a far more robust pursuit now that people can turn to chatbots powered by large language models that scour mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome — a finding that led a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.

The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.

It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will have never seen or even have ever heard of.”

Patients with rare diseases often endure years of trying to figure out what’s wrong with them. Even good doctors miss diagnoses that are hiding in plain sight. My niece Ayoni, for instance, had the classic symptoms of Prader-Willi syndrome at birth — floppy muscles, difficulty sucking, a weak cry — and was seen by doctors at some of the most respected hospitals in the world, but none recognized the signs and ordered the relevant test until almost five years later.

The National Institutes of Health Undiagnosed Diseases Network has found that as many as 11 percent of the patients referred to it each year with mysterious illnesses have diseases that expert reviewers can diagnose by looking closely at lab results and doctors’ notes. Kohane, who leads the network’s coordinating center, is now working with Matt Might, a computer scientist whose son died of a rare disease, to train a large language model to make diagnoses more quickly by examining patients’ health records.

Beyond chatbots, AI could also put some power back in the hands of patients whose doctors are dismissive — or at least help patients connect with others like themselves. Parents of children with hard-to-diagnose diseases frequently describe feeling gaslighted by medical professionals who don’t take their concerns seriously. That happened to Kate McCrann, whom I met last month at a rare disease research conference. Even though McCrann was a doctor in her final fellowship year at Yale when she realized that her newborn, Tess, was not developing normally, a pediatrician told her she was worrying about nothing. It wasn’t until years later that McCrann and her husband, Bo Bigelow, learned that Tess had a rare disease called Hao-Fountain syndrome, caused by a genetic mutation.

During the first four years of Tess’s life, her disease had yet to be named or identified, leading her parents to believe they had reached a dead end even after they discovered that Tess had an unusual mutation of the USP7 gene. A social media post in 2015, when Tess was 5, led them to a researcher who was studying the mutation. They learned then about seven other patients like Tess, and in 2017, they launched a foundation that has since connected more than 200 similar patients. That has given them hope: It’s nearly impossible to get researchers to look for treatments without a critical mass of patients for clinical trials, and patient groups often have to raise money themselves for early-stage rare disease research.

More recently, networks that link medical databases have been helping patients with the same rare gene mutations find each other — and helping researchers study their genetic diseases. But one day soon, AI could help link patients with similar conditions even more easily, without knowing the genes causing their illnesses upfront. One new tool built using neural networks (not large language models), for example, showed how AI might aid in diagnosing patients and in pooling those with similar conditions.

The problem with relying on AI to help diagnose diseases now is that we don’t yet know how much we can trust it. Because the major consumer-facing AI models are built with data sets obscured from public view, we don’t know how skewed the medical literature that underpins the answers from their chatbots might be. If, for example, a large language model was trained using studies of particular countries’ populations, it could offer medical advice that is not so relevant to other populations. Medical research already suffers from bias, including the underrepresentation of women and people of color, and that bias can be amplified by AI. And while we can ask chatbots to cite their sources, we know they fabricate citations and fake scientific papers in the very style and format we have come to trust.

A key remedy is for governments to require disclosure of the data sources used to train the commercial chatbots we use for medical advice — and, in the meantime, for people to avoid seeking health advice from chatbots whose makers will not disclose those sources. And any AI system using patient health records — not just study results — should be subject to government regulation globally to protect privacy.

Beyond government, independent researchers will be needed to scrutinize the medical advice AI models give — and medical boards and associations will be needed to certify the more credible ones — so that people know what they can trust. Companies that want to do right by consumers must make chatbots that are more reliable and transparent. Thus far, technology companies have refused to share details about the data used to train their large language models.

So many applications of AI in health care today cut costs for hospitals or save doctors’ time but offer questionable benefit to patients. AI tools that actually help patients could be a refreshing counterbalance — as long as they come with a healthy dose of precaution. Preventing a dystopian future hinges on discernment in how and when we use AI. For now, chatbot diagnoses should be seen as somewhat-informed second opinions, more useful for hard-to-crack cases. We’d be wise to remember, however, that what people need most when they are sick is compassion, curiosity and care — which humans, at their best, still pull off better than machines.

This column is part of an occasional series on the future of rare diseases. Find the first column in the series here.

Can AI solve medical mysteries? It’s worth finding out.
By Bina Venkataraman
Nov. 15, 2023

© Washington Post
