AI is uncannily good at diagnosis. Its makers just won’t say so.
How often have you asked ChatGPT for health advice? Maybe about a mysterious rash or that tightening in your right calf after a long run. I have, on both counts. ChatGPT even correctly diagnosed the mysterious rash I developed during my first Boston winter as cold urticaria, a week before my doctor confirmed it.
More than 230 million people ask ChatGPT health-related questions every week, according to OpenAI. While people have been plugging their health anxieties into the internet since its earliest days, what’s changed is the interface: Instead of scrolling through endless search results, you can now have what feels like a personal conversation.
In the past week, two of the biggest AI companies went all-in on that reality. OpenAI launched ChatGPT Health, a dedicated space within its larger chat interface where users can connect their medical records, Apple Health data, and stats from other fitness apps to get personalized responses. (It’s currently available to a small group of users, but the company says access will eventually open to everyone.) Just days later, Anthropic announced a similar consumer-facing tool for Claude, alongside a host of other tools geared toward health care professionals and researchers.
Both consumer-facing AI tools come with disclaimers (not intended for diagnosis, consult a professional) that are likely crafted for liability reasons. But those warnings won’t stop the hundreds of millions of people already using chatbots to understand their symptoms.
It’s possible, however, that these companies have it backward: AI excels at diagnosis; several studies show it’s one of the best use cases for the technology. And there are real trade-offs, around data privacy and