Here’s what California should do to combat medical misinformation
Generative artificial intelligence is making it easier to create and disseminate false claims that could imperil public health and safety.
In April of last year, as a measles outbreak swept across West Texas, an unvaccinated 8-year-old girl from Gaines County died from measles-related lung failure.
That second vaccine-preventable death of a child during the Texas outbreak prompted vaccine skeptic Robert Malone to circulate an alternative explanation online, falsely suggesting that the hospital had misattributed her death to measles and claiming that sepsis was the real cause.
These claims rapidly gained traction, eclipsing official statements from the Texas Department of State Health Services confirming that the girl had died of measles complications and was unvaccinated. As confusion mounted, Children’s Health Defense, an anti-vaccine advocacy group, released videos that further reframed the child’s death, compounding the confusion and undermining trust in vaccines.
This was not an isolated event, but the chronic expression of an information ecosystem engineered for virality rather than accuracy — one so unregulated and susceptible to manipulation that falsehoods outpace verified guidance and inflict lasting suffering on families with limited health literacy.
With details still emerging about a case of measles reported last month in Walnut Creek, misinformation is all but sure to follow.
This dynamic is now being sharply intensified by the rapid integration of generative artificial intelligence, one of the most powerful modern accelerants of misinformation and disinformation. By dramatically lowering the cost and skill required to produce and disseminate false content, AI has enabled coordinated information…