A routine attempt to cut salt from his diet turned into a medical nightmare after a man followed health advice from ChatGPT, replacing table salt with a chemical commonly used to clean swimming pools. The result: three weeks in hospital, plagued by hallucinations, paranoia, and severe anxiety.
Doctors detailed the case in the Annals of Internal Medicine, revealing that he had developed bromism, a toxic syndrome that has become vanishingly rare since the early 20th century.
His “personal experiment” involved swapping ordinary sodium chloride for sodium bromide, a toxic compound once sold in sedative pills and now used mostly for pool maintenance.
Bromism can cause psychosis, delusions, nausea, and skin eruptions. In the early 20th century, it was believed to account for up to eight percent of psychiatric hospital admissions.
When he arrived at the emergency department, the patient, who had no prior psychiatric history, claimed that his neighbor was trying to poison him.
Doctors later tested ChatGPT and found that the AI still recommended sodium bromide as a salt alternative, with no warning of the potential health risks. The case underscores the dangers of relying on AI-generated advice and how seemingly simple dietary experiments can go catastrophically wrong.
This isn’t the first time AI guidance has caused problems. Last year, Google’s AI Overview search feature advised people to “eat rocks” to stay healthy, apparently drawing the suggestion from satirical sources.
OpenAI, the company behind ChatGPT, recently announced the GPT-5 update, which it says is better at answering health questions. A spokesperson stressed, “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.”
Experts also warn that overreliance on AI can worsen mental health. Clinical psychologist Paul Losoff told DailyMail.com that dependence on AI may prevent individuals from seeking human interaction, particularly harming those struggling with anxiety or depression.
“Using AI can exacerbate cognitive symptoms like chronic pessimism, distorted thinking, or cloudy reasoning,” Dr. Losoff explained. “This increases the risk of misinterpreting AI feedback and causing harm, especially in individuals with acute thought disorders like schizophrenia.”