The much-anticipated promise of ChatGPT as a reliable source of health guidance has not lived up to the hype, leaving users disillusioned rather than empowered.
What's happening?
As The New York Times reported, many medical professionals and tech critics have long warned that generative artificial intelligence systems such as ChatGPT are often wrong, inconsistent, or dangerously misleading when asked to answer health questions.
According to a study published in the journal Nature Medicine, large language models — including those powering ChatGPT — often fail to improve medical decision-making when used by real people and, in some cases, may do more harm than good.
What experts once thought would be a shortcut to understanding symptoms and treatment options now seems to bring only confusion and inaccuracy.
Oxford University researchers found that participants who used these chatbots to identify health issues or decide on care pathways performed no better — and often worse — than people relying on traditional internet searches or generic written sources.
"The researchers found that participants chose the 'right' course of action — predetermined by a panel of doctors — less than half of the time," per the Times.
What's more, the chatbots gave wildly different diagnoses and treatment options depending on how the user described their condition. In one instance, a chatbot advised a user to lie down in a dark room for a migraine, but told another user with the same problem to head straight to an emergency room.
"Very, very small words make very big differences," Andrew Bean, a graduate student at Oxford and the study's lead author, told the Times.
Why are AI health chatbots concerning?
Despite OpenAI's launch of a dedicated ChatGPT Health experience that analyzes users' medical records and personal health data to tailor responses, the underlying technology hasn't been shown to be safe or reliable enough for real clinical use. Even industry leaders have said that the system is not intended for diagnosis or treatment, yet it continues to shape user behavior and expectations.
Often, the problem is that chatbots have not been trained to ask follow-up questions or handle "free-form decision making" as doctors do on a daily basis.
Aside from the incomplete and conflicting health advice, AI has another drawback: it requires massive amounts of water and electricity to run. The technology relies on power-hungry data centers, and their rapid growth is increasing energy bills for millions of people. While more data centers are being powered by cleaner energy sources, such as wind and solar, residents are often left to foot the bill until the grid catches up.
What's being done to improve the chatbots?
The researchers called for independent validation, clearer disclaimers, and an insistence that humans — not algorithms — remain in the driver's seat when it comes to personal health.
The Times also reported that tech companies are working to improve ChatGPT models so they ask more follow-up questions for greater accuracy.
In the meantime, it's probably best to consult a licensed doctor in person or online for medical advice.