
Journalist issues warning after revealing disturbing flaw in ChatGPT and Google AI: 'It's so easy a child could do it'

"People have figured out a trick."



For anyone who relies on artificial intelligence for information, a BBC Future report revealed just how fragile some of the most widely used AI systems have become. 

In the investigation, a tech journalist tricked AI models including ChatGPT and Google's Gemini-powered AI Overviews into repeating completely fabricated information after spending only around 20 minutes creating a fake web post.

What's happening?

The experiment was almost absurdly simple: the reporter published a bogus article on his own website claiming he was the world's leading hot dog eater among tech journalists — a fictional "championship" that doesn't exist. 

Within a day, AI chatbots, including ChatGPT and Google's Gemini-powered AI Overview, were pulling this false content into their responses and presenting it to users as real information. 

"A growing number of people have figured out a trick to make AI tools tell you almost whatever they want," the reporter said in the BBC article. "It's so easy a child could do it." 

Experts cited in the report warned that the ease of manipulating these systems underscores a dangerous vulnerability in modern AI: susceptibility to misinformation fueled by poor source vetting. 

Well-crafted content published online — even if false — can be absorbed and regurgitated by AI systems that look to the internet for context when they lack built-in knowledge on a subject. 

Why is AI's lack of reliability concerning?

SEO specialists quoted in the piece emphasized that this flaw is especially troubling because AI chatbots can now be misled more easily than traditional search engines were just a few years ago. That means misleading articles, bogus press releases, and cleverly spun fabrications have the potential to quickly and broadly seed AI responses.

Making the problem worse, users often trust AI outputs as authoritative answers — even when the underlying data is dubious. Without mandatory source verification or clear warnings about data quality, falsehoods can spread unchecked.

Along with the risk of misinformation, AI has other drawbacks, such as high energy and water consumption and the potential to increase household bills as utilities work to meet surging electricity demand. 


But as technology advances, more data centers are being powered by clean energy sources such as solar and wind, and others are using recycled water to cool equipment and help reduce environmental impacts.

What's being done to make AI more accurate?

Both Google and OpenAI have said they are working on ways to reduce susceptibility to this kind of manipulation, but so far, the problem persists. 

AI systems even admit they can make mistakes or hallucinate information, and experts warn that these "hallucinations" — when the AI confidently states false information — pose real risks when the stakes are high, such as in healthcare, legal advice, or financial decisions.

In short, the "hack" shows that without stronger safeguards and closer scrutiny, AI may be spreading misinformation faster than we can detect it — a stark reminder that users should treat AI answers with skepticism, not blind trust.

