
AI-generated police report stirs controversy after making fantastical claim about officer: 'Funny on the surface, but also a real warning'

"Maybe we shouldn't use the misinformation machine for serious things like police reports."

An AI error has cast serious doubts on its application in law enforcement, according to Forbes.


What's happening?

An AI-generated incident report made for police in Heber City, Utah, said one of the officers turned into a frog. 

This was thanks to software made by Axon called Draft One. Axon, which makes the body cameras worn by police officers, created a program that takes data from the footage and turns it into reports. The goal is to save officers the time spent writing up these incidents themselves, which can amount to eight hours weekly.

In the Heber City incident, the movie The Princess and the Frog was playing in the background, confusing the AI tool.

"Funny on the surface, but also a real warning," said one user on X. "If AI can't separate background fiction from official records, human review isn't optional — it's the system."

"Yeah so maybe we shouldn't use the misinformation machine for serious things like police reports," commented another.

Why is AI use important?

In any other circumstance, a mistake like this could be laughed off, but in law enforcement the consequences of inaccuracies falling through the cracks are dire. Some agencies have even taken to using ChatGPT to generate suspect sketches. In theory, this is a sound use of AI when deployed carefully and with oversight: studies found witnesses reported high satisfaction with how closely the renderings matched what they saw. But the risks mount when the technology is deployed without much regard for its effectiveness.

Leaning heavily on AI for law enforcement already has hefty civil and legal implications, but these aren't the only costs to consider. The energy demands of AI usually require tapping into dirty sources like coal and gas. These produce loads of pollution, exacerbating destructive weather patterns and causing property damage across society at large.

Of course, leaning on cleaner sources of energy can help clean up AI use. Tech giants like Google, Meta, and Microsoft are investing in nuclear power to address their energy emissions. While nuclear power generally doesn't contribute to climate change, renewables such as large-scale solar can reduce the damage done by AI with even fewer pollution-related drawbacks.

What's being done about AI in law enforcement?

As the software name "Draft One" implies, it's expected that AI-generated reports are reviewed by humans for factual accuracy. Luckily, the Heber City police report was generated purely for demonstration purposes. 

"We learned the importance of correcting these AI-generated reports," said Heber City officer Rick Keel after the demonstration, per Fox 13 News.


