
Experts sound alarm about 'silent failure' that could shake up the economy: 'It could escalate … aggressively'

"People have too much confidence in these systems."



When artificial intelligence fails spectacularly, it gets a ton of attention. A new report suggests that something even more dangerous than those high-profile mishaps may be lurking: AI that errs in minor, less detectable ways.

What's happening?

CNBC reported on how the increasing complexity and autonomy of AI systems are surpassing humans' ability to understand and predict their behavior.

Experts in the AI sector emphasized that the most significant risk isn't an apocalyptic scenario but rather small operational failures — cases where an AI system behaves slightly differently than its developers intended.

"Autonomous systems don't always fail loudly," said Noe Ramos, vice president of AI operations at Agiloft. "It's often silent failure at scale."

One example involves a beverage manufacturer that faced an unanticipated software error related to new holiday product labels. The autonomous AI recognized the unfamiliar labels as an error and ordered the unnecessary production of thousands of cans.

Obviously, the tech wasn't supposed to do that, but technically, it was behaving as it was taught. In a similar case, an autonomous Microsoft AI customer-service agent ended up issuing excess refunds after a persuasive customer triggered a positive feedback loop.

"It could escalate slightly to aggressively, which is an operational drain, or it could update records with small inaccuracies," Ramos told CNBC. 

Why is AI's unpredictability important?

Right now, a lot of companies are going all-in on AI. They believe they can generate efficiencies with it when it's humming and contain it quickly if it goes off-course. That might be naive.

"People have too much confidence in these systems," said Mitchell Amador, CEO of crowdsourced security platform Immunefi. "They're insecure by default."

CNBC spoke with numerous sources who said the technology is simply moving too fast for companies to anticipate risk. And when mistakes are harder to spot, companies could lose big if they don't have proper processes to nip them in the bud.


This issue underlines AI's inherent risks alongside its potential gains in efficiency and speed. While the technology can do good by optimizing clean energy systems and automating processes, it also poses challenges, including heavy water and energy use.

What's being done about AI's potential freelancing?

There are potential solutions to help contain AI errors quickly. One is implementing kill switches that will allow companies to shut it down immediately when it goes off the rails. 

Ramos also advised companies to apply continuous oversight not just to what AI is sending out, but also to how it's behaving. It's clear that, as autonomous as the tech claims to be, humans are going to get burned if they don't deploy it with discipline and constant surveillance.
