
OpenAI's CEO makes concerning prediction about 'AI Agents' joining the workforce: 'We are here for the glorious future'

"With superintelligence, we can do anything else."

"With superintelligence, we can do anything else."


AI is a hot topic right now, as tech companies invest heavily in machine learning applications for a wide range of uses. Some, like AI models that predict the impact of floods, are incredibly useful. Others boost companies' bottom lines, but at the expense of their employees. Now a new use for AI is on the horizon, the Guardian reports, and it could have serious implications for the workforce.

What's happening?

Sam Altman, the chief executive of OpenAI, recently announced in a blog post that "AI agents" or "virtual employees" could start performing tasks for companies as early as this year.

"We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies," Altman wrote.

OpenAI launched its "Operator" AI agent this month, a program capable of independently carrying out simple tasks using an internet browser, while Microsoft has announced its Copilot Studio product and Anthropic launched the Claude 3.5 Sonnet AI model. Consulting firm McKinsey is working on an AI agent to help process new clients and schedule follow-up meetings.

McKinsey predicts that AI could take over 30% of the hours worked across the U.S. by 2030.

Altman also claimed that OpenAI knows how to build artificial general intelligence (AGI): an AI system that isn't specialized for a single task but can outperform humans across a wide variety of tasks and applications.

"We are now confident we know how to build AGI as we have traditionally understood it," Altman wrote. "We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else."

Why are AI agents concerning?

In theory, having a computer perform complex tasks could help many people. In practice, replacing human employees with software to cut costs could lead to widespread unemployment and lost income unless it is accompanied by broader changes to our economic structure.

There's also the question of liability. If an AI agent makes a mistake — as current AI models frequently do — and that mistake harms someone, who will be held accountable? What safeguards will be in place to prevent an AI agent from, for example, wrongfully denying an applicant insurance or medical treatment? This issue has already arisen, as the Guardian recently reported.


Finally, generative AI requires a great deal of computing power, along with electricity to run the servers and water to cool them. That's a serious concern in a world where pollution and drought are already affecting communities across the globe. If AI takes over 30% of hours worked in the U.S., those costs could balloon out of control.

What's being done about AI agents?

Right now, Elon Musk is in a public legal dispute with OpenAI, the Guardian reported. After dropping an initial lawsuit in June, he refiled a suit naming both OpenAI and Microsoft, accusing them of putting profit over safety.

As AI tools continue to emerge, more legal challenges could arise to limit their use or establish a legal framework for it.

