
'Culture of lying and … deceit': Sam Altman slammed for cultivating 'toxic culture' at OpenAI

The trial is pulling OpenAI's internal decision-making, safety culture, and governance conflicts into public view.

Photo Credit: Getty Images

OpenAI's courtroom fight with Elon Musk took a darker turn in its second week, with former insiders accusing CEO Sam Altman of fostering a "toxic culture," according to Business Insider.

The testimony intensified scrutiny over whether one of the world's most powerful artificial intelligence companies has strayed from its public-interest roots as it rushed to ship products and deepen its relationship with Microsoft.

What happened?

Musk's lawsuit argues that Altman and OpenAI President Greg Brockman steered the organization away from its original nonprofit mission by building a partnership with Microsoft. A jury has not yet decided whether OpenAI or Altman is legally liable, but several witnesses called by Musk's lawyers described a troubling internal culture.

One of those witnesses was Rosie Campbell, a former OpenAI safety researcher who worked at the company from 2021 to 2024. Campbell testified that OpenAI once had two teams dedicated to long-term AI safety, including one focused on preparing for superhuman AI. But over time, she said, the company became far more product-driven, and both of those teams were eventually dissolved, according to Business Insider.

She told the court that she ultimately quit because she believed OpenAI was abandoning its safety commitments.

Former OpenAI board member Tasha McCauley was even more blunt in a deposition played in court. She said Altman created "chaos" and "crisis" through a "culture of lying and culture of deceit" that spread through the company's leadership. McCauley also testified that Altman was dishonest about whether GPT-4 Turbo had to go through an internal safety review before it launched in India.


She said former OpenAI board member Ilya Sutskever had emailed her with "dozens of pages" of examples describing chaotic situations tied to Altman's behavior or alleged falsehoods.

Musk's legal team also called Columbia Law School professor David Schizer, an expert in nonprofit governance. After reviewing the conduct described by earlier witnesses, Schizer said a CEO withholding product-launch information from a board would be "a big problem" and inconsistent with how a nonprofit should function, Business Insider reported.

Why is this concerning?

This goes well beyond a Silicon Valley power struggle.

When a company building powerful AI systems is accused of weakening safety teams and keeping its board in the dark, the consequences do not stay confined to executives and investors. These tools are increasingly being used in search engines, offices, classrooms, customer service systems, and software products that millions of people depend on. If they are released without rigorous review, the risks can include misinformation, biased results, unreliable guidance, and little accountability when failures happen.

There is also a broader public-interest concern. OpenAI was originally presented as a mission-driven organization meant to develop AI for the benefit of humanity. If an organization built on that premise shifts toward fast commercialization without strong guardrails, it can undermine trust in the same companies asking the public to accept more powerful technology in everyday life.

The environmental stakes matter too. Training and running large AI models requires vast data centers that consume enormous amounts of electricity and water. If AI companies prioritize speed and market share over transparency and oversight, communities may end up absorbing the costs — from stressed power grids to resource-intensive infrastructure — without much say in how these systems are rolled out.

In other words, leadership decisions inside AI companies do not remain inside the boardroom. They can shape energy demand, public trust, worker protections, and the quality of the tools people use every day.

What's being done about this?

For now, the clearest form of accountability is unfolding in court. The trial is dragging OpenAI's internal decision-making, safety practices, and governance disputes into the open, giving regulators, workers, and consumers a clearer look at how the company operates behind closed doors.

Outside the courtroom, lawmakers and regulators in the United States and elsewhere have continued to push for stronger rules on AI transparency, testing, and corporate accountability. Those efforts are still taking shape, but they reflect a growing recognition that self-policing may not be enough for technology this powerful.

People can also play a role by backing policies that require independent safety audits and clearer disclosures from AI companies, asking schools and employers how AI tools are evaluated before they are adopted, and choosing products from companies that are more transparent about governance and testing. 

As AI becomes more woven into daily life, public pressure for slower and more responsible rollouts may be one of the strongest checks on risky corporate decisions.
