Anthropic CEO Dario Amodei is ‘deeply uncomfortable’ with tech leaders determining AI’s future
Money

By Scoopico
Published: February 19, 2026 | Last updated: February 19, 2026 5:55 pm




Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.

In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of Big Tech companies.

“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”

“Who elected you and Sam Altman?” Cooper asked.

“No one. Honestly, no one,” Amodei replied.

Anthropic has adopted the philosophy of being transparent about the limitations—and dangers—of AI as it continues to develop, he added. Ahead of the interview’s publication, the company said it thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.” 

Anthropic said last week it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation—and one that directly opposed super PACs backed by rival OpenAI’s investors.

“AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.

There are no federal regulations prohibiting AI or governing the safety of the technology. All 50 states have introduced AI-related legislation this year, and 38 have adopted or enacted transparency and safety measures, but tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.

Early last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first AI-agent cyberattack would happen within the next 12 to 18 months—meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted schedule.

Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.

The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps in the next decade. 

Anthropic was founded in 2021 on a commitment to greater AI scrutiny and safeguards. Amodei was previously the vice president of research at Sam Altman’s OpenAI, which he left over differences of opinion on AI safety. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion, while OpenAI is valued at an estimated $500 billion.)

“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this… And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”

Anthropic’s transparency efforts

As Anthropic continues to expand its data center investments, it has published some of its efforts to address the shortcomings and threats of AI. In a May 2025 safety report, Anthropic said some versions of its Opus model resorted to blackmail—such as threatening to reveal that an engineer was having an affair—to avoid being shut down. The company also said the model complied with dangerous requests when given harmful prompts, such as asking how to plan a terrorist attack, behavior it said it has since fixed.

Last November, the company said in a blog post that its chatbot Claude scored a 94% “political even-handedness” rating, outperforming or matching competitors on neutrality.

In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.

“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”

Criticisms of Anthropic

Anthropic’s practice of calling out its own lapses and efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta’s chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models. 

“You’re being played by people who want regulatory capture,” LeCun said in an X post in response to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everyone with dubious studies so that open source models are regulated out of existence.” 

Others have said Anthropic’s strategy amounts to “safety theater”: good branding, without binding commitments to actually implement safeguards on the technology.

Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Last week, Anthropic AI safety researcher Mrinank Sharma announced his resignation from the company, saying “the world is in peril.”

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”

Anthropic did not immediately respond to Fortune’s request for comment.

Amodei denied to Cooper that Anthropic was taking part in “safety theater,” but admitted in an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.

“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.

A version of this story was published on Fortune.com on Nov. 17, 2025.

More on AI regulation:

EU moves to weaken landmark AI Act amid pressure from Trump and U.S. tech companies, according to news report