Dario Amodei says Anthropic struggles to balance 'incredible commercial pressure' with its 'safety stuff'
Michael M. Santiago/Getty Images
- Dario Amodei says Anthropic struggles to maintain its values in the face of commercial pressure.
- Anthropic was founded in 2021 to prioritize AI safety.
- "The pressure to survive economically while also keeping our values is just incredible," Amodei said.
A familiar tension has caught up with even the most safety-minded corner of the AI industry: principles or profit?
OpenAI, the leading AI startup, was founded to build artificial intelligence that benefits all of humanity. Many AI watchers and former employees have questioned its commitment to that mission, however, as it rushes to generate revenue to justify enormous investments in the company.
Anthropic, one of OpenAI's chief rivals, was founded by former OpenAI employees who were concerned about that perceived mission drift. They sought to run an AI company focused on safety above all else.
Even Anthropic, however, struggles to stay on course.
Anthropic CEO Dario Amodei says his company faces significant pressure to uphold its commitments to mitigating AI's potential risks while still turning a profit.
"We're under an incredible amount of commercial pressure, and we make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies," Amodei said on a recent episode of the "Dwarkesh" podcast.
Last week, Anthropic, which launched only five years ago, announced $30 billion in Series G funding at a $380 billion post-money valuation, making it one of the most valuable private companies in the world.
In its press release, the company underscored its growing revenue.
"It has been less than three years since Anthropic earned its first dollar in revenue," the company said. "Today, our run-rate revenue is $14 billion, with this figure growing over 10x annually in each of those past three years."
Growth like that often comes with growing expectations.
"The pressure to survive economically while also keeping our values is just incredible," Amodei said on the podcast. "We're trying to keep this 10x revenue curve going."
Amodei was formerly OpenAI's vice president of research, focusing on safety. He founded Anthropic in 2021 with his sister, Daniela Amodei, and five other former OpenAI staffers, driven by a desire to prioritize safety as AI systems grew increasingly powerful.
Amodei is not the only one who says that Anthropic's mission is hard to sustain as the company grows. Mrinank Sharma, a former safety researcher at Anthropic, said he resigned last week in part due to this tension.
"Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions," Sharma wrote in his resignation letter, which he shared on X. "I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout the broader society too."
Even at companies that aren't developing foundation AI models, adopting AI responsibly often takes a back seat to the promise of efficiency and increased profits.
Responsible AI use in the workplace is moving "nowhere near as fast as it should be," Tad Roselund, a managing director and senior partner at Boston Consulting Group, told Business Insider in 2024.
The same is true across the venture capital ecosystem.
"The venture capital environment also reflects a disproportionate focus on AI innovation over AI governance," Navrina Singh, the founder and CEO of AI governance platform Credo AI, told Business Insider in 2024. "To adopt AI at scale and speed responsibly, equal emphasis must be placed on ethical frameworks, infrastructure, and tooling to ensure sustainable and responsible AI integration across all sectors."
