Anthropic is facing a wave of user backlash over reports of performance issues with its Claude AI chatbot

Anthropic, the high-flying AI company, is facing a backlash from some of its most prolific users over a perceived decline in the performance of its Claude AI models.

The issues have left the company, recently valued at $380 billion and reportedly en route to an IPO, scrambling to respond to a user revolt and to online speculation about its motives and its ability to serve its newest wave of customers.

Anthropic’s popular Claude AI model has seen a significant decline in performance recently, according to many developers and heavy users, who say the model increasingly fails to follow instructions, sometimes opts for inappropriate shortcuts, and makes more mistakes on complex workflows.

The complaints appear to be connected to recent changes Anthropic quietly made to the way Claude operates, reducing the model’s default “effort” level in order to economize on the number of tokens, or units of data, the model processes in response to each request.

The more tokens processed per task, the more computing power that task consumes. And there is widespread speculation that Anthropic, which has announced fewer multi-billion-dollar deals for data center capacity than some of its rivals, may be running short of computing resources after adoption of its products soared in the past few months.

User dissatisfaction with Claude’s sudden performance decline, and anger at Anthropic’s perceived lack of transparency, could derail the company’s runaway growth just as it hopes to woo investors for a potential IPO. The claims that Anthropic has not been candid about the changes to the way Claude operates, or about how those changes may increase the cost of using Claude, are particularly threatening because the company, more than any of its rivals, has tried to build its brand on transparency and on alignment with its users’ interests.

Anthropic declined to answer Fortune’s specific questions about Claude users’ complaints on the record. Boris Cherny, the Anthropic executive who leads its Claude Code product, responded to user complaints online by saying that Anthropic had reduced the default “effort” Claude makes in answering user prompts to “medium” in response to user feedback that Claude was previously consuming too many tokens per task. But many users complained that the company had not highlighted the change.

The situation has caused a pile-on of speculation and allegations—including from some of its competitors—that the company is purposely degrading performance due to a lack of compute capacity.

Across the industry, AI companies are facing rising GPU costs, constrained data center expansion, and difficult trade-offs over which products to prioritize as demand for “agentic” AI systems accelerates faster than infrastructure can scale. While an Anthropic spokesperson has said publicly that the AI lab does not degrade its models to better serve demand, there are reasons to believe the company is facing more acute constraints than some rivals.

Anthropic has suffered a series of recent outages as usage has increased, and it has introduced stricter usage limits during peak hours, drawing complaints from some users. In an internal memo reported by CNBC, OpenAI’s revenue chief also claimed that Anthropic had made a “strategic misstep” by not securing enough compute capacity and was “operating on a meaningfully smaller curve” than competitors. (Anthropic declined to answer CNBC’s questions about these claims.)

Meanwhile, Anthropic also announced last week that it had trained a new, yet-to-be-released model called Mythos that is significantly more capable than its Opus AI model, but also larger and more expensive to run, meaning it likely consumes more computing capacity than prior models. Anthropic stressed that it is not releasing the model to the general public yet because of security concerns, but some have questioned whether the company lacks sufficient compute capacity to support a broad Mythos rollout.

Victim of its own success

The scrutiny on Anthropic underscores the fast-changing nature of the AI market and the stakes involved. Just last week, Anthropic stunned the industry by announcing that its annualized recurring revenue, or ARR, is now $30 billion, up from $9 billion at the end of 2025. OpenAI said last month that it is generating $2 billion a month in revenue, or $24 billion a year, although the two companies do not report revenues in exactly the same way, making direct comparisons problematic.

Anthropic has recently benefited from a flood of new users, first due to the popularity of its AI coding tool, Claude Code, and later from a wave of consumer support that followed its feud with the U.S. Department of Defense. Many users switched to Claude from rivals such as OpenAI’s ChatGPT after the Trump administration designated Anthropic a “supply chain risk.” Anthropic had said the dispute stemmed from its insistence that the U.S. government agree in its contract not to use the company’s technology in lethal autonomous weapons or for the mass surveillance of American citizens.

Over the last few years, Anthropic has gained significant ground in the AI race, emerging as a leader in enterprise AI and building up significant goodwill among developers and enterprise users. But if the anger around Claude’s performance issues persists, it risks eroding some of that goodwill and could lead the company to stumble at a critical moment.

In response to some of the controversy around Claude’s recent performance issues, Cherny, the Claude Code head, said that Claude Opus 4.6—Anthropic’s flagship model—had introduced “adaptive thinking” in early February, which allows the model to decide how much reasoning to apply to a given task rather than using a fixed budget. In early March, Anthropic also shifted the default setting down to a “medium effort” level, Cherny said. While Claude Code users can manually change the tool’s effort levels, users who pay for the Pro versions of Cowork or the desktop version of Claude are not able to change the default at this time.
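
For developers affected by the shifting defaults, one workaround is to pin an explicit reasoning budget per request rather than rely on the client’s default effort level. The following is a minimal sketch using the Anthropic Python SDK’s extended-thinking setting; the model identifier claude-opus-4-6 is an assumption based on the model name above, and the budget value is purely illustrative.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-opus-4-6",  # assumed identifier for the Opus 4.6 model named above
        max_tokens=8192,
        # A fixed extended-thinking budget: the model reasons with up to
        # budget_tokens of internal deliberation instead of choosing its own
        # "adaptive" effort level.
        thinking={"type": "enabled", "budget_tokens": 4096},
        messages=[
            {"role": "user", "content": "Read the relevant files before editing."}
        ],
    )
    # With thinking enabled, the final content block holds the text answer.
    print(response.content[-1].text)

Claude Code users can get a similar effect through the tool’s manual effort controls, as Cherny noted; API callers get the same predictability from a fixed budget.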

To resolve some of the user issues, Cherny said the company will test “defaulting Teams and Enterprise users to high effort, to benefit from extended thinking even if it comes at the cost of additional tokens & latency” going forward.

He also pushed back on speculation that the model had been purposely watered down and on complaints from users that the change was rolled out with a lack of transparency, claiming the changes were made in response to user feedback and were flagged to users via a pop-up within the Claude Code interface. 

‘Unusable for complex engineering tasks’

Most of the user complaints center on Claude Code, Anthropic’s AI-powered coding tool, which has become one of the company’s most popular and fastest-growing products.

Launched in early 2025, Claude Code operates as a command-line agent that can read, write, and execute code autonomously within a developer’s environment. Since its debut, it has been widely adopted by individual developers and large enterprise engineering teams who rely on it for complex, multi-step coding tasks.

The recent changes in the performance of Claude Code gained widespread attention on social media thanks to a GitHub analysis that appears to be from Stella Laurenzo, a senior director of AI at AMD. In the widely shared analysis, Laurenzo said the changes had made Claude “unusable for complex engineering tasks.”

In her analysis, she found that from late February into early March, Claude moved from a “research-first” approach—reading multiple files and gathering context before making changes—to a more direct “edit-first” style. The model reads less context before acting, makes more mistakes, and requires significantly more user intervention, according to the analysis. The analysis also points to a rise in behaviors like stopping too early, avoiding responsibility, or asking unnecessary permission, which it links to a reduction in “thinking” depth over the same period.

“Claude has regressed to the point [that] it cannot be trusted to perform complex engineering,” she wrote.

In a comment responding to the analysis, Anthropic’s Cherny said the analysis was likely misreading at least part of the data, claiming that the model’s reasoning hadn’t been reduced but that Anthropic had made a change so that the model’s full “reasoning trace” is no longer visible to the user.

But Laurenzo is far from the only person having issues with the tool.

“I’ve had incredibly frustrating sessions with Claude Code the past two weeks,” Dimitris Papailiopoulos, a principal research manager at Microsoft, wrote on X. “I set effort to max, yet it’s extremely sloppy, ignores instructions, and repeats mistakes.”

This story was originally featured on Fortune.com



