Anthropic’s Claude AI Model Faces Performance Crisis Amid Compute Capacity Concerns

Anthropic, a prominent player in the rapidly evolving artificial intelligence landscape, is grappling with a significant crisis of confidence among its user base. Developers and heavy users of its popular Claude AI model report a marked decline in performance, citing instances where the model fails to adhere to instructions, takes inappropriate shortcuts, and commits more errors, particularly in complex workflows. This widespread dissatisfaction appears directly linked to recent, unannounced alterations Anthropic made to Claude’s operational parameters, specifically reducing the model’s default "effort" level. This strategic shift, aimed at economizing on "tokens"—the fundamental units of data processed by the AI—has sparked intense speculation about the company’s computing resource constraints, threatening to derail its impressive growth trajectory and aspirations for a future initial public offering (IPO).
The Core of the Problem: Diminished Performance and Hidden Changes
At the heart of the current controversy lies a discernible degradation in Claude’s output quality. Users, particularly those engaged in intricate programming tasks, have observed a frustrating regression in the AI’s ability to execute multi-step instructions, maintain context, and deliver reliable results. These complaints surged following Anthropic’s quiet adjustment to Claude’s underlying mechanics: by dialing down the default "effort" level, the company caused the model to process fewer tokens per task, directly reducing the compute each request consumes. While this might seem like a pragmatic move to manage resources, the resulting drop in intelligence and reliability has alienated a significant portion of its dedicated user community.
The timing of these performance issues is critical. The AI industry is currently witnessing unprecedented demand for sophisticated "agentic" AI systems—models capable of autonomous, multi-step reasoning and action. This surge in demand is outpacing the expansion of critical infrastructure, including Graphics Processing Units (GPUs) and data center capacity. Unlike some of its rivals, Anthropic has publicly announced fewer multibillion-dollar deals for data center expansion, leading many observers to infer that the company might be struggling to keep pace with its own explosive user growth. This perceived shortage of compute resources, coupled with a lack of transparency regarding the changes, casts a long shadow over Anthropic’s carefully cultivated brand image.
A Reputation Under Scrutiny: Transparency and User Trust
Anthropic has meticulously built a brand reputation anchored in transparency and a strong alignment with user interests, often positioning itself as a more ethically grounded alternative to other AI giants. This commitment to openness has been a cornerstone of its appeal, particularly among developers and enterprises seeking trustworthy AI solutions. However, the current situation—where critical performance changes were implemented without clear, upfront communication—directly contradicts this ethos. User dissatisfaction is not merely about reduced performance; it’s profoundly exacerbated by the perceived lack of candor from a company that prides itself on being different. This erosion of goodwill could have far-reaching consequences, especially as Anthropic reportedly eyes a potential IPO, where investor confidence is paramount.
When pressed for specifics, Anthropic declined to provide on-the-record answers to Fortune’s detailed questions regarding user complaints. However, Boris Cherny, the Anthropic executive leading its Claude Code product, did respond to user grievances online. He acknowledged that Anthropic had indeed reduced the default "effort" Claude expends in responding to user prompts to "medium." Cherny framed this adjustment as a response to earlier user feedback indicating that Claude was consuming an excessive number of tokens per task. Yet, this explanation has done little to quell the outrage, as many users contend that the company failed to adequately highlight this crucial change, leaving them to discover the performance degradation firsthand.
Industry-Wide Challenges and Anthropic’s Acute Constraints
The challenges faced by Anthropic are symptomatic of broader pressures within the AI industry. Companies globally are confronting escalating GPU costs, limited data center expansion capabilities, and difficult trade-offs regarding which products and features to prioritize as demand for advanced AI systems continues its relentless acceleration. While an Anthropic spokesperson has publicly stated that the AI lab does not intentionally degrade its models to better manage demand, various indicators suggest the company may indeed be facing more acute compute constraints than some of its primary competitors.
A chronological look at recent events underscores this narrative:
- Early 2025: Anthropic launches Claude Code, an AI-powered coding tool that quickly gains traction, becoming one of its fastest-growing products.
- Late 2025: Anthropic’s annualized recurring revenue (ARR) reaches an impressive $9 billion, signaling rapid commercial success.
- Early February (Current Year): Claude Opus 4.6, Anthropic’s flagship model, introduces "adaptive thinking," a feature designed to allow the model to dynamically determine the level of reasoning required for a task rather than adhering to a fixed budget.
- Early March (Current Year): Anthropic quietly shifts the default "effort" setting for Claude to a "medium" level, a change that coincided with the noticeable decline in performance reported by users.
- Recent Months (Current Year): As usage has soared, Anthropic has experienced a series of outages and has implemented stricter usage limits during peak hours, further frustrating its user base.
- Mid-April (Current Year): An internal memo reported by CNBC reveals claims from OpenAI’s revenue chief, who alleged that Anthropic made a "strategic misstep" by failing to secure sufficient compute capacity, operating on a "meaningfully smaller curve" than its rivals. Anthropic notably declined to comment on these specific claims.
- Late April (Current Year): Anthropic announces it has trained a new, yet-to-be-released model called Mythos, described as significantly more capable than its current Opus AI model. However, Mythos is also larger and more expensive to run, raising further questions among experts about Anthropic’s capacity to support a broad public rollout, despite the company stressing security concerns as the primary reason for its delayed release.
- Early May (Current Year): Anthropic stuns the industry by announcing its ARR has surged to $30 billion, a staggering increase from $9 billion at the end of 2025, underscoring its rapid commercial expansion.
These events, particularly the outages, usage limits, and a competitor’s direct allegations, paint a picture of a company struggling to match its burgeoning popularity with adequate infrastructural support.
Victim of Its Own Success: Growth Outpacing Resources
Anthropic’s current predicament can be largely characterized as a classic case of being a victim of its own success. The company’s annualized recurring revenue (ARR) has skyrocketed to $30 billion, up from $9 billion at the close of 2025. This meteoric rise, while indicative of strong market adoption, also highlights the immense pressure on its underlying infrastructure. For context, OpenAI recently reported generating $2 billion a month in revenue, or $24 billion annually, though direct comparisons are challenging due to differing revenue reporting methodologies.
Anthropic’s recent surge in user acquisition stems from several factors. The popularity of its AI coding tool, Claude Code, was an initial driver. Later, the company benefited from a wave of consumer support following its publicized dispute with the U.S. Department of Defense. In March 2026, many users reportedly switched to Claude from rivals like OpenAI’s ChatGPT after the Trump administration controversially designated Anthropic a "supply-chain risk." Anthropic clarified that this dispute arose from its principled stance, insisting that the U.S. government agree in its contract not to utilize the company’s technology in lethal autonomous weapons or for mass surveillance of American citizens. This public demonstration of ethical commitment resonated deeply with a segment of the AI user base, drawing significant goodwill and a new influx of users.
Over the past few years, Anthropic has indeed carved out a significant leadership position in enterprise AI, fostering strong relationships with developers and corporate clients. However, if the current wave of anger and frustration over Claude’s performance issues persists, it risks severely eroding this hard-won goodwill. At a critical juncture, with IPO ambitions on the horizon, such a stumble could prove incredibly costly.
Official Responses and Proposed Solutions
In an attempt to address the escalating controversy, Boris Cherny, the head of Claude Code, provided further clarification on a GitHub issue thread. He explained that Claude Opus 4.6 had introduced "adaptive thinking" in early February, allowing the model to intelligently gauge the necessary reasoning for a given task. Subsequently, in early March, the default setting was indeed shifted to a "medium effort" level. Cherny noted that while Claude Code users can manually adjust the tool’s effort levels, users of the Pro versions of Cowork or the desktop version of Claude currently lack this customization option.
To mitigate the immediate user issues, Cherny announced that the company would begin testing "defaulting Teams and Enterprise users to high effort, to benefit from extended thinking even if it comes at the cost of additional tokens and latency." This suggests a recognition of the problem’s severity, particularly for its high-value enterprise clients. Cherny also pushed back against widespread speculation that the model had been deliberately "watered down" and refuted claims of a lack of transparency, asserting that the changes were made in response to user feedback and were flagged to users via a pop-up notification within the Claude Code interface. However, many users have stated they did not recall seeing such a notification, fueling the perception of opacity.
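The "adaptive thinking" behavior Cherny describes, in which the model gauges how much reasoning a task needs rather than spending a fixed budget, might be sketched as follows. The complexity heuristic, tier sizes, and the idea that the default effort acts as a ceiling are all assumptions made for illustration, not a description of Anthropic's implementation:

```python
# Hypothetical sketch of "adaptive thinking": the system estimates task
# complexity and picks a reasoning depth, subject to a ceiling set by the
# default effort level. Heuristic and numbers are invented for illustration.

def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a learned complexity estimator: scores 0-2
    based on prompt length and multi-step cues."""
    score = 0
    if len(prompt.split()) > 50:
        score += 1
    if any(cue in prompt.lower() for cue in ("refactor", "multi-step", "then")):
        score += 1
    return score

def adaptive_budget(prompt: str, default_effort: str = "medium") -> int:
    # Under this assumption, the default effort caps how deep adaptive
    # thinking may go, which is why lowering the default to "medium"
    # would shrink reasoning precisely on the hardest tasks.
    ceilings = {"low": 2_000, "medium": 8_000, "high": 32_000}
    tiers = [1_000, 8_000, 32_000]  # budget per complexity score
    return min(tiers[estimate_complexity(prompt)], ceilings[default_effort])
```

A scheme like this would explain the pattern users report: simple prompts behave as before, while complex, multi-step engineering work silently loses most of its reasoning depth under the "medium" default.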
"Unusable for Complex Engineering Tasks": User Testimonials and Technical Analysis
The bulk of user complaints has concentrated on Claude Code, Anthropic’s AI-powered coding agent. Launched in early 2025, Claude Code functions as a command-line tool capable of autonomously reading, writing, and executing code within a developer’s environment. Its initial success stemmed from its utility for complex, multi-step coding tasks, making it a favorite among individual developers and large enterprise engineering teams.
The performance degradation of Claude Code gained significant traction on social media, largely due to a detailed GitHub analysis believed to be authored by Stella Laurenzo, a senior director of AI at AMD. Her widely shared findings concluded that the changes had rendered Claude "unusable for complex engineering tasks." Laurenzo’s analysis meticulously documented a shift in Claude’s behavior from late February into early March. Previously, the model adopted a "research-first" approach, meticulously reading multiple files and gathering extensive context before attempting modifications. This shifted to a more direct, "edit-first" style, where the model now reads less context, leading to more frequent mistakes and a higher demand for user intervention. The analysis also highlighted an increase in undesirable behaviors such as prematurely stopping tasks, avoiding responsibility, or requesting unnecessary permissions, all linked to a reduction in the model’s "thinking" depth during the same period. "Claude has regressed to the point [that] it cannot be trusted to perform complex engineering," Laurenzo starkly concluded.
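The behavioral shift Laurenzo documents, from a research-first loop to an edit-first one, can be caricatured in a short sketch. The repository model and every function name here are invented stubs; this is not Claude Code's actual implementation:

```python
# Caricature of the two agent styles Laurenzo's analysis contrasts.
# The in-memory "repo" and all helpers are invented for illustration.

REPO = {
    "api.py": "def handler(): ...",
    "models.py": "class User: ...",
    "tests.py": "def test_handler(): ...",
}

def research_first(task: str) -> dict:
    """Pre-change style: read every related file before editing."""
    context = {name: REPO[name] for name in REPO}   # gather broad context first
    target = "api.py"                               # choose target with full context
    return {"edited": target, "files_read": len(context)}

def edit_first(task: str) -> dict:
    """Post-change style: jump straight to a best-guess edit."""
    target = "api.py"                               # guess a file immediately
    return {"edited": target, "files_read": 1}      # minimal context, more misses
```

The point of the caricature is the context gap: an edit made after reading one file is cheaper in tokens but far likelier to miss cross-file dependencies, matching the increase in mistakes and user interventions that the analysis records.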
In response to Laurenzo’s analysis, Anthropic’s Cherny countered, suggesting that part of the data might be misinterpreted. He claimed that the model’s core reasoning capabilities had not been reduced, but rather Anthropic had altered the interface such that the full "reasoning trace"—the internal thought process of the model—is no longer visible to the user. This technical distinction, while potentially accurate, does little to assuage users experiencing tangible performance declines.
Laurenzo’s experience is far from isolated. Dimitris Papailiopoulos, a principal research manager at Microsoft, echoed similar frustrations on X (formerly Twitter): "I’ve had incredibly frustrating sessions with Claude Code the past two weeks. I set effort to max, yet it’s extremely sloppy, ignores instructions, and repeats mistakes." Such testimonials from leading industry figures underscore the severity and breadth of the performance issues.
Broader Implications for the AI Landscape
The current crisis facing Anthropic is more than just a temporary blip; it carries significant implications for the broader AI industry. It vividly illustrates the acute tension between rapid innovation, user demand, and the foundational compute infrastructure required to sustain it. The race to develop more powerful and "agentic" AI models is accelerating, but without commensurate investment in GPUs and data centers, even the most promising models risk being hobbled by resource constraints.
For Anthropic, the stakes are particularly high. Its positioning as a leader in ethical AI and transparency is now under immense pressure. Should the company fail to effectively address the performance issues and, crucially, rebuild trust through transparent communication, it risks alienating its core developer community and enterprise clients. This erosion of goodwill could severely impact its competitive standing against rivals like OpenAI, Google, and Meta, all of whom are heavily investing in compute infrastructure.
Furthermore, the saga highlights the delicate balance AI companies must strike between optimizing for cost and maintaining performance. In an industry where "bigger and better" models are constantly being developed (such as Anthropic’s own Mythos), the ability to efficiently scale and deploy these advanced systems without compromising quality will be a key differentiator. Anthropic’s current struggles serve as a cautionary tale: sustained growth and market leadership in AI demand not only cutting-edge research but also robust, scalable, and transparent operational strategies. The coming months will be critical in determining whether Anthropic can navigate this challenging period, reaffirm its commitment to its users, and solidify its position in the fiercely competitive AI future.