Anthropic denied "nerfing" Claude Code after a week of user complaints about quality. Samuel Boivin/NurPhoto via Getty Images

Anthropic found issues with Claude Code after complaints that the popular tool had gotten worse.

The company denied "nerfing," or intentionally degrading, the model.

Users had been complaining for weeks that Claude Code was giving worse responses.

If you were one of the users complaining that Claude Code has sucked lately, Anthropic just confirmed it wasn't all in your head.

The company wrote in a lengthy blog post that after reviewing user complaints about the quality of Claude Code, one of its most popular products, it identified three issues likely contributing to a worse user experience.

"We take reports about degradation very seriously. We never intentionally degrade our models," the Thursday post read.

It said the underlying model was not affected; the issues were tweaks made at the product level.

As of April 20, Anthropic said, those issues had been fixed, and it had taken steps to avoid similar problems in the future.

The post came after weeks of user complaints that Claude Code's responses had deteriorated. The complaints followed a series of wins for Anthropic, which has been praised for the technical abilities of its tools and earned a $1 trillion valuation on secondary markets.

The issues led some users to speculate that Anthropic had intentionally degraded, or "nerfed," the tool, which the company flatly denied.

"Claude has regressed to the point it cannot be trusted to perform complex engineering," a GitHub user identified as Stella Laurenzo, senior director of chipmaker AMD's AI group, wrote in a post from early April.

Similar complaints were shared on Reddit, where one user said Claude Code had become "lazy," "ignorant," and "degraded and myopic."

"Over the past month, some of you reported Claude Code's quality had slipped," Anthropic's developers said in an X post on Thursday.

"We investigated, and published a post-mortem on the three issues we found."

The company detailed the product changes that led to the reported problems: a change to Claude Code's default thinking level, a cache-optimization tweak that introduced a bug, and a system prompt meant to make the tool less verbose.

Steps taken to address the problems include having a larger share of staff use the public build of Claude Code, improving its code review tool, and adding tighter controls on system prompt changes.

The company thanked users who submitted feedback and said it was resetting usage limits for all subscribers as of Thursday.

Some Claude users said they felt validated by Anthropic's announcement.

"I can't believe we were right," popular X user @levelsio wrote. "Claude was dumbified on March 4, just when we noticed!"

The user reshared their March 4 post, from the day Anthropic said it had changed the default thinking level, which read, "Claude Code with Opus 4.6 was so dumb today I finally had to write my own code again."

Anthropic also drew some scrutiny on Tuesday when it said it was experimenting with removing Claude Code from the Pro plan, its lower-tier paid subscription. The company said it was a test that affected around 2% of new users.

Read the original article on Business Insider