AI Fears Rattle Cybersecurity Shares

Mei Nakamura

New Anthropic model sparks sector selloff

Cybersecurity stocks came under sharp pressure on Friday after a report suggested Anthropic is testing a more powerful artificial intelligence model with advanced cyber capabilities and possible security concerns. The reaction underscored how sensitive investors have become to any sign that AI tools could accelerate the pace of change in digital defense and make existing security products look less differentiated.

The selling followed a Fortune report published on Thursday, which cited details from a publicly accessible draft blog post. According to that report, Anthropic’s new Mythos model is being presented internally as the company’s strongest system so far. The same report said the rollout may proceed gradually because of the potential cybersecurity implications tied to the model’s capabilities.

Anthropic did not immediately respond to a request for comment, but the market response was swift. Investors appeared to focus less on confirmed product details and more on what the report implied about the next stage of competition between AI developers and the cybersecurity industry. If more capable models can assist with code analysis, vulnerability discovery or offensive cyber tasks, then software vendors across the security stack may face fresh pressure to prove that their defenses can keep pace.

That concern was visible across the sector, where investors moved out of some of the industry's best-known names in a broad risk-off trade tied specifically to fears of AI disruption.

Losses spread across leading cyber names

The pullback was widespread rather than isolated to a single company. The iShares Cybersecurity ETF fell 3 percent, reflecting a broad decline across the group. Among the largest individual names, CrowdStrike and Palo Alto Networks each dropped 7 percent. Zscaler and SentinelOne both fell more than 8 percent, while Tenable recorded the steepest decline, sinking nearly 11 percent. Okta and Netskope also came under pressure, each losing more than 6 percent.

The breadth of the decline suggests investors were reacting to a sector-level threat rather than company-specific earnings concerns or operational setbacks. In effect, the report revived a familiar question hanging over cybersecurity equities. As artificial intelligence systems become more capable, will they strengthen demand for cyber tools by creating more threats, or will they erode the value of existing offerings by making some forms of protection easier to automate?

That tension has become central to how the market prices security companies. Investors still recognize that cyber threats are growing and that spending on protection remains essential. But they also know that AI is changing both sides of the equation. It can help defenders improve detection and response, while also giving attackers more efficient ways to scan systems, write malicious code and exploit weaknesses at scale.

AI disruption has already unsettled the sector

Friday’s selloff did not emerge in isolation. Cybersecurity stocks have already shown a pattern of weakness when Anthropic releases new tools tied to software security and code analysis. Last month, the sector fell after the company announced a code-scanning security tool for Claude, adding to investor concern that AI developers are moving deeper into functions that overlap with existing security products.

The broader software industry has been wrestling with a similar issue. AI innovation is not only creating new business opportunities; it is also compressing product cycles and reshaping expectations around what software should be able to do natively. For cybersecurity vendors, the challenge is no longer just defending customers from known digital threats. It also means adapting to a world where the threat landscape evolves faster because the underlying tools are becoming smarter and more autonomous.

The rise of AI agents has intensified that debate. More autonomous systems can perform tasks with less human supervision, which may make them useful for defensive work such as monitoring code, identifying anomalies or prioritizing alerts. At the same time, those same advances raise concern that attackers could gain tools that lower the skill threshold required for hacking and make malicious operations more scalable.

Threat concerns grow as capabilities expand

Anthropic has already acknowledged that its models can be misused in cyber contexts. In November, the company said a state-sponsored group in China had used Claude to automate part of a cyberattack. That disclosure gave investors an earlier signal that advanced language models are no longer sitting at the edge of the threat discussion. They are becoming part of it.

The report around Mythos appears to have pushed that concern further by suggesting Anthropic itself is treating the model as sensitive enough to warrant a cautious release. Whether or not Mythos proves to be a direct competitive threat to security vendors, the market reaction shows that investors increasingly see advanced AI as both a product catalyst and a structural risk for the cybersecurity sector.

For now, the selloff reflects uncertainty rather than a settled judgment on winners and losers. But it also highlights a deeper shift in investor thinking. Cybersecurity companies are no longer being judged only on their ability to stop attacks. They are also being judged on how quickly they can adapt to an AI environment that is making attacks more sophisticated, more accessible and potentially much harder to contain.