AI Daily
Security • Monday, March 16, 2026

The AI Cyber Arms Race Is Already Here

By AI Daily Editorial • Monday, March 16, 2026

In late 2025, security researchers documented what they believe was the first large-scale cyberattack executed with minimal human involvement: the threat actor used AI to run an estimated 80–90% of the campaign autonomously, with humans intervening only at a handful of decision points. Anthropic published details of disrupting what it described as an AI-orchestrated espionage operation, though it provided limited specifics about attribution or targets. The incident itself may matter less than what it signals: the offensive use of AI in cybersecurity has moved from theoretical concern to documented reality, and it is accelerating.

Microsoft's security blog published a detailed analysis this month of how threat actors are operationalising AI across the cyberattack lifecycle. The picture is not of AI replacing human hackers but of AI dramatically lowering the skill floor for new attackers and raising the throughput of existing ones. AI is being used to write convincing phishing emails at scale, translate lures into targets' languages, summarise stolen data to identify the most valuable material quickly, generate and debug malware, and automate reconnaissance. Tasks that previously required significant expertise or time now require neither. Kevin Mandia, founder of the cybersecurity firm Mandiant, put it bluntly to CNBC: virtually all cyberattacks will soon be AI-enabled or conducted entirely by AI.
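The economics behind that claim are easy to sanity-check. The back-of-envelope sketch below is illustrative only; the token count, API price, and latency are assumptions, not figures from Microsoft's analysis or Mandia's remarks:

```python
# Back-of-envelope: unit economics of AI-generated phishing at scale.
# All constants are illustrative assumptions, not sourced figures.

TOKENS_PER_LURE = 600        # assumed: one personalised email plus prompt overhead
PRICE_PER_1M_TOKENS = 1.00   # assumed: commodity LLM API pricing, USD
SECONDS_PER_LURE = 2.0       # assumed: one API round trip

budget_usd = 100.0
lures = int(budget_usd / (TOKENS_PER_LURE / 1_000_000 * PRICE_PER_1M_TOKENS))
hours = lures * SECONDS_PER_LURE / 3600

print(f"${budget_usd:.0f} buys roughly {lures:,} tailored lures "
      f"(~{hours:,.0f} machine-hours, fully parallelisable)")
```

Even if each assumption is off by an order of magnitude, the output (on the order of a hundred thousand tailored lures for a hundred dollars) dwarfs what any human-staffed operation can produce, which is the throughput shift the analysis describes.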

The defence side is evolving in parallel, but the asymmetry is uncomfortable: defenders must protect every surface all the time, while attackers need to find only one opening. AI amplifies that asymmetry by letting attackers probe more surfaces faster, craft more targeted lures, and iterate on failed attempts in near real time. Google's security leadership has been pushing what it calls a "full-stack AI-driven" defence approach: using AI not just to detect threats but to automate incident response, reduce analyst toil, and shrink the window between detection and containment. DeepMind published research this month evaluating the cybersecurity threats that advanced AI systems themselves could pose, a sign that labs are beginning to model their own technologies as threat vectors, not just as tools.
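What "shrinking the window between detection and containment" can look like in practice is a scoring loop wired directly to a containment action. The sketch below is a minimal illustration, not Google's pipeline; the indicator weights, threshold, and isolate_host hook are hypothetical stand-ins for a trained detection model and a real EDR or SOAR integration:

```python
"""Minimal sketch of automated detection-to-containment triage.

Hypothetical throughout: the weights, threshold, and isolate_host()
stand in for a trained model and an EDR/SOAR containment API.
"""
from dataclasses import dataclass, field


@dataclass
class Alert:
    host: str
    indicators: list[str] = field(default_factory=list)


# Assumed weights a defender might assign; a deployed system would
# substitute a model's risk score here.
WEIGHTS = {
    "credential_dump": 0.9,
    "lateral_movement": 0.8,
    "ai_generated_lure": 0.5,
    "anomalous_login": 0.4,
}
CONTAIN_THRESHOLD = 0.8


def score(alert: Alert) -> float:
    """Combine indicator weights; saturates at 1.0."""
    return min(1.0, sum(WEIGHTS.get(i, 0.1) for i in alert.indicators))


def isolate_host(host: str) -> None:
    """Hypothetical containment hook (a network-isolate API call in practice)."""
    print(f"[containment] isolating {host}")


def triage(alerts: list[Alert]) -> None:
    # Act on high-confidence alerts immediately; queue the ambiguous middle.
    for alert in sorted(alerts, key=score, reverse=True):
        s = score(alert)
        if s >= CONTAIN_THRESHOLD:
            isolate_host(alert.host)  # no wait for an analyst
        else:
            print(f"[queue] {alert.host} (score {s:.2f}) -> analyst review")


triage([
    Alert("db-01", ["credential_dump", "lateral_movement"]),
    Alert("laptop-7", ["anomalous_login"]),
])
```

The design point is that the model's contribution is only the score; the latency win comes from wiring that score straight to a containment action, with humans kept in the loop for the ambiguous middle of the distribution rather than for every alert.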

The Anthropic espionage case is particularly worth noting in the context of the company's wider situation. A company that is simultaneously fighting the Pentagon for the right to maintain safety guardrails, defending copyright-infringement lawsuits, and publicly documenting its role in disrupting AI-powered espionage presents a complicated picture of what it means to be a safety-focused AI lab in 2026. Disrupting malicious AI use is precisely the kind of work Anthropic argues it was built to do: the kind that, on its own account, requires a lab that takes safety seriously rather than one willing to remove guardrails on demand.

The question defenders and policymakers are grappling with is whether the current pace of AI capability growth gives defence enough time to adapt. The consensus among security professionals is cautious: AI-powered defence tools are genuinely improving, but so are offensive tools, and the baseline expectation is that AI will make the existing threat landscape substantially worse before it makes it better. For organisations that haven't yet integrated AI into their security operations, the window to catch up is narrowing.
