The New York Times published its broadest AI security feature of 2026 on April 6, with a direct conclusion: AI systems from companies like Anthropic and OpenAI have given hackers a meaningful upgrade in attack capability, and the only viable response is deploying AI on the defensive side too.

The piece, bylined by Cade Metz and Kate Conger, frames the shift as structural rather than incremental. Hackers can now use AI agents to probe infrastructure, craft phishing content, identify vulnerabilities, and execute attacks at a speed and volume that human operators cannot match. The defense layer has followed suit: security teams are deploying their own AI agents for anomaly detection, threat triage, and automated response. The result is an arms race where both sides are running on the same underlying technology stack.

The timing of the NYT piece coincided with one of the worst single days in AI agent security history. On April 6, the DEV Community published an OpenClaw post-mortem documenting nine CVEs, 135,000 exposed instances, and 12 percent of the skill marketplace compromised. Anthropic simultaneously patched a separate critical flaw in Claude Code: a command parser bug that let attackers bypass all developer-configured deny rules by embedding a 51st subcommand past a hard-coded limit of 50. The NYT article did not reference either incident directly, but it landed in the same news cycle.
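The bug class described above is easy to reproduce in miniature. The sketch below is purely illustrative and is not Anthropic's actual parser; the names (`MAX_SUBCOMMANDS`, `DENY_RULES`, the two checker functions) and the `&&`-splitting logic are assumptions chosen to show the pattern: a hard-coded cap on how many subcommands get validated means anything past the cap runs unchecked.

```python
# Illustrative sketch of the reported bug class: a hard-coded subcommand
# limit that silently stops deny-rule checks. All names and logic here
# are hypothetical, not Anthropic's real code.

MAX_SUBCOMMANDS = 50          # hard-coded parser limit (the flaw)
DENY_RULES = {"rm", "curl"}   # developer-configured deny list

def is_allowed_buggy(command: str) -> bool:
    """Validates only the first MAX_SUBCOMMANDS subcommands."""
    subcommands = [seg.strip().split()[0] for seg in command.split("&&")]
    for sub in subcommands[:MAX_SUBCOMMANDS]:  # bug: the 51st is never checked
        if sub in DENY_RULES:
            return False
    return True

def is_allowed_fixed(command: str) -> bool:
    """Validates every subcommand, regardless of count."""
    subcommands = [seg.strip().split()[0] for seg in command.split("&&")]
    return all(sub not in DENY_RULES for sub in subcommands)

# An attacker pads 50 benign subcommands, then appends the denied one.
payload = " && ".join(["echo ok"] * 50 + ["rm -rf /"])
```

The buggy checker approves the padded payload because the denied `rm` sits at position 51; the fixed checker rejects it. The lesson generalizes: any validator with a size cap turns that cap into a bypass.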

Why the Arms Race Framing Matters for Agent Builders

The NYT framing carries a specific implication for anyone building autonomous agents: the same AI capabilities that make agents useful for legitimate automation also make them useful as attack infrastructure. An agent that can browse, execute code, use tools, and call APIs serves a business automating workflows and an attacker probing enterprise environments equally well.

The practical question is not whether AI agents will be used in attacks. According to the New York Times, they already are. The question is whether the agents running in your stack have the logging, access controls, and anomaly detection in place to be visible to your defensive systems when something goes wrong.
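The visibility point above can be made concrete with a small sketch. This is not from the NYT piece or any specific product; the gate function, allowlist, and log schema are all hypothetical, chosen to show the minimum: every tool call an agent attempts passes through one choke point that enforces an access-control list and emits a structured log line a defensive system can ingest.

```python
# Minimal sketch of agent-side observability: every tool call goes
# through a gate that enforces an allowlist and logs the attempt in a
# structured form. Names and schema are hypothetical, for illustration.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.audit")

ALLOWED_TOOLS = {"search", "read_file"}  # hypothetical access-control list

def gated_call(tool: str, args: dict) -> bool:
    """Log the attempt either way; permit only allowlisted tools."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "args": args,
        "allowed": tool in ALLOWED_TOOLS,
    }
    log.info(json.dumps(record))  # one JSON line per attempt, SIEM-friendly
    return record["allowed"]
```

The design choice worth noting is that denied calls are logged, not just blocked: a burst of `"allowed": false` records is exactly the anomaly signal a defensive AI agent would be watching for.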

Monday, April 6 made the stakes concrete: OpenClaw exposed, Claude Code patched, and the newspaper of record confirming that AI-powered attacks are no longer a future scenario.