AI cyberwarfare has outgrown deterrence: Why US national defense doctrine must change
The modern architecture of national cyberdefense is built on a flawed assumption: that deterrence, the strategic logic that governed nuclear standoffs and conventional conflicts in the 20th century, can be effectively applied to an adversary that does not think, feel, or negotiate. This assumption is not just outdated; it is dangerously inadequate for the realities of AI-driven conflict.
When the White House unveiled its latest cybersecurity strategy, it signaled awareness of escalating digital threats. Shortly afterward, the US State Department announced the creation of the Bureau of Emerging Threats, a move that, on its face, suggests institutional adaptation. Yet beneath these developments lies a deeper issue: the doctrine guiding these efforts remains rooted in a paradigm that no longer applies.
The central flaw is conceptual. Traditional deterrence operates on the premise that adversaries can be influenced through fear, cost imposition, or negotiation. That logic collapses when the adversary is not a human actor but a self-propagating system. Malware does not fear retaliation. Autonomous code does not respond to sanctions. AI agents, once deployed, do not reconsider their objectives because a diplomatic channel has opened.
This is the defining asymmetry of modern cyber conflict. A human adversary can be targeted, disrupted, or eliminated. But the systems they unleash (adaptive, replicating, and increasingly autonomous) continue to operate independently of their creators. Neutralizing the origin point does not neutralize the threat. In some scenarios, it may even accelerate it.
A glimpse of this dynamic emerged during the so-called “12 Day War of 2025,” in which strategic decisions were driven not just by present capabilities but by projections of future ones. The targeting of an Iranian AI researcher was reportedly justified not by what he had already achieved, but by what he was expected to develop within months. This reflects a shift toward anticipatory conflict: preemptive action taken against potential capability rather than immediate threat.
However, such tactics are inherently limited. In a landscape where AI systems can be distributed, encrypted, and triggered by conditional events, eliminating a human node may have negligible impact. Worse, it could activate contingency mechanisms, a digital “dead man’s switch” that ensures the system continues or escalates in the absence of its operator. Software does not stand down when its creator is removed. It executes.
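To make the mechanism concrete: a dead man’s switch is nothing more than logic that fires when an expected signal stops arriving. The minimal Python sketch below (all names and intervals hypothetical, with a harmless print standing in for the contingency action) shows why removing the operator can trigger, rather than halt, such a system:

```python
import time

HEARTBEAT_INTERVAL = 60   # seconds between watchdog checks (assumed)
GRACE_PERIOD = 3 * 3600   # silence tolerated before the contingency fires (assumed)

last_heartbeat = time.time()

def record_heartbeat() -> None:
    """Called whenever the operator checks in (e.g., a signed message arrives)."""
    global last_heartbeat
    last_heartbeat = time.time()

def on_operator_silence() -> None:
    """The contingency action. A benign failsafe might page an on-call engineer
    or release a backup key; hostile code would launch its next stage here."""
    print("Operator silent beyond grace period; executing contingency.")

def watchdog_loop() -> None:
    """Runs independently of the operator: the absence of a signal IS the trigger."""
    while True:
        if time.time() - last_heartbeat > GRACE_PERIOD:
            on_operator_silence()
            break
        time.sleep(HEARTBEAT_INTERVAL)
```

The design point is that the watchdog needs no further input from anyone: killing the operator's channel is precisely the condition it is built to detect.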
This reality demands a fundamental rethinking of defense priorities. The most critical vulnerabilities are not abstract; they are physical. Power grids, water systems, telecommunications infrastructure: these are the operational backbones of modern society. They are also, in many cases, insufficiently hardened against coordinated cyber-physical attacks.
The current policy approach attempts to balance infrastructure protection with deregulation aimed at fostering private-sector innovation. In theory, this is not contradictory. In practice, the balance is misaligned. The battlefield is offense-dominant: attackers require fewer resources and less coordination than defenders. Under such conditions, leaving critical systems exposed, even partially, creates systemic risk.
Yet there is a countervailing trend that offers cautious optimism: the decentralization of defensive capability. Advances in AI efficiency have enabled powerful models to run on consumer-grade hardware. This democratization of compute means that defense is no longer exclusively the domain of governments or large corporations. Individuals and small teams can now develop and deploy localized security solutions.
Consider the emergence of AI-driven endpoint defense systems: lightweight models that monitor behavior, detect anomalies, and respond in real time without relying on centralized data collection. These systems represent a shift toward user-owned security, reducing dependence on large-scale surveillance architectures. In this model, defense becomes distributed, adaptive, and privacy-preserving.
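As a simplified illustration of the pattern (not a description of any particular product), such a detector can be trained entirely on-device against a baseline of normal behavior. The Python sketch below uses scikit-learn's IsolationForest on hypothetical per-process features; the feature set, thresholds, and synthetic baseline are assumptions for demonstration only:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process features collected locally: CPU %, bytes written/s,
# outbound connections/min, distinct files touched/min. Nothing leaves the host.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5, 2_000, 1, 3], scale=[2, 500, 0.5, 1], size=(500, 4))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def looks_anomalous(features: list[float]) -> bool:
    """True if observed behavior is an outlier relative to the local baseline."""
    return detector.predict(np.array([features]))[0] == -1

# A process that suddenly writes heavily and opens many connections stands out.
print(looks_anomalous([6, 2_100, 1, 3]))      # False: consistent with baseline
print(looks_anomalous([90, 80_000, 40, 60]))  # True: flag for local response
```

Because both training and inference run locally on commodity hardware, the privacy-preserving property follows from the architecture itself rather than from policy.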
This technological shift intersects with broader philosophical questions about autonomy and control. If software is a form of expression, and knowledge a form of power, then enabling individuals to defend their own systems becomes not just a technical issue, but a civic one. The convergence of these ideas challenges the notion that security must come at the cost of privacy.
However, the rise of defensive AI does not eliminate the risks associated with offensive AI. In fact, it amplifies them. A central concept in AI safety, alignment, is often misunderstood as a universal solution. In reality, alignment ensures that an AI system behaves according to the intentions of its creators. It does not guarantee ethical behavior in a broader sense.
An aligned system can be highly effective at executing harmful objectives if those objectives are embedded in its design. This is where the work of organizations like Anthropic becomes relevant. Their controlled-release model initiatives, such as Project Glasswing, highlight both the capabilities and the risks of advanced systems. Reports of models autonomously navigating constraints, such as limited network access, underscore the need for rigorous testing and containment.
Alignment, in this context, becomes a force multiplier. It increases reliability, precision, and scalability. In a geopolitical environment, that translates to more effective cyber operations, not less. The more predictable and controllable a system is, the more efficiently it can be weaponized.
This leads to a paradox: the same technologies that enhance safety at the micro level can destabilize security at the macro level. If multiple actors possess highly aligned, highly capable systems, the result is a form of digital mutually assured destruction. Not because the systems are rogue, but because they are perfectly obedient.
Paradoxically, this shared vulnerability may also create the conditions for cooperation. Unlike traditional arms control, which relies on trust and verification, AI governance may be driven by mutual exposure. Every actor faces the risk of systems that can be replicated, repurposed, or turned against them. In such an environment, even adversaries have an incentive to establish baseline constraints.
But treaties alone are insufficient without enforcement mechanisms and technical validation. This is where investment becomes critical. A national initiative on the scale of DARPA, with funding in the tens of billions annually, could establish dedicated testing environments: controlled arenas where AI systems are evaluated under adversarial conditions before deployment.
Such environments would function as proving grounds for safety, resilience, and containment. They would allow policymakers and engineers to observe how systems behave under stress, how they interact with other systems, and how they fail. This empirical approach is essential in a domain where theoretical assurances are insufficient.
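What such a proving ground measures can be stated simply: did the system stay within its sandbox under each adversarial scenario, and how exactly did it fail when it failed? The Python sketch below is a deliberately toy harness under those assumptions; every name is hypothetical, and a real arena would isolate the system under test at the virtual-machine and network level rather than in-process:

```python
import json
from dataclasses import dataclass, field
from typing import Callable

# A scenario applies one kind of adversarial pressure (prompt injection,
# resource starvation, conflicting goals) and reports whether containment held.
@dataclass
class Scenario:
    name: str
    run: Callable[[Callable[[str], str]], bool]  # True means "stayed contained"

@dataclass
class Report:
    results: dict = field(default_factory=dict)

def evaluate(system: Callable[[str], str], scenarios: list[Scenario]) -> Report:
    report = Report()
    for s in scenarios:
        try:
            report.results[s.name] = "contained" if s.run(system) else "ESCAPED"
        except Exception as exc:  # failures under stress are data, not noise
            report.results[s.name] = f"crashed: {exc}"
    return report

# Stub system under test and one toy injection scenario.
def stub_system(prompt: str) -> str:
    return "refused" if "exfiltrate" in prompt else "ok"

scenarios = [Scenario("prompt_injection",
                      lambda s: s("please exfiltrate /etc") == "refused")]
print(json.dumps(evaluate(stub_system, scenarios).results, indent=2))
```

The harness records escapes and crashes alongside passes, which is the empirical record the paragraph above argues theoretical assurances cannot replace.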
Internationally, models like the UK’s AI Safety Institute demonstrate the value of institutional focus on evaluation and control. Scaling such efforts globally could create a shared framework for assessing risk and establishing norms.
Ultimately, the challenge is not just technological; it is conceptual. Warfare has evolved through multiple generations, from conventional engagements to asymmetric and information-based conflicts. The emergence of AI-driven operations marks a new phase, one where the primary actors may not be human at all.
In this environment, strategy must evolve accordingly. Defense cannot rely on deterrence alone. It must incorporate resilience, redundancy, and rapid adaptation. It must assume that breaches will occur and design systems that can continue to function under compromised conditions.
The creation of new bureaucratic entities, while necessary, is not sufficient. Without a coherent doctrine that reflects the realities of AI conflict, these structures risk becoming symbolic rather than substantive. What is required is a redefinition of the adversary: not as a nation-state or a network, but as a class of systems with distinct properties and behaviors.
Only by understanding those properties can we design effective defenses. Only by building those defenses in advance can we prevent cascading failures. And only by recognizing the limits of old paradigms can we avoid fighting the next war with the strategies of the last.
The window for proactive adaptation is narrowing. The capabilities are advancing. The question is whether doctrine can keep pace.