
The Fog of AI

06.01.2026

Artificial intelligence is rapidly becoming indispensable to national security decision-making. Militaries around the world already depend on AI models to sift through satellite imagery, assess adversaries’ capabilities, and generate recommendations for when, where, and how force should be deployed. As these systems advance, they promise to reshape how states respond to threats. But advanced AI platforms also threaten to undermine deterrence, which has long provided the foundation of U.S. security strategy.

Effective deterrence depends on a country being credibly able and willing to impose unacceptable harm on an adversary. AI strengthens some of the foundations of that credibility. Better intelligence, faster assessments, and more consistent decision-making can reinforce deterrence by more clearly communicating to adversaries a country’s defense capabilities as well as its apparent resolve to use them. Yet adversaries can also exploit AI to undermine these goals: they can poison the training data of models on which countries rely, thereby altering their output, or launch AI-enabled influence operations to sway the behavior of key officials. In a high-stakes crisis, such manipulation could limit a state’s ability to maintain credible deterrence and distort or even paralyze its leaders’ decision-making.

Consider a crisis scenario in which China has placed sweeping economic sanctions on Taiwan and launched large-scale military drills around the island. U.S. defense officials turn to AI-powered systems to help formulate the U.S. response—unaware that Chinese information operations have already corrupted these systems by poisoning their training data and core inputs. As a result, the models overstate China’s actual capabilities and understate U.S. readiness, producing a skewed assessment that ultimately discourages U.S. mobilization. At the same time, Chinese influence campaigns, boosted by sudden floods of AI-driven fake content across platforms such as Facebook and TikTok, suppress the U.S. public’s support of intervention. Unable to interpret their intelligence and gauge public sentiment accurately, U.S. leaders may then conclude that decisive action is too risky.

China, sensing opportunity, now launches a full blockade of Taiwan and commences drone strikes. It also saturates the island with deepfakes of U.S. officials expressing their willingness to concede Taiwan, fabricated polls showing collapsing U.S. support, and rumors of U.S. abandonment. In this scenario, credible signals from the United States showing that it was inclined to respond might have deterred China from escalating—and might well have been pursued if U.S. officials had not been dissuaded by poisoned AI systems and distorted public sentiment. Instead of strengthening deterrence, AI has undermined U.S. credibility and opened the door to Chinese aggression.

As AI systems become increasingly central to leaders’ decision-making, they could give information warfare a potent new role in coercion and conflict. To bolster deterrence in the AI age, then, policymakers, defense planners, and intelligence agencies must reckon with the ways in which AI models can be weaponized and ensure that digital defenses against these threats are keeping pace. The outcome of future crises may depend on it.

For deterrence to work, an adversary must believe that a defender is both capable of imposing serious costs and resolved to do so if challenged. Some elements of military power are visible, but others—such as certain weapons capabilities, readiness levels, and mobilization capacities—are harder to gauge from the outside. Resolve is even more opaque: only the leaders of a country typically know precisely how willing they are to wage war. Deterrence, therefore, hinges on how effectively a country can credibly signal both its capabilities and its willingness to act.

Costly military actions, such as repositioning forces or raising readiness levels, demonstrate credibility because they require time, resources, and political risk. After a Pakistani militant group launched an attack on the Indian Parliament in 2001, for example, India amassed troops along its border with Pakistan, and by credibly signaling both its ability and determination to act, it deterred further strikes on its soil. The domestic political pressures inherent in democracies can also bolster credibility. Leaders of democracies must answer to their citizens, and making threats only to later back down can result in political backlash. In 1982, for instance, after Argentina seized the Falkland Islands, strong public pressure in the........

© Foreign Affairs