
War and Peace in the Age of Artificial Intelligence

18.11.2024

From the recalibration of military strategy to the reconstitution of diplomacy, artificial intelligence will become a key determinant of order in the world. Immune to fear and favor, AI introduces a new possibility of objectivity in strategic decision-making. But that objectivity, harnessed by both the warfighter and the peacemaker, should preserve human subjectivity, which is essential for the responsible exercise of force. AI in war will illuminate the best and worst expressions of humanity. It will serve as the means both to wage war and to end it.

Humanity’s long-standing struggle to constitute itself in ever-more complex arrangements, so that no state gains absolute mastery over others, has achieved the status of a continuous, uninterrupted law of nature. In a world where the major actors are still human—even if equipped with AI to inform, consult, and advise them—countries should still enjoy a degree of stability based on shared norms of conduct, subject to the tunings and adjustments of time.

But if AI emerges as a practically independent political, diplomatic, and military set of entities, that would force the exchange of the age-old balance of power for a new, uncharted disequilibrium. The international concert of nation-states—a tenuous and shifting equilibrium achieved in the last few centuries—has held in part because of the inherent equality of the players. A world of severe asymmetry—for instance, if some states adopted AI at the highest level more readily than others—would be far less predictable. In cases where some humans might face off militarily or diplomatically against a highly AI-enabled state, or against AI itself, humans could struggle to survive, much less compete. Such an intermediate order could witness an internal implosion of societies and an uncontrollable explosion of external conflicts.

Other possibilities abound. Beyond seeking security, humans have long fought wars in pursuit of triumph or in defense of honor. Machines—for now—lack any conception of either triumph or honor. They may never go to war, choosing instead, for instance, immediate, carefully divided transfers of territory based on complex calculations. Or they might—prizing an outcome and deprioritizing individual lives—take actions that spiral into bloody wars of human attrition. In one scenario, our species could emerge so transformed as to avoid entirely the brutality of human conduct. In another, we would become so subjugated by the technology that it would drive us back to a barbaric past.

Many countries are fixated on how to “win the AI race.” In part, that drive is understandable. Culture, history, communication, and perception have conspired to create among today’s major powers a diplomatic situation that fosters insecurity and suspicion on all sides. Leaders believe that an incremental tactical advantage could be decisive in any future conflict, and that AI could offer just that advantage.

If each country wished to maximize its position, then the conditions would be set for a psychological contest among rival military forces and intelligence agencies the likes of which humanity has never faced before. An existential security dilemma awaits. The logical first wish for any human actor coming into possession of superintelligent AI—that is, a hypothetical AI more intelligent than a human—might be to attempt to guarantee that nobody else gains this powerful version of the technology. Any such actor might also reasonably assume by default that its rival, dogged by the same uncertainties and facing the same stakes, would be pondering a similar move.

Short of war, a superintelligent AI could subvert, undermine, and block a competing program. For instance, AI promises both to strengthen conventional computer viruses with unprecedented potency and to disguise them thoroughly. Like the computer worm Stuxnet—the cyberweapon uncovered in 2010 that was thought to have ruined a fifth of Iran’s uranium centrifuges—an AI agent could sabotage a rival’s progress in ways that obfuscate its presence, thereby forcing enemy scientists to chase shadows. With its unique capacity for manipulation of weaknesses in human psychology, an AI could also hijack a rival nation’s media, producing a deluge of synthetic disinformation so alarming as to inspire mass opposition against further progress in that country’s AI capacities.

It will be hard for countries to get a clear sense of where they stand relative to others in the AI race. Already the largest AI models are being trained on secure networks disconnected from the rest of the internet. Some executives believe that AI development will itself sooner or later migrate to impenetrable bunkers whose supercomputers will be powered by nuclear reactors. Data centers are even now being built on the ocean floor. Soon they could be sequestered in orbit around Earth. Corporations or countries might increasingly “go dark,” ceasing to publish AI research not only to avoid enabling malicious actors but also to obscure their own pace of development. To distort the true picture of their progress, others might even deliberately publish misleading research, with AI assisting in the creation of convincing fabrications.

There is a precedent for such scientific subterfuge. In 1942, the Soviet physicist Georgy Flyorov correctly inferred that the United States was building a nuclear bomb…

© Foreign Affairs