Artificial Intelligence and In Extremis Decision-Making
Optimal decisions made in extreme conditions require effective fast and slow thinking.
Artificial intelligence (AI) may improve the speed and accuracy of decisions made in life-or-death situations.
Using AI under such circumstances also carries significant risks.
Decision-making in high-stakes, dangerous situations is challenging, to say the least. Time pressure, limited information, confusion, fatigue, and mortality salience combine to set the stage for decision-making errors, sometimes with grave consequences. An example is the downing of Iran Air Flight 655 by a missile launched by the USS Vincennes in 1988, resulting in the deaths of all 290 passengers and crew. In a time of heightened tension between the U.S. and Iran, the captain of the Vincennes misidentified the airliner as an incoming hostile aircraft and ordered his crew to shoot it down.
Two Types of Thinking and Decision-Making
Decision-making under these conditions is based on two types of thinking elaborated on by Nobel Laureate Daniel Kahneman. Kahneman distinguishes between System I thinking (fast, automatic, and intuitive) and System II thinking (slower, conscious, and deliberate).[i] The Vincennes incident illustrates an error in System I thinking. Without the luxury of time, the captain had to make a split-second decision based on rapidly evolving information. If the target detected was indeed a hostile aircraft, failure to act could have resulted in damage to or the sinking of his ship and the loss of many of his crew’s lives. Unfortunately, he made the wrong decision, which, in turn, caused the deaths of hundreds of innocent civilians.
System II thinking also influenced the captain’s decision. Thorough knowledge of the current political and military situation and of Navy doctrine on whether and how to respond to a potential attack also contributed to his decision to shoot. System I thinking and decision-making occur in the context of System II thinking and knowledge. Both contributed to the captain’s decision in this case.
In extremis decision-making is not confined to the military. It is common in law enforcement, firefighting, and emergency medicine. In these professions, life-or-death decisions, while guided by doctrine and policy, must be made in a split second.
The traditional way to optimize in extremis decision-making is to combine classroom instruction with field training and on-the-job training to enhance both System I and System II thinking skills. West Point cadets, for example, spend four years studying academic topics (e.g., history and political science) that strengthen their System II thinking. Both prior to and after graduation, they receive extensive and realistic field training to hone System I thinking. Law enforcement officers, firefighters, and medical personnel receive a similar combination of academic and hands-on training intended to improve both types of thinking.
Artificial Intelligence May Aid In Extremis Decision-Making
Given the dramatic consequences of in extremis decisions, artificial intelligence (AI) is emerging as a tool to increase the odds of a good decision and minimize the odds of a bad one. In essence, AI's speed and capacity can transform System II thinking into System I thinking: in a matter of seconds, it can synthesize and integrate vast storehouses of knowledge.
In the case of the Vincennes incident, AI could arguably have processed more quickly and completely the information that may have overloaded the captain’s cognitive capacity. Doing so could have saved innocent lives. Medical doctors are using AI to make more accurate diagnostic and treatment decisions. Law enforcement leaders may employ AI to assign officers more effectively to combat crime. These are just a few examples of how AI may be used to optimize decision-making.
Pitfalls in AI-Aided In Extremis Decision-Making
That said, there are many potential pitfalls to using AI as a decision-making aid in life-or-death situations.
Degree of human control: Currently, most AI decision aids leave the final judgment to a human. Medical doctors review AI input, then render a diagnosis. Similarly, military commanders consider AI input but remain personally responsible for the final decision. This may change as AI systems evolve. Allowing AI to make life-or-death decisions without human oversight is technically possible but ethically questionable.
Cultural differences in AI policy are hugely important. Nations and non-state actors may have vastly different AI doctrines. It will be critical to track not just how potential opposing nations may employ AI, but also how it is used by non-traditional threats. Terrorist organizations may be less constrained by laws of war and more likely to unleash AI-controlled weapons systems in ways that endanger noncombatants or otherwise violate international standards regulating war.
Moral and ethical considerations matter. Artificial intelligence systems must be designed to weigh moral and ethical factors when reaching tactical, operational, or strategic decisions. One can imagine many scenarios in which such a factor must figure into an in extremis decision. Future AI systems should reflect these considerations when reaching autonomous decisions or rendering recommendations to the human in the loop.
Sorting out disinformation and misinformation: This will be a challenge for AI systems. The signal-to-noise ratio of valid information to bad information is often low. To optimize decision-making, AI systems must reliably make this discrimination.
Do not assume AI beneficence. Governments may employ AI in nefarious ways to monitor and control behavior. An authoritarian regime could easily use AI facial recognition tied to other databases to arrest or intimidate those with opposing political views.
Trust in AI will vary. In a recent article in American Psychologist, a team of psychologists showed that trust in AI is shaped by both nationality and occupational context.[ii] People may be more accepting of AI in some contexts (e.g., interpreting a routine MRI scan) than in others (e.g., a law enforcement officer deciding to employ deadly force).
In the end, the question is not whether AI will be used to aid or to make in extremis decisions; its use will spread rapidly and inevitably, in this domain and many others. The question is how militaries, local and federal law enforcement agencies, medical teams, and others who must make life-or-death decisions will employ the technology. Despite its potential benefits, it is not hard to imagine profound misuse of AI for such purposes and under dire circumstances.
Note: The views expressed herein are those of the author and do not reflect the position of the United States Military Academy, the Department of the Army, or the Department of Defense.
[i] Kahneman, Daniel. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
[ii] Dong, Mengchen, Jane Rebecca Conway, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. (2024). “Fears about artificial intelligence across 20 countries and six domains of application.” American Psychologist, 81(1), 53–67. https://doi.org/10.1037/amp0001454
