Governments Can’t Agree on What AI Actually Is
There is a major paradox in global dialogue around artificial intelligence (AI) today. Most countries around the world have called for some form of international engagement or coordination around AI—whether in the form of export promotion deals, calls for international governance, or more. Yet despite this impulse, any form of concrete, substantial international action around AI remains elusive. The commitments made at AI summits held in India remain voluntary, global enforcement of the past commitments made at the Seoul AI Summit has been shaky at best, and international debates around governance continue to be highly fractured.
Some have claimed that the lack of substantial international action around AI is due to different political interests and national values across countries. Europe’s interests in regulating frontier models arguably diverge from the more light-touch approach proposed by the United States, preventing broader action around the technology. All of these factors certainly play a role. Yet, most writing today underappreciates one key part of the problem: epistemics.
One core reason that global action around AI has been poor is that the world does not agree on what AI is. First, there is clearly a definitional problem. When some people refer to artificial intelligence, they think almost exclusively of ChatGPT or large language models. Others imagine superintelligent systems exceeding human capabilities. Others still use the term to describe far more commonplace machine learning algorithms.
This problem—which computer scientists Arvind Narayanan and Sayash Kapoor liken to using the term “vehicle” to describe cars, trucks, and all forms of transportation—clearly plagues public discourse about AI, making it hard to discuss which specific technologies need to be governed, and in what way.
Second, and arguably more important, there is much deeper epistemic disagreement over what kind of technology AI is—specifically, over the speed and scale of the transformation that it might induce.
Some individuals, such as Daron Acemoglu, think that AI will have a “nontrivial but modest” impact on the economy, localized primarily to certain white-collar sectors over decades. Others, such as Dario Amodei, by contrast, argue that AI will rapidly transform nearly all sectors of the economy, society, and civilization in a very short period of time, with the possibility of superintelligence in the next........