
There is a major paradox in the global dialogue around artificial intelligence (AI) today. Most countries have called for some form of international engagement or coordination on AI—whether in the form of export promotion deals, calls for international governance, or something else. Yet despite this impulse, concrete, substantial international action on AI remains elusive. The commitments made at the AI summit held in India remain voluntary, follow-through on the past commitments made at the Seoul AI Summit has been shaky at best, and international debates around governance remain highly fractured.
Some have claimed that the lack of substantial international action around AI stems from divergent political interests and national values across countries. Europe’s interest in regulating frontier models, for instance, arguably diverges from the more light-touch approach proposed by the United States, preventing broader action around the technology. These factors certainly play a role. Yet most writing today underappreciates one key part of the problem: epistemics.
One core reason that global action around AI has been poor is that the world does not agree on what AI is. First, there is clearly a definitional problem. When some people refer to artificial intelligence, they think almost exclusively of ChatGPT or large language models. Others conceptualize AI as superintelligent systems exceeding human capabilities. Others still use the term to describe far more commonplace machine learning algorithms.
This problem—which computer scientists Arvind Narayanan and Sayash Kapoor liken to using the term “vehicle” to describe cars, trucks, and all other forms of transportation—plagues public discourse about AI and makes it hard to discuss which specific technologies need to be governed, and in what way.
Second, and arguably more important, there is much deeper epistemic disagreement over what kind of technology AI is and, specifically, over the speed and scale of the transformation it might induce.
Some individuals, such as the economist Daron Acemoglu, think that AI will have a “nontrivial but modest” impact on the economy, localized primarily to certain white-collar sectors and playing out over decades. Others, such as Anthropic’s Dario Amodei, argue that AI will rapidly transform nearly all sectors of the economy, society, and civilization, with the possibility of superintelligence in the next few years.
These positions reflect a deeper disagreement not about whether AI is transformative—most actors agree that it will be—but about the nature of that transformation, ranging from views of AI as an important technology playing out over decades to views of it as the most impactful technology in history, arriving in a very short period of time.
It is worth noting that agreement or disagreement on these timelines does not imply agreement or disagreement over how good or bad AI will be—indeed, these views can contrast greatly. For example, some who believe that superintelligence is imminent believe it might produce abundance, while others worry about the cybersecurity risks of malicious actors misusing it. These groups disagree over AI’s impact, but their underlying epistemics of the technology’s future progress are similar.
For governments, this epistemic question colors the policies they adopt toward AI. If a government thinks that superintelligence capable of transforming human civilization is imminent, then it has a strong incentive to get closer to whoever controls that superintelligence.
By contrast, if a government believes that AI is likely to be an important but slower-moving, sectoral technology, then it will push to integrate AI across the most important sectors of its economy, much as the United States promoted electrification in rural regions, to harness the technology for national competitiveness. Crucially, it will also want to make sure that it owns some of the power plants—so that no other government can turn off its electricity supply at will. A government’s view of AI thus influences how it approaches AI governance writ large, and especially its dependency on foreign governments.
If we combine these two ideas, we get a structured way to think about how governments view AI and how that view shapes their foreign policy. Imagine a chart with two axes. On one axis is how a country perceives the speed and scale of the transformation induced by AI, ranging from a localized or sectoral impact to a civilization-scale one. (This axis loosely resembles other concurrent frameworks, though we arrived at ours independently and focus on government epistemics, with different endpoints.) On the other axis is how self-sufficient a country perceives its domestic AI capabilities to be—ranging from the perception that it controls and owns its entire domestic industry, from chips to models, to the perception that it is totally dependent on foreign models from U.S. or Chinese firms.
Moving along the two axes yields several broad groupings of countries and their foreign policies. In one grouping—those that believe civilizationally transformative AI is coming soon and that have self-sufficient AI capabilities—are the U.S. frontier laboratories of Anthropic, OpenAI, and Google DeepMind, as well as various key actors in the United States, including some members of the Biden administration and Sen. Bernie Sanders.
These groups think that AI has the potential to be—though is not guaranteed to be—a civilizationally important technology within a relatively short period of time. They suggest that AI might reach artificial general intelligence (AGI)—an AI system whose capabilities match human cognitive performance—or even superintelligence that exceeds human performance within a few years, resulting in radical changes to the global balance of power, across all sectors of the economy, and more. This camp has divisions of its own, such as between those concerned about the need to test advanced AI systems and those optimistic about their impact—but their epistemics of the technology are similar.
More conservative about the speed and scale of AI’s potential are a few other key groups, including some Silicon Valley investors and actors in the government of China (at least according to China watchers such as Jordan Schneider and Kyle Chan). These groups generally believe that AI is unlikely to reach AGI or superintelligence in the short term, but they do not see AI as a technology with only localized impacts, either. Rather, they view AI as a highly important general-purpose technology, akin to a foundational piece of infrastructure, that will diffuse more slowly across the entire economy.
Their strategy—which still chases leading-edge, frontier AI capabilities, as the American frontier labs do—emphasizes speeding up the diffusion of AI capabilities across society. China, for example, seeks to do this through open-source models and state-led efforts such as the “AI Plus” plan.
China is also rapidly building its own agentic economy, with local governments using DeepSeek to proofread documents while consumers race to install OpenClaw on their phones. Of course, these groups are not monoliths—some Chinese labs, such as DeepSeek, still chase AGI. Nor do these groups believe AI is hype; they still view the technology as transformative, just narrower in scope and slower in speed than some American actors expect.
Outside of the United States and China, governments around the world disagree sharply on the speed and scale of the transformation that AI might induce. Some groups, such as various scientists and civil society actors in Europe, reject the idea that civilizationally transformative superintelligence may emerge soon. By contrast, a few governments, including the United Arab Emirates and the former Sunak government in the United Kingdom, think transformative AI capabilities might emerge much more rapidly.
In general, most governments outside the United States and China see the most transformative AI capabilities as emerging more slowly, though they differ in their predictions of how fast AI capabilities will develop and diffuse.
This level of global epistemic divergence immediately makes global coordination challenging, because it prevents the parties in international discussions from agreeing on which benefits and risks of AI should be addressed. Some in the superintelligence camp might want to focus on AI-enabled cyber risks because they think rapidly emerging AI capabilities will give certain actors significant offensive cyber capabilities.
By contrast, those who think AI will develop more slowly may reject those concerns outright and urge the world to focus on near-term harms—France’s AI summit, for example, highlighted the Macron government’s focus on risks such as labor disruption and cultural erasure, in contrast to prior summits, which focused on risks associated with superintelligence.
In short, the underlying divergences in countries’ epistemics of AI make it difficult to develop a shared agenda or set of priorities worth tackling. As a result, the policy recommendations—and even the language—that countries propose for coordination diverge sharply.
As established, however, the epistemics of AI concern not just how governments perceive the arrival of the technology’s capabilities but also how self-sufficient they perceive their own capabilities to be. The United States and China account for some 90 percent of global AI compute, most of the world’s leading models, and more. These countries have more autonomous AI ecosystems—though both have critical dependencies too, such as U.S. dependence on advanced chips from Taiwan.
Outside the United States and China, many governments feel acutely dependent on U.S. and Chinese AI capabilities. When this perceived national dependency combines with a country’s view of the technology, the result is significant divergence in national policy. For example, governments such as the United Arab Emirates, which see transformative AI capabilities as coming more rapidly, have proposed deals including an AI “marriage” with the United States.
Such a proposal might seem confusing at first, but if governments think AI capabilities may be transformative quickly and view themselves as dependent on the United States or China, then they have strong incentives to bandwagon with either power to guarantee themselves access to such capabilities.
By contrast, governments that either see AI capabilities as emerging more slowly or do not care about the strongest frontier capabilities believe they have the time needed to build domestic AI capacity autonomously.
For example, the former head of India’s AI mission explicitly stated that the country did not plan to chase AGI. Such a view would justify the country’s significant investment in domestic AI efforts, including indigenous computing infrastructure, talent development, and sovereign champions such as Sarvam. These efforts may not reach the American or Chinese frontiers overnight, but in India’s eyes, the slower-moving frontier of AI progress permits a strategy that prioritizes autonomy.
In turn, this epistemic perception of self-sufficiency or dependency—and the policy responses it informs—is what impedes further coordination on AI. U.S. and Chinese officials who see their AI ecosystems as largely self-sufficient have limited incentive to unilaterally cede decision-making power over those ecosystems to international bodies beyond bilateral arrangements between the two countries themselves. States seeking to bandwagon with the United States or China, meanwhile, have limited incentives to defect from either government’s view, since doing so would jeopardize the very access they are bandwagoning for.
Only the nations that lack self-sufficient domestic capabilities—and believe that AI will advance too quickly for them to acquire those capabilities—have strong incentives for global coordination, precisely so they can have a say in governing the technology. This is also why these efforts largely fall flat: these governments lack the capabilities today to make their coordination matter. The result is an effectively dying international ecosystem for multilateral coordination on AI.
This analysis makes clear why we see this paradox in global AI discourse today, with governments calling for coordination but unable to agree on it. Epistemic differences over the pace of the technology’s progress and over how autonomously governments can wield AI’s capabilities create an unfavorable incentive structure for any international action.
Until governments narrow these differences, the serious issues that merit at least some level of international coordination—planning for unexpected multi-agent interactions across borders, global technical standards to support agentic economies, and more—will remain out of reach. In this sense, the paradox is not a failure of will; it is the result of incompatible epistemics.
What might break this stalemate? It is hard to say. Perhaps concerns about the proliferation of AI capabilities will push governments to cooperate to prevent misuse by nonstate actors, or new economic developments will expand the number of players needed to finance frontier model development. Until then, the prospects for global AI coordination seem dim—for now.