Is Domestic AI Governance Impactful?
Why the possibility of AI nationalization should change your research focus
The last time I wrote on career prioritization, I clarified, ex post, why I shifted from working on technical AI alignment to AI governance. Here, I’m trying to preempt what might be my next major shift in prioritization.
Re-reading Leopold Aschenbrenner’s “Situational Awareness,” I ran up against a particularly interesting instance of Aschenbrenner’s rhetorical flourishes:
Many plans for “AI governance” are put forth these days, from licensing frontier AI systems to safety standards to a public cloud with a few hundred million in compute for academics. These seem well-intentioned—but to me, it seems like they are making a category error.
I spent the summer researching a single kind of proposal for domestic AI governance. Is that a category error? Here’s (roughly) the case that it is:
AI progress, currently driven by domestic AI labs (e.g., OpenAI, Anthropic), will continue rapidly. We may soon see AI systems nontrivially relevant to national security, warfare, and geopolitics. The progress curve will likely be smooth enough that governments realize this is coming before private companies develop the truly transformative systems. Thus, the first (and therefore all) transformative AI projects will be government-led, in what amounts to a Manhattan Project for AI. Those will be the important systems to govern, so corporate governance won’t be important.
It doesn’t really matter if the “early,” non-transformative AIs developed by labs are irresponsibly governed, because governments will nationalize all AGI projects the moment they become powerful enough to worry about.
Let’s call this the “Napoleonic view,” in reference to Napoleon’s (likely misattributed) quote “Let China sleep, for when she wakes, the world will tremble.”1 It’s a simple view, but, if true, it has significant implications for career prioritization.
Implications of the Napoleonic View
It’s clear that, if one takes this Napoleonic view, domestic governance work should be heavily deprioritized. In particular, it looks much less promising to work on regulations that narrowly apply to private-sector development of transformative AI: any progress would dissolve upon nationalization of the project and the corresponding moratorium on private development. Perhaps the only surviving reason for a Napoleonic to work on domestic regulation is that principles from such work might transfer past the transition. But this seems to have lower leverage than the Napoleonic alternatives, given that domestic regulation is already a relatively popular focus.
So what should the Napoleonic work on? I see a few options.
The most obvious avenue is to work on international governance and coordination. Analogy to other regulated domains suggests that this is significantly underprioritized relative to domestic regulation. And, if the Napoleonic view is right, it’s significantly more important: as the US and China spearhead national AI projects, dangerous competitive pressures expand from domestic to international in scale. Since these racing dynamics are a primary barrier to responsible development (and will be fierce given the national security implications of the technology), their abatement could be incredibly valuable.
Some proposals exist for robust coordination (or at least mutual oversight) of development between nations, but the scale of research is wholly incommensurate with the difficulty of such unprecedented coordination. This avenue has many counts in its favor, but its tractability is unclear: would the US and China really coordinate in the face of such a critical technology?
A second avenue, perhaps for those less optimistic about the prospects of mutual oversight, is to lay the groundwork for the coming transition from industry to state development. A more effective transition could improve the government project in various ways. Greater emphasis on safety work could be encouraged, increasing the resources and staffing devoted to alignment. Development and hiring strategy might hasten the project, increasing the lead of the US and its allies over competing national projects and thereby reducing the dangers of competitive pressures. One concern for this avenue is leverage: if and when the US initiates such a project, hundreds of highly experienced planners will descend on these questions.
A third avenue is work that pokes and prods the sleeping giant, aiming to wake the US government up to the promise of transformative AI more quickly. The sooner such a project starts, the further ahead the US will be in AI development and the weaker the pernicious competitive pressures will be. Work in this avenue might look like model organisms research or public communications.
Or, I suppose, one can take the Aschenbrenner avenue and… start an investment firm to profit off of AI progress? Hmm, maybe we’ll table that one.
Leading Indicators for the Napoleonic View
A leading indicator provides evidence for or against the existence of some phenomenon before it is clear that the phenomenon exists. For example, the number of new startups is a leading indicator of economic health. What leading indicators might tell us, before any government project starts, that we should do work in line with the Napoleonic view?
First, we should look for indicators that the government is waking up to the transformative nature of AI progress. Perhaps the earliest such indication has been the stringent export controls on China, focused on semiconductors relevant to AI development and its proliferation (especially to military and surveillance apparatuses). A more recent example is that “Situational Awareness” is widely read and discussed in prominent D.C. circles. The appearance of additional indicators like these should be tracked and counted in favor of the Napoleonic view.
Second, indicators of short timelines and fast takeoff should accumulate as evidence against the Napoleonic view. Governments are notoriously slow and clunky; if private industry develops AI so quickly that it grows from hardly relevant to fully transformative within a year (or at a time when governments are preoccupied with elections, wars, protests, budgets, and other miscellany), the government may not have time to react. Longer timelines, then, would increase the probability of the project.
Lastly, watch for convincing and useful demonstrations of general AI in military applications. National security dominates everything else on the government’s mind—and rightly so. The military establishment seems to have a slow-rollout plan that is bearish on the applicability of general AI to warfare. If this view changes, the project looks very likely.
By the time I am pursuing my next major research project, I believe a number of these indicators will have resolved one way or the other. For instance, the capabilities of GPT-5 will bear heavily on how many years remain before AI is truly transformative. When I brainstorm for that project, I’ll first consider agendas of a Napoleonic flavor.
1. The connection to this quote is that the US and other powerful governments haven’t yet “woken up” to the potential for AI to transform warfare, the economy, and everything else. Once they do, their nationalized projects will dominate domestic and geopolitical life. Also, the view is somewhat arrogant, rather in the way we think of a person with a Napoleon complex.