On Tuesday morning, the US-China Economic and Security Review Commission (USCC), chartered to “review the national security implications of trade and economic ties between” the US and China, released its 2024 annual report to Congress. The 800-page document includes 32 recommendations. Its top-line recommendation is, as far as I am aware, the first official government recommendation to nationalize AGI development. The Commission recommends:
Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would usurp the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System “DX Rating” to items in the artificial intelligence ecosystem to ensure the project receives national priority.1
This recommendation is a bit bizarre in that it is not argued for, or even explained, in the body of the text. However, there are a number of interesting pieces to dissect here, most notably that it recommends a “Manhattan Project-like” program yet seeks to fulfill this goal primarily by contracting with existing firms and compute providers. That would not amount to full centralization; it looks more like a soft nationalization.
I figure that, once the implications of the technology (e.g., its security needs, transformative nature, and safety issues) become clear, soft nationalization will be seen much less favorably. It is of course unlikely that any plan of this type, if actually enacted, would follow this short recommendation as a literal blueprint. The recommendation is momentous for its broad strokes, not its details.
The release of this report has reinvigorated my desire to know the true likelihood of a centralized AGI project in the United States. I won’t take this report’s release as a huge piece of evidence, primarily because I am not sure how influential this Commission is.2 (While I was writing this, a brief article criticizing the report was published by Garrison Lovely. He seems to think that the Commission is quite influential, but also severely misguided and erroneous. My quick take is that Garrison correctly identifies the report’s hasty treatment of AGI, but focuses too heavily on the unimportant question of whether China is already centralizing its development. Perhaps more on this another time.)
As I have explained before, the likelihood of centralization matters enormously for determining the most effective ways to bring about the safe development of AI. Before new evidence arises (as it is wont to do), I want to hammer out my views on this likelihood to ensure I’m not Blockbuster passing on Netflix.
The Likelihood of Nationalization
What’s the chance of a centralized development project in the US? Let’s start with a very precise definition of the question:
Question: What is the chance that the US government centralizes frontier AI development in a single government-sponsored project before the first year in which the global economic growth rate exceeds 25%?
A few parts of this definition call for explanation. For one, a case in which the USG does not prevent other domestic actors (e.g., the labs) from continuing frontier AI development (i.e., development of the most highly capable general systems) does not count as centralization. That would hardly be taking the situation seriously. Imagine if the USG had started the Manhattan Project while also allowing General Electric to poach researchers, enrich uranium, and develop its own atomic bomb.
Another important point is the economic growth condition. Here I basically want to say “before AI becomes highly transformative.” Many of the questions related to this one on Manifold and other websites instead condition a similar nationalization event on arriving “before AGI.” But this AGI hurdle is typically low or ambiguous, and probably will not line up with the point at which AI is as critical for national security as the bomb was.3
I choose 25% roughly in line with Ajeya Cotra’s marker of transformative AI: approximately a 10x boost to the global economic growth rate, on par with the acceleration caused by the Industrial Revolution.
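To make the arithmetic behind that threshold explicit, here is a minimal back-of-the-envelope sketch. It assumes a baseline global growth rate of roughly 2.5% per year; that baseline is my assumption, since Cotra’s marker is framed as a multiple rather than a fixed figure.

$$ g_{\text{transformative}} \approx 10 \times g_{\text{baseline}} \approx 10 \times 2.5\%/\text{yr} = 25\%/\text{yr} $$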
Let’s take a look at the best versions of the arguments that centralization is likely and that it is unlikely.
Existing predictions and arguments
If you search for questions like ours on Manifold (a popular play-money prediction market website), you will find probabilities around 30-40% with 20-50 traders. Characteristic is this market, which sits at 40% for “Will the US or UK nationalize any frontier AI labs by 2035?” But this market has some arbitrary and demanding resolution criteria. Considering the going rate of this market and others—and what I think are more lenient resolution criteria in the case of our question—I would guess a reasonable Manifold estimate for our question is 40-50%. (I have created exactly this market here. Soon we may be able to see if I was right!) So, from the Manifold angle, the probability looks to be 45%.
I now want to present what I think are my favorite existing arguments for and against centralization being likely. Note that I have spent more time than most people, but not a whole lot of time, looking for these arguments. (I have spent relatively more time evaluating them.)
The best argument that centralization is unlikely
Roughly, the most convincing reason to think that development won’t be centralized is that the US opts for something closer to a “soft nationalization.” Methods for this are discussed at length in this report by Convergence. Here is the authors’ argument:
we argue that existing descriptions of nationalization along the lines of a new Manhattan Project are unrealistic and reductive. The state of the frontier AI industry – with more than $1 trillion in private funding, tens of thousands of participants, and pervasive economic impacts – is unlike nuclear research or any previously nationalized industry… Government consolidation of frontier AI development is legally, politically, and practically unlikely.
There’s also a three-part extension to the argument, roughly: (1) policymakers would see full nationalization as undermining the pace of US AI progress and thus its lead; (2) the corporations involved are huge and politically powerful, and would present legal and political feasibility challenges; and (3) a soft nationalization could achieve the same goals, even with regard to national security.
This vein of argument certainly has its merits. Another version is provided by this post, which argues that the USG is more likely to make extensive use of contracting, given more recent precedent and the size of the private-sector industry. This gels nicely with the implementation details recommended in the USCC report.
Probably my strongest rebuttal to these steelmen of the anti-centralization argument is that they lack imagination about the power of AI. If the successful development of transformative AI ends up as consequential as I think it will (leading to, among other things, >25% annual economic growth), especially from a national security perspective, it seems vanishingly unlikely that a rational government would allow the systems developed after a year of autonomous AI R&D to remain outside the direct and complete control of the executive branch.
The private sector would not need to be entirely centralized, either. The proposed nationalization does not involve the dissolution of Google, Meta, Nvidia, etc.; it simply requires that frontier AI development become a single government initiative. Perhaps all training runs under 10^28 FLOP, for instance, could remain permitted.
In addition, I think these arguments underrate the extraordinary emphasis that the USG places on national security. Perhaps the only remaining bipartisan issue in the US today is a strong distaste for China and its militarization campaign. (But then again, BIS can’t get any damn money to enforce its national security-focused AI chip export controls on China. So who knows.)
The best argument that centralization is likely
I am wary of calling the following argument the last word on the likelihood of centralization. Leopold Aschenbrenner, who has popularized this line of argument, is an extremely rhetorically effective writer, and I worry this clouds my judgment. Nevertheless, I am not aware of a better argument.
The main thrust is this: “I find it an insane proposition that the US government will let a random [San Francisco] startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise… in the next few years, the world will wake up. So too will the national security state. History will make a triumphant return.”4
The state cannot allow a private firm operating within its borders to become more broadly capable than itself. This is antithetical to the American understanding of its nation’s role in the world. We have not spent so many trillions on defense to be usurped by Sam Altman.
The only remaining path that avoids centralization, then, is a scenario in which the government never realizes the implications of the technology until it is too late. In an incredibly fast timelines-and-takeoff scenario, this looks possible. In the mainline, however, Leopold gives us a convincing story for government intervention.
As AI capabilities increase rapidly “on the cusp of AGI, on the cusp of an intelligence explosion, … From the halls of the Pentagon to the backroom Congressional briefings will ring the obvious question, the question on everybody’s minds: do we need an AGI Manhattan Project?”
And we will. The truth is, we will need one. Neither OpenAI, nor Elon Musk, nor Google, nor even Anthropic could ever be capable of managing the shitstorm that is growing a kindly superintelligence.
Though Leopold considers the contracting model (under a “suave orchestration” by the DoD), he acknowledges that by late in the decade “The core AGI research team will move to a secure location… The Project will be on.” And how else could it go? By this point we will have demos showing that the most capable AIs can automate the military logistical apparatus, rapidly research new weapons systems, and launch state-level cyberattacks, inter alia.
And sure, I admit it feels strange to argue with this level of confidence about the function of the USG. I know I should not be quite this confident in my model of how that complex apparatus works. It’s truly an opaque system, with a notorious coordination and execution track record. But how else could it go? If we believe in straight lines on graphs, we must believe the government will step in. If not to save us, then to save itself.
My Conclusion
A rational government would surely nationalize frontier AI development. Perhaps our bureaucracy does not possess this virtue. For all the reasons above, and considering the substantial chance that I’ve got this badly wrong, I’ll settle on a 65% chance that the centralization project happens per the criteria of our question. That leaves a 35% chance to be split between “soft nationalization,” “the labs build superintelligence,” and “I’m terribly wrong about transformative AI.” That 35% seems high to me, but: epistemic humility!
If the probability is actually this high, it’s looking more important than ever to consider whether your impact plan makes sense under a centralized development scenario. It’s worth saying that I think most plans developed with no knowledge of nationalization are unlikely to be effective in this scenario. In a previous post I wrote up some ideas for activities that make sense in such a case, the highlights being:
International governance and coordination research. For example, hardware-enabled mechanisms, global governance schema, or general geopolitical stabilization.
Improve transition to centralized project. For example, communicate need for a safety focus, research hiring and operations strategy, create logistical plans.
Wake up the USG establishment faster. For example, communicate AI progress, create scary and capable demos, expose state-backed general AI efforts in China.
To this list I might also add internal (to the AI safety community) strategy and communications. If this probability is roughly correct, far too many people are working on domestic governance solutions. The secret military AI project won’t watch out for torts.
1. Page 27 of this report.
2. Indeed, this topline recommendation is quite inapposite to the remaining nine “key recommendations” of the report, which are trivialities in comparison.
3. This market is probably the best analog, but it only asks about OpenAI, and its closing date of 2030 means that many with longer timelines will vote down the market. This market and this one are also instructive.
4. https://situational-awareness.ai/the-project/. Emphasis in the original.