To the winner of the Artificial General Intelligence race goes … EVERYTHING!
The first nation to field Artificial General Intelligence (AGI) wins everything!
True AGI will be able to rewrite its own code and scale its cognitive capacity at speeds that will leave slow-footed rivals in the dust. In such a contest there is no silver medal—only a collection of also-rans falling increasingly far behind.
What is AGI?
Before we can consider the implications of AGI, we must first define it. According to Google, AGI is an AI that possesses the ability to understand or learn any intellectual task that a human being can. Amazon adds the ability to teach itself and tackle problems it was never explicitly trained to solve. The working assumption inside leading labs, however, is not “average human” but “genius.” That future is close: OpenAI’s recent o3 model scored 136 on Mensa Norway’s IQ test—gifted range.
Take a moment to reflect on what genius-level AIs would mean. Picture thousands of such AI minds, networked, tireless, never distracted, flooding every scientific niche. That reality is already taking shape in biology, physics, and every other area of scientific research. Industry leaders have discarded timetables measured in decades: Google points to 2030, Elon Musk and Anthropic’s Dario Amodei say 2026, Nvidia’s Jensen Huang prefers 2029, and Sam Altman claims OpenAI already knows the recipe. Public perception that an AGI moment might be much closer began shifting in March, when the New York Times alerted the world that people at the highest levels of leading AI firms were anticipating its imminent arrival.
Daniel Kokotajlo’s “AI 2027” scenario offers the most vivid schedule: once an AI sets its own research agenda, it compresses a year of work into a month and then accelerates exponentially until capped only by available compute. Nations, fearing each other’s unboxed systems, choose speed-ups over safety checks, creating a loop of risky acceleration in which capability advances outpace progress in AI alignment, opening the possibility of either an authoritarian AI controlling humanity or civilizational collapse.
Alignment is not hopeless. Mechanistic interpretability, red-team evaluations, and constitutional-style instruction tuning are growing disciplines. Yet caution competes with irresistible incentives: the lab that deploys first reaps the rewards, even if it solves alignment last.
Is AGI Truly Possible?
Human research capacity grows roughly five percent a year. AI cognitive labor is expanding roughly 500 times faster and improving itself at the same time. At that rate the total cognitive power of AI will likely surpass that of humanity in just a few years. Conservative models from OpenAI and Epoch AI suggest global research output could quintuple annually; even if this is off by fifty percent, we are about to compress a century’s worth of research progress into a decade.
Bottlenecks remain. Compute is scarce, data messy, and legacy processes—from law to logistics—must be rebuilt. Before AGI’s full impact can be felt, nearly every commercial, industrial, and administrative process must be upended in an unprecedented orgy of creative destruction. Such gut-wrenching changes will make the impacts of the Industrial Revolution seem mild, as they will be condensed into months and years rather than decades. AGI’s societal impacts will be enormous by any measure, and we have yet to hear from the modern Luddites, who, without a new “social contract,” are sure to man the barricades.
Delays are certainly plausible, but AGI derailment is unlikely. Humans, with brains designed to evade saber-toothed tigers, are woefully unprepared to match research capabilities with AIs optimized for research. Already, a single high-end chip can absorb all human learning in a discipline in hours. Moreover, AI models are already impressing users at Argonne National Laboratory with their ability to come up with “new” ideas, something that until recently remained the preserve of humans. It is difficult to imagine how future Luddites will stand up against such focused intellectual power once it is set upon overcoming bottlenecks.
The Agentic Revolution
Large language models were the gateway drug; autonomous agents are the next wave. Google defines an agent as software pursuing user goals with memory, planning, and autonomy. Tentative steps in this direction have been coming for several years, but Nvidia CEO Jensen Huang believes this is the year the world sees agents “take off.” Once inserted into workflows, these agents can be interactive, helping humans with scientific discovery, customer service, healthcare, education, or personalized support. Alternatively, they can function as autonomous background processes that automate routine tasks, analyze data for insights, optimize processes, or spot potential problems—all with minimal human interaction.
The consulting firm Accenture envisions networks of AI agents with different purposes, ranks, and roles—much like bees in a hive, working separately but toward a common goal. For instance, a Fortune 500 firm typically runs on a thousand enterprise apps mediated by analysts and middle managers. Swap those layers for a mesh of AI agents and the org chart collapses: decision latency shrinks from weeks to minutes, and managers curate objectives rather than supervise clerks, while the agents themselves enhance quality, increase productivity, and reduce costs.
Global Impacts
The Congressional Budget Office, ignoring AI, pegs long-run U.S. growth at two percent—a doubling every 36 years. Even the most pessimistic AI case adds two additional points of growth, turning today’s $28 trillion U.S. economy into $500 trillion within a lifetime. That is about the same rate of growth the Western world experienced at the peak of the Industrial Revolution (1870-1914), which forced gut-wrenching societal change and helped set the stage for two world wars.
At the other end of the spectrum, Microsoft CEO Satya Nadella forecasts ten percent growth, yielding a $57 quadrillion economy by century’s end, while Epoch AI’s 30-percent model produces mind-bending numbers few economists will even print. Such figures are almost incomprehensible, but probably no more so than our current $28 trillion economy would have seemed at the start of the Industrial Revolution.
Assume politics slows adoption: suppose U.S. GDP growth averages only four percent through 2030, then accelerates to ten percent through 2035. America still hits $55 trillion by the mid-2030s and $285 trillion by 2050, erasing annual deficits within five years and the national debt before today’s newborns reach college, assuming Congress refrains from an orgy of new spending.
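The compounding behind these figures is easy to check. A minimal sketch, using the growth rates and dates assumed in the scenario above (the $28 trillion baseline and the rate schedule are the article's assumptions, not official projections):

```python
import math

def project(gdp: float, years: int, rate: float) -> float:
    """Compound GDP (in trillions) forward the given years at an annual rate."""
    return gdp * (1 + rate) ** years

# Scenario from the text: 4% through 2030, then 10% through 2035.
gdp_2030 = project(28.0, 5, 0.04)       # roughly $34 trillion
gdp_2035 = project(gdp_2030, 5, 0.10)   # roughly $55 trillion

def doubling_years(rate: float) -> float:
    """Exact doubling time implied by a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

# doubling_years(0.02) is about 35 years (the CBO's "doubling every 36 years")
# doubling_years(0.10) is about 7.3 years ("doubles every seven years")
```

The same two-line model shows why small rate differences compound into epochal gaps over a generation: each extra point of growth shaves years off every doubling.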
For perspective: the world economy topped $100 trillion only in 2022, after 250 years of industrial growth. AGI compresses that journey into a mortgage cycle, as capital formation doubles every seven years at ten-percent growth, funding trillion-dollar bets on further progress as well as national security.
A Japan that harnesses AGI to reshore manufacturing could double defense outlays without higher taxes, reshaping the Indo-Pacific balance. India’s Anglophone talent pool might join the high-income club, while petro-states watch hydrocarbon rents crater as AGI-guided materials science delivers carbon-free energy. More crucial to the continuance of the Pax Americana into the next century: even if China trails in employing AGI by only five years, it still reaches $80–100 trillion by 2050—staggering, yet two hundred trillion short of a first-mover United States. Europe, hamstrung by regulatory caution, risks comfortable irrelevance.
If, however, Beijing leapfrogs the United States in AGI adoption, even briefly, the tables flip with epochal consequences. An American win in the race to AGI cannot be taken for granted: despite U.S. chip sanctions, Huawei is already testing a chip that reportedly rivals Nvidia’s H100. China is intent on gaining AI supremacy and, through massive investments, is keeping the race close. But this is a race the United States must win; there is no greater national priority than ensuring that AGI is attained, and that the United States gets there first.
The Impact of AGI-Powered Agentic Warfare
Agentic warfare will soon be upon us, and we are only beginning to think through its implications. Dr. Benjamin Jensen has recently laid out some important ideas about how an agentic conflict will unfold, which need not be repeated here. What is crucial to know is that agentic warfare is rapidly approaching, and it is nearly impossible to imagine our legacy military organizations meeting the challenge.
One way to think about the difference between an agentic force versus a legacy military organization is to consider what the Iraqi Army faced in 1991, when it was obliterated in less than 100 hours. In a must-read essay “Situational Awareness: The Decade Ahead,” AI researcher Leopold Aschenbrenner lays out why Operation Desert Storm was so lopsided: “The difference in technology wasn’t godlike or unfathomable, but it was utterly and completely decisive: guided and smart munitions, early versions of stealth, better sensors, better tank scopes (to see farther in the night and in dust storms), better fighter jets, an advantage in reconnaissance…”
Any nation that obtains and keeps an agentic advantage will have the same military advantages against an opponent as the United States possessed during Desert Storm. The situation gets much more interesting and dire once AGI is incorporated into military systems, whereupon the technological difference may indeed seem “godlike and unfathomable.”
Right now, however, we are not moving forward with anything near the required urgency. In January, for instance, DARPA announced the results of a study showing that a single human can effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, feeling overwhelmed only for brief periods. What a wonderful achievement for humanity, and a demonstration of how wrong our current thinking is.
In an AGI-dominated conflict, a single AI agent would control those 100 drones. Thousands of other AI agents would control their own 100-plus-drone swarms, all reporting to super-agents responsible for different missions: guarding the attack drones, planning and executing strikes, keeping the other agents aligned, and so on. There is effectively no limit to AGI’s ability to scale our military forces, as AGI-powered agents can coordinate across domains and theaters, with a human truly necessary only at the top of the pyramid.
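The command pyramid described above can be sketched as a simple tree of tasking agents. Everything here (the class names, the mission split, the drone counts) is a hypothetical illustration of the hierarchy, not a description of any fielded system:

```python
# Illustrative sketch of hierarchical agent tasking: mission-level agents
# fan a single human objective out to swarm-level agents, each of which
# "controls" a batch of drones. All names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SwarmAgent:
    swarm_id: int
    drones: int = 100  # the DARPA study's ~100-vehicle swarm size

    def execute(self, task: str) -> str:
        return f"swarm {self.swarm_id}: {self.drones} drones tasked with '{task}'"

@dataclass
class MissionAgent:
    mission: str                          # e.g., guard, strike, alignment watch
    swarms: list = field(default_factory=list)

    def delegate(self) -> list:
        # Fan the mission out to every subordinate swarm agent.
        return [s.execute(self.mission) for s in self.swarms]

# Two missions x three swarms x 100 drones = 600 drones under one intent.
missions = [
    MissionAgent("guard the attack drones", [SwarmAgent(i) for i in range(3)]),
    MissionAgent("strike planning", [SwarmAgent(i) for i in range(3, 6)]),
]
orders = [line for m in missions for line in m.delegate()]
print(len(orders))  # 6 swarm-level orders
```

The point of the sketch is the ratio: one human intent at the root expands into hundreds of machine-speed taskings, and nothing in the structure prevents adding more layers.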
Once a conflict erupted, the AGI-powered economy would deliver materiel and munitions at an unprecedented rate, as AGI systems autonomously shifted millions of industrial robots toward war production. We would also see superhuman levels of hacking that could paralyze an enemy’s systems in the first moments of a war, possibly making those billions of drones superfluous. Even if some enemy systems were fenced off from such attacks, whatever assault an opponent managed to launch would be futile: we would field new, more powerful WMDs, and our AGI-enabled defenses would be all but impenetrable. In the same essay, Aschenbrenner lays out a scenario that should send chills down the spine of any leader whose nation falls behind in the AGI race:
Improved sensor networks and analysis could locate even the quietest current nuclear submarines (similarly for mobile missile launchers). Millions or billions of mouse-sized autonomous drones, with advances in stealth, could infiltrate behind enemy lines and then surreptitiously locate, sabotage, and decapitate the adversary’s nuclear forces. Improved sensors, targeting, and so on could dramatically improve missile defense; if there is an industrial explosion, robot factories could churn out thousands of interceptors for each opposing missile. This is all accomplished without even considering completely new scientific and technological paradigms.
Conclusion
In case it escaped anyone’s notice, the “third offset” was a miserable failure. We decisively lost, and our potential enemies won: the United States kept building expensive weapons systems while our adversaries built an asymmetric advantage around relatively inexpensive missiles, drones, and cyber capabilities. But there is good news: an AGI-powered economy and military will give us another bite at the apple. If the United States first wins the race to deploy agentic warfare and then couples it with AGI, our past mistakes and failures will become meaningless, as the new AGI paradigm will guarantee freedom, security, and prosperity. This time, failure is not an option.
Dr. James Lacey is a professor of strategic studies at Marine Corps University, where he holds the Horner Chair of War Studies.