We explore some of the potential paths to AGI and their implications.
In June 2024, ex-OpenAI employee Leopold Aschenbrenner created a stir with his long-form essay “Situational Awareness,” outlining a path to Artificial General Intelligence (AGI) by 2027. In this blog post, we will dissect the key arguments and see how they’ve held up.
In just five years, LLMs have gone from GPT-2 (preschooler) to GPT-4o (college student) and now OpenAI's o-series (early PhD). Given another few years on this trajectory, Aschenbrenner argues, LLMs will perform at the level of human researchers, which is a plausible extrapolation. "The Singularity" is the point at which AI can do the job of an AI researcher and improve itself without human intervention. On this view, the question is not if it will happen but when.
Technological progress has long followed an exponential curve, with computers improving year after year without most of us noticing. We are now hitting the most interesting part of that curve: the point where AI is roughly as good as humans at many tasks. Once AI learns to improve itself, it can transform other domains, such as robotics or quantum computing, by creating new, specialized AI algorithms.
The biggest caveat with Situational Awareness is that predicting AI timelines is all but impossible; technological development is inherently unpredictable. Five years ago, AI researchers focused on reinforcement learning, while LLMs were more of a curiosity. With more research and computing power, LLMs became unexpectedly powerful. The next leap in AI will likely be just as unpredictable.
Aschenbrenner identifies three major bottlenecks in advancing AI: computing power, algorithmic progress, and unhobbling (unlocking hidden model capabilities or integrating models with tools). Of these, computing power is the most predictable: we know that AI models improve with better hardware, but building the next generation of data centers will take years.
Training GPT-4 required approximately 100,000 times more computing power than training GPT-2. As companies race to develop customized hardware and supercomputing clusters, we can expect a similar leap for future models. Five years ago, spending millions of dollars on a single training run was almost unheard of; today, companies happily invest billions.
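To put that scaling claim in perspective, here is a quick back-of-the-envelope sketch. It takes the ~100,000x figure quoted above at face value and uses the public release dates of GPT-2 (2019) and GPT-4 (2023); the resulting per-year growth rate is illustrative, not a forecast:

```python
import math

# Back-of-the-envelope: how fast did training compute grow from GPT-2 to GPT-4?
# Assumes the ~100,000x ratio cited above and a ~4-year gap between releases.
compute_ratio = 100_000           # GPT-4 vs. GPT-2 training compute (as cited)
years = 2023 - 2019               # GPT-2 (2019) to GPT-4 (2023)

total_ooms = math.log10(compute_ratio)   # 5 orders of magnitude
ooms_per_year = total_ooms / years       # ~1.25 OOM per year
growth_per_year = 10 ** ooms_per_year    # ~18x compute per year

print(f"{total_ooms:.1f} OOMs over {years} years -> ~{growth_per_year:.0f}x compute per year")
```

Sustaining roughly an order of magnitude of extra compute every year is what makes the data-center buildout, not the algorithms, the most predictable bottleneck.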
Algorithmic progress and unhobbling are areas of active research, though their contributions to model quality are harder to predict. A fourth bottleneck is access to clean datasets: as more of the internet is scraped and cleaned to train foundation models, the quality and quantity of the remaining data become crucial. Aschenbrenner notes that most of the clean text on the internet has already been scraped, but this overlooks the vast amounts of image and video data, private datasets, and advances in synthetic data generation.
Aschenbrenner extensively compares the development of AGI to the development of the nuclear bomb: both are potentially deadly technologies whose ripple effects fundamentally reshape society. Like the bomb, he argues, the Singularity will arrive over an extremely short period and bring massive societal change.
Security will become a paramount concern. It won't matter if the US is the world leader in AGI if a malicious actor can steal cutting-edge AI models from a private company or research lab. Similar to how nuclear physicists were secluded in Los Alamos during the atomic bomb's development, Aschenbrenner predicts that top AI researchers will be subject to stringent security measures. Key AI breakthroughs may also be kept confidential.
One of his most intriguing predictions is that the US government will nationalize key AI research labs and companies to ensure oversight and regulation. Developing AGI first would grant a government a substantial global advantage, turning AI development into a race that the US is highly motivated to win. This race would also deepen geopolitical fragmentation, much as the nuclear bomb hardened the US-Soviet divide during the Cold War.
China has already emerged as the main contender to the US, with models like DeepSeek R1 and V3 matching or beating OpenAI's o1 and GPT-4o on many benchmarks, all while being developed at a fraction of the cost. As competition with the US intensifies, the AI era will become increasingly chaotic and unpredictable, but it will also be the most exciting age ever for technological progress.