Situational Awareness: Reading the Trendlines to AGI

This essay, written by Tomas Corza, reflects on and interprets Leopold Aschenbrenner’s Situational Awareness: The Decade Ahead. Drawing from the insights in the original work, it distills key ideas about the trajectory of artificial intelligence, the accelerating race toward AGI, and the societal, economic, and geopolitical implications of this transformation.

San Francisco has always been a city where the future arrives first. Today, that future is being shaped by an AI race of unprecedented scale, one that Leopold Aschenbrenner describes in Situational Awareness as a transition from $10 billion compute clusters to $100 billion and soon trillion-dollar clusters, backed by a mobilization of American industrial power not seen in decades (Aschenbrenner, 2024, p. 2). Every transformer on the grid, every long-term energy contract, is being claimed in anticipation of machines that will think and reason at superhuman levels.

The idea is simple: by 2025–2026, AI systems will match or surpass the cognitive abilities of college graduates; by decade’s end, they may exceed human intelligence altogether (Aschenbrenner, 2024, p. 2). This is not an abstract projection; it is an extrapolation of the straight lines that have defined deep learning progress for over a decade. From GPT-2’s preschooler-level grasp of language to GPT-4’s high-school-level reasoning, the field has advanced by scaling compute, refining algorithms, and removing artificial constraints on models (Aschenbrenner, 2024, pp. 10–14).

Aschenbrenner frames this in terms of “counting the OOMs”: orders of magnitude of effective compute. History shows that roughly 0.5 OOMs/year come from raw compute scale-up (fueled by massive investment) and another 0.5 OOMs/year from algorithmic efficiency improvements (Aschenbrenner, 2024, pp. 19–24). Compounded at roughly one OOM per year, the period from 2023 to 2027 suggests another four to five orders of magnitude, up to a 100,000× increase in effective compute, enough for another leap equivalent to the jump from GPT-2 to GPT-4 (Aschenbrenner, 2024, pp. 9–10).
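
To make the compounding concrete, here is a minimal back-of-the-envelope sketch in Python. The rates and time span come from the figures cited above; the constant and function names are my own illustrative choices.

```python
# Back-of-the-envelope: compounding orders of magnitude (OOMs) of effective compute.
# Rates follow the figures cited above: ~0.5 OOMs/year from raw compute scale-up
# and ~0.5 OOMs/year from algorithmic efficiency improvements.

COMPUTE_OOMS_PER_YEAR = 0.5      # raw compute scale-up
ALGORITHMIC_OOMS_PER_YEAR = 0.5  # algorithmic efficiency gains

def effective_compute_multiplier(years: float) -> float:
    """Total multiplier on effective compute after `years` of trend growth."""
    total_ooms = (COMPUTE_OOMS_PER_YEAR + ALGORITHMIC_OOMS_PER_YEAR) * years
    return 10 ** total_ooms

for years in (4, 5):
    print(f"{years} years -> {effective_compute_multiplier(years):,.0f}x")
# 4 years -> 10,000x
# 5 years -> 100,000x
```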

This is not just about bigger models. “Unhobbling” is a third, often underestimated driver: chain-of-thought reasoning, reinforcement learning from human feedback, tool use, scaffolding, and vastly extended context windows all unlock latent capabilities without retraining from scratch (Aschenbrenner, 2024, pp. 30–33). The shift from chatbot to agent (AI systems that onboard like employees, plan multi-week projects, and use external tools) could redefine how we think of “using” AI at all.
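
As a rough illustration of what “unhobbling” looks like in practice, here is a hedged sketch of an agent scaffold combining chain-of-thought prompting with tool use. Everything here is hypothetical: `call_model` and `run_tool` are stand-ins for any LLM API and tool dispatcher, not anything specified in Aschenbrenner’s essay.

```python
# Sketch of "unhobbling" via scaffolding: the same underlying model, wrapped in
# a loop that adds chain-of-thought prompting, a persistent scratchpad, and
# tool use. `call_model` and `run_tool` are hypothetical placeholders.

def call_model(prompt: str) -> str:
    """Hypothetical LLM completion call; replace with a real API client."""
    raise NotImplementedError

def run_tool(request: str) -> str:
    """Hypothetical tool dispatcher (search, code execution, etc.)."""
    raise NotImplementedError

def solve(task: str, max_steps: int = 10) -> str:
    scratchpad: list[str] = []  # prior reasoning and tool results, fed back in
    for _ in range(max_steps):
        # Chain-of-thought: ask for step-by-step reasoning, and show the model
        # its own prior work, much like an employee keeping notes.
        prompt = (
            f"Task: {task}\n"
            "Work so far:\n" + "\n".join(scratchpad) + "\n"
            "Think step by step. To use a tool, reply 'TOOL: <name> <args>'.\n"
            "When finished, reply 'ANSWER: <final answer>'."
        )
        reply = call_model(prompt)
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        # Tool use: run the requested tool and append the result to the notes.
        scratchpad.append(reply)
        scratchpad.append(f"Tool result: {run_tool(reply)}")
    return "No answer within the step budget."
```

The point of the pattern is that none of these gains require retraining: the loop, the scratchpad, and the tools all sit outside the model.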

Yet the road is not without obstacles. The “data wall” looms large: internet-scale datasets are nearly exhausted, especially in high-value domains like code. Overcoming this will require breakthroughs in synthetic data generation, self-play, and reinforcement learning to achieve human-like sample efficiency (Aschenbrenner, 2024, pp. 26–29). If solved, the payoff could be enormous: training exclusively on high-quality data could produce leaps in reasoning power even without additional raw compute.
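
To illustrate the synthetic-data idea at the level of pattern rather than API, here is a hedged generate-and-filter sketch: a model proposes candidate solutions, a verifier keeps only those that check out, and the survivors become new training data. All names here are hypothetical; Aschenbrenner discusses the approach only conceptually.

```python
# Conceptual sketch of synthetic data generation with verification filtering,
# one proposed route past the "data wall". All functions are hypothetical
# placeholders; the generate-and-filter pattern is the point.

def generate_candidates(model, problem: str, n: int = 8) -> list[str]:
    """Sample n candidate solutions from the current model."""
    return [model.sample(f"Solve: {problem}") for _ in range(n)]

def verify(problem: str, candidate: str) -> bool:
    """Check a candidate, e.g. by running unit tests for coding problems."""
    raise NotImplementedError

def build_synthetic_dataset(model, problems: list[str]) -> list[tuple[str, str]]:
    dataset = []
    for problem in problems:
        for candidate in generate_candidates(model, problem):
            if verify(problem, candidate):
                # Only verified, high-quality samples enter the training set,
                # trading raw data volume for data quality.
                dataset.append((problem, candidate))
    return dataset
```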

Aschenbrenner’s warning is implicit but clear: the institutions, companies, and nations that fail to develop situational awareness now will be blindsided. The race is not merely technological but geopolitical, with superintelligence conferring decisive military and economic advantages (Aschenbrenner, 2024, p. 126). Whether the outcome is an all-out race with China, or something far darker, will depend on how this next decade is navigated.

In short, the lesson of Situational Awareness is not optimism or fear; it is clarity. The same clarity that early observers had when they trusted the trendlines and saw GPT-4 coming. In 2024, the numbers are still visible to anyone who chooses to look. The only question is whether we will act on them before the world wakes up.
