
The Compendium

By Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi. Edited by Rita Sokolova

Humanity faces extinction from AGI.

AI progress is converging on building Artificial General Intelligence: AI systems as intelligent as, or more intelligent than, humanity. Today, ideologically motivated groups are driving an arms race to AGI, backed by Big Tech, and are vying for the support of nations. If these actors succeed in their goal of creating AI that is more powerful than humanity, without the necessary solutions to safety, it is game over for all of us. There is currently no solution to keep AI safe.

In order to do something about these risks, we must understand them fully. 

The Compendium aims to present a coherent worldview explaining the race to AGI and extinction risks, and what to do about them, in a way that is accessible to non-technical readers who have no prior knowledge of AI. You can read it end-to-end or à la carte; each section is standalone.

The Compendium is a living document, and we will update it over time as the landscape changes. We welcome your feedback, which you can provide here or by email.


Introduction

Reading Time: 10 minutes

Foreword

A few million years ago, something very strange happened. 

Through minor genetic tweaks, an ancestor of the modern chimpanzee split into a new line of species: Homo, humans. This new chimp variant was odd in several ways: it learned to stand upright, lost most of its fur, and grew a bigger brain. This bigger brain was not really all that different from that of its chimp cousins, just scaled up by a factor of about three.

If you had seen this odd, half-naked chimp with a brain three times bigger than its cousins', and had to guess what this new chimp would do, what would you have said?

Maybe you would have expected it to be a bit better at collecting termites, throwing rocks more accurately, or navigating more complicated status hierarchies. But that 3x scaled-up chimp ends up building nuclear weapons and going to the moon. Chimps don't go one third of the way to the moon; they go zero of the way. Humans go all the way.

We still don’t exactly know how or why this happened, but whatever it is that happened, we call the result General Intelligence. It is what has allowed our species to build the magical glowing brick that you are looking at right now to transmit the words of another chimp descendant located halfway across the world to your eyes and brain. 

This is crazy.

General Intelligence is what separates human from animal, industrial civilization from chimpanzee band. It probably isn't a discrete, all-or-nothing property, but it sure is suspicious that you go from "never going to the moon" to "going all the way to the moon" within a 3x difference in brain size. Things can change quickly with scale.

Our intelligence makes us the masters of the planet. The future of chimpanzees is utterly dependent on what humans want to do with them. If we want to give them infinite food, incredible medicines they can’t hope to understand, and safety from any predators, we can. If we want to keep them in zoos, or hunt them for sport, we can. If we wanted them extinct, their habitats paved over with parking lots and solar cells, we could.

This kind of relationship, of complete domination over another, is the natural balance of power between a much more intelligent creature and a less intelligent one. It’s the kind of power an adult has over a small child, or an owner over their pet. The arrangement may or may not be beneficial to the weaker party, but ultimately, the more intelligent and powerful agent decides the future. A pet doesn’t get a say in whether they get spayed or not. 

Luckily, there are no other species out there running around that might be even smarter than us.

-

But that is changing.

Currently, the future belongs to humanity, for better or for worse. The planet and stars are ours to do with as we decide. If we want to drown ourselves in pollutants and a warming climate, we can. If we want to annihilate each other in nuclear war, we can. If we want to become responsible stewards of our environment, we can. If we want to build global abundance, limitless energy, interstellar travel, transcendent art and a rule of just law, we can. 

If a new, more intelligent species were to appear on Earth, humanity would surrender its choice over what future we want to make manifest. The future would be in the hands of the successor, and humanity would be relegated to a position no more admirable than the one chimpanzees inhabit today.

No such more intelligent species exists today, but one is being built.

Since its inception, the field of artificial intelligence has aspired to construct artificial minds as smart as, and then even smarter than, humans. If it succeeds, and such systems are built, humanity will no longer be in control of the future; the decisions will be in the hands of the machines.

-

If you don’t do something, it doesn’t happen.

This might seem so obvious it’s barely worth bringing up. Yet, you might be surprised how often people, probably including you, don’t really believe this. 

If we want the future to go well, someone needs to make it so. The default state of nature is chaos, competition, and conflict, not peace. Peace is a choice we must strive for, a delicate balance on the edge of entropy that must be lovingly and continuously maintained and strengthened. Good intentions are not enough: peace demands calm, cooperative, and decisive action.

This document is a guide to what is happening with AI, and offers a playbook for nudging the future into the direction you want it to go. It is not a solution, but a guide. A book cannot be a solution, only a person's actions can.

What is AI? Who is building it? Why? And will the future they build be one we want? (Spoiler: no.) So much is happening every single day in the field of AI, not to speak of geopolitics, that it seems impossible to keep up, or to stay focused on what really matters: what kind of future do we want, for ourselves and for our children?

We must steady our focus on this, and not let ourselves be distracted by all the noise and demoralizing nihilism pelting down on us from all sides. We need to understand where we want to go, chart a path there, and then walk this path. 

If we don’t do something, it doesn’t happen.

-

The default path we are on now is one of ruthless, sociopathic corporations racing toward building the most intelligent, powerful AIs as fast as possible to compete with one another and vie for monopolization and control of both the market and geopolitics. Without intervention, humanity will be summarily outcompeted and relegated to irrelevancy by such machines, as our chimp cousins were by us. 

A species of intelligent beings born from the crucible of sociopathic market and military competition will not be one of loving grace, and, for reasons we'll discuss in depth later on, will have far fewer qualms about paving over humanity’s habitat with solar cells and parking lots. Despite humanity’s flaws, we still have a heart, we have love, even for our chimpanzee cousins, somewhere, sometimes. Machines of ruthless competition need not have such hindrances.

And then that’s…it. Story over. Humanity is no more.

There is no one coming to save us. There is no benevolent watcher, no adults in the room, no superhero that will come to save the day. This is not that kind of story. The only thing necessary for the triumph of evil is for good people to do nothing. If you do nothing, evil triumphs, and that’s it. 

If you want a better ending for the Human Story, you must create it. Can we forge a good, humanist future, one that is just, prosperous, and leaves humanity sailing into a beautiful twilight, wherever its final destination may lie? Yes. But if you don’t do it, it doesn’t happen. 

The path we are on is one of going out with a whimper, not of humanist splendor. It is embarrassing to lose out on all of the future potential of humanity because of the shortsightedness and greed of a few. But it wouldn’t be surprising. A human story if there ever was one.

-

It isn't decided yet whether the Human Story ends here, but it will be decided soon.

We hope you join us in writing a better ending.

- Connor Leahy, October 2024

Summary

The Compendium aims to present a coherent worldview about the extinction risks from AGI – artificial intelligence that exceeds human intelligence – in a way that is accessible to non-technical readers with no prior knowledge about AI. A reader should come away with an understanding of the current landscape, the race to AGI, and its existential stakes. 

AI progress is rapidly converging on building AGI, driven by a brute-force paradigm that is bottlenecked by resources, not insights. Well-resourced, ideologically motivated individuals are driving a corporate race to AGI. They are now backed by Big Tech, and will soon have the support of nations.

People debate whether or not it is possible to build AGI, but most of the discourse is rooted in pseudoscience. Because humanity lacks a formal theory of intelligence, we must operate by the empirical observation that AI capabilities are increasing rapidly, surpassing human benchmarks at an unprecedented pace. 

As more and more human tasks are automated, the gap between artificial and human intelligence shrinks. At the point when AI is able to do all of the tasks a human can do on a computer, it will functionally be AGI, able to conduct the same AI research that we can. Should this happen, AGI will quickly scale to superintelligence, and then to levels so powerful that AI is best described as a god compared to humans. Just as humans have catalyzed the Holocene extinction, these systems will pose an extinction risk to humanity not because they are malicious, but because we will be powerless to control them as they reshape the world, indifferent to our fate.

Coexisting with such powerful AI requires solving some of the most difficult problems that humanity has ever tackled, which demand Nobel-prize-level breakthroughs, billions or trillions of dollars of investment, and progress in fields that resist scientific understanding. We do not have enough time to adequately address these challenges.

Current technical AI safety efforts are not on track to solve this problem, and current AI governance efforts are ill-equipped to stop the race to AGI. Many of these efforts have been co-opted by the very actors racing to AGI, who undermine regulatory efforts, cut corners on safety, and are increasingly stoking nation-state conflict in order to justify racing. 

This race is propelled by the belief that AI will bring extreme power to whoever builds it first, and that the primary quest of our era is to build this technology. To survive, humanity must oppose this ideology and the race to AGI, building global governance that is mature enough to develop technology conscientiously and justly. We are far from achieving this goal, but believe it to be possible. We need your help to get there.

Overview

In (1) The state of AI today — We do not understand the AI we are building, we contextualize the current state of AI, highlighting recent trends and their potential downstream effects.

The pace of AI progress in the last decade has been extraordinary, driven by a brute-force paradigm — development does not require insight, but data, compute, and money. It has worked so well that many companies have shifted to pursuing AGI as their primary goal. Concerningly, researchers and engineers don’t need to understand how modern AI systems work in order to create them, and AIs have become increasingly powerful, mysterious, and unpredictable. Given this accelerated development, we and many experts anticipate the emergence of uncontrolled AGI in the next few years, leading to catastrophic risks for humanity.
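To make the brute-force paradigm concrete: empirical "scaling laws" model an AI system's loss as a smooth function of model size and training data, so capability gains can be bought with resources rather than new ideas. The sketch below is our illustration, not material from The Compendium; the constants are roughly the Chinchilla fits reported by Hoffmann et al. (2022) and should be treated as assumptions.

```python
# A minimal sketch of an empirical neural scaling law (Chinchilla-style).
# Constants are roughly the published fits from Hoffmann et al. (2022);
# they are assumptions here, used only to illustrate the trend.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Training loss as a smooth function of parameter count and data size."""
    E = 1.69                  # irreducible loss floor
    A, alpha = 406.4, 0.34    # contribution of model size
    B, beta = 410.7, 0.28     # contribution of dataset size
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters and data 10x at a time lowers loss predictably,
# with no new scientific insight required: only compute and money.
for scale in (1, 10, 100):
    print(f"{scale:>3}x: loss ~ {predicted_loss(70e9 * scale, 1.4e12 * scale):.2f}")
```

The point of the sketch is the shape of the curve, not the exact numbers: as long as loss keeps falling smoothly with scale, development is bottlenecked by resources, not insights.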

In (2) Intelligence — Intelligence is mechanistic and AGI can be built, we discuss whether it is possible to recreate human-level intelligence in AGI, and conclude that it is. 

Intelligence broadly corresponds to the ability to solve intellectual tasks. Observing how humans have massively increased the range of intellectual tasks they can solve, we conclude that intelligence appears to be mechanistic. The primary arguments against intelligence being automatable thus rely on finding a mechanistic "missing component" that AI cannot replicate. We fail to find any empirically validated missing component, and conclude that intelligence can and will be automated, leading to AGI.

In (3) AI Catastrophe — Current AI research leads to godlike AI, we extrapolate what would happen if humanity created AGI, unfolding the consequences of our current approach to building AI: AGI leads to an AI takeoff that ends in a catastrophic and permanent loss of human control.

When we develop AGI, all intellectual tasks will be automatable, including software engineering tasks such as AI and machine learning development. Given that AI companies are aggressively pursuing that end, we expect the creation of AGI to catalyze AI self-improvement, where AI improves the range, power, and efficiency of AI itself, compounding far beyond humanity's intelligence. This leads to a system with powers so advanced that it is better described as a "god" from the perspective of humanity. Because this godlike AI will be beyond our control, it will control the future and almost certainly obliterate humanity, not out of spite but out of indifference.
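A toy model makes the compounding dynamic vivid. This is our illustration, not a forecast from The Compendium: assume each generation of AI multiplies the research capability that builds the next generation by a constant factor.

```python
# Toy model of recursive self-improvement (an illustrative assumption,
# not a forecast): each AI generation multiplies by a factor r the
# research capability used to build the next generation.

def capability_after(generations: int, r: float, start: float = 1.0) -> float:
    """Capability relative to the starting system after n generations."""
    cap = start
    for _ in range(generations):
        cap *= r   # each generation compounds on the last
    return cap

# Even a modest 20% gain per generation compounds past 200x in 30 steps.
for n in (1, 10, 30):
    print(f"after {n:>2} generations: {capability_after(n, r=1.2):.1f}x")
```

The specific numbers are arbitrary; the argument only requires that improvement compounds, because any compounding process eventually leaves its starting point, and humanity, far behind.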

In (4) AI Safety — We are not on track to solve the hard problems of safety, we argue that controlling and directing godlike AI depends on solving AI “alignment,” and that we cannot do so in time.

In order to avoid catastrophe by godlike AI, we must achieve alignment by answering deep technical, moral, and philosophical questions that are more complex than any problem humanity has faced before. These questions are not neat mathematical problems but cross-disciplinary issues that will require major research programs to resolve, at least billions of dollars of investment, and decades of sequential work. Today’s research ignores these challenges, instead focusing on hacks and tricks to correct the most egregious mistakes without really addressing the underlying problems. We dismiss the naive claim that we can wait for AGI to solve this problem. On our current path, we do not expect to solve alignment in time, resulting in annihilation by godlike AI.

In (5) AI Governance — We lack the mechanisms to control technology development, we consider the institutions and mechanisms that would be necessary to steer us off the “default path” and prevent AGI from being built, and argue that these do not exist today. 

Lacking a technical safety solution, we must avert the "default path" through policy and governance means. Technical actors must be overseen by national regulators to ensure the safe development of AI, but such regulators do not exist. Even if we collectively did want to slow or stop the development of AI technology, the physical levers (e.g., compute kill switches) and policy levers to do so do not exist. International stability must be maintained through high-bandwidth communication lines and multinational agreements enforced by international law, but these do not exist. And no single individual, institution, or collective has a comprehensive plan for how to handle AGI. We argue that this lack of effective effort stems directly from the AGI companies, which have captured governance and research efforts and aggressively pushed policies of self-regulation that keep them in control without averting danger.

In (6) The AI Race — The race to AGI is ideological, we explore the history and present day realities of the race to AGI, making sense of the adversarial social dynamics that currently plague the field.

Although the race to AGI often presents itself under an economic or geopolitical mantle, its original motivation is ideological: all relevant actors care about building AGI, be it to bring about utopia, gain power, or build god. Yet the main companies racing to AGI, each wanting to build it to foster their own vision of utopia, end up fearing that someone else will beat them to it. This fear in turn creates a dynamic where only the actors willing to compromise on and undermine safety stay in the race. It is thus not surprising that AGI companies deploy the full industry playbook, spreading fear, uncertainty, and doubt (FUD) to capture regulation and research, and turning every argument into a justification to race even faster. Since these tactics are currently working for them, we expect them to continue along this line, and the race to AGI to accelerate further.

In (7) A good future, if you can keep it, we argue that we must urgently work to avert the “default path” to extinction, and suggest that civic duty is what is needed today to reduce the risk. 

The primary way to avoid the default trajectory toward AI extinction is to not build AGI, since building it is a point of no return. But to reach this level of control, we must first build global common knowledge of the risks and global communities capable of responding. These do not exist yet, and their absence bottlenecks larger solutions such as implementing global regulations, stabilizing international governance, and ending the race to AGI. To get there, we must build up from small, local actions that engage existing civic processes and, where these fail, create new ones.

In (8) Outro, we close with a brief message about the challenge ahead of us.