Introduction
Reading Time: 10 minutes
Foreword
A few million years ago, something very strange happened.
Through minor genetic tweaks, an ancestor of the modern chimpanzee split into a new line of species: Homo, humans. This new chimp variant was odd in several ways: it learned to stand upright, lost most of its fur, and grew a bigger brain. This bigger brain was not really all that different from that of its chimp cousins, just scaled up by a factor of about three.
If you had seen this odd, half-naked chimp with a brain three times bigger than its cousins’, and had to guess what this new chimp would do, what would you have said?
Maybe you would have expected it to be a bit better at collecting termites, to throw rocks more accurately, or to have more complicated status hierarchies. But that 3x scaled-up chimp ends up building nuclear weapons and going to the moon. Chimps don’t go one third of the way to the moon; they don’t go at all, while humans go all the way.
We still don’t exactly know how or why this happened, but whatever it is that happened, we call the result General Intelligence. It is what has allowed our species to build the magical glowing brick that you are looking at right now to transmit the words of another chimp descendant located halfway across the world to your eyes and brain.
This is crazy.
General Intelligence is what separates human from animal, industrial civilization from chimpanzee band. It probably isn’t a discrete all-or-nothing property, but it sure is suspicious that you go from “not going to the moon at all” to “going all the way to the moon” within a 3x difference in brain size. Things can change quickly with scale.
Our intelligence makes us the masters of the planet. The future of chimpanzees is utterly dependent on what humans want to do with them. If we want to give them infinite food, incredible medicines they can’t hope to understand, and safety from any predators, we can. If we want to keep them in zoos, or hunt them for sport, we can. If we wanted them extinct, their habitats paved over with parking lots and solar cells, we could.
This kind of relationship, of complete domination over another, is the natural balance of power between a much more intelligent creature and a less intelligent one. It’s the kind of power an adult has over a small child, or an owner over their pet. The arrangement may or may not be beneficial to the weaker party, but ultimately, the more intelligent and powerful agent decides the future. A pet doesn’t get a say in whether it gets spayed.
Luckily, there are no other species out there running around that might be even smarter than us.
-
But that is changing.
Currently, the future belongs to humanity, for better or for worse. The planet and stars are ours to do with as we decide. If we want to drown ourselves in pollutants and a warming climate, we can. If we want to annihilate each other in nuclear war, we can. If we want to become responsible stewards of our environment, we can. If we want to build global abundance, limitless energy, interstellar travel, transcendent art and a rule of just law, we can.
If a new, more intelligent species were to appear on Earth, humanity would surrender its choice over what future we want to make manifest. The future would be in the hands of the successor, and humanity would be relegated to a position no more admirable than the one chimpanzees inhabit today.
No such more intelligent species exists today, but one is being built.
Since its inception, the field of artificial intelligence has aspired to construct artificial minds as smart as, and then even smarter than, humans. If the field succeeds, and such systems are built, humanity will no longer be in control of the future; the decisions will be in the hands of the machines.
-
If you don’t do something, it doesn’t happen.
This might seem so obvious it’s barely worth bringing up. Yet, you might be surprised how often people, probably including you, don’t really believe this.
If we want the future to go well, someone needs to make it so. The default state of nature is chaos, competition, and conflict, not peace. Peace is a choice we must strive for, a delicate balance on the edge of entropy that must be lovingly and continuously maintained and strengthened. Good intentions are not enough; peace demands calm, cooperative, and decisive action.
This document is a guide to what is happening with AI, and offers a playbook for nudging the future in the direction you want it to go. It is not a solution, but a guide. A book cannot be a solution; only a person’s actions can be.
What is AI? Who is building it? Why? And is it going to be a future we want? (Spoiler: no.) So much happens every single day in the field of AI, not to speak of geopolitics, that it seems impossible to keep up, or to stay focused on what really matters: What kind of future do we want, for ourselves, and for our children?
We must steady our focus on this, and not let ourselves be distracted by all the noise and demoralizing nihilism pelting down on us from all sides. We need to understand where we want to go, chart a path there, and then walk this path.
If we don’t do something, it doesn’t happen.
-
The default path we are on now is one of ruthless, sociopathic corporations racing toward building the most intelligent, powerful AIs as fast as possible to compete with one another and vie for monopolization and control of both the market and geopolitics. Without intervention, humanity will be summarily outcompeted and relegated to irrelevancy by such machines, as our chimp cousins were by us.
A species of intelligent beings born from the crucible of sociopathic market and military competition will not be one of loving grace, and, for reasons we'll discuss in depth later on, will have far fewer qualms about paving over humanity’s habitat with solar cells and parking lots. Despite humanity’s flaws, we still have a heart, we have love, even for our chimpanzee cousins, somewhere, sometimes. Machines of ruthless competition need not have such hindrances.
And then that’s…it. Story over. Humanity is no more.
There is no one coming to save us. There is no benevolent watcher, no adults in the room, no superhero that will come to save the day. This is not that kind of story. The only thing necessary for the triumph of evil is for good people to do nothing. If you do nothing, evil triumphs, and that’s it.
If you want a better ending for the Human Story, you must create it. Can we forge a good, humanist future, one that is just, prosperous, and leaves humanity sailing into a beautiful twilight, wherever its final destination may lie? Yes. But if you don’t do it, it doesn’t happen.
The path we are on is one of going out with a whimper, not of humanist splendor. It is embarrassing to lose out on all of the future potential of humanity because of the shortsightedness and greed of a few. But it wouldn’t be surprising. A human story if there ever was one.
-
It isn't decided yet whether the Human Story ends here, but it will be decided soon.
We hope you join us in writing a better ending.
- Connor Leahy, October 2024
Overview
In (1) The state of AI today — We do not understand the AI we are building, we contextualize the current state of AI, highlighting recent trends and their potential downstream effects.
The pace of AI progress in the last decade has been extraordinary, driven by a brute-force paradigm — development does not require insight, but data, compute, and money. It has worked so well that many companies have shifted to pursuing AGI as their primary goal. Concerningly, researchers and engineers don’t need to understand how modern AI systems work in order to create them, and AIs have become increasingly powerful, mysterious, and unpredictable. Given this accelerated development, we and many experts anticipate the emergence of uncontrolled AGI in the next few years, leading to catastrophic risks for humanity.
In (2) Intelligence — Intelligence is mechanistic and AGI can be built, we discuss whether it is possible to recreate human-level intelligence in AGI, and conclude that it is.
Intelligence broadly corresponds to the ability to solve intellectual tasks. Observing how humans have massively expanded the range of intellectual tasks they can solve, we conclude that intelligence appears to be mechanistic. The primary arguments against intelligence being automatable thus rely on finding a mechanistic “missing component” that AI cannot replicate. We fail to find any empirically validated missing component, and conclude that intelligence can and will be automated, leading to AGI.
In (3) AI Catastrophe — Current AI research leads to godlike AI, we extrapolate what would happen if humanity created AGI, unfolding the consequences of our current approach to building AI, where AGI leads to an AI takeoff that ends in a catastrophic and permanent loss of control by humanity.
When we develop AGI, all intellectual tasks will be automatable, including software engineering tasks such as AI and machine learning development. Given that AI companies are aggressively pursuing that end, we expect that the creation of AGI will catalyze AI self-improvement, where AI improves the range, power, and efficiency of AI, compounding far beyond humanity’s intelligence. This leads to a system that has developed such advanced powers that it is better described as a “god” from the perspective of humanity. Because this godlike AI will be beyond our control, it will control the future and almost certainly obliterate humanity, not out of spite but out of indifference.
In (4) AI Safety — We are not on track to solve the hard problems of safety, we argue that controlling and directing godlike AI depends on solving AI “alignment,” and that we cannot do so in time.
In order to avoid catastrophe by godlike AI, we must achieve alignment by answering deep technical, moral, and philosophical questions that are more complex than any problem humanity has faced before. These questions are not neat mathematical problems but cross-disciplinary issues that will require major research programs to resolve, at least billions of dollars of investment, and decades of sequential work. Today’s research ignores these challenges, instead focusing on hacks and tricks to correct the most egregious mistakes without really addressing the underlying problems. We dismiss the naive claim that we can wait for AGI to solve this problem. On our current path, we do not expect to solve alignment in time, resulting in annihilation by godlike AI.
In (5) AI Governance — We lack the mechanisms to control technology development, we consider the institutions and mechanisms that would be necessary to steer us off the “default path” and prevent AGI from being built, and argue that these do not exist today.
Lacking a technical safety solution, we must avert the “default path” through policy and governance. Technical actors must be overseen by national regulators to ensure the safe development of AI, but such regulators do not exist. Even if we collectively did want to slow or stop the development of AI technology, the physical (e.g., compute kill-switches) and policy levers to do so do not exist. International stability must be maintained through high-bandwidth communication lines and multinational agreements enforced by international law, but these do not exist. And no single individual, institution, or collective has a comprehensive plan for how to handle AGI. We argue that this lack of effective efforts stems directly from AGI companies, who have captured governance and research efforts and aggressively pushed policies of self-regulation that keep them in control without averting danger.
In (6) The AI Race — The race to AGI is ideological, we explore the history and present day realities of the race to AGI, making sense of the adversarial social dynamics that currently plague the field.
Although the race to AGI often presents itself under an economic or geopolitical mantle, its original motivation is ideological: all relevant actors care about building AGI, be it to bring about utopia, gain power, or build god. Yet the main companies racing to AGI, who want to build it to foster their view of utopia, end up fearing that someone else will beat them to it. This fear in turn creates a dynamic where only the actors willing to compromise on and undermine safety stay in the race. Thus it is not surprising to see AGI companies using the full industry playbook, deploying fear, uncertainty, and doubt (FUD) to capture regulation and research, turning every argument into a justification to race even faster. Since these tactics are currently working for them, we expect them to continue along this line, and thus the race to AGI to accelerate further.
In (7) A good future, if you can keep it, we argue that we must urgently work to avert the “default path” to extinction, and suggest that civic duty is what is needed today to reduce the risk.
The primary way to avoid the default trajectory towards AI extinction risk is to not build AGI, as building it is a point of no return. But to reach this level of control, we must first build global common knowledge of the risks and global communities capable of responding. These do not exist yet, and their absence bottlenecks larger solutions, such as implementing global regulations, stabilizing international governance, and ending the race to AGI. To get there, we must build up from small, local actions that engage existing civic processes and, where these fail, create new ones.
In (8) Outro, we close with a brief message about the challenge ahead of us.