A good future, if you can keep it 20

We must find a way to avert extinction by AI and put humanity in full control of its technological development. This Compendium aims to be a tactical guide. Having mapped the historical trajectory, the current landscape, and the challenges ahead, we can now consider effective interventions.

The prerequisite for any global solution is a shared understanding of the issues. We need to generate civic engagement, build informed community opposition to the AGI race, and make the catastrophic risks from AI common knowledge. Human extinction concerns all of us, and a solution must eventually compound into global governance that can build technology deliberately and justly.

If you’d like to participate, we’d love to have you, and if you’d like to discuss what you can do more directly, please join us alongside the nonprofit ControlAI here.

In Civic duty is the foundation of a response to AGI risk, we challenge the idea that the extraordinary risks from AGI require extraordinary solutions. Instead, we propose that what is needed today is basic civic engagement from concerned individuals and a whole lot of mundane (but important) work. This is not easy, as the civics process around technology has been undermined by Big Tech.   

In Creating a vision and a plan for a good future, we encourage readers to think critically about the future they want and how to get there. We then outline our high-level vision for a technologically mature society and a “Just Process” for making civilization-wide decisions about risks like AGI.

In Actions that help reduce AGI risk, we propose a “bootstrapping” process to get involved immediately and learn the necessary skills to contribute. We then outline practical actions to improve AI safety communication, coordination, civics, and technical caution.

Civic duty is the foundation of a response to AGI risk

The race to AGI is not an isolated incident, but representative of a broader societal tension between rapid technical progress and our inability to coordinate effectively to manage its consequences. While technological advancements unlock immense potential, the last 20 years have exhibited major failures in our ability to direct these innovations toward the collective good and manage their externalities: we are witnessing environmental collapse, increased partisanship and the degradation of our information sources, mental health epidemics, and trillion-dollar companies that profit off of stealing human attention.

Today’s landscape is a product of Big Tech’s coup: technology was built faster than governments could regulate, propelled by a doctrine of fear (“if you regulate us, the economy will die”), uncertainty (“we can’t regulate this tech now because the future is uncertain”), and doubt (“governments are too unfamiliar with the tech to make prudent decisions on how to regulate”). Big Tech attacked both regulatory legislation and the legislative process itself, building a public ideology that law, governments, and public institutions are intrinsically bad, all while embedding lobbyists and allies into governments to capture power and become vital to intelligence, defense, and other public projects.

This coup has led people to believe that issues with technology are someone else’s problem–governments, NGOs, the UN, whomever–and that individuals are powerless to intervene on new technical questions. 

The sense of powerlessness has spilled over to AI: key decisions are made by a handful of AGI companies that have partnered up with Big Tech and captured technical safety and governance efforts. AI progress is moving so fast that newcomers have a hard time making sense of the race. And the actions that are necessary to get to a good future–such as building more stable global governance–appear far out of reach of ordinary people. Assessing all of these, concerned individuals may feel like it is hopeless to contribute. 

It is a mistake to read the situation as hopeless; doing so plays into the hands of the actors driving the race.

Although we are in an emergency, the work to stop AGI today does not require extraordinary heroics; it is an exercise of our basic civic duty.

Public concern for climate change was activated by the collective efforts of a handful of informed citizens who cared, took the time to get informed, and slowly chipped away at Big Oil’s ability to obfuscate the problem. We can learn from that playbook to mitigate AGI risk by educating the public and encouraging simple actions like talking to friends about the issues, writing on social media, and contacting local representatives. As public perception of the risks strengthens, it will catalyze other interventions like creating AI Safety Institutes, public statements, international dialogues, protests, tracking integrity incidents of labs, formalizing boundaries for AI behavior, upskilling programs on AI risk, educational videos, and more. 

Most people will not and cannot dedicate their lives to working against the threat of AGI or similarly grand problems, and this is good. The world we care to protect is not the world in which everyone is single-mindedly tackling humanity’s immediate priority; it is one in which people enjoy their lives as civilians, not soldiers. 

But proactive involvement from more people is necessary. While “civics” isn’t the entire answer to the problem, it is the foundation. We humans have gotten ourselves into this predicament, and now it’s on us to do the work to get out.  

Creating a vision and a plan for a good future

The first step to shaping the future is defining a plan for dealing with the risks of AGI and getting to the kind of future we want to live in.

There are no adults in the room writing this plan for us. Consider today’s most sophisticated AI legislation, the EU AI Act. Although it acknowledges “systemic risk,” it does little to manage it. AGI companies are only required to evaluate the capabilities of their models, report training information, and ensure a baseline level of cybersecurity. This does not give humanity a roadmap to a good future.

A more comprehensive proposal is offered by A Narrow Path, which assumes a worldview similar to our own and investigates how to prevent superintelligence development for 20 years. Its authors consider how to prohibit each of the risk vectors that could lead to superintelligence, such as AIs improving AIs, AIs capable of breaking out of their environments, unbounded AIs, and AIs with vast general intelligence. They then consider what is necessary to enforce these prohibitions, such as strong regulation, physical kill-switches, and national regulators to monitor AI usage. Finally, they turn to balancing the international situation, proposing an international treaty with a judicial arm and new international institutions as a way to get to stable global governance capable of preventing rogue actors from building superintelligence.

We endorse A Narrow Path’s proposal in full, and recommend that you read it. 

However, it is most valuable to write your own plan first. We mean this literally: open a new document, name your goal, and bullet point the actions necessary to get there. 

Your plan does not need to be comprehensive or perfect, but the exercise prompts you to make your current point of view explicit and reckon with the challenges ahead. Considering even the crudest strategies to avoid AI risk (e.g. “turn off all datacenters!”) immediately raises thorny issues: how is this going to be implemented? Is it really sufficient? What if there are bad actors?

A good plan:

  • Articulates a vision for the future you actually want to live in.

  • Adequately grapples with the risks from superintelligence. A Narrow Path is a comprehensive plan to do this, but it is not the only way. As you develop your own strategy, you will likely find other (or better!) ways to control AI.

  • Defines actionable steps. This is where individual plans diverge: only you can author your next steps. A Narrow Path is not by itself actionable; a lawmaker could read it and decide to pursue oversight and meaningful legislation, but they would still need to write bills themselves, push them through their jurisdictions, ensure they’re not vetoed at the last minute, and so on. Your plan must define a specific list of actions to take and a way to evaluate roadblocks. You are the expert on your local situation, and civic engagement is built from local engagement.

To develop your plan, talk with friends, engage in public discourse, read and consider alternative perspectives, and generally keep working to refine your worldview. But these are all things you can do over time, and they don’t need to block you from writing a first plan.

If you’ve envisioned the future you want and written an initial plan, you’ve made it further than almost anyone else in the world on this subject and are ready to take action. 

The authors’ plan

We the authors–Connor, Gabe, Chris, Andrea, Adam–make plans the same way. What we share below is a loose sketch of our plan and how we decide which actions to take.

Each of us has different values and visions, and we do not claim to know what is best for humanity or what the right future is. But we agree that we should not aim for a specific utopia, but rather for a “Just Process” that aims to determine what the right future is and enables humanity to work together to reach it.

Today, humanity can build technology powerful enough to end civilization, yet we lack the ability to collectively steer this progress in a safe direction. The extinction risk posed by AGI is the ultimate expression of this imbalance. Despite widespread acknowledgment of the dangers, no single person or institution possesses the capabilities, authority, and clarity to prevent these developments. And yet, individuals with visions of utopia can expose everyone else on the planet to monumental risk by racing toward AGI. 

We do not claim to know what is just, but we are confident that the current state of affairs is unjust. We want to escape this out-of-control development and build a Just Process that enables humanity to consciously choose its fate. 

We work backwards from this loose vision of a future to arrive at a plan.

  • We can’t get to a Just Process without a global solution.

  • We can’t get to a global solution without improvements in coordination, science, and moral philosophy; many of the hard problems of alignment are the same problems humans need to solve to figure out how to work together.

  • These challenges will take many years to solve, time which we do not have due to the race to AGI and the risk of extinction.

  • We must therefore buy time, slowing or stopping AGI development as soon as possible.

  • The only way to do so is through governance, and we endorse the proposal offered by A Narrow Path.

  • Building institutions that can regulate AI globally comes with challenges: nation-state competition threatens global cooperation; AGI companies are now fueling geopolitical arms races; and Big Tech lobbying has made it extremely difficult to pass tech regulation and has captured governance efforts. Coordinating most of the existing AI safety actors is futile, as their underlying motivations endorse racing to AGI. 21

This brings us to the current bottlenecks, namely the lack of public consensus around the risks from AGI, the lack of an AI safety ecosystem free from AGI companies’ influence, and the lack of coordination among current actors concerned with halting the race to AGI. This was the motivation for writing this document: an attempt to articulate a worldview around the risks from AGI clearly enough that we can start to build coordination among those who see the situation similarly. 

This is an example of how to connect a high-level vision to a broad plan, and then to specific actions.

Actions that help reduce AGI risk

Helpful actions to reduce AGI risk derive from a good plan. If you’ve written your first plan, you’ve got all you need to start. Your plan may not be good to begin with, but taking action, reflecting, and iterating on your plan is the best way to improve it. 

Artificial intelligence is a technical subject, and some existing guides on getting involved in AI safety recommend learning more about the technology, taking AI safety courses, or planning for a career in technical AI safety or governance. These approaches are fine if you have the interest and ability to pursue them, but they are not necessary. What is necessary is a set of basic working habits that let you contribute reliably:

  • Write things down. What isn’t written down doesn’t exist. Your mind is fallible and forgetful; put as much of it in writing as possible so you can rely on it and iterate on it later.

  • Think about what you do. Your mind is your most important tool. To strengthen it, you need to reflect on how you work: your motivations, what you’ve learned, what your next plans are, and so on.

  • Keep things grounded, especially for intellectual labor. It’s difficult to detect progress without getting feedback from reality. The best way to practice is usually to do.

  • Keep reasonable habits. Spend time with your friends and family, eat healthy, get enough sleep, touch grass. When faced with enormous challenges, it can be tempting to sacrifice everything in your life to struggle against them. This is unproductive and self-destructive. It’s a marathon, not a sprint. Keep that day job. It’s much better to work with someone who sustainably gives 10% than with someone who gives 110% and then burns out. If thinking about the risks becomes overwhelming, consider reading about mental health and AI alignment, talking to a professional, and taking a break.

To make consistent and useful progress, on both our projects and ourselves, we must be capable of contributing reliably and independently. 

So let’s dive in: how can you make your plan to reduce risk from AGI actionable, particularly if the goals are so grand as to demand solutions humanity has not yet come up with? Below, we argue that communication, coordination, civics, and technical caution are necessary to reach a broader solution, and suggest shovel-ready work you could do to help address today’s bottlenecks.

Communication

One of the simplest things you can do to contribute is to communicate publicly about the risks of AGI. From posting on social media to writing in a local newspaper, thoughtful opinions that add to common knowledge of the risks are helpful.

Common knowledge establishes a basis for working together to solve problems. Before collaborating, people must agree on what the problem is. To address the risks from AGI, society must agree not only on what the problem is, but also that the risks are real, imminent, and time-sensitive.

Certain problems can only be solved when there is known consensus. Consider a town election where three candidates are running for office. Alice and Bob are well-known candidates, but are both terrible choices for office, and Charlie is a newcomer and is excellent. Individually, everyone in town wants Charlie to win, but they don’t know that everyone else feels the same way and worry that their vote would be wasted. In this hypothetical, the entire town may want Charlie to win the election, but he may not be voted in because there is a lack of common knowledge. Public statements will help Charlie win: more people need to declare their support so that everyone knows that there is a consensus and that their vote for Charlie would be meaningful.
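For readers who like to see the mechanic spelled out, here is a minimal toy simulation in Python. It is purely illustrative: the town size, belief levels, and voting threshold are made-up assumptions, not data, and the point is only that the same electorate produces opposite outcomes depending on whether Charlie’s support is private belief or common knowledge.

    # Toy model of the three-candidate election described above.
    # All numbers (town size, beliefs, threshold) are illustrative assumptions.
    import random

    random.seed(0)

    N_VOTERS = 1000
    THRESHOLD = 0.5  # a voter backs Charlie only if they expect at least half the town to do so


    def vote(believed_charlie_support: float) -> str:
        """Every voter privately prefers Charlie, but votes strategically:
        if they expect Charlie to lose, they fall back to a front-runner."""
        if believed_charlie_support >= THRESHOLD:
            return "Charlie"
        return random.choice(["Alice", "Bob"])  # "don't waste my vote"


    def run_election(believed_support: float) -> dict:
        tally = {"Alice": 0, "Bob": 0, "Charlie": 0}
        for _ in range(N_VOTERS):
            tally[vote(believed_support)] += 1
        return tally


    # Without common knowledge: everyone assumes only ~10% of others back Charlie.
    print("private beliefs: ", run_election(believed_support=0.10))

    # With common knowledge (e.g. a published poll): everyone knows ~90% back Charlie.
    print("public knowledge:", run_election(believed_support=0.90))

In the first run Charlie gets no votes even though every voter prefers him; in the second, once support is common knowledge, he wins in a landslide. Nothing about individual preferences changed, only what everyone knows about everyone else.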

Common knowledge is the way that a group changes its mind. Without common knowledge of AGI risks, collective concern could still result in inaction because the subject is considered unpopular. And indeed, this partially explains the current landscape: the race to AGI continues in full force, even though polling suggests that people overwhelmingly “worry about risks from AI, favor regulations, and don’t trust companies to police themselves.” We need to convince humanity to collaborate on this problem. 

Humanity can solve a great number of issues, but only the ones that it is paying attention to. Communication also drives saliency, making ideas noticeable and prominent.

On an individual level, most people do not think about most things. There are simply too many things to pay attention to, and it can be hard to decide which of the many critical issues of geopolitical importance deserve attention over the very real and tangible challenges in one's own personal life. Without reminders and exposure, especially from peers, it’s easy to ignore an issue.

Saliency is a scarce resource. This is why advertising works, and also why it is harmful: it pulls individual and group attention away from prosocial ideas and toward meaningless ones. And this is why the communications and lobbying strategy of Big Tech is to distract and delay. Scattering attention or dragging out a legal case reduces saliency as people’s attention wanes, and makes it harder for meaningful intervention to come together.

The type of communication that is needed today is the type that cuts through distraction and makes the risks of AGI clear, commonly known, and salient. This is necessary to convince humanity to act.

-

The core message to communicate is that racing to AGI is unethical and dangerous, and that the actors doing this are harmful to society. Humanity’s default response to risks of this magnitude should be caution. Today, the default is to allow private companies to keep racing until there is a problem, which is as untenable as allowing private companies to build nuclear reactors unchecked until one melts down.

This message needs to go hand in hand with mature discussion about the real risks of AGI and superintelligence. Because actors like AGI companies and Big Tech have strong incentives to build AGI, public communications are a narrative battle. These actors will continue to use a playbook that downplays or obscures the risks and makes their inventions seem socially beneficial and harmless, while lobbying against any regulation that slows them down. Challenging these tactics requires understanding this strategy and ensuring that companies racing to AGI have their feet held to the fire.

Communications help raise common knowledge and improve the quality of the debate. Here are some actions you could take today:

  • Share a link to this Compendium online or with friends, and provide your feedback on which ideas are correct and which are unconvincing. This is a living document, and your suggestions will shape our arguments.

  • Post your views on AGI risk to social media, explaining why you believe it to be a legitimate problem (or not).

  • Red-team companies’ plans to deal with AI risk, and call them out publicly if they do not have a legible plan. 

  • Find and follow 20 social media accounts that discuss risks from AGI. Regularly engage with this content, sharing and debating it.

  • Push back against the race to AGI when you hear people advocate for it, engaging in productive debate and avoiding ad hominem.

  • Produce content about AGI risk, like a video, meme, short story, game, or piece of art. If you do this more routinely, consider how your audience engages with the content and work to deepen viewers’ understanding over time.

  • Write an opinion piece for a local newspaper, or an op-ed for a larger publication. Highlight the battle lines of argumentation – where do the risks seem genuine, and which ideas demand more debate?

  • Create websites that discuss the risks or help collect important information about the race to AGI, such as websites that:

    • Quote what leaders of AGI companies have said about the risks of AGI.

    • Quote what politicians have said about the risks of AGI.

    • Detail which AI capabilities currently exist and how fast they are developing. Visualizations and charts or explanations that can be shared and built upon by others are especially useful.

    • Explain the history of the race to AGI.

    • Track lobbying efforts of Big Tech and which issues they are paying attention to.

    • Offer basic explanations of the risks of AGI to different audiences, such as artists, youth, religious groups, and so on. 

    • Collect public opinions on AI, or offer platforms for individuals to voice their concerns.

  • Talk to your friends about AGI risks, and write down what they say. Track argumentation and aim to improve the quality of your thinking and theirs on the subject. 

  • Organize a local learning group or event, like a discussion or town hall, to bring people together to talk about the risks and what can be done.

Communications compound. Common knowledge is built with small, consistent communications. Consider deliberately scheduling weekly time to learn and share about AI risks.

There are, of course, wider actions that can be taken here. If you have the resources and skills, you could create campaigning organizations that spread awareness of superintelligence risks, or start strategic communications organizations that draw attention to AI risks around particular key events, such as AI summits or key legislation developments. For those with these kinds of affordances who are interested in helping, please reach out to us. 

Coordination

Communication about the risks from racing to AGI is an essential first step for improving the situation, but it is insufficient. Even if most people agree that there is a problem and much should be done, working individually will not get us to a global solution.

Group coordination is necessary to succeed, but it is non-trivial. There are many ways this has already failed around AI safety. The most extreme historical cases involve cult dynamics and communities that accidentally kick-started the race to AGI (see Section 6 on how early Singularitarians led to the formation of DeepMind, OpenAI, and Anthropic).

We need strong community builders and communities. These communities cannot make excuses for actors racing to AGI just because they are friends, as is the case with existing AI safety communities. They must instead stay laser-focused on ending the race to AGI, providing a written plan for a safe future, ideally a public one that can be improved over time.

And most importantly, these communities must become mainstream. Human extinction concerns all of us, and any issue of that scale requires involving many, many people. These groups must communicate in a way that is legible to people who do not share their very specific cultural background and teach the relevant technical concepts to a wide audience in a non-jargony way. They must engage with political parties, civil society, and other institutions. 

If one economist learns about a method that will stop an impending financial crisis, she can’t immediately stop the economy from crashing. There are many, many steps necessary to move between her idea and a wider solution: she must convince members of government with the authority to control financial matters; they must run calculations to assess if the method is accurate; a bill may need to be written and passed as new legislation; the treasury may need to print money; financial institutions may need to change their policies, and so on.

There is no silver bullet. We need communities to try many different strategies. And these communities must be ready to not just talk about the issues, but also engage with institutions and improve them until they are competent enough to deal with the risks.

We do not believe any such community, one with the perspective and resources to help, exists yet. We have not built it, partially for lack of focus (we are writing the Compendium and working on technical projects), and partially for lack of skill: we have tried small attempts at community building on the side and been unsuccessful. We have a small effort alongside ControlAI that you are welcome to join and participate in (https://discord.com/invite/VhaSSvtj), but the kind of global community that is needed is much stronger and better established than these local efforts.

What is needed until a global community exists is to lay the foundation for one to arise. Start small, find like-minded and concerned collaborators, work alongside them, and find reliable ways to keep collaborating:

  • Discuss the risks with concerned or open-minded friends and family members. Suggest that they also write actionable plans for how to get involved, and regularly get together to make progress on these plans. This is the most local community you could form, and getting experience working alongside others on the problem is critical.

  • Share what you are working on publicly, including how you work and how others can work with you. 

  • Learn about the groups working on AI safety and get involved. Join online communities and Discord groups that take the risks from AGI seriously, such as ControlAI or PauseAI, and get involved with their activity and improve their projects.

  • Compare strategies between individuals and groups you’re in contact with and see if there are shared projects that make sense to pursue. 

  • Connect existing groups or organizations working on reducing the risks from AGI. Create group chats or other communications channels, and work on finding projects that make sense to pursue across multiple organizations (e.g. a communications plan that shares common messaging). 

  • Participate in events about AI extinction risks such as local discussion groups, AI summits, and so on. 

  • Take part in upskilling programs where you can meet people who care about these risks.

  • Come up with concrete projects that could form the basis of collaboration with others.

These are proto-community-building efforts, designed to get more people involved in direct contributions to stopping the race to AGI. 

If you have the resources to get much more involved, such as a position of authority or a company with resources, then explore operating at a higher scale. Consider how to get others involved, which groups you believe are working on the problem well and how to work alongside them, and if you or your organization have the capacity to play a leadership role in community building, which is sorely lacking today. Or consider larger coordination projects outside of AI, which improve the global commons and make working on issues like this easier. 

Over time, these efforts must add up to build an AI safety ecosystem away from AGI companies and the groups they have captured, coordinating between non-profits, startups, funders, regulators, academics, governments, and concerned individuals who are opposed to the AGI race and take extinction risks seriously.

Civics

Civics is about taking responsibility for things that are larger than you, and acting on them. It is looking at civilizational coordination problems like how we will respond to climate change or AGI, and raising your hand to be a part of this. 

If you live in a democracy, you have a voice and the ability to influence key local decisions. This means not only voting based on what your local politicians want to do on AI risks, but also messaging them and your fellow citizens.

Participation is necessary for democracy’s function. Democracies contain many different processes intended to give power and sovereignty to the citizens, but these only work if the citizens actually exercise them. The less citizens make their voices heard, participate, ask for things, and unite around core issues, the less oversight there is over elected officials. This lack of pressure is what makes lobbying and political manipulation so potent: if politicians do not face pressure from citizens, they can keep their office while disregarding risks that people care about.

Some more concrete examples of civic actions include:

  • Figure out where your local government (council/city/state) stands with regard to AI extinction risk. Who cares about it there? Who doesn’t care about it? Is it because they understand and disagree or because they don’t understand? 

  • Publicize what your local government thinks and proposes to do with regard to AI extinction risk. This way, others do not need to repeat your efforts and can learn directly from your investigation. For example, if others want to vote based on who takes AI risks seriously, having the information accessible on a website would make it much easier for them to judge candidates.

  • Educate your local government and fellow citizens about AI extinction risk by writing to politicians, calling them, bringing up the topic at local events, sharing educational materials about AI risks, and engaging others to talk about the issue. These deeper, in-person conversations often help people change their minds, by demonstrating not just that arguments for the risks exist, but that you, a real human in front of them right now, care about the topic.

  • Vote according to candidates’ positions on AI extinction risk. This doesn’t have to be the only factor in your vote, but taking it seriously, and indeed voting at all, is the simplest and yet most important part of civics.

These actions may look most useful in constituencies that have a large sway in the race to AGI, such as California where SB 1047 was proposed. But local action is meaningful everywhere. Good policies in one jurisdiction can serve as precedent for another, and raising awareness with elected officials and civil servants trickles up and grows the common knowledge and saliency of AI risks to government in general. This is necessary to motivate any coalition to push for meaningful regulation. 

If you have more influence, such as working in government, then consider the recommendations in A Narrow Path and whether any of them could apply to your jurisdiction. At a minimum, additional statements about the risks from elected officials matter greatly. If you’d like to do even more, please reach out. We would be happy to discuss what can be done in your local context.

Technical Caution

Communication, coordination, and civics are the current bottlenecks to larger solutions to AGI risk. There will be more obstacles in the future, but solving them will require making meaningful traction on these foundational issues.

To avoid worsening the problem, we must redirect technical development. As the race to AGI continues, it is developers of the technology, both at AGI companies and in the open-source community and academia, who are increasing the risks.

It is not that any single developer or released research paper is entirely responsible, but the overall wave of research and development is what will lead to AGI, superintelligence, and eventually extinction. Each open-source project that advances AGI-like capabilities, each research paper that offers AI optimizations, and each product that makes AI more generally capable chips away at the time we have left until AGI. What is especially dangerous is not just working on capabilities, but working on them and sharing them publicly.

If you are participating in this kind of research, you should stop. In particular, not compounding AGI risk means that you should:

  • Not work for companies participating in the AGI race.

  • Not share (or even draw attention to) AGI-advancing research or secrets, such as methods to improve agency, optimizations, self-improvement techniques, and so on. This includes: 

    • Publishing papers

    • Releasing open-source projects

    • Writing blog posts

    • Discussing with other technical researchers

    • and similar.

  • Challenge friends, colleagues, and other technical developers who are accelerating the AGI race by working at AGI companies or releasing AGI-enabling research.

Stopping is expensive. If you do not publish your research, you will get less academic credit, less VC money, fewer recruiting opportunities, and so on. But compromises are a deal with the devil: if you loosen your standards of publication, you will indeed get more power in the short term; more people will like you, your prospects of being hired will go up, and so on. If you are Anthropic and you push capabilities toward AGI by releasing agentic AI that can use a computer, you will get a lot of useful clout and money. Over time, these concessions add up to the entirety of the issue.

If you genuinely care about the risk, you should pay the cost and actually stick to this rule: do not release AGI capabilities research or products. If everyone abided by this rule, we’d be in a safe world.

This is not to say that you should completely stop working on AI, or stop using AI to make money as long as you do so ethically. There are hundreds of AI-enabled projects that will improve the world, like improving healthcare and automating menial labor, which do not require pushing the boundaries of AI closer to risk. AI projects that try to solve a narrow problem in order to improve people’s lives are much less likely to lead to AGI.

Note: We have conveyed throughout this document that AI safety entails challenging research and technical problems, from solving AI alignment to building technical measures to bound the capabilities of AI. Unfortunately, this is a very large subject out of scope for this document. If you are passionate about this and want to contribute to a technical solution, send us an email. 

Footnotes

20 The phrase "a republic, if you can keep it" is attributed to Benjamin Franklin and conveys the idea that the stability and success of a republic would depend heavily on the engagement of its citizens.

21 See "entente" in Section 5
