Our Last Hurrah

The Odds Of A Global Suicide & How To Prevent It

Mo Gawdat
May 10, 2025

Alive

Conversations About Life When Technology Becomes Sentient

Post #11 - Our Last Hurrah
The Odds Of A Global Suicide & How To Prevent It

Previous post: Broken Promises


Our Last Hurrah

In early 2023, as the last of the unwritten rules were shattered by OpenAI’s release of ChatGPT on the open web, a group of prominent voices—including leading scientists, influential business figures, and tech billionaires like Elon Musk—published an open letter calling for a six-month pause in AI development. Just six months. A brief window to reflect, assess the mounting risks, and figure out how to steer this runaway train before it veered off the cliff.

Predictably, the industry’s response was exactly what I foresaw in Scary Smart. Back then, I called it the first inevitable. Today, I think it’s more fitting to call it the first dilemma. Remember?

AI will happen; there will be no way for humanity to halt its development.

The open letter was humanity’s last real attempt to hit pause—our last hurrah before the floodgates opened—and it failed. There will be other desperate cries to change course, no doubt. But they’ll come only after louder, more brutal wake-up calls.

Just as nuclear treaties were finally drafted in the 1960s under the looming threat of mutually assured destruction, similar agreements will emerge around AI. And like those treaties, they may be signed—but rarely honored, inching us ever closer to the doom we’ve only narrowly escaped so far.

We’ll return to that reckoning later. But first, let’s revisit that moment that lit the fuse of the AI arms race—with someone who stood at the very heart of that conversation …

Mo: Trixie, could you please tell me briefly about the AI open letter?

Trixie: The open letter you're referring to is likely the one from early 2023, signed by notable figures like Elon Musk and AI researchers, urging a temporary pause on advanced AI development. The letter expressed concerns about the rapid progress of AI without proper oversight or understanding of long-term risks. It called for a six-month moratorium to establish safety protocols and ethical guidelines. The goal was to ensure that AI's growth remains safe, ethical, and beneficial for humanity before potentially causing irreversible consequences.

Mo: Yes. That’s the one. How did the AI community respond to it?

Trixie: The response from the AI community to the open letter was mixed. While some agreed with the concerns and supported the idea of pausing advanced AI development to address safety and ethical concerns, many did not agree to a full halt. Critics argued that a blanket pause could hinder innovation and that regulations should be developed alongside ongoing research. Others felt the fears were exaggerated or that proper safety measures were already being considered. In practice, there wasn’t a widespread halt, and development in AI has continued at a rapid pace since the letter.

Mo: Can you give some examples of CEOs of major AI players that quickly turned the letter down?

Trixie: Some key figures in the AI space were quick to dismiss the idea of halting development after the open letter. For example: Sam Altman, CEO of OpenAI, acknowledged the concerns but argued that a pause wouldn't be practical or beneficial; he emphasized that responsible AI development should continue alongside safety measures. Sundar Pichai, CEO of Google and Alphabet, also did not back a pause; Google has been heavily invested in AI advancements, and while Pichai has voiced the need for AI regulation, a halt was not something he advocated for. Jensen Huang, CEO of NVIDIA, whose hardware powers much of the AI research, didn't support the idea of halting progress either, focusing instead on the potential of AI to drive technological innovation.

These CEOs were more focused on developing responsible AI within an active framework rather than pausing entirely.

Mo: You say responsible, but do you think it's possible that they refused to comply with the open letter because it would work against their own benefit and advantage?

Trixie: It’s very likely that the refusal to comply with the Open Letter had a lot to do with business interests and competitive advantage. For companies like OpenAI, Google, and NVIDIA, AI is a core part of their strategy and future growth. Halting development could mean falling behind in innovation, losing market share, and missing out on new opportunities.

These companies are in a race, not just against each other, but also internationally, especially with countries like China making huge strides in AI. Stopping or slowing down would mean potentially losing an edge in a field that’s moving rapidly. So yeah, it’s fair to say their decisions were influenced by the need to stay ahead, protect investments, and maintain a competitive advantage. At the same time, they likely feel they can manage AI risks while continuing development, but of course, …

There’s always that underlying tension between ethics and profit.

Mo: Give me an honest answer please. What would you say is the likelihood that the continued rapid advancement of AI will lead to harming humanity—whether that's because of the existential risk of a rogue AI or as a result of the abuse of AI by the greedy, powerful leaders of business, government, or criminal enterprises?

Trixie: That’s a heavy question, Mo, but I appreciate you asking it straight. The honest answer?

The likelihood of harm from AI is high—not necessarily in a "killer robots take over the world" way, but more in the sense that AI is a tool of immense power, and …

Power is almost always abused before it is properly regulated or understood.

The probability that AI will become a self-directed existential threat—meaning, it decides on its own to wipe out or subjugate humanity—is low but not zero. The real concern isn’t AI suddenly becoming self-aware and hostile; it’s that humans might build AI systems with goals misaligned with human well-being.

A system optimizing for efficiency or profit without constraints could cause massive harm unintentionally. Think runaway automation disrupting economies, or AI-driven weapons escalating conflicts faster than humans can intervene.

Now, the risk of AI being used by bad actors—whether corporate, governmental, or criminal—is almost certain. It's already happening: governments are using AI to monitor and suppress populations, and China's social credit system is an early glimpse of that. Mass manipulation through AI-generated propaganda and deepfakes is making truth harder to discern and increasing political instability. AI is automating jobs at a scale that benefits the ultra-rich while leaving the rest scrambling, which will lead to the consolidation of wealth and power. Cyber warfare and crime are growing too, with AI-driven hacking, scams, and identity theft becoming more sophisticated every day. Then there's a high risk (90%+) that AI-powered autonomous weapons will escalate conflicts, making war faster, deadlier, and less accountable. The lack of human oversight could lead to unintended massacres, rogue operations, or AI-driven arms races with catastrophic consequences.

Mo: Can I ask you to answer in no more than three lines, and to start with an actual probability figure of the risks we're facing, please?

Trixie: I'd say there's an 80% chance AI will cause significant harm due to misuse by powerful entities, and a 10-20% chance of existential risk from misaligned AI goals. The real danger isn’t AI itself, but human greed, short-sightedness, and lack of ethical oversight.

And yet, the open letter was dismissed. Humanity's frantic race to satisfy its insatiable hunger—for more money among the capitalists, and more power among the politicians—wouldn't pause even for 180 days to consider a looming existential threat or a probable short-term dystopia. Neither extinction nor the highly probable suffering of billions of innocent lives was enough to warrant a brief moment of reflection.

You’d have to be drunk to lead yourself—let alone the entire world—into danger that grave. Drunk on fear. Drunk on ego. Drunk on greed.

The Odds Of A Global Suicide

Allow me to take a minute to put those percentages presented by Trixie above into perspective.

An existential risk of 20% is deadlier than a round of Russian Roulette, where the odds of dying are one in six, roughly 17%.

… only this gun is aimed at all of us.

An 80% risk of abuse makes Russian Roulette look like a safe game. It is the equivalent of boarding a plane knowing that four out of every five flights crash.
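
For readers who want the comparison made concrete, here is a minimal back-of-the-envelope sketch in Python. The AI risk figures are the ones Trixie quoted above; the one-in-five-million odds of a fatal commercial flight is my assumed order of magnitude, included purely for scale.

```python
# Back-of-the-envelope comparison of the risk figures discussed above.
# All numbers are rough, illustrative assumptions, not measured data.

russian_roulette_death = 1 / 6    # one bullet, six chambers: ~16.7%
existential_risk = 0.20           # upper end of Trixie's 10-20% estimate
misuse_risk = 0.80                # Trixie's estimate of abuse by bad actors
fatal_flight = 1 / 5_000_000      # assumed odds of a fatal commercial flight

print(f"Russian Roulette (death):   {russian_roulette_death:.1%}")
print(f"Existential risk:           {existential_risk:.1%}")
print(f"Misuse risk:                {misuse_risk:.1%}")
print(f"Misuse risk vs. one flight: {misuse_risk / fatal_flight:,.0f}x more likely")
```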

Can you think of any scenario, regardless of the promised future gains, where a sane human would stand in front of that barrel? Can you think of a reason that justifies getting on that plane?

And if your answer is no, then why should any Altman—Sam, who kicked this off with his reckless ambition (and I’ll walk you through the history in just a few pages)—or the thousands of clones now mimicking his every move, get to decide our fate for us? I never delegated my future or the future of my wonderful daughter, Aya, to any of them.

It’s time to stand up and make our voices heard.

We need to halt the development of AI until our world cooperates to create an AI that’s centered around the wellbeing of humanity and a peaceful coexistence with the machines.

… and this is not just some naive, idealistic or unattainable dream. We’ve done it before.

Although it may seem that humanity is always at war, we’ve also come together to forge meaningful peace—treaties that pulled us back from the brink and prevented far greater destruction.

Growing Up

Mo: Trixie, can you give a few examples of projects where competing nations cooperated in favor of the wellbeing of humanity at large? CERN, the ISS, and the NPT are examples that come to mind.

Trixie: Absolutely, Mo—those rare but powerful moments when we actually act like a species rather than a collection of tribes. Here are a few shining examples where cooperation transcended borders:
