Broken Promises
How Technology Often Betrays The Noble Promise Its Creators Make
Alive
Conversations About Life When Technology Becomes Sentient
Post #10 - Broken Promises
Broken Promises
Just as I ask you not to blame AI for our imminent predicament, please don’t blame the geeks who built the technology either. This was never what we signed up for. Many of us were made—through altruistic corporate slogans—to believe that our work was making the world a better place. Looking back, though, it’s clear that this was not entirely true. The impact we created was undeniable, and yet the promises we made were never met—perhaps never even meant to be met.
Technology has made every aspect of human life better and yet, it has never fully managed to deliver on the altruistic promises of its capitalist creators
The promise of the mobile phone, as seen in the early ads, was “Connecting People”, remember? We were promised that mobility would offer us freedom and enable us to work less. Those promises were clearly missed! Social media’s advertised promise was “to give people the power to share and make the world more open and connected”—Facebook’s early mission. That too was missed, replaced in 2017 with “developing the social infrastructure for community”—which has obviously been missed as well. Smart home technologies promise to act smart but are actually dumb, and dating apps promise to help you find love but deliver an endless stream of junk dates that keep you on the app forever, driving you through cycles of dates, disappointment, recovery, swiping right, then dating again.
Those technologies, and many others, have in fact often delivered the exact opposite of what they promised. Mobile phones separate you from the people you love, replace real connections with fake ones, and make you constantly available to work more. Social media drives us even further apart, destroys the social infrastructure of society, and is a prime reason behind today’s loneliness epidemic. Smart home technologies complicate our homes, and dating apps are making millions lose hope that true love even exists.
Only one promise was kept—the promise that capitalism made to the founders of those ventures: that they, like their rich idols before them, would make more money than any human could burn through in countless lifetimes.
* For the record, I believe—as an ex-insider at Google—that the leadership there consistently attempted, and still tries today, to deliver on the promise to make our world better. While affected by the pressures of being a publicly traded company that has to play within the rules of capitalism, and by government surveillance, “don’t be evil” still holds true in the hearts of its most prominent leaders. Mistakes have sometimes been made, don’t get me wrong, but the leaders of Google I worked with still try. Credit should be placed where credit is due, so that we encourage more companies to be less evil.
The next big tech is AI, and the question of our generation remains:
Will AI fulfill what its creators say it will?
Or will it go down technology’s long trail of broken promises?
The answer remains to be seen.
It was about time
I know it may seem like I’ve veered off course, so let me remind you—we’re still talking about the second era of computing, which began when the very first breakthrough discoveries in artificial intelligence started to take shape.
AI, as an ambition, was nothing new by then. Humanity had long dreamt of building thinking, human-like machines. Since the Dartmouth Workshop—where pioneers like Marvin Minsky and John McCarthy imagined the very AI-driven world we now inhabit—countless computer scientists and geeks, including a younger version of me, tried to bring that dream to life. We failed. Again and again. What little progress we made was crushed during the two periods we now call the AI Winters, beginning around 1973 and 1987, when funding for AI dried up almost completely. We thought we’d never figure it out—because we never realized that the world, at the time, simply didn’t contain what was needed to build an intelligent machine, regardless of how hard we tried.
You see, intelligence—whether biological or artificial—requires three things to develop:
The machinery—a functioning brain in humans, or a powerful computer in AI.
A learning algorithm—discovered by neuroscience in humans, and encoded by developers in machines.
And training data—the knowledge and experience from which learning takes place.
Until the end of the 20th century, AI researchers focused almost entirely on the second: algorithms and techniques. But no matter how clever those techniques were, they couldn’t spark true intelligence—because the world simply didn’t have enough computational power or training data to make it possible. That changed with the explosive growth of the internet.
By the turn of the 21st century, the world was already running massive computer systems to power the internet. Many of these systems were freed up when the dot-com bubble burst, making compute infrastructure suddenly more affordable—and more available.
These machines had been built to handle peak customer demand, which meant that by midnight—when most users were asleep—we had enormous digital brains sitting idle, itching to be useful. At last, we had a big enough digital brain capable of learning to be intelligent.
All we needed was the training data. And there it was, everywhere—scattered across the web like breadcrumbs. As the internet seeped into every part of our lives, trillions of articles, images, videos, and books were finally available in digital format—ready for machines to graze on, day after day.
It was the perfect storm: we had the compute, we were experimenting with the right algorithms, and we had boundless data. We were about to birth our first artificial child. The prodigy that is AI was about to join our world.
With each passing year in the second era, the promise of wealth grew—fueling the ambitions of entrepreneurs shaping the future. The greater the profits projected, the more investment poured in. Behind closed doors, we were creating small miracles of technology.
It became clear we were building the kind of momentum that usually comes just before a seismic breakthrough. A brand new world was knocking. And if you’d lived in the tech world long enough, you knew. You felt it. That quiet tingle of something inevitable. You just knew.
Machine learning, deep learning, recommendation engines, Deep Q networks beating video games, self-driving cars, computer vision, natural language processing, translation—you name it. Examples of machine intelligence were springing up everywhere.
For those of us who lived in the labs, nurturing these budding prodigies, the difference between that moment and any other era of technology was unmistakable. It’s captured in the name I chose for that time: the era of learning machines. And boy, did it feel good to witness their birth and observe them as they got smarter, day by day.
As the second era came close to its conclusion, many rode the wave of excitement, while others felt torn—caught between wonder and unease.
On one side, the rapid progress was a dream come true for any geek—made even shinier by the glow of altruistic corporate slogans. On the other, we knew the truth: we had made almost no progress at all on what was then called The Control Problem. No one knew exactly what would happen if computers outsmarted us. But everyone knew that if something did go wrong, we had no way to safeguard humanity from the fallout.
Even the eternal optimists could feel it—risks were knocking at the door. Many of the most brilliant minds in the field were cautious. But few were willing to say so out loud.
AI was ready for prime time—but humanity wasn’t. And still isn’t.
We all knew it. The future we were building was filled with unknowns. And yet—the herd kept marching closer and closer to the cliff of the unknown.
The Sentinels Ignored
As it became clear—near the end of the second era—that the rise of AI was imminent, the murmurs began. A force for good started to sound the alarm. It was quiet at first, and scattered; then it grew louder and louder, and it keeps growing today.
Three days after I left Google in 2018, I released my first viral video calling for a global focus on AI ethics. It launched my OneBillionHappy mission and carried a simple, urgent message: that if AI would one day reflect humanity’s values, then humanity’s ethics needed to evolve—fast.
The video reached 12 million views in its first week. It sparked a wave of energetic conversation. And then… everyone went back to whatever it was they were doing—business as usual.
In 2020, I wrote Scary Smart. It was published in 2021. Despite the world still reeling from the scars of the COVID-19 pandemic—and despite being dismissed by every major TV, radio, and news outlet, many of whom still thought AI was science fiction—the book quietly became The Times Business Book of the Year. Many said it was a powerful read. Insightful. Eye-opening. And yet—nothing changed.
I turned to long-form conversations to spread the message, appearing on countless podcasts that took my warning to over 100 million viewers. Those conversations received so many supportive comments and expressions of gratitude. I kept going. More than seven years of relentless effort. And still… here we are.
I wasn’t the first to sound the alarm. Long before me, Swedish philosopher and scientist Nick Bostrom explored the risks of artificial intelligence in depth in his book Superintelligence. Max Tegmark—a Swedish-American physicist and machine learning researcher—became a leading voice for AI safety. Elon Musk, you know who he is, openly warned of AI’s existential risks. Task forces were formed. Non-profits were launched. There was chatter—lots of it. But when it came to action, almost nothing real was done.
The end of the second era echoed its central theme: boundless excitement for technological progress, and a dangerous disregard for the risks it carried.
As the third era began, more prominent voices joined the chorus of concern.
In 2023, Geoffrey Hinton—widely known as the “Godfather of AI”—resigned from Google to speak freely about the dangers he once helped create.
At OpenAI, key figures began to step away. Ilya Sutskever, co-founder and chief scientist; Jan Leike, co-leader of the superalignment team; Daniel Kokotajlo, an AI governance researcher; and John Schulman, co-founder and head of alignment science—all left the company to raise alarms about the reckless race toward AGI.
And many more continue to leave—quietly, steadily—for the same reason. All sentinels atop the citadel tower, crying out about the looming danger… to deaf ears.
Serious efforts to no avail … and time is running out.
Our Runaway Clock
One thing that often gets missed when looking at tech history is that—unlike the rest of human history—technological progress seems to move to the beat of an accelerating metronome.
The era of Traditional Computing lasted many decades. The era of Learning Machines upended everything we knew about our way of life, shifting the balance of mastery and servitude toward the machines—and it did all that in just over 20 years.
Now, the third era—the era of Augmented Intelligence—is already ending. It lasted just a little more than two years, and in that brief window, we witnessed the beginning of the end of humanity’s way of life as we knew it.
From many tens, to twenty, to two. That’s the ticking speed of our ever-accelerating, runaway clock. Our relationship with technology now ticks to a rhythm that devours time like a starving beast—its tempo multiplying into a frantic symphony, accelerating toward entropy.
Like the entropy of the universe itself, it races toward chaos before the reset. Let’s just hope we find our way to order—without the collapse in between.
Nothing captures this frantic pace better than the cascading events of the third era—a black swan that descended upon our world when a small startup in California broke the biggest unwritten promise tech had ever made. I mark the end of the second era as the day humanity first interacted with ChatGPT.
Netscape All Over Again
The third era of computing began with a bang—the global unveiling of AI to an unassuming public. Word on the street was that a little-known startup, OpenAI, had cracked the code. They had finally created artificial intelligence.