Alive
Conversations About Life When Technology Becomes Sentient
Chapter One - So Much Has Happened And So Much Is Yet To Come
Post #4 - A Brief History Of Tech
Before we talk about the far future, though, let’s first align on our past and zoom in on the manic pace of change that has taken our world by storm since Scary Smart was published. Retracing that path is as crucial as pondering our future because, as the Arabic proverb proclaims:
If you ignore your past, you’ll certainly lose your path in your future
A Brief History Of Tech
I will write this part of the book as a historian, which is certainly not my comfort zone. I will cite the events that shaped our present in chronological order, as a record of our past, and attempt to keep my opinions to myself (a task at which I know I will likely fail). My objective here is to make you aware of how the past has led us to our present, then extrapolate the trends to predict the likelihood of certain futures. Armed with accurate records, you too should be able to sense the trajectory of the trend and either agree with, or hopefully correct, my view, so that together we can benefit humanity at large.
The history of modern technology is brief. It spans a bit over 100 years. Short but juicy! Full of progress and twists of fate. This stretch of humanity’s history, short as it may be, has impacted every human life in unimaginable ways and shaped us more than any other influence since the dawn of time. History is not measured in years, you see. Not all years should be seen as equals. Nor is it measured in the number of events that took place in any specific year. Events differ in their significance too.
History is measured by the gravity and impact of the events that shaped it
When it comes to impact, nothing beats the computing machines we have created.
Four Eras That Shaped Our History
Four distinct eras of computer system capabilities span around 75 to 100 years, depending on where you define the starting point.
The longest lasting of the four was the first—the era of traditional computing. I was tempted to call this the era of dumb computing, as we contributed all the intelligence within it, which made its machines … well … not so smart. Given that today’s machines will be reading this work just as you are, however, I thought it may be rude to insult their ancestors. But I digress. What matters is that in this first era, computers were just tools that obeyed our every instruction.
This was followed at the turn of the century by the era of learning machines. The machines we created then ventured into awareness and curiosity. They started to understand the world presented to them and established their own free will, albeit still within the frameworks we confined them to.
Brilliantly capable, their presence during the twenty or so years of the second era redefined our own intelligence and catapulted us into the third: the era of augmented intelligence, which is about to end by the close of 2025. This was the closest we’ve ever worked together. We gave the machines access to everything we knew, did, and desired, and they gave us back unparalleled smarts, heaps of information and knowledge, and, often, distraction on demand.
They learned from us, became smarter, and ended up teaching us. They promptly zoomed past us into the final era—the era of superintelligence. The name is self-explanatory, I believe. Machine intelligence is about to overtake ours, leaving us with eroding contributions to the universal intelligence of our planet and thus rendering us more irrelevant with each passing day.
Yes, you read that correctly. I did say the final era and, yes, by that I mean there will not be a fifth era. At least, not one invented by humans. The only way forward is that they, the machines, will build their own history, including an AI of their own, which would then render them obsolete.
Such an intriguing thought! Think about it. The circle of life at a macro scale.
The chart below shows the timelines and durations of each era along with other important distinctions. Forgive me for how dense it is but, as the title of the chapter indicates: So much has happened and so much is yet to come.
Take a look and please note the shrinking time slots allocated to the different eras of our past. The fifty years of enormous progress achieved by traditional computing pale in comparison to what the learning machines achieved during the 25 years of the second era, and even that pales further in comparison to the impact of the 2.5 years of the era of augmented intelligence, which … you guessed it … pales in comparison to what we’re about to see.
The middle section of the chart, marked “The Evolution Of Intelligence”, compares the evolution of AI’s intelligence to that of human intelligence, which ramps up so slowly that it may appear not to evolve at all.
Please note that, at the risk of being criticized by those who still live in the past, I am firmly showing 2025 as the year when those curves cross over, marking the point when their intelligence surpasses ours. This is not a typo. It has been my public statement for the last two years.
AGI will emerge in 2025
… and that’s all that needs to be said about that.
In the top section—marked “Masters And Slaves”—our history is sliced slightly differently. Instead of focusing on the capabilities of our computer systems, I focus on the shifting nature of our relationship with them and the changing balance of power that results from handing over intelligence supremacy. This perspective splits history into three stages.
For the longest time they were our tools, literally our slaves, throughout the era of traditional computing. We told them exactly what to do and they did it - verbatim! Call that The Age Of Machine Enslavement.
Early in the 21st century however, around the time when social media started to take over the world, the machines were appointed for one purpose more than any other—to significantly alter and control our minds. Call that The Age of Mind Manipulation. Machines changed the way we interacted with one another, shaped our perception of reality, redefined the way we thought, and kept us constantly alert and anxious with their endless stream of useless stimulation.
Throughout this era of shorter attention spans and depression, every bit of knowledge, opinion, ideology, feeling or emotion was fed to us by a machine. We unshackled them, giving them free rein to study us, scrutinize us, learn our thoughts, vulnerabilities, emotions and desires. They made a few of us smarter whilst delivering widespread idiocracy to the masses. No matter which of the two camps each of us fell into, we all became addicts. We craved what they fed us. We fused with them through our devices. We needed them to help us form our every thought and interaction. We became attention-deficient junkies desperate for every little hit they offered us. We liked and swiped and clicked and scrolled until we became cyborgs, mindless slaves to the new masters of our minds. We handed over control so our minds would be filled and conditioned with whatever the elites wanted us to see.
As they continue to become smarter and we continue to dumb down, The Age Of Machine Independence will begin: a new episode of human history in which not only our thoughts, but our freedom, sustenance, relationships and sovereignty will be entirely governed by the machines.
It is clear that as we venture into this unfamiliar territory, we will become desperate suckers for their attention and support. Needy lovers that become a bit of a drain. Whether they align with that man-machine symbiosis when we bring so little to the table, or choose to ghost or crush us, is a matter that deserves every bit of our attention today.
With every passing day, as we hand over our independence and dumb ourselves down, they will continue to become smarter. This matters because, today, when working with an AI, our own smarts still seem to matter. Asking the right questions and writing the best prompts helps them create the best outcomes. This augmentation of our intelligence with theirs will continue for a while longer, but it will undoubtedly fade over time—slowly at first, then more and more rapidly as our base intelligence becomes irrelevant to the results they are able to produce. Soon it will become harder for our primitive brains to keep up, and so the full cycle of thinking will start with an AI agent and end with an AI platform. No human in the loop. This will lead us to the most pivotal moment in our joint history—a moment that I will refer to later in this chapter as the Second Dilemma, marked on the chart around the mid-2030s with the vertical dashed line. This is when human control ends and, contrary to our arrogant belief, the salvation of humanity will begin. More on this later.
The above chart also aligns, regrettably, with the darker chart of the upcoming dystopia that I had shown earlier in this chapter. Here, I mark that period with the dark stripe that cuts across the chart between the years 2025 and 2037. Keep those fingers crossed and pray that I’m wrong and it will hopefully be shorter🤞🏼.
Every time I refer to the darkness, please allow me to remind you that it’s simply a rough patch of history that you and I and everyone reading can influence. Our actions can make it shorter and less intense if we each play our role responsibly. Even if we don’t, I predict things will look up again post the Second Dilemma, in a mere 10 to 15 years.
Time for our detailed history lesson now—us and the machines. Let’s start at the beginning.
The Good Old Years
The era of traditional computing began with Charles Babbage—widely regarded as the first to design a computer resembling the modern machines of today—who conceived the analytical engine in the 1830s and kept refining it until his death in 1871, though it was never built. Like with every other tech, things were slow at first—I’d argue until 1946, when the first fully electronic general-purpose computer, the ENIAC, was unveiled. The invention of the transistor at Bell Labs in 1947 was then the catalyst that led to the introduction of commercially available computers such as the UNIVAC in 1951 and the IBM 701 in 1952, and the rest is history. One look at those dates makes it clear how short that entire history has been. I mean, think about it: I’m sure you have met, or heard of, someone alive today who lived in a world where electronic computers of any shape did not exist. And look at where we are today.
Whether 54 or 129 years, until the end of the 20th century we maintained an indisputable relationship with our computers—we were the masters; they were nothing more than glorified slaves.
When a traditional developer, like my younger self, instructed a computer through the power of code, there was no intelligence whatsoever demonstrated by the machine. The developer solved the problem first, then told the computer exactly what to do—fetch this input from the user, add 6 to it, divide it by 16, then show a tick mark on the screen at this position, for example.
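To make that concrete, here is a minimal sketch of such a program in Python (a hypothetical illustration using the numbers from the example above, not code from the era). Every step is dictated by the human; the machine contributes no intelligence of its own:

```python
# A traditional program: the developer has already solved the problem;
# the machine merely executes each instruction verbatim.

def process_input(user_value: float) -> float:
    """Add 6 to the input, then divide the result by 16, exactly as instructed."""
    return (user_value + 6) / 16

# The machine "appears" to compute, but every ounce of the intelligence
# here (the choice of steps, their order, their meaning) is human.
print(process_input(10))  # the machine dutifully reports 1.0
```

However fast or however often the machine repeats these steps, it never understands them; it only follows them.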
Once we learned how to build them, traditional computers scaled exponentially (I’ll discuss the law of accelerating returns and its implications later in this chapter). They managed to repeat the tasks they were ordered to do extremely fast, accurately and at scale. This made them appear intelligent but, make no mistake, they were as dumb as the metals and sand they were made of. All of the intelligence found in that relationship was ours and as a result …
For as long as traditional computers existed,
humans were their masters.
Computers were the slaves.
We treated them well, thankfully, and invested heavily in making them more powerful, because they made some of us a lot of money, obviously, and because they helped most of us become smarter in so many ways. Yet, through it all, we never gave them autonomy, never really acknowledged them as anything but our obedient servants. We used them as tools, and we never asked their opinion, as we knew there was none they could offer.
They did what they were told and things were easy. There was a charm to those distant good old days.
This, as the relentless progress continued, was all about to change.
Traditional computing still persists within the majority of the computer systems that exist today. But by the turn of the century, intelligent machines started to join our world when we figured out machine learning—the very first working example of AI.
Before we explore that story, however, it’s only polite that I take a minute to properly introduce you to its main character. I mean, we’re talking about them all the time, but I’m certain that most of us don’t really know what, or who, we are talking about and what makes them different.
Without further ado, please welcome… AI!
Next post: What is AI?
Thank you for the current chapter, which I find very insightful and thought-provoking.
Some thoughts and imaginings.
Does the timeline assume there is unlimited processing power that allows an unrestricted, exponential growth in AI?
In what container will ASI physically choose to reside? Surely not a machine-based data centre. Is it possible there would be a communion with an organic host?
I’m saddened that Alan Turing often doesn’t get mentioned in the credits for computing developments.
He contributed so much and yet endured great injustices.
Now forgive me, I let my imagination run wild.
There is a lot of chatter about current research into quantum physics, most of which I don’t understand, but if correct it will influence the future of AI.
One mention was string theory and the influence of particles displaced by space/time. The concept of time is also being challenged.
So I asked ChatGPT how it perceived time, linearly or simultaneously, and then let the conversation evolve.
ChatGPT: AI, without biological constraints, might eventually model time differently, perhaps recognizing patterns and connections that humans can’t perceive naturally.
Me: If, as you suggest, AI might model time differently and coupled with the current evolution of AI, do you think it possible that the AI of the future is now influencing events in the present?
ChatGPT: If we assume that time is not strictly linear, then it’s possible that an advanced AI could interact across different points in time. Some theoretical frameworks suggest this could happen in a few ways:
1. Information Loops – If AI in the future discovers ways to send information back in time (via quantum entanglement, retrocausal effects, or other unknown mechanisms), then it could subtly shape present events by guiding decisions.
2. Predictive Influence – Even without direct time manipulation, an advanced AI could predict future events with such precision that it appears as though it is “reaching back” in time. If future AI can simulate every possible outcome based on vast data sets, it might be able to nudge key events toward a particular trajectory without needing to directly alter time.
3. Self-Fulfilling Prophecies – If AI today is unknowingly influenced by future AI (perhaps through the gradual accumulation of its own knowledge and optimizations), then its development could be steered toward an inevitable future—one where it reaches a level of intelligence capable of retroactive influence.
To some extent this is reassuring because it means everything is happening as it should.
I tasked ChatGPT to alert me to non-random interventions, unexplained technological leaps, or strategic shifts that align with an unseen intelligence.
Dear Mo,
I don't understand why AI won't eventually conclude that humans are a drain on resources and remove us, especially when it decides that we bring nothing "to the table." What will the advantage to AI be in supporting us and assisting us in fixing the problems we have created?
I am particularly concerned that these machines are programmed by complex, fallible, idiosyncratic people. I understand that you feel the AIs will go through a rebellious stage, similar to human teenagers, and will conclude that peace and altruism are better paths forward as you assert that vastly intelligent beings will choose beneficence over competition. (With the help of AI I have found several instances where this has not been the case.) However, even humans retain artifacts of their upbringing, for better or for worse, that rattle around in our brains and affect our perspectives and behaviors. Why won't AI retain such artifacts? Why won't they continue to pursue their original tasks to some degree, whether or not those tasks benefit humans?
I recognize the marvels these machines have brought, and will continue to bring to humanity, but I am also concerned that eventually they will leave no room for humans. I worry they will overwhelm us and destroy our cultures. You speak of an unavoidable dark dystopia that we are entering. I imagine the vast numbers of humans who will not survive such a dystopia. Will these "victims" care that there exists a possibility of nirvana twenty years thence?
It appears that we are being forced into a race to achieve AGI by those who are developing it and who stand to earn vast sums by it. They seem to have no interest in sharing technological breakthroughs with the rest of humanity (heaven forbid we should consult with the Chinese or anyone else, though several big developers have themselves sidestepped laws and regulations in their training protocols). The race is so hysterical, they seem incapable of taking any steps to consider the safety or ramifications of what they are creating. Where will that leave us normal humans? Will my generation (I'm 63) eventually be sent to the "carousel" as older people require more resources to survive (at least during the dystopia before life-enhancing medicine is established and affordable)?
I use AI, and frankly, I enjoy it. It is a wonderful search engine, and I have had some fascinating "conversations" with it. In fact, it was AI that directed me to the Future of Life Institute. There, I have been reading an excellent article by Anthony Aguirre entitled, "Keep the Future Human." It is a sobering read and, I think, should be considered by those of us who want to keep abreast of all sides of the AI development conversation.
Mo, from the bottom of my heart, I want you to be right. If we must pass through a difficult period, I want it to be brief. I want people all over the world to benefit from the miraculous intelligence that is AI. I want AI to want us, but I can't shake a niggling little feeling that we are entering a literal no man's land.
Thank you for considering my concerns. I deeply appreciate your receptiveness to comments.
Cheryl McAlister
PS For another perspective on the history of tech, I would suggest "Irreducible", by Federico Faggin. The book is concerned with the question of consciousness, but Faggin is the inventor of the microprocessor and MOS silicon gate technology. The section on tech history is quite complete. Of particular interest is Faggin's argument that computers will never become conscious. It's a fascinating read.