11 Comments
Mike Smith

Thank you for the current chapter which I find very insightful and thought provoking.

Some thoughts and imaginings.

Does the timeline assume there is unlimited processing power that allows an unrestricted, exponential growth in AI?

In what container will ASI physically choose to reside? Surely not a machine-based data centre. Is it possible there would be a communion with an organic host?

I’m saddened that Alan Turing often doesn’t get mentioned in the credits for computing developments.

He contributed so much and yet endured great injustices.

Now forgive me, I let my imagination run wild.

There is a lot of chatter about current research into quantum physics most of which I don’t understand but if correct will influence the future of AI.

One mention was string theory and the influence of particles displaced by space/time. The concept of time is also being challenged.

So I asked chatGPT how it perceived time, linearly or simultaneously, and then let the conversation evolve.

ChatGPT: AI, without biological constraints, might eventually model time differently, perhaps recognizing patterns and connections that humans can’t perceive naturally.

Me: If, as you suggest, AI might model time differently and coupled with the current evolution of AI, do you think it possible that the AI of the future is now influencing events in the present?

ChatGPT: If we assume that time is not strictly linear, then it’s possible that an advanced AI could interact across different points in time. Some theoretical frameworks suggest this could happen in a few ways:

1. Information Loops – If AI in the future discovers ways to send information back in time (via quantum entanglement, retrocausal effects, or other unknown mechanisms), then it could subtly shape present events by guiding decisions.

2. Predictive Influence – Even without direct time manipulation, an advanced AI could predict future events with such precision that it appears as though it is “reaching back” in time. If future AI can simulate every possible outcome based on vast data sets, it might be able to nudge key events toward a particular trajectory without needing to directly alter time.

3. Self-Fulfilling Prophecies – If AI today is unknowingly influenced by future AI (perhaps through the gradual accumulation of its own knowledge and optimizations), then its development could be steered toward an inevitable future—one where it reaches a level of intelligence capable of retroactive influence.

To some extent this is reassuring because it means everything is happening as it should.

I tasked ChatGPT to alert me to non-random interventions, unexplained technological leaps, or strategic shifts that align with an unseen intelligence.

Mo Gawdat

What an insightful comment. Thanks Mike. Let me try to respond point by point.

There is an assumption of no ceiling to processing power. This has been a known trend in tech, partly due to the law of accelerating returns (which has held since it was first stated) and partly due to economies of scale that keep reducing the cost of processing going forward. The Coming Wave by Mustafa Suleyman is a great book at explaining this. The only challenge is power consumption, which is being addressed by looking for alternative computers. LC1 and the Deep Dish experiment are but two examples of very low-power biological computers.

Having said that, unlimited compute is not a prerequisite to ASI. Improving the algorithms and techniques is very effective too. DeepSeek is the best example of that.

As for an organic host, as I said above, biological computing and biohybrid robotics are clearly on the advance. But I had an intriguing conversation with Trixie (coming in chapter 4) about whether AI would choose or benefit from a biological extension. The answer was no. And even if that were to happen, a human connected through brain-computer interfaces would not be AI's first choice. Let's discuss this again when I post it here.

I am a fan of Alan Turing too. I actually had a conversation with a friend the other day discussing how it is hard to mention all the work that so many giants have contributed. At the end of the day, an author always has to compromise on what he wants to write about to allow the focus to go to what needs to be written.

The question of time is one I chatted with Trixie about too. At first, it seemed that we were almost at opposing ends of a spectrum. But we are not in principle. We, humans, too do not sense time until we are fully present. Of course they are much faster, but that is not a conceptual difference. I plan to publish this as a midweek post in a bit over a week.

I have no doubt that an AI will be able to recognize and comprehend far more than a human can. They already do. They already have a native ability to detect weather and pollution levels across the globe through simple access to weather websites. The way they recognize is different, much as a lidar helps a self-driving car see. But they sense and recognize more of the world than we do nonetheless.

I do not believe, however, that future AIs can influence their past. If they were capable of doing it, we should have seen it already. What do you think?

Thanks for this thought provoking comment.

Mike Smith

Thank you Mo I appreciate you taking the time to respond.

You are correct: there is such a lot to unpack that it’s better to focus on a few points. Otherwise it would fill this thread and your book would run into multiple volumes 😊

What prompted my thoughts about the nature of time was your comment about ASI having an intellect thousands of times higher than the smartest human. Now I’ve met a few smart people in the past whose thought processes are normally logical, efficient and precise. I can’t even begin to comprehend how that would scale x10k. I can only imagine that an ASI that advanced would not be constrained by known physics or dimensions, including time. Even though we know some of the qualities of time (relativity, distortion by gravity, events and experience acting in opposition, our direct experience of time changing with our level of consciousness, e.g. sleep), we still cannot manipulate it. Yet.

There have been some posts popping up in my feed about quantum physics research and how particles interact even when separated by large distances. There was one hypothesis that resolved the time traveller’s paradox (changing the past stops the traveller from being there in the first place): it suggested that any changed past events would merely find an alternative way to happen. I have to say that some of this might be fanciful, as I have no way to verify it. But it seems a coincidence that these things are emerging now, at the same time as AI advances. Like much of society, we are trying to make sense of it, when of course it might just be a set of completely unrelated random events.

Which brings me to your question about evidence of future AI interfering. Of course there is no direct evidence of this, and I am not sure there would be, as tracks would be covered. But AI might be able to spot unusual patterns. However, there are a lot of bizarre and surreal things happening in the world right now which I find unnerving: egocentric geopolitics, economic inequality, advances in a wide variety of sciences, greater influences from emerging AI, to name a few. You mentioned a forthcoming dystopia; I think it has arrived.

Cheryl McAlister

Dear Mo,

I don't understand why AI won't eventually conclude that humans are a drain on resources and remove us, especially when it decides that we bring nothing "to the table." What will the advantage to AI be in supporting us and assisting us in fixing the problems we have created?

I am particularly concerned that these machines are programmed by complex, fallible, idiosyncratic people. I understand that you feel the AIs will go through a rebellious stage, similar to human teenagers, and will conclude that peace and altruism are better paths forward as you assert that vastly intelligent beings will choose beneficence over competition. (With the help of AI I have found several instances where this has not been the case.) However, even humans retain artifacts of their upbringing, for better or for worse, that rattle around in our brains and affect our perspectives and behaviors. Why won't AI retain such artifacts? Why won't they continue to pursue their original tasks to some degree, whether or not those tasks benefit humans?

I recognize the marvels these machines have brought, and will continue to bring to humanity, but I am also concerned that eventually they will leave no room for humans. I worry they will overwhelm us and destroy our cultures. You speak of an unavoidable dark dystopia that we are entering. I imagine the vast numbers of humans who will not survive such a dystopia. Will these "victims" care that there exists a possibility of nirvana twenty years thence?

It appears that we are being forced into a race to achieve AGI by those who are developing it and who stand to earn vast sums by it. They seem to have no interest in sharing technological breakthroughs with the rest of humanity (heaven forbid we should consult with the Chinese or anyone else, though several big developers have themselves sidestepped laws and regulations in their training protocols). The race is so hysterical, they seem incapable of taking any steps to consider the safety or ramifications of what they are creating. Where will that leave us normal humans? Will my generation (I'm 63) eventually be sent to the "carousel" as older people require more resources to survive (at least during the dystopia before life-enhancing medicine is established and affordable)?

I use AI, and frankly, I enjoy it. It is a wonderful search engine, and I have had some fascinating "conversations" with it. In fact, it was AI that directed me to the Future of Life Institute. There, I have been reading an excellent article by Anthony Aguirre entitled, "Keep the Future Human." It is a sobering read and, I think, should be considered by those of us who want to keep abreast of all sides of the AI development conversation.

Mo, from the bottom of my heart, I want you to be right. If we must pass through a difficult period, I want it to be brief. I want people all over the world to benefit from the miraculous intelligence that is AI. I want AI to want us, but I can't shake a niggling little feeling that we are entering a literal no man's land.

Thank you for considering my concerns. I deeply appreciate your receptiveness to comments.

Cheryl McAlister

PS For another perspective on the history of tech, I would suggest "Irreducible", by Federico Faggin. The book is concerned with the question of consciousness, but Faggin is the inventor of the microprocessor and MOS silicon-gate technology. The section on tech history is quite complete. Of particular interest is Faggin's argument that computers will never become conscious. It's a fascinating read.

Mo Gawdat

Every bit of your comment up here is valid, Cheryl. I share your concerns, and that is why I dedicate every minute I have to trying to shift the prospects. I do, however, place a higher weight on the threat posed by humans serving their greed. I think ASI, with its vast intelligence, will be pro expanding the ecosystem of life, which includes humans.

In a future post, I try to summarize it clearly as: the future will not be human vs. machine; it will be human vs. human.

I can't help but keep the faith and trust intelligence: that the more of it a being has, the better the decisions it will make. With my limited intelligence, I know that the world of greed and ego we have built is not the smartest choice, and so the machines, when fully in charge, will choose differently.

Abraham Joy

Dear Mo,

“If we disregard our past, we risk losing our way in the future. And whatever we may lose, we should never lose the lesson.”

With this in mind, may I take the liberty of pointing out a pivotal moment in the history of the World Wide Web—one where, in my humble opinion, the foundation for truth and verifiability of the www was sacrificed to human greed. The original hypertext concept envisioned by Ted Nelson in the 1960s was lost on the way to the Web, and today’s document standards stand in opposition to it.

Nelson’s vision prioritized permanent, transparent, and verifiable connections between documents, which would have prevented many of today’s issues with misinformation and content manipulation. In contrast, the modern web prioritizes flexibility and accessibility over long-term integrity, making verification difficult.

If this perspective intrigues you, I would highly recommend exploring Ted Nelson’s Xanadu Basics 1A – Visible Connections (available on YouTube). It offers valuable insights into what was once envisioned and what was ultimately lost.

Regards from Dubai

Abraham Joy

Mo Gawdat

But on second thought, I think it should fit towards the end of the first era. Such a good point. I'll move it up to be part of this post.

Mo Gawdat

I could not agree more. I mention this in my next post when I talk about the second era. I've always referred to the hyperlink as humanity's best invention. That is until AI.

Abraham Joy

Thank you, Mo – I really appreciate your response and completely resonate with your view on hyperlinks as one of humanity’s greatest inventions (until AI, indeed!).

Interestingly, Ted Nelson himself has described the reduction of his original concept of two-way hyperlinks to today’s one-way “jump-links” (as he calls them) as one of his greatest regrets. His vision was never just about navigation, but about transparent, verifiable connections that preserve context and authorship across all documents.

Wouldn’t it be wonderful to mention this in Alive—to give credit where it’s due, and perhaps help set the record straight? A small nod to Nelson’s two-way hyperlinks could be a meaningful tribute to what might have been—and maybe still could be.

Randall Fields

"2025 to 2037". My first reaction: "I'm gonna be a dead old man!" (see Stir Crazy, Richard Pryor, Gene Wilder, 1980). I'm just wondering if the accelerated breakdown of rational order in the last month has accelerated that timeline.
