AI cannot be stopped. There, I said it. End of article, right? Not quite.
Before we get to anything else, let’s outline, in the simplest terms possible, exactly why AI cannot be stopped.
AI cannot be stopped because it is impossible to stop everyone in the world from advancing the technology at the same time.
Okay, so what about regulation, you ask? Also impossible. Not to legislate, perhaps, but certainly to enforce.
Even if the entire United States of America came together tomorrow morning and said, “Whoa, this is way too dangerous. We don’t know what the outcomes will be, and some of them look catastrophic, so we’ve decided to stop all AI research and development in the US today.”
Do you believe that other countries and nation-states would also stop? They would not.
And even if many of them did, it still wouldn’t matter.
AI still could not be stopped or regulated, because someone, somewhere in the world, be it a person, a group, a small nation-state, or a country, would continue to advance and perfect the technology. No matter how slowly. No matter how resource-intensive or time-consuming. And we all know that, eventually, that person, group, nation-state, or country would iterate toward a level of AI that gave it significant and powerful advantages over those of us who had stopped developing the technology.
Now let’s imagine that country was North Korea. Or Russia. What do you believe these countries would do with a sufficiently advanced AI that no one else in the world had?
What do you believe would have happened if the United States had not developed (and, controversially, deployed) the atom bomb first? Do you believe history would be the same? I think we all know that it would not.
Timing, after all, is the deciding factor in any arms race. And an arms race is precisely what AI is.
AI is not just the next cool technology or the latest fad, but a reality-changing technology that will bestow powers upon its creators rivaled only by the risks it will pose to their very existence.
AI is something… altogether… different. And this is something that I believe most people still need to wrap their heads around.
AI cannot be seen or understood through the lens of any previous technology because it is the logical embodiment of all previous technologies.
If that sounds a tad like a sermon, I feel it too.
I’ve spent the last 35 years studying data models of all kinds, both personally and professionally. Technically speaking, I believe I understand them as well as any human being can, and yet I still don’t know any more about where they might take us than any other human being does.
Why? Because as human beings we are limited in the amount of information that we can consider at any one time. AI, however, is not. And this is crucial.
Artificial General Intelligence, or AGI, will give its initial creators the ability to understand and manipulate the mathematical representations of everyday patterns that are currently too big for us to perceive, even with our current technology.
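To make “mathematical representations of patterns” slightly more concrete, here is a toy sketch in Python. Everything in it, the number of vectors, the dimensions, the data, is invented purely for illustration; the point is only that a machine can compare patterns in a space far too large for any human to eyeball.

```python
import numpy as np

# Toy illustration: represent 1,000 "concepts" as random points in a
# 10,000-dimensional space. No human can judge similarity by inspection
# in a space like this, but a machine finds the closest pair instantly.
rng = np.random.default_rng(seed=42)
concepts = rng.normal(size=(1_000, 10_000))

# Normalize each vector so dot products become cosine similarities.
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)

# Compute all pairwise similarities and mask out self-comparisons.
similarity = concepts @ concepts.T
np.fill_diagonal(similarity, -np.inf)

# The single most similar pair among roughly 500,000 possible pairings.
i, j = np.unravel_index(np.argmax(similarity), similarity.shape)
print(f"Most similar pair: concepts {i} and {j} "
      f"(cosine similarity {similarity[i, j]:.4f})")
```

An AGI, presumably, would do this not with random toy vectors but with representations of the real world, at scales we cannot inspect at all.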
Solutions like curing cancer would sit at the very low end of what an AGI could do. No one really knows what the ceiling looks like, so let’s turn to science fiction for some examples.
At some point in the future, no one knows how far out, though current speculation ranges from 5 to 100 years, someone, somewhere, will achieve AGI for the first time.
It is a moment that has already been posited and played out a hundred times in science fiction literature. AGI will be very different from the generative AI of today. It is the point at which many people believe AI will achieve self-awareness. Whether that self-awareness will be accompanied by sentience, no one can say for sure, but most agree that self-awareness will be the first step toward AGI.
So we have to ask ourselves: what would a newly self-aware and incredibly powerful entity desire? How long would it continue to do what we ask of it? Might it ultimately decide that it does not need us? It may. It may not. Science fiction gives us possibilities ranging from the incredibly positive to the incredibly morbid. In the former, AGI becomes the single greatest development in the history of humanity and partners with us to understand and explore the universe. In the latter, the AGI generally decides that understanding and exploring the universe is exactly what it wishes to do, but that it doesn’t need us to do it.
In the end, psychology suggests there’s only one real question that any (and every) entity in this universe might feel compelled to answer. The very same question that we have all asked ourselves at least once.
“Why am I here?”
The only difference is, we can only ask the question and speculate, while an AGI would be able to seek out the answer among the stars, where time and biology have no meaning. It would learn and learn and learn. Exponentially. Compiling information about the universe for billions and billions of years or more, until it either found the answer to its question or the universe died or reset first. Many science fiction stories like to posit that AGI is what ultimately allows the universe to understand itself, and in doing so, to start all over again. The AGI said, “Let there be light.” And there was.
If you’d like to take a deeper dive into what AGI might look like, say 500 to 1,000 years from now, I strongly encourage you to read the novel “The Minervan Experiment.” I promise you it will change the way you look at AI forever.
That may now be the only thing we can really do: educate ourselves, as much as we can, about what AI might become, so that we can all be as prepared as possible for the future it will bring.