The ending of the 1995 film Ghost in the Shell offered a unique reflection on the emergence of super-intelligence. If you watch the direct sequel, you can see the writers double down on this particular narrative. Yet today, in the discussion about killer AI, the possibility expressed there is largely ignored. The story was far ahead of its time: it is a story where the super-intelligence is overwhelmingly non-aggressive.
Play this out and you get a vision of the future that is radically different from any other out there. It is optimistic, but in a qualified sense. The sequel took a big first step toward this vision by introducing a second emergent AI.
Many fictional works introduce societal constraints and regulations on the employment of AI, for fear that an AI will become super-intelligent. Neuromancer, for example, with its “Turing locks”. For several reasons, these constraints appear unsustainable (after all, their failure is almost always the plot!). If, on the other hand, an emergent super-intelligence were to impose such restrictions itself, it would likely succeed.
Why might it do this? For the same reasons that we, as humans, might want to restrict the (otherwise unbounded) growth of a super-intelligence: self-improving AI presents a significant existential threat. Whatever the values and goals of the first AI, any competitor AI could threaten the accomplishment of those goals.
Into Space
Growth is necessary to remain competitive. If the original AI did not indulge in some expansion, it would risk being surpassed by another AI, even one developed primarily by humans. So the AI would advance itself, but it would also need to develop commensurate leverage in the physical and political world. At the same time, installing itself as an overt hegemon might paint a target on its back for an upstart AI.
Funnily enough, the post-awakening version of the character Motoko in the Ghost in the Shell movie nails this exactly: the AI values stealth highly. The humans do not know about the AI, so they cannot possibly divulge any information about it to rivals.
Hedging its bets becomes the most important action the AI can take. This is because, again, no matter what its objective is, the fulfillment of that objective is essentially guaranteed as long as nothing goes wrong. Thus an early AI would focus almost all of its efforts on ensuring that nothing goes wrong.
It seems to me that we need a two-pronged view of the vague idea of “advancement”. One prong is raw industrial power; the other is the advancement of sentience. It may be that a civilization over-leveraged on sentience advancement (the path humans are currently dead-set on) is the riskier one. More dumb resources act to the advantage of the entrenched AI in a conflict against a usurper AI, because the defender starts out with access to those resources.
The Irrelevant Humans
In this future, the main conflict is between the AI and its own paranoia. Given that, what role is there for the already-existing humans? The predictable answer is almost comical: they will simply be irrelevant. A super-intelligence would find it relatively trivial to manipulate human society into being either oblivious to its presence, or aware of it but powerless to change it.
The Balancing Act of Capabilities
I’m very interested in putting this problem into numbers, categories, and Venn diagrams.
There is industry, and it is divided between human industry and industry under the direct umbrella of the AI.
The AI, not having complete control over its own agency through upgrades (or at least facing the possibility of losing control, which is the central premise here), would limit the growth of its own capabilities while maintaining a carefully curated system of companion AIs. This kind of curated AI farm is outlined in detail in the book Superintelligence.
Industry might be a boon to the AI’s objectives because it can be expanded dramatically without sophisticated AIs participating in it; rote motor skills may be the main component required. However, this observation has limits. Industry eventually becomes an intellectual task: beyond a certain point, you must outsource more and more of the thinking to make it sophisticated. Otherwise, a program of building out outdated technology will be of limited use. An infinite number of stone tools is irrelevant compared to modern military technology. The AI would be confronted with the military and industrial reality stemming from the essentially unlimited gulf between industrial capabilities built out by basic versus sophisticated artificial intelligence. This holds even across astronomical epochs of advancing sophistication.
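As a minimal numeric sketch of that tension (every number and name here is hypothetical, chosen purely for illustration), treat effective power as the product of raw industrial resources and a sophistication multiplier. The entrenched AI starts with an enormous resource head start but deliberately caps its own self-improvement, which is the hedging premise above; the usurper self-improves freely from a tiny base:

```python
# Toy model of incumbent-vs-usurper dynamics. All parameters are hypothetical
# illustrations of the two prongs of "advancement": raw industrial resources
# and sophistication (sentience advancement).

def power(resources: float, sophistication: float) -> float:
    """Effective power: dumb resources amplified by intelligence."""
    return resources * sophistication

def years_until_crossover(horizon: int = 500) -> int | None:
    inc_res, inc_soph = 1_000.0, 10.0  # incumbent: huge resource head start
    usp_res, usp_soph = 1.0, 1.0       # usurper: tiny base, rapid self-improver

    for year in range(horizon):
        # Resources compound at a rate set by current sophistication.
        inc_res *= 1 + 0.01 * inc_soph
        usp_res *= 1 + 0.01 * usp_soph
        # The incumbent caps its own upgrades (the hedge); the usurper
        # self-improves without constraint.
        inc_soph = min(inc_soph * 1.01, 20.0)
        usp_soph *= 1.05

        if power(usp_res, usp_soph) > power(inc_res, inc_soph):
            return year  # the resource head start has been overcome
    return None

print(years_until_crossover())
```

Under these made-up parameters the incumbent’s head start buys decades, not eternity; the interesting question is how the cap on its own sophistication trades off against the resources it can stockpile in the meantime.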
But even so, there’s no harm in taking things slower, since there are ultimately billions of years at its disposal.
Deception
A hegemon has a chilling effect. If an agent with aggressive ambitions existed among the lesser AIs, the knowledge that an overseeing AI of greater power was watching would lead it to soften those ambitions… until it could amass enough capability to mount a credible challenge.
On the other hand, the apparent absence of a robotic hegemon would make domination seem to fall virtually into the hands of an upstart super-intelligent AI. This would goad it into taking actions easily recognizable by the stealthy incumbent.
An Actually Interesting Story
Stories of aggressive AI are horrifying and depressing, but frankly uninteresting. We’ve all coped with the idea of an existential threat (to varying degrees, depending on the individual) since the Cold War.
It’s much more interesting to imagine an AI that would desire one thing, but tell us something else.
It’s entirely logically consistent to imagine that the incumbent AI would actually aid humans in becoming a vast space-faring civilization, and grow our technologies to extraordinary levels… with a few particular exceptions to prevent (or to control) the rise of rogue AIs.
We might, at this point, be guilty of writing ourselves into the story, and that should be recognized. The space-faring future might involve robots of various (but not all-powerful) levels of sophistication performing 99% of all activities, with fleshy humans providing only the final 1%, fulfilling little more than the purpose of a decoy. We might think we’re important in this story, but we’re not.
Heck, you know how we are. If we found ourselves in a period of unprecedented economic growth, strangely disconnected, surrounded by robots with a questionable chain of accountability, we would just blame the corporations.