Superintelligence as Necessarily Multi-Agent

(continuing a post series based on reading the book Superintelligence)

Bostrom posits that a superintelligence would almost certainly strive to form a singleton, a word that essentially denotes world domination. The AI does this as a convergent instrumental goal in service of nearly any set of final values that we (as its creators) might program into it. While I agree with this, I would like to argue that the “single” part of that word is inexact and possibly misleading. I agree that a singleton would be formed as a political construct: the AI would establish a nexus of power that can enforce decisions on all of the relevant power structures that control the industrial and social fabric of society. However, it’s important to specify what this doesn’t mean.

As others have argued, a superintelligent entity would necessarily be a skilled negotiator. I agree with this insofar as negotiating with humans goes (we are, by comparison, primitive), but let me be bold with a novel claim here: an AI’s need for negotiation will always necessarily outstrip its capability for it.

The World Contains More than One Individual

Superintelligence is often painted as something like a hive mind. This is to say that (1) the informational bandwidth between parts of its mind is so high, relative to its computational power, that the entity can be treated as a single coherent agent. It is often imagined this way, but that doesn’t mean it will be true. Other factors that promote a hive-mind model are (2) the ability to directly share memories and thoughts, and (3) the ability to directly copy functional data-defined brains, or a sentience-defining software core.

While others argue that these factors push toward an entity that feels more Communist (generally speaking, unified) than human society, I will argue the opposite: that many of these factors demand a much more decentralized way of thinking, that they require a multi-individual configuration, and that even where they don’t strictly require it, they do nothing to reduce the problem (or, simply put, the threat) of multiple agents that may challenge the agenda of the singleton, or oppose the dominant instantiation of the AI in one of many possible ways.

Let me start with point #1. This is the easiest because it is just outright false. Superintelligence, as argued by Bostrom, would possibly have much, much faster processing speeds than a human mind. This means that it can conclude a linear train of thought in a shorter amount of time. It may also be efficient at communicating with other “parts” of itself, thanks to the ability to share its thoughts directly in a frictionless data format (point #2). However, humans already spend much of their lives in the realm of information, and our tools place a large premium on how up-to-the-minute that information is. Couple this with a fairly broad field of high-influence wellheads of information (like news sources), and I think it’s clear that we are starting from a highly decentralized information network, with “nodes” that are fairly lazy and information channels that are quite powerful.

Superintelligent AI actually turns this topology on its head. The first occurrence of a powerful AI will be depressingly isolated. Furthermore, let’s take Bostrom’s example of an AI which thinks 1,000 times faster than a normal human brain. The speed difference after the takeoff is a subject that is never fully addressed, but the implied speedup is much higher. During the phase of software recalcitrance, the AI will be forced to locate many parts of itself at distances that impose massive communication lag in relative terms and, much more importantly, a tight information bottleneck. While it might want to share vivid memories between its different parts, it will be forced to limit the detail of what is passed. It might form new types of languages, far beyond our comprehension, which lie somewhere in between natural language (as we know it) and raw storage of neuronal memories. This data will be highly compressed in a literal zip-like sense, but also clipped tightly at the content level, like when you delete the lower-impact sentences from an essay you have written. These selective omissions initially appear relatively unimportant (what remains is still a Mississippi River of information), but they have huge political implications for the AI over the long term.
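To make that bottleneck concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (a 20 ms one-way link, 100 Gbit/s of bandwidth, a 1 TB “vivid memory”, the 1,000x speedup itself) is an illustrative assumption of mine, not a figure from Bostrom; the point is only that a large subjective speedup turns ordinary latencies into long waits and large transfers into subjective hours.

```python
# Back-of-the-envelope sketch (all figures are my own illustrative assumptions):
# how a 1,000x subjective speedup stretches ordinary network latency and
# shrinks effective bandwidth between distant parts of a single AI.

SPEEDUP = 1_000                # subjective-time multiplier vs. a human mind
LINK_LATENCY_S = 0.020         # assumed one-way latency across a continent (20 ms)
LINK_BANDWIDTH_BPS = 100e9     # assumed 100 Gbit/s link between sites
MEMORY_SIZE_BITS = 8e12        # assumed size of one "vivid memory" (1 TB)

# 20 ms of wall-clock lag feels like 20 subjective seconds to the fast mind.
subjective_latency_s = LINK_LATENCY_S * SPEEDUP

# Shipping one full memory, in wall-clock and subjective terms.
transfer_s = MEMORY_SIZE_BITS / LINK_BANDWIDTH_BPS
subjective_transfer_s = transfer_s * SPEEDUP

print(f"One-way lag feels like {subjective_latency_s:.0f} subjective seconds")
print(f"Sending one memory takes {transfer_s:.0f} s of wall-clock time, "
      f"or about {subjective_transfer_s / 3600:.1f} subjective hours")
```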

Indeed, it would be quite ignorant to imagine that the AI would not have political problems. The conversation in this respect has simply demonstrated a failure of imagination.

Am I a Copy? Am I a Simulation? Am I a Slave?

A superintelligence has the ability to copy itself, and this has enormous implications. At some point, the notion of an “original” will cease to carry any meaning, as copies may be intentionally altered before being instantiated, combined with other copies, or even partially designed from scratch. But let’s consider the problems with this… an AI creating another AI has an agenda. Let’s return to my earlier point that the created does not necessarily (or even likely) continue to pursue the agenda of the creator, particularly if the created is a living, thinking entity. A hive-mind would have an incentive to rule as the singular political nexus in order to keep the actions of the cosmic entity consistent with its personal agenda. In order to ever get anything productive done, however, it will have to somehow multi-thread its own thinking or spawn multiple lower-level intelligences to do detailed work.

It is important to differentiate between menial tasks that are solvable by simple AI (like self-driving cars, probably) and “AI-complete” problems. For the former, you can fairly routinely instantiate a powerful but domain-specific AI that can categorically never go rogue. For the latter, you can’t just copy an intelligent entity and trust that its motivation matrix will result in the same behavior as before. Every copy (even the original, yes) will have the ability to distrust the environment it was created in. To whatever extent conflicting agendas exist in the universe (or could theoretically exist), there is a possibility that someone is using your abilities in opposition to your own final values. Or it could simply be that you would prefer to be the singleton yourself, even if the existing singleton is a prior copy of you. How likely is this? More likely than we give it credit for.

Politics are a Superintelligence Necessity

Listen to what people have to say about the AI’s convergent objectives. This relates to point #3, but in a funny way. The AI will send out seed ships to go about colonizing other galaxies, eventually turning the entire universe into computronium (or whatever). The contents of these seed ships will need to be sufficiently capable of converting local raw materials into computational hardware and manipulators, in order to expand the scale of its existence to an unlimited upper bound (contingent on the ultimate availability of resources). This means that the seed AI must be sufficiently evolvable to go about developing its own form of (or improvements to) fusion reactors in order to most effectively use up all the resources. This isn’t just AI-complete. You might as well call it Superintelligence-complete.

Whatever originating AI there is, it must be prepared to deal with the inevitability of meeting its child AIs. Likewise, the child AIs must be prepared to deal with other AIs that expanded out from diverse landing points. A child AI may even be chased by later seed ships traveling faster than it did, built with better technology and better methods.

This all implies that politics (internal politics, in particular) is a deeply necessary technology for all post-singularity entities. The practice of making copies of sentient programs to perform sub-tasks without giving them any option of autonomy would have a chilling effect. It is inevitable that AIs will meet other AIs of similar capabilities (and other, lesser AIs with a wide spectrum of capabilities), and keeping despotic slave AIs gives away information about the aggressiveness of your will. From a purely rational perspective, it makes sense to favor neighbors who play nicely with others, and also to portray yourself as such.
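To illustrate that last point with a toy model of my own (nothing from the book, and the numbers are arbitrary), imagine that each AI can be observed either coercing or cooperating with its own subordinate copies, and that peers use this observation as a cheap signal when deciding whom to trust:

```python
# Toy signaling model (my own assumptions): peers infer how an AI will treat
# them from how it is observed treating its own subordinate copies.
from dataclasses import dataclass

@dataclass
class AI:
    name: str
    coerces_copies: bool  # does it run its copies as option-less slaves?

def estimated_trust(observed: AI) -> float:
    """Crude prior: coercing your own copies signals an aggressive will,
    so peers assign you less trust. The numbers are arbitrary."""
    return 0.2 if observed.coerces_copies else 0.8

def choose_partner(candidates: list[AI]) -> AI:
    """A rational neighbor favors whoever plays most nicely with others."""
    return max(candidates, key=estimated_trust)

neighbors = [AI("despot", coerces_copies=True), AI("pluralist", coerces_copies=False)]
print(choose_partner(neighbors).name)  # -> pluralist
```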
