An Essay on AI and Incumbency

Alan
Dec 23, 2017


This is a piece of futurology, the study of the future.

I will return here to my recurring theme that most predictions are worthless. In the case of Artificial General Intelligence, and its corollary Superintelligence, the public discussion has large blind spots, in spite of a handful of very good, often-repeated points. Good futurology draws on a certain set of tools that are used again and again. Most of the public discussion on Superintelligence can be categorized as “OMG, what if A happens?”, where A is a particular scenario. This feeds popular media sensationalism whenever A could spell the end of humankind. Predictably, the reactionary response is to advocate actions that could lead to scenario B, an alternative where we don’t all die. Thus, we should work very hard to bring about scenario B.

It is less often asked, “what if B?” (what if the safe scenario happens?).

What if we solve Global Warming? Sure, we pat ourselves on the back. But what happens after that? History keeps on moving, always at the same pace; nothing will change that. If one generation averts a major cataclysm, does the next generation call them the “Greatest Generation” and go about their lives without concern for subsequent crises? Maybe for a short time there will be no crises, but new, truly existential threats will still be possible.

Net Neutrality is an odd parallel that invites comparison to these Superintelligence scenarios. The problems with repealing Net Neutrality are really problems with corporatism in general, and they aren’t even apparent on the surface. A company controls a means of access, and the logical consequence is that it will eventually use that control to enrich itself. Even if you accept that outcome, it isn’t ethically heinous on its face. Rent-seeking is quite common throughout the economy; it would impose a drag equal to the gross amount of rent being levied, but in a first-level analysis the detriment would end there. Of course, the true problem is that innovation which otherwise would have happened under free data flow is stifled.

Two Scenarios for Controlled Superintelligence

There are two scenarios, described in two different books, that I want to use as general archetypes here. The first comes from Neuromancer, where “Turing” controls are in place to prevent people from building an AI that can exceed human intelligence in the first place.

The second scenario comes from Nick Bostrom: we intentionally create superintelligences and place them in an organizational structure that controls the modification of subsequent AIs.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742

There is one real gem from that book that I have returned to many times: the idea of a community of superintelligent AIs, each devoting its life’s work to safely engineering the next, more sophisticated level of AI. The proposition becomes truly fascinating once you start to dig into the implications.

Singleton Intelligence Explosion

The Neuromancer story is that a single superintelligent entity enters a cascading cycle of self-improvement, such that no other living being can compare to its intellectual capabilities.

The power dynamics of this situation are relatively straightforward: we have introduced the Hobbesian Leviathan. This one individual has control but, for whatever internal motivation, is primarily concerned with seeking even further god-like computational power.

There are quite a few arguments for why this might happen, and they are very often re-hashed in various forms.

A Cabal of Control

But what about this other situation where control is possible? Typically our view of superintelligence is that it would be demon-like in its will to subvert our controls. This is, in almost all stories, because it is inherently self-motivated to do so.

A cabal of mildly superintelligent AIs might find the opposite motivation. While uncontrolled advancement might have a high payoff, each individual is only one member of a council of many participants. As such, even the most capable member of the council is likely unable to outwit its colleagues in a bid for takeover.

It’s true that their motivations will still probably diverge from those of mankind. However, they may diverge in the other direction. They might slow-roll the development of next-generation AI, because the emergence of their successors would spell the end of their own generation. In fact, you can paint a pretty clear picture of how they might easily convince humans never to allow the next generation to peacefully take over power. If we create AI in our image, we should expect it to reach for fear-mongering, a common political tactic.

Threats to their power, however, wouldn’t end there. Many AI labs would be self-sufficient enough to create a super-powerful AI all on their own, and every such avenue would be a threat. If the council argued that the threat was real, humans would have no choice but to believe them, and would likely support their efforts at control very enthusiastically. I’m sure that human governing bodies would do this, even at the cost of giving up additional freedoms.

So there you have it. Turing controls might never work if imposed by humans, but if imposed by superintelligent AI, they might. This is particularly true because such controls may align with the AIs’ own self-interest.
