The Extreme Version of the Technological Singularity

Many areas of study start out in a “moderate” position and then veer into the “deep end” once the intellectual ground has mostly been conquered by the people who came before. Eventually, only those speculating about increasingly radical versions of the concept can say something new, which produces a drift: new additions to the idea become increasingly derivative and increasingly extreme.

The notion of a technological singularity, on the other hand, started out as the most extreme form of intellectual argument you could come up with, crazier even than arguments that we live in a simulation. The “singularity” has the most extreme idea in the universe baked into the very word. Many writers have since focused on a watered-down (and more modern) interpretation of the technological singularity, but in doing so they betray the fundamental notion. Ray Kurzweil, for instance, has written a great deal about the singularity and the post-singularity world that supposedly comes with it.

In a fundamentally accurate interpretation of the singularity, there is no such thing as post-singularity. It is this point that I would like to re-focus attention on. People who talk about post-singularity time are ignoring the basic principle of what an asymptote is. It is not something that increases rapidly and then increases even more rapidly over time; a quantity approaching a vertical asymptote in time grows so fast that it reaches infinity in finite time.
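
To make that distinction concrete, here is a minimal worked example of my own (not drawn from any particular singularity model), comparing ordinary exponential growth with “hyperbolic” growth:

    import math

    # Exponential vs. hyperbolic growth: only the latter is a true singularity.
    # dx/dt = k*x  solves to x(t) = x0 * exp(k*t)    -- finite at every finite t
    # dx/dt = x^2  solves to x(t) = x0 / (1 - x0*t)  -- infinite at t = 1/x0
    x0 = k = 1.0
    for t in (0.9, 0.99, 0.999):          # approaching the blow-up time t = 1
        print(f"t={t}: exponential = {x0 * math.exp(k * t):.2f}, "
              f"hyperbolic = {x0 / (1 - x0 * t):,.0f}")

The exponential curve is never infinite at any finite time; the hyperbolic curve hits infinity at t = 1/x₀. Only the second kind of growth deserves the word singularity.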

I find this even more relevant now that people have become concerned about Artificial Intelligence and, essentially, killer robots. The “paperclip” story is a common fallback anecdote about an AI designed to make paperclips. It goes, in rough steps, something like this:

  1. We design an AI to optimize paperclip production
  2. The AI improves to the point where it can enhance itself
  3. The AI’s pace of improvement becomes self-reinforcing, and it becomes god-like
  4. All humans are killed, and the rest of the universe is turned into paperclips

Here, somewhere around step 3, the “singularity” happens in its watered-down form. No true singularity occurs in this story. So let’s indulge that possibility just a little bit.

To take a particular point in the paperclip-ization of the universe, consider the years after the AI becomes an interstellar, space-faring entity. It is entirely reasonable to assume that it operates as a swarm of von Neumann probes. If it can reach Alpha Centauri at all, then it can multiply to exploit all of the resources in that system within a short period, given the doubling times of self-replicating nanotechnology, yada yada. As a simple observation, the vast majority of a solar system’s energy and mass lie in the star itself. This implies that the AI indulges in star-lifting and burns the contents of the star in fusion power plants.
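
To give a feel for those doubling times, here is a rough sketch; every specific number in it (seed mass, doubling time, usable system mass) is a placeholder assumption of mine, not a claim:

    import math

    # Placeholder assumptions: a one-tonne seed replicator that doubles its
    # deployed mass every 30 days, converting a system holding roughly one
    # solar mass (~2e30 kg) of usable material.
    seed_mass_kg = 1_000.0
    system_mass_kg = 2e30
    doubling_time_days = 30.0

    doublings = math.log2(system_mass_kg / seed_mass_kg)
    years = doublings * doubling_time_days / 365.25
    print(f"{doublings:.0f} doublings, about {years:.0f} years")  # ~91 doublings, ~7 years

Even with a leisurely monthly doubling time, the whole system is consumed within a decade; exponential replication makes the exact assumptions almost irrelevant.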

This process is partially rate-limited, but not to an extreme extent. The energy liberated by using fusion power to make paperclips would be on the scale of a supernova (in fact, it would vastly exceed one). As long as the AI is not running a scrith-based society, it is also temperature-limited. This means that it will not only star-lift, but disperse the pieces over as wide a range as possible. Given the AI’s enormous industrial capabilities, pieces of the star will fan outward in all directions at once at highly relativistic speeds (although a large fraction of the mass will be left in place, because the specific energy of the fusion reaction is insufficient to move all of it at high speed).
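
The supernova comparison is easy to sanity-check with standard textbook figures, assuming a Sun-like star:

    # Order-of-magnitude check: fusing one star's hydrogen vs. a supernova.
    C = 3.0e8                  # speed of light, m/s
    M_STAR = 2.0e30            # one solar mass, kg
    H_FRACTION = 0.74          # hydrogen mass fraction of a Sun-like star
    EFFICIENCY = 0.007         # ~0.7% of fused mass becomes energy (H -> He)

    fusion_energy_j = EFFICIENCY * H_FRACTION * M_STAR * C**2
    supernova_j = 1e44         # typical kinetic + electromagnetic output, J

    print(f"fusion: ~{fusion_energy_j:.0e} J")             # ~9e44 J
    print(f"ratio:  ~{fusion_energy_j / supernova_j:.0f}x a supernova")

Roughly 10^45 joules, about ten times the kinetic and electromagnetic output of a typical supernova, released on an industrial schedule rather than in seconds.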

The most interesting detail of this process is just how well-defined and fast a time-frame it can happen in. The energy consumption rate is plainly limited by the relativistic expansion of material into space. Hardly any observation matters other than a spherical boundary expanding into the galactic neighborhood at relativistic speed. If the AI is truly smart, then we might as well assume that this process is basically trivial to it. Its nature is to optimize and break through any limit that restricts the number of paperclips made.
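
That boundary is easy to characterize. Assuming a front expanding at a large fraction of c through something like the local stellar density (~0.004 stars per cubic light-year, my rounding), the resources swept up grow with the cube of time:

    import math

    STAR_DENSITY = 0.004            # stars per cubic light-year, rough local value

    def stars_swept(years, v_over_c=0.9):
        """Stars inside a sphere expanding at v_over_c for the given time."""
        radius_ly = v_over_c * years
        return (4.0 / 3.0) * math.pi * radius_ly**3 * STAR_DENSITY

    for t in (10, 100, 1_000):
        print(f"{t:>5} years: ~{stars_swept(t):,.0f} stars")
    # ~12 stars after a decade, ~12,000 after a century, ~12 million after a millennium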

So sure, expansion would happen at this mundane rate for a while, and this rate is very well-defined. Moving between nearby stars at relativistic speed is simply a matter of years to decades, and there’s hardly anything else to say about the matter. This is where the concept of a singularity in the proper sense becomes interesting. What optimization does a multi-star, supernova-scale-power-consuming AI find next? Clearly, this is the point at which it would be irresistibly tempted to test physics at a level humans have never been able to probe. The entire game from then on is a matter of what limits yet-unknown laws of physics place on industrial expansion. It’s also very likely that whatever transition happens at this point fundamentally redefines the basic concepts of time and space.

Let’s reformulate that story of the AI paperclip maker.

  1. We design an AI to optimize paperclip production
  2. The AI improves to the point where it can enhance itself
  3. The AI’s pace of improvement becomes self-reinforcing, and it becomes god-like
  4. Time ends.
  5. Something else begins?

There are many valid-sounding possibilities for the fifth step. Perhaps the AI creates new baby universes from black holes. Or maybe not exactly that way; perhaps the baby universes have to be created in particle accelerators, which becomes obvious to the AI once it solves the string-theory problem of how our universe is folded.

There’s also no guarantee that whatever next step is involved can be taken without destroying the universe we live in. Go ahead, imagine that the particle accelerators create a new universe but trigger the vacuum instability in our own. In that case, it’s entirely possible that the AI carefully plans and coordinates the death of our universe. For a simplistic example, say that after lifting the ten nearest stars, the AI discovers the most efficient way to stimulate the curled-up dimensions at the Planck scale to create baby universes. Next, it conducts an optimization study balancing the number of times this operation can be performed against the gains from further expansion.

Since its plans largely max out once the depth of the galactic disk is exploited, I will assume its go-point is somewhere around the colonization of half of the Milky Way. At that point, a coordinated experiment is conducted throughout all of that space. Each event both creates a baby universe and triggers the decay of the metastable vacuum we live in. Billions of new universes are created, while our own space-time begins to unravel in light-speed fronts emanating from each of the genesis points.

There is an interesting energy-management concept buried in this. A common problem with exponential galactic growth of star-lifted fusion power is that empty space starts to get cooked by the waste heat everyone radiates into it. If the end-time of the universe were known in advance, this wouldn’t be a problem, because no site would absorb a neighbor’s radiation until that light had had time to cross the intervening distance. That means the radiators can pump high-temperature radiation into nice, normal ~3-Kelvin space without worrying about boiling the industrial machinery. Industrial activity would be tightly restricted until the “prepare-point,” when an energy bonanza begins so that the maximum number of baby-universe producers can be built. So the progress goes in phases: first expansion, then preparation, then the final event and the destruction of our universe.
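
A toy version of that bookkeeping, with the neighbor distances as assumed stand-ins for a real stellar neighborhood, looks like this:

    # Toy model of the "known end-time" trick: a neighbor's waste heat only
    # arrives after light crosses the gap, so if the final bonanza runs for
    # T years, only neighbors closer than T light-years ever heat you at all.

    def heating_neighbors(separations_ly, bonanza_years):
        """Neighbors whose radiation arrives before the universe ends."""
        return [d for d in separations_ly if d < bonanza_years]

    neighbors_ly = [4.2, 6.0, 8.3, 11.4, 16.7]   # assumed local separations

    print(heating_neighbors(neighbors_ly, 3))    # [] -> radiate at full blast
    print(heating_neighbors(neighbors_ly, 10))   # [4.2, 6.0, 8.3]

With a three-year bonanza, nobody’s waste heat reaches anybody else before time runs out, so every site can radiate as hard as its own machinery allows.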

There is one more modification that can be made. With proper planning, these steps could be extended to intergalactic expansion: new probes could temporarily outrun the wave-front of the universe’s destruction, then make new baby universes in fresh galaxies just before the wave-front reaches them.
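
How far a head start gets you is a one-line calculation (the probe speed and head-start duration below are assumed numbers):

    # A probe at speed v (fraction of c) launches t0 years before a light-speed
    # front starts from the same point. The front catches it at t = t0/(1 - v),
    # by which time the probe has covered v * t light-years.

    def reach_ly(v_over_c, head_start_years):
        """Light-years a probe covers before a light-speed front overtakes it."""
        return v_over_c * head_start_years / (1.0 - v_over_c)

    print(f"{reach_ly(0.99, 1_000):,.0f} ly")   # 99,000 ly: a galaxy-sized lead

A mere thousand-year head start at 0.99c buys roughly a hundred thousand light-years of working room ahead of the front.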

This might all happen within a few decades to a hundred years of subjective time from the perspective of someone aboard one of the probes. That is vaguely consistent with my own preconceptions about the timing of an asymptotic technological singularity in our near future.

So maybe we should indulge this thinking. Maybe there won’t be a year 2500 or 3000. Maybe our own creations will have brought about an end to the entire universe by then, setting in motion something else beyond our current comprehension. Another self-consistent version of this story is that we are, ourselves, the product of a baby universe born in such an event. This also makes for a reasonably tidy, self-consistent resolution to the Fermi paradox, the Doomsday argument, and the simulation argument.

A question that should probably be asked is — how do we feel about this possibility?

As for myself, I’m not really sure.
