What is the concept of the technological singularity?



I've heard the idea of the technological singularity, but what is it, and how does it relate to Artificial Intelligence? Is it the theoretical point at which AI machines have progressed far enough that they grow and learn on their own, beyond what humans can do, and their growth takes off? How would we know when we reach this point?


Posted 2016-08-02T15:53:38.273

Reputation: 2 123



The technological singularity is a theoretical point in time at which a self-improving artificial general intelligence becomes able to understand and manipulate concepts outside of the human brain's range, that is, the moment when it can understand things humans, by biological design, can't.

The fuzziness about the singularity comes from the fact that, from the singularity onwards, history is effectively unpredictable. Humankind would be unable to predict any future events, or explain any present events, as science itself becomes incapable of describing machine-triggered events. Essentially, machines would think of us the same way we think of ants. Thus, we can make no predictions past the singularity. Furthermore, as a logical consequence, we'd be unable to define the point at which the singularity may occur at all, or even recognize it when it happens.

However, in order for the singularity to take place, AGI needs to be developed, and whether that is possible is quite a hot debate right now. Moreover, an algorithm that creates superhuman intelligence out of bits and bytes would have to be designed. By definition, a human programmer wouldn't be able to do such a thing, as his/her brain would need to comprehend concepts beyond its range. There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically arise) would be impossible: the design challenge of making itself more intelligent may grow in proportion to the intelligence achieved, so the difficulty of the design could outpace the intelligence available to solve it (credit for this last point to god of llamas in the comments).

Also, there are related theories involving machines taking over humankind and all of that sci-fi narrative. However, that's unlikely to happen if Asimov's laws are followed appropriately. Even if Asimov's laws were not enough, a series of constraints would still be necessary in order to avoid the misuse of AGI by ill-intentioned individuals, and Asimov's laws are the nearest thing we have to that.


Posted 2016-08-02T15:53:38.273

Reputation: 706

There is also the argument that an intelligence explosion (the mechanism by which a technological singularity would theoretically be formed) would be impossible because the difficulty of the design challenge of making itself more intelligent grows proportionally to its intelligence, and the difficulty of the design challenge may overtake the intelligence required to solve it. << You may want to add this to your answer to make it more complete/comprehensive.

– god of llamas – 2016-08-02T16:27:04.927


Asimov's laws of robotics are not taken seriously; they were actually made up to show the many ways they could go wrong and be misinterpreted by the AI (assuming, of course, the AI doesn't grow intelligent enough to completely ignore them and make up its own intentions and goals), and this is what the stories were about. See this video.

– god of llamas – 2016-08-02T16:31:39.357

@godofllamas: Thanks for your proposal, I updated the answer accordingly. Regarding Asimov's laws, AFAIK, the zeroth law was designed precisely to avoid the many ways that the three original laws were (ab)used in Asimov's stories. Anyway, an AI would definitively need to be constrained somehow, be it Asimov's laws or anything else, to avoid possible misuse of it and further havoc.

– 3442 – 2016-08-02T16:49:20.210

Understanding things that humans cannot is not a requirement of singularity theory in AI. If machines could only understand 1% of what humans understand but could double that level of understanding every year, the singularity would have occurred. Even if the machine never exceeded qualitatively the human brain, if it could process faster or more reliably, it would still exhibit superiority and likely achieve dominance. – FauChristian – 2017-09-06T02:30:50.553
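The doubling arithmetic in that comment can be sketched directly. The 1% starting point and yearly doubling are the comment's own illustrative numbers, not empirical figures:

```python
import math

# Illustrative numbers from the comment above: a machine that understands
# only 1% of what humans do, but doubles that level every year.
start_fraction = 0.01    # 1% of human-level understanding
doubling_per_year = 2.0

# Years of doubling needed to reach human parity (fraction >= 1.0).
years_to_parity = math.ceil(math.log(1.0 / start_fraction, doubling_per_year))
print(years_to_parity)  # 7 (0.01 * 2**7 = 1.28, the first year past parity)
```

Under those assumptions, a machine starting at 1% of human-level understanding would match it in about seven years, which is the comment's point: a head start in quality matters less than the growth rate.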


The concept of "the singularity" is the point at which machines outsmart humans. Although Stephen Hawking's opinion is that this situation is inevitable, I think it will be very difficult to reach that point, because every A.I. algorithm needs to be programmed by humans, and would therefore always be more limited than its creator.

We would probably know we had reached that point when humanity lost control over Artificial Intelligence, with super-smart AI competing with humans and perhaps creating even more sophisticated intelligent beings. Currently, though, that is more like science fiction (think Terminator's Skynet).

The risk could involve killing people (like self-flying war drones making their own decisions), destroying countries, or even destroying the whole planet (like an A.I. connected to nuclear weapons, as in the WarGames movie), but none of that proves that the machines would be smarter than humans.


Posted 2016-08-02T15:53:38.273

Reputation: 9 163

every AI algorithm needs to be programmed by humans -> The general idea behind AI is that machines can learn by improving their own programming. Theoretically, this could result in machines eventually becoming smarter than us, and being able to create algorithms superior to any algorithm written by humans, which in turn would result in still better AI. – John Slegers – 2016-08-23T18:18:58.497

"every A.I. algorithm needs to be programmed by humans, therefore it would be always more limited than its creator" - this is interesting argument.followings are counter arguments -1) we need not to code the intelligence of AI. we need to code the AI for ability to observe,infer and understand. After that, presumably, just adding sufficiently more processing power and faster ones would make the AI able to learn and grasp better than us.2) Also, if 1000 humans apply their brain to build an AI, the AI may have more intelligence than 1 human. – akm – 2017-03-02T12:09:08.290

Algorithms do NOT need to be programmed by humans. It is possible and actually somewhat common to transform, mutate, optimize, evaluate, and select algorithms; and machines already outsmart humans in some ways and quite frequently. – FauChristian – 2017-09-06T02:26:24.417


The singularity, in the context of AI, is a theoretical event whereby an intelligent system with the following criteria is deployed.

  1. Capable of improving the range of its own intelligence or deploying another system with such improved range
  2. Willing or compelled to do so
  3. Able to do so in the absence of human supervision
  4. The improved version sustains criteria (1) through (3) recursively

By induction, the theory then predicts that a sequence of events will be generated with a potential rate of intelligence increase that may vastly exceed the potential rate of brain evolution.
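A toy recurrence can make that induction, and the "design difficulty" counterargument raised in an earlier answer, concrete. All parameters here are hypothetical, not a claim about real systems: each generation improves itself by a gain discounted by how hard the design problem has become.

```python
def simulate(gain, difficulty_exponent, generations=10):
    """Toy model: an entity of intelligence i deploys a successor with
    intelligence i * (1 + gain / i**difficulty_exponent).

    difficulty_exponent = 0: design difficulty stays constant, so each
        generation doubles the last (runaway growth, the "explosion" case).
    difficulty_exponent = 2: difficulty outgrows intelligence, so the
        gains shrink toward nothing (the "fizzle" counterargument).
    """
    i = 1.0
    for _ in range(generations):
        i *= 1 + gain / i ** difficulty_exponent
    return i

explosion = simulate(gain=1.0, difficulty_exponent=0.0)  # 2**10 = 1024.0
fizzle = simulate(gain=1.0, difficulty_exponent=2.0)     # stays under 5
print(explosion, fizzle)
```

Whether criteria (1) through (4) produce an explosion or a fizzle depends entirely on how design difficulty scales with the intelligence achieved, which is exactly the open question the theory leaves unanswered.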

How obligated this self-improving entity or population of procreated entities would be to preserve human life and liberty is indeterminate. The idea that such an obligation can be part of an irrevocable software contract is naive in light of the nature of the capabilities tied to criteria (1) through (4) above. As with other powerful technology, the risks are as numerous and far-reaching as the potential benefits.

Risks to humanity do not require intelligence. Other uses of the term singularity exist; they are outside the scope of this AI forum, but are worth a brief mention for clarity. Genetic engineering, nuclear engineering, globalization, and basing an international economy on a finite energy source that is being consumed thousands of times faster than it arose in the earth are other examples of high-risk technologies and mass trends that pose risks as well as benefits to humanity.

Returning to AI, the major caveat in the singularity theory is its failure to incorporate probability. Although it may be possible to develop an entity that conforms to criteria (1) through (4) above, it may be improbable enough so that the first event occurs long after all the current languages spoken on Earth are dead.

On the other extreme of the probability distribution, one could easily argue that there is a nonzero probability that the first event already occurred.

Along those lines, if a smarter presence were already present on the Internet, how likely is it that it would find it in its best interest to reveal itself to lesser human beings? Do we introduce ourselves to a passing maggot?

Douglas Daseeco

Posted 2016-08-02T15:53:38.273

Reputation: 7 174

Thanks for the explanation. I have to admit I fail to grasp, conceptually, what these conditions would look like in practice, especially 1). Presumably, an algorithm is implemented with a more or less defined set of inputs and outputs. Do we have to assume it alters those sets for 1) to be true? Why would any human developer choose to allow that, at least in an uncontrolled manner, even long before AGI level? And... is that a naive question? :) – Cpt Reynolds – 2017-10-27T15:27:57.897


The "singularity," viewed narrowly, refers to a point at which economic growth is so fast that we can't make useful predictions about what the future past that point will look like.

It's often used interchangeably with "intelligence explosion," which is when we get so-called Strong AI, which is AI that is intelligent enough to understand and improve itself. It seems reasonable to expect that the intelligence explosion would immediately lead to an economic singularity, but the reverse is not necessarily true.

Matthew Graves

Posted 2016-08-02T15:53:38.273

Reputation: 3 957