## Infinite Monkey Theorem

2

Comparing to a monkey with a keyboard creating Shakespeare (a.k.a. the Infinite Monkey Theorem), we now have AutoML: machine learning software that can create self-learning code. | Wiki

Are we near the Singularity (self-aware machines)?

2Also, your title and the Infinite Monkey Theorem have no obvious link to your question on the last line, or to the Wikipedia entry on automated machine learning. It would help if you explained their relevance to your question. – Neil Slater – 2018-06-22T14:32:45.993

@Neil Slater Thanks for your suggestions. It's a correlation metaphor; please note the "instead", and continue with AutoML without jumping to the Singularity part! Hope you like it. – Surya Sg – 2018-06-22T15:03:30.503

1Yes, we are near the Singularity. Recently, Sophia the robot met a stylist and got makeup and a haircut. Sophia was online throughout the procedure, which means she recognized any problem with the lipstick, the eye gloss, and the hair extensions. If that wasn't a sign, what else? – Manuel Rodriguez – 2018-06-25T07:14:46.053

2@ManuelRodriguez Not sure if you are trolling but good one if so – hisairnessag3 – 2018-06-25T13:45:43.713

2I actually get what you're asking. In some sense, we humans are millions of monkeys moving AI forward through trial and error. Now that machine learning has been validated, there may be exponentially more monkeys engaging in this trial and error. (My advice is to edit this question to provide more detail on your thesis!) – DukeZhou – 2018-06-25T17:51:02.703

Depends on whether learning is a function of conscious self-awareness, rather than a product of a material construction from which consciousness could be separated. – Bobs – 2018-08-28T19:36:05.707

1

Your analogy is flawed. In the infinite monkey theorem, the monkey types characters at random. Let us put it more formally: consider a perfect random bit generator that emits random bits. For any fixed string 'x', in the long run the generator will eventually produce 'x', since over time it passes through every possible combination. But in AutoML we already know what we want the model to achieve: we try some model, calculate the "discrepancy" between the required behaviour and the current model, and pass that back as a "gradient"; based on this gradient information, the model changes.

In the infinite monkey theorem, by contrast, the monkey has no clue about the "Shakespeare" text. It is just randomly hitting keys, and we, as observers, check whether the text matches; no "information/gradient" is passed back to the monkey about why it is not hitting the right keys.

Put simply: the monkey is not (machine) learning.
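To make the contrast concrete, here is a minimal sketch (inspired by Dawkins' well-known "weasel program", not by any actual AutoML system; the target string, alphabet, and function names are all illustrative). The blind approach regenerates the whole string with no feedback, while the guided approach keeps a candidate and uses an error signal telling it *which positions are wrong* — a crude stand-in for a gradient:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # illustrative "Shakespeare" fragment
ALPHABET = string.ascii_uppercase + " "

def random_guess(rng):
    """Infinite-monkey approach: type the whole string blindly, no feedback.
    The chance of a hit is (1/27)**28 per attempt -- effectively never."""
    return "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))

def guided_search(seed=0):
    """Feedback-driven approach: an error signal says which characters are
    wrong, so mutations are aimed only at wrong positions."""
    rng = random.Random(seed)
    candidate = [rng.choice(ALPHABET) for _ in range(len(TARGET))]
    steps = 0
    while True:
        errors = [i for i, c in enumerate(candidate) if c != TARGET[i]]
        if not errors:
            return "".join(candidate), steps
        i = rng.choice(errors)             # the "gradient": where we are wrong
        candidate[i] = rng.choice(ALPHABET)
        steps += 1

result, steps = guided_search()
print(result, "found in", steps, "mutations")
```

With feedback, the search converges in on the order of hundreds of mutations; without it, the monkey would need on the order of 27^28 attempts. The difference is exactly the "information/gradient" channel described above.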

1

Your point about acceleration of "monkeying around" via Machine Learning is a good one.

In the past, we've only had millions of humans messing around with technology and nature to "see what happens" when you do this or that. (The case could be made that this is not random, rather it results from conscious intentionality, but I'm not fully convinced it's not simple evolutionary process with humans as yet another instrument. We certainly perceive ourselves to have free will, but even in that case, there are greater forces at work.)

Now that we have unassisted machine learning, there are fewer limitations on the expansion of the number of monkeys that can mess around (although the scope of what they can mess around with is more limited, these being algorithms without physical bodies in the same sense as organic life and proper primates, including humans).

One of my favorite recent results is Brown's Evolutionary Game Design, in which an automated system produced novel, playable games. Games are, fundamentally, algorithms, and if working models can be produced by artificial intelligence, it seems reasonable to assume that producing other types of functioning algorithms, specifically goal-oriented algorithms, is merely a matter of complexity.

- This would definitely seem to put us closer to the hypothetical singularity.

However, artificial consciousness is an entirely separate issue, and likely impossible to confirm. (See the Chinese Room Argument, which is itself problematic in that the only way to truly validate the qualia of another entity is to be that entity, and even then the assessment is still entirely subjective.)

1

No, we're still very far away from the Singularity or Terminator's Skynet.

Put in very simple terms: machine learning software that can create self-learning code is completely unaware of its environment, and thus sees the world only through the sensors it has. In this case, that means files and code, bounded by its programming language and meta definitions. That is its universe, its world, its everything. It doesn't know about trees, humans, ants, bees, flowers, water, earth, fire, etc. If, like Terminator's Skynet, it gained access to new systems, then perhaps one day, after years of evolution, trial, and error, something more intelligent would emerge.

Take into account: most of the time, real intelligence emerges over time and is not programmed. As long as we humans are unable to define intelligence, we cannot program it, so it has to emerge by itself through a real evolutionary process. ML software writing self-learning code could therefore be seen as, at most, a primitive single-celled organism; it would still need years and years of evolution. By the way, Google's AI that can "paint" by itself with animal faces, psychedelic colours, and fractals is, in my opinion, far from being the Singularity, or even creative. The same applies to the robots talking to themselves that "invented" their own language, which nobody could understand or even evaluate as a language...

I deeply recommend some books by Andy Clark concerning philosophical issues and AI, e.g. "Being There: Putting Brain, Body, and World Together Again". Andy Clark is a philosopher who takes a deeper look at AI, the human brain, cognitive science, philosophy, psychology, sociology, etc. Also a very interesting book of his: "Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence".