Are a Strong AI and a captcha the same?


The definition of a narrow AI isn't very complicated. It's a computer program that can solve a single task, for example, text-to-speech software. Narrow AI systems differ in how well they perform that task; human-level narrow AI is a powerful variant that comes close to human capability at the problem in question.

What remains an open question is: if a narrow AI can solve problems, what exactly is the task of so-called Strong AI systems? One hypothesis is that a strong AI doesn't solve a problem itself; its task is to determine whether an opponent is able to do so. A captcha test, which checks whether the opponent is a human or a bot, would then be an example of a strong AI. Am I right?
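To make the hypothesis concrete: posing a captcha requires no intelligence at all on the part of the tester, which is worth keeping in mind when comparing it to a Strong AI. A minimal sketch of the challenge-response mechanism (the plain-text challenge is a hypothetical simplification; real captchas render a distorted image so that only the solver needs perception):

```python
import random
import string

def make_challenge(length=6):
    """Generate a random challenge string; a real captcha would render
    this as a distorted image before showing it to the opponent."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))

def verify(challenge, response):
    """The tester's side is trivial string comparison -- all the
    intelligence the test probes for sits with the solver."""
    return response.strip().upper() == challenge

challenge = make_challenge()
print(verify(challenge, challenge.lower()))  # a correct answer passes
```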

Manuel Rodriguez

Posted 2019-10-11T13:12:49.570

Reputation: 1

Text-to-speech isn't really AI in any definition I can think of. Strong AI specifically means imitating (not simulating - that's weak AI) human cognition, so no, a captcha-solving program is not a strong AI system. At least not in the foreseeable future. – Oliver Mason – 2019-10-11T13:24:22.517

@Oliver Mason: Natural sounding, realistic speech synthesis is AI. By definition, AI is intelligence produced artificially and really good AI fools us humans into thinking it is human too. – Brian O'Donnell – 2019-10-11T17:13:21.747

In some sense the Turing Test is just a very convoluted CAPTCHA ;) ... well, not quite: a requirement of CAPTCHA is that the system be automated. I'm not quite sure if I believe that a CAPTCHA of strong AI would exist though, unless it roughly involves the fact that humans are fleshy bodies. – k.c. sayz 'k.c sayz' – 2019-10-12T04:37:09.087



Human-level narrow AI isn't really a definition I've heard used, as most narrow AIs are much better than humans at a given narrow task, e.g. categorising cancer cells in an image.

Strong AI or AGI usually refers to a system with transferable knowledge and capability: it can infer a given task and execute it to a given standard. It's thought that at this point the system would be so advanced that it would be unlikely to be containable. In order to infer an arbitrary task and perform it well, it would have to have knowledge, or inferable understanding, of any domain, which could lead to runaway optimisation, or it could gain something akin to consciousness, which would lead it to question the tasks themselves along with everything else.

Please refer to Ray Kurzweil et al.


Posted 2019-10-11T13:12:49.570

Reputation: 434