Problems that only humans will ever be able to solve


With the increasing complexity of reCAPTCHA, I wondered whether there is some problem that only a human will ever be able to solve (or that AI won't be able to solve as long as it doesn't reproduce the human brain exactly).

For instance, the distorted text test used to be solvable only by humans. Although...

The computer now gets the [distorted text] test right 99.8% of the time, even in the most challenging situations.

It also seems obvious that distorted text can't be used for real human detection anymore.

I'd also like to know whether an algorithm can be used to create such a problem (as was done for the distorted text), or whether the originality of a human brain is necessarily required.

Marc Perlade

Posted 2019-05-05T08:45:40.797

Reputation: 303

Answers


Informally, AI-complete problems are the most difficult problems for an AI. The concept is not yet mathematically defined, unlike, for example, NP-complete problems. Intuitively, however, these are the problems that require human-level or general intelligence to be solved.

Real natural language understanding is believed to be an AI-complete problem (this is also discussed in the paper Making AI Meaningful Again by Jobst Landgrebe and Barry Smith, 2019). There are many more AI-complete problems, for example, problems that involve emotions.
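To make the analogy with NP-completeness a bit more concrete, here is a minimal, purely hypothetical sketch of what a reduction-based definition of AI-completeness could look like. The class name \(\mathcal{AI}\) and the reduction symbol \(\le_{\mathrm{AI}}\) are placeholders, not established notation; papers such as Towards a Theory of AI Completeness pursue formalizations along these lines.

```latex
% Hypothetical sketch, mirroring the usual definition of NP-completeness.
% \mathcal{AI} and \le_{\mathrm{AI}} are placeholder names, not standard notation.
\[
  A \le_{\mathrm{AI}} B \iff
  \exists f \text{ computable by an ordinary algorithm such that }
  x \in A \Leftrightarrow f(x) \in B
\]
\[
  B \text{ is AI-complete} \iff
  B \in \mathcal{AI} \;\wedge\; \forall A \in \mathcal{AI}:\ A \le_{\mathrm{AI}} B
\]
```

Under a definition of this shape, an agent that could solve one AI-complete problem could, via the reductions, solve every problem in the class.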

nbro

Posted 2019-05-05T08:45:40.797

Reputation: 19 783

NP-complete problems are called 'complete' because they are the hardest in a precise sense: any other problem in NP can be reduced to one of them by a deterministic, polynomial-time algorithm. Therefore, an oracle that could solve any one of the NP-complete problems in polynomial time could be used to solve every problem in NP in polynomial time. Is there any sign that your notion of AI-completeness makes objective sense in a similar way? – Mike Spivey – 2019-05-05T16:54:10.473

@MikeSpivey This is a good question. I haven't read an extensive amount of the related literature, but, in the paper Towards a Theory of AI Completeness that I link to in my answer, the authors state "Those measures can also tell us how much of the problem we already managed to delegate to a computer; using reductions, progress in one problem can translate to progress in another". So, in that paper at least, there is a desire to develop a formal complexity theory for AI. – nbro – 2019-05-05T17:10:40.977

And computers (traditional, quantum, etc.) won't ever be able to solve these? I disagree. Technology evolves, and computers may even use organic material. It is implausible that humans will not be surpassed by AI over time. The question is only what's required and how much time. – vol7ron – 2020-02-26T12:39:48.273

@vol7ron I followed a course on quantum computing in the past, so I know the basics. Quantum computers are not special or magic. They can only perform certain computations faster than traditional computers, but that's it. There's no evidence that computers (quantum or not) will be able to solve certain problems that we humans are able to solve. – nbro – 2020-02-26T13:58:21.690

@nbro I still consider quantum computing in its infancy. The question wasn't whether computers could solve problems that humans can't; it's whether there will be only-human-solvable problems. A major limitation of computers is being able to process a lot of information efficiently and accurately. I suggest that will scale, and either quantum computing or a newer type of computing (organic, metaphysical, etc.) will surpass humans in all ways. – vol7ron – 2020-02-26T14:22:13.443

As I understand quantum computing, qubits allow for more than the binary/ternary states of today's digital components (processors, memory, etc.). So being able to superpose, or combine, those states will yield results much faster. I consider the tech still in its infancy, and I'm not even sure it's fault-tolerant; but the paradigm shift is what interests me most. So a speculative question invites some degree of speculation in its answer. – vol7ron – 2020-02-26T14:28:58.870

@vol7ron I encourage you to learn more about quantum computing. Even though qubits can be in a superposition, in reality, this doesn't translate to an exponential speedup or something like that. – nbro – 2020-02-26T14:30:07.400
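For reference, here is the standard textbook picture of superposition (generic notation, not tied to any particular claim above): an n-qubit register can hold amplitudes over all 2^n basis states at once, but a measurement returns just one outcome, sampled according to the squared amplitudes, which is why superposition alone does not hand you an exponential speedup.

```latex
% A single qubit: a superposition of |0> and |1> with complex amplitudes.
\[
  |\psi\rangle = \alpha|0\rangle + \beta|1\rangle ,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
% An n-qubit register: amplitudes over all 2^n basis states, yet measuring it
% yields a single outcome x with probability |a_x|^2.
\[
  |\Psi\rangle = \sum_{x \in \{0,1\}^n} a_x |x\rangle ,
  \qquad \Pr[\text{outcome } x] = |a_x|^2
\]
```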

@nbro I think I need to :) I’m out of my league when it comes to that. Do you recommend any good books? – vol7ron – 2020-02-26T14:32:22.757

@vol7ron Yes, I recommend the book "Quantum Computing for Computer Scientists", which introduces the mathematical preliminaries in a very clear way (at least if you are already familiar with linear algebra, which you should be if you want to learn about QC), and the course by Umesh Vazirani that you can find on YouTube. – nbro – 2020-02-26T14:35:11.803

@nbro Thanks! I’ll check it out – vol7ron – 2020-02-26T14:39:42.283


This is more of a comment and philosophical opinion, but I don't believe there are any problems that a human can solve and an AI couldn't. Being new to this forum, I cannot post it as a comment on the question (and it would probably be too long anyway), so I preemptively ask for your forgiveness.

AI Will Eventually Mimic Humans (and Surpass Them)

Humans by nature are logical. Logic is learned or hardwired, and influenced by observation and chemical impulses.

As long as an AI can be trained to behave like a human, it will eventually be able to act like one. Currently, those behaviors are limited by technology (space, connections, etc.), whereas the human brain has been optimized to automatically rule out or disregard certain "fluff", which gives it certain super-capabilities. For instance, not everything that is seen is registered by the brain; often, the brain performs differential comparisons and only updates for changes, to reduce processing time and energy. It will only be a matter of time before AI can also be programmed to behave this way, or before technological advancements allow it to not need some of this functionality at all, which will allow it to leapfrog humans.
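As a toy illustration of the "differential comparison" idea mentioned above (a minimal sketch with hypothetical frame data, not a model of the brain or of any particular AI system), the following only flags the parts of an input that changed since the last observation, so that any expensive processing can be restricted to those parts:

```python
import numpy as np

def changed_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 threshold: float = 0.1) -> np.ndarray:
    """Boolean mask of elements whose change exceeds the threshold.

    Only these elements would need further (expensive) processing;
    unchanged input is simply skipped, saving time and energy.
    """
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > threshold

# Hypothetical example: two 8x8 "frames" that differ in a single pixel.
prev = np.zeros((8, 8))
curr = prev.copy()
curr[0, 0] = 1.0

mask = changed_mask(prev, curr)
print(int(mask.sum()), "element(s) need reprocessing")  # prints: 1 element(s) need reprocessing
```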

In the current state, we recognize that humans are sometimes irrational or inconsistent. In those cases, AI could mimic human limitations with configured randomization patterns, but again, there really won't be a need for that, since it can be programmed to learn those patterns automatically (if necessary).

It all comes down to consumption of data, retention of information, and learned corrections. So there is no problem that a human can solve (to my knowledge) that an AI couldn't theoretically one day solve as well. Even in the case of chemistry: just as we are manufacturing foods and organs, an AI could also, theoretically, one day reproduce and survive via biological functions.

Instead of framing the question as a binary comparison of human capability versus that of artificial intelligence, I'd be more interested to see what people think are the more challenging things humans can do, which will take AI time to accomplish.

vol7ron

Posted 2019-05-05T08:45:40.797

Reputation: 139

Hi vol7ron. I downvoted this answer because it is full of speculations, thus I don't think it is a very useful answer and I don't agree with a lot of your statements. Do not take it personally. Anyway, AFAIK, you can comment, because you have more than 50 points of reputation. – nbro – 2019-05-06T14:20:26.647

@nbro Comments have length limitations, and I could only comment after posting because I was not a member; I received the +100 afterwards. – vol7ron – 2019-05-06T14:25:50.980

You receive 100 points of reputation once you create your account, AFAIK. However, I would like to point out a few of your statements that I don't agree with and that I think are inconsistent or ambiguous. First, you say "Humans by nature are logical". What do you mean by logical here? Then you say "In the current state, we recognize humans are sometimes irrational or inconsistent". It seems to me that your arguments are not very consistent and are thus contradictory. Furthermore, your main point is "AI will be able to do whatever humans can do because I have no proof against it": that is a poor argument. – nbro – 2019-05-06T14:29:22.683

@nbro Thank you for your feedback! I would say that the heart of my answer addresses the question: "no" is the answer. It is not based purely on speculation, but I would argue that the question itself invites speculation. The title says "will ever be able to". How can anyone predict the future? It is speculative. My speculation is founded on the track record and capabilities of AI. It considers the current limitations that prevent AI from acting like humans, and the speculation is that those won't be an issue in the future. – vol7ron – 2019-05-06T14:30:04.190

Given that I don't think that speculations are, in general, useful, I down-voted. I would not have down-voted if you had been more consistent and had argued your statements. For example, I don't even understand what you mean by "learned corrections" in the statement "It all comes down to consumption of data, retention of information, and learned corrections". I would also like to note, in case you are not aware of it, that not everything can be inferred from data (that is, ML is often insufficient: e.g. it can't detect causal relations, but only correlations). – nbro – 2019-05-06T14:36:06.840

@nbro Very good observations and questions. Even inconsistent and irrational behavior follows a system of logic and reasoning (logic, in a very simplistic sense for the purposes of a comment, meaning that the inputs influence the outputs). The inconsistency of people means that similar decisions sometimes lead to different actions, but if all factors were equal (e.g., chemical, reasoning, knowledge), the decisions would be the same. Influences change between seemingly similar decisions. – vol7ron – 2019-05-06T14:38:44.373

@nbro Thank you. When I can get back to a computer, I will have to elaborate so that I can accommodate people who may not have a psychological/philosophical background combined with a computer science degree. I didn't think the question warranted writing a paper and thought that simple examples would be enough. That may have been a wrong assumption, and for that I apologize. Your feedback, though, has been constructive and instrumental in making improvements; for that, I am grateful. – vol7ron – 2019-05-06T14:42:47.890

Ok. I will have a look at your answer later when you update it :) – nbro – 2019-05-06T14:45:12.643

"Humans by nature are logical." You based most of your answer on that assertion, but I believe you forget that humans are not only logical. We are more (or less) than that. – MarcioB – 2019-06-04T07:59:00.600