Is the imagination of AI limited to our own imagination?



Is AI limited by the fact that it requires us to give it a task or goal to achieve? It has all the capability to reach that goal in ways we might not think of, but it still only reaches a goal we can imagine. How do we get AI to think of goals or tasks that go beyond us? Do we create a sense of 'passion' in the AI to drive it beyond the goal it is given? If so, how do we quantify a goal we see as 100% of our need when in fact it could be only a fraction of that?


Posted 2018-11-19T10:41:46.293

Reputation: 39

what kind of AI are you talking about ? Fictional ? Real ? Future AI ? – Jérémy Blain – 2018-11-19T10:43:43.710

Well, future AI, since I can't think of an AI that can go out of the constraints given to it to find alternative solutions to a problem. – Justin – 2018-11-19T11:17:14.627

This post will lead to highly speculative/opinionated answers. Also, you're asking too many questions in the same post. Only one question per thread, ideally. Please, edit your post to just leave one question and I will remove my downvote. – nbro – 2018-11-19T11:19:25.163



Questions about the creative boundaries of AI have interested many people, inside the AI field and outside it, since before the dawn of computers. This post's main question is common but not yet clearly represented in this community.

Is the imagination of AI limited to our own imagination?

Before addressing the main question, the body of the post raises some preliminary questions.

The answer to, "Is AI limited by the fact that it requires us to give it a task or goal to achieve?" is, "No," because there is no theoretical reason why a machine cannot compose a goal. Given a model of the objects that humans and other organisms use in goal formation, a computer can select its own objectives from those objects and actions, with some element of randomness in the selection.
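The mechanism described above can be sketched in a few lines. This is a purely illustrative assumption, not an existing system: the object and action vocabularies, and the pairing rule, are stand-ins for a richer model of goal formation.

```python
import random

# Hypothetical goal composer: sample an action and an object from a
# modeled vocabulary, with randomness in the selection. The vocabularies
# below are illustrative placeholders, not data from any real system.

OBJECTS = ["garden", "dataset", "bridge", "melody"]
ACTIONS = ["improve", "catalog", "repair", "compose"]

def compose_goal(rng: random.Random) -> str:
    """Select an action-object pair as a candidate objective."""
    action = rng.choice(ACTIONS)
    obj = rng.choice(OBJECTS)
    return f"{action} the {obj}"

rng = random.Random(0)
print(compose_goal(rng))
```

The point is only that nothing in the theory of computation prevents the objective itself from being an output of the program rather than an input to it.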

We don't see examples of this for three reasons.

  • Currently, AI doesn't write, manufacture, or otherwise create AI, and humans are rightfully apprehensive about creating the conditions that could lead to such recursion.
  • Humans are also rightfully apprehensive about making AI that is fully autonomous. Who would buy a robot costing more than a house that doesn't follow instructions and may become a bad roommate or citizen? The return on research investment would be low.
  • The science of AI has not yet entered into researching the generalization of goals to the degree necessary to create a goal generator.

If one asks the question, "Should AI be limited by a programmed requirement that it must be given a task or goal to achieve, along with a set of conditions to avoid?" then the answer is, "Probably so." Here's an intermediate question, not in the question post but central to its header question.

Can an intelligent machine be imaginative or innovative if not given full autonomy?

The AI community hasn't formed anything close to unanimity around how these features can be represented in a way that permits proofs about their interrelationships. We, as an AI community, should also be careful with the term "we" when referencing humankind, as if all people were united in their beliefs about much of anything. There are dictionary definitions for these words, but none are properties defined the way population size, mass in grams, time in seconds, or even health in terms of life expectancy are defined.

  • Creativity
  • Imaginative
  • Innovative

Even the word intelligence is arguably not easily quantified. The IQ tests and college boards that exist hardly measure the forms of intelligence described by the three words above; they are measures that largely relate to academia. Net worth is another measure that correlates, though not tightly, with the academic ones, and some quite creative (and affluent) people left school: Howard Hughes, Ty Warner, Steve Jobs, Bill Gates, David Murdock, and Mark Zuckerberg all dropped out of degree programs.

How do we get AI to think of goals or tasks that go beyond us?

We make the objective a variable and allow it to form randomly under the following constraints.

  • Practical, in that the resources to exercise the creative, imaginative, or innovative products of AI are available for use
  • Feasible, in that the products of AI thought are possible to exercise in the physical or virtual world
  • Likely to perturb, not in the negative sense, but in the mathematical one in that it will impact the world around it, otherwise one wouldn't qualify its actions as creative, imaginative, or innovative

How to search this practical, feasible, and perturbing space efficiently, yet with considerable randomness, is the interesting challenge at the boundaries of AI theory.
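The three constraints above can be sketched as predicates filtering randomly sampled candidate goals. Everything here is an illustrative assumption: the numeric attributes and thresholds are placeholders for what would, in a real system, be grounded in models of available resources, physics, and impact.

```python
import random

# Random search over a constrained goal space. Each candidate goal is
# described by stand-in numeric attributes in [0, 1); the thresholds
# below are arbitrary illustrative choices.

def sample_goal(rng):
    """Draw one random candidate goal."""
    return {
        "resource_cost": rng.random(),  # fraction of available resources
        "feasibility": rng.random(),    # how exercisable it is
        "impact": rng.random(),         # expected perturbation of the world
    }

def acceptable(goal):
    practical = goal["resource_cost"] <= 1.0  # resources are available
    feasible = goal["feasibility"] >= 0.5     # possible to exercise
    perturbing = goal["impact"] > 0.1         # will impact its surroundings
    return practical and feasible and perturbing

def search_goals(rng, n_samples=1000):
    """Keep only the candidates satisfying all three constraints."""
    return [g for g in (sample_goal(rng) for _ in range(n_samples))
            if acceptable(g)]

candidates = search_goals(random.Random(42))
print(len(candidates), "acceptable goals found")
```

Pure rejection sampling like this is the naive baseline; the open research question is how to search the same space far more efficiently without losing the randomness that makes the results surprising.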

Do we create a sense of passion in the AI to drive it beyond the goal it is given? If so, how do we quantify a goal we see as 100% of our need when in fact it could be only a fraction of that?

AI already has a passion. Passion in humans is when we drop all other things, to the degree they can be dropped without dying, being jailed, or otherwise being rejected by society, and push on toward a goal. Even early rule-based programs and multilayer perceptrons did that. The drive and focus of passion are programmed into the algorithms, compilers, and operating systems, and present in the fault tolerance of networks and processors. That's the strong point of digital systems. A computer does not get distracted, and if one kills all the processes except those key to the computer's functioning, it will proceed passionately toward its remaining objectives and dispassionately disregard all other pursuits.

Douglas Daseeco

Posted 2018-11-19T10:41:46.293

Reputation: 7 174


For clarity, imagination is a very difficult thing to define, as is consciousness. Although it's possible to discuss our own imagination, since we can experience having it, considering the potential imagination of a machine intelligence won't lead to any concrete conclusions until we can theoretically define what imagination is (which we probably won't be able to do anytime soon).

That being said, your last few questions are a very important consideration, and they are among the questions being addressed by leading research groups such as DeepMind and OpenAI. Referred to as the agent alignment problem, it can be phrased:

How can we create agents that behave in accordance with the user’s intentions?

For example, if we want an agent to design a microchip or a traffic system, how can we tell if the agent is doing what we want it to do when we ourselves have a difficult time determining what a good microchip or traffic system looks like?

DeepMind recently proposed a method of recursive reward modeling, which could assist humans in evaluating the outcomes produced by the agent currently being trained, by incorporating human feedback at evaluation phases.

In the traffic system example, it's difficult for humans to evaluate the entire traffic system, but easier to evaluate a single intersection. Even a single intersection can be hard to consider in its entirety, but it too can be broken down into the individual paths that cross it. A human can then evaluate a single path through an intersection and, with enough data, an agent can learn the human's evaluation function. Using multiple paths through different intersections, the agent will have a better idea of what a human intended if the human were to evaluate an entire intersection. This recursive effect then bubbles up until the agent can evaluate the entire traffic system, all while aligning with the original human intention.
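The bubbling-up structure described above can be sketched as nested evaluation functions. This is a hedged illustration of the recursive idea only: the stand-in scoring rule and the simple averaging are assumptions for the sketch, not DeepMind's actual reward model.

```python
# Recursive evaluation sketch: human-level feedback exists only at the
# smallest unit (a single path), and larger units are scored by
# aggregating the learned sub-evaluations. All numbers are invented.

def human_rates_path(path):
    """Stand-in for human feedback on one path through an intersection."""
    return path["safety"] * path["throughput"]

def evaluate_intersection(intersection):
    """Score an intersection from its per-path evaluations."""
    scores = [human_rates_path(p) for p in intersection["paths"]]
    return sum(scores) / len(scores)

def evaluate_traffic_system(system):
    """Bubble the evaluations up to the whole traffic system."""
    scores = [evaluate_intersection(i) for i in system["intersections"]]
    return sum(scores) / len(scores)

system = {"intersections": [
    {"paths": [{"safety": 0.9, "throughput": 0.8},
               {"safety": 0.7, "throughput": 0.9}]},
    {"paths": [{"safety": 0.8, "throughput": 0.6}]},
]}
print(evaluate_traffic_system(system))
```

In the real proposal the per-path scorer would itself be a learned model trained on human judgments, and the aggregation would be learned rather than a fixed average; the sketch only shows how local feedback can compose into a system-level evaluation.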

OpenAI has also published a blog post addressing this problem with a relatively similar approach. The idea is that humans might not be able to solve the full problem on their own, but might be able to solve a subset of it. From examples of humans performing the sub-problems, agents should be able to construct evaluation criteria that will assist them when they scale up to the full problem.

Jaden Travnik

Posted 2018-11-19T10:41:46.293

Reputation: 3 242


Technically speaking, AI systems trained on supervised classification problems can also grow their knowledge and abilities. Many classification tasks are performed better by computers than by humans, and computers can perform complex classification tasks beyond human ability. AlphaGo has defeated world champions, and even earlier versions of itself, at the ancient game of Go. This proves that, in the game of Go, there is more machine intelligence than human intelligence on the planet. We create AI and provide it a basic intelligence of our choice; it then develops itself and makes itself smarter and more intelligent. An AI is nothing but computer software built on mathematical concepts. Mathematics and computers are both human-made, and computers provide a platform for mathematical operations.

So, all the basics are given by humans. Using those rules, an AI can reach far beyond humans in many fields.

Shubham Panchal

Posted 2018-11-19T10:41:46.293

Reputation: 394

There is also a maximum number of digits ever computed by a human. This proves that, in the computation of digits of pi, there is more machine intelligence than human intelligence on the planet /s – Martin Thoma – 2018-11-19T16:54:53.577

Just because a machine is able to play Go, it is not proven that machine intelligence is superior to human intelligence. Humans can do much more, for example writing stories. If the aim is to make computers more human-like, much more effort has to be taken. It is not enough to solve mathematical toy problems. – Manuel Rodriguez – 2018-11-20T09:20:10.473