What are the current theories on the development of a conscious AI?



What are the current theories on the development of a conscious AI? Is anyone even trying to develop a conscious AI?

Is it possible that consciousness is an emergent phenomenon, that is, once we put enough complexity into our system, it will become self-aware?


Posted 2018-02-11T05:27:00.877

Reputation: 351

Supposing the cognitive organization we abstractly recognize as consciousness developed as a result of the physical laws governing the evolution of life. While a matter of chance, it was as inevitable as any event with a probability approaching one, even though the exact route to get there is unpredictable. If so, this suggests that artificially creating a digital environment that simulates the essential components of the environment which gave rise to our consciousness could be used to evolve "artificial" consciousness. (cont....) – Craig Hicks – 2018-02-12T18:47:23.037

(....cont) Compressing a lifetime of human scholarly educational materials into a few milliseconds seems entirely plausible. How about a million years of the evolution of cognition into a day or two on a supercomputer? I would argue that simulation of the environment is the key, and the laws of evolution will take care of the rest. Simulating the environment is the kind of task that humans could methodically attack and, in the limit, succeed at, with the inevitability of Moore's Law. – Craig Hicks – 2018-02-12T18:53:21.040

Recursion has been proposed by Rajaneimi, and I've found several cognitive science papers on the topic regarding humans, but am still looking for papers discussing this in terms of algorithmic consciousness. – DukeZhou – 2018-02-14T19:13:49.213



To answer this question, we first need to know why developing conscious AI is hard. The main reason is that there is no mathematically or otherwise rigorous definition of consciousness. Sure, you have an idea of consciousness as you experience it, and we can talk about philosophical zombies, but it isn't a tangible concept that can be broken down and worked on. Moreover, the majority of current AI research takes a pragmatic approach: one tries to construct a model that performs well according to some desired cost function. This is a very big and exciting field encompassing many research problems, and every new finding is based either on mathematical theory or on empirical evidence for a new algorithm, model construction, etc. Because of this, progress is built on and compared against previous progress, as the scientific method requires.
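To make the "pragmatic approach" concrete, here is a deliberately tiny illustration (my own sketch, not anyone's research code): a model is just a parameter, and "success" is defined entirely by a cost function going down, with nothing resembling awareness anywhere in the loop.

```python
# Illustrative only: "pragmatic AI" as cost-function minimisation.
# A tiny gradient-descent loop fitting y = w * x to data; the model
# is judged purely by its cost, not by anything like consciousness.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0    # the entire "model": one parameter
lr = 0.05  # learning rate

for step in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

cost = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(round(w, 3), cost)  # w converges to 2.0, cost to ~0
```

Everything the field can rigorously evaluate is in that last line: the cost. There is no analogous number to drive down for "consciousness", which is the point of the answer above.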

So to answer your question: no one is actually trying to make a “conscious” AI, because we don’t know what that word means yet. That doesn’t stop people from talking about it, however.

Jaden Travnik


Reputation: 3 242

Comments are not for extended discussion; this conversation has been moved to chat.

– nbro – 2020-03-13T17:15:40.483


What is consciousness? There are some real challenges in setting up consciousness as a goal, because we don't yet have much scientific understanding of how the brain does it, or of what balance there needs to be between long-term memory, short-term memory, the implicit work of interpretation, and the contrasting conscious modes of automatic and deliberate processing (Kahneman's System 1 and System 2). John Kihlstrom (psychology emeritus at Berkeley) has a lecture series on consciousness available on iTunes U that you might check out. Carnegie Mellon University has a model called ACT-R which directly models conscious behaviours like attention-paying.
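For a flavour of what "directly modelling attention-paying" can mean, here is a loose, hypothetical sketch of a production-system cycle in the spirit of architectures like ACT-R (this is not ACT-R's actual notation or API): rules are matched against a working memory, and on each cycle one matching rule wins the system's attention and fires.

```python
# Hypothetical production-system sketch: match rules against working
# memory, let one rule "win attention" per cycle, fire its action.

memory = {"goal": "answer"}
memory["stimulus"] = "question heard"
trace = []

# Each rule: (name, condition on memory, action mutating memory).
# Actions invalidate their own condition so each fires once.
rules = [
    ("attend",   lambda m: "stimulus" in m,
                 lambda m: (m.pop("stimulus"), m.update(focus="question"))),
    ("retrieve", lambda m: m.get("focus") == "question",
                 lambda m: (m.pop("focus"), m.update(fact="recalled"))),
    ("respond",  lambda m: m.get("fact") == "recalled",
                 lambda m: m.update(goal="done")),
]

while memory.get("goal") != "done":
    # Conflict resolution (the "attention" step): first match wins.
    name, _, act = next(r for r in rules if r[1](memory))
    trace.append(name)
    act(memory)

print(trace)  # ['attend', 'retrieve', 'respond']
```

The serial, one-rule-at-a-time bottleneck is the toy analogue of deliberate (System 2) processing; real architectures run much richer parallel modules underneath it.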

What might bound our understanding of it? Philosophy has been considering the question of consciousness for a long time. Personally I like Hegel and Heidegger (philosophers). Both are very difficult to read, but Heidegger (as interpreted by Hubert Dreyfus) usefully critiqued the 'Good Old-Fashioned AI' projects of the seventies and pointed out how much work there is just in interpreting a visual input. Hegel is often maligned, but to see him well interpreted, check out Robert Brandom's LMU talks on the logic of consciousness and on Hegel as an early Sellars-ian pragmatist. If consciousness is to take hold of the truth and the certainty, it undertakes 'a path of doubt, or more properly a highway of despair', along which it never sets itself above correction. There is something about Hegel's treatment of consciousness in recursive terms, without succumbing to a vicious regress, that I think is going to be borne out before the end.

Recent developments. The Deep Learning approaches and pragmatic successes of the present are exciting, but it will be interesting to see how far they can go in integrating and generalising from the necessarily small information sets actual human minds are exposed to. While Deep Learning and data mining are hugely visible, symbolic approaches are also still out there, getting better and more varied. But there is a lack of overarching theoretical interpretation that would allow generalisations.

Two big-theory toe-holds. If I had to pick a project worth attending to, Giulio Tononi et al. have set up a very nice modernisation of the problem in 'Integrated Information Theory'. But you might want to extend that with something like Rolf Pfeifer's 'How the Body Shapes the Way We Think', because some of the 'integrated information' is implicit in having arms and legs, eyes and a nose (put there by the information-accumulating work of evolution). But there is so much good work being done - the pros are writing papers faster than I can read them.
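Integrated Information Theory's actual phi measure is far more involved than anything that fits here, but the underlying intuition - that a conscious system's joint state carries information beyond what its parts carry independently - can be illustrated with a deliberately crude stand-in: mutual information between two coupled nodes, which is zero exactly when the "whole" factorises into independent parts.

```python
# Crude illustration of the IIT intuition (NOT real phi): how much
# does a two-node system's joint state exceed its independent parts?
from math import log2
from collections import Counter

# Observed joint states of nodes (A, B): a strongly coupled system.
states = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0)]

n = len(states)
p_ab = {s: c / n for s, c in Counter(states).items()}
p_a = {a: c / n for a, c in Counter(a for a, _ in states).items()}
p_b = {b: c / n for b, c in Counter(b for _, b in states).items()}

# Mutual information I(A;B) in bits: zero iff A and B are independent.
mi = sum(p * log2(p / (p_a[a] * p_b[b])) for (a, b), p in p_ab.items())
print(round(mi, 3))  # ~0.189 bits of "integration" for this data
```

Real phi considers all partitions of the system and its cause-effect structure over time, not a single pairwise statistic, so treat this only as a pointer at the kind of quantity the theory tries to formalise.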

More specific to your question, there are attempts to simulate human brains, in the hope that that overall aim will help fund research and produce answers to each of the points above.



Reputation: 171

I read an interesting and honest article by Douglas Hofstadter in The Atlantic (https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/) critiquing the use of the term "Deep Learning" for Google's translation program, using entertaining examples to show the program's lack of true understanding. He made a convincing argument that good language translation will not be possible without first developing a conscious AI.

– Craig Hicks – 2018-02-12T03:17:11.547


Yes, I don't think anyone conscious would make quite the mistake below, although I can imagine a translator sometimes being unable to do any better with arcane terms of art. It's the lack of self-awareness in the output: https://www.reddit.com/r/softwaregore/comments/6bxh2m/when_google_translate_meets_the_german_language/

– Atcrank – 2018-02-12T05:18:28.093

About the first part: My old AI prof used to say that asking if computers can think is like asking if submarines swim. The answer depends on how you want to define "swim" more than it does on what the machine is actually doing. – T.E.D. – 2018-02-12T16:40:23.840


In addition to Jaden's excellent answer "no one is trying to actually make a “conscious” AI because we don’t know what that word means yet" I'd like to add that the word "yet" there is highly optimistic.

It's highly problematic, and likely impossible, to distinguish between a conscious being and a being that behaves exactly as if it were conscious. Philosophers have been struggling with that for centuries; some even espoused solipsism, which is an "I live in the Matrix" philosophy. In particular, how can you tell whether your childhood friend, your spouse, or anybody else is a conscious being rather than an embodiment of AI that acts exactly as a conscious being would?

It's possible, of course, to go the "if it walks like a duck and quacks like a duck then it's a duck" way. In that case, an AI that passes the Turing Test would automatically be considered conscious. However, most people wouldn't accept the duck criterion of consciousness; otherwise they would very soon have to call their Alexa-operated household appliances conscious.

My two cents are basically the same as Jaden's, except that I'm more pessimistic about ever understanding what consciousness is.



Reputation: 121

I think the Duck Test is sufficient if an AI passes a full Turing Test, administered by intelligent adults. (My problem with the Chinese Room is that it is based on qualia, and I suspect the only way to validate the qualia of another entity is to become that entity.) – DukeZhou – 2019-02-28T21:11:29.800

I might also argue that any percept fulfills the most basic definition of consciousness, and the ability to make any decision based on input satisfies the most basic definition of intelligence. For all we know, consciousness may just be some form of meta-cognition, and entirely a function of the complexity of the system. – DukeZhou – 2019-02-28T21:14:22.023


CERA-CRANIUM is an example of a cognitive architecture intended to generate machine consciousness (MC); see "Towards conscious-like behavior in computer game characters" (2009). It was realized as a blackboard system able to execute threaded tasks. The implementation itself works with natural language: a CERA-CRANIUM agent has a variable called “I'm in fear”, and if this variable is set to True, then the emotion is activated. So it is not real consciousness; it has more in common with the internal states of characters from “The Sims”.
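To see why this is "not real consciousness", here is a minimal sketch (my own illustration, not the actual CERA-CRANIUM code) of the mechanism described: a blackboard holding named internal-state flags such as “I'm in fear”, with behaviour gated on whether a flag is set, much like scripted Sims-style character states.

```python
# Minimal blackboard sketch: emotion flags gate behaviour; nothing
# is experienced, a string-keyed boolean simply switches a branch.

class Blackboard:
    """Shared store that modules post named internal states to."""
    def __init__(self):
        self.flags = {}

    def post(self, name, value):
        self.flags[name] = value

    def active(self, name):
        return self.flags.get(name, False)

def behave(board):
    # "Emotion" here is just a flag lookup, in priority order.
    if board.active("I'm in fear"):
        return "flee"
    if board.active("I'm curious"):
        return "explore"
    return "idle"

board = Blackboard()
print(behave(board))             # idle: no flags posted yet
board.post("I'm in fear", True)  # the "emotion" is activated
print(behave(board))             # flee
```

The full architecture layers threaded perception and goal-processing tasks over such a blackboard, but the gap between setting a flag named "fear" and feeling fear is exactly the gap the answer is pointing at.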

The interesting aspect is that “machine consciousness” isn't as esoteric as it looks. Google Scholar finds around 3k papers about it. In most cases, development starts with the aim of implementing emotions for in-game characters, which are later extended into the general thoughts of a virtual human.

Manuel Rodriguez


Reputation: 1

Wouldn't it be funny if true AI "life" develops (evolves?) not from research but ad hoc, by game-software writers trying to make a buck. – Craig Hicks – 2018-02-12T02:53:17.193

@CraigHicks : you mean, like in Friendship is Optimal?

– vsz – 2018-02-12T07:14:47.193


Consciousness is the ability to be aware of your own thoughts, your immediate surroundings, and your feelings - nothing more. It is our brain's mechanism for controlling our lower kind of thought, the kind based on associations and emotions. Consciousness observes our thoughts and feelings just as we observe the real world with our eyes. It is not complicated. The real question is not whether machines are capable of consciousness but whether they are capable of emotions.

Tone Škoda


Reputation: 227

There are plenty of species besides modern humans exhibiting strong emotions. These emotions are instinctive - they are very important in defining social behavior, which enhances the survivability and fitness of the "group". I can't see how human emotion is qualitatively any different from that of other mammals. In contrast, human consciousness is several orders of magnitude more developed than in the nearest other species. Where human emotion does seem to differ from that of other species is only in the ability to control emotion through consciousness. – Craig Hicks – 2018-02-12T03:06:02.417

This answer makes no sense without a definition of 'your'. – DrMcCleod – 2018-02-12T09:22:33.443