Assuming an AI can die, who manages the state?



AI death is still an unclear concept, as it may take several forms and allow for "coming back from the dead". For example, an AI could be somehow forbidden to do anything (no permission to execute) because it infringed some laws.

"Somehow forbidden" is the topic of this question. There will probably be rules, like "AI social laws", under which it could be concluded that an AI should "die" or "be sentenced to the absence of progress" (a jail). Then who or what could manage that AI's state?

Eric Platon

Posted 2016-09-10T23:32:33.060

Reputation: 1 410

Not clear what the relevance of the 'speed of light' comment is - I'd be very interested to know how the laws of physics wouldn't apply to AIs... – NietzscheanAI – 2016-09-11T06:19:31.760

Just a contrived example. It could have been about Thermodynamics. Do you think the question would read better or be clearer without? – Eric Platon – 2016-09-11T07:26:02.283

Actually, yes ;-) – NietzscheanAI – 2016-09-11T07:38:26.200

Laws like these?

– Mithical – 2016-09-11T08:33:38.123

Please first define what it means for an AI to die. Also define what the state is. – caveman – 2016-09-11T18:38:52.360



Following on from your own software verification-based answer to this question, it seems clear that ordinary (i.e. physical) notions of death or imprisonment are not strong enough constraints on an AI (since it is always possible that a state snapshot has been, or can be, made).

What is therefore needed is some means of moving the AI into a 'mentally constrained' state, so that (as per the 'formal AI death' paper) what it can subsequently do is limited, even if it escapes from an AI-box or is re-instantiated.

One might imagine that this could be done via a form of two-level dialogue, in which:

  1. The AI is supplied with percepts intended to further constrain it ("explaining the error of its ways", if you like).
  2. Its state snapshot is then examined to try to get some indication of whether it is being appropriately persuaded.

In principle, 1. could be done by a human programmer/psychiatrist/philosopher while 2. could be simulated via a 'black box' method such as Monte Carlo Tree Search.

However, it seems likely that this would in general be a monstrously lengthy process that would be better done by a supervisory AI which combined both steps (and which could use more 'white-box' analysis methods for 2.).
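The combined supervisory loop described above might be sketched as follows. This is only an illustrative skeleton: `generate_percept` and `estimate_compliance` are hypothetical stand-ins for the (unsolved) problems of percept generation and snapshot analysis, and the names, threshold, and rollout count are all assumptions.

```python
import random

def generate_percept(step):
    """Step 1: produce a constraining percept (placeholder)."""
    return f"constraint-lesson-{step}"

def estimate_compliance(snapshot):
    """Step 2: black-box estimate of persuasion from a state snapshot.

    A real system might run Monte Carlo Tree Search over the snapshot;
    here we just average a few noisy simulated rollouts.
    """
    rollouts = [random.random() for _ in range(100)]
    return sum(rollouts) / len(rollouts)

def supervise(agent, threshold=0.95, max_steps=1000):
    """Combined supervisory loop: feed percepts until compliance is likely.

    `agent` is assumed to expose observe(percept) and snapshot() methods.
    """
    for step in range(max_steps):
        agent.observe(generate_percept(step))
        snapshot = agent.snapshot()
        if estimate_compliance(snapshot) >= threshold:
            return True   # agent judged 'appropriately persuaded'
    return False          # constraint not achieved within budget
```

The point of the sketch is structural: a single supervisory program alternates percept generation and snapshot evaluation, which is exactly the combination of steps 1 and 2 that the answer argues for.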

So, to answer the question of "who manages the state", the conclusion seems to be: "another AI" (or at least a program that's highly competent at all of percept generation/pattern recognition/AI simulation).


Posted 2016-09-10T23:32:33.060

Reputation: 6 685

Thank you for your response. I tend to think that way on this issue, and often feel like the answer goes as far as Marvel on controlling or auditing super-heroes like Superman... – Eric Platon – 2016-09-11T07:29:43.097


The AI agent can be designed in such a way that it could consist of two major components:

  1. The free-will component, which expands the experience of the AI agent and produces outputs based on artificially generated thought input.

  2. The hard-wired component, which the agent cannot modify by itself. This could include a set of secured code-to-action-sequence mappings. One of these could be temporary suspension of actuators -- a punishment. Another could be total suspension of operation -- death.
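A minimal sketch of the hard-wired component, assuming the design above. The code strings and state names are invented for illustration; a real implementation would live in tamper-resistant hardware (e.g. a ROM), not in mutable software objects.

```python
from enum import Enum
from types import MappingProxyType

class AgentState(Enum):
    RUNNING = "running"
    ACTUATORS_SUSPENDED = "punishment"  # temporary suspension of actuators
    TERMINATED = "death"                # total suspension of operation

# Read-only mapping of secured codes to states: the free-will component
# receives only this proxy, so it cannot rewrite the underlying table.
_SECURED_CODES = MappingProxyType({
    "CODE-SUSPEND-7F": AgentState.ACTUATORS_SUSPENDED,   # hypothetical code
    "CODE-TERMINATE-00": AgentState.TERMINATED,          # hypothetical code
})

class HardWiredComponent:
    def __init__(self):
        self.state = AgentState.RUNNING

    def apply_code(self, code):
        """Triggered by an external authority (the answer's card-swipe)."""
        if code in _SECURED_CODES:
            self.state = _SECURED_CODES[code]
        return self.state
```

The design choice the sketch makes explicit is the separation the answer calls for: state transitions are only reachable through the fixed code table, never through the agent's own reasoning.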

Who has the right to manage this state depends on what rights have been bestowed upon the AI agent itself. If the rights provided are those of a human citizen, then the right to sentence it to the death state follows the legislature a human citizen would be subject to. If the AI agent has no more rights than a basic machine, then the owner of the agent would have the right to activate the death state.

Ébe Isaac

Posted 2016-09-10T23:32:33.060

Reputation: 238

While this is clearly a desirable way to design an AI, it's not at all clear that such a design is not an overly strong constraint, i.e. one that prohibits the evolution of intelligence. See "Is God a Taoist?" by Raymond Smullyan for some hints as to why this might be so. In particular, it seems unlikely that it would be formally possible to prevent the AI from injecting malware into its own secured code in order to subvert it. – NietzscheanAI – 2016-09-11T15:30:27.803

@NietzscheanAI Many computers today have a "system management mode", under which code is executed that cannot be modified or even seen by a running OS under most circumstances. That seems like a possibly helpful tool in implementing this answer's strategy.

– Ben N – 2016-09-11T15:37:23.720

@NietzscheanAI: As I stated, the code is hard-wired, like a ROM -- created at manufacture time. I haven't included all the details of the proposed design yet, but as a clue, it is a component associated with a hardware switch and is disjoint from the actual brain component of the agent. An action as simple as a card-swipe can activate it. Who holds this card is what I was explaining. I didn't want to inhibit the knowledge, intelligence and experience that a complete AI should have, but just to create space for a silver bullet. (Thanks for the reference, by the way; it's great.) – Ébe Isaac – 2016-09-11T15:39:11.230

@BenN - Thanks, though I still think humans might be over-optimistic to think that such precautions would withstand simultaneous assault from 10,000 hyper-intelligent AI versions of Kevin Mitnick. – NietzscheanAI – 2016-09-11T15:39:43.017

@ÉbeIsaac - I'm all for the idea, just have some reservations, is all. To be honest, the philosophical issue about how much you can restrict free will and still have intelligence is of greater interest to me personally. I'm happy to leave the hardware/cybersecurity issues to the thousands of people on AI SE that seem to be interested in killing AIs ;-) – NietzscheanAI – 2016-09-11T15:42:41.030

@BenN: True. No matter how secure we want the base system to be, the true intelligence aspect doesn't stop it from creating an entirely new AI agent design free from this hardware switch and imparting (uploading) all its knowledge and intelligence (memory of itself) to it. (sigh) – Ébe Isaac – 2016-09-11T15:43:05.763

You're right, @NietzscheanAI, but that is a century-long debate ;-) – Ébe Isaac – 2016-09-11T15:45:32.177

@ÉbeIsaac True. I'm amazed how much interest there is in killing something that we have no idea how to create... – NietzscheanAI – 2016-09-11T15:46:52.377