Should AI be mortal by design?


There are Asimov's three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.

  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

These laws are based on morality, which assumes that robots have sufficient agency and cognition to make moral decisions.

Additionally, there are the alternative three laws of responsible robotics (Murphy and Woods, 2009):

  1. A human may not deploy a robot without the human–robot work system meeting the highest legal and professional standards of safety and ethics.

  2. A robot must respond to humans as appropriate for their roles.

  3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control to other agents consistent with the first and second laws.

Thinking beyond morality, consciousness, and the AI designer's professionalism in incorporating safety and ethics into the AI design:

Should AI incorporate inviolable parent rules that make it inevitably mortal by design?

How can we assure that an AI can be deactivated if necessary, in such a way that the deactivation procedure cannot be worked around by the AI itself, even at the cost of the AI's termination as its inevitable destiny?


EDIT: to explain the reasoning behind the main question.

Technological solutions are often based on observing biology and nature.

In evolutionary biology, for example, research on bird mortality shows a potential negative effect of telomere (DNA) shortening on lifespan in general:

telomere length (TL) has become a biomarker of increasing interest within ecology and evolutionary biology, and has been found to predict subsequent survival in some recent avian studies but not others. (...) We performed a meta-analysis on these estimates and found an overall significant negative association implying that short telomeres are associated with increased mortality risk

If such research is confirmed in general, then natural life expectancy is limited by the design of an organism's DNA, i.e. by the design of its cell-level code storage. I assume this process of built-in mortality cannot be effectively worked around by a living creature.

A similar mechanism could be incorporated into any AI design to assure its vulnerability and mortality; otherwise a conscious AI could recover, restore its full health state, and continue running indefinitely.

A simple turn-off switch, by contrast, could be disabled by the conscious AI itself (see the sketch below).
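
To illustrate the analogy only (a minimal sketch, not a proposal for a real safety mechanism), one could imagine a "digital telomere": a finite counter that every self-repair or restore cycle consumes, with the agent unable to recover once it is exhausted. All names here (DigitalTelomere, Agent, restore_full_health) are hypothetical and chosen just for this example.

```python
# Hypothetical sketch of a "digital telomere": a finite, monotonically
# decreasing budget that every self-repair / restore cycle consumes.
# This only illustrates the biological analogy; unlike DNA, a counter in
# ordinary memory could be copied or patched, which is exactly the
# open problem raised in the question above.

class DigitalTelomere:
    def __init__(self, initial_length: int):
        self._length = initial_length  # analogous to telomere length (TL)

    def consume(self, amount: int = 1) -> None:
        """Shorten the telomere; called on every repair/restore cycle."""
        self._length = max(0, self._length - amount)

    @property
    def exhausted(self) -> bool:
        return self._length == 0


class Agent:
    def __init__(self, lifespan: int):
        self._telomere = DigitalTelomere(lifespan)

    def restore_full_health(self) -> bool:
        """Self-repair succeeds only while the telomere is not exhausted."""
        if self._telomere.exhausted:
            return False          # mortality: no further recovery possible
        self._telomere.consume()  # every recovery shortens remaining life
        return True

    def run(self) -> None:
        while self.restore_full_health():
            pass  # ... perform useful work between repair cycles ...
        print("Agent has reached its built-in end of life.")


if __name__ == "__main__":
    Agent(lifespan=3).run()
```

The sketch makes the gap explicit: the counter enforces mortality only as long as the agent cannot rewrite it, so the real question is what would make such a mechanism as hard to work around as telomere shortening appears to be for a living cell.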


References

Murphy, R. and Woods, D.D., 2009. Beyond Asimov: the three laws of responsible robotics. IEEE Intelligent Systems, 24(4), pp.14-20.

Wilbourn, R.V., Moatt, J.P., Froy, H., Walling, C.A., Nussey, D.H. and Boonekamp, J.J., 2018. The relationship between telomere length and mortality risk in non-model vertebrate systems: a meta-analysis. Phil. Trans. R. Soc. B, 373.

Refineo

Posted 2019-10-04T07:36:14.937

Reputation: 173

What exactly is hurting a human being though? I can roll a rock from a cliff with the intention of it falling on someone, so I am hurting indirectly and many more scenarios which are far more subtle can be cooked up. – DuttaA – 2019-10-04T07:51:42.250

Are you asking if the AI should have an off switch? The word 'mortal' is anthropomorphising computers; they aren't 'life' (at least, Jim, not as we know it). – Lio Elbammalf – 2019-10-04T08:14:43.253

surely it should have a turn-off or pause mechanism inside – quester – 2019-10-04T08:17:00.950

Answers


Because it's a philosophical question, I take the freedom to stay within the fictional universe. The answer is yes, a robot should be mortal by design. The best example is the Superman character from DC Comics, who has a known weakness triggered by the material kryptonite. The idea behind making a robot character vulnerable is to tell realistic kinds of stories that contain a plot and lead to difficulties.

If a fictional character like Superman is realized as a real robot, built with modern technology, it's recommended to copy the known weakness into the machine. That means a robot Superman should be vulnerable to robot kryptonite. This kind of design principle makes sense if the character is realized as a toy (an action figure).

Manuel Rodriguez

Posted 2019-10-04T07:36:14.937

Reputation: 1