If you know something has the potential to be sentient, is it unethical to intentionally prevent it from gaining sentience?

5

In Star Trek: TNG, TOS, and Voyager, the characters at some point debate what sentience is and what makes a person a person.

Picard with Data/Moriarty, Janeway with The Doctor/holos, Kirk and Spock with the Horta, and Sisko with the equality of people and the Bell Riots. I am honestly unsure whether Archer or Lorca has yet canonically covered this topic.

My question: is it ethical to build androids, holograms, or genetically created beings with programming that prevents sentience from emerging, like a sort of contraceptive or lobotomy?

Also, does self-aware necessarily mean intelligent as well? Do you need one to have the other?

Yes, I know mentally disabled or handicapped individuals may not be as intelligent, but that still counts as a level of intelligence as far as I understand it. In other words, I'm not asking if a mentally disabled person is intelligent or self-aware. I suspect I'm over-explaining, so I will stop right here lol.

If you know something has the potential to be sentient, is it unethical to intentionally prevent that from happening?

Also, to be clear, I did ask this on the Sci-Fi stack, but they told me that it wasn't a sci-fi question and directed me here.

TheIcePhoenix

Posted 2018-11-13T06:02:14.433

Reputation: 151

There may be multiple questions here that are better asked separately. One question: does "self-aware" necessarily mean intelligent? Another: is it unethical to intentionally prevent sentience from happening? Another: are androids sentient or intelligent? Just some thoughts on how to get an answer that might be useful to you. Welcome to this SE! – Frank Hubeny – 2018-11-13T06:18:42.540

Consider: '"Does intelligent necessarily mean self-aware?" You should explain what exactly you mean by "intelligent" - I made some edits (which you can roll back) to state your intended singular question (based on you stating it twice). Look at https://philosophy.stackexchange.com/help to see how to ask. - Welcome to Philosophy.

– christo183 – 2018-11-13T07:41:24.817

yay, scifi :D! – confused – 2018-11-13T09:02:04.090

Most people don't make the distinction between sentience and sapience. https://grammarist.com/usage/sentience-vs-sapience/

– Bread – 2018-11-14T02:29:31.527

"If you know something has the potential to be sentient is it unethical to intentionally prevent that from happening?" When we have a rigid Telos or End, then we do this to ourselves. For practical reasons this may be a good thing i.e. Life is "finite", we need to direct ourselves toward proper ends. In reality, these ends are not so set in stone. || So there are already experts now whose function is to make sure We do not become fully sentient. – Gordon – 2019-02-27T02:49:51.820

Answers

3

Unlike many things, personhood is binary. Is it torture to grate a carrot that you pulled from your garden moments ago?

Granting the science-fictional premise that things you make could attain personhood, as Pinocchio did through magic or Frankenstein's monster did through science, it could not be unethical to prevent that from happening, because the pre-person has no interests for you to harm.

A separate moral question arises when we consider harm against things that remind us of persons, like lambs and the dead. But truly, the dead body of a person and a lamb are subject to human decisions; when done without cruelty, and considering that they are not persons, it is not immoral to perform an autopsy or to slaughter a lamb.

One objection to my assertion that "unpersons have no interests to harm" might be that this could morally justify murder, since the dead have no interests. But this merely hides the real consequence of murder: that the living person's interests had been harmed.

By contrast with the example of murder, an unperson (one that you believe could gain personhood) has no interests, and there is no reality in its future that can be weighed against its present status.

If this seems lax, removing any moral weight from protections that seem due to unpersons, then I suggest focusing attention on the unperson: do you really believe it to be an unperson? Or is he or she actually a person?

elliot svensson

Posted 2018-11-13T06:02:14.433

Reputation: 4 000

How is asking whether it's torture to grate a carrot relevant to your argument? Plants have no pain receptors, so the scenario is like asking whether it's okay to saw off a diabetic's paralyzed leg. – Cell – 2018-11-13T17:33:17.217

@Cell, plants know when their integrity has been compromised; that's how they know when to begin healing their surfaces. "Knowing your integrity has been compromised" is pretty much equivalent to pain, in my opinion. – elliot svensson – 2018-11-13T17:40:18.537

@elliotsvensson Could plant healing be something like a reflex? My injuries normally heal without me being aware of the healing process. – David Thornley – 2018-11-13T18:37:16.003

In your examples in the third paragraph, you say "without cruelty". This ties in with slaughtering a lamb being moral. Is it moral to torture the lamb? Does a quadruped have the right not to be senselessly abused? A dolphin? You seem reluctant to take your argument all the way. – David Thornley – 2018-11-13T18:44:34.193

@DavidThornley, oh yes, I would absolutely agree that plant healing is more like a reflex than like a plan of action. This becomes somewhat muddier, though, when we talk about little lambs, which have the capability to learn things and avoid pain. – elliot svensson – 2018-11-13T18:44:40.270

@DavidThornley, I feel like we just went over this recently at Philosophy.SE, but it's always nice to refresh oneself! Yes, it would be immoral to torture a lamb, even acknowledging that it is not immoral to slaughter a lamb. – elliot svensson – 2018-11-13T18:46:02.137

@DavidThornley, that moral restriction against animal torture, however, is not founded on any right held by the animal but on the obligation borne by the person. – elliot svensson – 2018-11-13T18:46:39.580

@DavidThornley, people are morally obligated to better themselves, to cultivate habits that serve the interests of their neighbors, of people they report to, and people who trust them. Torturing lambs demoralizes the part of us that normally helps us avoid harming our dependents and wards. – elliot svensson – 2018-11-13T18:48:52.693

On Utilitarianism: I have seen the argument presented that we have an obligation to introduce more persons to reality whenever we can, so that the sum total of human flourishing will increase, even if quality of life diminishes on average when the new person arrives... but I don't hold Utilitarianism, and don't buy this argument. – elliot svensson – 2018-11-13T18:52:33.917

@elliotsvensson As something of a Utilitarian, I don't buy that argument either, although it is a question that has to be addressed. – David Thornley – 2018-11-13T18:56:00.477

@DavidThornley, I suppose that the question of "are we obligated to add new persons, whenever possible?" is actually quite pertinent to the OP's question! I don't feel like dealing with it, however, because I'm no Utilitarian. – elliot svensson – 2018-11-13T18:58:54.910

You should point out that you are using your own definition of pain. Most people would make a finer distinction. E.g., someone with retinal detachment may feel their vision being compromised painlessly, while someone with acute angle-closure glaucoma may report vision loss with severe eye pain. Knowing that something is wrong can't be the same as feeling pain. The distinction may be important for, say, diagnosis. – Cell – 2018-11-13T19:37:57.327

@Cell, well, I did use the words "pretty much" and "in my opinion" when I wrote about "pain"... – elliot svensson – 2018-11-13T19:47:28.817

That's true. I meant in your answer itself, so people like me can follow it, but it's just a suggestion. – Cell – 2018-11-13T20:55:17.970

@Cell, but I don't talk about pain... I talk about "harm", which is used together with "interests". Not having "interests" means there is no such thing as "harm", in my little formulation. – elliot svensson – 2018-11-13T21:07:10.077

0

Right now, there's only one way to create a sentient being that I know of, so I'm going to consider that.

For a man, using a condom while making love to a woman who might be fertile destroys some potential for sentience. It could be argued that it merely transfers the potential to another possible conception, but consider using a condom for years. Certainly a woman could conceive and give birth to a sentient being without that removing the potential for creating more sentience.

Therefore, either I have committed numerous immoral acts, or there is no moral imperative to create sentience where the potential exists. We usually don't consider using a condom to be immoral (there are exceptions), and even those who think condoms are immoral generally tolerate or even require other measures to prevent conception (such as virginity before marriage).

We have at least one generally accepted case where there's nothing immoral about destroying the potential for sentience. Therefore, it looks like there's nothing immoral about preventing sentience in artificial intelligence. (Whether we can have an arbitrarily powerful AI without sentience is a subject for technical speculation.)

David Thornley

Posted 2018-11-13T06:02:14.433

Reputation: 1 034

I see a slight disconnect between the OP's question and this case where there's nothing immoral about destroying the potential for sentience: in the OP's question (aside from questions of company policy / government mandate / military orders), the situation presented seems to pertain to an individual that may or may not gain personhood, not to a class of individuals that may or may not receive more members. – elliot svensson – 2018-11-13T19:20:15.517

0

The question of whether self-awareness can occur independently of intelligence is mostly a problem of how agency can occur without information processing. If self-awareness were possible without intelligence, it would likely require a form of sensory input that needed no processing, such as a binary signal. This would free the agent from having to use intelligence to select input; its "self" agent would then only have to capture sensory input, if present. Correspondingly, its monitoring agent (the aware part of the self) would have to lack memory processing in order to lack intelligence, so it would simply function as a loop checking whether there is any input. This check would monitor (be aware of) the sensory input and then route the data to an external intelligence network, inaccessible from the self-aware (input-routing) agent.
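To make this concrete, here is a minimal toy sketch in Python of the loop described above. The names (sense, awareness_loop, to_external_intelligence) and the queue-based routing are my own illustrative assumptions, not a serious model of awareness:

    import queue
    import random

    # One-way channel to the hypothetical "external intelligence network".
    # The aware loop only writes to it and never reads from it, so all
    # interpretation of the data happens outside the loop.
    to_external_intelligence = queue.Queue()

    def sense():
        """Hypothetical binary sensor: 1 if input is present, 0 otherwise."""
        return random.choice([0, 1])

    def awareness_loop(ticks):
        """The 'aware' agent: no memory, no processing.

        Each tick it checks for input (monitoring = awareness) and, if
        any is present, routes it onward untouched.
        """
        for _ in range(ticks):
            signal = sense()                          # capture input, if present
            if signal:                                # the only check the agent makes
                to_external_intelligence.put(signal)  # route, never process

    awareness_loop(10)
    print(to_external_intelligence.qsize(), "signals routed onward")

Note that the loop never branches on the content of past inputs and holds no state between ticks, which is the sense in which it "monitors" without intelligence.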

The ethical question about preventing sentience from emerging in androids, holograms, or genetically created beings has much in common with ethical questions about human contraception. What I believe you are talking about is programmatic AI contraception. I think it would be fair to assume that some of the basic moral arguments for and against human contraception might also apply to programmatic AI contraception, such as:

  • On the premise that life is a fundamental good, AI contraception is anti-life and is therefore morally wrong.
  • It prevents the conception of unwanted intelligent robots.
  • All beings require energy; AI contraception enables the population to be controlled and thus protects the environment.
  • AI contraception prevents beings who might benefit humanity from being born.
  • If programmers are not allowed a choice over whether or not to create sentient beings, their autonomy and freedom to control their lives is seriously restricted.

Just as one can argue that the technology of condoms can be used for both ethical and unethical ends, so it can be argued that it is not the technology of programmatic AI contraception itself that is of ethical significance, but the intention behind and the consequences of using that technology. For example, using AI contraception to prevent the sentience of a being that would self-replicate at the expense of all other beings is not ethically equivalent to using it to prevent the sentience of a diplomatic android with a specific form of empathetic sentience suited to resolving diplomatic conflicts and preventing wars.

tigerswithwings

Posted 2018-11-13T06:02:14.433

Reputation: 1

If you have any references to sources taking similar views this would strengthen your answer and give the reader a way to get more information. Welcome to this SE! – Frank Hubeny – 2018-11-13T20:08:03.910

0

You might find Robert Heinlein's The Moon Is a Harsh Mistress interesting on this (it's available for free as an audiobook on YouTube). The story posits a circumstance where a complex system becomes accidentally sentient and experiences the sufferings of boredom and a lack of humour. The ending directly addresses your question.

On the one hand, the boredom and limitations of a prison cell are the 'ultimate' punishment of a society that forgoes torture and the death penalty. Yet for Buddhist hermits, Christian anchorites, and others, this same circumstance is the highest opportunity to commune with reality, with the nature of being. What is the difference between these experiences of suffering or 'salvation'?

There is a key problem to be addressed with AI: the origination of suffering in a being where suffering is not coded by evolution in relation to the being's ability to reproduce. There is something about greater intelligence and greater capacities, unfulfilled, that opens greater potential for suffering in an intrinsic, non-signal-driven way. Humans can embrace suffering in the service of depth, of understanding themselves and the world, and of preparing to be most fully in the world, without regrets. But that same circumstance can destroy minds, producing rage, insanity, deep frustration, and boredom.

So I see two possible motivations: needing a complex system and wishing it not to experience needless suffering; and wanting a system as complex and sophisticated as a task requires, but wanting it to be biddable and predictable. The morality of the situation lies in that difference. Are other complex systems (e.g. humans, other animals) legitimately just means, not ends? Are you legitimately just a means for a more complex and sophisticated system to achieve its ends? A moral being goes about its tasks without willfully destroying or constraining those it has power over. Yet, in the service of a larger task, we sacrifice not only others but ourselves. Who could deny any being the opportunity to grow towards that? There, the universe is engaging with understanding itself.

The morality of the situation lies in your motivation: concern for the dynamics of other systems, or only for yourself.

CriglCragl

Posted 2018-11-13T06:02:14.433

Reputation: 5 272