## What are the ethical problems with flipping a coin to decide in the trolley problem?

My understanding is that John M. Taurek suggests that, in the trolley problem, we should flip a coin when deciding between saving 5 lives versus 1 life (assuming we do not know any of these people). He says that this gives everyone an equal chance of survival, which is most fair/reasonable to him.

This seems inherently wrong to me, but I can't understand why without appealing to utilitarianism. How can I argue against this without appealing to utilitarianism?

The ethical problem is that you pretend to avoid making a decision - but you actually already made a decision, namely that both of these outcomes are equal enough to justify a 50/50 choice.

Even before the coin is tossed, there is an ethical problem of quantifying the importance of lives purely based on comparing numbers. – dtech – 2018-04-27T20:51:04.633

That's beside the argument - which is that you lack the information you could base such a decision on in the first place. Also, you assume that it is all about the outcome (number of persons saved), i.e. a utilitarian/consequentialist view, which is explicitly what the question excludes from an answer. – Philip Klöcking – 2018-04-28T14:30:02.613

Siding with @PhilipKlöcking on this ...it should be quite easy to see the availability of a line of logic which entirely disregards the consequences and is merely deciding which of two branches a runaway trolley shall take. That seems to me to be exactly what the question is trying to investigate: how would you assert that one branch carries more weight than the other without appealing to disproportionate consequences. I say you can't. Without the consequences, it's 50/50. – K. Alan Bates – 2018-04-29T02:12:49.587

Even within a utilitarian framework, your own perception of utility(there is no objective measure of utility) and situational knowledge would invariably apply. 5 Octogenarians vs 1 child; 5 construction workers vs 1 scientist; 5 politicians vs 1 of "anything else" do not necessarily carry a 5-1 weight even within a utilitarian view. It's been my opinion that the most important insight here is that there is no objective answer to the problem. Choose your subjectivity wisely. – K. Alan Bates – 2018-04-29T03:02:08.793

@K.AlanBates - without the consequences, there is no ethical dilemma. If you reduce the question to "which of two tracks should a trolley take?" without information about what the decision entails, then there is no ethical dimension. – Tom – 2018-12-27T14:51:28.093

I'm missing something. How did this answer get so many upvotes? The basic claim that "both of these outcomes are equal enough to justify a 50/50 choice" is blatantly wrong: one outcome kills one person, the other outcome kills several people. It misses the whole point of the question. The question is about Taurek's observation that flipping a coin gives each individual potential victim a 50% chance of survival, not that the potential outcomes are equal. – Ray Butterworth – 2019-10-24T03:17:20.570

@RayButterworth you didn't read the answer closely. It does not state that the outcomes are equal. It states that if you flip a coin you have already decided that doing so is a fair decision, i.e. that the outcomes are equal. The answer specifically states that because of that, flipping a coin is not avoiding a decision, but making one. – Tom – 2019-10-24T05:22:23.567

It says that you decided that the outcomes are "equal enough". The outcomes are killing one person or killing five people. I don't see what criteria makes these seem "equal enough", when they are so obviously very unequal. – Ray Butterworth – 2019-10-24T13:21:06.847

@RayButterworth but that exactly is the point. I think everyone else got it. Please read the answer carefully. You will find that it points out the exact thing you are arguing. – Tom – 2019-10-24T15:21:02.127

This is known as the trolley problem. There is a runaway trolley and people tied to the tracks: switch to kill 1 and save 5, or do nothing and let the 5 die. Perhaps the most effective reductio of Taurek's proposal is to up the ante: instead of 1 vs 5, take 1 vs 5 billion. His logic still suggests that a coin should be flipped: "let the world die but let justice prevail".

However, none of the standard ethical systems would endorse Taurek's solution. A Kantian deontologist would have to do nothing, because either switching or flipping a coin is against the moral duty (not to willfully kill) and hence "inherently wrong". A virtue ethicist would have to switch, because switching is a compassionate act, and compassion is a virtue. And most forms of consequentialism, not just utilitarianism, would endorse switching, because the consequences of the 5 surviving are likely to be superior even in the absence of a single utility, and hence of a calculus on human lives. Indeed, it is hard to come up with an ethics that does endorse Taurek's solution: it would have to be a form of deontology where "equal justice for all" is the highest moral duty.

Empirical studies show that about 90% choose to switch, unless the 1 is a relative or a lover, in which case the number drops steeply. This does not bode well for general ethical arguments, and suggests situational ethics with "the devil in the details", the very details that trolley problems are often criticized for abstracting away.

Given the state of the planet there is even a potential argument for saving as few as possible. For me the key factor would be that it is not the action that will decide the ethical value of a decision but the motive behind it. If we do the best we can to make an ethically sound decision then we will have succeeded. – None – 2018-04-25T13:03:26.387

I can completely understand the concept of "giving each individual a fair chance of survival". While nearly all of us would pull the switch to kill the 1, rather than the 5, I'm sure we'd be rather more in favour of justice in the event that one of us found ourselves as the "1" rather than a member of the "5". – Jon Story – 2018-04-25T14:11:32.407

@JonStory, in the case where one of us would be the 1, that's not the justice that we refer to. That's just the desire to live. – rus9384 – 2018-04-25T16:13:26.940

Wouldn't it make more mathematical sense to flip once for the one person, and then five times for the five people (thus six times) and then deduce some sort of outcome? I'm not a mathematician, but something along those lines seems more fair to me. – Asleepace – 2018-04-25T22:47:34.247

@Asleepace, let each person have a flip, and the group that got more heads gets saved. Statistically that would be consistent with my reasoning. – rus9384 – 2018-04-25T23:31:11.473

And I would throw the switch halfway, forcing a derailment on the spot. (Which is exactly what's going to happen to a runaway trolley sooner or later anyway.) – Joshua – 2018-04-26T02:47:43.133

I think that in practice most people value other people's lives more highly the closer those people are to them. At least, the value of a life is not regarded as a constant. – Trilarion – 2018-04-26T08:39:28.013

@Joshua, interesting idea, but what about the people in the trolley? They could get hurt. – Solomon Ucko – 2018-04-27T00:04:43.510

You might be interested in this video that had some people literally playing out the trolley problem. I think the "90% choose to switch" claim is too high for real life scenarios. – David Starkey – 2018-04-27T14:53:00.813

I think that the Kantian solution should actually be do something to save someone if you can, and as the decision whom to save (which is an ethical, not a moral one in the narrow sense) will be undecidable due to the absolute worth of every single person, giving every single person the exact same chance to survive via an appropriate mechanism indeed becomes the best solution. Long story short, there are some arguments from a rational moral philosophy for that, intuitionist and utilitarian/consequentialist sentiments (which are indeed prevalent even in "Kantian" authors) aside. – Philip Klöcking – 2018-04-28T14:35:28.967

You ignore the Malthusian option, which would choose to kill the 5 billion by preference. – K. Alan Bates – 2018-04-29T02:10:23.747

From an existentialist point of view, this strategy wrongly places a human decision in the hands of an effectively random, physically determined process. Existentially speaking, the decider still bears full and undiminished responsibility for the final choice. The intermediary of the coin is the decider's attempt to deny this to himself, further disguised by recourse to an odd and seemingly unworkable mechanistic notion of justice.

So in the larger picture, the crisis here is the illegitimate abdication of the burden of human judgment through deferral to a mechanical process or algorithm. There is a lot of relevance here, both looking backwards, to the entire question of the rule of law, and forwards, towards the increasing likelihood of being judged morally by computerized justice.

"The intermediary of the coin is the decider's attempt to deny this" (responsibility). Likely so, but not necessarily. Imagine a nihilist, with hardly any information on the six people available. Then his/her resort to chance can be a personal, authentic solution without any flight from responsibility. He/she could, for example, think of it as a round of Russian roulette for those people. – ttnphns – 2018-04-25T17:37:23.840

@ttnphns Nihilism is not an ethical framework, but rather the lack thereof. – Chris Sunami supports Monica – 2018-04-25T17:46:11.663

Chris, a nihilist will have problems with working out or supporting values. Any value can be a point for moral decision. – ttnphns – 2018-04-25T17:51:03.393

EDIT:
I would like to note that nowhere in the original post does it posit that the moral agent in question (in this case, the Kantian deontologist) is only able to pick one of these two choices (flip or don't flip the coin). The question isn't: either you flip a coin to determine the death of the 1/5, or you don't and they all die. If that were the question, my answer would be very different. The question is instead whether the best possible way of choosing who should die, when faced with a decision between x and y (where the only difference between x and y is, according to the knowledge available to us, the number of potential victims), is to flip a coin. Yes, under a Kantian framework, we might have a moral requirement to do something - but I don't think it would be to do this.

In response to something Conifold wrote, I will first say that I do not think the Deontologist would automatically choose to do nothing when presented with this issue. A Kantian Deontologist might have certain moral duties, but to willfully choose to have a coin-toss be the decisive factor in the life or death-sentence of 1-to-5 people goes against the first Categorical Imperative (and the Second, in my opinion): "Act only on that maxim whereby thou canst at the same time will that it should become a universal law" (Fundamental Principles of the Metaphysics of Morals, Section 2).

Imagine the consequences that would result if, whenever we were presented with moral issues concerning life or death, such matters were universally decided upon with a mere coin toss. Under deontology, it is a contradiction for moral agents (with genuine powers of will and critical thinking) to make such decisions on the basis of luck alone. Similarly, consider that this Categorical Imperative is usually interpreted as being akin to the golden rule: treat others as you wish to be treated. Suffice it to say, I think we can agree that we would not want people to judge the worth of our lives based on a mere flip of a coin.

In the abstract to the work, "Kantian Ethics and Economics: Autonomy, Dignity, and Character", Mark White wrote that the "key aspects of Kant's moral theory ... [include] autonomy, judgment, dignity, perfect and imperfect duty, and the categorical imperative"; note the emphasis on the rational faculties of autonomy, judgement, and the like. I don't think you need to defer to Utilitarianism to reject Taurek's claim. I think you can merely defer to the definition of what ethics is supposed to be about (under a Kantian Deontologist's interpretation, at the very least). A coin toss leaves our moral choices and actions entirely to chance, stripping us of the need for critical thinking, compassion, rationality, and ethical debate - things that I believe are crucial to the foundations of our moral decision making.

I disagree that it violates the first categorical imperative. The issue here is that not flipping a coin will not give some people a chance to survive (if the operator refuses to act, the initial group is condemned; if he does act based on the number of lives saved, the minority is condemned). The question here is more "do you want your fate to be decided (which includes decisive death for some), or do you want to leave it up to chance?" The former is only objectively better if saving everyone is a possible outcome, which, for the trolley problem, it is not. – Flater – 2018-04-26T07:39:38.390

So I could similarly argue that not flipping that coin violates the first categorical imperative. No one would want to be condemned to death by a third party, and therefore no one should act in a way that condemns anyone else to die (including through willful inaction). Flipping a coin removes certain condemnation, essentially giving the otherwise condemned party (whoever it is) an increased 50% chance to survive. – Flater – 2018-04-26T07:41:28.983

This can swing either way. Unless we have a reasonable idea about what someone would choose (leaving their survival up to chance or arbitration), we cannot actually evaluate which option would be picked by anyone other than ourselves (and that's even assuming everyone is able to pick for themselves). – Flater – 2018-04-26T07:45:40.243

I think that especially in a Kantian framework, you should be careful to distinguish moral and ethical decision. And in my understanding, the moral decision would indeed be to do something, whereas we cannot and should not discuss the question of whom to save within morality. There is no moral decision here. For Kant there are no moral dilemmas, remember (Ak. 6:224)? This is a question of ethics. And I am actually quite sympathetic towards saying that externalising responsibility while at the same time giving every person potentially saved equal chances is quite a good ethical thing to do. – Philip Klöcking – 2018-04-26T20:44:58.580

Great comments here. I've amended my answer to clarify what I was attempting to say, but perhaps you will both still see my interpretation of the Kantian Deontologist's decision differently. – xxWallflower – 2018-04-27T02:09:56.057

This question is strongly related to the current debate about autonomous driving. When a crash is unavoidable, how can/should the car's computer decide what it should crash into, the group of five to the left or the single person to the right?

The answer is more or less obvious: It can't make an ethical decision.

Why is that? Simply because the car's computer has no information about the individuals it has to decide over.

And I think that is the point the original statement makes: When you have no information about the members of the two groups you cannot make an ethical decision. Hence, you should not make the decision and can only randomly pick one alternative.

Extreme example: The larger group may be a chain gang of convicted serial killers working at the side of the road, and the smaller other group may be elementary school kids waiting for their bus. If you know this, your decision may be different.

More mathematically speaking, you cannot know the probability for individuals to belong to one group or the other ("how they got there"). Thus the coin flip (50% chance) is fair in that it carries the prior (relative) probabilities unchanged into the probabilities of death. If an individual had a 90% chance of finding himself in group A and a 10% chance for group B, then after the coin flip he faces a 45% (90% x 50%) chance of dying as a member of A versus a 5% (10% x 50%) chance as a member of B. The 9:1 ratio is maintained.
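
The arithmetic in that paragraph can be sanity-checked in a few lines. This is a sketch of my own (not from the answer), using the answer's illustrative 90%/10% membership probabilities:

```python
# Check that a 50/50 coin flip preserves the prior 9:1 ratio.
p_in_A = 0.9            # prior probability of ending up in group A
p_in_B = 0.1            # prior probability of ending up in group B
p_killed = 0.5          # the coin condemns either group with equal chance

p_die_in_A = p_in_A * p_killed   # 0.45: die as a member of A
p_die_in_B = p_in_B * p_killed   # 0.05: die as a member of B

# The flip scales both prior probabilities by the same factor,
# so their ratio (9:1) is unchanged.
ratio_before = p_in_A / p_in_B
ratio_after = p_die_in_A / p_die_in_B
print(ratio_before, ratio_after)
```

The same holds for any prior and any (possibly biased) coin: multiplying both memberships by the same kill probability cannot change their ratio.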

Of course, if you accept the "no information = no ethical decision" conclusion this implies that you should try to acquire relevant information. ("Look, people in group A all wear orange suits and are chained together.") However, you can never acquire all information about the past of the individuals, or their ethical 'value'. And you cannot even know if you received enough information yet. Hence, how can you be sure to make the ethically right decision?

But you in fact have information: the size of the group. And the probability that the school kid is in the larger group is also larger. – Thern – 2018-04-25T12:36:08.320

Absolutely! In this, the original question is already an extension of: What if you have two groups of individuals, A and B. Which group should be done harm to? - You can't tell without knowing anything about the groups. Next level of information is you know that group A is larger. What is the right decision? Next level is you know that group A wears orange suits. What to do? - So the question is, what is enough information? Or, specifically, can the pure number of individuals be enough information? – JimmyB – 2018-04-25T12:42:26.127

@Thern "the probability that the school kid is in the larger group is also larger." - Why would that be? - You cannot state that without further assumptions or information. – JimmyB – 2018-04-25T12:44:43.740

Autonomous cars do not make decisions. They implement the decisions made by their designers. – None – 2018-04-25T13:04:44.967

When the size of the group approaches 7 billion, the probability that it contains school kids approaches 1. This makes it clear that the probability must increase with the size. Or view it from this angle: if there is a school kid, and I know nothing else about the groups, the probability that it is in group A is A/(A+B). – Thern – 2018-04-25T13:05:31.883

I would not say that the size of the group is enough information. But it is information. You can't state that you know nothing about the groups and therefore must flip a coin. – Thern – 2018-04-25T13:06:44.363

With autonomous driving, there is another wrinkle, which is a scenario where the algorithm makes a choice between killing a group or killing the passenger in the car. Say, by crashing into a wall to avoid a group of children. Can the algorithm's designer ethically make this choice for the passenger? If so, do they have an ethical obligation to inform prospective owners that, should the car detect a choice between killing more than one person and allowing the passenger to possibly die, it will choose to kill them? – Dan Bryant – 2018-04-25T15:26:49.887

@Thern Knowing that there must be a school kid in group A or B is significant information. Killing 7 billion people is certain to kill at least one school kid, but is just as certain to kill a couple of uncaught serial killers. What do you do? - We're all making assumptions about the world around us all the time based on more or less information. But in the theoretical scenario we don't have any information apart from group size, no way to know if there are kids or serial killers in any or both groups. – JimmyB – 2018-04-25T16:57:08.507

@PeterJ I don't think we can make that general statement. Software systems (AI's even more so) are not deterministic in such a way that their designers can foresee any and all possible reactions to any and all possible inputs. They set bounds and parameters, but what the system does depends on the combination of all input parameters at run time which yields almost infinitely many possible states in a multidimensional space. It's not as simple as coding "if you see n people on one side and less people on the other side, steer towards the other side." – JimmyB – 2018-04-25T17:13:06.587

"When you have no information about the members of the two groups you cannot make an ethical decision." The decision can be very hard even when you have that information - for an autonomous car, or for a human. There's an MIT experiment about this: http://moralmachine.mit.edu/ – molnarm – 2018-04-26T05:28:12.467

@JimmyB - I take your point but it doesn't seem to change anything. The unpredictability of the behaviour of the system is a direct result of its programming, nothing else. It is built in to the system by the designers. – None – 2018-04-26T11:32:46.303

@PeterJ But it is not designed to be unpredictable, but just inherently too complex to be predictable; just as the situations the car may at one time run into. This begins with the categories the system is made to deal with: To the car, there are no people, dogs, kittens, babies, murderers. There are only obstacles, and the car can only deal with obstacles. This abstraction has to be in place, or the car might fail to react properly to an elephant because the designers did not implement elephant avoidance logic. – JimmyB – 2018-04-26T15:06:40.207

But surely we could question one's right to make judgements about which people are "more deserving" to live than others. I'd probably agree that if I had a choice between saving the life of someone who has devoted her life to helping the poor and unfortunate or saving the life of an escaped serial killer, I'd choose (a). But I've heard plenty of discussions along the lines of, "obviously" we should save the brilliant college professor rather than the mentally retarded person, because the professor "contributes more" to society, etc. – Jay – 2018-04-26T19:04:14.110

@JimmyB The car is not a moral agent. It doesn't make decisions. The people who designed it make decisions. Granted they may not fully understand the implications of all their decisions. But that's not unique to computer engineers. We all face that problem all the time. No one ever has 100% complete information when he makes a decision, except in hypothetical textbook problems. – Jay – 2018-04-26T19:07:17.973

By the way, in Germany there was a Federal Constitutional Court ruling that an aircraft hijacked by terrorists may not be shot down if there are innocent people on board, even if the terrorists intend to crash it into, for instance, a stadium full of people. Thus, the highest court has confirmed that every single life must be protected and that 10000 human lives are not of more value than 100. – JimmyB – 2018-04-26T20:07:16.213

If you are in group A (one of the group of 5), you have the same chance of survival as someone in group B (the group of one). That is the logic. Of course, the utilitarian part can come into the debate, but it's not related to the chance itself. It should be clear that each individual has a 50-50 chance if there are 2 groups and a coin; it is irrelevant how many are in each group.

-Later Edit-

The ethical part in this would actually be whether to toss the coin at all. Because if you do, you may condemn the 5-group unwillingly. But is that worse than choosing the 1-group? There can be situations where the 1 must be saved instead of the 5, although most would choose to save the 5. But if you make a choice to save the 5 because they are more lives, where do you draw the line? Will you terminate 999,998 to save 999,999? Such things cannot be put into math; there can be way too many factors involved in such a decision.

As I see it, the question is asking "How can I argue that Taurek's method is not the most reasonable, without appealing to utilitarianism?", not "How can I dispute Taurek's claim that his scheme gives each person a 50-50 chance of dying?" – David Richerby – 2018-04-25T18:30:08.287

@Overmind I don't think it's so simple, without some distinguishing parameter on the participants I believe the coin toss is not relevant. See my answer for the reasoning. – Clumsy cat – 2018-04-25T21:50:37.867

Check my edit, a non-math perspective. – Overmind – 2018-04-26T08:32:00.967

Mathematically, the assumption that "everyone has an equal chance at surviving" hinges on how the groups were formed. Let us call them groups A and B.

Say people are indistinguishable, and they are picked from a pool of 6 and assigned randomly with uniform probability to groups A and B. No matter which group you kill, everyone has an equal chance of surviving, because they had equal chances to end up in either group in the first place. In this version the coin toss is a red herring: no matter how we pick the group to be killed, everyone has an equal chance of surviving.

Now let us consider the alternative: people have names and are distinguishable. Say the probability of a person being assigned to group B is proportional to the log of the length of their name. The assignment is still random, but the probability is not uniform. Now tossing a coin gives everyone an equal survival probability, whereas choosing based on group letter would not.

So the conclusion is that the problem is not well enough defined mathematically. If you have indistinguishable people, then there is no need to flip a coin at all; they all have an equal chance no matter what you do. If people are distinguishable, then there is by definition another parameter available for the choice to be made on. Without knowing what that parameter is, it is not possible to say whether it should affect our decision and determine who should be saved.
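
The first case (uniform random assignment, deterministic kill rule) can be illustrated with a small simulation. This is my own sketch, assuming group sizes of 5 and 1 and an arbitrary trial count:

```python
import random

# 6 people are shuffled uniformly; the first 5 positions form group A,
# the last position forms group B. Group A is always the one killed:
# no coin toss at all.
rng = random.Random(42)
trials = 100_000
survivals = [0] * 6   # survival counts per person

for _ in range(trials):
    order = list(range(6))
    rng.shuffle(order)
    lone_member_of_B = order[5]        # uniform random assignment to B
    survivals[lone_member_of_B] += 1   # killing A spares only this person

rates = [s / trials for s in survivals]
# Every person survives in roughly 1/6 of trials: equal chances for all,
# even though we never flipped a coin over which group to kill.
print(rates)
```

Swapping the deterministic kill rule for a coin flip changes each person's survival probability (from 1/6 to 1/2) but not its equality across people, which is the point of the paragraph above.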

Edit: To put this in the practical context of a self-driving car, I think we can be fairly certain that there will be no random number generators (coins) in the software of a self-driving car. If the programmer is trying to account for a situation where more than one party is in peril, they will almost certainly make use of inequality comparisons. Given the architecture dependency of floating point arithmetic, it is likely that even the programmer will not know what combination of inputs would lead to two perfectly equal chances. They don't worry about it because these numbers have so many significant figures that exact ties will be vanishingly rare.

Edit 2: @supercat points out in the comments that my knowledge of algorithms is a bit lacking. There may well be randomness in some algorithms used to process data. It is still likely that actual decisions would be based on floating point comparisons, though.

I'm not sure why you assume the software would have no deliberate randomness. Many algorithms require making largely-arbitrary choices that are unlikely to matter unless made in certain combinations. If such choices are made in independent random fashion, the probability of a deadly combination may be made arbitrarily low. If the choices are not independent, however, the probability of a deadly combination may be much higher. While I doubt deliberate randomness would be invoked in a high-level decision scenario, I would expect it to play a role at lower levels. – supercat – 2018-04-25T20:35:50.440

@supercat, can you give an example of an algorithm that is worked this way? (not doubting you, just it sounds interesting) – Clumsy cat – 2018-04-25T20:41:10.190

A couple of simple commonplace examples: 1. On communications media (radio, half-duplex Ethernet, etc.), simultaneous attempts by multiple devices to send a message will often result in neither message getting through; this is handled by having devices wait a random amount of time before retransmission. If delays are chosen randomly, the probability of 16 consecutive collisions would be quite small. If two devices would pick the same sequence of 16 delays, however, the probability that a collision would be followed by 15 more would be much higher. – supercat – 2018-04-25T21:04:31.343
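
The retransmission idea in the comment above can be made concrete with a toy simulation (my own sketch; the 16 delay slots and trial count are illustrative assumptions, not from the comment):

```python
import random

# After a collision, each of two devices waits a random delay slot
# before retrying. Independent random choices rarely collide again;
# identical deterministic delays would collide every single time.
rng = random.Random(1)
trials = 10_000
slots = 16

# A repeat collision needs both devices to pick the same slot,
# which happens with probability 1/16 for independent uniform picks.
repeat_collisions = sum(
    rng.randrange(slots) == rng.randrange(slots) for _ in range(trials)
)
rate_random = repeat_collisions / trials

# With both devices following the same fixed rule, every retry collides.
rate_fixed = 1.0

print(rate_random, rate_fixed)
```

This is exactly the independence point supercat makes: randomness is valuable here not for fairness but for breaking correlated failure.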

2. In Hoare's "Quicksort" algorithm (see https://en.wikipedia.org/wiki/Quicksort), which was invented in 1959 but is still widely used today, the worst-case execution time may be many orders of magnitude larger than the average execution time, but if pivot elements are chosen randomly the probability of the execution time being more than twice the average is very small. If pivots are only affected by the sequence of items, however, with no chance factors, it may be hard to prove that no possible (perhaps contrived) sequence of items would yield performance that's orders of magnitude worse. – supercat – 2018-04-25T21:08:32.527
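
As a concrete illustration of the randomized-pivot Quicksort supercat describes (my own minimal sketch, not supercat's code):

```python
import random

def quicksort(items, rng=random.Random(0)):
    """Sort a list using a uniformly random pivot at each step.

    The random pivot choice means no fixed input ordering can reliably
    trigger the O(n^2) worst case: bad splits require bad luck, not a
    contrived input.
    """
    if len(items) <= 1:
        return list(items)
    pivot = rng.choice(items)                  # the random choice
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less, rng) + equal + quicksort(greater, rng)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

(Production implementations partition in place rather than building new lists; the list-comprehension form is used here only to keep the pivot idea visible.)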

Note that in both of these situations, the random generators are used to make decisions where the vast majority of possible choices are almost equally good, and where even making mostly bad choices would be acceptable, provided only that the code picks a good choice at least occasionally. – supercat – 2018-04-25T21:15:35.497

Your alternative is not relevant. Group selection was not a parameter in the question. – Overmind – 2018-04-26T08:11:37.807

@Overmind I don't see how one selection method can be more relevant than the other. The question doesn't specify how groups get selected, but the selection method is definitely required to determine the way the probabilities behave. – Clumsy cat – 2018-04-26T09:57:16.253

It's arguable whether or not it's even a matter of ethics when there are only bad choices. What you should do in the situation depends on factors which are not ethical.

If my wife was on the train tracks I'm pulling the lever for the train to go in the other direction no matter how many people are on that track.

Obviously, "save your wife if you have to choose who dies" is not an ethical rule. It's clearly of a different category of rule than "do not murder".

The amount of focus on flagpole scenarios (like the Trolley "Problem") is a distraction from ethics and only serves to paralyze the thinker.

How am I supposed to deal with questions of rising crime rates if I can't even decide which strangers I'm going to kill in a completely non-existent absurdity?

Ethics don't apply when you don't have a choice. This isn't so much a choice as an appeal to nihilism.

I'm reasonably sure that my wife would divorce me if I killed five people to save her. – gnasher729 – 2018-04-25T23:37:31.230

@gnasher729 What would she do in that position? Let you die? If she saved you, would you divorce her? – Kevin Beal – 2018-04-25T23:51:18.167

The first paragraph is a clear claim from a proponent of theoretical morality. A proponent of practical morality might object, saying that no special domain of ethics exists at all, that every deed is a moral deed and "what is selected, that'll be good". – ttnphns – 2018-04-26T07:20:09.523

Moral nihilism is bong smoke philosophy. It shouldn't be taken seriously. And how could any moral nihilist ever object to not being taken seriously? Rank skepticism isn't philosophy, it's a cancer of the mind. – Kevin Beal – 2018-04-26T15:01:33.907

Of course I'd let five people die on the track to save my wife, and while I'd be sad that five people died, I would not feel an ounce of guilt. It was just bad luck. These trolley problems are such a simplistic way to map ethics to basic math that they end up as worthless models. – Ask About Monica – 2018-04-27T19:07:52.423

@Kevin Beal: Yes. There’s a technical term for being the single one in this scenario: Tough shit. – gnasher729 – 2019-10-24T19:54:31.260

According to the question, John M. Taurek says that giving everyone an equal chance of survival seems most fair/reasonable to him. This seems like something you might not agree with. You might decide instead that you don't like killing people regardless of how unfair it may be that only 1 person dies and the other 5 survive.

In order to understand why your understanding of the solution suggested by John M. Taurek seems wrong, we can consider what other implications may arise from deciding that giving everyone an equal chance of survival is most fair/reasonable. In the following situations, John M. Taurek represents a person who wants to give everyone an equal chance of survival above all else.

Consider a similar choice between 0 and 6 people killed. In this case, John would still be happy with the coin flip, as everyone has an equal 50% chance to die. You may not be happy with this case as there is an equally fair option of just killing 0 people and saving all 6. This seems better for everyone involved, but without some definition of utility, you would not be able to say that.

In fact, it gets worse than that, as John M. Taurek would also be equally happy with the fair option of just killing all 6 if he was given the option. He might even prefer it if he expected that living people would be treated less fairly than dead people in the future.

It gets worse again if we consider a trolley with 1 person before a junction who is always killed, and 5 people after the junction who could be killed if John pulls a lever. In this case, John will decide to kill the extra 5 people in the interest of fairness.

All of these cases being worse requires you to have some reason to prefer living people to dead people. If you don't prefer living people to dead people, then your understanding of John M. Taurek's solution is one you should be happy with and you shouldn't argue against it.

In the 1 vs 5 case, you could appeal to ethical egoism instead of utilitarianism and claim that you don't like killing people, and would prefer to kill 1 than 5, and therefore should do that.

1You claim a lot about what the author would say. You should back these claims as some seem very odd, even for Taurek. – Philip Klöcking – 2018-04-26T17:41:38.823

All I know is the original asker's understanding of John M. Taurek. I will update my answer to reflect that. – Dillon Rooney – 2018-04-26T18:02:43.400

@PhilipKlöcking is the author you are noticing me making a lot of claims about user32889 or Taurek? I am also making claims about what user32889 might say because that was what I thought they were asking about. – Dillon Rooney – 2018-04-26T18:11:00.697

That's better, but content-wise, the 0 vs. 6 case seems to be a reductio ad absurdum, as it completely ignores the fact that an important situational factor is "given you can save person(s) A or person(s) B only, i.e. you have to decide whom you save, give all persons you can save an equal chance". Same for the other cases. A failure to understand this seems to be the main conceptual problem the OP has, so the answer seems to be beside the point of both the OP and Taurek, if I'm not mistaken. – Philip Klöcking – 2018-04-26T20:27:03.843

1

Whether there are ethical problems depends on the premise.

If we agree that mankind and its survival have some meaning and destiny (including unlimited procreation, perhaps), then the coin must be left out of play. Otherwise the coin would also apply when one single human on the one side and the rest of mankind on the other were concerned.

If we agree that mankind is without any destination but simply a meaningless accident, then every individual has only its own feelings including its survival instinct and there is no higher justice or aim. Then the coin may be the right choice.

1

The problem and the whole point of using the coin is to relieve yourself from personal responsibility on the choice of who lives and who dies.

Both groups have a 50:50 chance, but they are not equal in size, so the coin toss is far more likely to lead to more deaths than simply choosing to sacrifice the one person yourself. This creates a potential ethical problem: you are involved in a scenario where more people are likely to die than if you had acted differently, which is a problem for most ethical frameworks.
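The arithmetic behind "far more likely to lead to more deaths" can be checked with a short simulation. This is only a sketch, assuming the standard 1-vs-5 group sizes from the question:

```python
import random

def expected_deaths_coin_toss(trials=100_000):
    """Simulate the coin-toss policy: with probability 1/2 the group of
    five dies, otherwise the single person dies."""
    total_deaths = sum(5 if random.random() < 0.5 else 1 for _ in range(trials))
    return total_deaths / trials

# Coin toss: expected deaths = 0.5 * 5 + 0.5 * 1 = 3.
# Deliberately sacrificing the single person: always exactly 1 death.
```

So on average the coin toss kills three people where the deliberate choice kills one, which is the gap the answer is pointing at.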

You can extend the coin-toss excuse (in this case against 1<5) to cover anything you don't want to take personal responsibility for, but it is pretty much always a cop-out.

Saying that you should toss a coin for the trolley problem is just a way to cop out of the whole trolley problem itself.

Please see my comment on Sunami's answer. You rush to "blame" him for escaping responsibility, while the choice to leave somebody's life to chance can be authentic and responsible, too. For example, it could be the compromise solution of an (angry) man who first felt like killing all six, but couldn't, so he decided to kill the group of five; then he suddenly felt somewhat guilty and said to himself: OK then, I'll leave them a chance - I'll toss a coin. Does he really look like a man refusing to take on responsibility? – ttnphns – 2018-04-26T06:53:16.100

I tentatively suppose that you are basing this on the premise that only a humanist can be responsible - that only someone who is ready to busy his mind with "ethical matters" and to weigh lives (because they are valuable enough to be weighed) is responsible. This is wrong. – ttnphns – 2018-04-26T07:06:30.947

The problem doesn't describe the events that led to the situation. At its core it's a 1 vs 5 problem. Without specifying which ethical ruleset should be used to solve it, it doesn't have a solution. Saying that the coin toss works as a universal solution is a cop-out. Also, for someone who comes across the scene, the situation is already random from their perspective, possibly negating even the need for the coin toss. The OP, however, specifically said that weighing purely by numbers is not OK, which makes randomness the only other choice, because there are no other attributes. – Lassi Kinnunen – 2018-04-29T14:00:54.773

1

There are 6 people whose lives are at stake. With the coin toss, any given person has a 50/50 chance of living. Assuming that each person only cares about himself, then it doesn't matter whether he is on the "1" side or the "5" side. Either way, he has a 50/50 chance of surviving.

The utilitarian, of course, would reply that killing 5 people is worse than killing 1 person. Presumably that argument is not so convincing to the 1 person.

In any case, the whole point of these sorts of questions is to create hypothetical moral situations where no good answer is allowed to be considered, and we are only allowed to discuss which immoral action is least bad.

In this example, surely the right answer is to find a way to stop the trolley, or to get the people out of the way, so that no one is killed. Why can't you jump on the trolley and put on the brakes? Or drag people off the tracks? Of course people who frame questions like this always have some reason why such good answers are impossible: you're too far away, there's no time, etc.

This kind of thinking is fundamentally evil, because it leads people to think in terms of accepting immoral solutions rather than searching for moral ones. I've heard plenty of exercises like this that postulate a group of people stranded by a shipwreck or plane crash with limited supplies, deciding who lives and who dies. That encourages people to think in terms of "how can I get my neighbor before he gets me" rather than "how can we work together for the good of all".

Yes, in real life people do sometimes face harsh situations where they must choose the lesser of two evils. But even ignoring the extreme life-or-death scenario, in real life, how often do you have to wrestle with, Should I hurt person A or hurt person B? Much more often the question is, Do I have the character to do what is right even if it will inconvenience me?

2I think the point of these thought experiments is not to get people to think about real life situations one way or another, but to test limitations of ethical doctrines. It is similar to applying general relativity to black holes, to see how far it can go and where it breaks down. – Conifold – 2018-04-26T19:47:06.810

1

Why not flip a coin? You can't ask this question without answering why, in this case, the matter can reasonably be resolved by flipping a coin. After all, we don't usually resolve ethical dilemmas or respond to moral problems by flipping a coin. Is abortion at 20 weeks right or wrong? Flip a coin. Scarcely anyone would go along with that, because we assume the matter is to be decided by taking into account a range of considerations and weighing them as best we can.

My suggestion is that we might indeed reasonably resolve the trolley problem by flipping a coin. But this would only be so if we were unable, on deliberation and in all conscience, to decide which of these applies: (a) it is permissible to turn the trolley one way rather than another, (b) it is morally obligatory to turn the trolley one way rather than another, or (c) it is morally wrong to intervene in the situation and cause the trolley to do anything.

Stuck in this irresolvable uncertainty, what better than to flip a coin? Note that I do not suggest that this is 'the' solution to the trolley problem. My remarks relate purely to the special situation in which a moral agent is, no matter how conscientiously s/he deliberates, genuinely unable to determine which of (a), (b), and (c) applies. This is plainly a possible state of affairs, and if a moral agent is in it, what is more reasonable than to toss a coin?

My own views on the trolley problem are quite deliberately withheld because they are irrelevant to the special situation of agential indecision on which the answer focuses.

0

It's a bit cowardly or sitting on the fence, but it isn't very wrong.

In the situation described, your action or inaction will lead to one dead person or five dead persons. We all know that one path is the right one and the other is wrong. Unfortunately, there is no universal agreement on which is which.

Do you think that by thinking about the problem you will have a better than 50% chance of making the right decision? Maybe you will, but maybe you are bad at making that kind of decision and are highly likely to get it wrong. By throwing a coin, you ensure a fifty percent chance that the right choice is made.

PS. The point of this answer is that sometimes problems are difficult, so why not throw a coin instead of trying to find the best decision? That's independent of the actual problem.

One difficulty with the Trolley Problem is that it requires that one blindly accept a number of unrealistic conditions, among them that one would find oneself in that situation knowing with certainty the outcome of either choice and the lack of any better one, and would have time to think about it, and yet one wouldn't have had an opportunity to avoid getting into the situation in the first place. Only under those conditions could either choice be unambiguously right or wrong. Otherwise either action could be justified based upon what one expected about things one didn't know. – supercat – 2018-04-27T22:51:35.100

1"Sometimes problems are difficult, so what about throwing a coin" - that thought is a rationalization in self-excuse. The real thought behind the scenes is that the problem is not worth worrying about and is easily solvable by throwing a coin. – ttnphns – 2018-04-29T14:24:14.643

"We all know that one path is the right one, and the other one is wrong." How so? Can't it be true there is no right path? – rus9384 – 2018-04-29T16:41:15.423

0

Many valid points have been made by the previous contributors. I would propose that one could argue it is better to save 5 people versus 1 person if one accepts the premise, from Jean-Jacques Rousseau, that people are inherently “good” and, therefore, it is morally superior to have more “good” people survive. Granted, we are left with the dilemma that perhaps all 5 people who are saved turn out to be “bad” and the 1 fatality is a “saint”. But, unfortunately, we don’t possess this foreknowledge.

I think the approach is worthy of consideration.

I also suspect that some would say this is just utilitarianism in disguise. But the choice is based solely on the morality of “good” versus “bad” and not on which makes the most people “happy” and/or which is more “useful”.

0

I'll use a word I know people use in moral judgements, even though I don't know where, if anywhere, moral philosophers have given it a technical discussion.

I'm sure many non-philosophers would intuitively feel it is "irresponsible" to take a risk like this, meaning that their objection to the policy is not that it reduces the mean number of survivors relative to the utilitarian kill-1-to-save-5 approach, but that there's too much uncertainty in the outcome. Feel free to quantify that with as much or as little of a mathematician's rigour as you like, but there's evidence to suggest people care not only about the mean result of a policy but also the uncertainty inherent within it.

Unfortunately, it's unclear (to me, at least) whether this counts as not bringing in utilitarianism. "Don't unnecessarily make the outcome stochastic" might be construed as a deontological principle. On the other hand, caring about both the mean and variance of a policy might be considered a more advanced version of the utilitarianism people usually talk about, and on that view "being responsible" might be seen as a utilitarian principle. (Mind you, if the reason why someone would want to avoid utilitarianism is in part because its ignoring variance makes it seem naive, this "new utilitarianism" might be acceptable. The fact that a felicific calculus is in general not formulable doesn't really apply to the trolley problem, since we can literally count the number of survivors.)
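The mean-versus-variance distinction can be made concrete. A hypothetical sketch, treating each policy as a probability distribution over survivor counts in the standard 1-vs-5 setup:

```python
def mean_and_variance(outcomes):
    """outcomes: list of (probability, survivors) pairs for a policy."""
    mean = sum(p * s for p, s in outcomes)
    variance = sum(p * (s - mean) ** 2 for p, s in outcomes)
    return mean, variance

# Kill-1-to-save-5: five survivors with certainty.
deterministic = [(1.0, 5)]
# Coin toss: five survivors or one survivor, each with probability 1/2.
coin_toss = [(0.5, 5), (0.5, 1)]

print(mean_and_variance(deterministic))  # (5.0, 0.0)
print(mean_and_variance(coin_toss))      # (3.0, 4.0)
```

A utilitarianism that scores only the mean already prefers the deterministic policy; one that also penalizes variance prefers it even more strongly, which is the "responsibility" intuition the answer describes.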

-1

Assume there were six people: someone has to be the single one, and a die will be thrown fairly to decide who it is. Then you are asked what you will do, before the die is thrown.

If you say “I will let the five die”, everyone’s chance to die is 5/6. If you say “I will kill the one”, everyone’s chance to die is 1/6. If you say “I will toss a coin so everyone’s chances are the same”, everyone’s chance to die is 3/6.

Because a die is thrown to decide who the single person is, the outcome is fair to everyone whatever you do. But killing the single person maximises everyone’s chance of survival.
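The three probabilities above can be checked with exact arithmetic. A sketch, assuming the die picks the single victim uniformly and the third policy flips a fair coin between the two actions:

```python
from fractions import Fraction

p_single = Fraction(1, 6)  # chance a given person is picked as the single one
p_group = Fraction(5, 6)   # chance they end up in the group of five

p_let_five_die = p_group   # dies iff among the five: 5/6
p_kill_the_one = p_single  # dies iff picked as the single one: 1/6
# Coin flip between the two actions: 1/2 * 5/6 + 1/2 * 1/6 = 3/6.
p_coin_flip = Fraction(1, 2) * p_group + Fraction(1, 2) * p_single

print(p_let_five_die, p_kill_the_one, p_coin_flip)  # 5/6 1/6 1/2
```

The exact fractions confirm the answer's claim: committing in advance to kill the single person gives every one of the six their best survival odds.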