How could self-driving cars make ethical decisions about who to kill?



Obviously, self-driving cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation.

Here are a few examples of unfortunate situations caused by a set of events:

  • The car is heading toward a crowd of 10 people crossing the road and cannot stop in time, but it can avoid killing the 10 people by hitting a wall (killing the passengers).
  • Avoiding killing the rider of a motorcycle, considering that the probability of survival is greater for the passenger of the car.
  • Killing an animal on the street in favour of a human being.
  • Purposely changing lanes to crash into another car to avoid killing a dog.

And here are a few dilemmas:

  • Does the algorithm recognize the difference between a human being and an animal?
  • Does the size of the human being or animal matter?
  • Does it count how many passengers it has vs. how many people are in front of it?
  • Does it "know" when babies/children are on board?
  • Does it take age into account (e.g. killing the older first)?

How would an algorithm decide what to do, from a technical perspective? Is it aware of the above (weighing the probabilities of deaths), or not (killing people just to avoid its own destruction)?

Posted 2016-08-02T18:57:57.550


I will not ride in any vehicle that is programmed not to make ME the passenger its top safety priority. I am not alone in that thinking. If anybody has plans to force autonomous vehicle makers to program their vehicle to do anything other than prioritize the safety of its passengers then the entire industry might as well close their doors now. The first couple of times a passenger dies because of some abhorrent ethical decision will be the last time the vast majority of people would ever purchase another autonomous vehicle. Who wants to ride in a death trap? – Dunk – 2019-06-27T22:56:30.710

The so-called "trolley problem" is a troll-y problem. There's a reason it's known as a thought experiment rather than a case study: it's not something that has ever happened. It's not even close to being real, largely because cars are fundamentally not trolleys. The right answer in any remotely applicable case is virtually always "hit the brakes," and if there is ever an exception, it can only possibly be resolved one way: by protecting the inhabitants of the car at all costs and never even considering anything else, for two reasons. – Mason Wheeler – 2019-09-02T19:49:36.763

The first, @Dunk covered quite well: pragmatism. If it's possible for the car to make a different choice, no one in their right mind would want to buy one. The second is that if there's a "kill everyone" function built in to the car's computer, that means it's already there for malicious hackers to find and abuse. Weigh the very real chance of someone figuring out a way to trigger that with adversarial input against the purely fictitious idea of a legitimate trolley-problem situation, and there's only one possible answer that is not morally absurd and abhorrent. – Mason Wheeler – 2019-09-02T19:52:05.857

@MasonWheeler says "it's not something that has ever happened". Actually it has: Trolley problem - Wikipedia. There are also instances of people deliberately crashing, usually fatally, in order to avoid killing other people, such as Sheriff: Truck driver who died in I-94 crash drove off the road to save lives

– Ray Butterworth – 2020-06-27T00:42:47.223



How could self-driving cars make ethical decisions about who to kill?

It shouldn't. Self-driving cars are not moral agents. Cars fail in predictable ways. Horses fail in predictable ways.

the car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers),

In this case, the car should slam on the brakes. If the 10 people die, that's just unfortunate. We simply cannot trust all of our beliefs about what is taking place outside the car. What if those 10 people are really robots made to look like people? What if they're trying to kill you?

avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car,

Again, hard-coding these kinds of sentiments into a vehicle opens the rider of the vehicle up to all kinds of attacks, including "fake" motorcyclists. Humans are barely equipped to make these decisions on their own, if at all. When in doubt, just slam on the brakes.

killing an animal on the street in favour of a human being,

Again, just hit the brakes. What if it was a baby? What if it was a bomb?

changing lanes to crash into another car to avoid killing a dog,

Nope. The dog was in the wrong place at the wrong time. The other car wasn't. Just slam on the brakes, as safely as possible.

Does the algorithm recognize the difference between a human being and an animal?

Does a human? Not always. What if the human has a gun? What if the animal has large teeth? Is there no context?

  • Does the size of the human being or animal matter?
  • Does it count how many passengers it has vs. people in the front?
  • Does it "know" when babies/children are on board?
  • Does it take into the account the age (e.g. killing the older first)?

Humans can't agree on these things. If you ask a cop what to do in any of these situations, the answer won't be, "You should have swerved left, weighed all the relevant parties in your head, assessed the relevant ages between all parties, then veered slightly right, and you would have saved 8% more lives." No, the cop will just say, "You should have brought the vehicle to a stop, as quickly and safely as possible." Why? Because cops know people normally aren't equipped to deal with high-speed crash scenarios.

Our target for a "self-driving car" should not be a moral agent on par with a human. It should be an agent with the reactive complexity of a cockroach, which fails predictably.



In most of those scenarios the people who are potentially killed are more at fault of the damage than the car/occupant of the car -- slamming the brakes seems like the only reasonable response since the occupant couldn't possibly have avoided the accident either. As you point out, it's very possible to use an algorithm that behaves by (potentially) killing the occupant instead of a pedestrian as a way to actually murder the occupant, potentially even putting the blame on the occupant, the car or the manufacturer. – Clearer – 2017-02-14T00:39:57.247


The answer to a lot of those questions depends on how the device is programmed. A computer capable of driving around and recognizing where the road goes is likely to have the ability to visually distinguish a human from an animal, whether that be based on outline, image, or size. With sufficiently sharp image recognition, it might be able to count the number and kind of people in another vehicle. It could even use existing data on the likelihood of injury to people in different kinds of vehicles.

Ultimately, people disagree on the ethical choices involved. Perhaps there could be "ethics settings" for the user/owner to configure, like "consider life count only" vs. "younger lives are more valuable." I personally would think it's not terribly controversial that a machine should damage itself before harming a human, but people disagree on how important pet lives are. If explicit kill-this-first settings make people uneasy, the answers could be determined from a questionnaire given to the user.
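To make the "ethics settings" idea concrete, here is a minimal sketch of what such a configurable profile might look like. Every field name and weight is invented purely for illustration; a real system would need far more care.

```python
from dataclasses import dataclass

@dataclass
class EthicsProfile:
    """Hypothetical owner-configurable ethics settings (illustrative only)."""
    occupant_weight: float = 1.0     # how strongly to protect people in the car
    pedestrian_weight: float = 1.0   # how strongly to protect people outside it
    animal_weight: float = 0.1       # how much animal lives count relative to humans
    count_lives_only: bool = True    # if True, ignore age, size, and similar traits

# The "consider life count only" default vs. a profile leaning toward pedestrians:
default_profile = EthicsProfile()
altruist_profile = EthicsProfile(occupant_weight=0.8, pedestrian_weight=1.2)
```

A questionnaire, as suggested above, would simply be a friendlier way of filling in fields like these.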

Ben N


'ethics settings'? Really? You'll have people setting it based on any possible prejudice 'Kill the person who looks the most ____' Homosexual? Least Favorite Race?

Or what if the drivers of two vehicles in a collision have opposing ethics? Then both get executed and there's additional extraneous damage?

These laws need to be universal (at very least within a country), just like road laws. – navigator_ – 2016-09-01T16:04:57.393

@navigator_ I would hope that the options provided by the manufacturer would be within reason. – Ben N – 2016-09-01T16:18:51.103

Reason as defined by whom, though? That's kind of my point. – navigator_ – 2016-09-01T17:03:09.020

@navigator_ Especially now that I think about it a little more, you do have a really good point there. I'm having trouble finding a specific line between "reasonable" discrimination (if there is such a thing) and very bad discrimination. Feel free to add an answer of your own! – Ben N – 2016-09-01T18:07:53.813

Looking back, my original response is likely to seem much sharper than I intended. Sorry about that! – Ben N – 2016-09-01T19:26:26.410

As does my initial response, probably due to a heated discussion about subjective ethics a few days ago. :) – navigator_ – 2016-09-01T20:50:40.930

I for one would be very worried if I got in my new shiny car and it asked me "excuse me sir, do you mind how many people I kill when we have a fatal crash?". – icc97 – 2016-11-18T23:48:10.553

@BenN I agree with the first part, which says: "A computer capable of driving around and recognizing where the road goes is likely to have the ability to visually distinguish a human from an animal, whether that be based on outline, image, or size." That is a real answer based on AI, but the answer loses me in the last (second) part. – quintumnia – 2017-02-12T11:19:23.633

Something in the deepest part of my mind thinks that an editable ethics setting would be a bad idea. I'm not really sure why, other than the fact that it's a machine instead of a person, but I always come to the conclusion that we already have an unwritten contract of subscribing to a person's ethical decisions whenever we get in a car with anyone else. – Ian – 2016-08-02T20:37:40.960

I actually think the "ethics setting" is the only viable/fair option, because it imitates the specific driver as closely as possible. The driver of the car should then be accountable for whatever happens as a result, as if he were driving. I don't see how programming a car to (e.g.) put the driver's life before anything else is different from the driver putting his life before anything else. – Pandora – 2017-07-20T18:51:59.590


Personally, I think this might be an overhyped issue. Trolley problems only occur when the situation is optimized to prevent "3rd options".

A car has brakes, does it not? "But what if the brakes don't work?" Well, then the car is not allowed to drive at all. Even in regular traffic, human drivers are taught to limit their speed so that they can stop within the distance they can see. Solutions like these reduce the possibility of a trolley problem.
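That "stop within the distance you can see" rule can be made concrete: total stopping distance is reaction distance plus braking distance, and the safe speed is the largest one whose stopping distance fits in the visible road. A quick sketch (the reaction time and deceleration figures are assumed, illustrative values):

```python
import math

def stopping_distance(v, reaction_time=1.0, decel=7.0):
    """Total distance (m) to stop from speed v (m/s): distance covered
    during the reaction time plus the braking distance v^2 / (2a)."""
    return v * reaction_time + v * v / (2.0 * decel)

def max_safe_speed(sight_distance, reaction_time=1.0, decel=7.0):
    """Largest speed (m/s) whose stopping distance fits in the visible road.
    Solves v*t + v^2/(2a) = d for v (the positive root of the quadratic)."""
    a, t, d = decel, reaction_time, sight_distance
    # v^2/(2a) + t*v - d = 0  ->  v = a * (-t + sqrt(t^2 + 2d/a))
    return a * (-t + math.sqrt(t * t + 2.0 * d / a))

v = max_safe_speed(50.0)       # 50 m of clear, visible road
print(round(v * 3.6, 1), "km/h")   # → 73.3 km/h
```

With these assumptions the safe ceiling for 50 m of visible road comes out near 73 km/h; shrink the sight distance and the safe speed drops with it.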

As for animals... if there is no explicit effort to deal with humans on the road I think animals will be treated the same. This sounds implausible - roadkill happens often and human "roadkill" is unwanted, but animals are a lot smaller and harder to see than humans, so I think detecting humans will be easier, preventing a lot of the accidents.

In other cases (bugs, faults while driving, multiple failures stacked onto each other), perhaps accidents will occur, they'll be analysed, and vehicles will be updated to avoid causing similar situations.



It is wrong to say that an AI will only drive as fast as it can stop within what it sees, because the same is true even for defensive human drivers. The reason is that assumptions MUST be made. When cresting a hill, it is assumed that a large obstacle is not just out of view [source: DARPA Grand Challenge]. These assumptions are based on the rules of the road, which determine which actions are right. Thus, if an accident occurs, as long as you followed the rules of the road, you can reduce your liability. So, a car must be able to make these decisions, but perhaps in a limited fashion (to protect the consumer). – Harrichael – 2017-01-31T21:42:47.240


In the real world, decisions will be made based on the law, and as noted over on Law.SE, the law generally favors inaction over action.



Ah, that makes a lot of sense for human victims. Would the law's choice be any different if the car could swerve to hit a dog instead of a pedestrian? – Ben N – 2016-08-12T02:40:26.690


This is the well known Trolley Problem. As Ben N said, people disagree on the right course of action for trolley problem scenarios, but it should be noted that with self-driving cars, reliability is so high that these scenarios are really unlikely. So, not much effort will be put into the problems you are describing, at least in the short term.



@NietzscheanAI The idea of the trolley problem isn't in what decision to make, but rather what to base your decisions on. Should you consider burglars, number of people, number of animals, who is at fault, driver safety, the greater good, the least change, etc.? You can still apply probabilities to trolley problems, i.e., route A has a 30 percent chance of killing 10 people. Trolley problems typically involve two routes so that you can consider more precisely which factors you base an ethical decision on. – Harrichael – 2017-01-31T21:47:49.817

I wonder how good a fit the Trolley Problem actually is for this? The trolley problem is essentially discrete, describing a 'once and for all' choice, whereas in practice the algorithm must make a sequence of choices at (potentially quite small) time increments, possibly with new information becoming available. A useful algorithm is likely to be a 'continuous control' issue embedded in actual 3D space, dealing with velocities and probabilities, not an abstract, discrete moral decision. If nothing else, this would mean the implementation would not have to be defined in such stark moral terms. – NietzscheanAI – 2016-08-03T09:08:29.010


For a driverless car that is designed by a single entity, the best way for it to make decisions about whom to kill is by estimating and minimizing the probable liability.

It doesn't need to absolutely correctly identify all the potential victims in the area to have a defense for its decision, only to identify them as well as a human could be expected to.

It doesn't even need to know the age and physical condition of everyone in the car, as it can ask for that information and if refused, has the defense that the passengers chose not to provide it, and therefore took responsibility for depriving it of the ability to make a better decision.

It only has to have a viable model for minimizing exposure of the entity to lawsuits, which can then be improved over time to make it more profitable.
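Under this framing, "minimizing probable liability" is just expected-cost minimization over the available maneuvers. A toy sketch, where every action, probability, and dollar figure is hypothetical:

```python
# Toy expected-liability model: all actions, probabilities, and dollar
# figures below are invented purely for illustration.
candidate_actions = {
    # action -> list of (probability, estimated liability in $) outcomes
    "brake_hard":   [(0.70, 0), (0.30, 50_000)],
    "swerve_left":  [(0.50, 0), (0.50, 500_000)],
    "swerve_right": [(0.90, 0), (0.10, 2_000_000)],
}

def expected_liability(outcomes):
    """Probability-weighted liability of one action's outcome distribution."""
    return sum(p * cost for p, cost in outcomes)

# The entity picks the maneuver with the lowest expected legal exposure.
best = min(candidate_actions, key=lambda a: expected_liability(candidate_actions[a]))
print(best)  # → brake_hard
```

Improving the model "to make it more profitable" then just means refining these probability and cost estimates over time.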




“This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all.” —Christoph von Hugo, Mercedes-Benz

This quote is from an article titled Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety over Pedestrians, published October 7, 2016 by Michael Taylor, retrieved 8 Nov 2016.

Here's an excerpt that outlines the technological, practical solution to the problem.

The world’s oldest carmaker no longer sees the problem, similar to the question from 1967 known as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.

All of Mercedes-Benz’s future Level 4 and Level 5 autonomous cars will prioritize saving the people they carry, according to Christoph von Hugo, the automaker’s manager of driver assistance systems and active safety.

The article also contains the following fascinating paragraph.

A study released at midyear by Science magazine didn’t clear the air, either. The majority of the 1928 people surveyed thought it would be ethically better for autonomous cars to sacrifice their occupants rather than crash into pedestrians. Yet the majority also said they wouldn’t buy autonomous cars if the car prioritized pedestrian safety over their own.


Posted 2016-08-02T18:57:57.550

Reputation: 210


How could self-driving cars make ethical decisions about who to kill?

By managing legal liability and consumer safety.

A car that offers the consumer safety is going to be a car that is bought by said consumers. Companies do not want to be liable for killing their customers nor do they want to sell a product that gets the user in legal predicaments. Legal liability and consumer safety are the same issue when looked at from the perspective of "cost to consumer".

And here are a few dilemmas:

  • Does the algorithm recognize the difference between a human being and an animal?

If an animal/human cannot be legally avoided (and the car is within its legal rights; if it is not, then something else is wrong with the AI's decision making), it likely won't. If the car can safely avoid the obstacle, the AI could reasonably be seen to make this decision, i.e., swerve to another lane on an open highway. Notice there is an emphasis on liability and driver safety.

  • Does the size of the human being or animal matter?

Only the risk factor from hitting the obstacle. Hitting a hippo might be less desirable than hitting the ditch. Hitting a dog is likely more desirable than wrecking the customer's automobile.

  • Does it count how many passengers it has vs. people in the front?

It counts the people as passengers to see if the car-pooling lane can be taken. It counts the people in front as a risk factor in case of a collision.

  • Does it "know" when babies/children are on board?


  • Does it take into the account the age (e.g. killing the older first)?

No. This is simply the wrong abstraction for making a decision; how could it be weighed into choosing the right course of action to reduce the risk factor? If Option 1 is hitting a young guy with a 20% chance of significant occupant damage and no legal liability, and Option 2 is hitting an old guy with a 21% chance of significant occupant damage and no legal liability, then what philosopher could convince even one person of the just and equitable weights to use in making the decision?

Thankfully, the best decision a lot of the time is to hit the brakes to reduce speed (especially when you consider that it is often valuable to act predictably so that pedestrians and motorists can react accordingly). In the meantime, better-value improvements can be made in terms of predicting when drivers will make bad decisions and when other actions (such as going into reverse) are more beneficial than hitting the brakes. At this point, it is not worth it to even begin collecting the information to make the ethical decisions proposed by philosophers. Thus, this issue is over-hyped by sensational journalists and philosophers.
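The contrast drawn above can be shown directly: the decision function consumes risk and liability estimates only, and demographic attributes such as age are simply not among its inputs. A toy sketch with hypothetical numbers:

```python
# The decision function weighs occupant-harm and liability estimates only;
# age and similar attributes are deliberately not inputs.
# All numbers here are hypothetical.
options = [
    {"name": "option_1", "p_occupant_harm": 0.20, "p_liability": 0.00},
    {"name": "option_2", "p_occupant_harm": 0.21, "p_liability": 0.00},
    {"name": "brake",    "p_occupant_harm": 0.05, "p_liability": 0.00},
]

def risk(option):
    """Combined risk factor; note there is no demographic term to weigh."""
    return option["p_occupant_harm"] + option["p_liability"]

print(min(options, key=risk)["name"])  # → brake
```

Braking wins here by construction, which is exactly the point: the interesting engineering is in estimating these probabilities, not in philosophical weights.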




Frankly I think this issue (the Trolley Problem) is inherently overcomplicated, since the real world solution is likely to be pretty straightforward. Like a human driver, an AI driver will be programmed to act at all times in a generically ethical way, always choosing the course of action that does no harm, or the least harm possible.

If an AI driver encounters danger such as imminent damage to property, obviously the AI will brake hard and aim the car away from breakable objects to avoid or minimize impact. If the danger is hitting a pedestrian or car or building, it will choose to collide with the least precious or expensive object it can, to do the least harm -- placing a higher value on a human than a building or a dog.

Finally, if the choice of your car's AI driver is to run over a child or hit a wall... it will steer the car, and you, into the wall. That's what any good human would do. Why would a good AI act any differently?
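The "least precious or expensive object" rule sketched above amounts to an ordering over obstacle classes. A toy version, where the classes and values are hypothetical and chosen only to rank a human above property and animals:

```python
# Toy "least precious obstacle" ordering (hypothetical classes and values).
HARM_VALUE = {
    "human": 1_000_000,
    "building": 50_000,
    "vehicle": 10_000,
    "animal": 1_000,
    "barrier": 100,
}

def least_harm(obstacles):
    """Pick the obstacle whose collision the model deems least harmful."""
    return min(obstacles, key=HARM_VALUE.get)

print(least_harm(["human", "animal", "barrier"]))  # → barrier
```

Given a child and a wall, such an ordering steers into the wall, which is the behaviour the answer argues for.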



'the AI will brake hard', 'it will choose to collide with ...': if these are statements, could you post some references? Or are they opinion-based? – kenorb – 2016-08-31T11:09:42.480

The OP posed the question as, "What damage will the AI choose to inflict?". I think that's the wrong objective. Humans don't do that. Instead, the AI will follow a more general strategy of avoiding damage and doing as little damage as possible. This optimization objective is more intuitive and harder to argue against when liability concerns arise. – Randy – 2016-08-31T13:50:14.627


They shouldn't. People should.

People cannot put the responsibility for ethical decisions into the hands of computers. It is our responsibility as computer scientists/AI experts to program the decisions for computers to make. Will human casualties still result from this? Of course they will; people are not perfect, and neither are programs.

There is an excellent in-depth debate on this topic here. I particularly like Yann LeCun's argument regarding the parallel ethical dilemma of testing potentially lethal drugs on patients. Similar to self-driving cars, both can be lethal while having good intentions of saving more people in the long run.




I think that in most cases the car would default to reducing speed as the main option, rather than steering toward or away from a specific choice. As others have mentioned, having settings related to ethics is just a bad idea. What happens if two cars that are programmed with opposite ethical settings are about to collide? The cars could potentially have a system to override the user settings and pick the most mutually beneficial solution. It's indeed an interesting concept, and one that definitely has to be discussed and standardized before widespread implementation. Putting ethical decisions in a machine's hands sometimes makes the resulting liability hard to picture.



Yes, braking to avoid a collision would absolutely be the best solution. Unfortunately, in some cases (the cases asked about in the question), hitting something is unavoidable. You seem to agree with some existing answers. Please note that we would like each answer to be able to stand on its own. Once you have sufficient reputation, you'll be able to upvote answers you believe are helpful. The problem of two oppositely-configured cars is indeed interesting; it would be great if you would [edit] your answer to expand on that. Thanks! – Ben N – 2016-11-17T15:15:31.720

Thank you very much for your reply, I actually really appreciate it and will use it going forward on the site. I wasn't aware of the guidelines for answers. – imhotraore – 2016-11-17T21:24:49.183


The only sensible choice is to use predictable behaviour. So in the people-in-front-of-the-car scenario: first the car hits the brakes, at the same time honks the horn, and stays on course. The people then have a chance to jump out of the way, leading to zero people being killed. Also, with full braking (going from 50 km/h to zero takes less than 3 car lengths), an impact is almost unimaginable. Even if a full stop cannot be reached, severe damage to the pedestrians is unlikely.
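The braking claim checks out under textbook assumptions. Taking roughly 8 m/s² of emergency deceleration on dry road and 5 m per car length (both assumed figures), a full stop from 50 km/h fits in under three car lengths, ignoring reaction time (which an automated system keeps very small):

```python
v = 50 / 3.6       # 50 km/h in m/s (about 13.9 m/s)
decel = 8.0        # assumed emergency deceleration on dry road, m/s^2
car_length = 5.0   # assumed typical car length, m

braking_distance = v ** 2 / (2 * decel)   # v^2 / (2a), about 12 m
print(braking_distance / car_length)      # about 2.4 car lengths
```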

The other scenario is just crazy. The distance would have to be less than 3 car lengths, at least 1 car length is needed for the steering, and then a crashing car is an uncontrollable situation that might spin and kill all 11 people.

Apart from that, I don't believe there is a real example where such a dilemma exists; the solution in these unlikely cases is to conform with the expectations of the opposing party, to allow the other party to mitigate the situation as well.



Can you please go through the first answer posted by Ben, and then try to edit yours in line with what AI is all about! – quintumnia – 2017-02-12T11:21:53.193

Can you be more specific? I read through Ben's first answer, but fail to see how this is related. Can you explain 'what AI is all about'? – lalala – 2017-02-12T12:47:04.523

What I meant there is to be knowledgeable about AI; otherwise your answer may be down-voted, your majesty! – quintumnia – 2017-02-12T13:15:23.293

@quintumnia This answer seems fine to me (+1) - just carrying on without attempting to swerve or make decisions is a valid choice. – Ben N – 2017-02-12T18:16:10.580


I think there would not be a way to edit such ethics settings in a car. But hey, if cell phones can be rooted, why not cars? I imagine there'll be Linux builds in the future for specific models that will let you do whatever you want.

As for who'll make such decisions, it'll be much like the privacy issues of today. There'll be a tug-of-war on the blanket between the OS providers (who'll try to minimize the number of people injured, each with its own methods), insurance companies (who'll try to make you pay more for OSes that are statistically shown to damage your car more easily), and car manufacturers (who'll want you to trash your car as soon as you can, so you'll buy a new one; or make cars that require a ridiculous amount of $$$ in servicing).

Then some whistleblower will come out and expose a piece of code that chooses to kill young children over adults - because it will have a harder time distinguishing them from animals, and will take chances to save who it'll more surely recognize as humans. The OS manufacturer will get a head-slap from the public and a new consensus will be found. Whistleblowers will come out from insurance companies and car manufacturers too.

Humanity will grab a hot frying pan and burn itself, and then learn to put on gloves beforehand. My advice: just make sure you won't be that hand; stay away from them for a couple of years, until all the early mistakes have been made.




I think we need to state our own morals before thinking about what the car's morals (or ethics settings) should be. I recommend reading the paper Autonomous Cars: In Favor of a Mandatory Ethics Setting, which argues why it is in everyone's best interest that we prioritize the safety of the majority and not just the driver (yes, it's in the best interest of the driver too).

You can test your own morals in many different situations, some like your examples, with MIT's Moral Machine. It's rather uncomfortable but very interesting. You can also find some analysis of people's answers on their website.

My answers to your examples:

The car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers)

I assume hitting the brakes is not going to work, or else the dilemma is pointless. I think the car should hit the wall. Pedestrians should not suffer just because someone else is driving a car, especially when there are 10 pedestrians and at most 5 (typically 1 or 2) people in the car.

Avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car

I think this one is harder, especially since the motorcycle probably is not autonomous, and (in contrast to the pedestrians in the previous example) riding a motorcycle is quite dangerous. Does the motorcyclist accept the risk when entering the road? If avoiding the motorcyclist means a probable death for the driver, then no. If not, the car probably should avoid them.

Killing an animal on the street in favor of a human being

Purposely changing lanes to crash into another car to avoid killing a dog

I think humans are more important than animals.

I don't think there exists a correct answer for this. One of the really interesting things in the data collected by the Moral Machine is that there are big differences based on where in the world you're from. Western countries typically prioritize saving children over the elderly, while this is not the case for the whole world. Countries with strong governments, like Finland and Japan, prioritize people abiding by the law, while people from countries with weaker or corrupt governments do not care so much about that. Even in this comment section you can find differences! I, for example, think the pedestrians should be spared in the first example, while Doxosophoi thinks it is obvious that the passengers should be protected!

