Stop talking about the trolley problem

Five people are tied up on the tram tracks...and there’s a tram heading straight for them! You can pull a lever to divert the course of the tram, but if you do that, it’ll go onto another track that also has someone tied up on it. So you either actively choose to kill one person, or passively allow five people to be killed. What do you do?

No, don’t answer that. I don’t care.

The trolley problem is, as you’ve probably guessed if you didn’t already know, a thought experiment. (It’s a trolley in the American sense of the word, so what most British people would call a tram.) You might wonder why six different people are all tied to the tracks. (Were they all auditioning for a part in a silent film? Do they not realise that this film cliché is more honoured in the spoof than in the observance?)

The scenario is wildly unlikely because thought experiments in ethics don’t have to be realistic. They’re about probing our intuitive moral responses, supplying a jumping-off point for discussion. The trolley problem does that, as do the different variants of it. (If you’re OK with diverting the tram by pulling a lever and killing someone, would you be equally comfortable with stopping the tram by pushing someone off a bridge into its path? OK, well, what if you were a doctor who could save someone by taking the organs from a goldfish while crossing the international date line?)

So why do I want us to stop talking about the trolley problem? Well, MIT are currently using it as the basis for crowd-sourcing how humans think machines should deal with moral dilemmas. You can go to the Moral Machine website and make choices about what a self-driving car should do in various hypothetical scenarios. Do you want to kill the doctor and three criminals, or the pregnant woman and four cats? Decisions, decisions.

I’m assuming that the long-term plan with this research is to create some kind of consensus on how humans think machines should make moral choices, then for this to feed into how machine intelligence is designed.

What troubles me deeply here is how limited and loaded the “choices” are. As the website tells us, “From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace.”

We get to play God and choose whether the self-driving car kills three teachers or a beautician and a penguin. But we don’t get to ask why self-driving cars have to be allowed on the public roads to kill people at all. And we certainly don’t get to ask why cars in general get to take up so much public space.

In every example that I’ve seen from the MIT quiz, the people who aren’t inside the car are using a pedestrian crossing. We don’t get to ask why the car is approaching a designated crossing so fast that it’s absolutely certain to kill at least one person, probably several. We also don’t get to ask what the point is of pedestrian crossings that don’t protect pedestrians from cars. Why don’t we create extensive car-free networks for pedestrians to get around conveniently, and then allow cars to cross them, but only at specific designated points and only after pushing a button and then waiting for ages?

I realise you can’t have a hypothetical dilemma about whose life to save without making the scenario one where someone’s death is inevitable. The problem is, car culture already frames the real world in this way and thereby turns the avoidable into the unavoidable. This kind of framing happens when we talk about “traffic” as if it’s another type of weather. It happens when a road safety expert opposes the introduction of a 20mph limit in York with the following logic:

“At speeds above 40mph the pedestrian passes on to or over the roof of the vehicle,” he said.

“At lower speeds, in the 20mph to 30mph range, the pedestrian is rolled on to the bonnet, then knocked forwards and away as the vehicle brakes.”

Mr Natt said: “At speeds of 12mph or less, however, the pedestrian is pushed forwards down on to the road ahead of the vehicle, which then runs over them, causing serious injury or even death. Children, due to their height, are even more at risk.”

Like the MIT experiment, he’s presenting various options. Unfortunately, none of these options include not being hit by a car at all. So what is his idea for reducing the number of people killed by the drivers of motor vehicles? Easy! Pedestrians need to obey the Green Cross Code and York needs more railings to stop people walking into the road.

You can’t come to that conclusion without taking it for granted that roads belong to cars and everybody else is there on sufferance. It’s exactly the same assumption that underlies the whole concept of jaywalking as a crime. And this sense of entitlement is framing our choices by making it look like they’re not choices. So we teach children to wait for the green man even though he won’t stick around long enough for them to cross without hurrying. We accept that most of the transport budget will go towards roads and motorways that will be mainly or exclusively used by cars, while active travel campaigners focus on asking for an extra percent or two. We accept that the public health impact of air pollution, mostly caused by cars, is costing the NHS about 16% of its total budget.

Back to the original trolley thought experiment. Here, it goes without saying that the person with their hand on the lever doesn’t want anybody to die and would save everybody if they could, but that the only two options available are both terrible. That’s precisely why it’s a difficult ethical dilemma.

But I don’t want to discuss the ethical problem of letting five hypothetical people be killed when five real people are killed every single day on British roads. In the real world, a kind of trolley dilemma plays out daily, but in this version the decisions are about a trade-off between saving lives and other priorities. As a society, we keep deciding that other things are more important than saving lives. And so those five people go over and over again to their “inevitable” deaths.
