
Why a 1967 thought experiment is still the right tool for AI ethics

4 min read

AI-ethics conversations are stuck. Same five talking points, same hand-wringing tone. Bias. Jobs. Alignment. Doom. Regulation. Everyone agrees in vague terms. Nobody knows what to do on Monday morning.

I’ve been running an NSF-funded workshop called Would You Pull the Lever? with audiences from Honors colleges to corporate L&D rooms. The workshop’s load-bearing claim is contrarian for a 2026 AI ethics talk: the most useful framework you can give your audience isn’t the latest one. It’s the 1967 Trolley Problem.

Here’s the argument.

A two-paragraph refresher

In 1967, Philippa Foot proposed: a runaway trolley is hurtling toward five workers tied to the track. You stand at a lever that can divert it to a side track, where one worker is tied. Do you pull?

Most people say yes. Five lives saved, one lost.

Then Foot offered a variant. You’re a surgeon. Five patients are dying of organ failure. A healthy person walks into the clinic. Do you kill them and harvest their organs to save the five?

Same math. Most people say no — emphatically. Foot’s actual interest was never the answer. It was the difference between the two answers. Same math, opposite intuition. The work of ethics is in that gap.

The gap is the point

The reason people split on the two cases isn’t body count. It’s a cluster of distinctions that ethicists have been refining for sixty years:

  • Acts vs. omissions. Doing harm is treated differently from allowing it. Pulling the lever is an action. So is harvesting the organs. But the trolley case feels more like “redirecting a force already in motion” — a step removed from the killing. The doctor case has no such removal.
  • Doctrine of double effect. Foreseeing a harmful side effect is morally different from intending harm as the means to your end. The trolley’s one worker dies as a side effect. The surgical patient dies as the means.
  • Moral distance. The lever is at arm’s length from the killing. The scalpel is the killing.

These are not academic curiosities. They are the working vocabulary for almost every AI ethics dilemma you’ll encounter this year.

The vocabulary in action

Three AI scenarios from the workshop:

Scenario 1 — the autonomous vehicle. Brake failure. The car can hit five jaywalking pedestrians or swerve into one child on the sidewalk. This is the trolley problem in a different costume. Audiences split roughly five-to-one in favor of swerving — until you ask them whether it would be different if the child were on the road and the five were on the sidewalk. Same math. The vote shifts. Acts-vs-omissions is in play.

Scenario 2 — the automated hiring system. A model trained on historical hiring data systematically rejects candidates from a class the historical data underrepresented. The model didn’t choose. The training data did. Or did the engineer who chose the training data? Or did the manager who chose the engineer? The lever has been distributed across so many hands that there is no one at the lever anymore. Moral distance has been stretched across an entire supply chain.

This is the category modern AI excels at producing — diffused-agency harm. And it’s almost invisible to the discrete “bias / job loss / alignment” framings AI ethics usually operates in. The trolley vocabulary gives audiences a way to see it.
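Not from the workshop, but a minimal sketch of the mechanism, assuming scikit-learn and entirely invented numbers: a classifier fit on historically skewed hiring decisions ends up scoring two identically qualified applicants differently, without anyone in the chain deciding that it should.

```python
# Toy illustration with invented data (not from the workshop): a model trained
# on historically skewed hiring decisions reproduces the skew, even though no
# one person in the pipeline "pulled the lever."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two applicant groups with identical skill distributions.
group = rng.integers(0, 2, size=n)        # 1 = historically underrepresented
skill = rng.normal(size=n)

# Historical decisions: skill mattered, but group 1 was quietly penalized.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# The engineer trains on "the data we have," group membership included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired.astype(int))

# Two new applicants: identical skill, different group.
applicants = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])  # group 1 scores lower
```

Dropping the group column doesn't rescue things if other features proxy for it; the sketch only shows that the skew can arrive with no one intending it.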

Scenario 3 — the entry-level role that quietly doesn’t get refilled. A generative system absorbs the work of an entry-level helpdesk role. Nobody gets fired. When the incumbent leaves, the role simply is not reopened. There’s no act of firing. There’s only the absence of an act of hiring.

The trolley framework recognizes this as a distinct category: an omission with foreseen consequences. It’s not nothing. It’s also not the dramatic “AI took my job” headline. It’s the quiet structural shift that’s actually happening, in numbers that matter, right now — and it’s the thing my current research focuses on.

What the workshop gives that current AI ethics doesn’t

The workshop runs 90 minutes to half a day depending on format. Audiences vote on each scenario, then debate, then vote again. The vote almost always shifts. That shift is the learning. People leave with a vocabulary for the hard cases — and with the visceral experience of having their first answer change after they really thought about it.

The vocabulary survives the workshop. Five years from now, the AI scenarios will be different. Some of these models will be ancient history. The framework — acts vs. omissions, doctrine of double effect, moral distance, distributed agency — will still work.

The bigger point

AI ethics doesn’t need newer frameworks. It needs older ones, applied better.

The 1967 Trolley Problem is more useful for navigating the dilemmas of 2026 AI than ninety percent of the AI-ethics content published this year. The reason is structural, not nostalgic: the trolley was designed for the kind of decisions that AI now forces on us — discrete moments where intent, consequence, action, and inaction have to be untangled fast.

If you’re building AI systems, or teaching about them, or trying to set policy around them, this is the framework you can teach a room of non-philosophers in 90 minutes and have them leave actually equipped for the conversation. Most of what’s competing for that slot doesn’t equip anyone for anything.

That’s why the workshop opens with the same question Foot’s students were asked in 1967. The hardware has changed. The question hasn’t.

Written by Frazier Smith. Department chair at Central Piedmont, NSF co-PI, and AI & IT educator. More about me →
