Researchers get public to decide who to save in a driverless car crash

Researchers have conducted an experiment intended to help resolve the ethical conundrum of who to save when a fatal driverless car crash is unavoidable.

A driverless car's AI will need to be programmed with decisions about who to prioritise, for example if the only options are swerving into a child on the left or an elderly person on the right.
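
To make that concrete, here is a minimal sketch, in Python, of what hard-coding such a rule could look like. It is purely hypothetical: the Pedestrian class, the age-based rule, and the function name are invented for illustration and do not come from the paper or any real vehicle's software.

    from dataclasses import dataclass

    @dataclass
    class Pedestrian:
        age: int

    def swerve_direction(left: Pedestrian, right: Pedestrian) -> str:
        """Return the side to swerve towards (i.e. whose pedestrian is hit),
        sparing whoever is younger. Hypothetical rule, for illustration only."""
        return "right" if left.age < right.age else "left"

    # A child on the left, an elderly person on the right:
    print(swerve_direction(Pedestrian(age=8), Pedestrian(age=80)))  # -> right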

To some it may seem a fairly simple choice: children have their whole lives ahead of them, while the elderly have fewer years left. However, counterarguments can be made. Younger people often have a greater chance of recovering from injuries, so swerving towards the child could mean both people ultimately survive.

This is a fairly simple example, but the decision becomes even more controversial when factors such as choosing between someone with a criminal record and a law-abiding citizen are taken into account.

No single person should be made to take such decisions; nobody wants to be accountable for explaining to a family why their loved one was chosen to die over someone else.

In their paper, the researchers wrote:

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision.

We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation.

Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

The best way forward, then, is to establish what the majority feels should happen in such accidents, so that accountability is collective.

In an experiment called the Moral Machine, researchers from around the world posed hypothetical questions to millions of participants in more than 200 countries.

Here are the results:

In the driverless car world, you’re relatively safe if you’re not:

    • A passenger
    • Male
    • Unhealthy
    • Considered poor / low status
    • Unlawful
    • Elderly
    • An animal

If you’re any of these, I suggest you start taking extra care crossing the road.
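
One way to picture how collective preferences like these could eventually feed into software is as a scoring function over everyone involved in an unavoidable crash. The sketch below, again in Python, is a loose illustration only: the attribute names and weights are invented, roughly ordered to echo the survey results above, and the researchers published relative preferences, not code.

    # Hypothetical "spare" weights, loosely echoing the Moral Machine findings.
    SPARE_WEIGHTS = {
        "pedestrian": 1.0,    # passengers fared worse in the survey
        "female": 0.5,
        "healthy": 0.4,
        "high_status": 0.3,
        "lawful": 0.3,
        "young": 0.8,
        "human": 2.0,         # humans were strongly preferred over animals
    }

    def spare_score(attributes: set[str]) -> float:
        """Higher score = the crowd would rather this party were spared."""
        return sum(SPARE_WEIGHTS.get(attr, 0.0) for attr in attributes)

    child_pedestrian = {"pedestrian", "young", "healthy", "human"}
    elderly_passenger = {"lawful", "human"}
    print(spare_score(child_pedestrian))   # 4.2
    print(spare_score(elderly_passenger))  # 2.3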

The research was conducted by teams from Harvard University and MIT in the US, the University of British Columbia in Canada, and the Université Toulouse Capitole in France.
