Morality algorithm proves AI can also be friendly

We’re used to seeing headlines about AI beating us at our own games in adversarial roles, but a new study shows machines can also excel when it comes to cooperation and compromise.

The study, from an international team of computer scientists, found AI can be programmed to act with a higher degree of morality than humans. The researchers set out to build a new type of algorithm for playing games that require working together rather than simply winning at all costs.

Jacob Crandall, a computer science professor at BYU and lead author of the study, comments:

“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills. AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”

In many real-world applications, AI will have to compromise and cooperate with both humans and other machines. That AIs can be programmed with friendly traits is unlikely to come as much of a surprise, but evidence that they can express those traits better than humans opens up new possibilities.

Crandall and his team created an algorithm called S# and tested its performance across a variety of two-player games. In most cases, the machine outperformed the human players.

“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” says Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”
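To give a concrete feel for what “maintaining cooperation once it emerges” can look like in code, here is a minimal, hypothetical sketch of a strategy for the repeated prisoner’s dilemma, a classic two-player game of the kind used in studies like this. To be clear, this is not S# itself; the payoff values, function names, and the “generous tit-for-tat” rule below are all illustrative assumptions, not details from the paper.

```python
# Toy illustration (NOT Crandall's S#): a "generous tit-for-tat" agent
# for the repeated prisoner's dilemma. It never defects first and
# occasionally forgives a defection so cooperation can re-emerge.

import random

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs (assumed for illustration):
# (my move, opponent's move) -> my payoff
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

def generous_tit_for_tat(history, forgiveness=0.1):
    """Cooperate first; mirror the opponent's last move, but occasionally
    forgive a defection to help cooperation re-emerge."""
    if not history:
        return COOPERATE  # never defect first
    opponent_last = history[-1][1]
    if opponent_last == DEFECT and random.random() > forgiveness:
        return DEFECT  # retaliate most of the time
    return COOPERATE  # maintain cooperation once it emerges

def play(rounds=100):
    """Pit two generous tit-for-tat agents against each other."""
    history_a, history_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = generous_tit_for_tat(history_a)
        move_b = generous_tit_for_tat(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    # Two such agents quickly settle into stable mutual cooperation.
    print(play())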

Take the current negotiations between Britain and the EU as the former exits the bloc. Both sides claim to want an ‘orderly exit’, but human emotions are clearly involved, which keeps leading to deadlocks in the talks even as time runs out. AI negotiators could run through all the scenarios and identify the best areas for compromise without any feelings of mistrust.

“In society, relationships break down all the time,” he adds. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”

What are your thoughts on AI morality? Let us know in the comments.

