AI is sentencing people based on their ‘risk’ assessment



AI-powered risk assessment tools are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is reshaping America’s controversial prison system.

America imprisons more people than any other nation, and not simply because of its population size: its incarceration rate is the highest in the world, at roughly 716 per 100,000 residents. Russia, in second place, incarcerates around 455 per 100,000.

Black men are, by far, America’s most incarcerated demographic.

AI has well-documented bias problems. Last year, the American Civil Liberties Union found that Amazon’s facial recognition technology disproportionately misidentified people with darker skin as criminals.

The bias is not intentional but stems from a wider diversity problem in STEM careers; in the West, the fields are dominated by white males.

A 2010 study (PDF) by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate at detecting Caucasians.
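This kind of demographic skew can be quantified by auditing a system’s error rates per group. Below is a minimal sketch of such an audit; the records and group labels are invented for illustration and do not come from the study above.

```python
# Minimal per-group error audit for a face-matching system.
# All records below are hypothetical; in practice they would come
# from a labelled evaluation set.
from collections import defaultdict

# Each record: (predicted_match, actual_match, demographic_group)
records = [
    (True,  True,  "east_asian"),
    (True,  False, "east_asian"),
    (False, False, "east_asian"),
    (True,  True,  "caucasian"),
    (False, False, "caucasian"),
    (False, False, "caucasian"),
]

def false_match_rate_by_group(records):
    """False-match rate = false positives / actual non-matches, per group."""
    fp = defaultdict(int)   # predicted a match where none existed
    neg = defaultdict(int)  # all actual non-matches
    for predicted, actual, group in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

print(false_match_rate_by_group(records))
# -> {'east_asian': 0.5, 'caucasian': 0.0}
# A gap like this is the disparity the studies above describe.
```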

Deploying such inherently biased AIs is bound to exacerbate societal problems. Most concerning of all, US courtrooms are using AI tools for ‘risk’ assessments to make sentencing decisions.

Using a defendant’s profile, the AI generates a recidivism score, a number which aims to estimate the likelihood that an individual will reoffend. A judge then uses that score to make decisions such as the severity of the sentence, what services the individual should be provided, and whether the person should be held in jail before trial.
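To make the mechanics concrete, here is a minimal sketch of how such a score might be produced. The features, weights, and risk bands are entirely hypothetical; real tools such as COMPAS rely on proprietary models. The point of the sketch is that any bias in the historical data used to fit the weights carries straight through to the score a judge sees.

```python
# A minimal, hypothetical sketch of a recidivism "risk score".
# Nothing here reflects any real deployed system.
import math

def recidivism_score(profile, weights, bias=-1.0):
    """Logistic model: maps a feature dict to a score in (0, 1)."""
    z = bias + sum(weights[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score):
    """Bucket the raw score into the bands a judge might see."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

# Hypothetical weights "learned" from historical data -- if that data
# reflects biased policing, the bias is baked into these numbers.
weights = {"prior_arrests": 0.35, "age_under_25": 0.6, "employed": -0.4}

profile = {"prior_arrests": 2, "age_under_25": 1, "employed": 0}
score = recidivism_score(profile, weights)
print(f"score={score:.2f}, band={risk_band(score)}")  # score=0.57, band=medium
```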

Last July, more than 100 civil rights organisations, including the ACLU, signed a statement (PDF) calling for AI to be kept out of risk assessments.

If the bias problem with AI is ever solved, its use in the justice system could improve trust in decisions by reducing questions over whether a judge was prejudiced in their sentencing. However, we are nowhere near that point yet.

