Over 1,000 researchers sign letter opposing ‘crime predicting’ AI



More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include AI researchers at tech giants such as Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues with today's AI technologies that make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are consistently more accurate at identifying white males, and, when used in a law enforcement setting, they incorrectly flag members of the BAME community as criminals more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student described as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and by automating racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)


