policing Archives - AI News

Clearview AI used by US police for almost 1M searches (28 March 2023)

Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
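Clearview's exact pipeline is proprietary, but face-search systems of this kind typically convert each face image into a numeric embedding and rank gallery candidates by cosine similarity. A minimal sketch of that idea in Python, using a random stand-in gallery rather than a real embedding model (all names and sizes are hypothetical):

```python
import numpy as np

# Hypothetical gallery: one 512-d embedding per enrolled face image.
# Real systems produce embeddings with a trained face-recognition network
# and use an approximate-nearest-neighbour index to scale to billions.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 512)).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)  # unit-normalise

def search(probe: np.ndarray, top_k: int = 5) -> list[tuple[int, float]]:
    """Rank gallery faces by cosine similarity to a probe embedding."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                   # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]    # highest-scoring candidates
    return [(int(i), float(scores[i])) for i in best]

probe = rng.normal(size=512).astype(np.float32)
for idx, score in search(probe):
    print(f"gallery id {idx}: similarity {score:.3f}")
```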

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to disclose when it is used, for its accuracy to be openly tested in court, and for the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

Italy’s facial recognition ban exempts law enforcement (15 November 2022)

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems that use biometric data until a specific law governing their use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Authority ordered them to explain which systems would be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses capable of recognising car licence plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)

Deepfakes are now being used to help solve crimes (25 May 2022)

A deepfake video created by Dutch police could help to change the often negative perception of the technology.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
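As a rough illustration of the GAN half of that sentence (and not any production deepfake pipeline), the sketch below shows the adversarial training step in PyTorch: a generator learns to produce images that a discriminator cannot distinguish from real ones. Dimensions and network sizes are toy placeholders:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes; real deepfakes use large CNNs

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())        # generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())           # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator: push real images towards 1, generated images towards 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```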

The technology is already being used for malicious purposes including generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

However, authorities in Rotterdam have proven the technology can be put to use for good.

Dutch police have created a deepfake video of 13-year-old Sedar Soares – a young footballer who was shot dead in 2003 while throwing snowballs with his friends in the car park of a Rotterdam metro station – in an appeal for information to finally solve his murder.

The video depicts Soares picking up a football in front of the camera and walking through a guard of honour on the field that comprises his relatives, friends, and former teachers.

“Somebody must know who murdered my darling brother. That’s why he has been brought back to life for this film,” says a voice in the video, before Soares drops his ball.

“Do you know more? Then speak,” his relatives and friends say, before his image disappears from the field. The video then gives the police contact details.

It’s hoped the stirring video and a reminder of what Soares would have looked like at the time will help to jog memories and lead to the case finally being solved.

Daan Annegarn, a detective with the National Investigation Communications Team, said:

“We know better and better how cold cases can be solved. Science shows that reaching witnesses and the perpetrator in the heart—with a personal appeal to share information—works. What better way to do that than to let Sedar and his family do the talking?

We had to cross a threshold. It is not nothing to ask relatives: ‘Can we bring your loved one back to life in a deepfake video?’ We are convinced that it contributes to the investigation, but we have not done this before.

The family has to fully support it.”

So far, it seems to have had an impact. The police claim to have already received dozens of tips, though they still need to assess whether any are credible. In the meantime, anyone who may have information is encouraged to come forward.

“The deployment of deepfake is not just a lucky shot. We are convinced that it can touch hearts in the criminal environment—that witnesses and perhaps the perpetrator can come forward,” Annegarn concludes.

AI in the justice system threatens human rights and civil liberties (30 March 2022)

The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”

The concept of XAI (Explainable AI) is gaining traction and would help to address the problem of humans not always understanding how an AI has come to make a specific recommendation.
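One widely used XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops, revealing which inputs actually drive a recommendation. A minimal, purely illustrative sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a decision-support model and its training data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the drop in accuracy: a simple,
# model-agnostic explanation of which inputs matter most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```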

Having fully-informed humans make the final decisions would go a long way toward building trust in the technology—ensuring clear accountability and minimising errors.

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” says Baroness Hamwee.

“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities, and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.”

While there must be clear accountability for decision-makers in the justice system, the report also says governance needs reform.

The report notes there are more than 30 public bodies, initiatives, and programmes that play a role in the governance of new technologies in the application of the law. Without reform, where responsibility lies will be difficult to identify due to unclear roles and overlapping functions.

Societal discrimination also risks being exacerbated as biases in data become embedded in algorithms used for increasingly critical decisions, from whom to offer a loan to whom to arrest and potentially even imprison.

Across the pond, Democrats reintroduced their Algorithmic Accountability Act last month which seeks to hold tech firms accountable for bias in their algorithms.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” said Senator Ron Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

Biased AI-powered facial recognition systems have already led to wrongful arrests of people from marginalised communities. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, last year following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

Last year, UK Health Secretary Sajid Javid greenlit a series of AI-based projects aiming to tackle racial inequalities in the healthcare system. Among the greenlit projects is the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” said Javid. “We need to make sure the data we collect is representative of our nation.”

Stiffer penalties for AI misuse, a greater push for XAI, governance reform, and improving diversity in datasets all seem like great places to start to prevent AI from undermining human rights and civil liberties.

(Photo by Tingey Injury Law Firm on Unsplash)

Related: UN calls for ‘urgent’ action over AI’s risk to human rights

Reducing crime with better visualisation of data (16 March 2022)

Effective policing relies on good data. The prevention and reduction of crime, particularly serious and organised crime, depends on law enforcement agencies being able to gain swift insights from the huge and increasing amount of information at their disposal.

The problem, given the sheer volume and variety of that data, is where to look first. So much of the data available to law enforcement data analysts and senior staff is unstructured. In other words, it doesn’t line up in an orderly fashion in a relational database or spreadsheet. Police forces collect data of many different types – images from CCTV, phone records, social media conversations and images, and so on. Tying that variety of sources together to achieve valuable insights is difficult.

It demands the very latest in data integration tools, able to aggregate all information of possible relevance and present it so that it delivers insights via a single, easy-to-use platform and allows correlations between datasets to be discovered. With today’s data visualisation techniques, a picture emerges from different data sets without time being wasted on wading through information. Organised criminals work fast and change tactics regularly. Time lost in elaborate and complex manual data searches can give them the chance they need to move on and evade detection.
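To illustrate the kind of cross-dataset correlation these platforms automate, the sketch below joins two hypothetical, already-structured extracts (phone records and number-plate camera sightings) by subject and time window. Real tools work over far messier, unstructured sources; every column and value here is invented for illustration:

```python
import pandas as pd

# Hypothetical, already-structured extracts from two separate sources.
phone_records = pd.DataFrame({
    "subject": ["A", "B", "A"],
    "timestamp": pd.to_datetime(["2022-01-05 21:14", "2022-01-05 21:20",
                                 "2022-01-06 02:03"]),
    "cell_area": ["central", "docks", "docks"],
})
anpr_sightings = pd.DataFrame({
    "subject": ["A", "C"],
    "timestamp": pd.to_datetime(["2022-01-05 21:40", "2022-01-05 22:10"]),
    "camera_area": ["docks", "central"],
})

# Correlate the datasets: phone activity followed within two hours by a
# vehicle sighting of the same subject.
linked = pd.merge_asof(
    phone_records.sort_values("timestamp"),
    anpr_sightings.sort_values("timestamp"),
    on="timestamp", by="subject",
    direction="forward", tolerance=pd.Timedelta("2h"),
)
print(linked.dropna(subset=["camera_area"]))
```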

Data visualisation is critical to today’s law enforcement efforts. It complements data analytics, converting information collected from various sources into a clear picture, displayed using familiar elements such as graphs, charts, and maps. By using natural language processing as well as artificial intelligence and machine learning capabilities, otherwise invisible patterns emerge.

An easily digestible view of data can help in several ways. Here are a few of them:

Interpreting visual data: The human brain can process visual data 60,000 times faster than it does text. Data visualisation gives law enforcement professionals a crucial edge because smart visual tools amplify human abilities and allow them to more easily spot anomalies or patterns in the data. They can also better understand operations, identify areas for improvement, and uncover missing evidence links for faster case resolution.

Deploying predictive analytics: Having access to predictive and prescriptive analytics means that law enforcement professionals can build and deploy statistical models that provide alerts when new incidents are likely to happen, with context on circumstances that require proactive investigation. Data visualisation is core to this because it provides an easy-to-understand translation of machine learning models and presents actionable intelligence. Patterns can be spotted, giving law enforcers a critical head start. Simple visual techniques, such as assigning a range of amber-to-red colours to areas of concern on a map, are highly effective (a minimal sketch of this colour mapping follows this list).

Sharing critical data: Data visualisation is not just of academic use to data scientists. It is useful for everyone in the law enforcement team, from officers on the street to supervisors and analysts in the office. Detectives investigating organised crime can use the visual output of these tools to see the connections between people, property and financial transactions within a crime syndicate without needing data science qualifications. Anyone can see what the data is saying. Different teams, indeed different police forces, can share information seamlessly without fear of system incompatibilities.

More than that, today’s tools can aggregate all the relevant information within and outside an agency and analyse it to deliver insights via a single platform. Crucially here, data can be handled in a secure manner so only those with the appropriate clearance can see it.

Managing tight resources: Law enforcers are always looking for more efficient resource allocation and better ways to juggle limited personnel and equipment. Badly organised resources can affect everything from crime clearance rates to departmental morale and standing in the community. With a data visualisation platform, they can spot areas that need immediate and long-term attention. They can also see which crimes have the biggest community impact and therefore need the most resources.

Improving community relations: Data visualisation gives police a chance to connect with their communities, demonstrating the results of their work in a digestible and interactive form. They can showcase incident-rate trends, initiate awareness about emerging security concerns and foster community engagement. Sharing data builds trust and cooperation, making it easier in the longer term to gather evidence and solve cases.
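The amber-to-red colour mapping mentioned under “Deploying predictive analytics” above can be as simple as a linear blend between two endpoint colours. A minimal sketch, assuming risk scores have already been normalised to the range [0, 1]:

```python
def risk_colour(score: float) -> str:
    """Map a normalised risk score in [0, 1] to an amber-to-red hex colour."""
    score = min(max(score, 0.0), 1.0)   # clamp out-of-range scores
    green = round(191 * (1 - score))    # amber (255,191,0) -> red (255,0,0)
    return f"#ff{green:02x}00"

for s in (0.0, 0.5, 1.0):
    print(s, risk_colour(s))  # #ffbf00 (amber) ... #ff0000 (red)
```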

The right platforms are available today to allow law enforcers to make faster and more accurate decisions. The insights derived from visual analytics are already helping keep law enforcement personnel and civilians safe, reduce operational costs and improve investigation outcomes.

The police are not in a position to share all of the successes they have enjoyed with data visualisation, but others can. For example, how the Scottish Environment Protection Agency (SEPA) uses data to address the threat of illegal polluters offers a close and relevant comparison.

SEPA has a vital role in working with government, industry and the public to ensure regulatory compliance with environmental rules. It has a range of enforcement powers which it can apply to ensure that regulations are complied with. However, enforcement relies on the ability to intelligently analyse data from multiple sources, on air, water and soil quality for example.

SEPA holds millions of records dating back decades in a huge variety of formats, and used to rely on manually collecting, analysing, and reporting its testing samples, set alongside historic data, to help spot pollution trends. With an analytics platform supplemented by data science and visualisation, SEPA has built a range of customisable solutions to address a wide variety of data-related tasks. Staff members carry visual analytics on a tablet wherever they go. No longer needing to write code or carry physical binders of data analyses, they can run data analytics on the spot and answer questions in the moment. Use cases have involved looking at pollutants, ecology, and lab measurements, while others have covered industry compliance, laws, and licences.

Just as it has done for SEPA, data visualisation can help law enforcers to identify never-before-seen patterns in data to make better decisions now and help steer future direction to resolve hidden challenges in their effort to reduce crime.

(Photo by Scott Rodgerson on Unsplash)

Clearview AI is close to obtaining a patent despite regulatory crackdown (6 December 2021)

Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology.

Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.

Clearview AI offers one of the most powerful facial recognition systems in the world. In the wake of the US Capitol raid, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

The controversy around Clearview AI is that – aside from some potential far-right links – its system uses over 10 billion photos scraped from online web profiles without the explicit consent of the individuals.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

‘Unreasonably intrusive and unfair’

Last month, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, the Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to be no longer in use or under test in the UK.

“UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with,” commented UK Information Commissioner Elizabeth Denham.

The UK’s decision was the result of a joint probe launched with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier in the month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

The first patent ‘around the use of large-scale internet data’

Major web companies like Facebook, Twitter, Google, YouTube, LinkedIn, and Venmo sent cease-and-desist letters to Clearview AI demanding the company stops scraping photos and data from their platforms.

Clearview AI founder Hoan Ton-That is unabashedly proud of the mass data-scraping system his company has built and believes it is key to fighting criminal activities such as human trafficking. The company even says its application could be useful for users wanting to find out more about a person they have just met, such as through dating or business.

“There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data,” Ton-That told Politico in an interview.

Rights groups have criticised the seemingly imminent decision to grant Clearview AI a patent as essentially patenting a violation of human rights law.

(Photo by Etienne Girardet on Unsplash)

Reintroduction of facial recognition legislation receives mixed responses (17 June 2021)

The reintroduction of the Facial Recognition and Biometric Technology Moratorium Act in the 117th Congress has received mixed responses.

An initial version of the legislation was introduced in 2020, but it was reintroduced on 15 June 2021 by Senator Edward Markey (D-Mass.).

“We do not have to forgo privacy and justice for safety,” said Senator Markey. “This legislation is about rooting out systemic racism and stopping invasive technologies from becoming irreversibly embedded in our society.

“We simply cannot ignore the technologies that perpetuate injustice and that means that law enforcement should not be using facial recognition tools today. I urge my colleagues in Congress to join this effort and pass this important legislation.”

The legislation aims for a blanket ban on the use of facial and biometric recognition technologies by government agencies following a string of abuses and proven biases.

“This is a technology that is fundamentally incompatible with basic liberty and human rights. It’s more like nuclear weapons than alcohol or cigarettes — it can’t be effectively regulated, it must be banned entirely. Silicon Valley lobbyists are already pushing for weak regulations in the hopes that they can continue selling this dangerous and racist technology to law enforcement. But experts and advocates won’t be fooled,” said Evan Greer, Director of Fight for the Future.

Human rights group the ACLU (American Civil Liberties Union) has also been among the leading voices opposing facial recognition technologies. The group’s lawyers have supported victims of facial recognition – such as Robert Williams, wrongfully arrested on his lawn in front of his family – and backed both state- and national-level attempts to ban government use of the technology.

Kate Ruane, Senior Legislative Counsel for the ACLU, said:

“The perils of face recognition technology are not hypothetical — study after study and real life have already shown us its dangers. 

The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple Black men including Robert Williams, an ACLU client.

Giving law enforcement even more powerful surveillance technology empowers constant surveillance, harms racial equity, and is not the answer.

It’s past time to take action, and the Facial Recognition and Biometric Technology Moratorium Act is an important step to halt government use of face recognition technology.”

Critics of the legislation have pointed towards the social benefits of such technologies and propose that more oversight is required rather than a blanket ban.

The Security Industry Association (SIA) claims that a blanket ban would prevent legitimate uses of facial and biometric recognition technologies including:

  • Reuniting victims of human trafficking with their families and loved ones.
  • Identifying the individuals who stormed the US Capitol on 6 Jan.
  • Detecting use of fraudulent documentation by non-citizens at air ports of entry.
  • Aiding counterterrorism investigations in critical situations.
  • Exonerating innocent individuals accused of crimes.

“Rather than impose sweeping moratoriums, SIA encourages Congress to propose balanced legislation that promulgates reasonable safeguards to ensure that facial recognition technology is used ethically, responsibly and under appropriate oversight and that the United States remains the global leader in driving innovation,” comments Don Erickson, CEO of the SIA.

To support its case, the SIA recently commissioned a poll (PDF) from Schoen Cooperman Research which found that 68 percent of Americans believe facial recognition can make society safer. Support is higher for specific applications such as for airlines (75%) and security at office buildings (70%).

As part of ACLU-led campaigns, multiple jurisdictions have already prohibited police use of facial recognition technology. These jurisdictions include San Francisco, Berkeley, and Oakland, California; Boston, Brookline, Cambridge, Easthampton, Northampton, Springfield, and Somerville, Massachusetts; New Orleans, Louisiana; Jackson, Mississippi; Portland, Maine; Minneapolis, Minnesota; Portland, Oregon; King County, Washington; and the states of Virginia and Vermont. New York state also suspended use of face recognition in schools and California suspended its use with police-worn body cameras.

A copy of the legislation can be found here (PDF)

(Photo by Joe Gadd on Unsplash)

Amazon will continue to ban police from using its facial recognition AI (24 May 2021)

Amazon will extend a ban it enacted last year on the use of its facial recognition for law enforcement purposes.

The web giant’s Rekognition service is one of the most powerful facial recognition tools available. Last year, Amazon imposed a one-year moratorium banning its use by police departments following a string of cases where facial recognition services – from various providers – were found to be inaccurate and/or misused by law enforcement.

Amazon has now extended its ban indefinitely.

Facial recognition services have already led to wrongful arrests that disproportionately impacted marginalised communities.

Last year, the American Civil Liberties Union (ACLU) filed a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma” following a misidentification by a facial recognition system.

Williams was held in a “crowded and filthy” cell overnight without being given any reason before being released on a cold and rainy January night where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find childcare so that she could come and pick him up.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, Deputy Director of digital rights group Fight for the Future.

Clearview AI – a controversial facial recognition provider that scrapes data about people from across the web and is used by approximately 2,400 agencies across the US alone – boasted in January that police use of its system jumped 26 percent following the Capitol raid.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices. Clearview AI was also forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

Many states, countries, and even some police departments are taking matters into their own hands and banning the use of facial recognition by law enforcement. Various rights groups continue to apply pressure and call for more to follow.

Human rights group Liberty won the first international case banning the use of facial recognition technology for policing in August last year. Liberty launched the case on behalf of Cardiff, Wales resident Ed Bridges who was scanned by the technology first on a busy high street in December 2017 and again when he was at a protest in March 2018.

Following the case, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology breaches privacy rights, data protection laws, and equality laws. South Wales Police had used facial recognition technology around 70 times – with around 500,000 people estimated to have been scanned by May 2019 – but must now halt its use entirely.

Facial recognition tests in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

A 2019 independent report into the Met Police’s facial recognition trials concluded that it was verifiably accurate in just 19 percent of cases.

(Photo by Bermix Studio on Unsplash)

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown how ineffective they are with darker skin colours and females. The error rate is, therefore, higher when facial recognition algorithms are used on some parts of society over others.
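That disparity is usually surfaced by disaggregating error rates by demographic group instead of quoting a single headline accuracy figure. A minimal sketch with a hypothetical evaluation log, computing the false-match rate (the share of true non-matches the system wrongly accepted) per group:

```python
import pandas as pd

# Hypothetical evaluation log: one row per verification attempt.
log = pd.DataFrame({
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
    "is_match":  [0, 0, 1, 1, 0, 0, 0, 1],   # ground truth
    "predicted": [0, 1, 1, 1, 1, 1, 0, 1],   # system decision
})

# False-match rate per group: among true non-matches, the share the system
# wrongly accepted. Unequal rates across groups indicate biased performance.
non_matches = log[log["is_match"] == 0]
print(non_matches.groupby("group")["predicted"].mean())
```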

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when they’re being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, a business loan or mortgage could be rejected without transparency simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but dependent on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.
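A basic check used in algorithm audits for this kind of bias is to compare selection rates across groups; a ratio below 0.8 is the conventional “four-fifths rule” red flag. A minimal sketch with hypothetical figures:

```python
import pandas as pd

# Hypothetical outcomes from an automated screening tool.
decisions = pd.DataFrame({
    "group":    ["x"] * 6 + ["y"] * 6,
    "approved": [1, 1, 1, 0, 1, 0,   1, 0, 0, 0, 1, 0],
})

# Selection rate per group, and the ratio of worst- to best-treated group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print(f"disparate-impact ratio: {rates.min() / rates.max():.2f}")
```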

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has relied on data to make decisions for arguably longer than any other, using it to determine things like how likely an individual is to repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing but found variance across forces with regards to both usage and managing ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report but among them is:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF)

(Photo by Matt Duncan on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error (25 June 2020)

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the NY Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claims witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams using the security footage and an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities – such as San Francisco, Oakland, and Berkeley in California – that have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)
