aclu Archives - AI News
https://www.artificialintelligence-news.com/tag/aclu/

Error-prone facial recognition leads to another wrongful arrest
Mon, 07 Aug 2023

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) have been African Americans. Woodruff’s case is notable, however, as she is the first woman to report such an incident.

This latest incident is the third known allegation of a wrongful arrest in the past three years attributed specifically to the Detroit Police Department and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Clearview AI agrees to restrict sales of its faceprint database
Tue, 10 May 2022

Clearview AI has proposed to restrict sales of its faceprint database as part of a settlement with the American Civil Liberties Union (ACLU).

The controversial facial recognition firm caused a stir due to scraping billions of images of people across the web without their consent. As a result, the company has faced the ire of regulators around the world and numerous court cases.

One court case against Clearview AI was filed by the ACLU in 2020, claiming that the company violated Illinois’ Biometric Information Privacy Act (BIPA). The act requires companies operating in the state to obtain explicit consent from individuals before collecting their biometric data.

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” explained Rebecca Glenberg, staff attorney for the ACLU of Illinois.

“BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables.”

The case is ongoing but the two sides have reached a draft settlement. As part of the proposal, Clearview AI has agreed to restrict sales of its faceprint database to businesses and other private entities across the country.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” said Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

Illinois residents are offered the strongest protections. Clearview AI will be banned from sharing access to its database with any private company in the state, as well as with any local public entity, for five years.

Furthermore, Clearview AI plans to filter out images taken in Illinois. Because this filtering may not catch every image, residents will also be able to upload their photo and have Clearview block its software from returning matches for their face. Clearview AI will spend $50,000 on online adverts to raise awareness of this feature.

“This settlement is a big win for the most vulnerable people in Illinois,” commented Linda Xóchitl Tortolero, president and CEO of Mujeres Latinas en Acción, a Chicago-based non-profit.

“Much of our work centres on protecting privacy and ensuring the safety of survivors of domestic violence and sexual assault. Before this agreement, Clearview ignored the fact that biometric information can be misused to create dangerous situations and threats to their lives. Today that’s no longer the case.” 

The protections offered to American citizens outside Illinois aren’t quite as stringent.

Clearview AI is still able to sell access to its huge database to public entities, including law enforcement. In the wake of the US Capitol raid, the company boasted that police use of its facial recognition system increased 26 percent.

However, the company would be banned from selling access to its complete database to private companies. Clearview AI could still sell its software, but any purchaser would need to source their own database to train it.

“There is a battle being fought in courtrooms and statehouses across the country about who is going to control biometrics—Big Tech or the people being tracked by them—and this represents one of the biggest victories for consumers to date,” said J. Eli Wade-Scott from Edelson PC.

In November 2021, the UK’s Information Commissioner’s Office (ICO) announced a provisional fine of just over £17 million against Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

The full draft settlement between Clearview AI and the ACLU can be found here.

(Photo by Maksim Chernishev on Unsplash)

Related: Ukraine harnesses Clearview AI to uncover assailants and identify the fallen


Reintroduction of facial recognition legislation receives mixed responses
Thu, 17 Jun 2021

The reintroduction of the Facial Recognition and Biometric Technology Moratorium Act in the 117th Congress has received mixed responses.

An initial version of the legislation was introduced in 2020; it was reintroduced on 15 June 2021 by Sen. Edward Markey (D-Mass.).

“We do not have to forgo privacy and justice for safety,” said Senator Markey. “This legislation is about rooting out systemic racism and stopping invasive technologies from becoming irreversibly embedded in our society.

“We simply cannot ignore the technologies that perpetuate injustice and that means that law enforcement should not be using facial recognition tools today. I urge my colleagues in Congress to join this effort and pass this important legislation.”

The legislation aims for a blanket ban on the use of facial and biometric recognition technologies by government agencies following a string of abuses and proven biases.

“This is a technology that is fundamentally incompatible with basic liberty and human rights. It’s more like nuclear weapons than alcohol or cigarettes – it can’t be effectively regulated, it must be banned entirely. Silicon Valley lobbyists are already pushing for weak regulations in the hopes that they can continue selling this dangerous and racist technology to law enforcement. But experts and advocates won’t be fooled,” said Evan Greer, Director of Fight for the Future.

Human rights group the ACLU (American Civil Liberties Union) has also been among the leading voices opposing facial recognition technologies. The group’s lawyers have supported victims of facial recognition errors – such as Robert Williams, a Black man wrongfully arrested on his lawn in front of his family – and backed both state- and national-level attempts to ban government use of the technology.

Kate Ruane, Senior Legislative Counsel for the ACLU, said:

“The perils of face recognition technology are not hypothetical — study after study and real life have already shown us its dangers. 

The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple Black men including Robert Williams, an ACLU client.

Giving law enforcement even more powerful surveillance technology empowers constant surveillance, harms racial equity, and is not the answer.

It’s past time to take action, and the Facial Recognition and Biometric Technology Moratorium Act is an important step to halt government use of face recognition technology.”

Critics of the legislation have pointed towards the social benefits of such technologies and propose that more oversight is required rather than a blanket ban.

The Security Industry Association (SIA) claims that a blanket ban would prevent legitimate uses of facial and biometric recognition technologies including:

  • Reuniting victims of human trafficking with their families and loved ones.
  • Identifying the individuals who stormed the US Capitol on 6 Jan.
  • Detecting use of fraudulent documentation by non-citizens at air ports of entry.
  • Aiding counterterrorism investigations in critical situations.
  • Exonerating innocent individuals accused of crimes.

“Rather than impose sweeping moratoriums, SIA encourages Congress to propose balanced legislation that promulgates reasonable safeguards to ensure that facial recognition technology is used ethically, responsibly and under appropriate oversight and that the United States remains the global leader in driving innovation,” commented Don Erickson, CEO of the SIA.

To support its case, the SIA recently commissioned a poll (PDF) from Schoen Cooperman Research which found that 68 percent of Americans believe facial recognition can make society safer. Support is higher for specific applications such as for airlines (75%) and security at office buildings (70%).

As part of ACLU-led campaigns, multiple jurisdictions have already prohibited police use of facial recognition technology. These jurisdictions include San Francisco, Berkeley, and Oakland, California; Boston, Brookline, Cambridge, Easthampton, Northampton, Springfield, and Somerville, Massachusetts; New Orleans, Louisiana; Jackson, Mississippi; Portland, Maine; Minneapolis, Minnesota; Portland, Oregon; King County, Washington; and the states of Virginia and Vermont. New York state also suspended use of face recognition in schools and California suspended its use with police-worn body cameras.

A copy of the legislation can be found here (PDF).

(Photo by Joe Gadd on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

ACLU joins over 50 groups in calling for Homeland Security to halt use of Clearview AI
Tue, 20 Apr 2021

The American Civil Liberties Union (ACLU) has joined over 50 other rights and advocacy groups in calling for the Department of Homeland Security (DHS) to halt the use of Clearview AI’s controversial facial recognition system.

In a letter (PDF) addressed to DHS Secretary Alejandro Mayorkas, the signatories wrote: “The undersigned organizations have serious concerns about the federal government’s use of facial recognition technology provided by private company Clearview AI. We request that the Department immediately stop using Clearview AI at its agencies on a contractual, trial, or any other basis.”

Clearview AI’s system has raised alarm among privacy advocates for its use of more than three billion biometric identifiers scraped without permission from websites including Facebook, Instagram, and LinkedIn.

An investigation by Huffington Post also found Clearview AI has extensive links to the far-right. Clearview AI lawyer Tor Ekeland, also known as The Troll’s Lawyer for taking on some rather unsavoury clients, argued last year: “Common law has never recognised a right to privacy for your face.”

Despite the concerns, thousands of state and federal law enforcement agencies have used Clearview AI’s system. In the wake of the Capitol Hill riot, Clearview AI itself says usage increased by 26 percent.

With warrants and other due processes, such tools can be a powerful asset in the fight against serious crime. However, worryingly, many of the leaders of the agencies using Clearview AI were unaware employees were using the tool.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

The following organisations are signatories of the letter to the DHS:

  • Mijente
  • Just Futures Law
  • The Center on Privacy & Technology at Georgetown Law
  • Access Now
  • American Civil Liberties Union (ACLU)
  • ACLU of Minnesota
  • ACLU of Northern California
  • ACLU of North Carolina
  • Adelante Alabama Worker Center
  • Asian Americans Advancing Justice | AAJC
  • CASA
  • Center for Constitutional Rights
  • Center for Popular Democracy
  • Centro de Trabajadores Unidos
  • Cleveland Jobs with Justice
  • Colectiva Legal del Pueblo
  • Colorado Immigrant Rights Coalition
  • Community Justice Exchange
  • Community Justice Project, Inc.
  • Connecticut Shoreline Indivisible
  • Defending Rights & Dissent
  • Demand Progress Education Fund
  • Detention Watch Network
  • Electronic Frontier Foundation (EFF)
  • Electronic Privacy Information Center
  • Fight for the Future
  • Freedom for Immigrants
  • Free Press
  • Government Information Watch
  • Immigrant Defense Project
  • Immigrant Legal Advocacy Project
  • La Resistencia
  • Laredo Immigrant Alliance
  • LatinoJustice PRLDEF
  • Legal Aid at Work
  • Louisiana Advocates for Immigrants in Detention
  • MA Jobs with Justice
  • MediaJustice
  • Meyer Law Office, P.C.
  • Muslim Advocates
  • Muslim Justice League
  • National Immigrant Justice Center
  • National Immigration Law Center
  • National Network for Immigrant & Refugee Rights
  • National Network of Arab American Communities (NNAAC)
  • National Organization for Women
  • New America’s Open Technology Institute
  • New Mexico Immigrant Law Center
  • New York Civil Liberties Union
  • Northwest Immigrant Rights Project
  • Open MIC (Open Media and Information Companies Initiative)
  • Open The Government
  • OpenMedia
  • Project for Privacy and Surveillance Accountability
  • Project On Government Oversight
  • Restore The Fourth
  • Southeastern Immigrant Rights Network
  • S.T.O.P. – Surveillance Technology Oversight Project
  • Sanctuary DMV
  • Twin Cities Innovation Alliance
  • X-Lab
  • UndocuBlack Network
  • Unitarian Universalist Service Committee
  • United We Dream Network
  • Upturn
  • Washington Defender Association
  • Win Without War
  • Wind of the Spirit Immigrant Resource Center

“We request that the Biden administration refrain from providing any new contracts to Clearview AI, and that any and all current or future use of the platform by federal agencies be suspended immediately, irrespective of the existence of a contract with the company,” the signatories wrote.

“Clearview AI’s continued violation of civil rights and privacy rights provide ample reason to discontinue its use.”

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Detroit Police chief says AI face recognition doesn’t work ‘96% of the time’
Tue, 30 Jun 2020

Detroit Police chief James Craig has acknowledged that AI-powered face recognition doesn’t work the vast majority of times.

“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”

Craig’s comments arrive just days after the ACLU (American Civil Liberties Union) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

Current AI algorithms are known to have a racial bias problem. Extensive studies have repeatedly shown that facial recognition algorithms approach 100 percent accuracy on white men but are significantly less reliable on people with darker skin tones and on women.
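A simple base-rate sketch helps square high headline accuracy with the chief’s claim that identifications are wrong 96 percent of the time. All numbers below are hypothetical, chosen only for illustration – they are not figures from the article or from any vendor:

```python
# Base-rate sketch: a one-to-many search against a large gallery can
# return mostly false matches even when each individual comparison is
# very accurate. All parameters are hypothetical.

gallery_size = 1_000_000        # hypothetical photo gallery (e.g. licence photos)
false_match_rate = 1e-5         # hypothetical per-comparison false-match rate
true_accept_rate = 0.99         # chance the real suspect matches, if enrolled
prob_suspect_enrolled = 0.5     # hypothetical prior that the suspect is in the gallery

# Expected matches per search
expected_false = gallery_size * false_match_rate            # 10.0 false candidates
expected_true = prob_suspect_enrolled * true_accept_rate    # 0.495 true candidates

# Probability that any given returned match is actually the suspect
precision = expected_true / (expected_true + expected_false)

print(f"Expected false matches per search: {expected_false:.1f}")
print(f"Chance a returned match is the real suspect: {precision:.1%}")
# With these numbers, roughly 95% of returned matches are false -
# close to the 96% misidentification figure quoted above.
```

The point of the sketch is that per-comparison accuracy and investigative reliability are different quantities: when the gallery is large and genuine suspects are rare, false matches dominate the results regardless of how the per-comparison error rate is advertised.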

This racism issue was shown again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to a variety of people from the BAME communities.


Last week, Boston followed in the footsteps of a growing number of cities – including San Francisco and Oakland in California – in banning facial recognition technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

On the other side of the pond, facial recognition trials in the UK have so far been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival identified not a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.

An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that it was only verifiably accurate in just 19 percent of cases.

The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1000 experts signed an open letter last week opposing the use of AI for such purposes.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.

The acknowledgement from Detroit’s police chief that current facial recognition technologies do not work in around 96 percent of cases should be reason enough to halt its use, especially for law enforcement, at least until serious improvements are made.

(Photo by Joshua Hoehne on Unsplash)


The ACLU uncovers the first known wrongful arrest due to AI error
Thu, 25 Jun 2020

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after Robert Williams, a Black man, was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the NY Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claims witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirmed that the department used facial recognition to identify Williams from the security footage, and that an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use, following a growing number of cities – including San Francisco and Oakland in California – that have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step, crime prediction.

(Photo by ev on Unsplash)


ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy
Fri, 29 May 2020

The post ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy appeared first on AI News.

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning are Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – especially when identifying women and people of colour.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”


The post ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy appeared first on AI News.

Amazon joins calls to establish facial recognition standards
https://www.artificialintelligence-news.com/2019/02/08/amazon-calls-facial-recognition-standards/
Fri, 08 Feb 2019 15:36:58 +0000

Amazon has put its weight behind the growing number of calls from companies, individuals, and rights groups to establish facial recognition standards.

Michael Punke, VP of Global Public Policy at Amazon Web Services, said:

“Over the past several months, we’ve talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks.

It’s critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology.”

In a blog post today, Amazon highlighted five guidelines to ensure facial recognition is developed and used ethically.

The first of the five calls for facial recognition to follow existing laws that protect civil liberties. To ensure accountability, the second calls for all facial recognition results to be reviewed by humans before any decision is taken.

Other guidelines include a call for transparency in how agencies are using facial recognition technology, and for visual notices to be placed where it is being used in public or commercial settings.

Facial Recognition Concerns

The company has faced criticism of its ‘Rekognition’ system which is used by police forces and has been pitched to agencies such as US Immigration and Customs Enforcement (ICE).

In a letter addressed to Amazon CEO Jeff Bezos, employees wrote:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights.

As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used.”

The letter was sent following ICE’s separation of immigrant children from their families at the US border and subsequent detainment. There’s no evidence ICE ultimately purchased or used Amazon’s technology.

In July last year, the American Civil Liberties Union tested Amazon’s facial recognition technology on members of Congress to see if they would match entries in a database of criminal mugshots.

Rekognition compared pictures of all members of the House and Senate against 25,000 arrest photos. The false matches disproportionately affected members of the Congressional Black Caucus.

Dr Matt Wood, General Manager of AI at Amazon Web Services, commented on the ACLU’s findings later that month. He said the ACLU left Rekognition’s default confidence threshold of 80 percent in place, when Amazon recommends 95 percent or higher for law enforcement use.

Wood, however, went on to say it showed how standards are needed to ensure facial recognition systems are used properly. He called for “the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work.”
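The effect of that confidence setting can be illustrated with a short sketch. The candidate names and similarity scores below are invented for illustration; they are not real Rekognition output.

```python
# Hypothetical similarity scores (percent) for candidates returned by a
# face-search system. These numbers are invented for illustration and are
# not real Rekognition output.
candidate_matches = [
    ("mugshot_A", 82.1),
    ("mugshot_B", 96.4),
    ("mugshot_C", 88.7),
]

def filter_matches(matches, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [name for name, score in matches if score >= threshold]

# At the 80 percent default, all three candidates count as "matches";
# at the 95 percent level recommended for law enforcement, only one does.
print(filter_matches(candidate_matches, 80.0))  # ['mugshot_A', 'mugshot_B', 'mugshot_C']
print(filter_matches(candidate_matches, 95.0))  # ['mugshot_B']
```

A lower threshold returns more candidate matches, and with them more false positives – which is why the choice of operating point matters so much in law enforcement settings.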

The call for facial recognition standards extends beyond the US. In China, the CEO of SenseTime – the world’s most funded AI startup – also said he wants to see facial recognition standards established for a ‘healthier’ industry.

In the UK, Information Commissioner Elizabeth Denham announced her office has identified facial recognition technology as a priority to establish what protections are needed for the public.

SenseTime is so well-funded not just because of its powerful facial recognition technology, but also from adoption by the Chinese government. The firm aims to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more as part of its ‘Viper’ system.

If such a system were deployed with biased algorithms, it would exacerbate current societal problems. Algorithmic Justice League founder Joy Buolamwini gave a fantastic presentation during the World Economic Forum last month on the need to fight AI bias.

As Spider-Man’s Uncle Ben would say: “With great power, comes great responsibility”.


The post Amazon joins calls to establish facial recognition standards appeared first on AI News.

Microsoft warns its AI offerings ‘may result in reputational harm’
https://www.artificialintelligence-news.com/2019/02/06/microsoft-ai-result-reputational-harm/
Wed, 06 Feb 2019 17:46:17 +0000

Microsoft has warned investors that its AI offerings could damage the company’s reputation in a bid to prepare them for the worst.

AI can be unpredictable, and Microsoft already has experience. Back in 2016, a Microsoft chatbot named Tay became a racist, sexist, generally-rather-unsavoury character after internet users took advantage of its machine learning capabilities.

The chatbot made headlines around the world and was bound to have caused Microsoft some reputational damage.

In the company’s latest quarterly report, Microsoft warned investors:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions.

These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.

Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

Several companies have been criticised for unethical AI development, including several of Microsoft’s competitors.

AI Backlash

Google was embroiled in a backlash over its ‘Project Maven’ defence contract to supply drone analysing AI to the Pentagon. The contract received both internal and external criticism.

Several Googlers left the company and others threatened to do so if the contract was not dropped. Over 4,000 signed a petition demanding that management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai said the contract would not be renewed.

Pichai also promised in a blog post that the company would not develop technologies or weapons that cause harm, or anything that can be used for surveillance violating ‘internationally accepted norms’ or ‘widely accepted principles of international law and human rights’.

In June last year, Microsoft faced its own internal revolt over a $19 million contract with ICE (Immigration and Customs Enforcement) during a time when authorities were splitting up immigrant families.

Microsoft CEO Satya Nadella was forced to clarify that Microsoft isn’t directly involved with the government’s policy of separating families at the US-Mexico border.

A report from the American Civil Liberties Union found bias in Amazon’s facial recognition algorithm. Amazonians wrote a letter to CEO Jeff Bezos expressing their concerns.

Problems with AI bias keep arising and likely will continue for some time. It’s an issue which needs to be tackled before any mass rollouts.

Last month, Algorithmic Justice League founder Joy Buolamwini gave an impactful presentation during the World Economic Forum on the AI bias issue.

Microsoft is clearly preparing investors for some controversial slip-ups of its own along its AI development journey.


The post Microsoft warns its AI offerings ‘may result in reputational harm’ appeared first on AI News.

AI is sentencing people based on their ‘risk’ assessment
https://www.artificialintelligence-news.com/2019/01/22/ai-sentencing-people-risk-assessment/
Tue, 22 Jan 2019 10:42:12 +0000

AI-powered tools for determining the risk of an individual are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is evolving America’s controversial prison system.

America imprisons more people than any other nation. This is not simply a function of population size: at roughly 716 per 100,000 people, the US incarceration rate is the highest in the world. Russia, with the second-highest rate, incarcerates around 455 per 100,000.

Black males are, by far, America’s most incarcerated demographic.

AI has repeatedly been shown to have bias problems. Last year, the American Civil Liberties Union found that Amazon’s facial recognition technology disproportionately flagged people with darker skin as criminals.

The bias is not intentional but a result of a wider problem in STEM career diversity. In the West, the fields are dominated by white males.

A 2010 study by researchers at NIST and the University of Texas in Dallas found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.
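The disparities these audits report come down to comparing error rates across demographic groups. A minimal sketch of that comparison, using invented counts rather than figures from the NIST or ACLU studies:

```python
# Invented audit counts for illustration only -- not figures from the NIST
# study or the ACLU's congressional test.
results = {
    # group: (false_matches, total_comparisons)
    "darker-skinned subjects": (12, 200),
    "lighter-skinned subjects": (4, 300),
}

def false_match_rate(false_matches, total):
    """Fraction of comparisons that incorrectly flagged a match."""
    return false_matches / total

for group, (fm, total) in results.items():
    print(f"{group}: {false_match_rate(fm, total):.1%} false-match rate")
# darker-skinned subjects: 6.0% false-match rate
# lighter-skinned subjects: 1.3% false-match rate
```

When one group’s false-match rate is several times another’s, members of that group are flagged as suspects far more often for the same underlying behaviour.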

Deploying such inherently biased AIs is bound to exacerbate societal problems. Most concerning, US courtrooms are using AI tools for ‘risk’ assessments to make sentencing decisions.

Using a defendant’s profile, the AI generates a recidivism score – a number estimating how likely the individual is to reoffend. A judge then uses that score to make decisions such as the severity of the sentence, what services the individual should be provided, and whether they should be held in jail before trial.
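To make the pipeline concrete, here is a deliberately toy sketch of a score-then-decide workflow. The features, weights, and cut-offs are invented for this illustration and do not reflect COMPAS or any real risk-assessment tool.

```python
# Entirely hypothetical illustration of a recidivism "risk score" pipeline.
# The features, weights, and cut-offs are invented and do not reflect any
# real risk-assessment tool.

def risk_score(profile):
    """Return a 1-10 score from a defendant profile (hypothetical weights)."""
    score = 1
    score += min(profile.get("prior_arrests", 0), 5)   # prior record dominates
    score += 2 if profile.get("age", 99) < 25 else 0   # youth raises the score
    score += 1 if profile.get("unemployed", False) else 0
    return min(score, 10)

def pretrial_decision(score):
    """Map the score to the kind of decision a judge might base on it."""
    if score >= 8:
        return "detain before trial"
    if score >= 5:
        return "release with supervision"
    return "release on recognisance"

defendant = {"prior_arrests": 1, "age": 22, "unemployed": True}
s = risk_score(defendant)
print(s, "->", pretrial_decision(s))  # 5 -> release with supervision
```

Note how even this toy version leans on proxy features such as employment status, which can encode socioeconomic and racial disparities – precisely the objection the civil rights organisations raise below.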

Last July, a statement (PDF) was signed by over 100 civil rights organisations – including the ACLU – calling for AI to be kept clear of risk assessments.

If the bias problem with AI is ever solved, its use in the justice system could improve trust in decisions by reducing questions over whether a judge was prejudiced in sentencing. However, we are nowhere near that point.


The post AI is sentencing people based on their ‘risk’ assessment appeared first on AI News.
