facial recognition Archives - AI News

Error-prone facial recognition leads to another wrongful arrest
Mon, 07 Aug 2023

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) have been African Americans. Woodruff’s case is also notable because she is the first woman to report such an incident.

This latest incident marks the third known allegation in as many years of a wrongful arrest attributed specifically to the Detroit Police Department’s reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.
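Audits of this kind, such as NIST’s demographic-effects testing, typically quantify the disparity by measuring the false-match rate separately for each demographic group. A minimal sketch of that calculation (the trial format and group labels here are illustrative assumptions, not real audit data):

```python
from collections import defaultdict

def false_match_rates(trials):
    """Compute the false-match rate per demographic group.

    `trials` is a list of (group, predicted_match, same_person) tuples from
    comparison trials. The false-match rate is the share of different-person
    ("impostor") pairs the system wrongly declared a match.
    """
    impostor_total = defaultdict(int)
    impostor_errors = defaultdict(int)
    for group, predicted_match, same_person in trials:
        if not same_person:  # impostor pair: a predicted match is an error
            impostor_total[group] += 1
            if predicted_match:
                impostor_errors[group] += 1
    return {g: impostor_errors[g] / impostor_total[g] for g in impostor_total}

# Illustrative (fabricated) trial outcomes showing a disparity.
trials = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", True,  False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  False),
]
print(false_match_rates(trials))  # group_b's rate is three times group_a's
```

A system whose false-match rate differs this sharply between groups will produce a correspondingly skewed pool of wrongful identifications, which is the mechanism behind the disproportionate impact described above.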

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Clearview AI used by US police for almost 1M searches
Tue, 28 Mar 2023
Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
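Conceptually, systems like this convert each face into a numeric embedding and then rank database entries by similarity to the probe photo. A hedged sketch of that matching step (Clearview’s actual pipeline is proprietary; the embeddings, threshold, and toy database below are assumptions for illustration):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe_embedding, database, threshold=0.8):
    """Return database entries whose similarity to the probe exceeds the
    threshold, best match first. Each entry is (identity, embedding)."""
    scored = [(identity, cosine_similarity(probe_embedding, emb))
              for identity, emb in database]
    matches = [(identity, score) for identity, score in scored
               if score >= threshold]
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy database of pre-computed embeddings (real systems index billions).
db = [
    ("person_a", [0.9, 0.1, 0.2]),
    ("person_b", [0.1, 0.9, 0.3]),
]
probe = [0.88, 0.12, 0.25]
print(search(probe, db))  # person_a ranks first
```

At the scale of billions of images, a linear scan like this is replaced by approximate nearest-neighbour indexes, but the accept/reject threshold remains the critical knob: set it too low and the system produces exactly the kind of false matches described in the cases below.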

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to disclose when it is used, to have its accuracy openly tested in court, and to allow the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

Italy’s facial recognition ban exempts law enforcement
Tue, 15 Nov 2022
Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems that use biometric data until a specific law governing their use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Agency ordered Lecce’s authorities to explain what systems will be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car license plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)

UK fines Clearview AI £7.5M for scraping citizens’ data
Mon, 23 May 2022
Clearview AI has been fined £7.5 million by the UK’s privacy watchdog for scraping the online data of citizens without their explicit consent.

The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI. Today’s announcement suggests Clearview AI got off relatively lightly.

John Edwards, UK Information Commissioner, said:

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images.

The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.

That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”

The enforcement notice requires Clearview AI to stop obtaining and using the personal data of UK residents, and to delete the data it already holds on them.

UK-Australia joint investigation

A joint investigation by the UK’s ICO and the Office of the Australian Information Commissioner (OAIC) was first launched in July 2020.

Angelene Falk, Australian Information Commissioner and Privacy Commissioner, commented:

“The joint investigation with the ICO has been highly valuable and demonstrates the benefits of data protection regulators collaborating to support effective and proactive regulation. 

The issues raised by Clearview AI’s business practices presented novel concerns in a number of jurisdictions. By partnering together, the OAIC and ICO have been able to contribute to an international position, and shape our global regulatory environment.”

Falk concluded that uploading an image to a social media site “does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes”.

The OAIC ordered Clearview AI to destroy the biometric data it collected of Australians.

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity,” added Edwards.

“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues. And it means working with regulators in Europe, which is why I am meeting them in Brussels this week so we can collaborate to tackle global privacy harms.”

(Photo by quan le on Unsplash)

Related: Clearview AI agrees to restrict sales of its faceprint database

Zoom receives backlash for emotion-detecting AI
Thu, 19 May 2022
Zoom has caused a stir following reports that it’s developing an AI system for detecting emotions.

The system, first reported by Protocol, claims to scan users’ faces and their speech to determine their emotions.

Zoom detailed the system further in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for helping salespeople improve their pitches based on the emotions of call participants.

Naturally, the system is seen as rather dystopian and has received its fair share of criticism.

On Wednesday, over 25 rights groups sent a joint letter to Zoom CEO Eric Yuan. The letter urges Zoom to cease research on emotion-based AI.

The letter’s signatories include the American Civil Liberties Union (ACLU), Muslim Justice League, and Access Now.

One of the key concerns is that emotion-detecting AI could be used for consequential decisions such as hiring or lending, which could entrench existing inequalities.

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” Zoom explained.

Zoom IQ tracks metrics including:

  • Talk-listen ratio
  • Talking speed
  • Filler words
  • Longest spiel (monologue)
  • Patience
  • Engaging questions
  • Next steps set up
  • Sentiment/Engagement analysis
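Zoom has not published how these metrics are computed. As a rough illustration, the first two — talk-listen ratio and talking speed — could be derived from a timed call transcript along these lines (the segment format and the `rep` speaker label are assumptions for the sketch, not Zoom’s actual pipeline):

```python
def call_metrics(segments, rep="rep"):
    """Estimate talk-listen ratio and talking speed for one speaker.

    `segments` is a list of (speaker, start_sec, end_sec, text) tuples,
    an assumed transcript format.
    """
    rep_seconds = 0.0
    other_seconds = 0.0
    rep_words = 0
    for speaker, start, end, text in segments:
        duration = end - start
        if speaker == rep:
            rep_seconds += duration
            rep_words += len(text.split())
        else:
            other_seconds += duration
    talk_listen = rep_seconds / other_seconds if other_seconds else float("inf")
    words_per_minute = rep_words / (rep_seconds / 60) if rep_seconds else 0.0
    return {"talk_listen_ratio": talk_listen, "words_per_minute": words_per_minute}

segments = [
    ("rep", 0, 30, " ".join(["word"] * 75)),       # 30 s, 75 words
    ("customer", 30, 90, "I have some questions"),  # 60 s of listening
    ("rep", 90, 120, " ".join(["word"] * 75)),      # 30 s, 75 words
]
print(call_metrics(segments))  # ratio 1.0, 150 words per minute
```

Metrics like these are mechanical word counts; the sentiment and engagement analysis at the end of the list is the part critics single out as unproven, since it claims to infer internal states rather than measure observable behaviour.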

Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, called emotion-detecting AI “creepy” and “a junk science”.

(Photo by iyus sugiharto on Unsplash)

Clearview AI agrees to restrict sales of its faceprint database
Tue, 10 May 2022
Clearview AI has proposed to restrict sales of its faceprint database as part of a settlement with the American Civil Liberties Union (ACLU).

The controversial facial recognition firm caused a stir due to scraping billions of images of people across the web without their consent. As a result, the company has faced the ire of regulators around the world and numerous court cases.

One court case filed against Clearview AI was by the ACLU in 2020, claiming that it violated the Biometric Information Privacy Act (BIPA). That act covers Illinois and requires companies operating in the state to obtain explicit consent from individuals to collect their biometric data.

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” explained Rebecca Glenberg, staff attorney for the ACLU of Illinois.

“BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables.”

The case is ongoing but the two sides have reached a draft settlement. As part of the proposal, Clearview AI has agreed to restrict sales of its faceprint database to businesses and other private entities across the country.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” said Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

Illinois residents will receive the strongest protections. Clearview AI will be barred from sharing access to its database with any private company in the state, as well as with any Illinois state or local public entity, for five years.

Furthermore, Clearview AI plans to filter out images from Illinois. Since this may not catch every image, residents will be able to upload a photo of themselves and Clearview will block its software from returning matches for their face. Clearview AI will spend $50,000 on online adverts to raise awareness of this feature.
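One plausible way to implement such an opt-out is a result-side filter: opted-out faces are enrolled in a block list, and any search hit that matches a blocked face is suppressed before results are returned. The design below is an assumption for illustration, not Clearview’s actual mechanism:

```python
class OptOutRegistry:
    """Sketch of a result-side opt-out filter (assumed design): faces of
    people who opt out are enrolled in a block list, and any search result
    matching a blocked face is dropped before results are returned."""

    def __init__(self, matcher, threshold=0.9):
        self.blocked_embeddings = []
        self.matcher = matcher      # similarity function between embeddings
        self.threshold = threshold  # similarity above this counts as a match

    def opt_out(self, embedding):
        """Enrol an opted-out person's face embedding in the block list."""
        self.blocked_embeddings.append(embedding)

    def filter_results(self, results):
        """Drop results whose embedding matches any opted-out face."""
        return [
            r for r in results
            if not any(self.matcher(r["embedding"], blocked) >= self.threshold
                       for blocked in self.blocked_embeddings)
        ]

# Trivial matcher for the demo: exact vector equality scores 1.0.
registry = OptOutRegistry(lambda a, b: 1.0 if a == b else 0.0)
registry.opt_out([0.1, 0.2, 0.3])
results = [
    {"identity": "opted_out_resident", "embedding": [0.1, 0.2, 0.3]},
    {"identity": "someone_else", "embedding": [0.5, 0.5, 0.5]},
]
print(registry.filter_results(results))  # only "someone_else" survives
```

The design choice matters: filtering results still requires holding an embedding of the opted-out person, which is why settlements of this kind also pair the filter with deletion obligations.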

“This settlement is a big win for the most vulnerable people in Illinois,” commented Linda Xóchitl Tortolero, president and CEO of Mujeres Latinas en Acción, a Chicago-based non-profit.

“Much of our work centres on protecting privacy and ensuring the safety of survivors of domestic violence and sexual assault. Before this agreement, Clearview ignored the fact that biometric information can be misused to create dangerous situations and threats to their lives. Today that’s no longer the case.” 

The protections offered to American citizens outside Illinois aren’t quite as stringent.

Clearview AI is still able to sell access to its huge database to public entities, including law enforcement. In the wake of the US Capitol raid, the company boasted that police use of its facial recognition system increased 26 percent.

However, the company would be banned from selling access to its complete database to private companies. Clearview AI could still sell its software, but any purchaser would need to source their own database to train it.

“There is a battle being fought in courtrooms and statehouses across the country about who is going to control biometrics—Big Tech or the people being tracked by them—and this represents one of the biggest victories for consumers to date,” said J. Eli Wade-Scott from Edelson PC.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

The full draft settlement between Clearview AI and the ACLU can be found here.

(Photo by Maksim Chernishev on Unsplash)

Related: Ukraine harnesses Clearview AI to uncover assailants and identify the fallen

AI in the justice system threatens human rights and civil liberties
Wed, 30 Mar 2022
The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”

The concept of XAI (Explainable AI) is gaining traction and would help to address the problem of humans not always understanding how an AI has come to make a specific recommendation.

Having fully-informed humans make the final decisions would go a long way toward building trust in the technology—ensuring clear accountability and minimising errors.

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” says Baroness Hamwee.

“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities, and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.”

While there must be clear accountability for decision-makers in the justice system, the report also says governance needs reform.

The report notes there are more than 30 public bodies, initiatives, and programmes that play a role in the governance of new technologies in the application of the law. Without reform, where responsibility lies will be difficult to identify due to unclear roles and overlapping functions.

Societal discrimination also risks being exacerbated as bias in the data embedded in algorithms shapes increasingly critical decisions, from whom to offer a loan to whom to arrest and potentially imprison.

Across the pond, Democrats reintroduced their Algorithmic Accountability Act last month which seeks to hold tech firms accountable for bias in their algorithms.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” said Senator Ron Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

Biased AI-powered facial recognition systems have already led to wrongful arrests of people from marginalised communities. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, last year following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

Last year, UK Health Secretary Sajid Javid greenlit a series of AI-based projects aiming to tackle racial inequalities in the healthcare system. Among the greenlit projects is the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” said Javid. “We need to make sure the data we collect is representative of our nation.”

Stiffer penalties for AI misuse, a greater push for XAI, governance reform, and improving diversity in datasets all seem like great places to start to prevent AI from undermining human rights and civil liberties.

(Photo by Tingey Injury Law Firm on Unsplash)

Related: UN calls for ‘urgent’ action over AI’s risk to human rights

Ukraine harnesses Clearview AI to uncover assailants and identify the fallen
Mon, 14 Mar 2022
Ukraine is using Clearview AI’s facial recognition software to uncover Russian assailants and identify Ukrainians who’ve sadly lost their lives in the conflict.

The company’s chief executive, Hoan Ton-That, told Reuters that Ukraine’s defence ministry began using the software on Saturday.

Clearview AI’s facial recognition system is controversial but indisputably powerful—using billions of images scraped from the web to identify just about anyone. Ton-That says that Clearview has more than two billion images from Russian social media service VKontakte alone.

Reuters says that Ton-That sent a letter to Ukrainian authorities offering Clearview AI’s assistance. The letter said the software could help with identifying undercover Russian operatives, reuniting refugees with their families, and debunking misinformation.

Clearview AI’s software is reportedly effective even where there is facial damage or decomposition.

Ukraine is now reportedly using the facial recognition software for free, but the same offer has not been extended to Russia.

Russia has been widely condemned for its illegal invasion and increasingly brutal methods that are being investigated as likely war crimes. The Russian military has targeted not just the Ukrainian military but also civilians and even humanitarian corridors established to help people fleeing the conflict.

In response, many private companies have decided to halt or limit their operations in Russia and many are offering assistance to Ukraine in areas like cybersecurity and satellite internet access.

Clearview AI’s assistance could generate some positive PR for a company that is used to criticism.

Aside from its dystopian and invasive use of mass data scraped from across the web, the company has some potential far-right links.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

Global regulators have increasingly clamped down on Clearview AI.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk at the time.

However, Clearview AI has boasted that police use of its facial recognition system increased 26 percent in the wake of the US Capitol raid.

Clearview AI’s operations in Ukraine could prove to be a positive case study, but whether it’s enough to offset the privacy concerns remains to be seen.

(Photo by Daniele Franchi on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ukraine harnesses Clearview AI to uncover assailants and identify the fallen appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/03/14/ukraine-harnesses-clearview-ai-uncover-assailants-identify-fallen/feed/ 0
Clearview AI is close to obtaining a patent despite regulatory crackdown https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/ https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/#respond Mon, 06 Dec 2021 16:06:14 +0000 https://artificialintelligence-news.com/?p=11469 Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology. Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.... Read more »

The post Clearview AI is close to obtaining a patent despite regulatory crackdown appeared first on AI News.

]]>
Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology.

Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.

Clearview AI offers one of the most powerful facial recognition systems in the world. In the wake of the US Capitol raid, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

The controversy around Clearview AI is that – aside from some potential far-right links – its system uses over 10 billion photos scraped from online web profiles without the explicit consent of the individuals.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

‘Unreasonably intrusive and unfair’

Last month, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to no longer be being used or tested in the UK.

“UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with,” commented UK Information Commissioner Elizabeth Denham.

The UK’s decision was the result of a joint probe launched with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier in the month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

The first patent ‘around the use of large-scale internet data’

Major web companies like Facebook, Twitter, Google, YouTube, LinkedIn, and Venmo sent cease-and-desist letters to Clearview AI demanding the company stops scraping photos and data from their platforms.

Clearview AI founder Hoan Ton-That is unabashedly proud of the mass data-scraping system that his company has built and believes that it’s key to fighting criminal activities such as human trafficking. The company even says its application could be useful for finding out more about someone a user has just met, such as through dating or business.

“There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data,” Ton-That told Politico in an interview.

Rights groups have criticised the seemingly imminent decision to grant Clearview AI a patent as essentially patenting a violation of human rights law.

(Photo by Etienne Girardet on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo North America on 11-12 May 2022.

The post Clearview AI is close to obtaining a patent despite regulatory crackdown appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/feed/ 0
Clearview AI could be fined £17M from UK privacy watchdog https://www.artificialintelligence-news.com/2021/11/30/clearview-ai-could-be-fined-17m-from-uk-privacy-watchdog/ https://www.artificialintelligence-news.com/2021/11/30/clearview-ai-could-be-fined-17m-from-uk-privacy-watchdog/#respond Tue, 30 Nov 2021 11:38:42 +0000 https://artificialintelligence-news.com/?p=11446 Clearview AI is back in hot water, this time from the UK’s Information Commissioner’s Office (ICO). The controversial facial recognition giant has caught the attention of global privacy regulators and campaigners for its practice of scraping personal photos from the web for its system without explicit consent. Clearview AI is expected to have scraped over... Read more »

The post Clearview AI could be fined £17M from UK privacy watchdog appeared first on AI News.

]]>
Clearview AI is back in hot water, this time from the UK’s Information Commissioner’s Office (ICO).

The controversial facial recognition giant has caught the attention of global privacy regulators and campaigners for its practice of scraping personal photos from the web for its system without explicit consent.

Clearview AI is expected to have scraped over 10 billion photos.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

The UK’s ICO launched a joint probe with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier this month, Australia’s Information Commissioner Angelene Falk determined that “the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes.”

Falk ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

While we’ve had to wait a bit longer for the UK’s take, this week the ICO decided to impose a potential fine of just over £17 million on Clearview AI. The company must also delete the personal data currently held on British citizens and cease further processing.

Elizabeth Denham, UK Information Commissioner, said:

“I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is therefore only right that the ICO alerts people to the scale of this potential breach and the proposed action we’re taking.

UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with.

Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge.

We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously.”

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to no longer be being used or tested in the UK.

Following the US Capitol raid earlier this year, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

(Photo by ev on Unsplash)

Looking to revamp your digital transformation strategy? Learn more about the Digital Transformation Week event taking place virtually 30 November – 1 December 2021 and discover key strategies for making your digital efforts a success.

The post Clearview AI could be fined £17M from UK privacy watchdog appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/11/30/clearview-ai-could-be-fined-17m-from-uk-privacy-watchdog/feed/ 0