AI Face Recognition News | Face & Facial Recognition News | AI News
https://www.artificialintelligence-news.com/categories/ai-applications/ai-face-recognition/

Error-prone facial recognition leads to another wrongful arrest (7 August 2023)
https://www.artificialintelligence-news.com/2023/08/07/error-prone-facial-recognition-another-wrongful-arrest/

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) are African American. However, Woodruff’s case is notable as she is the first woman to report such an incident.

This latest incident marks the third known allegation of a wrongful arrest in the past three years attributed specifically to the Detroit Police Department and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

Clearview AI used by US police for almost 1M searches (28 March 2023)
https://www.artificialintelligence-news.com/2023/03/28/clearview-ai-us-police-almost-1m-searches/

Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
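In broad terms, systems like this convert each face image into a fixed-length numerical embedding and then search a pre-built index for the nearest embeddings. The sketch below is a minimal illustration of that general embedding-and-search approach; it is not Clearview AI’s implementation, and the embedding model and database here are random stand-ins.

```python
import numpy as np

EMBEDDING_DIM = 128
rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a real face-recognition model.
# In a deployed system each row would be the output of a neural network
# run over one indexed photo; here they are random unit vectors.
database = rng.normal(size=(1_000, EMBEDDING_DIM))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def search(probe: np.ndarray, top_k: int = 5) -> list[tuple[int, float]]:
    """Return indices and cosine similarities of the closest database rows."""
    probe = probe / np.linalg.norm(probe)  # unit length, so dot product = cosine
    scores = database @ probe
    best = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in best]

# Embedding of the uploaded photo (again a random stand-in vector).
probe_embedding = rng.normal(size=EMBEDDING_DIM)
for idx, score in search(probe_embedding):
    print(f"candidate #{idx}: similarity {score:.3f}")
```

At real scale, the linear scan would be replaced by an approximate nearest-neighbour index, and any candidate list is only a lead for human review rather than a confirmed identification.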

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to say openly when it is used, for its accuracy to be tested openly in court, and for the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

Italy’s facial recognition ban exempts law enforcement (15 November 2022)
https://www.artificialintelligence-news.com/2022/11/15/italy-facial-recognition-ban-exempts-law-enforcement/

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems that use biometric data until a specific law governing their use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipal authorities said they would begin using facial recognition technologies. Italy’s Data Protection Authority ordered them to explain which systems would be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car license plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)

UK fines Clearview AI £7.5M for scraping citizens’ data (23 May 2022)
https://www.artificialintelligence-news.com/2022/05/23/uk-fines-clearview-ai-7-5m-for-scraping-citizens-data/

Clearview AI has been fined £7.5 million by the UK’s privacy watchdog for scraping the online data of citizens without their explicit consent.

The controversial facial recognition provider has scraped billions of images of people across the web for its system. Understandably, it caught the attention of regulators and rights groups from around the world.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI. Today’s announcement suggests Clearview AI got off relatively lightly.

John Edwards, UK Information Commissioner, said:

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images.

The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.

That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”

The enforcement notice requires Clearview AI to stop obtaining and using the personal data of UK residents and to delete the data it already holds on them.

UK-Australia joint investigation

A joint investigation by the UK’s ICO and the Office of the Australian Information Commissioner (OAIC) was first launched in July 2020.

Angelene Falk, Australian Information Commissioner and Privacy Commissioner, commented:

“The joint investigation with the ICO has been highly valuable and demonstrates the benefits of data protection regulators collaborating to support effective and proactive regulation. 

The issues raised by Clearview AI’s business practices presented novel concerns in a number of jurisdictions. By partnering together, the OAIC and ICO have been able to contribute to an international position, and shape our global regulatory environment.”

Falk concluded that uploading an image to a social media site “does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes”.

The OAIC ordered Clearview AI to destroy the biometric data it collected of Australians.

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity,” added Edwards.

“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues. And it means working with regulators in Europe, which is why I am meeting them in Brussels this week so we can collaborate to tackle global privacy harms.”

(Photo by quan le on Unsplash)

Related: Clearview AI agrees to restrict sales of its faceprint database

Zoom receives backlash for emotion-detecting AI (19 May 2022)
https://www.artificialintelligence-news.com/2022/05/19/zoom-receives-backlash-for-emotion-detecting-ai/

Zoom has caused a stir following reports that it’s developing an AI system for detecting emotions.

The system, first reported by Protocol, is said to scan users’ faces and speech to determine their emotions.

Zoom detailed the system further in a blog post last month. The company says ‘Zoom IQ’ will be particularly useful for helping salespeople improve their pitches based on the emotions of call participants.

Naturally, the system is seen as rather dystopian and has received its fair share of criticism.

On Wednesday, over 25 rights groups sent a joint letter to Zoom CEO Eric Yuan. The letter urges Zoom to cease research on emotion-based AI.

The letter’s signatories include the American Civil Liberties Union (ACLU), Muslim Justice League, and Access Now.

One of the key concerns is that emotion-detecting AI could be used for hiring or financial decisions, such as whether to grant loans. That could exacerbate existing inequalities.

“Results are not intended to be used for employment decisions or other comparable decisions. All recommended ranges for metrics are based on publicly available research,” Zoom explained.

Zoom IQ tracks metrics including:

  • Talk-listen ratio
  • Talking speed
  • Filler words
  • Longest spiel (monologue)
  • Patience
  • Engaging questions
  • Next steps set up
  • Sentiment/Engagement analysis
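Zoom has not published how these metrics are calculated, but several of them can be approximated from a diarised call transcript. The following is a rough, hypothetical sketch of computing a talk-listen ratio and talking speed from per-speaker transcript segments; the Segment structure and speaker labels are invented for illustration and are not part of any Zoom API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # e.g. "rep" or "customer" in this toy example
    start: float   # seconds from the start of the call
    end: float
    text: str

def talk_listen_ratio(segments: list[Segment], rep: str = "rep") -> float:
    """Seconds the rep spoke divided by seconds everyone else spoke."""
    rep_time = sum(s.end - s.start for s in segments if s.speaker == rep)
    other_time = sum(s.end - s.start for s in segments if s.speaker != rep)
    return rep_time / other_time if other_time else float("inf")

def talking_speed(segments: list[Segment], rep: str = "rep") -> float:
    """Average words per minute across the rep's segments."""
    words = sum(len(s.text.split()) for s in segments if s.speaker == rep)
    minutes = sum(s.end - s.start for s in segments if s.speaker == rep) / 60
    return words / minutes if minutes else 0.0

call = [
    Segment("rep", 0.0, 30.0, "Thanks for joining, let me walk you through the product"),
    Segment("customer", 30.0, 45.0, "Sure, we mainly care about pricing"),
    Segment("rep", 45.0, 90.0, "Good question, pricing works like this"),
]
print(f"talk-listen ratio: {talk_listen_ratio(call):.2f}")
print(f"talking speed: {talking_speed(call):.0f} words per minute")
```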

Esha Bhandari, Deputy Director of the ACLU Speech, Privacy, and Technology Project, called emotion-detecting AI “creepy” and “a junk science”.

(Photo by iyus sugiharto on Unsplash)

Clearview AI agrees to restrict sales of its faceprint database (10 May 2022)
https://www.artificialintelligence-news.com/2022/05/10/clearview-ai-agrees-restrict-sales-faceprint-database/

Clearview AI has proposed to restrict sales of its faceprint database as part of a settlement with the American Civil Liberties Union (ACLU).

The controversial facial recognition firm caused a stir due to scraping billions of images of people across the web without their consent. As a result, the company has faced the ire of regulators around the world and numerous court cases.

One court case filed against Clearview AI was by the ACLU in 2020, claiming that it violated the Biometric Information Privacy Act (BIPA). That act covers Illinois and requires companies operating in the state to obtain explicit consent from individuals to collect their biometric data.

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” explained Rebecca Glenberg, staff attorney for the ACLU of Illinois.

“BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables.”

The case is ongoing but the two sides have reached a draft settlement. As part of the proposal, Clearview AI has agreed to restrict sales of its faceprint database to businesses and other private entities across the country.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” said Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

The strongest protections will be offered to residents of Illinois. Clearview AI will be banned from sharing access to its database with any private company in the state, as well as with any local public entity for five years.

Furthermore, Clearview AI plans to filter out images from Illinois. Since this may not catch every image, residents will be able to upload their photo and Clearview will block its software from returning matches for their face. Clearview AI will spend $50,000 on online adverts to raise awareness of this opt-out feature.

“This settlement is a big win for the most vulnerable people in Illinois,” commented Linda Xóchitl Tortolero, president and CEO of Mujeres Latinas en Acción, a Chicago-based non-profit.

“Much of our work centres on protecting privacy and ensuring the safety of survivors of domestic violence and sexual assault. Before this agreement, Clearview ignored the fact that biometric information can be misused to create dangerous situations and threats to their lives. Today that’s no longer the case.” 

The protections offered to American citizens outside Illinois aren’t quite as stringent.

Clearview AI is still able to sell access to its huge database to public entities, including law enforcement. In the wake of the US Capitol raid, the company boasted that police use of its facial recognition system increased 26 percent.

However, the company would be banned from selling access to its complete database to private companies. Clearview AI could still sell its software, but any purchaser would need to source their own database to train it.

“There is a battle being fought in courtrooms and statehouses across the country about who is going to control biometrics—Big Tech or the people being tracked by them—and this represents one of the biggest victories for consumers to date,” said J. Eli Wade-Scott from Edelson PC.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

The full draft settlement between Clearview AI and the ACLU can be found here.

(Photo by Maksim Chernishev on Unsplash)

Related: Ukraine harnesses Clearview AI to uncover assailants and identify the fallen

Ukraine harnesses Clearview AI to uncover assailants and identify the fallen (14 March 2022)
https://www.artificialintelligence-news.com/2022/03/14/ukraine-harnesses-clearview-ai-uncover-assailants-identify-fallen/

Ukraine is using Clearview AI’s facial recognition software to uncover Russian assailants and identify Ukrainians who’ve sadly lost their lives in the conflict.

The company’s chief executive, Hoan Ton-That, told Reuters that Ukraine’s defence ministry began using the software on Saturday.

Clearview AI’s facial recognition system is controversial but indisputably powerful—using billions of images scraped from the web to identify just about anyone. Ton-That says that Clearview has more than two billion images from Russian social media service VKontakte alone.

Reuters says that Ton-That sent a letter to Ukrainian authorities offering Clearview AI’s assistance. The letter said the software could help with identifying undercover Russian operatives, reuniting refugees with their families, and debunking misinformation.

Clearview AI’s software is reportedly effective even where there is facial damage or decomposition.

Ukraine is now reportedly using the facial recognition software for free, but the same offer has not been extended to Russia.

Russia has been widely condemned for its illegal invasion and increasingly brutal methods that are being investigated as likely war crimes. The Russian military has targeted not just the Ukrainian military but also civilians and even humanitarian corridors established to help people fleeing the conflict.

In response, many private companies have decided to halt or limit their operations in Russia and many are offering assistance to Ukraine in areas like cybersecurity and satellite internet access.

Clearview AI’s assistance could generate some positive PR for a company that is used to criticism.

Aside from its dystopian and invasive use of mass data scraped from across the web, the company has some potential far-right links.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

Global regulators have increasingly clamped down on Clearview AI.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk at the time.

However, Clearview AI has boasted that police use of its facial recognition system increased 26 percent in the wake of the US Capitol raid.

Clearview AI’s operations in Ukraine could prove to be a positive case study, but whether it’s enough to offset the privacy concerns remains to be seen.

(Photo by Daniele Franchi on Unsplash)

The EU’s AI rules will likely take over a year to be agreed (17 February 2022)
https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, enable social scoring, or conduct real-time biometric identification in public spaces for law enforcement.

Unacceptable risk systems will face a blanket ban from deployment in the EU while limited risk will require minimal oversight.

Organisations deploying high-risk AI systems would be required to have things like:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.
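For organisations trying to work out where their systems would sit under the draft, the tiers and obligations above effectively form a lookup from system category to required controls. The snippet below is a simplified, unofficial sketch of how that mapping might be encoded in an internal compliance checklist; the category names and assignments follow the summary above, not the legal text itself.

```python
from enum import Enum

class Risk(Enum):
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Simplified, illustrative assignments based on the summary above.
SYSTEM_RISK = {
    "chatbot": Risk.LIMITED,
    "spam_filter": Risk.LIMITED,
    "creditworthiness_scoring": Risk.HIGH,
    "recruitment_screening": Risk.HIGH,
    "social_scoring": Risk.UNACCEPTABLE,
    "realtime_public_biometric_id": Risk.UNACCEPTABLE,
}

HIGH_RISK_CONTROLS = [
    "human oversight",
    "risk-management system",
    "record keeping and logging",
    "transparency to users",
    "data governance and management",
    "conformity assessment",
    "government registration",
]

def obligations(system: str) -> list[str]:
    """Return a rough checklist for a system category, or flag it as banned."""
    risk = SYSTEM_RISK[system]
    if risk is Risk.UNACCEPTABLE:
        return ["deployment prohibited in the EU"]
    if risk is Risk.HIGH:
        return HIGH_RISK_CONTROLS
    return ["minimal oversight"]

print(obligations("recruitment_screening"))
```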

However, the cumbersome nature of EU lawmaking – requiring agreement among member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than a year longer to agree. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Clearview AI is close to obtaining a patent despite regulatory crackdown (6 December 2021)
https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/

Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology.

Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.

Clearview AI offers one of the most powerful facial recognition systems in the world. In the wake of the US Capitol raid, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

The controversy around Clearview AI is that – aside from some potential far-right links – its system uses over 10 billion photos scraped from online web profiles without the explicit consent of the individuals.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

‘Unreasonably intrusive and unfair’

Last month, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is reportedly no longer being used or tested in the UK.

“UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with,” commented UK Information Commissioner Elizabeth Denham.

The UK’s decision was the result of a joint probe launched with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier in the month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

The first patent ‘around the use of large-scale internet data’

Major web companies like Facebook, Twitter, Google, YouTube, LinkedIn, and Venmo sent cease-and-desist letters to Clearview AI demanding the company stops scraping photos and data from their platforms.

Clearview AI founder Hoan Ton-That is unabashedly proud of the mass data-scraping system that his company has built and believes that it’s key to fighting criminal activities such as human trafficking. The company even says its application could help users find out more about a person they’ve just met, such as through dating or business.

“There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data,” Ton-That told Politico in an interview.

Rights groups have criticised the seemingly imminent decision to grant Clearview AI a patent as essentially patenting a violation of human rights law.

(Photo by Etienne Girardet on Unsplash)

Clearview AI could be fined £17M from UK privacy watchdog (30 November 2021)
https://www.artificialintelligence-news.com/2021/11/30/clearview-ai-could-be-fined-17m-from-uk-privacy-watchdog/

Clearview AI is back in hot water, this time from the UK’s Information Commissioner’s Office (ICO).

The controversial facial recognition giant has caught the attention of global privacy regulators and campaigners for its practice of scraping personal photos from the web for its system without explicit consent.

Clearview AI is expected to have scraped over 10 billion photos.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

The UK’s ICO launched a joint probe with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier this month, Australia’s Information Commissioner Angelene Falk determined that “the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes.”

Falk ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

While we’ve had to wait a bit longer for the UK’s take, this week the ICO decided to impose a potential fine of just over £17 million on Clearview AI. The company must also delete the personal data currently held on British citizens and cease further processing.

Elizabeth Denham, UK Information Commissioner, said:

“I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is therefore only right that the ICO alerts people to the scale of this potential breach and the proposed action we’re taking.

UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with.

Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge.

We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously.”

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is reportedly no longer being used or tested in the UK.

Following the US Capitol raid earlier this year, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

(Photo by ev on Unsplash)
