surveillance Archives - AI News
https://www.artificialintelligence-news.com/tag/surveillance/

Error-prone facial recognition leads to another wrongful arrest (7 August 2023)
https://www.artificialintelligence-news.com/2023/08/07/error-prone-facial-recognition-another-wrongful-arrest/

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) have been African American. However, Woodruff’s case is notable as she is the first woman to report such an incident happening to her.

This latest incident marks the third known allegation of a wrongful arrest in the past three years attributed specifically to the Detroit Police Department and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.
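The disparity described above is usually quantified by comparing false-match rates across demographic groups in an evaluation set. Below is a minimal sketch of that calculation; the group labels and records are entirely hypothetical and not drawn from any particular study:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, system_said_match, truly_same_person)
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_match_rates(records):
    """False-match rate per group: incorrect matches divided by non-matching pairs."""
    false_matches, non_matches = defaultdict(int), defaultdict(int)
    for group, said_match, same_person in records:
        if not same_person:                 # only non-matching pairs can produce false matches
            non_matches[group] += 1
            false_matches[group] += said_match
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

print(false_match_rates(records))  # a higher rate for one group signals disparate impact
```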

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

European Parliament adopts AI Act position (14 June 2023)
https://www.artificialintelligence-news.com/2023/06/14/european-parliament-adopts-ai-act-position/

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, emphasised that it is the right time to regulate AI due to its profound impact.

Dr Ventsislav Ivanov, AI Expert and Lecturer at Oxford Business College, said: “Regulating artificial intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives.

“As the chaos and controversy accompanying this vote show, this will not be an easy feat. Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra.”

The adoption of the AI Act faced uncertainty as a political deal crumbled, leading to amendments from various political groups.

One of the main points of contention was the use of Remote Biometric Identification, with liberal and progressive lawmakers seeking to ban its real-time use except for ex-post investigations of serious crimes. The centre-right European People’s Party attempted to introduce exceptions for exceptional circumstances like terrorist attacks or missing persons, but their efforts were unsuccessful.

A tiered approach for AI models will be introduced with the act, including stricter regulations for foundation models and generative AI.

The European Parliament intends to introduce mandatory labelling for AI-generated content and mandate the disclosure of training data covered by copyright. This move comes as generative AI, exemplified by ChatGPT, gained widespread attention—prompting the European Commission to launch outreach initiatives to foster international alignment on AI rules.

MEPs made several significant changes to the AI Act, including expanding the list of prohibited practices to include subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software.

MEPs also introduced an extra layer for high-risk AI applications and extended the list of high-risk areas and use cases to cover law enforcement, migration control, and the recommender systems of prominent social media platforms.

Robin Röhm, CEO of Apheris, commented: “The passing of the plenary vote on the EU’s AI Act marks a significant milestone in AI regulation, but raises more questions than it answers. It will make it more difficult for start-ups to compete and means that investors are less likely to deploy capital into companies operating in the EU.

“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive.”

With the European Parliament’s adoption of its position on the AI Act, interinstitutional negotiations will commence with the EU Council of Ministers and the European Commission. The negotiations – known as trilogues – will address key points of contention such as high-risk categories, fundamental rights, and foundation models.

Spain, which assumes the rotating presidency of the Council in July, has made finalising the AI law its top digital priority. The aim is to reach a deal by November, with multiple trilogues planned as a backup.

The negotiations are expected to intensify in the coming months as the EU seeks to establish comprehensive regulations for AI, balancing innovation and governance while ensuring the protection of fundamental rights.

“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this,” concludes Röhm.

(Image Credit: European Union 2023 / Mathieu Cugnot)

Similar: UK will host global AI summit to address potential risks

EU committees green-light the AI Act (11 May 2023)
https://www.artificialintelligence-news.com/2023/05/11/eu-committees-green-light-ai-act/

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules follow a risk-based approach, establishing obligations for providers and users depending on the level of risk the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.

MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as real-time remote biometric identification systems in publicly accessible spaces, post-remote biometric identification systems (except for law enforcement purposes), biometric categorisation systems using sensitive characteristics, predictive policing systems, emotion recognition systems in law enforcement, border management, workplace, and educational institutions, and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

Clearview AI used by US police for almost 1M searches (28 March 2023)
https://www.artificialintelligence-news.com/2023/03/28/clearview-ai-us-police-almost-1m-searches/

Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
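The paragraph above describes the core mechanic behind systems of this kind: the probe photo is turned into an embedding vector and ranked against a gallery of embeddings built from collected images. Below is a minimal, generic sketch of that nearest-neighbour lookup; the random vectors, dimensions, and threshold are placeholders, and this does not represent Clearview AI’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in gallery: one unit-length embedding per collected photo.
gallery = rng.normal(size=(1000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe_embedding, gallery, top_k=5, threshold=0.6):
    """Rank gallery faces by cosine similarity to the probe and keep plausible matches."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = gallery @ probe                      # cosine similarity per gallery face
    best = np.argsort(scores)[::-1][:top_k]       # highest-scoring candidates first
    return [(int(i), float(scores[i])) for i in best if scores[i] >= threshold]

probe = rng.normal(size=128)
# With random vectors this is usually empty; real systems compare learned face embeddings.
print(search(probe, gallery))
```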

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to openly say when it is used, and for its accuracy to be openly tested in court. They want the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

Axon’s AI ethics board resign after TASER drone announcement (7 June 2022)
https://www.artificialintelligence-news.com/2022/06/07/axon-ai-ethics-board-resign-taser-drone-announcement/

The majority of Axon’s AI ethics board have resigned after the company announced that it’s developing taser-equipped drones.

In response to yet another shooting in a US school, Axon founder and CEO Rick Smith began thinking about how the company could help put a stop to the all too regular occurrence.

The shooting kicked off the usual debate over whether stricter gun laws are needed. Unfortunately, we all know nothing is likely to really change and we’ll be back to rehashing the same arguments the next time more children lose their lives.

“In the aftermath of these events, we get stuck in fruitless debates. We need new and better solutions,” Smith said in a statement.

Few would disagree with that statement but Smith’s proposed solution has caused quite a stir.

“We have elected to publicly engage communities and stakeholders, and develop a remotely operated, non-lethal drone system that we believe will be a more effective, immediate, humane, and ethical option to protect innocent people,” Smith explained.

The TASER drone system would use real-time security feeds supplied through a partnership with Fusus.

“Trying to find and stop an active shooter based on the telephone game connecting victim 911 callers is antiquated,” says Chris Lindenau, CEO of Fusus. “Fusus brings the ability to share any security camera to first responders, providing known locations and visual live feeds regardless of which security cameras they use.

“This network of cameras, with human and AI monitoring, together with panic buttons and other local communication tools, can detect and ID a threat before a shot is fired and dramatically improve response times and situational awareness.”

Nine of the 12 members of Axon’s AI ethics board resigned following the announcement and issued a statement explaining their decision.

“Only a few weeks ago, a majority of this board – by an 8-4 vote – recommended that Axon not proceed with a narrow pilot study aimed at vetting the company’s concept of Taser-equipped drones,” wrote the former board members.

“In that limited conception, the Taser-equipped drone was to be used only in situations in which it might avoid a police officer using a firearm, thereby potentially saving a life.”

“We understood the company might proceed despite our recommendation not to, and so we were firm about the sorts of controls that would be needed to conduct a responsible pilot should the company proceed. We just were beginning to produce a public report on Axon’s proposal and our deliberations.”

However, Smith overruled the ethics board and made the announcement regardless.

The board members go on to explain how they’ve been firm against Axon playing a role in supplying real-time, persistent surveillance capabilities that “undoubtedly will harm communities of color and others who are overpoliced, and likely well beyond that.”

“The Taser-equipped drone also has no realistic chance of solving the mass shooting problem Axon now is prescribing it for, only distracting society from real solutions to a tragic problem.”

Over the years, the board members believe they have been able to steer Axon away from implementing draconian facial recognition capabilities and ensure the withdrawal of a software tool to scrape data from social media websites. However, the members claim Axon has more recently rejected their advice on numerous occasions.

“We all feel the desperate need to do something to address our epidemic of mass shootings. But Axon’s proposal to elevate a tech-and-policing response when there are far less harmful alternatives, is not the solution,” explained the board members.

“Significantly for us, it bypassed Axon’s commitment to consult with the company’s own AI Ethics Board.”

Clearview AI agrees to restrict sales of its faceprint database (10 May 2022)
https://www.artificialintelligence-news.com/2022/05/10/clearview-ai-agrees-restrict-sales-faceprint-database/

Clearview AI has proposed to restrict sales of its faceprint database as part of a settlement with the American Civil Liberties Union (ACLU).

The controversial facial recognition firm caused a stir due to scraping billions of images of people across the web without their consent. As a result, the company has faced the ire of regulators around the world and numerous court cases.

One such court case was filed against Clearview AI by the ACLU in 2020, claiming that it violated the Biometric Information Privacy Act (BIPA). That act covers Illinois and requires companies operating in the state to obtain explicit consent from individuals to collect their biometric data.

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” explained Rebecca Glenberg, staff attorney for the ACLU of Illinois.

“BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables.”

The case is ongoing but the two sides have reached a draft settlement. As part of the proposal, Clearview AI has agreed to restrict sales of its faceprint database to businesses and other private entities across the country.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” said Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

The strongest protections will be offered to residents of Illinois. Clearview AI will be banned from granting access to its database to any private company in the state, as well as to any local public entity, for five years.

Furthermore, Clearview AI plans to filter out images from Illinois. Because this may not catch every image, residents will also be able to upload their photo and have Clearview block its software from returning matches for their face. Clearview AI will spend $50,000 on online adverts to raise awareness of this feature.
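As a rough sketch of how such an opt-out could work in principle, the function below suppresses any candidate match whose embedding closely resembles one enrolled by an opted-out resident. The threshold and embedding handling are assumptions for illustration only; this does not describe Clearview AI’s actual implementation:

```python
import numpy as np

OPT_OUT_THRESHOLD = 0.9  # assumed similarity above which a result is treated as opted out

def filter_results(candidates, opt_out_embeddings, threshold=OPT_OUT_THRESHOLD):
    """Drop candidate matches that resemble an enrolled, opted-out face.

    Candidate embeddings and opt_out_embeddings are assumed to be unit-length
    vectors, so a dot product gives cosine similarity.
    """
    kept = []
    for embedding, metadata in candidates:
        similarities = opt_out_embeddings @ embedding
        if similarities.size == 0 or similarities.max() < threshold:
            kept.append((embedding, metadata))  # no opted-out face is close enough
    return kept
```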

“This settlement is a big win for the most vulnerable people in Illinois,” commented Linda Xóchitl Tortolero, president and CEO of Mujeres Latinas en Acción, a Chicago-based non-profit.

“Much of our work centres on protecting privacy and ensuring the safety of survivors of domestic violence and sexual assault. Before this agreement, Clearview ignored the fact that biometric information can be misused to create dangerous situations and threats to their lives. Today that’s no longer the case.” 

The protections offered to American citizens outside Illinois aren’t quite as stringent.

Clearview AI is still able to sell access to its huge database to public entities, including law enforcement. In the wake of the US Capitol raid, the company boasted that police use of its facial recognition system increased 26 percent.

However, the company would be banned from selling access to its complete database to private companies. Clearview AI could still sell its software, but any purchaser would need to source their own database to train it.

“There is a battle being fought in courtrooms and statehouses across the country about who is going to control biometrics—Big Tech or the people being tracked by them—and this represents one of the biggest victories for consumers to date,” said J. Eli Wade-Scott from Edelson PC.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

The full draft settlement between Clearview AI and the ACLU can be found here.

(Photo by Maksim Chernishev on Unsplash)

Related: Ukraine harnesses Clearview AI to uncover assailants and identify the fallen

AI in the justice system threatens human rights and civil liberties (30 March 2022)
https://www.artificialintelligence-news.com/2022/03/30/ai-in-the-justice-system-threatens-human-rights-and-civil-liberties/

The House of Lords Justice and Home Affairs Committee has determined the proliferation of AI in the justice system is a threat to human rights and civil liberties.

A report published by the committee today highlights the rapid pace of AI developments that are largely happening out of the public eye. Alarmingly, there seems to be a focus on rushing the technology into production with little concern about its potential negative impact.

Baroness Hamwee, Chair of the Justice and Home Affairs Committee, said:

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers.

Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens.

We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision-makers, knowing how to question the tools they are using and how to challenge their outcome.”

The concept of XAI (Explainable AI) is gaining traction and would help to address the problem of humans not always understanding how an AI has come to make a specific recommendation.

Having fully-informed humans make the final decisions would go a long way toward building trust in the technology—ensuring clear accountability and minimising errors.
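As a toy illustration of the kind of explanation XAI aims to surface, the sketch below splits a simple linear risk score into per-feature contributions so a human reviewer can see, and challenge, what drove the recommendation. The feature names and weights are invented for illustration and do not reflect any real justice-system tool:

```python
# Invented weights for a toy linear risk model; real tools are far more complex.
WEIGHTS = {"prior_convictions": 0.8, "age": -0.03, "months_since_last_offence": -0.05}

def explain_score(features, weights=WEIGHTS):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, contributions = explain_score(
    {"prior_convictions": 2, "age": 35, "months_since_last_offence": 18}
)
print(f"score={score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")  # largest drivers listed first for the reviewer
```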

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” says Baroness Hamwee.

“Without proper safeguards, advanced technologies may affect human rights, undermine the fairness of trials, worsen inequalities, and weaken the rule of law. The tools available must be fit for purpose, and not be used unchecked.”

While there must be clear accountability for decision-makers in the justice system, the report also says governance needs reform.

The report notes there are more than 30 public bodies, initiatives, and programmes that play a role in the governance of new technologies in the application of the law. Without reform, where responsibility lies will be difficult to identify due to unclear roles and overlapping functions.

Societal discrimination also risks being exacerbated as bias in training data becomes embedded in algorithms used for increasingly critical decisions, from who is offered a loan all the way to who is arrested and potentially even imprisoned.

Across the pond, Democrats reintroduced their Algorithmic Accountability Act last month which seeks to hold tech firms accountable for bias in their algorithms.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” said Senator Ron Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

Biased AI-powered facial recognition systems have already led to wrongful arrests of people from marginalised communities. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, last year following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

Last year, UK Health Secretary Sajid Javid greenlit a series of AI-based projects aiming to tackle racial inequalities in the healthcare system. Among the greenlit projects is the creation of new standards for health inclusivity to improve the representation of ethnic minorities in datasets used by the NHS.

“If we only train our AI using mostly data from white patients it cannot help our population as a whole,” said Javid. “We need to make sure the data we collect is representative of our nation.”

Stiffer penalties for AI misuse, a greater push for XAI, governance reform, and improving diversity in datasets all seem like great places to start to prevent AI from undermining human rights and civil liberties.

(Photo by Tingey Injury Law Firm on Unsplash)

Related: UN calls for ‘urgent’ action over AI’s risk to human rights

Ukraine harnesses Clearview AI to uncover assailants and identify the fallen (14 March 2022)
https://www.artificialintelligence-news.com/2022/03/14/ukraine-harnesses-clearview-ai-uncover-assailants-identify-fallen/

Ukraine is using Clearview AI’s facial recognition software to uncover Russian assailants and identify Ukrainians who’ve sadly lost their lives in the conflict.

The company’s chief executive, Hoan Ton-That, told Reuters that Ukraine’s defence ministry began using the software on Saturday.

Clearview AI’s facial recognition system is controversial but indisputably powerful—using billions of images scraped from the web to identify just about anyone. Ton-That says that Clearview has more than two billion images from Russian social media service VKontakte alone.

Reuters says that Ton-That sent a letter to Ukrainian authorities offering Clearview AI’s assistance. The letter said the software could help with identifying undercover Russian operatives, reuniting refugees with their families, and debunking misinformation.

Clearview AI’s software is reportedly effective even where there is facial damage or decomposition.

Ukraine is now reportedly using the facial recognition software for free, but the same offer has not been extended to Russia.

Russia has been widely condemned for its illegal invasion and increasingly brutal methods that are being investigated as likely war crimes. The Russian military has targeted not just the Ukrainian military but also civilians and even humanitarian corridors established to help people fleeing the conflict.

In response, many private companies have decided to halt or limit their operations in Russia and many are offering assistance to Ukraine in areas like cybersecurity and satellite internet access.

Clearview AI’s assistance could generate some positive PR for a company that is used to criticism.

Aside from its dystopian and invasive use of mass data scraped from across the web, the company has some potential far-right links.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

Global regulators have increasingly clamped down on Clearview AI.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk at the time.

However, Clearview AI has boasted that police use of its facial recognition system increased 26 percent in the wake of the US Capitol raid.

Clearview AI’s operations in Ukraine could prove to be a positive case study, but whether it’s enough to offset the privacy concerns remains to be seen.

(Photo by Daniele Franchi on Unsplash)

Clearview AI is close to obtaining a patent despite regulatory crackdown (6 December 2021)
https://www.artificialintelligence-news.com/2021/12/06/clearview-ai-close-obtaining-patent-despite-regulatory-crackdown/

Clearview AI is reportedly just a bank transfer away from receiving a US patent for its controversial facial recognition technology.

Politico reports that Clearview AI has been sent a “notice of allowance” by the US Patent and Trademark Office. The notice means that it will be granted the patent once it pays the administration fees.

Clearview AI offers one of the most powerful facial recognition systems in the world. In the wake of the US Capitol raid, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

The controversy around Clearview AI is that – aside from some potential far-right links – its system uses over 10 billion photos scraped from online web profiles without the explicit consent of the individuals.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

‘Unreasonably intrusive and unfair’

Last month, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to be no longer used or tested in the UK.

“UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with,” commented UK Information Commissioner Elizabeth Denham.

The UK’s decision was the result of a joint probe launched with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier in the month, the OAIC reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said. “It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

The first patent ‘around the use of large-scale internet data’

Major web companies like Facebook, Twitter, Google, YouTube, LinkedIn, and Venmo sent cease-and-desist letters to Clearview AI demanding the company stops scraping photos and data from their platforms.

Clearview AI founder Hoan Ton-That is unabashedly proud of the mass data-scraping system his company has built and believes it is key to fighting criminal activities such as human trafficking. The company even says its application could be useful for people wanting to find out more about someone they have just met, such as through dating or business.

“There are other facial recognition patents out there — that are methods of doing it — but this is the first one around the use of large-scale internet data,” Ton-That told Politico in an interview.

Rights groups have criticised the seemingly imminent decision to grant Clearview AI a patent as essentially patenting a violation of human rights law.

(Photo by Etienne Girardet on Unsplash)

Clearview AI could be fined £17M from UK privacy watchdog (30 November 2021)
https://www.artificialintelligence-news.com/2021/11/30/clearview-ai-could-be-fined-17m-from-uk-privacy-watchdog/

Clearview AI is back in hot water, this time from the UK’s Information Commissioner’s Office (ICO).

The controversial facial recognition giant has caught the attention of global privacy regulators and campaigners for its practice of scraping personal photos from the web for its system without explicit consent.

Clearview AI is estimated to have scraped over 10 billion photos.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

The UK’s ICO launched a joint probe with the Office of the Australian Information Commissioner (OAIC) into Clearview AI’s practices.

Earlier this month, Australia’s Information Commissioner Angelene Falk determined that “the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes.”

Falk ordered Clearview AI to destroy the biometric data that it collected on Australians and cease further collection.

While we’ve had to wait a bit longer for the UK’s take, this week the ICO decided to impose a potential fine of just over £17 million on Clearview AI. The company must also delete the personal data currently held on British citizens and cease further processing.

Elizabeth Denham, UK Information Commissioner, said:

“I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is therefore only right that the ICO alerts people to the scale of this potential breach and the proposed action we’re taking.

UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with.

Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge.

We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously.”

Leaked documents suggest Clearview AI’s system was tested by UK authorities including the Metropolitan Police, Ministry of Defence, the National Crime Agency, and a number of police constabularies including Surrey, North Yorkshire, Suffolk, and Northamptonshire. However, the system is said to be no longer used or tested in the UK.

Following the US Capitol raid earlier this year, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

(Photo by ev on Unsplash)
