AI Surveillance News | Latest Surveillance News | AI News
https://www.artificialintelligence-news.com/categories/ai-surveillance/

Error-prone facial recognition leads to another wrongful arrest
https://www.artificialintelligence-news.com/2023/08/07/error-prone-facial-recognition-another-wrongful-arrest/
Mon, 07 Aug 2023 10:43:46 +0000

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known to the American Civil Liberties Union (ACLU) have been African American. However, Woodruff’s case is notable as she is the first woman to report such an incident.

This latest incident is the third known wrongful arrest allegation in the past three years attributed specifically to the Detroit Police Department and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.
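One common way such studies quantify this disparity is by comparing false match rates across demographic groups. Below is a minimal sketch of that comparison — the scores, group names, and threshold are hypothetical and purely illustrative, not drawn from any real evaluation:

```python
# Sketch: compare false match rate (FMR) across demographic groups.
# A "false match" is a comparison of two *different* people that the
# system nevertheless scores above its match threshold.

def false_match_rate(scores, threshold):
    """Fraction of impostor (different-person) comparisons scored as matches."""
    if not scores:
        return 0.0
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical impostor similarity scores, grouped by subgroup.
impostor_scores = {
    "group_a": [0.31, 0.42, 0.55, 0.71, 0.38, 0.60],
    "group_b": [0.25, 0.33, 0.40, 0.29, 0.47, 0.36],
}

THRESHOLD = 0.5  # illustrative operating point only

for group, scores in impostor_scores.items():
    print(f"{group}: FMR = {false_match_rate(scores, THRESHOLD):.2f}")
```

A gap between the two rates at the same threshold is exactly the kind of demographic differential the studies describe: one group faces more false matches, and therefore more risk of misidentification, at the system's chosen operating point.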

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

European Parliament adopts AI Act position
https://www.artificialintelligence-news.com/2023/06/14/european-parliament-adopts-ai-act-position/
Wed, 14 Jun 2023 14:27:26 +0000

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, emphasised that it is the right time to regulate AI due to its profound impact.

Dr Ventsislav Ivanov, AI Expert and Lecturer at Oxford Business College, said: “Regulating artificial intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives.

“As the chaos and controversy accompanying this vote show, this will not be an easy feat. Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra.”

The adoption of the AI Act faced uncertainty as a political deal crumbled, leading to amendments from various political groups.

One of the main points of contention was the use of Remote Biometric Identification, with liberal and progressive lawmakers seeking to ban its real-time use except for ex-post investigations of serious crimes. The centre-right European People’s Party attempted to introduce exceptions for exceptional circumstances like terrorist attacks or missing persons, but their efforts were unsuccessful.

A tiered approach for AI models will be introduced with the act, including stricter regulations for foundation models and generative AI.

The European Parliament intends to introduce mandatory labelling for AI-generated content and mandate the disclosure of training data covered by copyright. This move comes as generative AI, exemplified by ChatGPT, gained widespread attention—prompting the European Commission to launch outreach initiatives to foster international alignment on AI rules.

MEPs made several significant changes to the AI Act, including expanding the list of prohibited practices to include subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software.

MEPs introduced an extra layer for high-risk AI applications and extended the list of high-risk areas and use cases to cover law enforcement, migration control, and the recommender systems of prominent social media platforms.

Robin Röhm, CEO of Apheris, commented: “The passing of the plenary vote on the EU’s AI Act marks a significant milestone in AI regulation, but raises more questions than it answers. It will make it more difficult for start-ups to compete and means that investors are less likely to deploy capital into companies operating in the EU.

“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive.”

With the European Parliament’s adoption of its position on the AI Act, interinstitutional negotiations will commence with the EU Council of Ministers and the European Commission. The negotiations – known as trilogues – will address key points of contention such as high-risk categories, fundamental rights, and foundation models.

Spain, which assumes the rotating presidency of the Council in July, has made finalising the AI law its top digital priority. The aim is to reach a deal by November, with multiple trilogues planned as a backup.

The negotiations are expected to intensify in the coming months as the EU seeks to establish comprehensive regulations for AI, balancing innovation and governance while ensuring the protection of fundamental rights.

“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this,” concludes Röhm.

(Image Credit: European Union 2023 / Mathieu Cugnot)

Similar: UK will host global AI summit to address potential risks


AI leaders warn about ‘risk of extinction’ in open letter
https://www.artificialintelligence-news.com/2023/05/31/ai-leaders-warn-about-risk-of-extinction-in-open-letter/
Wed, 31 May 2023 08:33:10 +0000

The Center for AI Safety (CAIS) recently issued a statement signed by prominent figures in AI warning about the potential risks posed by the technology to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement.

Signatories of the statement include renowned researchers and Turing Award winners like Geoffrey Hinton and Yoshua Bengio, as well as executives from OpenAI and DeepMind, such as Sam Altman, Ilya Sutskever, and Demis Hassabis.

The CAIS letter aims to spark discussions about the various urgent risks associated with AI and has attracted both support and criticism across the wider industry. It follows another open letter signed by Elon Musk, Steve Wozniak, and over 1,000 other experts who called for a halt to “out-of-control” AI development.

Given its brevity, the latest statement does not provide specific details about the definition of AI or offer concrete strategies for mitigating the risks. However, CAIS clarified in a press release that its goal is to establish safeguards and institutions to ensure that AI risks are effectively managed.

OpenAI CEO Sam Altman has been actively engaging with global leaders and advocating for AI regulations. During a recent Senate appearance, Altman repeatedly called on lawmakers to heavily regulate the industry. The CAIS statement aligns with his efforts to raise awareness about the dangers of AI.

While the open letter has garnered attention, some experts in AI ethics have criticised the trend of issuing such statements.

Dr Sasha Luccioni, a machine-learning research scientist, suggests that mentioning hypothetical risks of AI alongside tangible risks like pandemics and climate change enhances its credibility while diverting attention from immediate issues like bias, legal challenges, and consent.

Daniel Jeffries, a writer and futurist, argues that discussing AI risks has become a status game in which individuals jump on the bandwagon without incurring any real costs.

Critics believe that signing open letters about future threats allows those responsible for current AI harms to alleviate their guilt while neglecting the ethical problems associated with AI technologies already in use.

However, CAIS – a San Francisco-based nonprofit – remains focused on reducing societal-scale risks from AI through technical research and advocacy. The organisation was co-founded by experts with backgrounds in computer science and a keen interest in AI safety.

While some researchers fear the emergence of a superintelligent AI that could surpass human capabilities and pose an existential threat, others argue that signing open letters about hypothetical doomsday scenarios distracts from the existing ethical dilemmas surrounding AI. They emphasise the need to address the real problems AI poses today, such as surveillance, biased algorithms, and the infringement of human rights.

Balancing the advancement of AI with responsible implementation and regulation remains a crucial task for researchers, policymakers, and industry leaders alike.

(Photo by Apolo Photographer on Unsplash)

Related: OpenAI CEO: AI regulation ‘is essential’

EU committees green-light the AI Act
https://www.artificialintelligence-news.com/2023/05/11/eu-committees-green-light-ai-act/
Thu, 11 May 2023 12:09:27 +0000

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules follow a risk-based approach, establishing obligations for providers and users according to the level of risk the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.
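The tiered structure can be pictured as a simple mapping from risk level to obligation band. The sketch below is a loose illustration only: the tier names follow the Act's commonly described four levels (unacceptable, high, limited, minimal), and the obligations shown are paraphrased summaries, not legal text:

```python
# Loose sketch of the AI Act's risk-based structure (paraphrased, not legal text).
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring, manipulative techniques)",
    "high": "strict obligations: risk management, transparency, human oversight",
    "limited": "transparency duties (e.g. disclosing that content is AI-generated)",
    "minimal": "no additional obligations",
}

def obligations_for(tier):
    """Look up the obligation band for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier}")
    return RISK_TIERS[tier]

print(obligations_for("high"))
```

The key design point of the Act mirrors this lookup: the regulatory burden is decided by classifying the *use case*, not the underlying technology, which is why so much of the negotiation concerns which uses land in which tier.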

MEPs also substantially amended the list of prohibited AI practices to ban intrusive and discriminatory uses of AI systems, including: real-time remote biometric identification systems in publicly accessible spaces; post-remote biometric identification systems (except for law enforcement purposes); biometric categorisation systems using sensitive characteristics; predictive policing systems; emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

Clearview AI used by US police for almost 1M searches
https://www.artificialintelligence-news.com/2023/03/28/clearview-ai-us-police-almost-1m-searches/
Tue, 28 Mar 2023 15:26:04 +0000

Facial recognition firm Clearview AI has revealed that it has run almost a million searches for US police.

Facial recognition technology is a controversial topic, and for good reason. Clearview AI’s technology allows law enforcement to upload a photo of a suspect’s face and find matches in a database of billions of images it has collected.
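Systems of this kind typically work by converting each face image into a numerical embedding and returning the database identities whose embeddings lie closest to the probe. The sketch below illustrates the idea only — the toy 3-dimensional vectors and the 0.9 threshold are invented for the example; real systems use high-dimensional learned embeddings and approximate nearest-neighbour search over billions of entries:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(probe, database, threshold=0.9):
    """Return the closest database identity scoring above threshold, else None."""
    best_id, best_score = None, threshold
    for identity, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy embeddings standing in for real face vectors.
database = {
    "person_1": [0.9, 0.1, 0.2],
    "person_2": [0.1, 0.8, 0.5],
}

print(best_match([0.85, 0.15, 0.25], database))  # person_1
```

The threshold is the crux: set it lower and the system returns more candidate "matches", including more false ones — the failure mode behind the mistaken-identity cases documented against such tools.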

Clearview AI CEO Hoan Ton-That disclosed in an interview with the BBC that the firm has scraped 30 billion images from platforms such as Facebook. The images were taken without the users’ permission.

The company has been repeatedly fined millions in Europe and Australia for breaches of privacy, but US police are still using its powerful software.

Matthew Guariglia from the Electronic Frontier Foundation said that police use of Clearview puts everyone into a “perpetual police line-up.”

While the use of facial recognition by the police is often sold to the public as being used only for serious or violent crimes, Miami Police confirmed to the BBC that it uses Clearview AI’s software for every type of crime.

Miami’s Assistant Chief of Police Armando Aguilar said his team used Clearview AI’s system about 450 times a year, and that it had helped to solve several murders. 

However, there are numerous documented cases of mistaken identity using facial recognition by the police. Robert Williams, for example, was wrongfully arrested on his lawn in front of his family and held overnight in a “crowded and filthy” cell.

“The perils of face recognition technology are not hypothetical — study after study and real-life have already shown us its dangers,” explained Kate Ruane, Senior Legislative Counsel for the ACLU, following the reintroduction of the Facial Recognition and Biometric Technology Moratorium Act.

“The technology’s alarming rate of inaccuracy when used against people of colour has led to the wrongful arrests of multiple black men including Robert Williams, an ACLU client.”

The lack of transparency around police use of facial recognition means the true number of wrongful arrests it has led to is likely far higher.

Civil rights campaigners want police forces that use Clearview AI to openly say when it is used, and for its accuracy to be openly tested in court. They want the systems to be scrutinised by independent experts.

The use of facial recognition technology by police is a contentious issue. While it may help solve crimes, it also poses a threat to civil liberties and privacy.

Ultimately, it’s a fine line between using technology to fight crime and infringing on individual rights, and we need to tread carefully to ensure we don’t cross it.

Related: Clearview AI lawyer: ‘Common law has never recognised a right to privacy for your face’

GitHub CEO: The EU ‘will define how the world regulates AI’
https://www.artificialintelligence-news.com/2023/02/06/github-ceo-eu-will-define-how-world-regulates-ai/
Mon, 06 Feb 2023 17:04:56 +0000

GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act.

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China where they’re likely not subject to as strict regulations. Europe will find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

In expanding on his belief that open-source developers should be exempt, Dohmke explains that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI

FBI director warns about Beijing’s AI program
https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/
Mon, 23 Jan 2023 14:26:40 +0000

FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Cal Al-Dhubaib, Pandata: On developing ethical AI solutions https://www.artificialintelligence-news.com/2022/11/24/cal-al-dhubaib-pandata-on-developing-ethical-ai-solutions/ https://www.artificialintelligence-news.com/2022/11/24/cal-al-dhubaib-pandata-on-developing-ethical-ai-solutions/#respond Thu, 24 Nov 2022 13:49:53 +0000 https://www.artificialintelligence-news.com/?p=12500 Businesses that fail to deploy AI ethically will face severe penalties as regulations catch up with the pace of innovations. In the EU, the proposed AI Act features similar enforcement to GDPR but with even heftier fines of €30 million or six percent of annual turnover. Other countries are implementing variations, including China and a... Read more »

The post Cal Al-Dhubaib, Pandata: On developing ethical AI solutions appeared first on AI News.

Businesses that fail to deploy AI ethically will face severe penalties as regulations catch up with the pace of innovations.

In the EU, the proposed AI Act features similar enforcement to GDPR but with even heftier fines of €30 million or six percent of annual turnover. Other countries are implementing variations, including China and a growing number of US states.

Pandata are experts in human-centred, explainable, and trustworthy AI. The Cleveland-based outfit prides itself on delivering AI solutions that give enterprises a competitive edge in an ethical and lawful manner.

AI News caught up with Cal Al-Dhubaib, CEO of Pandata, to learn more about ethical AI solutions.

AI News: Can you give us a quick overview of what Pandata does?

Cal Al-Dhubaib: Pandata helps organisations to design and develop AI & ML solutions. We focus on heavily-regulated industries like healthcare, energy, and finance and emphasise the implementation of trustworthy AI.

We’ve built great expertise working with sensitive data and higher-risk applications and pride ourselves on simplifying complex problems. Our clients include globally-recognised brands like Cleveland Clinic, Progressive Insurance, Parker Hannifin, and Hyland Software. 

AN: What are some of the biggest ethical challenges around AI?

CA: A lot has changed in the last five years, especially our ability to rapidly train and deploy complex machine-learning models on unstructured data like text and images.

This increase in complexity has resulted in two challenges:

  1. Ground truth is more difficult to define. For example, summarising an article into a paragraph with AI may have multiple ‘correct’ answers.
  2. Models have become more complex and harder to interrogate.

The greatest ethical challenge we face in AI is that our models can break in ways we can’t even imagine. The result is a laundry list of recent examples of models that have caused physical harm or exhibited racial or gender bias.

AN: And how important is “explainable AI”?

CA: As models have increased in complexity, we’ve seen a rise in the field of explainable AI. Sometimes this means using simpler models to explain the more complex models that are better at performing the task.
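One common way to realise this idea is a global surrogate: fit a simple, inspectable model to mimic the predictions of a complex one. A minimal sketch in Python using scikit-learn follows; the models and data here are illustrative assumptions, not Pandata’s own tooling:

```python
# Global surrogate sketch (illustrative only): a shallow decision tree
# is trained to mimic a random forest, giving human-readable rules.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data standing in for a real problem.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box": accurate but hard to interrogate.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the
# ground truth, so its rules approximate what the complex model does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The fidelity score matters: a surrogate is only a trustworthy explanation to the extent that it actually agrees with the model it claims to explain.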

Explainable AI is critical in two situations:

  1. When an audit trail is necessary to support decisions made.
  2. When expert human decision-makers need to take action based on the output of an AI system.

AN: Are there any areas where AI should not be implemented by companies in your view?

CA: AI used to be the exclusive domain of data scientists. As the technology has become mainstream, it is only natural that we’re starting to work with a broader sphere of stakeholders including user experience designers, product experts, and business leaders. However, fewer than 25 percent of professionals consider themselves data literate (HBR 2021).

We often see this translate into a mismatch of expectations for what AI can reasonably accomplish. I share these three golden rules:

  1. If you can explain something procedurally, or provide a straightforward set of rules to accomplish a task, it may not be worth it to invest in AI.
  2. If a task is not performed consistently by equally trained experts, then there is little hope that an AI can learn to recognise consistent patterns.
  3. Proceed with caution when dealing with AI systems that directly impact the quality of human life – financially, physically, mentally, or otherwise.

AN: Do you think AI regulations need to be stricter or more relaxed?

CA: In some cases, regulation is long overdue. Regulation has hardly kept up with the pace of innovation.

As of 2022, the FDA has re-classified over 500 AI-powered software applications as medical devices. The EU AI Act, anticipated to be rolled out in 2024-25, will be the first to set specific guidelines for AI applications that affect human life.

Just like GDPR created a wave of change in data privacy practices and the infrastructure to support them, the EU AI Act will require organisations to be more disciplined in their approach to model deployment and management.

Organisations that start to mature their practices today will be well prepared to ride that wave and thrive in its wake.

AN: What advice would you provide to business leaders who are interested in adopting or scaling their AI practices?

CA: Use change management principles: understand, plan, implement, and communicate to prepare the organisation for AI-powered disruption.

Improve your AI literacy. AI is not intended to replace humans but rather to take on repetitive tasks, enabling humans to focus on more impactful work.

AI has to be boring to be practical. The real power of AI is to resolve redundancies and inefficiencies we experience in our daily work. Deciding how to use the building blocks of AI to get there is where the vision of a prepared leader can go a long way.

If any of these topics sound interesting, Cal has shared a recap of his session at this year’s AI & Big Data Expo North America here.

(Photo by Nathan Dumlao on Unsplash)


Italy’s facial recognition ban exempts law enforcement https://www.artificialintelligence-news.com/2022/11/15/italy-facial-recognition-ban-exempts-law-enforcement/ https://www.artificialintelligence-news.com/2022/11/15/italy-facial-recognition-ban-exempts-law-enforcement/#respond Tue, 15 Nov 2022 15:47:07 +0000 https://www.artificialintelligence-news.com/?p=12484 Italy has banned the use of facial recognition, except for law enforcement purposes. On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies. The agency... Read more »

The post Italy’s facial recognition ban exempts law enforcement appeared first on AI News.

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems that use biometric data until a specific law governing their use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Agency ordered Lecce’s authorities to explain what systems will be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car licence plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)


Axon’s AI ethics board resign after TASER drone announcement https://www.artificialintelligence-news.com/2022/06/07/axon-ai-ethics-board-resign-taser-drone-announcement/ https://www.artificialintelligence-news.com/2022/06/07/axon-ai-ethics-board-resign-taser-drone-announcement/#respond Tue, 07 Jun 2022 17:56:42 +0000 https://www.artificialintelligence-news.com/?p=12052 The majority of Axon’s AI ethics board have resigned after the company announced that it’s developing taser-equipped drones. In response to yet another shooting in a US school, Axon founder and CEO Rick Smith began thinking about how the company could help put a stop to the all too regular occurrence. The shooting kicked off... Read more »

The post Axon’s AI ethics board resign after TASER drone announcement appeared first on AI News.

The majority of Axon’s AI ethics board have resigned after the company announced that it’s developing taser-equipped drones.

In response to yet another shooting in a US school, Axon founder and CEO Rick Smith began thinking about how the company could help put a stop to the all too regular occurrence.

The shooting kicked off the usual debate over whether stricter gun laws are needed. Unfortunately, we all know nothing is likely to really change and we’ll be back to rehashing the same arguments the next time more children lose their lives.

“In the aftermath of these events, we get stuck in fruitless debates. We need new and better solutions,” Smith said in a statement.

Few would disagree with that statement but Smith’s proposed solution has caused quite a stir.

“We have elected to publicly engage communities and stakeholders, and develop a remotely operated, non-lethal drone system that we believe will be a more effective, immediate, humane, and ethical option to protect innocent people,” Smith explained.

The TASER drone system would use real-time security feeds supplied through a partnership with Fusus.

“Trying to find and stop an active shooter based on the telephone game connecting victim 911 callers is antiquated,” says Chris Lindenau, CEO of Fusus. “Fusus brings the ability to share any security camera to first responders, providing known locations and visual live feeds regardless of which security cameras they use.

“This network of cameras, with human and AI monitoring, together with panic buttons and other local communication tools, can detect and ID a threat before a shot is fired and dramatically improve response times and situational awareness.”

Nine out of 12 members of Axon’s AI ethics board resigned following the announcement and issued a statement explaining their decision.

“Only a few weeks ago, a majority of this board – by an 8-4 vote – recommended that Axon not proceed with a narrow pilot study aimed at vetting the company’s concept of Taser-equipped drones,” wrote the former board members.

“In that limited conception, the Taser-equipped drone was to be used only in situations in which it might avoid a police officer using a firearm, thereby potentially saving a life.”

“We understood the company might proceed despite our recommendation not to, and so we were firm about the sorts of controls that would be needed to conduct a responsible pilot should the company proceed. We just were beginning to produce a public report on Axon’s proposal and our deliberations.”

However, Smith overruled the ethics board and made the announcement regardless.

The board members go on to explain how they’ve been firm against Axon playing a role in supplying real-time, persistent surveillance capabilities that “undoubtedly will harm communities of color and others who are overpoliced, and likely well beyond that.”

“The Taser-equipped drone also has no realistic chance of solving the mass shooting problem Axon now is prescribing it for, only distracting society from real solutions to a tragic problem.”

Over the years, the board members believe they have been able to steer Axon away from implementing draconian facial recognition capabilities and ensure the withdrawal of a software tool to scrape data from social media websites. However, the members claim Axon has more recently rejected their advice on numerous occasions.

“We all feel the desperate need to do something to address our epidemic of mass shootings. But Axon’s proposal to elevate a tech-and-policing response when there are far less harmful alternatives, is not the solution,” explained the board members.

“Significantly for us, it bypassed Axon’s commitment to consult with the company’s own AI Ethics Board.”

