european union Archives - AI News
https://www.artificialintelligence-news.com/tag/european-union/

AI regulation: A pro-innovation approach – EU vs UK
https://www.artificialintelligence-news.com/2023/07/31/ai-regulation-pro-innovation-approach-eu-vs-uk/
Mon, 31 Jul 2023

The post AI regulation: A pro-innovation approach – EU vs UK appeared first on AI News.

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science. Government frameworks consider the role of regulation in creating the environment for AI to flourish. AI technologies have not yet reached their full potential. Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

The UK has indicated a need to act quickly to continue to lead the international conversation on AI governance and demonstrate the value of its pragmatic, proportionate regulatory approach. In its report, the UK government identifies a short window for intervention to provide a clear, pro-innovation regulatory environment and make the UK one of the top places in the world to build foundational AI companies. Not dissimilarly, EU legislators have signalled an intention to make the EU a global hub for AI innovation. On both fronts, responding to risk and building public trust are important drivers for regulation. Yet clear and consistent regulation can also support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies. Neither the EU nor the UK can afford to be without a clear, proportionate approach to regulation that enables the responsible application of AI to flourish. Without such consideration, they risk creating cumbersome rules that apply to all AI technologies.

What are the policy objectives and intended effects?

Similarities exist in terms of the overall aims. As shown below, the core similarities revolve around growth, safety, and economic prosperity:

EU AI Act:

  • Ensure that AI systems placed on the market and used are safe and respect existing laws on fundamental rights and Union values.
  • Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
  • Ensure legal certainty to facilitate investment and innovation in AI.
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

UK Approach:

  • Drive growth and prosperity by boosting innovation, investment, and public trust to harness the opportunities and benefits that AI technologies present.
  • Strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

What are the problems being tackled?

Again, similarities exist in terms of a common focus: the end-user. AI’s involvement in multiple activities of the economy, from simple chatbots to biometric identification, inevitably means that end-users are affected. Protecting them at all costs seems to be the presiding theme:

EU AI Act:

  • Safety risks. Increased risks to the safety and security of citizens caused by the use of AI systems.
  • Fundamental rights risks. The use of AI systems poses an increased risk of violations of citizens’ fundamental rights and Union values.
  • Legal uncertainty. Legal uncertainty and complexity on how to ensure compliance with rules applicable to AI systems dissuade businesses from developing and using the technology.
  • Enforcement. Competent authorities do not have the powers and/or procedural framework to ensure compliance of AI use with fundamental rights and safety.
  • Mistrust. Mistrust in AI would slow down AI development in Europe and reduce the global competitiveness of the EU economies.
  • Fragmentation. Fragmented measures create obstacles for a cross-border AI single market and threaten the Union’s digital sovereignty.

UK Approach:

  • Market failures. A number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.
  • Consumer risks. These risks include damage to physical and mental health, bias and discrimination, and infringements on privacy and individual rights.

What are the differences in policy options?

A variety of options have been considered by the respective policymakers. On the face of it, pro-innovation requires a holistic examination to account for the variety of challenges new ways of working generate. The EU sets the standard with Option 3:

EU AI Act (decided):

  • Option 1 – EU voluntary labelling scheme. An EU act establishing a voluntary labelling scheme: one definition of AI, applicable only on a voluntary basis.
  • Option 2 – Ad-hoc sectoral approach. Ad-hoc sectoral acts (revised or new). Each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.
  • Option 3 – Horizontal risk-based act on AI. A single binding horizontal act on AI: one horizontally applicable AI definition and a methodology for determining high risk (risk-based).
  • Option 3+ – Option 3 plus industry-led codes of conduct for non-high-risk AI.
  • Option 4 – Horizontal act for all AI. A single binding horizontal act on AI: one horizontal AI definition, but no methodology or gradation (all risks covered).

UK Approach (in process):

  • Option 0 – Do nothing. Assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.
  • Option 1 – Delegate to existing regulators, guided by non-statutory advisory principles. A non-legislative option with existing regulators applying cross-sectoral AI governance principles within their remits.
  • Option 2 (preferred) – Delegate to existing regulators with a duty to have due regard to the principles, supported by central AI regulatory functions. No new mandatory obligations for businesses.
  • Option 3 – Centralised AI regulator with new legislative requirements placed on AI systems. The UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

What are the estimated direct compliance costs to firms?

Both the UK Approach and the EU AI Act regulatory framework will apply to all AI systems being designed or developed, made available or otherwise being used in the EU/UK, whether they are developed in the EU/UK or abroad. Both businesses that develop and deploy AI, “AI businesses”, and businesses that use AI, “AI adopting businesses”, are in the scope of the framework. These two types of firms have different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

Key finding: the cost of compliance for high-risk systems (HRS) is highest under Option 3.

  • % of businesses that provide high-risk systems (HRS): 8.1% (Options 1–3)
  • Cost of compliance per HRS: £3,698 (Options 1 and 2); £36,981 (Option 3)
  • % of businesses with AI systems that interact with natural persons (non-HRS): 39.0% (Options 1–3)
  • Cost of compliance per non-HRS: £330 (Options 1–3)
  • Assumed number of AI systems per AI business (2020): small – 2; medium – 5; large – 10
  • Assumed number of AI systems per AI-adopting business (2020): small – 2; medium – 5; large – 10

(No compliance-cost figures are given for Option 0, the do-nothing option.)
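To illustrate how these per-system figures might combine, here is a rough sketch of an expected compliance cost per business under Option 3. This combination is my own reading for illustration, not a formula from the UK report; the shares, per-system costs, and system count come from the assumptions above.

```python
# Rough sketch of an expected compliance cost per business under UK Option 3.
# Combining the figures this way is an assumption for illustration only,
# not the report's own methodology.
P_HRS, COST_HRS = 0.081, 36_981        # share of high-risk systems; £ cost per HRS
P_NON_HRS, COST_NON_HRS = 0.390, 330   # share of non-HRS systems; £ cost per non-HRS
SYSTEMS_PER_LARGE_BUSINESS = 10        # assumed AI systems per large business (2020)

expected = SYSTEMS_PER_LARGE_BUSINESS * (P_HRS * COST_HRS + P_NON_HRS * COST_NON_HRS)
print(round(expected, 2))  # roughly £31,241.61 for a large AI business
```

Under these assumptions, the high-risk share dominates the expected cost, which is why Option 3's tenfold jump in per-HRS cost matters so much.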
EU AI Act: Total compliance cost of the five requirements for each AI product

Key finding: Information provision represents the highest cost incurred by firms.

Administrative activity (cost per activity, as given):

  • Training data: €5,180.5
  • Documents and record keeping: €2,231
  • Information provision: €6,800
  • Human oversight: €1,260
  • Robustness and accuracy: €4,750

Totals: 20,581.5 minutes of administrative time; €10,976.8 total admin cost (at an hourly rate of €32); €29,276.8 total cost.

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK. Lower costs do not imply a less rigid approach; rather, they reflect an itemised approach to cost estimation and the use of a standard pricing metric: hours. In practice, firms are likely to pursue efficiency by reducing the number of hours required to achieve compliance.
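The EU's standard pricing metric is easy to verify: the total administrative minutes convert to the stated admin cost at €32 per hour. A minimal check (the helper function is my own, not from the report):

```python
# Verify the EU costing arithmetic: administrative effort in minutes
# converted to euros at the stated hourly rate of €32.
HOURLY_RATE_EUR = 32

def admin_cost(total_minutes: float, hourly_rate: float = HOURLY_RATE_EUR) -> float:
    """Convert administrative effort in minutes to a euro cost."""
    return round(total_minutes / 60 * hourly_rate, 1)

print(admin_cost(20_581.5))  # 10976.8 — matches the stated total admin cost
```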

Lessons from the UK Approach for the EU AI Act

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology. Models for the governance and mitigation of AI risk from outside the region can therefore still offer insightful lessons, and flag issues to account for, before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems. There are three key ideas for EU decision-makers to consider from the UK Approach.

AI assurance techniques and technical standards

Under Article 17 of the EU AI Act, providers of high-risk AI systems must put in place a quality management system designed to ensure compliance. To do this, providers of high-risk AI systems must establish techniques, procedures, and systematic actions to be used for development, quality control, and quality assurance. The EU AI Act only briefly covers the concept of assurance, but it could benefit from published assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI by ensuring that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate, and communicate the trustworthiness of AI systems across the development and deployment life cycle. These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government launched a ‘Portfolio of AI Assurance techniques’ in Spring 2023. This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.
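As a toy illustration of one assurance technique named above — performance testing — the sketch below checks a model's accuracy on held-out data against a threshold. All data and the threshold are hypothetical, and real assurance portfolios combine this with audits, impact assessments, and formal verification.

```python
# Toy performance-testing check, one of the assurance techniques described
# above. Predictions, labels, and the 0.7 threshold are invented examples.
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

preds = [1, 0, 1, 1, 0, 1, 0, 0]
truth = [1, 0, 1, 0, 0, 1, 0, 1]

acc = accuracy(preds, truth)
print(acc)  # 0.75
assert acc >= 0.7, "model fails the assurance threshold"
```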

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers. Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined under the EU AI Act and the UK Approach. Similarities exist in terms of the stage of development.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalised through harmonised technical standards. In equal fashion, the UK government intends to play a leading role in the development of international technical standards, working with industry and with international and UK partners. It plans to continue to support the role of technical standards in complementing its approach to AI regulation, including through the UK AI Standards Hub. These technical standards may demonstrate firms’ compliance with the EU AI Act.

A harmonised vocabulary

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation. While the EU AI Act and the UK Approach are either under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised transatlantic vocabulary. The points of agreement and divergence between the two initiatives are identified below:

Shared principles (both the EU AI Act and the UK Approach):

  • Accountability
  • Safety
  • Privacy
  • Transparency
  • Fairness

Divergent principles – EU AI Act:

  • Data Governance
  • Diversity
  • Environmental and Social Well-Being
  • Human Agency and Oversight
  • Technical Robustness
  • Non-Discrimination

Divergent principles – UK Approach:

  • Governance
  • Security
  • Robustness
  • Explainability
  • Contestability
  • Redress
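The shared/divergent split above can be expressed as simple set operations over the two initiatives' stated principles (terms taken from the comparison; grouping them as Python sets is just an illustration):

```python
# Principles named by each initiative, as listed in the comparison above.
eu = {"Accountability", "Safety", "Privacy", "Transparency", "Fairness",
      "Data Governance", "Diversity", "Environmental and Social Well-Being",
      "Human Agency and Oversight", "Technical Robustness", "Non-Discrimination"}
uk = {"Accountability", "Safety", "Privacy", "Transparency", "Fairness",
      "Governance", "Security", "Robustness", "Explainability",
      "Contestability", "Redress"}

shared = eu & uk                 # principles both initiatives name
divergent = (eu | uk) - shared   # principles unique to one initiative
print(sorted(shared))
```

Note that some divergent terms (e.g. "Technical Robustness" vs "Robustness") differ only in wording, which is exactly the kind of mismatch a harmonised vocabulary would resolve.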

How AI & Partners can help

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act. Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

Website: https://www.ai-and-partners.com/

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

EU committees green-light the AI Act
https://www.artificialintelligence-news.com/2023/05/11/eu-committees-green-light-ai-act/
Thu, 11 May 2023

The post EU committees green-light the AI Act appeared first on AI News.

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules are based on a risk-based approach and they establish obligations for providers and users depending on the level of risk that the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.

MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as:

  • Real-time remote biometric identification systems in publicly accessible spaces.
  • Post-remote biometric identification systems (except for law enforcement purposes).
  • Biometric categorisation systems using sensitive characteristics.
  • Predictive policing systems.
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

AI think tank calls GPT-4 a risk to public safety
https://www.artificialintelligence-news.com/2023/03/31/ai-think-tank-gpt-4-risk-to-public-safety/
Fri, 31 Mar 2023

The post AI think tank calls GPT-4 a risk to public safety appeared first on AI News.

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act, accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited contents in the GPT-4 System Card that describe the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the system would be investigated and that it lacks a proper legal basis for collecting personal information about the people using it.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation. Not enough regulation puts the public at risk while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

Related: What will AI regulation look like for businesses?

What will AI regulation look like for businesses?
https://www.artificialintelligence-news.com/2023/03/24/what-will-ai-regulation-look-like-for-businesses/
Fri, 24 Mar 2023

The post What will AI regulation look like for businesses? appeared first on AI News.

Unlike food, medicine, and cars, we have yet to see clear regulations or laws to guide AI design in the US. Without standard guidelines, companies that design and develop ML models have historically worked off of their own perceptions of right and wrong. 

This is about to change. 

As the EU finalizes its AI Act and generative AI continues to rapidly evolve, we will see the artificial intelligence regulatory landscape shift from general, suggested frameworks to more permanent laws. 

The EU AI Act has spurred significant conversations among business leaders: How can we prepare for stricter AI regulations? Should I proactively design AI that meets these criteria? How soon will similar regulation be passed in the US?

Continue reading to better understand what AI regulation may look like for companies in the near future.  

How the EU AI Act will impact your business 

Like the EU’s General Data Protection Regulation (GDPR) released in 2018, the EU AI Act is expected to become a global standard for AI regulation. Parliament is scheduled to vote on the draft by the end of March 2023, and if this timeline is met, the final AI Act could be adopted by the end of the year. 

It is widely predicted that the effects of the AI Act will be felt beyond the EU’s borders (the so-called Brussels effect), despite it being European regulation. Organizations operating on an international scale will be required to conform directly to the legislation. Meanwhile, US and other independently led companies will quickly realize that it is in their best interest to comply with this regulation.

We’re beginning to see this already with other similar legislation, like Canada’s proposed Artificial Intelligence & Data Act and New York City’s automated employment regulation.

AI system risk categories

Under the AI Act, organizations’ AI systems will be classified into three risk categories, each with their own set of guidelines and consequences. 

  • Unacceptable risk. AI systems that meet this level will be banned. This includes manipulative systems that cause harm, real-time biometric identification systems used in public spaces for law enforcement, and all forms of social scoring. 
  • High risk. These AI systems include tools like job applicant scanning models and will be subject to specific legal requirements. 
  • Limited and minimal risk. This category encompasses many of the AI applications businesses use today, including chatbots and AI-powered inventory management tools, and will largely be left unregulated. Customer-facing limited-risk applications, however, will require disclosure that AI is being used. 
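As a toy illustration (not legal guidance), the tiered structure above can be sketched as a lookup from use case to obligations. The example use cases are drawn from the descriptions above; real classification under the Act depends on detailed legal criteria.

```python
# Toy mapping of the AI Act's three risk tiers. Use-case names and tier
# assignments here simply restate the illustrative examples in the text.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "job applicant scanning": "high",
    "customer chatbot": "limited/minimal",
}

def obligations(use_case: str) -> str:
    """Return the broad regulatory consequence for a known use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return {
        "unacceptable": "banned",
        "high": "specific legal requirements",
        "limited/minimal": "largely unregulated; disclosure for customer-facing AI",
    }.get(tier, "assess against the Act's criteria")

print(obligations("social scoring"))  # banned
```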

What will AI regulation look like? 

Because the AI Act is still under draft, and its global effects are to be determined, we can’t say with certainty what regulation will look like for organizations. However, we do know that it will vary based on industry, the type of model you’re designing, and the risk category in which it falls. 

Regulation will likely include third-party scrutiny, where your model is stress-tested against the population you’re attempting to serve. These tests will evaluate questions including ‘Is the model performing within acceptable margins of error?’ and ‘Are you disclosing the nature and use of your model?’
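A hedged sketch of what such a stress test might check: whether error rates stay within an acceptable margin for each population subgroup a model serves. All group names and figures here are invented for illustration.

```python
# Hypothetical margin-of-error check of the kind a third-party stress test
# might run. Groups, error rates, and the 5% margin are invented examples.
ACCEPTABLE_ERROR = 0.05

error_by_group = {"group_a": 0.03, "group_b": 0.04, "group_c": 0.08}

failing = {g: e for g, e in error_by_group.items() if e > ACCEPTABLE_ERROR}
print(failing)  # {'group_c': 0.08} — this subgroup exceeds the margin
```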

For organizations with high-risk AI systems, the AI Act has already outlined several requirements: 

  • Implementation of a risk-management system. 
  • Data governance and management. 
  • Technical documentation.
  • Record keeping and logging. 
  • Transparency and provision of information to users.
  • Human oversight. 
  • Accuracy, robustness, and cybersecurity.
  • Conformity assessment. 
  • Registration with the EU-member-state government.
  • Post-market monitoring system. 
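One lightweight way to operationalise the list above is as a compliance checklist. The requirement names below paraphrase the Act's items, and the completion state is purely illustrative.

```python
# Illustrative compliance checklist for the AI Act's high-risk requirements.
# Names paraphrase the draft Act; the "done" set is an invented example state.
REQUIREMENTS = [
    "risk-management system", "data governance", "technical documentation",
    "record keeping and logging", "transparency to users", "human oversight",
    "accuracy, robustness, cybersecurity", "conformity assessment",
    "member-state registration", "post-market monitoring",
]

done = {"risk-management system", "technical documentation"}  # example state

outstanding = [r for r in REQUIREMENTS if r not in done]
print(len(outstanding))  # 8 requirements still open in this example
```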

We can also expect regular reliability testing for models (similar to e-checks for cars) to become a more widespread service in the AI industry. 

How to prepare for AI regulations 

Many AI leaders have already been prioritizing trust and risk mitigation when designing and developing ML models. The sooner you accept AI regulation as our new reality, the more successful you will be in the future. 

Here are just a few steps organizations can take to prepare for stricter AI regulation: 

  • Research and educate your teams on the types of regulation that will exist, and how they will impact your company today and in the future.  
  • Audit your existing and planned models. Which risk category do they align with and which associated regulations will impact you most?
  • Develop and adopt a framework for designing responsible AI solutions.
  • Think through your AI risk mitigation strategy. How does it apply to existing models and ones designed in the future? What unexpected actions should you account for?  
  • Establish an AI governance and reporting strategy that ensures multiple checks before a model goes live. 
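The audit step above can be sketched as a simple pass over a model inventory. The tier names mirror the Act’s draft categories, but the mapping rules and use-case labels below are invented for illustration only and are not legal guidance.

```python
# Hypothetical sketch: tier names follow the Act's draft categories, but the
# use-case sets and mapping rules are invented for illustration only.
BANNED_USES = {"social_scoring", "public_biometric_id"}
HIGH_RISK_USES = {"job_applicant_screening", "credit_scoring"}


def risk_category(use_case: str, customer_facing: bool) -> str:
    if use_case in BANNED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    # Customer-facing tools (e.g. chatbots) need disclosure that AI is used
    return "limited" if customer_facing else "minimal"


inventory = [
    ("job_applicant_screening", False),
    ("chatbot", True),
    ("inventory_management", False),
]
print([risk_category(use, facing) for use, facing in inventory])
# ['high', 'limited', 'minimal']
```

Tagging each planned and existing model this way makes it obvious where the heavier obligations (risk management, documentation, conformity assessment) will land.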

In light of the AI Act and inevitable future regulation, ethical and fair AI design is no longer a “nice to have”, but a “must have”. How can your organization prepare for success?

(Photo by ALEXANDRE LALLEMAND on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post What will AI regulation look like for businesses? appeared first on AI News.

US and EU agree to collaborate on improving lives with AI https://www.artificialintelligence-news.com/2023/01/31/us-and-eu-agree-collaborate-improving-lives-ai/ https://www.artificialintelligence-news.com/2023/01/31/us-and-eu-agree-collaborate-improving-lives-ai/#comments Tue, 31 Jan 2023 17:02:28 +0000 https://www.artificialintelligence-news.com/?p=12678 The US and EU have signed a landmark agreement to explore how AI can be used to improve lives. The US Department of State and EU Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT) simultaneously held a virtual signing ceremony of the agreement in Washington and Brussels. Roberto Viola, Director General of DG... Read more »

The post US and EU agree to collaborate on improving lives with AI appeared first on AI News.

The US and EU have signed a landmark agreement to explore how AI can be used to improve lives.

The US Department of State and EU Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT) simultaneously held a virtual signing ceremony of the agreement in Washington and Brussels.

Roberto Viola, Director General of DG CONNECT, signed the ‘Administrative Arrangement on Artificial Intelligence for the Public Good’ on behalf of the EU.

“Today, we are strengthening our cooperation with the US on artificial intelligence and computing to address global challenges, from climate change to natural disasters,” commented Thierry Breton, EU Commissioner for the Internal Market.

“Based on common values and interests, EU and US researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact.”

Jose W. Fernandez, Under Secretary of State for Economic Growth, Energy, and the Environment, signed the agreement on behalf of the US.

The arrangement will deepen transatlantic scientific and technological research as the world moves through what many believe to be the fourth industrial revolution.

With rapid advances in AI, the IoT, distributed ledgers, autonomous vehicles, and more, it’s vital that fundamental principles are upheld.

In a statement, Fernandez’s office wrote:

“This arrangement presents an opportunity for joint scientific and technological research with our Transatlantic partners, for the benefit of the global scientific community. 

Furthermore, it offers a compelling vision for how to use AI in a way that serves our peoples and upholds our democratic values such as transparency, fairness, and privacy.”

Some of the specific research areas will include extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation.

The latest agreement between the US and EU builds upon the Declaration for the Future of the Internet.

(Image Credit: European Commission)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Italy’s facial recognition ban exempts law enforcement https://www.artificialintelligence-news.com/2022/11/15/italy-facial-recognition-ban-exempts-law-enforcement/ https://www.artificialintelligence-news.com/2022/11/15/italy-facial-recognition-ban-exempts-law-enforcement/#respond Tue, 15 Nov 2022 15:47:07 +0000 https://www.artificialintelligence-news.com/?p=12484 Italy has banned the use of facial recognition, except for law enforcement purposes. On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies. The agency... Read more »

The post Italy’s facial recognition ban exempts law enforcement appeared first on AI News.

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems using biometric data until a specific law governing its use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Agency ordered Lecce’s authorities to explain what systems will be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car license plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

MEPs back AI mass surveillance ban for the EU https://www.artificialintelligence-news.com/2021/10/07/meps-back-ai-mass-surveillance-ban-for-the-eu/ https://www.artificialintelligence-news.com/2021/10/07/meps-back-ai-mass-surveillance-ban-for-the-eu/#respond Thu, 07 Oct 2021 10:42:18 +0000 http://artificialintelligence-news.com/?p=11194 MEPs from the European Parliament have adopted a resolution in favour of banning AI-powered mass surveillance and facial recognition in public spaces. With a 71 vote majority, MEPs sided with Petar Vitanov’s report that argued AI must not be allowed to encroach on fundamental rights. An S&D party member, Vitanov pointed out that AI has... Read more »

The post MEPs back AI mass surveillance ban for the EU appeared first on AI News.

MEPs from the European Parliament have adopted a resolution in favour of banning AI-powered mass surveillance and facial recognition in public spaces.

With a 71 vote majority, MEPs sided with Petar Vitanov’s report that argued AI must not be allowed to encroach on fundamental rights.

An S&D party member, Vitanov pointed out that AI has not yet proven to be a wholly reliable tool on its own.

He cited examples of individuals being denied social benefits because of faulty AI tools, or people being arrested due to inaccurate facial recognition, adding that “the victims are always the poor, immigrants, people of colour or Eastern Europeans. I always thought that only happens in the movies”.

Despite the report’s overall majority backing, members of the European People’s Party – the largest party in the EU – voted against the report, with just seven exceptions.

Behind this dispute is a fundamental disagreement over what exactly constitutes encroaching on civil liberties when using AI surveillance tools.

On the left are politicians like Renew Europe MEP Karen Melchior, who believes that “predictive profiling, AI risk assessment, and automated decision making systems are weapons of ‘math destruction’… as dangerous to our democracy as nuclear bombs are for living creatures and life”.

“They will destroy the fundamental rights of each citizen to be equal before the law and in the eye of our authorities,” she said.

Meanwhile, centrist and conservative-leaning MEPs tend to have a more cautious approach to banning AI technologies outright.

Pointing to the July capture of Dutch journalist Peter R. de Vries’ suspected killers thanks to AI, home affairs commissioner Ylva Johansson described this major case as an example of “smart digital technology used in defence of citizens and our fundamental rights”.

“Don’t put protection of fundamental rights in contradiction to the protection of human lives and of societies. It’s simply not true that we have to choose. We are capable of doing both,” she added.

The Commission published its proposal for a European Artificial Intelligence Act in April.

Global human rights charity, Fair Trials, welcomed the vote — calling it a “landmark result for fundamental rights and non-discrimination in the technological age”.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

EU regulation sets fines of €20M or up to 4% of turnover for AI misuse https://www.artificialintelligence-news.com/2021/04/14/eu-regulation-fines-20m-up-4-turnover-ai-misuse/ https://www.artificialintelligence-news.com/2021/04/14/eu-regulation-fines-20m-up-4-turnover-ai-misuse/#respond Wed, 14 Apr 2021 16:00:39 +0000 http://artificialintelligence-news.com/?p=10460 A leaked draft of EU regulation around the use of AI sets hefty fines of up to €20 million or four percent of global turnover (whichever is greater.) The regulation (PDF) was first reported by Politico and is expected to be announced next week on April 21st. In the draft, the legislation’s authors wrote: “Some... Read more »

The post EU regulation sets fines of €20M or up to 4% of turnover for AI misuse appeared first on AI News.

A leaked draft of EU regulation around the use of AI sets hefty fines of up to €20 million or four percent of global turnover (whichever is greater).

The regulation (PDF) was first reported by Politico and is expected to be announced next week on April 21st.

In the draft, the legislation’s authors wrote:

“Some of the uses and applications of artificial intelligence may generate risks and cause harm to interests and rights that are protected by Union law. Such harm might be material or immaterial, insofar as it relates to the safety and health of persons, their property or other individual fundamental rights and interests protected by Union law.

A legal framework setting up a European approach on artificial intelligence is needed to foster the development and uptake of artificial intelligence that meets a high level of protection of public interests, in particular the health, safety and fundamental rights and freedoms of persons as recognised and protected by Union law.”

Few debate the need for AI regulation, but the extent to which it should be controlled is a contentious issue. A lack of controls puts lives and privacy at risk, especially given AI’s well-documented bias issues. However, overregulation, and the fear of being penalised for AI research in Europe, risks driving such an important technology out of the continent to less strict countries.

AI relies on data, so the impact the EU’s GDPR would have on research was part of the debate around that legislation (which carries the same maximum penalties for breaches as the draft AI rules) when it was being conceived. It’s hard to say for sure whether it’s a result of the strict regulatory environment, but EU nations are falling behind industry leaders like the US, China, and the UK.

Last year, the White House even urged its European allies not to overregulate AI. In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The EU believes it has taken a “human-centric” approach to its AI regulation which aims to strike a balance between not leaving powerful companies to their own devices like the US, nor using the technology to create a 1984-like dystopian surveillance state like China with its social scoring systems and mass facial recognition.

AI in policing is one of the most fiercely debated issues, especially due to the aforementioned bias issues. However, it also has huge potential to tackle serious crime. In this area, the EU is also attempting to strike a fine balance by allowing the use of facial recognition by authorities in public places to fight serious crime if its use is limited in time and geography.

European cooperation with the US – already strained in recent years by some EU members’ increasing ties with Russia and China, and by a perceived lack of commitment to NATO through historic underfunding and plans for the creation of an EU army – is likely to come under further pressure from the bill.

In addition to setting rules which govern the use of AI, the draft also proposes the creation of a European Artificial Intelligence Board which features one representative from each EU member, the EU’s data protection authority, and a European Commission representative.

21/04 update: As reported by French publication Contexte, a new draft of the EU’s impending AI regulations increases the potential fines to six percent of turnover, or €30 million (whichever is higher).

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Macron wants Europeans to relax about data or be left behind in AI https://www.artificialintelligence-news.com/2018/03/21/macron-europeans-data-ai/ https://www.artificialintelligence-news.com/2018/03/21/macron-europeans-data-ai/#respond Wed, 21 Mar 2018 13:26:20 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=2931 Emmanuel Macron, President of France, is calling for Europeans to relax about the use of their data by AI companies to prevent those operating in the region falling behind their international counterparts. Citizens are increasingly concerned about their use of data, especially following the ongoing investigations into Facebook and Cambridge Analytica. AI companies, however, rely... Read more »

The post Macron wants Europeans to relax about data or be left behind in AI appeared first on AI News.

Emmanuel Macron, President of France, is calling for Europeans to relax about the use of their data by AI companies to prevent those operating in the region falling behind their international counterparts.

Citizens are increasingly concerned about the use of their data, especially following the ongoing investigations into Facebook and Cambridge Analytica. AI companies, however, rely on the bulk collection of data to train machine learning models.

Macron wants to ensure France is a leader in AI but says his efforts are being held back by European attitudes to privacy.

The president’s comments are in contrast to the EU’s current and upcoming policies, which companies worry will put them at a disadvantage against competitors in less restrictive countries.

GDPR (the General Data Protection Regulation) is the most frequently cited concern among companies and researchers that rely on the bulk collection of data for their work.

In a piece for our sister publication IoT News, I spoke to Peter Wright, Solicitor and Managing Director of Digital Law UK, who highlighted these very concerns.

“It’s a particular problem when you’re looking at the US, where in places like California they are not under these same pressures,” said Wright. “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop.”

Speaking in Beijing, Macron said the European Union needs to ‘move fast’ to create ‘a single market that our big data actors can access’. He said the EU must decide which model it wants to use to exploit data. His comments were made after witnessing the depth and scope of Chinese data collection.

This may be an uphill battle for Europeans. In France, 70 percent of people are concerned about the personal data collected when they use internet search engines, according to a December survey.

Do you think Europeans should relax about their data privacy? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and share their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

Editorial: Facebook excluding the EU from AI advancement heralds a trend https://www.artificialintelligence-news.com/2017/11/28/facebook-eu-ai-advancement/ https://www.artificialintelligence-news.com/2017/11/28/facebook-eu-ai-advancement/#respond Tue, 28 Nov 2017 17:10:49 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=2729 EU regulations have forced Facebook to exclude citizens of member states from its new AI-powered suicide prevention tool, and it heralds a worrying trend. Facebook’s new suicide prevention tool aims to use pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.... Read more »

The post Editorial: Facebook excluding the EU from AI advancement heralds a trend appeared first on AI News.

EU regulations have forced Facebook to exclude citizens of member states from its new AI-powered suicide prevention tool, and it heralds a worrying trend.

Facebook’s new suicide prevention tool aims to use pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.

In a post announcing the feature, Facebook wrote: “We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. This will eventually be available worldwide, except the EU.”

EU data protection regulations

Facebook’s notable lack of support for its latest AI advancement in EU member countries is likely due to strict data protection regulations.

In recent months, I’ve spoken to lawyers, executives from leading companies, and even concerned members of the European Parliament itself about the EU’s stringent regulations stifling innovation across member states.

Julia Reda, an MEP, says: “When we’re trying to regulate the likes of Google, how do we ensure that we’re not also setting in stone that any European competitor that might be growing at the moment would never emerge?”

My discussions highlighted the fear that European businesses will struggle without the data their international counterparts have access to, and startups may look to non-EU countries to set up their operations.

However, the situation has taken a more serious turn, with the potential for loss of life. Beyond the inability to launch potentially life-saving features like Facebook’s suicide prevention tool, the regulations will slow innovation in fields that benefit from AI, such as healthcare.

We often cover medical developments on AI News, and most of these advancements rely on data collection to improve machine learning models. GDPR puts significant restrictions on how, when, and why firms can collect and use this data — which simply do not exist to such an extent anywhere else in the world.

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” comments Peter Wright, Solicitor and Managing Director of Digital Law UK. “Very often we hear ‘Where are the European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop.”

The issue stems from stringent EU data protection regulations that are ill-suited to today’s world. There’s little debate over the need to safeguard data, and to penalise those who fail to do so, but even companies with a history of protecting their users are concerned about the extent of this legislation.

“We deal with a very large amount of customer data at F-Secure and I don’t go a working day without hearing a GDPR discussion around me,” comments Sean Sullivan, Security Advisor at Finnish cyber security company F-Secure. “It’s a huge effort, and many people are involved within my part of the organisation.”

“And not just at the legal level; we have ‘data people’ working with our product developers on our software architecture. We’ve always been a privacy focused company, but the last year has been a whole new level in my experience.”

The penalties for non-compliance with GDPR are severe and could devastate a company. Startups in particular, especially in areas such as AI, will struggle from being unable to collect anywhere near as much data as current leaders such as Google hold. However, that doesn’t mean established companies have it easy.

“Fortunately, we have the people we need. I imagine Facebook is still in the position of needing to find California-based GDPR experts who can work with the local developer teams,” explains Sullivan. “I’m confident it has people in Europe who are working on the high level issues, but I doubt that all of the product teams will be able to find the needed resources to be confident of GDPR compliance.”

“There will be more tech innovations that won’t be rolled out in the EU. Hopefully not for long, but at least for the near future.”

With its billions of users, there’s a good chance everyone has friends and family who use Facebook. I’m certain if anyone expresses suicidal thoughts on the platform we’d all want them to receive help as soon as possible.

For many consumers, this situation will be the first to bring awareness to the negative impacts of the EU’s strict data regulations. For businesses, this serves as yet another example.

If you’re having thoughts of suicide or self-harm, please find a list of international helplines here.

What are your thoughts on the EU’s data protection regulations? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and share their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
