AI regulation: A pro-innovation approach – EU vs UK (31 July 2023)

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science. Government frameworks consider the role of regulation in creating the environment for AI to flourish. AI technologies have not yet reached their full potential. Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

The UK has indicated a need to act quickly to continue to lead the international conversation on AI governance and to demonstrate the value of its pragmatic, proportionate regulatory approach. In its report, the UK government identifies a short window for intervention to provide a clear, pro-innovation regulatory environment and make the UK one of the top places in the world to build foundational AI companies. Similarly, EU legislators have signalled an intention to make the EU a global hub for AI innovation. On both fronts, responding to risk and building public trust are important drivers for regulation. Yet clear and consistent regulation can also support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies. Neither the EU nor the UK can afford to be without clear, proportionate approaches to regulation that enable the responsible application of AI to flourish. Without them, they risk creating cumbersome rules that apply indiscriminately to all AI technologies.

What are the policy objectives and intended effects?

Similarities exist in terms of the overall aims. As shown below, the core similarities revolve around growth, safety and economic prosperity:

EU AI Act:

• Ensure that AI systems placed on the market and used are safe and respect existing laws on fundamental rights and Union values.
• Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
• Ensure legal certainty to facilitate investment and innovation in AI.
• Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

UK Approach:

• Drive growth and prosperity by boosting innovation, investment, and public trust to harness the opportunities and benefits that AI technologies present.
• Strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

What are the problems being tackled?

Again, similarities exist in terms of a common focus: the end-user. AI’s involvement across the economy, from simple chatbots to biometric identification, inevitably means that end-users are affected. Protecting them seems to be the presiding theme:

EU AI Act:

• Safety risks: increased risks to the safety and security of citizens caused by the use of AI systems.
• Fundamental rights risks: the use of AI systems poses an increased risk of violations of citizens’ fundamental rights and Union values.
• Legal uncertainty: legal uncertainty and complexity on how to ensure compliance with rules applicable to AI systems dissuade businesses from developing and using the technology.
• Enforcement: competent authorities do not have the powers and/or procedural framework to ensure compliance of AI use with fundamental rights and safety.
• Mistrust: mistrust in AI would slow down AI development in Europe and reduce the global competitiveness of the EU economies.
• Fragmentation: fragmented measures create obstacles for a cross-border AI single market and threaten the Union’s digital sovereignty.

UK Approach:

• Market failures: a number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.
• Consumer risks: these include damage to physical and mental health, bias and discrimination, and infringements on privacy and individual rights.

What are the differences in policy options?

A variety of options have been considered by the respective policymakers. On the face of it, pro-innovation requires a holistic examination to account for the variety of challenges new ways of working generate. The EU sets the standard with Option 3:

EU AI Act (decided):

• Option 1 – EU voluntary labelling scheme: an EU act establishing a voluntary labelling scheme. One definition of AI, applicable only on a voluntary basis.
• Option 2 – Ad-hoc sectoral approach: ad-hoc sectoral acts (revision or new). Each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.
• Option 3 – Horizontal risk-based act on AI: a single binding horizontal act on AI. One horizontally applicable AI definition and a methodology for determining high-risk systems (risk-based).
• Option 3+ – Option 3 plus industry-led codes of conduct for non-high-risk AI.
• Option 4 – Horizontal act for all AI: a single binding horizontal act on AI. One horizontal AI definition, but no methodology or gradation (all risks covered).

UK Approach (in process):

• Option 0 – Do nothing: assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.
• Option 1 – Delegate to existing regulators, guided by non-statutory advisory principles: a non-legislative option, with existing regulators applying cross-sectoral AI governance principles within their remits.
• Option 2 (preferred) – Delegate to existing regulators with a duty to have due regard to the principles, supported by central AI regulatory functions: no new mandatory obligations for businesses.
• Option 3 – Centralised AI regulator with new legislative requirements placed on AI systems: the UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

What are the estimated direct compliance costs to firms?

Both the UK Approach and the EU AI Act regulatory framework will apply to all AI systems designed, developed, made available, or otherwise used in the EU/UK, whether they are developed there or abroad. Both businesses that develop and deploy AI (“AI businesses”) and businesses that use AI (“AI-adopting businesses”) are in scope. These two types of firm face different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

Key finding: Compliance costs for high-risk systems (HRS) are highest under Option 3.

• Percentage of businesses that provide high-risk systems (HRS): 8.1% under Options 1, 2 and 3.
• Cost of compliance per HRS: £3,698 under Options 1 and 2; £36,981 under Option 3.
• Percentage of businesses with AI systems that interact with natural persons (non-HRS): 39.0% under Options 1, 2 and 3.
• Cost of compliance per non-HRS: £330 under Options 1, 2 and 3.
• Assumed number of AI systems per AI business (2020): small – 2; medium – 5; large – 10.
• Assumed number of AI systems per AI-adopting business (2020): small – 2; medium – 5; large – 10.
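To illustrate how these assumptions might combine, here is a minimal sketch in Python. The shares, per-system costs, and system counts come from the list above; the way they are multiplied together is our own assumption for illustration, not the UK government’s published methodology.

```python
# Illustrative sketch only: combines the assumptions listed above.
# The figures come from the list; the aggregation logic is an assumption.

HRS_SHARE = 0.081       # share of businesses providing high-risk systems (HRS)
NON_HRS_SHARE = 0.390   # share with systems that interact with natural persons
COST_PER_HRS = {1: 3_698, 2: 3_698, 3: 36_981}  # GBP, by option
COST_PER_NON_HRS = 330                          # GBP, Options 1-3
SYSTEMS_PER_BUSINESS = {"small": 2, "medium": 5, "large": 10}  # assumed (2020)

def expected_cost(option: int, size: str) -> float:
    """Expected compliance cost for an average AI business of a given size."""
    per_system = HRS_SHARE * COST_PER_HRS[option] + NON_HRS_SHARE * COST_PER_NON_HRS
    return SYSTEMS_PER_BUSINESS[size] * per_system

for size in SYSTEMS_PER_BUSINESS:
    print(f"Option 3, {size} business: £{expected_cost(3, size):,.0f}")
# Option 3, small business: £6,248 -- i.e. 2 × (0.081 × £36,981 + 0.39 × £330)
```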
EU AI Act: Total compliance cost of the five requirements for each AI product

Key finding: Information provision represents the highest cost incurred by firms.

Time per administrative activity (total minutes per AI product):

• Training data: 5,180.5 minutes
• Documents and record keeping: 2,231 minutes
• Information provision: 6,800 minutes
• Human oversight: 1,260 minutes
• Robustness and accuracy: 4,750 minutes

Total: 20,581.5 minutes, giving a total administrative cost of €10,976.8 at an hourly rate of €32, and a total cost of €29,276.8 per AI product.
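As a quick consistency check on these totals, reading the per-activity figures as minutes (as the original table’s “Total Minutes” column suggests), converting the total into hours at the stated €32 rate reproduces the administrative cost exactly:

```python
# Consistency check: total minutes converted at €32/hour gives the admin cost.
total_minutes = 20_581.5
hourly_rate_eur = 32
admin_cost = total_minutes / 60 * hourly_rate_eur
print(f"€{admin_cost:,.1f}")  # €10,976.8 -> matches the total administrative cost
```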

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK. Lower costs do not imply a less rigid approach; rather, they reflect an itemised approach to cost estimation and the use of a standard pricing metric: hours. In practice, firms are likely to make compliance more efficient by reducing the number of hours required to achieve it.

Lessons from the UK Approach for the EU AI Act

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology. Even so, models for the governance and mitigation of AI risk from outside the region can provide insightful lessons for EU decision-makers, and highlight issues to account for, before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems. There are three key ideas for EU decision-makers to consider from the UK Approach.

AI assurance techniques and technical standards

Article 17 of the EU AI Act requires providers of high-risk AI systems to put in place a quality management system designed to ensure compliance. To do this, providers of high-risk AI systems must establish techniques, procedures, and systematic actions to be used for development, quality control, and quality assurance. However, the EU AI Act only briefly covers the concept of assurance; it could benefit from published assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI by ensuring that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate, and communicate the trustworthiness of AI systems across the development and deployment life cycle. These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government launched a ‘Portfolio of AI Assurance techniques’ in Spring 2023. This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.
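As a simple illustration of one entry in such a toolbox, the sketch below shows what a pre-deployment performance test could look like. The accuracy threshold, the stand-in model, and the test cases are assumptions for demonstration; they are not drawn from the UK portfolio itself.

```python
# Minimal performance-testing sketch; threshold and data are illustrative.
def performance_test(model, test_cases, min_accuracy=0.9):
    """Return (passed, accuracy) for a model over labelled test cases."""
    correct = sum(model(x) == y for x, y in test_cases)
    accuracy = correct / len(test_cases)
    return accuracy >= min_accuracy, accuracy

model = lambda score: score > 0.5                  # stand-in model
cases = [(0.9, True), (0.2, False), (0.7, True)]   # labelled examples
passed, accuracy = performance_test(model, cases)
print(passed, accuracy)  # True 1.0
```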

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers. Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined under the EU AI Act and the UK Approach. The two initiatives are at a similar stage of development in this respect.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalised through harmonised technical standards. Likewise, the UK government intends to take a leading role in the development of international technical standards, working with industry, international, and UK partners. It plans to continue to support the role of technical standards in complementing its approach to AI regulation, including through the UK AI Standards Hub. These technical standards may demonstrate firms’ compliance with the EU AI Act.

A harmonised vocabulary

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation. While the EU AI Act and the UK Approach are still under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised transatlantic vocabulary. The comparison below identifies where the two initiatives agree and where they diverge:

Shared principles (both initiatives):

• Accountability
• Safety
• Privacy
• Transparency
• Fairness

Divergent principles – EU AI Act:

• Data Governance
• Diversity
• Environmental and Social Well-Being
• Human Agency and Oversight
• Technical Robustness
• Non-Discrimination

Divergent principles – UK Approach:

• Governance
• Security
• Robustness
• Explainability
• Contestability
• Redress

How AI & Partners can help

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act. Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

 Website: https://www.ai-and-partners.com/

AI Act: The power of open-source in guiding regulations (26 July 2023)

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI systems that benefit all EU citizens.

Drawing from the success of open-source software development, policymakers can craft regulations that encourage open AI development while safeguarding user interests. By providing exemptions and proportional requirements for open ML systems, the EU can foster innovation and competition in the AI market while maintaining a thriving open-source ecosystem.

Representing both commercial and nonprofit stakeholders, several organisations – including GitHub, Hugging Face, EleutherAI, Creative Commons, and more – have banded together to release a policy paper calling on EU policymakers to protect open-source innovation.

The organisations have five proposals:

  1. Define AI components clearly: Clear definitions of AI components will help stakeholders understand their roles and responsibilities, facilitating collaboration and innovation in the open ecosystem.
  2. Clarify that collaborative development of open-source AI components is exempt from AI Act requirements: To encourage open-source development, the Act should clarify that contributors to public repositories are not subject to the same regulatory requirements as commercial entities.
  3. Support the AI Office’s coordination with the open-source ecosystem: The Act should encourage inclusive governance and collaboration between the AI Office and open-source developers to foster transparency and knowledge exchange.
  4. Ensure a practical and effective R&D exception: Allow limited real-world testing in different conditions, combining aspects of the Council’s approach and the Parliament’s Article 2(5d), to facilitate research and development without compromising safety and accountability.
  5. Set proportional requirements for “foundation models”: Differentiate between various uses and development modalities of foundation models, including open-source approaches, to ensure fair treatment and promote competition.

Open-source AI development offers several advantages, including transparency, inclusivity, and modularity. It allows stakeholders to collaborate and build on each other’s work, leading to more robust and diverse AI models. For instance, the EleutherAI community has become a leading open-source ML lab, releasing pre-trained models and code libraries that have enabled foundational research and reduced the barriers to developing large AI models.

Similarly, the BigScience project, which brought together over 1200 multidisciplinary researchers, highlights the importance of facilitating direct access to AI components across institutions and disciplines.

Such open collaborations have democratised access to large AI models, enabling researchers to fine-tune and adapt them to various languages and specific tasks—ultimately contributing to a more diverse and representative AI landscape.

Open research and development also promote transparency and accountability in AI systems. For example, LAION – a non-profit research organisation – released openCLIP models, which have been instrumental in identifying and addressing biases in AI applications. Open access to training data and model components has enabled researchers and the public to scrutinise the inner workings of AI systems and challenge misleading or erroneous claims.

The AI Act’s success depends on striking a balance between regulation and support for the open AI ecosystem. While openness and transparency are essential, regulation must also mitigate risks, ensure standards, and establish clear liability for AI systems’ potential harms.

As the EU sets the stage for regulating AI, embracing open source and open science will be critical to ensure that AI technology benefits all citizens.

By implementing the recommendations provided by organisations representing stakeholders in the open AI ecosystem, the AI Act can foster an environment of collaboration, transparency, and innovation, making Europe a leader in the responsible development and deployment of AI technologies.

(Photo by Nick Page on Unsplash)

European Parliament adopts AI Act position (14 June 2023)

The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, emphasised that it is the right time to regulate AI due to its profound impact.

Dr Ventsislav Ivanov, AI Expert and Lecturer at Oxford Business College, said: “Regulating artificial intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives.

“As the chaos and controversy accompanying this vote show, this will not be an easy feat. Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra.”

The adoption of the AI Act faced uncertainty as a political deal crumbled, leading to amendments from various political groups.

One of the main points of contention was the use of Remote Biometric Identification, with liberal and progressive lawmakers seeking to ban its real-time use except for ex-post investigations of serious crimes. The centre-right European People’s Party attempted to introduce exceptions for exceptional circumstances like terrorist attacks or missing persons, but their efforts were unsuccessful.

The act will introduce a tiered approach for AI models, including stricter regulations for foundation models and generative AI.

The European Parliament intends to introduce mandatory labelling for AI-generated content and mandate the disclosure of training data covered by copyright. This move comes as generative AI, exemplified by ChatGPT, gained widespread attention—prompting the European Commission to launch outreach initiatives to foster international alignment on AI rules.

MEPs made several significant changes to the AI Act, including expanding the list of prohibited practices to include subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software.

MEPs also introduced an extra layer for high-risk AI applications and extended the list of high-risk areas and use cases to cover law enforcement, migration control, and the recommender systems of prominent social media platforms.

Robin Röhm, CEO of Apheris, commented: “The passing of the plenary vote on the EU’s AI Act marks a significant milestone in AI regulation, but raises more questions than it answers. It will make it more difficult for start-ups to compete and means that investors are less likely to deploy capital into companies operating in the EU.

“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive.”

With the European Parliament’s adoption of its position on the AI Act, interinstitutional negotiations will commence with the EU Council of Ministers and the European Commission. The negotiations – known as trilogues – will address key points of contention such as high-risk categories, fundamental rights, and foundation models.

Spain, which assumes the rotating presidency of the Council in July, has made finalising the AI law its top digital priority. The aim is to reach a deal by November, with multiple trilogues planned as a backup.

The negotiations are expected to intensify in the coming months as the EU seeks to establish comprehensive regulations for AI, balancing innovation and governance while ensuring the protection of fundamental rights.

“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this,” concludes Röhm.

(Image Credit: European Union 2023 / Mathieu Cugnot)

Similar: UK will host global AI summit to address potential risks

EU committees green-light the AI Act (11 May 2023)

The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the AI Act.

This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said:

“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.

We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

Co-rapporteur Dragos Tudorache (Renew, Romania) added:

“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.

We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”

The rules are based on a risk-based approach and they establish obligations for providers and users depending on the level of risk that the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.

MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as:

• Real-time remote biometric identification systems in publicly accessible spaces.
• Post-remote biometric identification systems (except for law enforcement purposes).
• Biometric categorisation systems using sensitive characteristics.
• Predictive policing systems.
• Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
• Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Tim Wright, Tech and AI Regulatory Partner at London-based law firm Fladgate, commented:

“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. 

The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

(Photo by Denis Sebastian Tamas on Unsplash)

Related: UK details ‘pro-innovation’ approach to AI regulation

Italy will lift ChatGPT ban if OpenAI fixes privacy issues (13 April 2023)

Italy’s data protection authority has said that it’s willing to lift its ChatGPT ban if OpenAI meets specific conditions.

The Guarantor for the Protection of Personal Data (GPDP) announced last month that it was blocking access to OpenAI’s ChatGPT. The move was part of an ongoing investigation into whether the chatbot violated Italy’s data privacy laws and the EU’s infamous General Data Protection Regulation (GDPR).

The GPDP was concerned that ChatGPT could recall and emit personal information, such as phone numbers and addresses, from input queries. Additionally, officials were worried that the chatbot could expose minors to inappropriate answers that could potentially be harmful.

The GPDP says it will lift the ban on ChatGPT if its creator, OpenAI, enforces rules protecting minors and users’ personal data by 30th April 2023.

OpenAI has been asked to notify people on its website about how ChatGPT stores and processes their data, and to require users to confirm that they are 18 or older before using the software.

An age verification process will be required when registering new users and children below the age of 13 must be prevented from accessing the software. People aged 13-18 must obtain consent from their parents to use ChatGPT.
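Expressed as logic, the requested age gate might look like the following minimal sketch; the age thresholds come from the article, while the function shape and naming are assumptions.

```python
# Hypothetical age-gating sketch; thresholds from the article, design assumed.
def access_decision(age: int, parental_consent: bool = False) -> str:
    if age < 13:
        return "blocked"                    # under-13s must be prevented from access
    if age < 18:                            # 13-18 need parental consent
        return "allowed" if parental_consent else "consent required"
    return "allowed"

print(access_decision(12))        # blocked
print(access_decision(15, True))  # allowed
print(access_decision(25))        # allowed
```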

The company must also ask for explicit consent to use people’s data to train its AI models and allow anyone – whether they’re a user or not – to request any false personal information generated by ChatGPT to be corrected or deleted altogether.

These age verification measures must be fully implemented by 30th September or the ban will be reinstated.

This move is part of a larger trend of increased scrutiny of AI technologies by regulators around the world. ChatGPT is not the only AI system that has faced regulatory challenges.

Regulators in Canada and France have also launched investigations into whether ChatGPT violates data privacy laws after receiving official complaints. Meanwhile, Spain has urged the EU’s privacy watchdog to launch a deeper investigation into ChatGPT.

The international scrutiny of ChatGPT and similar AI systems highlights the need for developers to be proactive in addressing privacy concerns and implementing safeguards to protect users’ personal data.

(Photo by Levart_Photographer on Unsplash)

Related: AI think tank calls GPT-4 a risk to public safety

AI think tank calls GPT-4 a risk to public safety (31 March 2023)

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act—accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited content in the GPT-4 System Card describing the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the system would be investigated, as it does not have a proper legal basis for collecting personal information about the people using it.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation. Not enough regulation puts the public at risk while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

Related: What will AI regulation look like for businesses?

What will AI regulation look like for businesses? (24 March 2023)

Unlike food, medicine, and cars, AI design in the US is not yet guided by clear regulations or laws. Without standard guidelines, companies that design and develop ML models have historically worked off their own perceptions of right and wrong.

This is about to change. 

As the EU finalizes its AI Act and generative AI continues to rapidly evolve, we will see the artificial intelligence regulatory landscape shift from general, suggested frameworks to more permanent laws. 

The EU AI Act has spurred significant conversations among business leaders: How can we prepare for stricter AI regulations? Should we proactively design AI that meets these criteria? How soon will it be before similar regulation is passed in the US?

Continue reading to better understand what AI regulation may look like for companies in the near future.  

How the EU AI Act will impact your business 

Like the EU’s General Data Protection Regulation (GDPR) released in 2018, the EU AI Act is expected to become a global standard for AI regulation. Parliament is scheduled to vote on the draft by the end of March 2023, and if this timeline is met, the final AI Act could be adopted by the end of the year. 

Although it is a European regulation, the effects of the AI Act are widely expected to be felt beyond the EU’s borders (read: the Brussels effect). Organizations operating on an international scale will be required to conform directly to the legislation. Meanwhile, US and other independently led companies will quickly realize that it’s in their best interest to comply with this regulation.

We’re beginning to see this already with other similar legislation like Canada’s Artificial Intelligence & Data Act proposal and New York City’s automated employment regulation.

AI system risk categories

Under the AI Act, organizations’ AI systems will be classified into three risk categories, each with its own set of guidelines and consequences (a minimal code sketch of these tiers follows the list below).

  • Unacceptable risk. AI systems that meet this level will be banned. This includes manipulative systems that cause harm, real-time biometric identification systems used in public spaces for law enforcement, and all forms of social scoring. 
  • High risk. These AI systems include tools like job applicant scanning models and will be subject to specific legal requirements. 
  • Limited and minimal risk. This category encompasses many of the AI applications businesses use today, including chatbots and AI-powered inventory management tools, and will largely be left unregulated. Customer-facing limited-risk applications, however, will require disclosure that AI is being used. 
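To make the tiering concrete, the sketch below shows one way a company might encode these categories in an internal model inventory. The tier names mirror the Act’s categories as described above; the example systems and their mappings are illustrative assumptions, not legal classifications.

```python
# Hypothetical sketch: encoding the AI Act's risk tiers in a model inventory.
# Tier names mirror the Act; the example mappings are assumptions only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"              # banned outright (e.g. social scoring)
    HIGH = "high"                              # subject to strict legal requirements
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # largely unregulated; disclosure may apply

# Illustrative inventory entries, not authoritative classifications.
MODEL_INVENTORY = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "job-applicant-scanner": RiskTier.HIGH,
    "support-chatbot": RiskTier.LIMITED_OR_MINIMAL,
}

def requires_disclosure(name: str) -> bool:
    """Customer-facing limited-risk systems must disclose that AI is used."""
    return MODEL_INVENTORY[name] is RiskTier.LIMITED_OR_MINIMAL

print(requires_disclosure("support-chatbot"))  # True
```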

What will AI regulation look like? 

Because the AI Act is still under draft, and its global effects are to be determined, we can’t say with certainty what regulation will look like for organizations. However, we do know that it will vary based on industry, the type of model you’re designing, and the risk category in which it falls. 

Regulation will likely include third-party scrutiny, where your model is stress tested against the population you’re attempting to serve. These tests will evaluate questions including ‘Is the model performing within acceptable margins of error?’ and ‘Are you disclosing the nature and use of your model?’
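A minimal sketch of such a stress test follows, assuming it reduces to comparing per-group error rates against a fixed margin; the margin, group names, and records are illustrative assumptions.

```python
# Sketch of a per-group stress test; margin, groups, and data are assumed.
from collections import defaultdict

MAX_ERROR = 0.05  # assumed acceptable margin of error

def error_rates_by_group(records):
    """records: iterable of (group, prediction, label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 1),  # one error in group_a
    ("group_b", 1, 1), ("group_b", 0, 0),
]
for group, rate in error_rates_by_group(records).items():
    print(f"{group}: error={rate:.0%} -> {'OK' if rate <= MAX_ERROR else 'FAIL'}")
```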

For organizations with high-risk AI systems, the AI Act has already outlined several requirements: 

  • Implementation of a risk-management system. 
  • Data governance and management. 
  • Technical documentation.
  • Record keeping and logging. 
  • Transparency and provision of information to users.
  • Human oversight. 
  • Accuracy, robustness, and cybersecurity.
  • Conformity assessment. 
  • Registration with the EU-member-state government.
  • Post-market monitoring system. 
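As an example of how one item on this list might look in practice, here is a minimal, hypothetical sketch of the record-keeping and logging obligation as structured decision logging. The field names and JSONL format are assumptions, not terms defined by the Act.

```python
# Hypothetical structured audit logging for a high-risk AI system.
# Field names and the JSONL format are assumptions, not the Act's terms.
import json, time, uuid

def log_decision(model_id, inputs, output, overseer=None, path="audit_log.jsonl"):
    """Append one audit record per automated decision (record keeping and logging)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
        "human_overseer": overseer,  # supports the human-oversight item
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision("cv-screener-v2", {"years_experience": 4}, "shortlist", overseer="reviewer_17")
```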

We can also expect regular reliability testing for models (similar to e-checks for cars) to become a more widespread service in the AI industry. 

How to prepare for AI regulations 

Many AI leaders have already been prioritizing trust and risk mitigation when designing and developing ML models. The sooner you accept AI regulation as our new reality, the more successful you will be in the future. 

Here are just a few steps organizations can take to prepare for stricter AI regulation: 

  • Research and educate your teams on the types of regulation that will exist, and how it impacts your company today and in the future.  
  • Audit your existing and planned models. Which risk category do they align with and which associated regulations will impact you most?
  • Develop and adopt a framework for designing responsible AI solutions.
  • Think through your AI risk mitigation strategy. How does it apply to existing models and ones designed in the future? What unexpected actions should you account for?  
  • Establish an AI governance and reporting strategy that ensures multiple checks before a model goes live. 

In light of the AI Act and inevitable future regulation, ethical and fair AI design is no longer a “nice to have”, but a “must have”. How can your organization prepare for success?

(Photo by ALEXANDRE LALLEMAND on Unsplash)

GitHub CEO: The EU ‘will define how the world regulates AI’ (6 February 2023)

GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China where they’re likely not subject to as strict regulations. Europe will find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

In expanding on his belief that open-source developers should be exempt, Dohmke explains that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI

US and EU agree to collaborate on improving lives with AI (31 January 2023)

The US and EU have signed a landmark agreement to explore how AI can be used to improve lives.

The US Department of State and EU Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT) simultaneously held a virtual signing ceremony of the agreement in Washington and Brussels.

Roberto Viola, Director General of DG CONNECT, signed the ‘Administrative Arrangement on Artificial Intelligence for the Public Good’ on behalf of the EU.

“Today, we are strengthening our cooperation with the US on artificial intelligence and computing to address global challenges, from climate change to natural disasters,” commented Thierry Breton, EU Commissioner for the Internal Market.

“Based on common values and interests, EU and US researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact.”

Jose W. Fernandez, Under Secretary of State for Economic Growth, Energy, and the Environment, signed the agreement on behalf of the US.

The arrangement will deepen transatlantic scientific and technological research as the world moves through what many believe to be the fourth industrial revolution.

With rapid advances in AI, the IoT, distributed ledgers, autonomous vehicles, and more, it’s vital that fundamental principles are upheld.

In a statement, Fernandez’s office wrote:

“This arrangement presents an opportunity for joint scientific and technological research with our Transatlantic partners, for the benefit of the global scientific community. 

Furthermore, it offers a compelling vision for how to use AI in a way that serves our peoples and upholds our democratic values such as transparency, fairness, and privacy.”

Some of the specific research areas will include extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation.

The latest agreement between the US and EU builds upon the Declaration for the Future of the Internet.

(Image Credit: European Commission)

Italy’s facial recognition ban exempts law enforcement (15 November 2022)

Italy has banned the use of facial recognition, except for law enforcement purposes.

On Monday, the country’s Data Protection Authority (Garante per la protezione dei dati personali) issued official stays to two municipalities – the southern Italian city of Lecce and the Tuscan city of Arezzo – over their experiments with biometrics technologies.

The agency banned facial recognition systems using biometric data until a specific law governing its use is adopted.

“The moratorium arises from the need to regulate eligibility requirements, conditions and guarantees relating to facial recognition, in compliance with the principle of proportionality,” the agency said in a statement.

However, an exception was added for biometric data technology that is being used “to fight crime” or in a judicial investigation.

In Lecce, the municipality’s authorities said they would begin using facial recognition technologies. Italy’s Data Protection Agency ordered Lecce’s authorities to explain what systems will be used, their purpose, and the legal basis.

As for the Arezzo case, the city’s police were to be equipped with infrared smart glasses that could recognise car license plates.

Facial recognition technology is a central concern in the EU’s proposed AI regulation. The proposal has been released but will need to pass consultations within the EU before it’s adopted into law.

(Photo by Mikita Yo on Unsplash)
