law Archives - AI News

UMG files landmark lawsuit against AI developer Anthropic

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle in which the music industry confronts an AI developer head-on. UMG – along with several other key industry players, including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.
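
Claims like this often turn on whether a model can reproduce protected lyrics verbatim. As a purely illustrative sketch – not a description of how any party to this case actually tests for infringement – overlapping word n-grams are one simple way to flag near-verbatim reproduction between a model's output and a protected text:

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Split text into a set of lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(model_output: str, lyric: str, n: int = 3) -> float:
    """Fraction of the lyric's n-grams that appear verbatim in the output."""
    lyric_grams = ngrams(lyric, n)
    if not lyric_grams:
        return 0.0
    return len(lyric_grams & ngrams(model_output, n)) / len(lyric_grams)

# Hypothetical strings; a score near 1.0 flags near-verbatim reproduction.
lyric = "oh I believe in yesterday"   # stand-in for a protected lyric
output = "the model replied: oh I believe in yesterday"
print(verbatim_overlap(output, lyric))  # 1.0 -> fully reproduced
```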

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth watching for the precedents it may set for the disputes that follow.

(Photo by Jason Rosewell on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

UK deputy PM warns UN that AI regulation is falling behind advances

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

Primary fears surrounding unchecked AI development include widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that global regulation currently lags behind the rapid advances in AI. Unlike in the past, when regulation followed technological development, he stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 

White House secures safety commitments from eight more AI companies

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.
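
The commitments do not specify a particular watermarking design. A minimal sketch of the underlying idea – attaching a verifiable provenance tag to generated content – can be built from a keyed signature; this is an illustrative HMAC-based label using only the Python standard library, not any company's actual scheme:

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held-signing-key"  # hypothetical key kept by the AI provider

def tag_content(content: bytes, model: str) -> str:
    """Create a signed provenance tag asserting the content is AI-generated."""
    manifest = {"model": model, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def verify_tag(content: bytes, tag: str) -> bool:
    """Check the tag was issued for exactly this content by the key holder."""
    payload_b64, _, sig_b64 = tag.partition(".")
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(sig_b64), expected):
        return False
    return json.loads(payload)["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...generated image bytes..."
tag = tag_content(image, model="example-model-v1")
print(verify_tag(image, tag))      # True: authentic and unmodified
print(verify_tag(b"edited", tag))  # False: content no longer matches the tag
```

A production watermark would typically be embedded in the media itself so it survives re-encoding; a detached tag like this only demonstrates the signing-and-verification principle.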

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race

UK government outlines AI Safety Summit plans

The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it’s committed to working closely with global partners to ensure that it remains safe and that its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI investment, deployment, and capabilities is staggering, with projections of up to $7 trillion in growth over the next decade alongside breakthroughs such as accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion—leading to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts infamously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Error-prone facial recognition leads to another wrongful arrest

The Detroit Police Department (DPD) is once again under scrutiny as a new lawsuit emerges, revealing that another innocent person has been wrongly arrested due to a flawed facial recognition match.

Porcha Woodruff, an African American woman who was eight months pregnant at the time, is the sixth individual to come forward and report being falsely accused of a crime because of the controversial technology utilised by law enforcement.

Woodruff was accused of robbery and carjacking.

“Are you kidding?” Woodruff claims to have said to the officers, gesturing to her stomach to highlight how nonsensical the allegation was while being eight months pregnant.

The pattern of wrongful arrests based on faulty facial recognition has raised serious concerns, particularly as all six victims known by the American Civil Liberties Union (ACLU) have been African Americans. However, Woodruff’s case is notable as she is the first woman to report such an incident happening to her.

This latest incident marks the third known allegation of a wrongful arrest in the past three years attributed to the Detroit Police Department specifically and its reliance on inaccurate facial recognition matches.

Robert Williams, represented by the ACLU of Michigan and the University of Michigan Law School’s Civil Rights Litigation Initiative (CRLI), has an ongoing lawsuit against the DPD for his wrongful arrest in January 2020 due to the same technology.

Phil Mayor, Senior Staff Attorney at ACLU of Michigan, commented: “It’s deeply concerning that the Detroit Police Department knows the devastating consequences of using flawed facial recognition technology as the basis for someone’s arrest and continues to rely on it anyway.

“As Ms Woodruff’s horrifying experience illustrates, the Department’s use of this technology must end.”

The use of facial recognition technology in law enforcement has been a contentious issue, with concerns raised about its accuracy, racial bias, and potential violations of privacy and civil liberties.

Studies have shown that these systems are more prone to errors when identifying individuals with darker skin tones, leading to a disproportionate impact on marginalised communities.
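
One way such disparities are surfaced is by measuring error rates separately for each demographic group on a labelled benchmark. The sketch below uses made-up numbers purely to illustrate the calculation behind those study findings:

```python
from collections import defaultdict

def false_match_rates(records):
    """records: (group, predicted_match, true_match) tuples.
    Returns the false-match rate per group over non-matching pairs."""
    trials, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        if not actual:               # only non-matching pairs can false-match
            trials[group] += 1
            errors[group] += predicted
    return {g: errors[g] / trials[g] for g in trials}

# Hypothetical audit data: group B is falsely matched five times as often.
data = ([("A", False, False)] * 990 + [("A", True, False)] * 10
        + [("B", False, False)] * 950 + [("B", True, False)] * 50)
print(false_match_rates(data))  # {'A': 0.01, 'B': 0.05}
```

Since a wrongful arrest begins with a false match, it is this per-group false-match rate – not overall accuracy – that matters when assessing whether a system is safe to use on the public.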

Critics argue that relying on facial recognition as the sole basis for an arrest poses significant risks and can lead to severe consequences for innocent individuals, as seen in the case of Woodruff.

Calls for transparency and accountability have escalated, with civil rights organisations urging the Detroit Police Department to halt its use of facial recognition until the technology is thoroughly vetted and proven to be unbiased and accurate.

“The DPD continues to hide its abuses of this technology, forcing people whose rights have been violated to expose its wrongdoing case by case,” added Mayor.

“DPD should not be permitted to avoid transparency and hide its own misconduct from public view at the same time it continues to subject Detroiters to dragnet surveillance.” 

As the case unfolds, the public remains watchful of how the Detroit Police Department will respond to the mounting pressure to address concerns about the misuse of facial recognition technology and its impact on the rights and lives of innocent individuals.

(Image Credit: Oleg Gamulinskii from Pixabay)

See also: UK will host global AI summit to address potential risks

SEC turns its gaze from crypto to AI

US Securities and Exchange Commission (SEC) chairman Gary Gensler has announced a shift in focus from cryptocurrency to AI.

Gensler, who has been vocal about the risks and challenges posed by the cryptocurrency industry, now believes that AI is the technology that “warrants the hype” and deserves greater attention from regulators.

Gensler’s interest in AI dates back to 1997, when he became intrigued by the technology after witnessing Russian chess grandmaster Garry Kasparov’s famous loss to IBM’s supercomputer, Deep Blue.

As an MIT professor, Gensler delved deeper into the study of AI, co-authoring a significant paper in 2020 that highlighted the risks posed by deep learning in the financial system.

His concern over the potential implications of mass automation using AI in the finance sector has led him to reevaluate regulatory approaches. Gensler believes that while AI can bring immense benefits to financial firms and their clients through enhanced predictive capabilities, it also carries significant risks that need to be addressed.

“Mass automation can have cascading implications for trillions of dollars in assets that trade on markets overseen by the SEC,” warns Gensler.

One of Gensler’s key concerns is the potential use of AI to obscure responsibility and accountability when things go wrong. Coordinating AI models among major trading houses could lead to increased market volatility and instability, a phenomenon that current regulatory regimes might not be equipped to manage.
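
That herding concern can be illustrated with a toy simulation, using entirely hypothetical parameters: when every trader acts on the same model's signal, daily order flow is perfectly correlated and aggregate swings are far larger than when traders act on independent signals:

```python
import random
import statistics

random.seed(0)
N_TRADERS, N_DAYS = 100, 1000

def daily_imbalance(shared_model: bool) -> list[float]:
    """Average order flow per day; price impact scales with this imbalance."""
    days = []
    for _ in range(N_DAYS):
        if shared_model:
            signal = random.gauss(0, 1)                      # one model for all
            orders = [signal] * N_TRADERS
        else:
            orders = [random.gauss(0, 1) for _ in range(N_TRADERS)]
        days.append(sum(orders) / N_TRADERS)
    return days

print("independent models:", statistics.stdev(daily_imbalance(False)))  # ~0.10
print("one shared model:  ", statistics.stdev(daily_imbalance(True)))   # ~1.00
```

In this toy model the shared signal makes day-to-day order imbalance roughly ten times more volatile, which is the mechanism behind the worry that monocultures of trading models could destabilise markets.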

As a result, Gensler has taken a proactive step by proposing one of the first regulatory frameworks for AI in the finance industry. His proposal requires trading houses and money managers to carefully evaluate their use of AI and predictive data to identify any conflicts of interest, especially when the interests of clients clash with company profits.

However, this shift in focus does not mean the SEC is easing its crackdown on cryptocurrencies.

Under Gensler’s leadership, the SEC has actively pursued legal action against major crypto firms like Ripple, Binance, and Coinbase. Several lawsuits are currently pending, signalling that the SEC remains committed to enforcing its actions against cryptocurrency companies that engage in scams and fraudulent activities.

Gensler’s emphasis on AI comes at a crucial time when the technology is making rapid strides in automating various financial processes.

While AI holds tremendous promise in revolutionising the industry, its unchecked growth could also lead to unforeseen challenges. By directing the SEC’s attention towards AI, Gensler aims to strike a balance between promoting innovation and safeguarding market integrity and investor interests.

(Photo by Petri Heiskanen on Unsplash)

See also: AI Act: The power of open-source in guiding regulations

AI regulation: A pro-innovation approach – EU vs UK

In this article, the writers compare the United Kingdom’s plans for implementing a pro-innovation approach to regulation (“UK Approach”) versus the European Union’s proposed Artificial Intelligence Act (the “EU AI Act”).

Authors: Sean Musch, AI & Partners and Michael Borrelli, AI & Partners

AI – The opportunity and the challenge

AI currently delivers broad societal benefits, from medical advances to mitigating climate change. As an example, an AI technology developed by DeepMind, a UK-based business, can predict the structure of almost every protein known to science. Government frameworks consider the role of regulation in creating the environment for AI to flourish. AI technologies have not yet reached their full potential. Under the right conditions, AI will transform all areas of life and stimulate economies by unleashing innovation and driving productivity, creating new jobs and improving the workplace.

The UK has signalled a need to act quickly to continue leading the international conversation on AI governance and to demonstrate the value of its pragmatic, proportionate regulatory approach. In its report, the UK government identifies a short window for intervention to provide a clear, pro-innovation regulatory environment and make the UK one of the top places in the world to build foundational AI companies. Similarly, EU legislators have signalled an intention to make the EU a global hub for AI innovation. On both fronts, responding to risk and building public trust are important drivers for regulation. Yet clear and consistent regulation can also support business investment and build confidence in innovation.

What remains critical for the industry is winning and retaining consumer trust, which is key to the success of innovation economies. Neither the EU nor the UK can afford to be without clear, proportionate approaches to regulation that enable the responsible application of AI to flourish. Without them, both risk creating cumbersome rules that apply indiscriminately to all AI technologies.

What are the policy objectives and intended effects?

Similarities exist in terms of the overall aims. As shown below, the core similarities revolve around growth, safety, and economic prosperity:

EU AI Act:
- Ensure that AI systems placed on the market and used are safe and respect existing laws on fundamental rights and Union values.
- Enhance governance and effective enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems.
- Ensure legal certainty to facilitate investment and innovation in AI.
- Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

UK Approach:
- Drive growth and prosperity by boosting innovation, investment, and public trust to harness the opportunities and benefits that AI technologies present.
- Strengthen the UK’s position as a global leader in AI, by ensuring the UK is the best place to develop and use AI technologies.

What are the problems being tackled?

Again, similarities exist in terms of a common focus: the end-user. AI’s involvement in activities across the economy – from simple chatbots to biometric identification – inevitably means that end-users are affected. Protecting them appears to be the prevailing theme:

EU AI Act:
- Safety risks. Increased risks to the safety and security of citizens caused by the use of AI systems.
- Fundamental rights risk. The use of AI systems poses an increased risk of violations of citizens’ fundamental rights and Union values.
- Legal uncertainty. Legal uncertainty and complexity on how to ensure compliance with rules applicable to AI systems dissuade businesses from developing and using the technology.
- Enforcement. Competent authorities do not have the powers and/or procedural framework to ensure compliance of AI use with fundamental rights and safety.
- Mistrust. Mistrust in AI would slow down AI development in Europe and reduce the global competitiveness of the EU economies.
- Fragmentation. Fragmented measures create obstacles for a cross-border AI single market and threaten the Union’s digital sovereignty.

UK Approach:
- Market failures. A number of market failures (information asymmetry, misaligned incentives, negative externalities, regulatory failure) mean AI risks are not being adequately addressed.
- Consumer risks. These risks include damage to physical and mental health, bias and discrimination, and infringements on privacy and individual rights.

What are the differences in policy options?

A variety of options have been considered by the respective policymakers. On the face of it, pro-innovation requires a holistic examination to account for the variety of challenges new ways of working generate. The EU sets the standard with Option 3:

EU AI Act (decided):
- Option 1 – EU voluntary labelling scheme – An EU act establishing a voluntary labelling scheme. One definition of AI, however applicable only on a voluntary basis.
- Option 2 – Ad-hoc sectoral approach – Ad-hoc sectoral acts (revision or new). Each sector can adopt a definition of AI and determine the riskiness of the AI systems covered.
- Option 3 – Horizontal risk-based act on AI – A single binding horizontal act on AI. One horizontally applicable AI definition and methodology for the determination of high-risk (risk-based).
- Option 3+ – Option 3 plus industry-led codes of conduct for non-high-risk AI.
- Option 4 – Horizontal act for all AI – A single binding horizontal act on AI. One horizontal AI definition, but no methodology/gradation (all risks covered).

UK Approach (in process):
- Option 0 – Do nothing – Assume the EU delivers the AI Act as drafted in April 2021; the UK makes no regulatory changes regarding AI.
- Option 1 – Delegate to existing regulators, guided by non-statutory advisory principles – Non-legislative option with existing regulators applying cross-sectoral AI governance principles within their remits.
- Option 2 – Delegate to existing regulators with a duty to regard the principles, supported by central AI regulatory functions (preferred option) – Existing regulators have a ‘duty to have due regard’ to the cross-sectoral AI governance principles, supported by central AI regulatory functions. No new mandatory obligations for businesses.
- Option 3 – Centralised AI regulator with new legislative requirements placed on AI systems – The UK establishes a central AI regulator, with mandatory requirements for businesses aligned to the EU AI Act.

What are the estimated direct compliance costs to firms?

Both the UK Approach and the EU AI Act regulatory framework will apply to all AI systems being designed or developed, made available, or otherwise used in the EU/UK, whether they are developed in the EU/UK or abroad. Both businesses that develop and deploy AI (“AI businesses”) and businesses that use AI (“AI-adopting businesses”) are in scope of the framework. These two types of firms face different expected costs per business under the respective frameworks.

UK Approach: Key assumptions for AI system costs

Key finding: Cost of compliance for HRS highest under Option 3

Option | Option 0 | Option 1 | Option 2 | Option 3
% of businesses that provide high-risk systems (HRS) | – | 8.1% | 8.1% | 8.1%
Cost of compliance per HRS | – | £3,698 | £3,698 | £36,981
% of businesses with AI systems that interact with natural persons (non-HRS) | – | 39.0% | 39.0% | 39.0%
Cost of compliance per non-HRS | – | £330 | £330 | £330
Assumed number of AI systems per AI business (2020) | Small: 2, Medium: 5, Large: 10 (all options)
Assumed number of AI systems per AI-adopting business (2020) | Small: 2, Medium: 5, Large: 10 (all options)

EU AI Act: Total compliance cost of the five requirements for each AI product

Key finding: Information provision represents the highest cost incurred by firms.

Administrative Activity | Total Minutes | Total Admin Cost (hourly rate = €32) | Total Cost
Training Data | – | – | €5,180.5
Documents & Record Keeping | – | – | €2,231
Information Provision | – | – | €6,800
Human Oversight | – | – | €1,260
Robustness and Accuracy | – | – | €4,750
Total | €20,581.5 | €10,976.8 | €29,276.8

In light of these comparisons, the EU appears to estimate a lower cost of compliance than the UK. Lower costs do not imply a less rigid approach; rather, they reflect an itemised approach to cost estimation and the use of a standard pricing metric: hours. In practice, firms are likely to aim to make compliance more efficient by reducing the number of hours it requires.
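
As a rough illustration of how such figures translate into an expected cost per business, the UK Option 3 numbers above can be combined as follows. Note this treats the percentages as the share of a firm's systems in each category, which is one plausible reading of the table rather than the source's stated methodology:

```python
# UK Approach, Option 3 figures from the table above (illustrative reading).
p_hrs, cost_per_hrs = 0.081, 36_981        # high-risk systems (HRS)
p_nonhrs, cost_per_nonhrs = 0.390, 330     # systems interacting with people
systems_per_business = {"small": 2, "medium": 5, "large": 10}

for size, n_systems in systems_per_business.items():
    expected = n_systems * (p_hrs * cost_per_hrs + p_nonhrs * cost_per_nonhrs)
    print(f"{size}: expected compliance cost ~ £{expected:,.0f}")
# small: ~£6,248; medium: ~£15,621; large: ~£31,242
```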

Lessons from the UK Approach for the EU AI Act

The forthcoming EU AI Act is set to place the EU at the global forefront of regulating this emerging technology. Accordingly, models for the governance and mitigation of AI risk from outside the region can still provide insightful lessons for EU decision-makers to learn and issues to account for before the EU AI Act is passed.

This is certainly applicable to Article 9 of the EU AI Act, which requires developers to establish, implement, document, and maintain risk management systems for high-risk AI systems. There are three key ideas for EU decision-makers to consider from the UK Approach.

AI assurance techniques and technical standards

Under Article 17 of the EU AI Act, the quality management system put in place by providers of high-risk AI systems is designed to ensure compliance. To do this, providers of high-risk AI systems must establish techniques, procedures, and systematic actions to be used for development, quality control, and quality assurance. The EU AI Act only briefly covers the concept of assurance, but it could benefit from publishing assurance techniques and technical standards, which play a critical role in enabling the responsible adoption of AI so that potential harms at all levels of society are identified and documented.

To assure AI systems effectively, the UK government is calling for a toolbox of assurance techniques to measure, evaluate, and communicate the trustworthiness of AI systems across the development and deployment life cycle. These techniques include impact assessment, audit, and performance testing along with formal verification methods. To help innovators understand how AI assurance techniques can support wider AI governance, the government launched a ‘Portfolio of AI Assurance techniques’ in Spring 2023. This is an industry collaboration to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles.

Similarly, assurance techniques need to be underpinned by available technical standards, which provide a common understanding across assurance providers. Technical standards and assurance techniques will also enable organisations to demonstrate that their systems are in line with the regulatory principles enshrined under the EU AI Act and the UK Approach. Similarities exist in terms of the stage of development.

Specifically, the EU AI Act defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market, which will be further operationalised through harmonised technical standards. In equal fashion, the UK government intends to play a leading role in the development of international technical standards, working with industry, international, and UK partners. The UK government plans to continue supporting the role of technical standards in complementing its approach to AI regulation, including through the UK AI Standards Hub. These technical standards may demonstrate firms’ compliance with the EU AI Act.

A harmonised vocabulary

All relevant parties would benefit from reaching a consensus on the definitions of key terms related to the foundations of AI regulation. While the EU AI Act and the UK Approach are still under development or in the incubation stage, decision-makers for both initiatives should seize the opportunity to develop a shared understanding of core AI ideas, principles, and concepts, and codify these into a harmonised vocabulary. The comparison below identifies where the two initiatives agree and where they diverge:

Shared principles (both EU AI Act and UK Approach):
- Accountability
- Safety
- Privacy
- Transparency
- Fairness

Divergent principles – EU AI Act:
- Data Governance
- Diversity
- Environmental and Social Well-Being
- Human Agency and Oversight
- Technical Robustness
- Non-Discrimination

Divergent principles – UK Approach:
- Governance
- Security
- Robustness
- Explainability
- Contestability
- Redress

How AI & Partners can help

We can help you start assessing your AI systems using recognised metrics ahead of the expected changes brought about by the EU AI Act. Our leading practice is geared towards helping you identify, design, and implement appropriate metrics for your assessments.

Website: https://www.ai-and-partners.com/

AI Act: The power of open-source in guiding regulations

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI systems that benefit all EU citizens.

Drawing from the success of open-source software development, policymakers can craft regulations that encourage open AI development while safeguarding user interests. By providing exemptions and proportional requirements for open ML systems, the EU can foster innovation and competition in the AI market while maintaining a thriving open-source ecosystem.

Representing both commercial and nonprofit stakeholders, several organisations – including GitHub, Hugging Face, EleutherAI, Creative Commons, and more – have banded together to release a policy paper calling on EU policymakers to protect open-source innovation.

The organisations have five proposals:

  1. Define AI components clearly: Clear definitions of AI components will help stakeholders understand their roles and responsibilities, facilitating collaboration and innovation in the open ecosystem.
  2. Clarify that collaborative development of open-source AI components is exempt from AI Act requirements: To encourage open-source development, the Act should clarify that contributors to public repositories are not subject to the same regulatory requirements as commercial entities.
  3. Support the AI Office’s coordination with the open-source ecosystem: The Act should encourage inclusive governance and collaboration between the AI Office and open-source developers to foster transparency and knowledge exchange.
  4. Ensure a practical and effective R&D exception: Allow limited real-world testing in different conditions, combining aspects of the Council’s approach and the Parliament’s Article 2(5d), to facilitate research and development without compromising safety and accountability.
  5. Set proportional requirements for “foundation models”: Differentiate between various uses and development modalities of foundation models, including open-source approaches, to ensure fair treatment and promote competition.

Open-source AI development offers several advantages, including transparency, inclusivity, and modularity. It allows stakeholders to collaborate and build on each other’s work, leading to more robust and diverse AI models. For instance, the EleutherAI community has become a leading open-source ML lab, releasing pre-trained models and code libraries that have enabled foundational research and reduced the barriers to developing large AI models.
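
As a concrete illustration of what that openness enables, one of EleutherAI's public checkpoints can be downloaded and inspected locally in a few lines. This sketch assumes the Hugging Face transformers library (with PyTorch installed) and uses the small GPT-Neo checkpoint; any openly licensed model id would work the same way:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neo-125m"  # small public checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Open weights let anyone inspect the architecture and parameter count...
print(model.config)
print(sum(p.numel() for p in model.parameters()), "parameters")

# ...and run (or fine-tune) the model locally, without gatekeeping.
inputs = tokenizer("Open research lets anyone", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

It is precisely this kind of direct access that lets outside researchers audit model behaviour, probe for biases, and adapt models to new languages and tasks.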

Similarly, the BigScience project, which brought together over 1200 multidisciplinary researchers, highlights the importance of facilitating direct access to AI components across institutions and disciplines.

Such open collaborations have democratised access to large AI models, enabling researchers to fine-tune and adapt them to various languages and specific tasks—ultimately contributing to a more diverse and representative AI landscape.

Open research and development also promote transparency and accountability in AI systems. For example, LAION – a non-profit research organisation – released openCLIP models, which have been instrumental in identifying and addressing biases in AI applications. Open access to training data and model components has enabled researchers and the public to scrutinise the inner workings of AI systems and challenge misleading or erroneous claims.

The AI Act’s success depends on striking a balance between regulation and support for the open AI ecosystem. While openness and transparency are essential, regulation must also mitigate risks, ensure standards, and establish clear liability for AI systems’ potential harms.

As the EU sets the stage for regulating AI, embracing open source and open science will be critical to ensure that AI technology benefits all citizens.

By implementing the recommendations provided by organisations representing stakeholders in the open AI ecosystem, the AI Act can foster an environment of collaboration, transparency, and innovation, making Europe a leader in the responsible development and deployment of AI technologies.

(Photo by Nick Page on Unsplash)

Assessing the risks of generative AI in the workplace

Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One concern frequently highlighted by industry experts is the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to the storage of information obtained during interactions with individual users, raising legal and compliance risks.

The potential for leakage of sensitive company data or code through interactions with generative AI solutions is of significant concern.

“Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs.

“While there is no concrete evidence that data submitted to ChatGPT or any other generative AI system might be stored and shared with other people, the risk still exists as new and less tested software often has security gaps.” 

OpenAI, the organisation behind ChatGPT, has been cautious in providing detailed information on how user data is handled. This poses challenges for organisations seeking to mitigate the risk of confidential code fragments being leaked. Constant monitoring of employee activities and implementing alerts for the use of generative AI platforms becomes necessary, which can be burdensome for many organisations.
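
Short of blanket surveillance, some organisations instead screen outbound prompts for obvious secrets before they ever reach an external service. The sketch below shows the idea with a few illustrative patterns; a real deployment would rely on proper data-loss-prevention tooling with a far richer policy:

```python
import re

# Illustrative patterns only; real DLP rule sets are far more extensive.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api key":       re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "private key":   re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Debug this: client email is jane@example.com, key sk-abcdefghijklmnop1234"
findings = screen_prompt(prompt)
if findings:
    print("Blocked - prompt appears to contain:", ", ".join(findings))
```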

“Further risks include using wrong or outdated information, especially in the case of junior specialists who are often unable to evaluate the quality of the AI’s output. Most generative models function on large but limited datasets that need constant updating,” adds Šedys.

These models have a limited context window and may encounter difficulties when dealing with new information. OpenAI has acknowledged that its latest model, GPT-4, still suffers from factual inaccuracies, which can lead to the dissemination of misinformation.

The implications extend beyond individual companies. For example, Stack Overflow – a popular developer community – has temporarily banned the use of content generated with ChatGPT due to low precision rates, which can mislead users seeking coding answers.

Legal risks also come into play when utilising free generative AI solutions. GitHub’s Copilot has already faced accusations and lawsuits for incorporating copyrighted code fragments from public and open-source repositories.

“As AI-generated code can contain proprietary information or trade secrets belonging to another company or person, the company whose developers are using such code might be liable for infringement of third-party rights,” explains Šedys.

“Moreover, failure to comply with copyright laws might affect company evaluation by investors if discovered.”

While organisations cannot feasibly achieve total workplace surveillance, individual awareness and responsibility are crucial. Educating the general public about the potential risks associated with generative AI solutions is essential.

Industry leaders, organisations, and individuals must collaborate to address the data privacy, accuracy, and legal risks of generative AI in the workplace.

(Photo by Sean Pollock on Unsplash)

See also: Universities want to ensure staff and students are ‘AI-literate’

Stephen Almond, ICO: Prioritise privacy when adopting generative AI

The Information Commissioner’s Office (ICO) is urging businesses to prioritise privacy considerations when adopting generative AI technology.

According to new research, generative AI has the potential to become a £1 trillion market within the next ten years, offering significant benefits to both businesses and society. However, the ICO emphasises the need for organisations to be aware of the associated privacy risks.

Stephen Almond, the Executive Director of Regulatory Risk at the ICO, highlighted the importance of recognising the opportunities presented by generative AI while also understanding the potential risks.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” says Almond.

“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators.”

Generative AI works by generating content based on extensive data collection from publicly accessible sources, including personal information. Existing laws already safeguard individuals’ rights, including privacy, and these regulations extend to emerging technologies such as generative AI.

In April, the ICO outlined eight key questions that organisations using or developing generative AI that processes personal data should be asking themselves. The regulatory body is committed to taking action against organisations that fail to comply with data protection laws.

Almond reaffirms the ICO’s stance, stating that they will assess whether businesses have effectively addressed privacy risks before implementing generative AI, and will take action if there is a potential for harm resulting from the misuse of personal data. He emphasises that businesses must not overlook the risks to individuals’ rights and freedoms during the rollout of generative AI.

“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is a risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout,” explains Almond.

“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

The ICO is committed to supporting UK businesses in their development and adoption of new technologies that prioritise privacy.

The recently updated Guidance on AI and Data Protection serves as a comprehensive resource for developers and users of generative AI, providing a roadmap for data protection compliance. Additionally, the ICO offers a risk toolkit to assist organisations in identifying and mitigating data protection risks associated with generative AI.

For innovators facing novel data protection challenges, the ICO provides advice through its Regulatory Sandbox and Innovation Advice service. To enhance their support, the ICO is piloting a Multi-Agency Advice Service in collaboration with the Digital Regulation Cooperation Forum, aiming to provide comprehensive guidance from multiple regulatory bodies to digital innovators.

While generative AI offers tremendous opportunities for businesses, the ICO emphasises the need to address privacy risks before widespread adoption. By understanding the implications, mitigating risks, and complying with data protection laws, organisations can ensure the responsible and ethical implementation of generative AI technologies.

(Image Credit: ICO)

Related: UK will host global AI summit to address potential risks
