development Archives - AI News

Biden issues executive order to ensure responsible AI development

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the U.S. government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order marks a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Nightshade ‘poisons’ AI models to fight copyright theft

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog—demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
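
Nightshade’s exact algorithm has not yet been published (the work is awaiting peer review), but the broad family it belongs to, perturbation-based data poisoning, can be sketched. The toy example below is a minimal, hypothetical illustration of that general idea rather than Nightshade itself: it uses PyTorch with a stand-in linear encoder in place of a real image model, and every name and parameter is invented for the sketch. It nudges a “dog” image within a tiny pixel budget until its embedding lands near that of a “cat”, the sort of feature-space mismatch that can corrupt a model trained on the result.

```python
# Conceptual sketch of perturbation-based data poisoning
# (NOT Nightshade's actual method, which is unpublished).
# Idea: shift an image's pixels within a small budget, keeping it
# visually unchanged, so an encoder embeds it near another concept.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in encoder for illustration only; a real attack would use
# (a proxy for) the target model's image encoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
encoder.requires_grad_(False)

dog_image = torch.rand(1, 3, 32, 32)   # what humans see: a dog
cat_anchor = torch.rand(1, 3, 32, 32)  # reference for the target concept
target_embedding = encoder(cat_anchor)

delta = torch.zeros_like(dog_image, requires_grad=True)
epsilon = 8 / 255  # max per-pixel change (imperceptibility budget)
optimizer = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    optimizer.zero_grad()
    poisoned = (dog_image + delta).clamp(0.0, 1.0)
    # Pull the poisoned image's embedding toward the "cat" embedding.
    loss = nn.functional.mse_loss(encoder(poisoned), target_embedding)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)  # enforce the pixel budget

print(f"final poisoning loss (embedding MSE): {loss.item():.5f}")
print(f"max pixel change: {delta.abs().max().item():.5f}")  # <= epsilon
```

The epsilon budget is what keeps the alteration imperceptible to the human eye while the optimisation moves the image in feature space; conceptually, scaling this up to around a hundred coordinated samples is how a small amount of poisoned data can skew a model’s association for a concept like “dog”.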

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If such images find their way into existing AI training datasets, they must be identified and removed, and the affected models potentially retrained, posing a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

UK deputy PM warns UN that AI regulation is falling behind advances

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

Primary fears surrounding unchecked AI development include widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 

CMA sets out principles for responsible AI development

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The CMA’s report, published today, outlines a set of guiding principles aimed at safeguarding consumer protection and fostering healthy competition within this burgeoning industry.

Foundation models – known for their adaptability to diverse applications – have witnessed rapid adoption across various user platforms, including familiar names like ChatGPT and Office 365 Copilot. These AI systems possess the power to drive innovation and stimulate economic growth, promising transformative changes across sectors and industries.

Sarah Cardell, CEO of the CMA, emphasised the urgency of proactive intervention in the AI sector:

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted.

That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers.

While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Research from Earlybird reveals that Britain houses the largest number of AI startups in Europe. The CMA’s report underscores the immense benefits that can accrue if the development and use of FMs are managed effectively.

These advantages include the emergence of superior products and services, enhanced access to information, breakthroughs in science and healthcare, and even lower prices for consumers. Additionally, a vibrant FM market could open doors for a wider range of businesses to compete successfully, challenging established market leaders. This competition and innovation, in turn, could boost the overall economy, fostering increased productivity and economic growth.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“With the [UK-hosted] global AI Safety Summit around the corner, the announcement of these principles shows the public and investors that the UK is committed to regulating AI safely. To continue this momentum, it’s important for the UK to strike a balance in creating effective regulation without stifling growing innovation and investment. 

Ensuring that regulation is both well-designed and effective will help attract and maintain investment in the UK by creating a stable, secure, and trustworthy business environment that appeals to domestic and international investors.” 

The CMA’s report also sounds a cautionary note. It highlights the potential risks if competition remains weak or if developers neglect consumer protection regulations. Such lapses could expose individuals and businesses to significant levels of false information and AI-driven fraud. In the long run, a handful of dominant firms might exploit FMs to consolidate market power, offering subpar products or services at exorbitant prices.

While the scope of the CMA’s initial review focused primarily on competition and consumer protection concerns, it acknowledges that other important questions related to FMs, such as copyright, intellectual property, online safety, data protection, and security, warrant further examination.

Sridhar Iyengar, Managing Director of Zoho Europe, commented:

“The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.

Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools.

AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”

The proposed guiding principles, unveiled by the CMA, aim to steer the ongoing development and use of FMs, ensuring that people, businesses, and the economy reap the full benefits of innovation and growth. Drawing inspiration from the evolution of other technology markets, these principles seek to guide FM developers and deployers in the following key areas:

  • Accountability: Developers and deployers are accountable for the outputs provided to consumers.
  • Access: Ensuring ongoing access to essential inputs without unnecessary restrictions.
  • Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches.
  • Choice: Providing businesses with sufficient choices to determine how to utilise FMs effectively.
  • Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed.
  • Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling.
  • Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

Over the next few months, the CMA plans to engage extensively with a diverse range of stakeholders both within the UK and internationally to further develop these principles. This collaborative effort aims to support the positive growth of FM markets, fostering effective competition and consumer protection.

Gareth Mills, Partner at law firm Charles Russell Speechlys, said:

“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

The report itself notes that, although the CMA has established a number of core principles, there is still work to do and that stakeholder feedback – both within the UK and internationally – will be required before a formal policy and regulatory position can be definitively established.

As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

An update on the CMA’s progress and the reception of these principles will be published in early 2024, reflecting the authority’s commitment to shaping AI markets in ways that benefit people, businesses, and the UK economy as a whole.

(Photo by JESHOOTS.COM on Unsplash)

See also: UK to pitch AI’s potential for international development at UN

UK government outlines AI Safety Summit plans

The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it’s committed to working closely with global partners to ensure that it remains safe and that its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI investment, deployment, and capabilities is staggering, with projections of up to $7 trillion in growth over the next decade alongside breakthroughs such as accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion—leading to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts infamously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

AI Act: The power of open-source in guiding regulations

As the EU debates the AI Act, lessons from open-source software can inform the regulatory approach to open ML systems.

The AI Act, set to be a global precedent, aims to address the risks associated with AI while encouraging the development of cutting-edge technology. One of the key aspects of this Act is its support for open-source, non-profit, and academic research and development in the AI ecosystem. Such support ensures the development of safe, transparent, and accountable AI systems that benefit all EU citizens.

Drawing from the success of open-source software development, policymakers can craft regulations that encourage open AI development while safeguarding user interests. By providing exemptions and proportional requirements for open ML systems, the EU can foster innovation and competition in the AI market while maintaining a thriving open-source ecosystem.

Representing both commercial and nonprofit stakeholders, several organisations – including GitHub, Hugging Face, EleutherAI, Creative Commons, and more – have banded together to release a policy paper calling on EU policymakers to protect open-source innovation.

The organisations have five proposals:

  1. Define AI components clearly: Clear definitions of AI components will help stakeholders understand their roles and responsibilities, facilitating collaboration and innovation in the open ecosystem.
  2. Clarify that collaborative development of open-source AI components is exempt from AI Act requirements: To encourage open-source development, the Act should clarify that contributors to public repositories are not subject to the same regulatory requirements as commercial entities.
  3. Support the AI Office’s coordination with the open-source ecosystem: The Act should encourage inclusive governance and collaboration between the AI Office and open-source developers to foster transparency and knowledge exchange.
  4. Ensure practical and effective R&D exception: Allow limited real-world testing in different conditions, combining aspects of the Council’s approach and the Parliament’s Article 2(5d), to facilitate research and development without compromising safety and accountability.
  5. Set proportional requirements for “foundation models”: Differentiate between various uses and development modalities of foundation models, including open source approaches, to ensure fair treatment and promote competition.

Open-source AI development offers several advantages, including transparency, inclusivity, and modularity. It allows stakeholders to collaborate and build on each other’s work, leading to more robust and diverse AI models. For instance, the EleutherAI community has become a leading open-source ML lab, releasing pre-trained models and code libraries that have enabled foundational research and reduced the barriers to developing large AI models.

Similarly, the BigScience project, which brought together over 1200 multidisciplinary researchers, highlights the importance of facilitating direct access to AI components across institutions and disciplines.

Such open collaborations have democratised access to large AI models, enabling researchers to fine-tune and adapt them to various languages and specific tasks—ultimately contributing to a more diverse and representative AI landscape.

Open research and development also promote transparency and accountability in AI systems. For example, LAION – a non-profit research organisation – released openCLIP models, which have been instrumental in identifying and addressing biases in AI applications. Open access to training data and model components has enabled researchers and the public to scrutinise the inner workings of AI systems and challenge misleading or erroneous claims.

The AI Act’s success depends on striking a balance between regulation and support for the open AI ecosystem. While openness and transparency are essential, regulation must also mitigate risks, ensure standards, and establish clear liability for AI systems’ potential harms.

As the EU sets the stage for regulating AI, embracing open source and open science will be critical to ensure that AI technology benefits all citizens.

By implementing the recommendations provided by organisations representing stakeholders in the open AI ecosystem, the AI Act can foster an environment of collaboration, transparency, and innovation, making Europe a leader in the responsible development and deployment of AI technologies.

(Photo by Nick Page on Unsplash)

AI think tank calls GPT-4 a risk to public safety

An AI think tank has filed a complaint with the FTC in a bid to stop OpenAI from further commercial deployments of GPT-4.

The Center for Artificial Intelligence and Digital Policy (CAIDP) claims OpenAI has violated section five of the FTC Act, accusing the company of deceptive and unfair practices.

Marc Rotenberg, Founder and President of the CAIDP, said:

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The CAIDP claims that OpenAI’s GPT-4 is “biased, deceptive, and a risk to privacy and public safety”.

The think tank cited content in the GPT-4 System Card describing the model’s potential to reinforce biases and worldviews, including harmful stereotypes and demeaning associations for certain marginalised groups.

In the aforementioned System Card, OpenAI acknowledges that it “found that the model has the potential to reinforce and reproduce specific biases and worldviews, including harmful stereotypical and demeaning associations for certain marginalized groups.”

Furthermore, the document states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”

Other harmful outcomes that OpenAI says GPT-4 could lead to include:

  1. Advice or encouragement for self-harm behaviours
  2. Graphic material such as erotic or violent content
  3. Harassing, demeaning, and hateful content
  4. Content useful for planning attacks or violence
  5. Instructions for finding illegal content

The CAIDP claims that OpenAI released GPT-4 to the public without an independent assessment of its risks.

Last week, the FTC told American companies advertising AI products:

“Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors.

Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.”

With its filing, the CAIDP calls on the FTC to investigate the products of OpenAI and other operators of powerful AI systems, prevent further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.

Merve Hickok, Chair and Research Director of the CAIDP, commented:

“We are at a critical moment in the evolution of AI products.

We recognise the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.

The FTC is uniquely positioned to address this challenge.”

The complaint was filed as Elon Musk, Steve Wozniak, and other AI experts signed a petition to “pause” development on AI systems more powerful than GPT-4.

However, other high-profile figures believe progress shouldn’t be slowed or halted.

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018 and has publicly questioned the company’s transformation.

Global approaches to AI regulation

As AI systems become more advanced and powerful, concerns over their potential risks and biases have grown. Organisations such as CAIDP, UNESCO, and the Future of Life Institute are pushing for ethical guidelines and regulations to be put in place to protect the public and ensure the responsible development of AI technology.

UNESCO (United Nations Educational, Scientific, and Cultural Organization) has called on countries to implement its “Recommendation on the Ethics of AI” framework.

Earlier today, Italy banned ChatGPT. The country’s data protection authority said the system would be investigated, arguing that it lacks a proper legal basis for collecting personal information about the people using it.

The wider EU is establishing a strict regulatory environment for AI, in contrast to the UK’s relatively “light-touch” approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s vision:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As always, it’s a balancing act between regulation and innovation. Not enough regulation puts the public at risk while too much risks driving innovation elsewhere.

(Photo by Ben Sweet on Unsplash)

Related: What will AI regulation look like for businesses?

UK details ‘pro-innovation’ approach to AI regulation

The UK government has unveiled a new regulatory framework for AI, aimed at promoting innovation while maintaining public trust.

Michelle Donelan, Science, Innovation, and Technology Secretary, said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”

The framework, set out in the AI regulation white paper, is based on these five principles:

  • Safety – Ensuring that applications function in a secure, safe, and robust manner.
  • Transparency and explainability – Organisations that deploy AI should communicate when and how it’s used. Furthermore, they should be able to explain a system’s decision-making process.
  • Fairness – Ensure compatibility with the UK’s existing laws, including the Equality Act 2010 and UK GDPR.
  • Accountability and governance – Introducing measures to ensure appropriate oversight of AI.
  • Contestability and redress – Ensure that people have clear routes to dispute outcomes or decisions generated by AI.

The principles will be applied by existing regulators in their sectors rather than through the creation of a single new regulator. The government has allocated £2m ($2.7m) to fund an AI sandbox, where businesses can test AI products and services.

Over the next year, regulators will issue guidance to organisations and other resources to implement the principles. Legislation could also be introduced to ensure the principles are considered consistently.

A consultation has also been launched by the government on new processes to improve coordination between regulators and to evaluate the effectiveness of the framework.

Emma Wright, Head of Technology, Data, and Digital at law firm Harbottle & Lewis, commented:

“I do welcome industry-specific regulation rather than primary legislation covering AI (such as the EU is proposing). However, I am concerned that this is essentially another consultation paper calling for regulators to produce more guidance when entrepreneurs and investors are looking for greater regulatory certainty.

The use of AI is becoming mainstream with the arrival of ChatGPT and not enough attention has been given to the need for capacity building within the existing regulators who will now be tasked with driving responsible innovation whilst not stifling investment. 

Building trustworthy AI will be the key to greater adoption and setting basic frameworks for entrepreneurs and investors to operate is not at odds with this. Although regulatory sandboxes have been successfully used in the past in other tech verticals, such as fintech, the issue is that lots of the AI tools currently being released have unintended consequences when made available for general use – it seems hard to see how a true sandbox environment will be able to replicate such scenarios and risks damaging any trust users place in an AI tool that has been sandboxed but produces discriminatory results or output.

It is possible to have a pro-innovation approach while setting basic frameworks to be followed such as the UNESCO Recommendation on Ethical AI (that the UK is a signatory to) and it feels like a little bit of a missed opportunity to have missed aligning a pro-innovation environment with what responsible AI use means today rather than at some point in the future.”

The UK’s AI industry currently employs over 50,000 people and contributed £3.7bn to the economy in 2022. Britain is home to twice as many companies offering AI services and products as any other European country, with hundreds of new firms created each year.

Behind the US and China, the UK’s tech sector overall has the third-highest amount of VC investment in the world – more than Germany and France combined – and has produced more than twice as many $1 billion tech firms as any other European country.

However, concerns have been raised that AI could pose risks to privacy, human rights, and safety, as well as the fairness of using AI tools to make decisions that affect people’s lives, such as assessing loan or mortgage applications.

The proposals in the white paper aim to address these concerns and have been warmly welcomed by businesses, which previously called for more coordination between regulators to ensure effective implementation across the economy.

Lila Ibrahim, COO at DeepMind, commented: “AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.

“The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation, and mitigate future risks.”

Grazia Vittadini, CTO at Rolls-Royce, added: “Both our business and our customers will benefit from agile, context-driven AI regulation.

“It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility, and trust that society demands from AI developers.”

The new framework aims to provide protections for the public without stifling the use of AI in developing the economy, better jobs, and new discoveries.

Separately, an open letter posted today – signed by Elon Musk, Steve Wozniak, and over 1,000 other experts – called for a halt to “out-of-control” AI development.

You can find a full copy of the UK’s AI regulation whitepaper here.

(Photo by Steve Harvey on Unsplash)

Related: Editorial: UK puts AI at the centre of its Budget

GitHub CEO: The EU ‘will define how the world regulates AI’

GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China, where regulations are less strict. Europe would then find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

In expanding on his belief that open-source developers should be exempt, Dohmke explains that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI

Google to speed up AI releases in response to ChatGPT

Google is reportedly set to speed up its release of AI solutions in response to the launch of ChatGPT.

The New York Times claims ChatGPT set off alarm bells at Google. At the invitation of Google CEO Sundar Pichai, the company’s founders – Larry Page and Sergey Brin – returned for a series of meetings to review Google’s AI product strategy.

Google is one of the biggest investors in AI and has some of the most talented minds in the industry. As a result, the company is scrutinised more than most when it comes to any AI developments.

In 2020, leading AI ethics researcher Timnit Gebru was fired by Google. Gebru claims she was fired over an unpublished paper and an email she sent criticising the company’s practices. Numerous other AI experts at Google left following her firing.

Just two years earlier, over 4,000 Googlers signed a petition demanding that Google cease its plans to develop AI for the US military. Google withdrew from the contract but not before at least a dozen employees resigned.

With the company in the spotlight, Google has allegedly been ultra-cautious in how it develops and deploys AI.

According to a CNBC report, Pichai and Google AI Chief Jeff Dean were asked in a meeting whether ChatGPT represented a “missed opportunity” for the company. Pichai and Dean said that Google’s own models were just as capable but the company had to move “more conservatively than a small startup” because of the “reputational risk” it poses.

Microsoft has invested so heavily in OpenAI that it’s hard to consider the company a small startup anymore. The two companies have established a deep partnership and Microsoft has begun integrating OpenAI’s technologies into its own products.

Earlier this month, AI News reported that Microsoft and OpenAI are set to integrate technology from OpenAI in Bing to challenge Google’s search dominance. That appears to have been what really set off the alarm bells at Google.

Google now appears to be speeding up the reveal and deployment of its own AI solutions. To that end, the company is reportedly working to speed up the review process that checks whether its AI operates ethically.

One of the first AI solutions set to debut sounds very similar to what Microsoft and OpenAI have planned for Bing.

A demo of a chatbot-enhanced Google Search is expected at the company’s annual I/O developer conference in May. The demo will prioritise “getting facts right, ensuring safety and getting rid of misinformation.”

Other AI-powered product launches expected to be shown include an image generator, a set of tools for enterprises to develop their own AI prototypes within a browser window, and an app for testing such prototypes.

Google is also said to be working on a rival to GitHub Copilot, a coding assistant powered by OpenAI’s technology. Google’s alternative is called PaLM-Coder 2 and will have a version for building smartphone apps called Colab that will be integrated into Android Studio.

Overall, Google is set to unveil more than 20 AI-powered projects this year. The announcements should calm investors who’ve criticised Google’s slow pace of AI releases in recent years, but ethicists will be concerned about the company prioritising speed over safety.

(Photo by Mitchell Luo on Unsplash)

Relevant: OpenAI CEO: People are ‘begging to be disappointed’ about GPT-4
