Latest AI Legislation & Government News | AI News
https://www.artificialintelligence-news.com/categories/ai-legislation-government/

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order marks a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK paper highlights AI risks ahead of global Safety Summit
https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/
Thu, 26 Oct 2023

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns of the likelihood of AI-driven cyber-attacks becoming faster-paced, more effective, and larger in scale by 2025. AI could aid hackers in mimicking official language and overcoming previous challenges faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1-2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

BSI: Closing ‘AI confidence gap’ key to unlocking benefits
https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/
Tue, 17 Oct 2023

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49 percent) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education: 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment, while 37 percent expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

“Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

“Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Sixty percent of respondents believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

“Closing the AI confidence gap is the first necessary step; it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

UK reveals AI Safety Summit opening day agenda
https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/
Mon, 16 Oct 2023

The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park.

The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.

The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.

Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The event will feature parallel sessions in the first half of the day, delving into understanding frontier AI risks.

Other topics that will be covered during the AI Safety Summit include threats to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.

The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.

International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.

The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.

This event aims to mark a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK races to agree statement on AI risks with global leaders

UK races to agree statement on AI risks with global leaders
https://www.artificialintelligence-news.com/2023/10/10/uk-races-agree-statement-ai-risks-global-leaders/
Tue, 10 Oct 2023

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders addressing escalating concerns surrounding artificial intelligence.

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit, designed to provide an update on White House-brokered safety guidelines – as well as facilitate a debate on how national security agencies can scrutinise the most dangerous versions of this technology – faces a potential hurdle. It’s unlikely to generate an agreement on establishing a new international organisation to scrutinise cutting-edge AI, apart from its proposed communique.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

“While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with President Macron’s commitment, who announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

UK deputy PM warns UN that AI regulation is falling behind advances
https://www.artificialintelligence-news.com/2023/09/22/uk-deputy-pm-warns-un-ai-regulation-falling-behind-advances/
Fri, 22 Sep 2023

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

One of the primary fears surrounding unchecked AI development is the potential for widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 

IFOW: AI can have a positive impact on jobs https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/ Wed, 20 Sep 2023 12:15:37 +0000

The post IFOW: AI can have a positive impact on jobs appeared first on AI News.

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by Mitre-Harris found that a majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

CMA sets out principles for responsible AI development https://www.artificialintelligence-news.com/2023/09/19/cma-sets-principles-responsible-ai-development/ Tue, 19 Sep 2023 10:41:38 +0000

The post CMA sets out principles for responsible AI development  appeared first on AI News.

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The CMA’s report, published today, outlines a set of guiding principles aimed at safeguarding consumer protection and fostering healthy competition within this burgeoning industry.

Foundation models – known for their adaptability to diverse applications – have witnessed rapid adoption across various user platforms, including familiar names like ChatGPT and Office 365 Copilot. These AI systems possess the power to drive innovation and stimulate economic growth, promising transformative changes across sectors and industries.

Sarah Cardell, CEO of the CMA, emphasised the urgency of proactive intervention in the AI sector:

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted.

That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers.

While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Research from Earlybird reveals that Britain is home to the largest number of AI startups in Europe. The CMA’s report underscores the immense benefits that can accrue if the development and use of FMs are managed effectively.

These advantages include the emergence of superior products and services, enhanced access to information, breakthroughs in science and healthcare, and even lower prices for consumers. Additionally, a vibrant FM market could open doors for a wider range of businesses to compete successfully, challenging established market leaders. This competition and innovation, in turn, could boost the overall economy, fostering increased productivity and economic growth.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“With the [UK-hosted] global AI Safety Summit around the corner, the announcement of these principles shows the public and investors that the UK is committed to regulating AI safely. To continue this momentum, it’s important for the UK to strike a balance in creating effective regulation without stifling growing innovation and investment. 

Ensuring that regulation is both well-designed and effective will help attract and maintain investment in the UK by creating a stable, secure, and trustworthy business environment that appeals to domestic and international investors.” 

The CMA’s report also sounds a cautionary note. It highlights the potential risks if competition remains weak or if developers neglect consumer protection regulations. Such lapses could expose individuals and businesses to significant levels of false information and AI-driven fraud. In the long run, a handful of dominant firms might exploit FMs to consolidate market power, offering subpar products or services at exorbitant prices.

While the scope of the CMA’s initial review focused primarily on competition and consumer protection concerns, it acknowledges that other important questions related to FMs, such as copyright, intellectual property, online safety, data protection, and security, warrant further examination.

Sridhar Iyengar, Managing Director of Zoho Europe, commented:

“The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.

Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools.

AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”

The proposed guiding principles, unveiled by the CMA, aim to steer the ongoing development and use of FMs, ensuring that people, businesses, and the economy reap the full benefits of innovation and growth. Drawing inspiration from the evolution of other technology markets, these principles seek to guide FM developers and deployers in the following key areas:

  • Accountability: Developers and deployers are accountable for the outputs provided to consumers.
  • Access: Ensuring ongoing access to essential inputs without unnecessary restrictions.
  • Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches.
  • Choice: Providing businesses with sufficient choices to determine how to utilise FMs effectively.
  • Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed.
  • Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling.
  • Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

Over the next few months, the CMA plans to engage extensively with a diverse range of stakeholders both within the UK and internationally to further develop these principles. This collaborative effort aims to support the positive growth of FM markets, fostering effective competition and consumer protection.

Gareth Mills, Partner at law firm Charles Russell Speechlys, said:

“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

The report itself notes that, although the CMA has established a number of core principles, there is still work to do and that stakeholder feedback – both within the UK and internationally – will be required before a formal policy and regulatory position can be definitively established.

As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

An update on the CMA’s progress and the reception of these principles will be published in early 2024, reflecting the authority’s commitment to shaping AI markets in ways that benefit people, businesses, and the UK economy as a whole.

(Photo by JESHOOTS.COM on Unsplash)

See also: UK to pitch AI’s potential for international development at UN

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Is Europe killing itself financially with the AI Act? https://www.artificialintelligence-news.com/2023/09/18/is-europe-killing-itself-financially-with-ai-act/ Mon, 18 Sep 2023 15:59:15 +0000

The post Is Europe killing itself financially with the AI Act? appeared first on AI News.

Europe is putting the finishing touches on legislation to regulate artificial intelligence. European regulators are pleased with their progress, but what does the rest of the world make of the AI Act?

Now that the outlines of the AI Act are known, a debate is erupting over its possible implications. One camp believes regulation is needed to curb the risks of powerful AI technology, while the other is convinced that regulation will prove damaging to the European economy. Is it really impossible for safe AI products to also bring economic prosperity?

‘Industrial revolution’ without Europe

The EU “prevents the industrial revolution from happening” and portrays itself as “no part of the future world,” Joe Lonsdale told Bloomberg. He regularly appears in US media as an outspoken advocate of the technology. According to him, AI has the potential to spark a third industrial revolution, and every company should already be implementing it in its organisation.

Lonsdale earned a bachelor’s degree in computer science in 2003 and has since co-founded several technology companies, including some that deploy artificial intelligence, going on to become a businessman and venture capitalist.

The question is whether such concerns are well-founded. At the very least, caution seems necessary to avoid major AI products disappearing from Europe. Sam Altman, the better-known CEO of OpenAI, has previously warned that AI companies could leave Europe if the rules become too hard to comply with. He does not plan to pull ChatGPT out of Europe because of the AI Act, but he cautions that other companies may act differently.

ChatGPT stays

Altman is, in fact, a strong supporter of safety legislation for AI. He advocates clear safety requirements that AI developers must meet before the official release of a new product.

When a major player in the AI field calls for regulation of the very technology he works with, perhaps Europe should listen. That is what is happening with the AI Act, through which the EU is trying to be the first in the world to establish a set of rules for artificial intelligence. The EU is a pioneer, but it will also have to discover the pitfalls of such a policy without a working example anywhere in the world.

Until they officially come into effect in 2025, the rules will be continuously tested by experts who publicly give their opinions on the law. It is a public testing period that, according to Altman, AI developers should also value. The European Union is likewise avoiding imposing top-down rules on a field it does not know intimately: the legislation will be shaped bottom-up, with companies and developers already actively engaged in AI helping to set the standards.

Others taking notes

Although the EU often proclaims that the AI Act will be the world’s first regulation of artificial intelligence, other jurisdictions are working on legal frameworks of their own. The United Kingdom, for example, is eager to embrace the technology but also wants assurances about its safety. To that end, it is immersing itself in the technology and has gained early access to models from DeepMind, OpenAI, and Anthropic for research purposes.

However, Britain has no plans to punish companies that do not comply. The country limits itself to a framework of five principles that AI systems should adhere to. That choice appears to come at the expense of guaranteed safety: the UK argues that forgoing a mandatory regulatory framework is necessary to attract investment from AI companies. In the country’s view, then, safe AI products and economic prosperity do not fit well together. It remains to be seen whether Europe’s AI Act proves otherwise.

(Editor’s note: This article first appeared on Techzine)

UK to pitch AI’s potential for international development at UN https://www.artificialintelligence-news.com/2023/09/18/uk-pitch-ai-potential-international-development-un/ Mon, 18 Sep 2023 09:28:39 +0000

The post UK to pitch AI’s potential for international development at UN appeared first on AI News.

The UK is pitching its vision for leveraging AI’s potential to accelerate development in the world’s most impoverished nations during the UN General Assembly (UNGA).

The vision was set out by UK Foreign Secretary James Cleverly and calls upon international partners to collaborate and coordinate their efforts in harnessing AI for development in Africa and making progress towards the UN’s Sustainable Development Goals.

As part of its efforts, the UK is launching the ‘AI for Development’ programme in partnership with Canada’s International Development Research Centre (IDRC). The primary focus of this initiative is to assist developing countries, primarily in Africa, in building local AI capabilities and fostering innovation.

The announcement coincides with the UK’s co-convening of an event on AI on the margins of the UN General Assembly. This high-level session – chaired by US Secretary of State Antony Blinken – will assemble governments, tech companies, and non-governmental organisations (NGOs) to explore how AI can expedite progress towards the Sustainable Development Goals. These goals aim to create a healthier, fairer, and more prosperous world by 2030.

In parallel with these efforts, the UK is committing £1 million in investment towards a pioneering fund known as the Complex Risk Analytics Fund (‘CRAF’d’). This fund, in collaboration with international partners, will harness the power of AI to prevent crises before they occur. Additionally, it will provide assistance during emergencies and support countries in their recovery towards sustainable development.

Foreign Secretary James Cleverly said:

“The opportunity of AI is immense. It has already been shown to speed up drug discovery, help develop new treatments for common diseases, and predict food insecurity—to name only a few uses.

The UK, alongside our allies and partners, is making sure that the fulfilment of this enormous potential is shared globally.

As AI continues to rapidly evolve, we need a global approach that seizes the opportunities that AI can bring to solving humanity’s shared challenges. The UK-hosted AI summit this November will be key to helping us achieve this.”

Julie Delahanty, President of the International Development Research Centre (IDRC), expressed her satisfaction with the collaboration between IDRC and the UK Foreign, Commonwealth & Development Office (FCDO).

“IDRC is pleased to announce a new collaboration with FCDO, a key ally in tackling the most pressing development challenges,” said Delahanty.

“The AI for Development program will build on existing partnerships, leveraging AI’s capacity to reduce inequalities, address poverty, improve food systems, confront the challenges of climate change and make education more inclusive, while also mitigating risks.”

This announcement underscores the broader commitment of the UK to employ AI innovation to tackle global challenges, including the pursuit of the Sustainable Development Goals.

In a separate event, scheduled for 1-2 November 2023, the UK will host the world’s first major AI Safety Summit at the historic Bletchley Park in Buckinghamshire. This summit aims to garner international consensus on the urgent need for safety measures in cutting-edge AI technology.

See also: White House secures safety commitments from eight more AI companies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
