regulation Archives - AI News https://www.artificialintelligence-news.com/tag/regulation/ Mon, 16 Oct 2023 15:02:03 +0000

UK reveals AI Safety Summit opening day agenda https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/ Mon, 16 Oct 2023 15:02:01 +0000

The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park.

The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.

The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.

Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The event will feature parallel sessions in the first half of the day, delving into understanding frontier AI risks.

Other topics that will be covered during the AI Safety Summit include threats to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.

The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.

International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.

The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.

This event aims to mark a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK races to agree statement on AI risks with global leaders

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK races to agree statement on AI risks with global leaders https://www.artificialintelligence-news.com/2023/10/10/uk-races-agree-statement-ai-risks-global-leaders/ Tue, 10 Oct 2023 13:40:33 +0000

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders concerning the escalating concerns surrounding artificial intelligence. 

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit, designed to provide an update on White House-brokered safety guidelines – as well as facilitate a debate on how national security agencies can scrutinise the most dangerous versions of this technology – faces a potential hurdle: beyond its proposed communique, it is unlikely to produce an agreement on establishing a new international organisation to scrutinise cutting-edge AI.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

“While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with President Macron’s commitment, who announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime


UK deputy PM warns UN that AI regulation is falling behind advances https://www.artificialintelligence-news.com/2023/09/22/uk-deputy-pm-warns-un-ai-regulation-falling-behind-advances/ Fri, 22 Sep 2023 09:24:44 +0000

In a stark address to the UN, UK Deputy PM Oliver Dowden has sounded the alarm on the potentially destabilising impact of AI on the world order.

Dowden has urged governments to take immediate action to regulate AI development, warning that the rapid pace of advancement in AI technology could outstrip their ability to ensure its safe and responsible use.

Speaking at the UN General Assembly in New York, Dowden highlighted that the UK will host a global summit in November to discuss the regulation of AI. The summit aims to bring together international leaders, experts, and industry representatives to address the pressing concerns surrounding AI.

One of the primary fears surrounding unchecked AI development is the potential for widespread job displacement, the proliferation of misinformation, and the deepening of societal discrimination. Without adequate regulations in place, AI technologies could be harnessed to magnify these negative effects.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden cautioned during his address.

Dowden went on to note that the current state of global regulation lags behind the rapid advances in AI technology. Unlike the past, where regulations followed technological developments, Dowden stressed that rules must now be established in tandem with AI’s evolution.

Oseloka Obiora, CTO at RiverSafe, said: “Business leaders are jumping into bed with the latest AI trends at an alarming rate, with little or no concern for the consequences.

“With global regulatory standards falling way behind and the most basic cyber security checks being neglected, it is right for the government to call for new global standards to prevent the AI ticking timebomb from exploding.”

Dowden underscored the importance of ensuring that AI companies do not have undue influence over the regulatory process. He emphasised the need for transparency and oversight, stating that AI companies should not “mark their own homework.” Instead, governments and citizens should have confidence that risks associated with AI are being properly mitigated.

Moreover, Dowden highlighted that only coordinated action by nation-states could provide the necessary assurance to the public that significant national security concerns stemming from AI have been adequately addressed.

He also cautioned against oversimplifying the role of AI—noting that it can be both a tool for good and a tool for ill, depending on its application. During the UN General Assembly, the UK also pitched AI’s potential to accelerate development in the world’s most impoverished nations.

The UK’s initiative to host a global AI regulation summit signals a growing recognition among world leaders of the urgent need to establish a robust framework for AI governance. As AI technology continues to advance, governments are under increasing pressure to strike the right balance between innovation and safeguarding against potential risks.

Jake Moore, Global Cybersecurity Expert at ESET, comments: “The fear that AI could shape our lives in a completely new direction is not without substance, as the power behind the technology churning this wheel is potentially destructive. Not only could AI change jobs, it also has the ability to change what we know to be true and impact what we believe.   

“Regulating it would mean potentially stifling innovation. But even attempting to regulate such a powerful beast would be like trying to regulate the dark web, something that is virtually impossible. Large datasets and algorithms can be designed to do almost anything, so we need to start looking at how we can improve educating people, especially young people in schools, into understanding this new wave of risk.”

Dowden’s warning to the United Nations serves as a clarion call for nations to come together and address the challenges posed by AI head-on. The global summit in November will be a critical step in shaping the future of AI governance and ensuring that the world order remains stable in the face of unprecedented technological change.

(Image Credit: UK Government under CC BY 2.0 license)

See also: CMA sets out principles for responsible AI development 


IFOW: AI can have a positive impact on jobs https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/ Wed, 20 Sep 2023 12:15:37 +0000

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by Mitre-Harris found that a majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 


CMA sets out principles for responsible AI development https://www.artificialintelligence-news.com/2023/09/19/cma-sets-principles-responsible-ai-development/ Tue, 19 Sep 2023 10:41:38 +0000

The Competition and Markets Authority (CMA) has set out its principles to ensure the responsible development and use of foundation models (FMs).

FMs are versatile AI systems with the potential to revolutionise various sectors, from information access to healthcare. The CMA’s report, published today, outlines a set of guiding principles aimed at safeguarding consumer protection and fostering healthy competition within this burgeoning industry.

Foundation models – known for their adaptability to diverse applications – have witnessed rapid adoption across various user platforms, including familiar names like ChatGPT and Office 365 Copilot. These AI systems possess the power to drive innovation and stimulate economic growth, promising transformative changes across sectors and industries.

Sarah Cardell, CEO of the CMA, emphasised the urgency of proactive intervention in the AI sector:

“The speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted.

“That’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers.

“While I hope that our collaborative approach will help realise the maximum potential of this new technology, we are ready to intervene where necessary.”

Research from Earlybird reveals that Britain houses the largest number of AI startups in Europe. The CMA’s report underscores the immense benefits that can accrue if the development and use of FMs are managed effectively.

These advantages include the emergence of superior products and services, enhanced access to information, breakthroughs in science and healthcare, and even lower prices for consumers. Additionally, a vibrant FM market could open doors for a wider range of businesses to compete successfully, challenging established market leaders. This competition and innovation, in turn, could boost the overall economy, fostering increased productivity and economic growth.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“With the [UK-hosted] global AI Safety Summit around the corner, the announcement of these principles shows the public and investors that the UK is committed to regulating AI safely. To continue this momentum, it’s important for the UK to strike a balance in creating effective regulation without stifling growing innovation and investment. 

“Ensuring that regulation is both well-designed and effective will help attract and maintain investment in the UK by creating a stable, secure, and trustworthy business environment that appeals to domestic and international investors.”

The CMA’s report also sounds a cautionary note. It highlights the potential risks if competition remains weak or if developers neglect consumer protection regulations. Such lapses could expose individuals and businesses to significant levels of false information and AI-driven fraud. In the long run, a handful of dominant firms might exploit FMs to consolidate market power, offering subpar products or services at exorbitant prices.

While the scope of the CMA’s initial review focused primarily on competition and consumer protection concerns, it acknowledges that other important questions related to FMs, such as copyright, intellectual property, online safety, data protection, and security, warrant further examination.

Sridhar Iyengar, Managing Director of Zoho Europe, commented:

“The safe development of AI has been a central focus of UK policy and will continue to play a significant role in the UK’s ambitions of leading the global AI race. While there is public concern over the trustworthiness of AI, we shouldn’t lose sight of the business benefits that it provides, such as forecasting and improved data analysis, and work towards a solution.

“Collaboration between businesses, government, academia and industry experts is crucial to strike a balance between safe regulations and guidance that can lead to the positive development and use of innovative business AI tools.

“AI is going to move forward with or without the UK, so it’s best to take the lead on research and development to ensure its safe evolution.”

The proposed guiding principles, unveiled by the CMA, aim to steer the ongoing development and use of FMs, ensuring that people, businesses, and the economy reap the full benefits of innovation and growth. Drawing inspiration from the evolution of other technology markets, these principles seek to guide FM developers and deployers in the following key areas:

  • Accountability: Developers and deployers are accountable for the outputs provided to consumers.
  • Access: Ensuring ongoing access to essential inputs without unnecessary restrictions.
  • Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches.
  • Choice: Providing businesses with sufficient choices to determine how to utilise FMs effectively.
  • Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed.
  • Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling.
  • Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

Over the next few months, the CMA plans to engage extensively with a diverse range of stakeholders both within the UK and internationally to further develop these principles. This collaborative effort aims to support the positive growth of FM markets, fostering effective competition and consumer protection.

Gareth Mills, Partner at law firm Charles Russell Speechlys, said:

“The principles themselves are clearly aimed at facilitating a dynamic sector with low entry requirements that allows smaller players to compete effectively with more established names, whilst at the same time mitigating against the potential for AI technologies to have adverse consequences for consumers.

“The report itself notes that, although the CMA has established a number of core principles, there is still work to do and that stakeholder feedback – both within the UK and internationally – will be required before a formal policy and regulatory position can be definitively established.

“As the utilisation of the technologies grows, the extent to which there is any inconsistency between competition objectives and government strategy will be fleshed out.”

An update on the CMA’s progress and the reception of these principles will be published in early 2024, reflecting the authority’s commitment to shaping AI markets in ways that benefit people, businesses, and the UK economy as a whole.

(Photo by JESHOOTS.COM on Unsplash)

See also: UK to pitch AI’s potential for international development at UN


SEC turns its gaze from crypto to AI https://www.artificialintelligence-news.com/2023/08/04/sec-turns-gaze-from-crypto-to-ai/ Fri, 04 Aug 2023 10:33:47 +0000

US Securities and Exchange Commission (SEC) chairman Gary Gensler has announced a shift in focus from cryptocurrency to AI.

Gensler, who has been vocal about the risks and challenges posed by the cryptocurrency industry, now believes that AI is the technology that “warrants the hype” and deserves greater attention from regulators.

Gensler’s interest in AI dates back to 1997 when he became intrigued by the technology after witnessing Russian chess grandmaster Garry Kasparov’s infamous loss to IBM’s supercomputer, Deep Blue.

As an MIT professor, Gensler delved deeper into the study of AI, co-authoring a significant paper in 2020 that highlighted the risks posed by deep learning in the financial system.

His concern over the potential implications of mass automation using AI in the finance sector has led him to reevaluate regulatory approaches. Gensler believes that while AI can bring immense benefits to financial firms and their clients through enhanced predictive capabilities, it also carries significant risks that need to be addressed.

“Mass automation can have cascading implications for trillions of dollars in assets that trade on markets overseen by the SEC,” warns Gensler.

One of Gensler’s key concerns is the potential use of AI to obscure responsibility and accountability when things go wrong. Coordinating AI models among major trading houses could lead to increased market volatility and instability, a phenomenon that current regulatory regimes might not be equipped to manage.

As a result, Gensler has taken a proactive step by proposing one of the first regulatory frameworks for AI in the finance industry. His proposal requires trading houses and money managers to carefully evaluate their use of AI and predictive data to identify any conflicts of interest, especially when the interests of clients clash with company profits.

However, this shift in focus does not mean the SEC is easing its crackdown on cryptocurrencies.

Under Gensler’s leadership, the SEC has actively pursued legal action against major crypto firms like Ripple, Binance, and Coinbase. Several lawsuits are currently pending, signalling that the SEC remains committed to enforcing its actions against cryptocurrency companies that engage in scams and fraudulent activities.

Gensler’s emphasis on AI comes at a crucial time when the technology is making rapid strides in automating various financial processes.

While AI holds tremendous promise in revolutionising the industry, its unchecked growth could also lead to unforeseen challenges. By directing the SEC’s attention towards AI, Gensler aims to strike a balance between promoting innovation and safeguarding market integrity and investor interests.

(Photo by Petri Heiskanen on Unsplash)

See also: AI Act: The power of open-source in guiding regulations


European Parliament adopts AI Act position
https://www.artificialintelligence-news.com/2023/06/14/european-parliament-adopts-ai-act-position/
Wed, 14 Jun 2023 14:27:26 +0000
The European Parliament has taken a significant step towards the regulation of artificial intelligence by voting to adopt its position for the upcoming AI Act with an overwhelming majority. 

The act aims to regulate AI based on its potential to cause harm and follows a risk-based approach, prohibiting applications that pose an unacceptable risk while imposing strict regulations for high-risk use cases.

The timing of AI regulation has been a subject of debate, but Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, emphasised that it is the right time to regulate AI due to its profound impact.

Dr Ventsislav Ivanov, AI Expert and Lecturer at Oxford Business College, said: “Regulating artificial intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives.

“As the chaos and controversy accompanying this vote show, this will not be an easy feat. Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra.”

The adoption of the AI Act faced uncertainty as a political deal crumbled, leading to amendments from various political groups.

One of the main points of contention was the use of Remote Biometric Identification, with liberal and progressive lawmakers seeking to ban its real-time use except for ex-post investigations of serious crimes. The centre-right European People’s Party attempted to introduce exceptions for exceptional circumstances like terrorist attacks or missing persons, but their efforts were unsuccessful.

A tiered approach for AI models will be introduced with the act, including stricter regulations for foundation models and generative AI.

The European Parliament intends to introduce mandatory labelling for AI-generated content and mandate the disclosure of training data covered by copyright. This move comes as generative AI, exemplified by ChatGPT, gained widespread attention—prompting the European Commission to launch outreach initiatives to foster international alignment on AI rules.

MEPs made several significant changes to the AI Act, including expanding the list of prohibited practices to include subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software.

MEPs also introduced an extra layer for high-risk AI applications and extended the list of high-risk areas and use cases to cover law enforcement, migration control, and the recommender systems of prominent social media platforms.

Robin Röhm, CEO of Apheris, commented: “The passing of the plenary vote on the EU’s AI Act marks a significant milestone in AI regulation, but raises more questions than it answers. It will make it more difficult for start-ups to compete and means that investors are less likely to deploy capital into companies operating in the EU.

“It is critical that we allow for capital to flow to businesses, given the cost of building AI technology, but the risk-based approach to regulation proposed by the EU is likely to lead to a lot of extra burden for the European ecosystem and will make investing less attractive.”

With the European Parliament’s adoption of its position on the AI Act, interinstitutional negotiations will commence with the EU Council of Ministers and the European Commission. The negotiations – known as trilogues – will address key points of contention such as high-risk categories, fundamental rights, and foundation models.

Spain, which assumes the rotating presidency of the Council in July, has made finalising the AI law its top digital priority. The aim is to reach a deal by November, with multiple trilogues planned as a backup.

The negotiations are expected to intensify in the coming months as the EU seeks to establish comprehensive regulations for AI, balancing innovation and governance while ensuring the protection of fundamental rights.

“The key to good regulation is ensuring that safety concerns are addressed while not stifling innovation. It remains to be seen whether the EU can achieve this,” concludes Röhm.

(Image Credit: European Union 2023 / Mathieu Cugnot)

Similar: UK will host global AI summit to address potential risks


UK will host global AI summit to address potential risks
https://www.artificialintelligence-news.com/2023/06/08/uk-host-global-ai-summit-address-potential-risks/
Thu, 08 Jun 2023 12:53:16 +0000
The UK has announced that it will host a global summit this autumn to address the most significant risks associated with AI.

The decision comes after meetings between Prime Minister Rishi Sunak, US President Joe Biden, Congress, and business leaders.

“AI has an incredible potential to transform our lives for the better. But we need to make sure it is developed and used in a way that is safe and secure,” explained Sunak.

“No one country can do this alone. This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.”

The UK government believes the country is the natural place to lead these discussions, as it hosts Europe’s largest AI industry and is behind only the US and China on the world stage.

The AI industry in the UK employs over 50,000 people and contributes more than £3.7 billion to the country’s economy. US tech giant Palantir announced today it will make the UK its new European HQ for AI development.

“We are proud to extend our partnership with the United Kingdom, where we employ nearly a quarter of our global workforce,” said Alexander C. Karp, CEO of Palantir.

“London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical artificial intelligence software solutions available.”

The urgency to evaluate AI risks stems from increasing concerns about the potential existential threats posed by this technology. Earlier this week, an AI task force adviser to the UK prime minister issued a stark warning: AI will threaten humans in two years.

McKinsey, a global consulting firm, predicts that between 2016 and 2030, AI-related advancements could impact approximately 15 percent of the global workforce, potentially displacing 400 million workers worldwide. In response, global regulators are racing to establish new rules and regulations to mitigate these risks.

“The Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society, and we’re looking forward to working closely with the UK Government to help make these efforts a success,” said Demis Hassabis, CEO of UK-headquartered Google DeepMind.

The attendees of the upcoming summit have not been announced yet, but the UK government plans to bring together key countries, leading tech companies, and researchers to establish safety measures for AI.

Prime Minister Sunak aims to ensure that AI is developed and utilised in a manner that is safe and secure while maximising its potential to benefit humanity.

Sridhar Iyengar, MD of Zoho Europe, commented:

“Earlier this year, the whitepaper released in the UK highlighted the numerous advantages of artificial intelligence, emphasising its potential as a valuable tool for enhancing business operations.

With the government’s ongoing ambition to position the UK as a science and technology superpower by 2030, and coupled with Chancellor Jeremy Hunt reiterating his vision of making the UK the ‘next Silicon Valley’, the UK’s leading input here could be extremely helpful in achieving these goals.”

Iyengar emphasised the advantages of AI and its potential to enhance various aspects of business operations, from customer service to fraud detection, ultimately improving business efficiencies.

However, Iyengar stressed the need for a global regulatory framework supported by public trust to fully harness the power of AI and achieve optimal outcomes for all stakeholders.

The European Union is already working on an Artificial Intelligence Act, but it could take up to two-and-a-half years to come into effect. China, meanwhile, has also started drafting AI regulations, including proposals to require companies to notify users when an AI algorithm is being used.

These ongoing efforts highlight the global recognition of the need for comprehensive regulations and guidelines to manage AI’s impact effectively.

“To fully harness the power of AI and ensure optimal outcomes for all stakeholders, a global regulatory framework supported by public trust is essential,” added Iyengar.

“As AI becomes increasingly integrated into our daily lives, adopting a unified approach to regulations becomes crucial.”

The UK’s decision to host a global AI safety measure summit demonstrates its commitment to proactively addressing the risks associated with AI. As the world grapples with the challenges posed by AI, global cooperation and unified regulatory approaches will be vital to shaping the future of this transformative technology.

(Image Credit: No 10 Downing Street)

Related: AI leaders warn about ‘risk of extinction’ in open letter


AI Task Force adviser: AI will threaten humans in two years
https://www.artificialintelligence-news.com/2023/06/06/ai-task-force-adviser-threaten-humans-two-years/
Tue, 06 Jun 2023 13:50:07 +0000
An artificial intelligence task force adviser to the UK prime minister has a stark warning: AI will threaten humans in two years.

The adviser, Matt Clifford, said in an interview with TalkTV that humans have a narrow window of two years to control and regulate AI before it becomes too powerful.

“The near-term risks are actually pretty scary. You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks,” said Clifford.

“You can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time.”

Clifford, who also chairs the government’s Advanced Research and Invention Agency (ARIA), emphasised the need for a framework that addresses the safety and regulation of AI systems.

In the interview, Clifford highlighted the growing capabilities of AI systems and the urgent need to consider the risks associated with them. He warned that if safety and regulations are not put in place, these systems could become highly powerful within two years, posing significant risks in both the short and long term.

He referenced an open letter signed by 350 AI experts, including OpenAI CEO Sam Altman, which called for treating AI as an existential threat akin to nuclear weapons and pandemics.

“The kind of existential risk that I think the letter writers were talking about is … about what happens once we effectively create a new species, an intelligence that is greater than humans,” explains Clifford.

Clifford went on to emphasise the importance of understanding and controlling AI models, stating that the lack of comprehension regarding their behaviour is a significant concern. He stressed the need for an audit and evaluation process before the deployment of powerful models, a sentiment shared by many AI development leaders.

“I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t,” says Clifford.

Around the world, regulators are grappling with AI’s rapid advancement and the complexities it introduces, aiming to strike a balance between protecting users and fostering innovation.

In the United Kingdom, a member of the opposition Labour Party echoed the concerns raised in the Center for AI Safety’s letter, calling for technology to be regulated on par with medicine and nuclear power.

During a US visit, UK Prime Minister Rishi Sunak is expected to pitch for a London-based global AI watchdog. Sunak has said he is “looking very carefully” at the risk of extinction posed by AI.

The EU, meanwhile, has even proposed the mandatory labelling of all AI-generated content to combat disinformation.

With only a limited timeframe to act, policymakers, researchers, and developers must collaborate to ensure the responsible development and deployment of AI systems, taking into account the potential risks and implications associated with their rapid advancement.

(Photo by Goh Rhy Yan on Unsplash)

Related: Over 1,000 experts call for halt to ‘out-of-control’ AI development


AI leaders warn about ‘risk of extinction’ in open letter
https://www.artificialintelligence-news.com/2023/05/31/ai-leaders-warn-about-risk-of-extinction-in-open-letter/
Wed, 31 May 2023 08:33:10 +0000
The Center for AI Safety (CAIS) recently issued a statement signed by prominent figures in AI warning about the potential risks posed by the technology to humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement.

Signatories of the statement include renowned researchers and Turing Award winners like Geoffrey Hinton and Yoshua Bengio, as well as executives from OpenAI and DeepMind, such as Sam Altman, Ilya Sutskever, and Demis Hassabis.

The CAIS letter aims to spark discussions about the various urgent risks associated with AI and has attracted both support and criticism across the wider industry. It follows another open letter signed by Elon Musk, Steve Wozniak, and over 1,000 other experts who called for a halt to “out-of-control” AI development.

Given its brevity, the statement does not specify which AI systems it covers or offer concrete strategies for mitigating the risks. However, CAIS clarified in a press release that its goal is to establish safeguards and institutions to ensure that AI risks are effectively managed.

OpenAI CEO Sam Altman has been actively engaging with global leaders and advocating for AI regulations. During a recent Senate appearance, Altman repeatedly called on lawmakers to heavily regulate the industry. The CAIS statement aligns with his efforts to raise awareness about the dangers of AI.

While the open letter has garnered attention, some experts in AI ethics have criticised the trend of issuing such statements.

Dr Sasha Luccioni, a machine-learning research scientist, suggests that placing hypothetical AI risks alongside tangible ones like pandemics and climate change lends those hypothetical risks undue credibility, while diverting attention from immediate issues like bias, legal challenges, and consent.

Daniel Jeffries, a writer and futurist, argues that discussing AI risks has become a status game in which individuals jump on the bandwagon without incurring any real costs.

Critics believe that signing open letters about future threats allows those responsible for current AI harms to alleviate their guilt while neglecting the ethical problems associated with AI technologies already in use.

However, CAIS – a San Francisco-based nonprofit – remains focused on reducing societal-scale risks from AI through technical research and advocacy. The organisation was co-founded by experts with backgrounds in computer science and a keen interest in AI safety.

While some researchers fear the emergence of a superintelligent AI that could surpass human capabilities and pose an existential threat, others argue that signing open letters about hypothetical doomsday scenarios distracts from the existing ethical dilemmas surrounding AI. They emphasise the need to address the real problems AI poses today, such as surveillance, biased algorithms, and the infringement of human rights.

Balancing the advancement of AI with responsible implementation and regulation remains a crucial task for researchers, policymakers, and industry leaders alike.

(Photo by Apolo Photographer on Unsplash)

Related: OpenAI CEO: AI regulation ‘is essential’

