UK government outlines AI Safety Summit plans (AI News, 4 September 2023)
The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it is committed to working closely with global partners to ensure that AI remains safe and that its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI investment, deployment, and capabilities is staggering, with projections of up to $7 trillion in growth over the next decade, alongside advances such as accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion, contributing to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts infamously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Chinese AI darling SenseTime wants facial recognition standards (AI News, 2 October 2018)
The CEO of Chinese AI darling SenseTime wants to see facial recognition standards established for a ‘healthier’ industry.

SenseTime is among China’s most renowned AI companies. Back in April, we reported it had become the world’s most funded AI startup.

Part of the company’s monumental success is the popularity of facial recognition in China where it’s used in many aspects of citizens’ lives. Just yesterday, game developer Tencent announced it’s testing facial recognition to check users’ ages.

Xu Li, CEO of SenseTime, says immigration officials doubted the accuracy of facial recognition technology when he first introduced his company’s own. “We knew about it 20 years ago and, combined with fingerprint checks, the accuracy is only 53 per cent,” one told him.

Facial recognition has come a long way in the past 20 years. Recent advances in artificial intelligence have led to even greater leaps, giving rise to companies such as SenseTime.

To dispel the idea that facial recognition is still inaccurate, Xu wants ‘trust levels’ to be established.

“With standards, technology adopters can better understand the risk involved, just like credit worthiness for individuals and companies,” Xu said to South China Morning Post. “Providers of facial recognition can be assigned different trust levels, ranging from financial security at the top to entertainment uses.”

Many of the leading facial recognition technologies have their own built-in confidence thresholds. These thresholds determine how certain the software must be before declaring a match.
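To make the idea concrete, here is a minimal sketch of how a confidence threshold gates a match decision, and how stricter thresholds could map onto the tiered trust levels Xu proposes. The tier names and threshold values below are purely illustrative assumptions, not any vendor’s real API or recommended settings:

```python
# Illustrative only: hypothetical trust tiers mapping use cases to
# minimum confidence thresholds. Higher-stakes uses demand higher
# confidence before a comparison counts as a "match".
TRUST_LEVELS = {
    "entertainment": 0.80,    # e.g. photo tagging, camera filters
    "commercial": 0.90,       # e.g. building access
    "financial": 0.95,        # e.g. payment authorisation
    "law_enforcement": 0.99,  # the strictest tier
}

def is_match(similarity: float, use_case: str) -> bool:
    """Return True only if the similarity score clears the
    confidence threshold required for the given use case."""
    return similarity >= TRUST_LEVELS[use_case]

# The same comparison score can count as a match for a low-stakes
# use but be rejected for a high-stakes one:
score = 0.93
print(is_match(score, "entertainment"))    # True: clears 0.80
print(is_match(score, "law_enforcement"))  # False: below 0.99
```

The point of such a scheme is that the raw similarity score never changes; only the bar it must clear does, which is why the same underlying technology can be acceptable for entertainment but contested for policing.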

Back in July, AI News reported on the findings of the American Civil Liberties Union (ACLU), which found that Amazon’s facial recognition AI erroneously matched people with darker skin tones against criminal mugshots more often.

Amazon claims the ACLU left the facial recognition service’s default confidence threshold of 80 percent in place, whereas it recommends 95 percent or higher for law enforcement use.

Responding to the ACLU’s findings, Dr Matt Wood, GM of Deep Learning and AI at Amazon Web Services, also called for regulations. However, Wood urged the government to mandate a minimum confidence level for the use of facial recognition in law enforcement.

Xu and Wood may be calling for different regulations, but they – and many other AI leaders – agree that some form of regulation is essential to ensure a healthy industry.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
