world economic forum Archives - AI News

FBI director warns about Beijing’s AI program (23 January 2023)

FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race, a challenge for the West, where data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI bias harms over a third of businesses, 81% want more regulation (20 January 2022)

AI bias is already harming businesses and there’s significant appetite for more regulation to help counter the problem.

The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.

Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said: 

“DataRobot’s research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long.

The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”

Just over half (54%) of respondents have “deep concerns” around the risk of AI bias, while a much higher percentage (81%) want more government regulation to prevent it.

Given the still relatively limited adoption of AI across most organisations, a concerning number already report harm from bias.

Over a third (36%) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:

  • Lost revenue (62%)
  • Lost customers (61%)
  • Lost employees (43%)
  • Incurred legal fees due to a lawsuit or legal action (35%)
  • Damaged brand reputation/media backlash (6%)

Ted Kwartler, VP of Trusted AI at DataRobot, commented:

“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place.

Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable.”

Four key challenges were identified as to why organisations are struggling to counter bias (the sketch after the list illustrates one way to probe the first two):

  1. Understanding why an AI was led to make a specific decision
  2. Comprehending patterns between input values and AI decisions
  3. Developing trustworthy algorithms
  4. Determining what data is used to train AI
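
As a minimal sketch of the first two challenges, the example below uses permutation importance, a model-agnostic technique that measures how much a model’s accuracy drops when each input is shuffled, to reveal which inputs drive its decisions. The dataset, column names, and model are hypothetical stand-ins, not anything from the DataRobot report:

```python
# Hypothetical sketch: probing which inputs drive a model's decisions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "tenure_years": rng.integers(0, 30, n),
    "postcode_group": rng.integers(0, 5, n),  # a proxy feature worth auditing
})
# Synthetic approval label that quietly leans on postcode_group
y = ((X["income"] > 50_000) | (X["postcode_group"] == 3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large accuracy drop means the model's
# decisions depend heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, importance in zip(X.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

If a feature that should be irrelevant, such as postcode_group here, scores highly, that is also a signal that the training data (challenge 4) deserves scrutiny.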

Fortunately, a growing number of solutions are becoming available to help counter/reduce AI bias as the industry matures.

“The market for responsible AI solutions will double in 2022,” wrote Forrester VP and Principal Analyst Brandon Purcell in his Predictions 2022: Artificial Intelligence (paywall) report.

“Responsible AI solutions offer a range of capabilities that help companies turn AI principles such as fairness and transparency into consistent practices. Demand for these solutions will likely double next year as interest extends beyond highly regulated industries into all enterprises using AI for critical business operations.”
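
As a rough illustration of one check such tooling automates, the sketch below computes a demographic parity difference by hand: the gap in positive-decision rates between two groups. The decisions and group labels are invented for the example:

```python
# Hypothetical sketch: a demographic parity check on model decisions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],  # the model's positive decisions
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)  # approval rate per group: a = 0.75, b = 0.25

# A large gap suggests the model treats the groups differently and
# warrants investigation before deployment.
print("demographic parity difference:", abs(rates["a"] - rates["b"]))
```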

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

‘Information gap’ between AI creators and policymakers needs to be resolved – report (23 February 2021)

An article posted by the World Economic Forum (WEF) has argued there is a ‘huge gap in understanding’ between policymakers and AI creators.

The report, authored by Adriana Bora, AI policy researcher and project manager at The Future Society, and David Alexandru Timis, outgoing curator at Brussels Hub, explores how to resolve accountability and trust-building issues with AI technology.

Bora and Timis note there is “a need for sound mechanisms that will generate a comprehensive and collectively shared understanding of AI’s development and deployment cycle.” As a result, the two add, this governance “needs to be designed under continuous dialogue utilising multi-stakeholder and interdisciplinary methodologies and skills.”

Put simply, both sides need to speak the same language. Yet while AI creators have the information and understanding, this does not extend to regulators, the authors note.

“There is a limited number of policy experts who truly understand the full cycle of AI technology,” the article noted. “On the other hand, the technology providers lack clarity, and at times interest, in shaping AI policy with integrity by implementing ethics in their technological designs.”

Examples of unethical AI practice, or where inherent bias is built into systems, are legion. In July, MIT apologised for, and took offline, a dataset which trained AI models with misogynistic and racist tendencies. Google and Microsoft have also fessed up to errors with YouTube moderation and MSN News respectively.

Artificial intelligence technology in law enforcement has also been questioned. More than 1,000 researchers, academics and experts signed an open letter in June to question an upcoming paper which claimed to be able to predict criminality based on automated facial recognition. Separately, in the same month, the chief of Detroit Police admitted its AI-powered face recognition did not work the vast majority of the time.

Google has been under fire of late; the firing last week of Margaret Mitchell, who co-led the company’s ethical AI team, added to the negative publicity. Mitchell confirmed her dismissal on Twitter. A statement from Google to Reuters said the firing followed an investigation which found Mitchell moved electronic files outside of the company.

In December, Google fired Timnit Gebru, another leading figure in ethical AI development, who claimed she was fired over an unpublished paper and sending an email critical of the company’s practices. Mitchell had previously written an open letter detailing ‘concern’ over the firing. Per an Axios report, the company made changes to ‘how it handles issues around research, diversity and employee exits’ following Gebru’s dismissal. As this publication reported, Gebru’s departure prompted other employees to leave, including software engineer Vinesh Kannan and engineering director David Baker.

Bora and Timis emphasised the need for ‘ethics literacy’ and a ‘commitment to multidisciplinary research’ from the technology providers’ perspective.

“Through their training and during their careers, the technical teams behind AI developments are not methodically educated about the complexity of human social systems, how their products could negatively impact society, and how to embed ethics in their designs,” the article noted.

“The process of understanding and acknowledging the social and cultural context in which AI technologies are deployed, sometimes with high stakes for humanity, requires patience and time,” Bora and Timis added. “With increased investments in AI, technology companies are encouraged to identify the ethical considerations relevant to their products and transparently implement solutions before deploying them.”

This could, in theory, spare companies the hasty withdrawals and profuse apologies that follow when models behave unethically. Yet the researchers also noted how policymakers need to step up.

“It is only by familiarising themselves with AI and its potential benefits and risks that policymakers can draft sensible regulation that balances the development of AI within legal and ethical boundaries while leveraging its tremendous potential,” the article noted. “Knowledge building is critical both for developing smarter regulations when it comes to AI, for enabling policymakers to engage in dialogue with technology companies on an equal footing, and together set a framework of ethics and norms in which AI can innovate safely.”

Innovation is taking place with regard to solving algorithmic bias. In the UK, as this publication reported in November, the Centre for Data Ethics and Innovation (CDEI) has created a ‘roadmap’ to tackle the issue. The CDEI report focused on policing, recruitment, financial services and local government, and makes cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making.

You can read the full WEF article here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

UK releases guidelines to help governments accelerate ‘trusted’ AI deployments (9 June 2020)

The UK has released new guidelines during the World Economic Forum (WEF) to help governments accelerate the deployment of trusted AI solutions.

AI is proving itself to be an important tool in tackling some of the biggest issues the world faces today, including coronavirus and climate change. However, some public distrust remains.

“The current pandemic has shown us more needs to be done to speed up the adoption of trusted AI around the world,” said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum.

“We moved from guidelines to practical tools, tested and iterated them – but this is still just a start. Now we will be working to scale them to countries around the world.”

The guidelines released today aim to “help society tackle big data problems faster” while also preparing governments for future risks. The UK government has adopted the guidelines across its various departments.

“The UK is a global leader in AI and I am pleased we are working with the World Economic Forum and international partners to develop guidelines to ensure its safe and ethical deployment,” said Caroline Dinenage, Digital Minister of the United Kingdom.

“By taking a dynamic approach we can boost innovation, create competitive markets and support public trust in artificial intelligence. I urge public sector organisations around the world to adopt these guidelines and consider carefully how they procure and deploy these technologies.”

For the past year, the WEF has worked alongside the UK’s Office for AI; companies such as Deloitte, Salesforce, and Splunk; 15 other countries; and more than 150 members of government, academia, civil society, and the private sector.

“As a trusted AI advisor to governments around the world, we were thrilled to collaborate with the World Economic Forum and the government of the UK in the development of procurement guidelines that help the public sector put AI at the service of its constituents in a manner that is both efficient and ethical,” said Shelby Austin, Managing Partner, Growth & Investments and Omnia AI, Deloitte, Canada.

“As our societies reorganize and make progress in our fight against COVID-19, the need for multi-stakeholder cooperation has never been more apparent. We believe in these joint efforts, and we believe in the power of data-driven decision-making to help our countries recover and thrive.”

The result of the joint effort is the “Procurement in a Box” toolkit, which provides guidance on everything from drafting proposals and conducting risk assessments to purchasing AI solutions and deploying them in a trusted manner.

A proposal for a chatbot allowing executives at the Dubai Electricity and Water Authority (DEWA) to obtain answers to data-related questions was used to test the guidelines. DEWA’s chatbot was successful and serves as an early example of how rapid but safe AI deployments can be achieved using the guidelines.

“In an era that will continue to be dominated by the transformative technologies emerging from the Fourth Industrial Revolution, integrating AI into the public sector for everyday use will significantly elevate the performance of government departments,” said Khalfan Belhoul, CEO of the Dubai Future Foundation, the host entity of Centre for the Fourth Industrial Revolution UAE.

You can find a copy of the Procurement in a Box toolkit here (PDF).

(Photo by Franck V. on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Joy Buolamwini: Fighting algorithmic bias needs to be ‘a priority’ (24 January 2019)

Algorithmic Justice League founder Joy Buolamwini gave a speech during the World Economic Forum this week on the need to fight AI bias.

Buolamwini is also an MIT Media Lab researcher and went somewhat viral for her TED Talk in 2016 titled ‘How I’m fighting bias in algorithms’.

Her latest speech included a presentation in which Buolamwini went over an analysis of currently popular facial recognition algorithms.

Here were the overall accuracy results when guessing the gender of a face:

  • Microsoft: 93.7 percent
  • Face++: 90 percent
  • IBM: 87.9 percent

Shown in this way, there appears to be little problem. Of course, society is a lot more diverse and algorithms need to be accurate for all.

When separated between males and females, a greater disparity becomes apparent:

  • Microsoft: 89.3 percent (females), 97.4 percent (males)
  • Face++: 78.7 percent (females), 99.3 percent (males)
  • IBM: 79.7 percent (females), 94.4 percent (males)

Here the underrepresentation of females in STEM careers begins to show. China-based Face++ fares the worst, likely a result of the country’s more severe gender gap (PDF) compared to the US.

Splitting between skin type also increases the disparity:

  • Microsoft: 87.1 percent (darker), 99.3 percent (lighter)
  • Face++: 83.5 percent (darker), 95.3 percent (lighter)
  • IBM: 77.6 percent (darker), 96.8 percent (lighter)

The difference here is again likely due to racial disparity in STEM careers. A gap of 12 to 19 percentage points is observed between darker and lighter skin tones.

So far, the results are in line with a 2010 study by researchers at NIST and the University of Texas in Dallas. The researchers found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

“We did something that hadn’t been done in the field before, which was doing intersectional analysis,” explains Buolamwini. “If we only do single axis analysis – we only look at skin type, only look at gender… – we’re going to miss important trends.”
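
As a minimal sketch of what intersectional analysis means in practice, the example below computes accuracy per (gender, skin type) subgroup rather than along a single axis. The handful of records is hypothetical, standing in for a labelled benchmark such as Buolamwini’s:

```python
# Hypothetical sketch: single-axis vs intersectional accuracy analysis.
import pandas as pd

results = pd.DataFrame({
    "gender":    ["female", "female", "female", "male", "male", "male"],
    "skin_type": ["darker", "darker", "lighter", "darker", "lighter", "lighter"],
    "correct":   [0, 0, 1, 1, 1, 1],  # 1 = gender classified correctly
})

# Single-axis analysis: one number per gender (or per skin type)...
print(results.groupby("gender")["correct"].mean())

# ...while the intersectional breakdown exposes the subgroup being failed.
print(results.groupby(["gender", "skin_type"])["correct"].mean())
```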

Here is where the results get most concerning. Results are in descending order from most accurate to least:

Microsoft

  • Lighter Males (100 percent)
  • Lighter Females (98.3 percent)
  • Darker Males (94 percent)
  • Darker Females (79.2 percent)

Face++

  • Darker Males (99.3 percent)
  • Lighter Males (99.2 percent)
  • Lighter Females (94 percent)
  • Darker Females (65.5 percent)

IBM

  • Lighter Males (99.7 percent)
  • Lighter Females (92.9 percent)
  • Darker Males (88 percent)
  • Darker Females (65.3 percent)

The lack of accuracy with regards to females with darker skin tones is of particular note. Two of the three algorithms would get it wrong in approximately one-third of cases.

Just imagine surveillance being used with these algorithms. Lighter-skinned males would be recognised in most cases, but darker-skinned females would be misidentified often. That could mean a lot of mistakes in areas with high footfall, such as airports.

Prior to making her results public, Buolamwini sent the results to each company. IBM responded the same day and said their developers would address the issue.

When she reassessed IBM’s algorithm, the accuracy when assessing darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, and for lighter females from 92.9 percent to 97.6 percent, while for lighter males it stayed the same at 99.7 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

You can watch Buolamwini’s full presentation at the WEF here.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
