Experts warn AI poses a ‘clear and present danger’



A report by leading experts calls on governments and businesses to address the “clear and present danger” posed by unregulated AI.

The foreboding report is titled ‘The Malicious Use of Artificial Intelligence’ and was co-authored by experts from Oxford University, the Centre for the Study of Existential Risk, the Electronic Frontier Foundation, and more.

Three primary areas of risk were identified:

Digital security — The risk of AI being used to increase the scale and efficiency of cyberattacks. AI could automate the laborious tasks involved in compromising systems, or enable new attacks that exploit human error, such as speech synthesis used for impersonation.

Physical security — The idea that AI could be used to inflict direct harm on living beings or on physical buildings, systems, and infrastructure. Examples given include connected vehicles being compromised to crash, or scenarios once seen as dystopian, such as swarms of micro-drones.

Political security — The researchers highlight the possibility of AI automating the creation of propaganda, or manipulating existing content to sway opinions. Given the allegations that Russia used digital means to influence the outcome of the U.S. presidential election and other key international decisions, this will, for many people, be the clearest example of the present danger.

Here are some of the potential scenarios:

  • Chatbots that mimic the writing styles of friends or family members to gain trust, and could even impersonate them over a video call.
  • A cleaning robot which goes inside a government ministry daily, but has been compromised to detonate an explosive device when a specific figure is spotted.
  • A state-powered AI system that identifies anyone who contradicts government policy, and promptly flags them for arrest.
  • The creation of a fake video of a high-profile figure saying or doing something controversial, leading them to lose their job.

As with most things, it will likely take a disaster before action is taken. In an attempt to be more proactive about countering malicious use, the researchers are joining previous calls for AI regulation, including a robot ethics charter and a ‘global stand’ against militarisation.

In the report, the researchers wrote:

“The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high.”

Some of the proposals include:

  • Policymakers collaborating closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  • Researchers and engineers considering misuse of their work.
  • Identifying best practices.
  • Expanding the range of stakeholders and domain experts involved in discussions of these challenges.

The full report (PDF) is quite a chilling read and highlights scenarios that could be straight out of Black Mirror. Hopefully, policymakers will read the report and take heed of the experts’ warnings before a disaster makes action unavoidable.

What are your thoughts about the warnings of malicious AI? Let us know in the comments.

