threats Archives - AI News

UK paper highlights AI risks ahead of global Safety Summit (26 October 2023)

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak spoke today about the global responsibility to confront the risks highlighted in the report and to harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could help hackers mimic official language, overcoming challenges that have previously hampered such attacks.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though some forms of risk management and various reports are now emerging, none of them amounts to a truly coordinated approach.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

University College London: Deepfakes are the ‘most serious’ AI crime threat (6 August 2020)

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today must still be created by humans, such as those working in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often exhibit patterns that make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to amuse rather than to cause harm, but it’s an early warning of how such fakes could be used to inflict reputational damage – or worse. Just imagine the potential consequences of a fake video of a president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat of releasing such videos – and the embarrassment they would cause – could lead some victims to pay a ransom to keep them private.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear as though he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services falsely marketed as AI, such as security screening and targeted advertising solutions. The researchers believe that leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots”: small robots that could enter a property through access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through measures such as letterbox cages.

Similarly, the researchers note that while AI-based stalking is damaging for individuals, it isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Experts discuss the current biggest threats posed by AI (4 December 2019)

Several experts have given their thoughts on what threats AI poses, and unsurprisingly fake content is the current biggest danger.

The experts, who were speaking on Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York, believe that AI-generated content is of pressing concern to our societies.

Camille François, chief innovation officer at social media analytics firm Graphika, says that deepfake articles pose the greatest danger.

We’ve already seen what human-generated “fake news” and disinformation campaigns can do, so it won’t come as much of a surprise that involving AI in that process is a leading threat.

François highlights that fake articles and disinformation campaigns today rely on a lot of manual work to create and spread a false message.

“When you look at disinformation campaigns, the amount of manual labour that goes into creating fake websites and fake blogs is gigantic,” François said.

“If you can just simply automate believable and engaging text, then it’s really flooding the internet with garbage in a very automated and scalable way. So that I’m pretty worried about.”

In February, OpenAI unveiled its GPT-2 tool, which generates convincing fake text. The AI was trained on 40 gigabytes of text spanning eight million websites.

OpenAI decided against publicly releasing GPT-2 fearing the damage it could do. However, in August, two graduates decided to recreate OpenAI’s text generator.

The graduates said they do not believe their work currently poses a risk to society, and that they released it to show the world what is possible without the vast resources of a company or government.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Vanya Cohen, one of the graduates, to Wired.
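
The openly released weights make it easy to see how cheap this kind of automation has become. Below is a minimal sketch of bulk text generation using the publicly available GPT-2 model through the Hugging Face transformers library; the prompt is an invented example, and this is not the graduates’ replication code.

```python
# A minimal sketch: sampling several continuations from the openly
# available GPT-2 weights via Hugging Face transformers (assumed installed).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations repeatable

prompt = "Officials confirmed today that"  # invented example prompt
results = generator(prompt, max_new_tokens=40, num_return_sequences=3, do_sample=True)
for result in results:
    print(result["generated_text"])
```

Even this comparatively small model produces fluent continuations in bulk, which is precisely the “flooding the internet with garbage in a very automated and scalable way” that François warns about.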

Speaking on the same panel as François at the WSJ event, Celeste Fralick, chief data scientist and senior principal engineer at McAfee, recommended that companies partner with firms specialising in detecting deepfakes.

Among the scariest AI-related cybersecurity threats are “adversarial machine learning” attacks, whereby a hacker finds and exploits a vulnerability in an AI system.

Fralick provides the example of an experiment by Dawn Song, a professor at the University of California, Berkeley, in which a driverless car was fooled into believing a stop sign was a 45 MPH speed limit sign simply by placing stickers on it.

According to Fralick, McAfee itself has performed similar experiments and discovered further vulnerabilities. In one, a 35 MPH speed limit sign was once again modified to fool a driverless car’s AI.

“We extended the middle portion of the three, so the car didn’t recognise it as 35; it recognised it as 85,” she said.
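
Experiments like the stickered road signs are examples of adversarial evasion attacks, which in the research literature are usually demonstrated with gradient-based perturbations. The sketch below shows the fast gradient sign method (FGSM) in PyTorch as an illustration of the general technique; the pretrained ResNet-18 stand-in, the random placeholder image, and the label index are assumptions for the example, not the actual Berkeley or McAfee setups.

```python
# Illustrative FGSM evasion attack against an image classifier (PyTorch).
import torch
import torch.nn.functional as F
from torchvision import models

# A standard pretrained classifier stands in for the road-sign model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to push the model away from `label`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # Nudge every pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Placeholder input: a 1x3x224x224 image scaled to [0, 1]; 919 is the
# ImageNet index usually listed for "street sign".
x = torch.rand(1, 3, 224, 224)
adversarial = fgsm_attack(x, label=919)
print(model(adversarial).argmax(dim=1))  # often no longer the original class
```

The perturbation is typically imperceptible to a human, which is what makes these attacks alarming: the sign still reads “35” to a driver while the model sees something else entirely.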

Both panellists believe entire workforces need to be educated about the threats posed by AI in addition to employing strategies for countering attacks.

There is “a great urgency to make sure people have basic AI literacy,” François concludes.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Avast: AI, IoT, and fake apps top 2019 cybersecurity threats (4 January 2019)

According to Avast’s annual Threat Landscape Report, the biggest cybersecurity threats in 2019 will be AI, IoT, and fake apps.

Those who follow cybersecurity will likely be unsurprised at the list, but Avast goes into the specifics of each threat.

“This year, we celebrated the 30th anniversary of the World Wide Web. Fast forward thirty years and the threat landscape is exponentially more complex, and the available attack surface is growing faster than it has at any other point in the history of technology,” commented Ondrej Vlcek, President of Consumer at Avast.

“People are acquiring more and varied types of connected devices, meaning every aspect of our lives could be compromised by an attack. Looking ahead to 2019, these trends point to a magnification of threats through these expanding threat surfaces.”

Adversarial AI

AI has primarily been used to aid general tasks or, in the cybersecurity realm, to recognise and defend against evolving threats. That is now changing as AI goes on the offence.

Avast predicts a greater number of ‘DeepAttacks’ in 2019. These new attacks, which began last year, use AI to generate convincing media to evade security controls or fool human users.

One example of a DeepAttack was a fake video showing former President Obama delivering fabricated remarks. The video was created for demonstration purposes by BuzzFeed, without malicious intent.

Some will use DeepAttacks to impersonate people, potentially convincing unaware victims to hand over bank details or carry out actions on their behalf.

As seen with the ‘DeepFakes’ trend of using AI to create adult videos featuring celebrity faces, similar videos could also be used to blackmail or embarrass people from all walks of society.

Evolving IoT threats

The Internet of Things (IoT) has already caused major problems – from botnets such as Mirai, to hackers virtually entering people’s homes.

Manufacturers often continue to prioritise getting new products out the door ahead of competitors, and security remains a dangerous afterthought.

Avast’s research has found that manufacturers also overlook security to keep their costs low. In the coming year, Avast believes IoT malware will evolve in much the same way PC and mobile malware did.

Fake mobile apps

Speaking of mobile threats, Avast foresees continued growth in malware-laden fake apps attempting to make their way onto users’ devices.

With some developers choosing to avoid official app stores – as we saw with Epic Games’ decision to distribute Fortnite on Android outside Google Play – this provides an even greater opportunity for hackers to infect devices.

That doesn’t mean sticking to official stores guarantees safety. Avast flagged several fake apps which appeared even on the Google Play Store.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.
