deepfakes Archives - AI News

China’s deepfake laws come into effect today (10 January 2023)

China will begin enforcing its strict new rules around the creation of deepfakes from today.

Deepfakes are increasingly being used for manipulation and humiliation. We’ve seen deepfakes of disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi edited to make her appear drunk.

Last month, the Cyberspace Administration of China (CAC) announced rules to clamp down on deepfakes.

“In recent years, in-depth synthetic technology has developed rapidly. While serving user needs and improving user experiences, it has also been used by some criminals to produce, copy, publish, and disseminate illegal and bad information, defame, detract from the reputation and honour of others, and counterfeit others,” explains the CAC.

Providers of services for creating synthetic content will be obligated to ensure their AIs aren’t misused for illegal and/or harmful purposes. Furthermore, any content that was created using an AI must be clearly labelled with a watermark.
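
The rules don’t prescribe how labelling must be implemented. Purely as an illustration of the simplest visible form, here is a minimal Python sketch using the Pillow imaging library; the file names and notice text are hypothetical, not taken from the regulation:

```python
# Minimal sketch: stamp a visible "AI-generated" notice onto a synthetic
# image with Pillow. File names and wording are illustrative assumptions.
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str,
                          notice: str = "AI-generated content") -> None:
    """Overlay a visible text notice in the bottom-left corner of an image."""
    image = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(image)
    x, y = 10, image.height - 20  # 10px from the left, near the bottom edge
    # Draw a dark outline behind white text so the notice stays readable
    # against both light and dark backgrounds.
    for dx, dy in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
        draw.text((x + dx, y + dy), notice, fill="black")
    draw.text((x, y), notice, fill="white")
    image.save(path_out)

label_synthetic_image("synthetic.png", "synthetic_labelled.png")
```

In practice, providers are likely to pair visible notices like this with invisible watermarks designed to survive cropping and re-encoding.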

China’s new rules come into force today (10 January 2023) and will also require synthetic service providers to:

  • Not illegally process personal information
  • Periodically review, evaluate, and verify algorithms
  • Establish management systems and technical safeguards
  • Authenticate users with real identity information
  • Establish mechanisms for complaints and reporting

The CAC notes that effective governance of synthetic technologies is a multi-entity effort that will require the participation of government, enterprises, and citizens. Such participation, the CAC says, will promote the legal and responsible use of deep synthetic technologies while minimising the associated risks.

(Photo by Henry Chen on Unsplash)

Related: AI & Big Data Expo: Exploring ethics in AI and the guardrails required

Google no longer accepts deepfake projects on Colab (31 May 2022)

Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that enables AI researchers, data scientists, or students to write and execute Python in their browsers.

With little fanfare, Google added deepfakes to its list of banned projects.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
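
To make the autoencoder approach concrete: classic face-swap deepfakes train a single shared encoder together with one decoder per identity, then “swap” by decoding one person’s face encoding with the other person’s decoder. Below is a heavily simplified, untrained PyTorch sketch of that structure; the layer sizes and random input are placeholders, not a working system:

```python
# Sketch of the shared-encoder / twin-decoder autoencoder design behind
# classic face-swap deepfakes. Dimensions and data are illustrative only.
import torch
import torch.nn as nn

LATENT = 128
PIXELS = 64 * 64 * 3  # flattened 64x64 RGB face crops

encoder = nn.Sequential(nn.Linear(PIXELS, 512), nn.ReLU(),
                        nn.Linear(512, LATENT))
decoder_a = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, PIXELS), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(),
                          nn.Linear(512, PIXELS), nn.Sigmoid())

def swap_a_to_b(face_a: torch.Tensor) -> torch.Tensor:
    """Render person A's expression and pose with person B's appearance."""
    return decoder_b(encoder(face_a))

face_a = torch.rand(1, PIXELS)  # stand-in for a real, normalised face crop
fake_b = swap_a_to_b(face_a)
print(fake_b.shape)  # torch.Size([1, 12288])
```

After training decoder_a only on person A’s faces and decoder_b only on person B’s, encoding A and decoding with decoder_b produces the swap.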

The technology is often used for malicious purposes such as generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

Such concerns around the use of deepfakes are likely the reason behind Google’s decision to ban relevant projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people had believed it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google silently added deepfakes to its list of projects banned from Colab sometime between 14-24 May 2022.

(Photo by Markus Spiske on Unsplash)

Kendrick Lamar uses deepfakes in latest music video (9 May 2022)

American rapper Kendrick Lamar has made use of deepfakes for his latest music video.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.

Lamar is widely considered one of the greatest rappers of all time. However, he’s regularly proved his creative mind isn’t limited to his rapping talent.

For his track ‘The Heart Part 5’, Lamar has made use of deepfake technology to seamlessly morph his face into various celebrities including Kanye West, Nipsey Hussle, Will Smith, and even O.J. Simpson.

[Embedded music video: Kendrick Lamar – ‘The Heart Part 5’]

For due credit, the deepfake element was created by a studio called Deep Voodoo.

Deepfakes are often used for entertainment purposes, including for films and satire. However, they’re also being used for nefarious purposes like the creation of ‘deep porn’ videos without the consent of those portrayed.

The ability to deceive has experts concerned about the social implications. Deepfakes could be used for fraud, misinformation, influencing public opinion, and interfering in democratic processes.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of very low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had major consequences if people had believed it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Every deepfake that is exposed increases public awareness. Artists like Kendrick Lamar using the technology for entertainment purposes will also help spread awareness that you can no longer necessarily believe what you see with your own eyes.

Related: Humans struggle to distinguish between real and AI-generated faces

Deepfakes are being used to push anti-Ukraine disinformation (1 March 2022)

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen with Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign was connected to the News Front and South Front websites, which the US government has linked to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the idea that it was posted by credible columnists.

[Images: the AI-generated ‘columnist’ and ‘editor-in-chief’ of one propaganda website]

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published and then deleted an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia was returning to lead a new world order to rectify the “terrible catastrophe” that was the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships/motorboats since its invasion of Ukraine began.

The slow progress and mounting losses appear to have angered Russia, with its military now conducting what appear to be very clear war crimes: targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively in an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends and share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity

Humans struggle to distinguish between real and AI-generated faces (21 February 2022)

According to a new paper, AI-generated faces have become so advanced that humans can no longer distinguish between real and fake faces more often than not.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from—and more trustworthy than—real faces,” the researchers explained.

The researchers – Sophie J. Nightingale of the Department of Psychology, Lancaster University, and Hany Farid of the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley – highlight the worrying trend of “deepfakes” being weaponised.

Video, audio, text, and imagery generated by generative adversarial networks (GANs) are increasingly being used for nonconsensual intimate imagery, financial fraud, and disinformation campaigns.

GANs work by pitting two neural networks – a generator and a discriminator – against each other. The generator will start with random pixels and will keep improving the image to avoid penalisation from the discriminator. This process continues until the discriminator can no longer distinguish a synthesised face from a real one.
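
Below is a heavily simplified PyTorch sketch of that generator-versus-discriminator loop. The networks and data are toy placeholders (real face synthesisers such as StyleGAN2 are far larger and convolutional), but the adversarial structure is the same:

```python
# Toy GAN training loop: the discriminator learns to separate real from
# generated faces while the generator learns to fool it.
import torch
import torch.nn as nn

PIXELS, NOISE = 64 * 64 * 3, 100
gen = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(),
                    nn.Linear(256, PIXELS), nn.Tanh())
disc = nn.Sequential(nn.Linear(PIXELS, 256), nn.ReLU(),
                     nn.Linear(256, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

real_faces = torch.rand(32, PIXELS)  # stand-in for a batch of real face crops

for step in range(1000):
    # Discriminator step: score real faces as 1 and generated faces as 0.
    fakes = gen(torch.randn(32, NOISE)).detach()  # detach: don't update gen here
    d_loss = (loss_fn(disc(real_faces), torch.ones(32, 1)) +
              loss_fn(disc(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust weights so the discriminator scores fakes as real.
    fakes = gen(torch.randn(32, NOISE))
    g_loss = loss_fn(disc(fakes), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training stops, in principle, when the discriminator’s accuracy falls to chance, which is the same point at which the human judges in the study hovered around 50 percent.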

Just as the discriminator could no longer distinguish a synthesised face from a real one, neither could human participants. In the study, the human participants identified fake images just 48.2 percent of the time. 

Accuracy in correctly identifying real East Asian and White faces was higher for male faces than female ones. For synthetic faces, White faces were the least accurately identified, with White male faces classified less accurately than White female faces.

The researchers hypothesised that “White faces are more difficult to classify because they are overrepresented in the StyleGAN2 training dataset and are therefore more realistic.”

[Image: the most (top and upper-middle rows) and least (bottom and lower-middle rows) accurately classified real (R) and synthetic (S) faces]

There’s a glimmer of hope for humans: participants were able to distinguish real faces 59 percent of the time after being given training on how to spot fakes. That’s not a particularly comfortable percentage, but it at least tips the scales towards humans spotting fakes more often than not.

What sets the alarm bells ringing again is that synthetic faces were rated more “trustworthy” than real ones. On a scale of 1 (very untrustworthy) to 7 (very trustworthy), the average rating for real faces was 4.48, below the average of 4.82 for synthetic faces.

“A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of our real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” wrote the researchers.

The results of the paper show the importance of developing tools that can spot the increasingly subtle differences between real and synthetic faces, because humans will struggle even when specifically trained.

With Western intelligence agencies calling out fake content allegedly from Russian authorities to justify an invasion of Ukraine, the increasing ease with which such media can be generated en masse poses a serious threat that’s no longer the work of fiction.

(Photo by NeONBRAND on Unsplash)

Related: James Cameron warns of the dangers of deepfakes

Social media algorithms are still failing to counter misleading content (17 August 2021)

As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are unable to counter enough misleading and/or fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.

The damage is only stopped when content that makes it past AI-powered moderation efforts – like object detection and scene recognition – is flagged by users and eventually reviewed by an actual person, often long after it’s been widely viewed. It’s not unheard of for moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation: reducing the tasks that are dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing such a system, called Video Authenticator, which was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset.

However, it’s also true that increasingly advanced deepfakes are making the markers ever more subtle. Back in February, researchers from the University of California – San Diego found that current systems designed to counter the increasing prevalence of deepfakes can be deceived.

Another challenge with deepfakes is preventing them from being reuploaded once removed. Increasing processing power means it doesn’t take long for small changes to be made so the “new” content evades algorithmic blocking.
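
To illustrate the problem (a generic sketch, not any platform’s actual system): a block-list keyed on exact file hashes breaks under a single imperceptible change, which is why platforms turn to perceptual hashes that change only slightly when the content does:

```python
# Generic illustration of why exact-hash block-lists fail against trivially
# modified reuploads, plus a tiny perceptual hash that tolerates small edits.
import hashlib
from PIL import Image

media = bytes(range(256)) * 100  # stand-in for the bytes of a removed video
tweaked = bytearray(media)
tweaked[0] ^= 1                  # a single imperceptible bit flip

# The exact hashes diverge completely, so the block-list misses the reupload.
print(hashlib.sha256(media).hexdigest()[:16])
print(hashlib.sha256(bytes(tweaked)).hexdigest()[:16])

def average_hash(path: str, size: int = 8) -> int:
    """Tiny perceptual hash: downscale to greyscale and threshold each pixel
    on the mean. Visually similar images differ in only a few bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

# Similarity = number of differing bits: bin(h1 ^ h2).count("1")
```

Adversaries respond by making perturbations just large enough to push the perceptual distance over the blocking threshold, hence the arms race described here.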

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine has a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take, for example, one video that has been making the rounds:

[Embedded social media video]

Sickeningly, it is a real and unmanipulated video. However, it’s also from ~2001. Despite many removals, the social networks continue to allow it to be reposted with claims of it being new footage without any warning that it’s old and has been flagged as being misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan alongside the news caption “Violent but mostly peaceful transfer of power” was posted to various social networks over the weekend. Reuters and Politifact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin following a police shooting alongside the caption “Fiery but mostly peaceful protests after police shooting” that was criticised by some conservatives. The doctored image is clearly intended to be satire but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labelled as an “Altered photo” and clearly states that “Independent fact-checkers say this information could mislead people”. On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as the title of a YouTube video with different footage, but that platform hasn’t labelled it either and claims that it doesn’t violate its rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and we appear doomed to the same socially-damaging failings during every pivotal event for the foreseeable future unless that changes.

(Photo by Adem AY on Unsplash)

Researchers from Microsoft and global leading universities study the ‘offensive AI’ threat (2 July 2021)

A group of researchers from Microsoft and seven global leading universities have conducted an industry study into the threat offensive AI is posing to organisations.

AIs are beneficial tools, but they are indiscriminate: they provide the same assistance to individuals and groups that set out to cause harm.

The researchers’ study into offensive AI drew on both existing research into the subject and responses from organisations including Airbus, Huawei, and IBM.

Three core motivations were highlighted as to why an adversary would turn to AI: coverage, speed, and success.

While offensive AI threats come in many shapes, it’s the use of the technology for impersonation that has both academia and industry highly concerned. Deepfakes, for example, are growing in prevalence for purposes ranging from relatively innocuous comedy to the far more sinister fraud, blackmail, exploitation, defamation, and spreading misinformation.

In the past, similar campaigns using fake or manipulated content were slow and arduous processes with little chance of success. AI not only makes the creation of such content easier but also means organisations can be bombarded with phishing attacks, which greatly increases the chance of success.

Tools such as Microsoft’s Video Authenticator are helping to counter deepfakes but it will be an ongoing battle to keep up with their increasing sophistication.

Back when Google’s Duplex service was announced – which sounds like a real human to book appointments on a person’s behalf – concerns were raised that similar technology could be used to automate fraud. The researchers expect bots to gain the ability to make convincing deepfake phishing calls.

The researchers also predict an increased prevalence of offensive AI in “data collection, model development, training, and evaluation” in the coming years.

[Table: the top 10 offensive AI concerns from the perspectives of both industry and academia]

Very few organisations are currently investing in ways to counter, or mitigate the fallout of, an offensive AI attack such as a deepfake phishing campaign.

The researchers recommend more research into post-processing tools that can protect software from analysis after development (i.e. anti-vulnerability detection) and that organisations extend the current MLOps paradigm to also encompass ML security (MLSecOps), incorporating security testing, protection, and monitoring of AI/ML models.

You can find the full paper, The Threat of Offensive AI to Organizations, on arXiv (PDF).

(Photo by Dan Dimmock on Unsplash)

Researchers find systems to counter deepfakes can be deceived (10 February 2021)

Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.

The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference.

Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said:

“Our work shows that attacks on deepfake detectors could be a real-world threat.

More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner-workings of the machine learning model used by the detector.”

Two scenarios were tested as part of the research:

  1. The attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model
  2. The attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.

In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos.
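
For intuition on the white-box case (scenario 1), the core trick is to use the detector’s own gradients to nudge every pixel slightly in whichever direction lowers the “fake” score, known as the fast gradient sign method. The sketch below uses a toy, untrained stand-in for a detector; it shows the mechanics only and is not the attack from the paper:

```python
# Toy white-box gradient-sign attack on a "deepfake detector". The detector
# here is an untrained stand-in, not any real deployed system.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
                         nn.Linear(128, 1), nn.Sigmoid())  # output ~ P(fake)

frame = torch.rand(1, 64 * 64 * 3, requires_grad=True)  # a deepfake frame
score = detector(frame).sum()  # scalar "fake" probability for this frame
score.backward()               # gradient of the score w.r.t. every pixel

epsilon = 0.05  # small per-pixel perturbation budget
with torch.no_grad():
    adversarial = (frame - epsilon * frame.grad.sign()).clamp(0, 1)

# The perturbed frame looks essentially identical but scores lower as "fake".
print(detector(frame).item(), detector(adversarial).item())
```

In the black-box case (scenario 2), attackers estimate the same direction by repeatedly querying the detector rather than reading its gradients directly.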

“We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote.

Deepfakes use a Generative Adversarial Network (GAN) to create fake imagery and even videos with increasingly convincing results. So-called ‘DeepPorn’ has been used to cause embarrassment and even blackmail.

There’s the old saying “I won’t believe it until I see it with my own eyes,” which is why convincing fake content is such a concern. As humans, we’re rather hard-wired to believe what we (think) we can see with our eyes.

In an age of disinformation, people are gradually learning not to believe everything they read, especially when it comes from unverified sources. Teaching people not to necessarily believe the images and video they see is going to pose a serious challenge.

Some hope has been placed on systems to detect and counter deepfakes before they cause harm. Unfortunately, the UC San Diego researchers’ findings somewhat dash those hopes.

“If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, another co-author on the paper.

In separate research from University College London (UCL) last year, experts ranked what they believe to be the most serious AI threats. Deepfakes ranked top of the list.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” said Dr Matthew Caldwell of UCL Computer Science.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words.

The video of Pelosi was likely created with the intention of being amusing rather than particularly malicious, but it shows how deepfakes could be used to cause disrepute and even influence democratic processes.

As part of a bid to persuade Facebook to change its policies on deepfakes, last year Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Now imagine the precise targeting of content provided by platforms like Facebook combined with deepfakes which can’t be detected… actually, perhaps don’t, it’s a rather squeaky bum thought.

University College London: Deepfakes are the ‘most serious’ AI crime threat (6 August 2020)

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first created a list of 20 expected ways AI will be used by criminals within the next 15 years. 31 experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today must at least be created by humans, such as those working in the likes of Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often have patterns which make them easier to trace.

Automating the production of fake content en masse, to influence things such as democratic votes and public opinion, is entering into new and dangerous territory.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The deepfake of Pelosi was unsophisticated and likely created to be amusing rather than malicious, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences a fake video of the president announcing an imminent strike on somewhere like North Korea could have.

Deepfakes also have obvious potential to be used for fraud purposes, to pretend to be someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to put the faces of celebrities on adult performers. While fake, the threat to release such videos – and the embarrassment caused – could lead to some paying a ransom to keep it from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

There were four other major AI crime threats identified by the researchers: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were things such as the sale of items and services wrongly marketed as AI, such as security screening and targeted advertising solutions. The researchers believe that leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are things such as so-called “burglar bots” which could get in through access points of a property to unlock doors or search for data. The researchers believe these pose less of a threat because they can be easily prevented through methods such as letterbox cages.

Similarly, the researchers note the potential for AI-based stalking is damaging for individuals but isn’t considered a major threat as it could not operate at scale.

You can find the researchers’ full paper in the Crime Science Journal here.

(Photo by Bill Oxford on Unsplash)

Deepfake app puts your face on GIFs while limiting data collection (14 January 2020)

A new app called Doublicat allows users to superimpose their face into popular GIFs using deep learning technology.

In the name of research, here’s one I made earlier:

[Embedded GIF example]

Doublicat uses a Generative Adversarial Network (GAN) to do its magic. The GAN is called RefaceAI and is developed by a company of the same name.

RefaceAI was previously used in a face swapping app called Reflect. Elon Musk once used Reflect to put his face on Dwayne Johnson’s body. 

The app is a lot of fun, but – after concerns about viral Russian app FaceApp – many will be wondering just how much data is being collected in return.

Doublicat’s developers are upfront in asking for consent to store your photos upon first opening the app, and this is confirmed in their privacy policy: “We may collect the photos, that you take with your camera while using our application.”

However, Doublicat says that photos are only stored on their server for 24 hours before they’re deleted. “The rest of the time your photos used in Doublicat application are stored locally on your mobile device and may be removed any time by either deleting these photos from your mobile device’s file system.”

The app also collects data about facial features, but only the vector representations of each person’s face are stored. Doublicat assures that the facial recognition data collected “is not biometric data” and is deleted from its servers within 30 calendar days.

“In no way will Doublicat use your uploaded content for face recognition as Doublicat does not introduce the face recognition technologies or other technical means for processing biometric data for the unique identification or authentication of a user.”
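
For a sense of what a “vector representation” of a face looks like, here is a generic sketch using the open-source face_recognition library; it illustrates the general concept of face embeddings only and implies nothing about Doublicat’s actual pipeline (the file name is hypothetical):

```python
# Generic illustration of a face "vector representation" (embedding).
# "selfie.jpg" is a hypothetical file; this is not Doublicat's pipeline.
import face_recognition

image = face_recognition.load_image_file("selfie.jpg")
encodings = face_recognition.face_encodings(image)  # one vector per detected face

if encodings:
    vector = encodings[0]
    print(vector.shape)  # (128,) - the face reduced to 128 floating-point numbers
```

A vector like this is far harder to turn back into the original photo than the photo itself, which is presumably the distinction Doublicat is drawing, though whether such embeddings count as biometric data remains debated.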

The amount of data Doublicat can collect is limited compared to some alternatives. Apps such as Zao require users to 3D model their face whereas Doublicat only takes a front-facing picture.

RefaceAI is now looking to launch an app which can make deepfake videos rather than just GIFs. The move is likely to be controversial given the concerns around deepfakes and how such videos could be used for things such as political manipulation.

A fake video of Nancy Pelosi, Speaker of the United States House of Representatives, went viral last year after purportedly showing her slurring her words as if she was intoxicated. The clip shows how even a relatively unsophisticated video (it wasn’t an actual deepfake in this case) could be used to cause reputational damage and even swing votes.

A report from the NYU Stern Center for Business and Human Rights last September, covered by our sister publication MarketingTech, highlighted the various ways disinformation could be used ahead of this year’s presidential elections. One of the eight predictions is that deepfake videos will be used “to portray candidates saying and doing things they never said or did”.

Earlier this month, Facebook announced new policies around deepfakes: any deepfake video that is designed to be misleading will be banned. The problem with the rules is that they don’t cover videos altered for parody or those edited “solely to omit or change the order of words,” which will not sound encouraging to anyone wanting a firm stance against manipulation.

Doublicat is available for Android and iOS.
