Deepfakes are now being used to help solve crimes

A deepfake video created by Dutch police could help to change the often negative perception of the technology.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
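
For readers curious about the mechanics, the classic consumer face-swap deepfake rests on a simple autoencoder trick: one encoder is shared between two people, while each person gets their own decoder. The PyTorch sketch below is purely illustrative – the layer sizes, class names, and toy data are assumptions for demonstration, not the tooling used by anyone mentioned in this article.

```python
# Illustrative sketch of the shared-encoder / per-identity-decoder autoencoder
# behind classic face swaps. All sizes and names here are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Compress a 3x64x64 face crop into a 256-dimensional latent vector.
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # -> 64x32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 128x16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Expand the latent vector back into a 3x64x64 face.
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 64x32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(), # -> 3x64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

# One shared encoder, one decoder per identity. Decoder A is trained to
# reconstruct person A's faces and decoder B to reconstruct person B's; at
# inference time, encoding A's face but decoding with B's decoder yields the swap.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)   # stand-in batch of face crops for person A
swap = decoder_b(encoder(faces_a))   # B's likeness driven by A's expression and pose
print(swap.shape)                    # torch.Size([8, 3, 64, 64])
```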

The technology is already being used for malicious purposes, including generating sexual content of individuals without their consent, committing fraud, and creating deceptive content aimed at changing views and influencing democratic processes.

However, authorities in Rotterdam have proven the technology can be put to use for good.

Dutch police have created a deepfake video of 13-year-old Sedar Soares – a young footballer who was shot dead in 2003 while throwing snowballs with his friends in the car park of a Rotterdam metro station – in an appeal for information to finally solve his murder.

The video depicts Soares picking up a football in front of the camera and walking through a guard of honour formed on the pitch by his relatives, friends, and former teachers.

“Somebody must know who murdered my darling brother. That’s why he has been brought back to life for this film,” says a voice in the video, before Soares drops his ball.

“Do you know more? Then speak,” his relatives and friends say, before his image disappears from the field. The video then gives the police contact details.

It’s hoped that the stirring video – and the reminder of what Soares looked like at the time – will help jog memories and finally lead to the case being solved.

Daan Annegarn, a detective with the National Investigation Communications Team, said:

“We know better and better how cold cases can be solved. Research shows that reaching witnesses – and the perpetrator – on an emotional level, with a personal appeal to share information, works. What better way to do that than to let Sedar and his family do the talking?

We had to cross a threshold. It is no small thing to ask relatives: ‘Can we bring your loved one back to life in a deepfake video?’ We are convinced it can aid the investigation, but we have never done this before.

The family has to fully support it.”

So far, it seems to have had an impact. The police claim to have already received dozens of tips, although these still need to be checked for credibility. In the meantime, anyone who may have information is encouraged to come forward.

“The deployment of a deepfake is not just a shot in the dark. We are convinced that it can touch hearts in the criminal environment – that witnesses, and perhaps the perpetrator, will come forward,” Annegarn concludes.

Kendrick Lamar uses deepfakes in latest music video

American rapper Kendrick Lamar has made use of deepfakes for his latest music video.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.

Lamar is widely considered one of the greatest rappers of all time. However, he has regularly proved that his creativity isn’t limited to his rapping talent.

For his track ‘The Heart Part 5’, Lamar used deepfake technology to seamlessly morph his face into those of various celebrities, including Kanye West, Nipsey Hussle, Will Smith, and even O.J. Simpson.

The deepfake element of the video was created by a studio called Deep Voodoo.

Deepfakes are often used for entertainment purposes, including for films and satire. However, they’re also being used for nefarious purposes like the creation of ‘deep porn’ videos without the consent of those portrayed.

The ability to deceive has experts concerned about the social implications. Deepfakes could be used for fraud, misinformation, influencing public opinion, and interfering in democratic processes.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking his troops to lay down their arms in the fight against Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of very low quality by today’s standards. The fake Zelenskyy had a comically large, noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had major consequences had people believed it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Every deepfake that is exposed increases public awareness of the technology. Artists like Kendrick Lamar using it for entertainment will also help spread the message that you can no longer necessarily believe what you see with your own eyes.

Related: Humans struggle to distinguish between real and AI-generated faces

Humans struggle to distinguish between real and AI-generated faces

According to a new paper, AI-generated faces have become so convincing that humans cannot distinguish real from fake more often than not.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the researchers explained.

The researchers – Sophie J. Nightingale of the Department of Psychology at Lancaster University, and Hany Farid of the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley – highlight the worrying trend of “deepfakes” being weaponised.

Video, audio, text, and imagery generated by generative adversarial networks (GANs) are increasingly being used for nonconsensual intimate imagery, financial fraud, and disinformation campaigns.

GANs work by pitting two neural networks – a generator and a discriminator – against each other. The generator starts from random pixels and iteratively refines its output to avoid being penalised by the discriminator, which is simultaneously learning to tell synthesised faces from real ones. This process continues until the discriminator can no longer distinguish a synthesised face from a real one.
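
As a rough illustration of that adversarial loop, a minimal GAN training sketch in PyTorch might look like the following. It is a toy example under assumed shapes and hyperparameters – StyleGAN2, the system used in the study, is vastly more sophisticated – but the generator-versus-discriminator dynamic is the same.

```python
# Toy GAN training loop: the generator learns to turn noise into images that
# the discriminator cannot tell apart from real ones. Shapes, learning rates,
# and the stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(             # noise vector -> flattened 64x64 RGB image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)
discriminator = nn.Sequential(         # flattened image -> probability it is real
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_faces = torch.rand(32, 3 * 64 * 64)  # stand-in for a batch of real face photos

for step in range(100):
    # Train the discriminator: real images should score 1, generated images 0.
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_faces), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator score fakes as real.
    fake = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```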

Just as the discriminator could no longer distinguish a synthesised face from a real one, neither could human participants. In the study, participants identified fake images just 48.2 per cent of the time.

Accuracy was higher for real East Asian and White male faces than for the corresponding female faces. For synthetic faces of both sexes, however, White faces were the least accurately identified, with White male faces identified less accurately than White female faces.

The researchers hypothesised that “White faces are more difficult to classify because they are overrepresented in the StyleGAN2 training dataset and are therefore more realistic.”

The paper includes examples of the most and least accurately classified real (R) and synthetic (S) faces.

There’s a glimmer of hope for humans: after being trained on how to spot fakes, participants were able to classify faces correctly 59 per cent of the time. That’s not a particularly comfortable margin, but it at least tips the scales towards humans spotting fakes more often than not.

What sets the alarm bells ringing again is that synthetic faces were rated more “trustworthy” than real ones. On a scale of 1 (very untrustworthy) to 7 (very trustworthy), real faces received an average rating of 4.48, compared with 4.82 for synthetic faces.

“A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of our real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” wrote the researchers.

The results of the paper underline the importance of developing tools that can spot the increasingly subtle differences between real and synthetic faces, because humans will struggle even if everyone were specifically trained.
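
What might such a detection tool look like in practice? One common starting point – offered here as a hedged sketch rather than anything the researchers propose – is to fine-tune a pretrained image classifier on labelled examples of real and GAN-generated faces. The dataset, backbone choice, and hyperparameters below are assumptions.

```python
# Sketch of a binary real-vs-synthetic face classifier built by fine-tuning an
# ImageNet-pretrained ResNet-18. The random tensors stand in for a real
# labelled dataset of photographs and GAN-generated faces.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # downloads ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: 0 = real, 1 = synthetic

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(16, 3, 224, 224)   # stand-in batch of face crops
labels = torch.randint(0, 2, (16,))    # stand-in real/synthetic labels

model.train()
logits = model(images)                 # one illustrative training step
loss = loss_fn(logits, labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
print(f"training loss: {loss.item():.3f}")
```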

With Western intelligence agencies calling out fabricated content from Russian authorities designed to justify an invasion of Ukraine, the increasing ease with which such media can be generated at scale poses a serious threat that is no longer the stuff of fiction.

(Photo by NeONBRAND on Unsplash)

Related: James Cameron warns of the dangers of deepfakes
