Google no longer accepts deepfake projects on Colab

Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he's probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social).


Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that lets AI researchers, data scientists, and students write and execute Python in the browser.

With little fanfare, Google added deepfakes to its list of banned projects.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
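The autoencoder idea behind face-swap deepfakes — compress a face to a shared latent representation, then decode it — can be sketched with a toy linear autoencoder. Everything below (the random "face" vectors, the dimensions, the training loop) is an illustrative assumption for demonstration, not code from any actual deepfake tool:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 16-dimensional vectors standing in for image pixels.
# Real systems train on actual face crops, often with a shared encoder
# and one decoder per identity; this sketch uses a single autoencoder.
faces = rng.normal(size=(200, 16))

dim, latent = 16, 4
W_enc = rng.normal(scale=0.1, size=(dim, latent))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(latent, dim))   # decoder weights

initial_mse = np.mean((faces @ W_enc @ W_dec - faces) ** 2)

lr = 0.01
for _ in range(1000):
    z = faces @ W_enc          # encode to the latent space
    recon = z @ W_dec          # decode back to "pixel" space
    err = recon - faces        # reconstruction error
    # Gradient descent on the mean squared reconstruction error
    grad_dec = z.T @ err / len(faces)
    grad_enc = faces.T @ (err @ W_dec.T) / len(faces)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((faces @ W_enc @ W_dec - faces) ** 2)
```

After training, the reconstruction error drops as the network learns a compressed representation of the data. In a face-swap pipeline, feeding identity A's latent code through identity B's decoder is what produces the swapped face; GAN-based approaches instead pit a generator against a discriminator to sharpen the output.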

The technology is often used for malicious purposes such as generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

Such concerns are likely the reason behind Google’s decision to ban deepfake projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google quietly added deepfakes to its list of projects banned from Colab sometime between 14 and 24 July 2022.

(Photo by Markus Spiske on Unsplash)


