Generative Adversarial Network Archives - AI News
https://www.artificialintelligence-news.com/tag/generative-adversarial-network/

Google no longer accepts deepfake projects on Colab
https://www.artificialintelligence-news.com/2022/05/31/google-no-longer-accepts-deepfake-projects-on-colab/
Tue, 31 May 2022 14:01:05 +0000

Google has added “creating deepfakes” to its list of projects that are banned from its Colab service.

Colab is a product from Google Research that enables AI researchers, data scientists, or students to write and execute Python in their browsers.

The change was made with little fanfare, appearing in Colab’s list of disallowed projects without any formal announcement.

Deepfakes use generative neural network architectures – such as autoencoders or generative adversarial networks (GANs) – to manipulate or generate visual and audio content.
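For readers curious how the adversarial part of that training works, here is a minimal sketch: a one-parameter-pair generator competes against a logistic-regression discriminator on 1-D data. Everything here (the toy models, the target distribution, the learning rates) is invented for illustration; real deepfake models use deep convolutional networks.

```python
import numpy as np

# Toy GAN: a generator learns to mimic "real" data (samples from N(4, 1))
# while a discriminator learns to tell real samples from generated ones.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))   # the target distribution

g_w, g_b = 1.0, 0.0      # generator: fake = g_w * z + g_b
d_a, d_c = 0.1, 0.0      # discriminator: d(x) = sigmoid(d_a * x + d_c)
lr, n = 0.05, 64

for _ in range(500):
    # Discriminator ascent on log d(real) + log(1 - d(fake)).
    z = rng.normal(size=(n, 1))
    fake, real = g_w * z + g_b, real_batch(n)
    p_real = sigmoid(d_a * real + d_c)
    p_fake = sigmoid(d_a * fake + d_c)
    d_a += lr * float(np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    d_c += lr * float(np.mean(1 - p_real) - np.mean(p_fake))

    # Generator ascent on log d(fake) (the non-saturating GAN loss).
    z = rng.normal(size=(n, 1))
    fake = g_w * z + g_b
    grad_fake = (1 - sigmoid(d_a * fake + d_c)) * d_a
    g_w += lr * float(np.mean(grad_fake * z))
    g_b += lr * float(np.mean(grad_fake))

# Generated samples should have drifted toward the real mean of 4.
gen_mean = float(np.mean(g_w * rng.normal(size=(1000, 1)) + g_b))
print(round(gen_mean, 2))
```

The same two-player loop, scaled up to image generators and convolutional discriminators, is what produces convincing face swaps.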

The technology is often used for malicious purposes such as generating sexual content of individuals without their consent, fraud, and the creation of deceptive content aimed at changing views and influencing democratic processes.

Such concerns around the use of deepfakes are likely the reason behind Google’s decision to ban related projects.

It’s a controversial decision. Banning such projects isn’t going to stop anyone from developing them and may also hinder efforts to build tools for countering deepfakes at a time when they’re most needed.

In March, a deepfake purportedly showing Ukrainian President Volodymyr Zelenskyy asking troops to lay down their arms in their fight to defend their homeland from Russia’s invasion was posted to a hacked news website.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in an official video to refute the fake. “We are at home and defending Ukraine.”

Fortunately, the deepfake was of low quality by today’s standards. The fake Zelenskyy had a comically large and noticeably pixelated head compared to the rest of his body. The video probably didn’t fool anyone, but it could have had serious consequences if people did believe it.

One Russia-linked influence campaign – removed by Facebook and Twitter in March – used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website. That one was more believable and likely fooled some people.

However, not all deepfakes are malicious. They’re also used for music, activism, satire, and even helping police solve crimes.

Historical data from archive.org suggests Google silently added deepfakes to its list of projects banned from Colab sometime between 14 and 24 May 2022.

(Photo by Markus Spiske on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Apple’s former ML director reportedly joins Google DeepMind
https://www.artificialintelligence-news.com/2022/05/18/apple-former-ml-director-reportedly-joins-google-deepmind/
Wed, 18 May 2022 12:11:54 +0000

A machine learning exec who left Apple due to its return-to-office policy has reportedly joined Google DeepMind.

Ian Goodfellow is a renowned machine learning researcher. Goodfellow invented generative adversarial networks (GANs), developed a system for Google Maps that transcribes addresses from Street View car photos, and more.

In a departure note to his team at Apple, Goodfellow cited the company’s much-criticised lack of flexibility in its work policies.

Many companies were forced to support remote work during the pandemic, and many have since kept flexible working due to the recruitment advantages, mental and physical health benefits, improved productivity, reduced office space costs, and the lessened impact of rocketing fuel costs.

Apple planned for employees to work from the office on Mondays, Tuesdays, and Thursdays, starting this month. However, following backlash, on Tuesday the company put the plan on hold—officially citing rising Covid cases.

Goodfellow had already decided to hand in his resignation and head to a company with more forward-looking working policies.

The machine learning researcher had worked for Apple since 2019. Before that, he was a senior research scientist at Google.

Goodfellow is now reportedly returning to Google, albeit to its DeepMind subsidiary. Google is currently approving requests from most employees seeking to work from home.

More departures are expected from Apple if it proceeds with its return-to-office mandate.

“Everything happened with us working from home all day, and now we have to go back to the office, sit in traffic for two hours, and hire people to take care of kids at home,” a different former Apple employee told Bloomberg.

Every talented AI researcher like Goodfellow who leaves Apple is a potential win for Google and other companies.

(Photo by Viktor Forgacs on Unsplash)

MIT researchers develop AI to calculate material stress using images
https://www.artificialintelligence-news.com/2021/04/22/mit-researchers-developer-ai-calculate-material-stress-using-images/
Thu, 22 Apr 2021 09:21:13 +0000

Researchers from MIT have developed an AI tool for determining the stress a material is under through analysing images.

For centuries, engineers have used the pesky laws of physics – and complex equations – to work out the stresses the materials they’re working with are under. It’s a time-consuming but vital task for preventing structural failures, which can be costly at best and cause loss of life at worst.

“Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers,” says Markus Buehler, the McAfee Professor of Engineering, director of the Laboratory for Atomistic and Molecular Mechanics, and one of the paper’s co-authors.

“But it’s still a tough problem. It’s very expensive — it can take days, weeks, or even months to run some simulations. So, we thought: Let’s teach an AI to do this problem for you.”

By employing computer vision, the AI tool developed by MIT’s researchers can generate estimates of material stresses in real-time.

A Generative Adversarial Network (GAN) was used for the breakthrough. The network was trained using thousands of paired images—one showing the material’s internal microstructure when subjected to mechanical forces, and the other labelled with colour-coded stress and strain values.

Through the game-theoretic contest between its generator and discriminator, the GAN learns the relationship between the material’s appearance and the stresses it’s being put under.
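The article doesn’t spell out the researchers’ exact loss function, but a standard way to train on such image pairs is a pix2pix-style conditional objective: the generator maps a microstructure image to a stress map, a discriminator scores (input, output) pairs, and an L1 term ties predictions to the paired ground truth. The sketch below uses made-up per-pixel toy models purely to show how those pieces combine.

```python
import numpy as np

# Toy version of a paired-image (conditional GAN) objective.
# Each example pairs a microstructure "image" with a stress map of the
# same size; both models here are deliberately trivial stand-ins.

rng = np.random.default_rng(1)
H = W = 8                               # toy image size

micro = rng.random((H, W))              # microstructure input
stress_true = 0.5 * micro + 0.1        # paired ground-truth stress map (toy)

def generator(x, w, b):
    """Toy per-pixel generator: predicted stress map."""
    return w * x + b

def discriminator(x, y):
    """Toy discriminator: realism score in (0, 1) for an (input, output) pair."""
    return 1.0 / (1.0 + np.exp(-np.mean(x * y)))

stress_pred = generator(micro, 1.0, 0.0)

# pix2pix-style loss: an adversarial term (fool the discriminator) plus an
# L1 term keeping predictions close to the paired ground truth.
adv = -np.log(discriminator(micro, stress_pred) + 1e-8)
l1 = float(np.mean(np.abs(stress_pred - stress_true)))
loss = float(adv) + 100.0 * l1
print(stress_pred.shape, round(loss, 3))
```

Once trained this way, only the generator is kept: a single forward pass turns a photo of the microstructure into a stress estimate, which is what makes the real-time claim plausible.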

“From a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth,” Buehler adds.

Even more impressively, the AI can recreate issues such as cracks developing in a material – details that can have a major impact on how it reacts to forces.

Once trained, the neural network can run on consumer-grade computer processors. This makes the AI accessible in the field and enables inspections to be carried out with just a photo.

You can find a full copy of the paper here.

(Photo by CHUTTERSNAP on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Researchers find systems to counter deepfakes can be deceived
https://www.artificialintelligence-news.com/2021/02/10/researchers-find-systems-counter-deepfakes-can-be-deceived/
Wed, 10 Feb 2021 17:26:35 +0000

Researchers have found that systems designed to counter the increasing prevalence of deepfakes can be deceived.

The researchers, from the University of California – San Diego, first presented their findings at the WACV 2021 conference.

Shehzeen Hussain, a UC San Diego computer engineering PhD student and co-author on the paper, said:

“Our work shows that attacks on deepfake detectors could be a real-world threat.

More alarmingly, we demonstrate that it’s possible to craft robust adversarial deepfakes even when an adversary may not be aware of the inner workings of the machine learning model used by the detector.”

Two scenarios were tested as part of the research:

  1. The attackers have complete access to the detector model, including the face extraction pipeline and the architecture and parameters of the classification model
  2. The attackers can only query the machine learning model to figure out the probabilities of a frame being classified as real or fake.

In the first scenario, the attack’s success rate is above 99 percent for uncompressed videos. For compressed videos, it was 84.96 percent. In the second scenario, the success rate was 86.43 percent for uncompressed and 78.33 percent for compressed videos.
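The paper’s own attack pipeline isn’t reproduced here, but the second (query-only) scenario can be illustrated with a generic black-box technique: estimate a gradient from paired queries and nudge the frame until the detector calls it real. The “detector” below is a stand-in toy that flags frames with high mean intensity, not one of the models the researchers tested.

```python
import numpy as np

# Black-box evasion sketch: the attacker can only query detector(frame)
# for P(fake), with no access to its architecture or parameters.

rng = np.random.default_rng(0)

def detector(frame):
    """Toy detector: returns P(fake) for a frame with pixels in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-20.0 * (frame.mean() - 0.5)))

frame = rng.random((16, 16)) * 0.2 + 0.55   # a frame the detector calls fake

eps, step = 0.01, 0.02
for _ in range(2000):
    if detector(frame) < 0.5:               # success: now classified as real
        break
    probe = rng.normal(size=frame.shape)    # random query direction
    # Two queries give a finite-difference gradient estimate along `probe`.
    g = (detector(frame + eps * probe) - detector(frame - eps * probe)) / (2 * eps)
    # Step against the estimated gradient of P(fake); keep pixels valid.
    frame = np.clip(frame - step * np.sign(g) * probe, 0.0, 1.0)

print(round(float(detector(frame)), 3))
```

The point of the sketch is that nothing about the detector’s internals is needed: probability scores alone leak enough signal to steer small perturbations, which is what makes the researchers’ second scenario a realistic threat.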

“We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector,” the researchers wrote.

Deepfakes use a Generative Adversarial Network (GAN) to create fake imagery and even videos with increasingly convincing results. So-called ‘DeepPorn’ has been used to cause embarrassment and even blackmail.

There’s the old saying “I won’t believe it until I see it with my own eyes,” which is why convincing fake content is such a concern. As humans, we’re rather hard-wired to believe what we (think we) see with our eyes.

In an age of disinformation, people are gradually learning not to believe everything they read – especially when it comes from unverified sources. Teaching people not to necessarily believe the images and video they see is going to pose a serious challenge.

Some hope has been placed on systems to detect and counter deepfakes before they cause harm. Unfortunately, the UC San Diego researchers’ findings somewhat dash those hopes.

“If the attackers have some knowledge of the detection system, they can design inputs to target the blind spots of the detector and bypass it,” said Paarth Neekhara, another co-author on the paper.

In separate research from University College London (UCL) last year, experts ranked what they believe to be the most serious AI threats. Deepfakes ranked top of the list.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” said Dr Matthew Caldwell of UCL Computer Science.

One of the most high-profile deepfake cases so far was that of US House Speaker Nancy Pelosi. In 2019, a deepfake video circulated on social media which made Pelosi appear drunk and slurring her words.

The video of Pelosi was likely created with the intention of being amusing rather than particularly malicious – but it shows how deepfakes could be used to cause disrepute and even influence democratic processes.

As part of a bid to persuade Facebook to change its policies on deepfakes, last year Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Now imagine the precise targeting of content provided by platforms like Facebook combined with deepfakes which can’t be detected… actually, perhaps don’t, it’s a rather squeaky bum thought.

NVIDIA breakthrough emulates images from small datasets for groundbreaking AI training
https://www.artificialintelligence-news.com/2020/12/07/nvidia-emulates-images-small-datasets-ai-training/
Mon, 07 Dec 2020 16:08:23 +0000

NVIDIA’s latest breakthrough emulates new images from existing small datasets with truly groundbreaking potential for AI training.

The company demonstrated its latest AI model using a small dataset – just a fraction of the size typically used for a Generative Adversarial Network (GAN) – of artwork from the Metropolitan Museum of Art.

From the dataset, NVIDIA’s AI was able to create new images which replicate the style of the original artist’s work. These images can then be used to help train further AI models.

The AI achieved this impressive feat by applying a breakthrough neural network training technique to the popular NVIDIA StyleGAN2 model.

The technique is called Adaptive Discriminator Augmentation (ADA) and NVIDIA claims that it reduces the number of training images required by 10-20x while still getting great results.
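At its core, ADA is a feedback loop: measure how badly the discriminator is overfitting, then raise or lower the probability that training images are augmented. The sketch below follows the heuristic described in NVIDIA’s paper (r_t, the mean sign of the discriminator’s outputs on real images, steered toward a 0.6 target), but the discriminator outputs are simulated here rather than coming from a real model – less augmentation is assumed to mean more overfitting and more positive real logits.

```python
import numpy as np

# ADA-style controller: adapt augmentation probability p from an
# overfitting heuristic. Discriminator logits are simulated stand-ins.

rng = np.random.default_rng(0)

p = 0.0                    # probability that each training image is augmented
target, step = 0.6, 0.01   # target for r_t; adjustment per check

def simulated_real_logits(n, p):
    # Stand-in for D(real): stronger augmentation -> less overfitting ->
    # less confidently positive logits on real images.
    return rng.normal(loc=1.5 * (1.0 - p), scale=1.0, size=n)

for _ in range(1000):
    # Overfitting heuristic from the paper: r_t = E[sign(D(real))].
    r_t = float(np.mean(np.sign(simulated_real_logits(256, p))))
    # Strengthen augmentation when D is overconfident on real images,
    # weaken it otherwise; p stays a valid probability.
    p = float(np.clip(p + step * np.sign(r_t - target), 0.0, 1.0))

print(round(p, 2))
```

Because the discriminator only ever sees augmented images (with probability p) and the augmentations are designed to be invertible, the generator never learns to produce the augmentations themselves – which is how ADA stretches a few thousand images into enough signal for training.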

David Luebke, VP of Graphics Research at NVIDIA, said:

“These results mean people can use GANs to tackle problems where vast quantities of data are too time-consuming or difficult to obtain.

I can’t wait to see what artists, medical experts and researchers use it for.”

Healthcare is a particularly exciting field where NVIDIA’s research could be applied. For example, it could help to create cancer histology images to train other AI models.

The breakthrough could help address the issues around many current datasets.

Large datasets are often required for AI training but aren’t always available. And even when they are, it’s difficult to ensure their content is suitable and doesn’t unintentionally lead to algorithmic bias.

Earlier this year, MIT was forced to remove a large dataset called 80 Million Tiny Images. The dataset is popular for training AIs but was found to contain images labelled with racist, misogynistic, and other unacceptable terms.

A statement on MIT’s website claims it was unaware of the offensive labels and they were “a consequence of the automated data collection procedure that relied on nouns from WordNet.”

The statement goes on to explain that the 80 million images in the dataset – at just 32×32 pixels each – made manual inspection almost impossible, and that it couldn’t be guaranteed all offensive images would be removed.

By starting with a small dataset that can be feasibly checked manually, a technique like NVIDIA’s ADA could be used to create new images which emulate the originals and can scale up to the required size for training AI models.

In a blog post, NVIDIA wrote:

“It typically takes 50,000 to 100,000 training images to train a high-quality GAN. But in many cases, researchers simply don’t have tens or hundreds of thousands of sample images at their disposal.

With just a couple thousand images for training, many GANs would falter at producing realistic results. This problem, called overfitting, occurs when the discriminator simply memorizes the training images and fails to provide useful feedback to the generator.”

You can find NVIDIA’s full research paper here (PDF). The paper is being presented at this year’s NeurIPS conference as one of a record 28 NVIDIA Research papers accepted to the prestigious conference.
