Ukraine Archives - AI News

President Zelenskyy deepfake asks Ukrainians to ‘lay down arms’
17 March 2022

A deepfake of President Zelenskyy calling on citizens to “lay down arms” was posted to a hacked Ukrainian news website and shared across social networks.

The deepfake purports to show Zelenskyy declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s efforts had failed.

Following an alleged hack, the deepfake was first posted to the website of Ukrainian news outlet TV24. It was then shared across social networks, including Facebook and Twitter.

Nathaniel Gleicher, Head of Security Policy for Facebook owner Meta, wrote in a tweet:

“Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did.

It appeared on a reportedly compromised website and then started showing across the internet.”

The deepfake itself is poor by today’s standards, with fake Zelenskyy having a comically large and noticeably pixelated head compared to the rest of his body.

It shouldn’t have fooled anyone, but Zelenskyy posted a video to his Instagram to call out the fake anyway.

“I only advise that the troops of the Russian Federation lay down their arms and return home,” Zelenskyy said in his official video. “We are at home and defending Ukraine.”

Earlier this month, the Ukrainian government posted a statement warning soldiers and civilians not to believe any videos of Zelenskyy claiming to surrender:

“Imagine seeing Vladimir Zelensky on TV making a surrender statement. You see it, you hear it – so it’s true. But this is not the truth. This is deepfake technology.

This will not be a real video, but created through machine learning algorithms.

Videos made through such technologies are almost impossible to distinguish from the real ones.

Be aware – this is a fake! The goal is to disorient, sow panic, disbelieve citizens, and incite our troops to retreat.”

Fortunately, this deepfake was quite easy to spot – even though research suggests humans now often cannot distinguish deepfakes from genuine footage – and it could actually help to raise awareness of how such content is used to influence and manipulate.

Earlier this month, AI News reported on how Facebook and Twitter removed two anti-Ukraine disinformation campaigns linked to Russia and Belarus. One of the campaigns even used AI-generated faces for a fake “editor-in-chief” and “columnist” for a linked propaganda website.

Both cases in the past month show the danger of deepfakes and the importance of raising public awareness and developing tools for countering such content before it’s able to spread.

(Image Credit: President.gov.ua used without changes under CC BY 4.0 license)

Ukraine harnesses Clearview AI to uncover assailants and identify the fallen
14 March 2022

Ukraine is using Clearview AI’s facial recognition software to uncover Russian assailants and identify Ukrainians who’ve sadly lost their lives in the conflict.

The company’s chief executive, Hoan Ton-That, told Reuters that Ukraine’s defence ministry began using the software on Saturday.

Clearview AI’s facial recognition system is controversial but indisputably powerful—using billions of images scraped from the web to identify just about anyone. Ton-That says that Clearview has more than two billion images from Russian social media service VKontakte alone.
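
At a high level, this kind of identification comes down to nearest-neighbour search: every scraped photo is converted into a numeric face embedding, and a query face is matched against the closest stored vectors. The sketch below illustrates only that matching step using NumPy and random stand-in vectors; the embedding model, the similarity threshold, and the tiny in-memory database are illustrative assumptions, and none of it reflects Clearview AI’s actual system.

```python
# Illustrative sketch of the nearest-neighbour matching step behind
# large-scale face identification. It assumes photos have already been
# converted to embedding vectors by some face-recognition model (not
# shown); nothing here reflects Clearview AI's actual implementation.
import numpy as np

def cosine_similarities(query, database):
    """Cosine similarity between one query vector and each row of database."""
    query = query / np.linalg.norm(query)
    database = database / np.linalg.norm(database, axis=1, keepdims=True)
    return database @ query

def identify(query_embedding, known_embeddings, known_names, threshold=0.6):
    """Return the best-matching name, or None if no match clears the threshold."""
    scores = cosine_similarities(query_embedding, known_embeddings)
    best = int(np.argmax(scores))
    return known_names[best] if scores[best] >= threshold else None

# Toy usage with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
known_embeddings = rng.normal(size=(1000, 128))
known_names = [f"person_{i}" for i in range(1000)]
print(identify(known_embeddings[42] + 0.01 * rng.normal(size=128),
               known_embeddings, known_names))
```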

Reuters says that Ton-That sent a letter to Ukrainian authorities offering Clearview AI’s assistance. The letter said the software could help with identifying undercover Russian operatives, reuniting refugees with their families, and debunking misinformation.

Clearview AI’s software is reportedly effective even where there is facial damage or decomposition.

Ukraine is now reportedly using the facial recognition software for free, but the same offer has not been extended to Russia.

Russia has been widely condemned for its illegal invasion and increasingly brutal methods that are being investigated as likely war crimes. The Russian military has targeted not just the Ukrainian military but also civilians and even humanitarian corridors established to help people fleeing the conflict.

In response, many private companies have decided to halt or limit their operations in Russia, while others are offering assistance to Ukraine in areas like cybersecurity and satellite internet access.

Clearview AI’s assistance could generate some positive PR for a company that is used to criticism.

Aside from its dystopian and invasive use of mass data scraped from across the web, the company has some potential far-right links.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued. Ekeland, it’s worth noting, gained notoriety as “The Troll’s Lawyer” after defending clients including self-described neo-Nazi troll Andrew Auernheimer.

Global regulators have increasingly clamped down on Clearview AI.

In November 2021, the UK’s Information Commissioner’s Office (ICO) announced a provisional fine of just over £17 million against Clearview AI and provisionally ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

“I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes,” said Australia’s Information Commissioner Angelene Falk at the time.

However, Clearview AI has boasted that police use of its facial recognition system increased 26 percent in the wake of the US Capitol riot.

Clearview AI’s operations in Ukraine could prove to be a positive case study, but whether it’s enough to offset the privacy concerns remains to be seen.

(Photo by Daniele Franchi on Unsplash)

Deepfakes are being used to push anti-Ukraine disinformation
1 March 2022

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen around things like Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign was connected to the News Front and South Front websites, which the US government has tied to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to make its content appear to come from credible columnists, including a fake “columnist” and the “editor-in-chief” of one propaganda website.

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published and then deleted an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia is returning to lead a new world order to rectify the “terrible catastrophe” that was the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships or boats since invading Ukraine.

The slow progress and mounting losses appear to have angered Russia, whose military is now committing what appear to be clear war crimes—targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively in an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends and share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity

James Cameron warns of the dangers of deepfakes
24 January 2022

Legendary director James Cameron has warned of the dangers that deepfakes pose to society.

Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.
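
For face-swap video specifically, one widely used approach (not tied to any incident covered here) trains an autoencoder with a shared encoder and a separate decoder per identity, then swaps decoders at inference time so one person’s expressions are rendered with another person’s face. The PyTorch sketch below shows that architecture only; the layer sizes, 64×64 input resolution, and the absent training loop are all illustrative simplifications rather than a working deepfake tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# behind many classic face-swap deepfakes. Layer sizes are illustrative;
# a usable model needs far more capacity, aligned face crops, and a full
# training loop with a reconstruction (and often adversarial) loss.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs each identity through its own decoder; swapping
# decoders afterwards maps face A's expressions onto face B's appearance.
face_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(face_a))
```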

“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.

“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”

Have you ever said “I’ll believe it when I see it with my own eyes,” or similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).

The problem is amplified by today’s fast news cycle. It’s well known that many articles get shared based on their headline alone before readers move on to the next story. Few people are going to stop and analyse images and videos for small imperfections.

Often stories are shared with a reaction to the headline alone, without reading the article for full context. This can create a cascade in which people see their contacts’ reactions, feel they don’t need additional context, and simply join in whatever emotional response the headline was designed to provoke (generally outrage).

“News cycles happen so fast, and people respond so quickly, you could have a major incident take place in the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.

“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”

It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. However, an article featuring convincing deepfake media will be almost impossible to stop from being shared widely.

AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.
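
A typical detector of this kind is simply a binary classifier fine-tuned to separate real from manipulated face crops. The sketch below illustrates that idea with a pretrained ResNet-18 in PyTorch; the `face_crops/` dataset layout, batch size, and learning rate are illustrative assumptions, not any specific published tool.

```python
# Minimal sketch of a "real vs fake" classifier of the kind deepfake
# detectors are built around. Assumes PyTorch and torchvision >= 0.13,
# plus a folder of face crops arranged as face_crops/train/real and
# face_crops/train/fake -- the layout and hyperparameters are
# illustrative, not taken from the article.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one subdirectory per class, e.g. real/ and fake/.
train_set = datasets.ImageFolder("face_crops/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one fine-tuning pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```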

Images and videos that can be verified as original and authentic using technologies like distributed ledgers could also help give audiences confidence that the media they’re consuming isn’t a manipulated version and that they really can trust their own eyes.
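
In practice, this kind of verification reduces to recording a cryptographic fingerprint of the original file at publication time and checking later copies against it. The sketch below uses a plain Python dictionary as a stand-in for the ledger lookup; the function names and registry structure are assumptions for illustration only.

```python
# A minimal sketch of content-provenance checking. It assumes a trusted
# registry (for example, entries anchored on a distributed ledger) that
# records a file's SHA-256 digest when it is first published; the dict
# below is an illustrative stand-in for that ledger.
import hashlib

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

registry = {}  # stand-in for a ledger: filename -> digest recorded at publication

def register_original(path, name):
    """Record the digest of the original file at publication time."""
    registry[name] = file_sha256(path)

def is_unmodified(path, name):
    """True only if the local copy matches the digest recorded at publication."""
    recorded = registry.get(name)
    return recorded is not None and recorded == file_sha256(path)
```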

In the meantime, Cameron suggests using Occam’s razor—a problem-solving principle that can be summarised as: the simplest explanation is usually the likeliest.

“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.

“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”

However, Cameron admits his scepticism of new technology.

“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’

“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”

Of course, Skynet gets an honorary mention.

“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.

“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”

Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.

A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.

“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report stated, adding that “they previously were devoted to supporting Russian actions in Ukraine.”

Western officials have warned that Russia may use disinformation campaigns – including false claims of an attack by Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not out of the realms of possibility that manipulated content will play a role – and by then it could be too late to counter the first large-scale disaster supported by deepfakes.

Related: University College London: Deepfakes are the ‘most serious’ AI crime threat

(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)
