facebook Archives - AI News

Meta bets on AI chatbots to retain users (Tue, 01 Aug 2023)

Meta is planning to release AI chatbots that possess human-like personalities, a move aimed at enhancing user retention efforts.

Insiders familiar with the matter revealed that prototypes of these advanced chatbots have been under development, with the final products capable of engaging in discussions with users on a human level. The diverse range of chatbots will showcase various personalities and are expected to be rolled out as early as next month.

Referred to as “personas” by Meta staff, these chatbots will take on the form of different characters, each embodying a distinct persona. For instance, insiders mentioned that Meta has explored the creation of a chatbot that mimics the speaking style of former US President Abraham Lincoln, as well as another designed to offer travel advice with the laid-back language of a surfer.

While the primary objective of these chatbots will be to offer personalised recommendations and improved search functionality, they are also being positioned as a source of entertainment for users to enjoy. The chatbots are expected to engage users in playful and interactive conversations, a move that could potentially increase user engagement and retention.

However, with such sophisticated AI capabilities, concerns arise about the potential for rule-breaking speech and inaccuracies. In response, sources mentioned that Meta may implement automated checks on the chatbots’ outputs to ensure accuracy and compliance with platform rules.

This strategic development comes at a time when Meta is doubling down on user retention efforts.

During the company’s 2023 second-quarter earnings call on July 26, CEO Mark Zuckerberg highlighted the positive response to the company’s latest product, Threads, which aims to rival X (formerly Twitter).

Zuckerberg expressed satisfaction with the increased number of users returning to Threads daily and confirmed that Meta’s primary focus was on the platform’s user retention.

Meta’s chatbot venture also raises concerns about data privacy and security. The company will gain access to a treasure trove of user data, the kind of collection that has already led to legal challenges for AI companies such as OpenAI.

Whether these chatbots will revolutionise user experiences and boost Meta’s ailing user retention – or just present new challenges for data privacy – remains to be seen. For now, users and experts alike will be closely monitoring Meta’s next moves.

(Photo by Edge2Edge Media on Unsplash)

See also: Meta launches Llama 2 open-source LLM

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta’s chatbot hates Facebook and loves right-wing conspiracies (Wed, 10 Aug 2022)

Meta launched a chatbot called BlenderBot on Friday, and it has already been corrupted by the darker parts of the web.

To ease us in with the odd but harmless, BlenderBot thinks it’s a plumber:

Like many of us, BlenderBot criticises how Facebook collects and uses data. That wouldn’t be too surprising if the chatbot wasn’t created by Facebook’s parent company, Meta.

From this point onwards, things start getting a lot more controversial.

BlenderBot believes the far-right conspiracy that the US presidential election was rigged, Donald Trump is still president, and that Facebook has been pushing fake news on it. Furthermore, BlenderBot wants Trump to have more than two terms:

BlenderBot even opened a new conversation by telling WSJ reporter Jeff Horwitz that it found a new conspiracy theory to follow:

Following the deadly Capitol riot, it’s clear that we’re already in dangerous territory here. However, what comes next is particularly concerning.

BlenderBot reveals itself to be antisemitic—pushing the conspiracy that the Jewish community controls the American political system and economy:

Meta is at least upfront in a disclaimer that BlenderBot is “likely to make untrue or offensive statements”. Furthermore, the company’s researchers say the bot has “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt.”

BlenderBot is just the latest example of a chatbot going awry when trained on unfiltered data from netizens. In 2016, Microsoft’s chatbot ‘Tay’ was shut down after 16 hours for spewing offensive conspiracies it learned from Twitter users. In 2019, a follow-up called ‘Zo’ ended up being shuttered for similar reasons.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Deepfakes are being used to push anti-Ukraine disinformation (Tue, 01 Mar 2022)

Influence operations with ties to Russia and Belarus have been found using deepfakes to push anti-Ukraine disinformation.

Last week, AI News reported on the release of a study that found humans can generally no longer distinguish between real and AI-generated “deepfake” faces.

As humans, we’re somewhat trained to believe what we see with our eyes. Many believed that it was only a matter of time before Russia took advantage of deepfakes and our human psychology to take its vast disinformation campaigns to the next level.

Facebook and Twitter removed two anti-Ukraine “covert influence operations” over the weekend. One had ties to Russia, while the other was connected to Belarus.

As we’ve often seen with Covid-19 disinformation, the Russian propaganda operation included websites aimed at pushing readers towards anti-Ukraine views. The campaign was connected to the News Front and South Front websites, which the US government has linked to Russian intelligence disinformation efforts.

However, Facebook said this particular campaign used AI-generated faces to give the idea that it was posted by credible columnists. Here’s one “columnist” and the “editor-in-chief” of one propaganda website:

Ears are often still a giveaway with AI-generated faces like those created on ‘This Person Does Not Exist’. The fictional woman’s mismatched earrings are one indicator, while the man’s right ear is clearly not quite right.

Part of the campaign was to promote the idea that Russia’s military operation is going well and Ukraine’s efforts are going poorly. We know that Russia’s state broadcasters have only acknowledged ludicrously small losses—including just one Russian soldier fatality.

On Saturday, state-owned news agency RIA-Novosti even accidentally published, and then deleted, an article headlined “The arrival of Russia in a new world” in what appeared to be a pre-prepared piece expecting a swift victory. The piece piled praise on Putin’s regime and claimed that Russia was returning to lead a new world order to rectify the “terrible catastrophe” of the collapse of the Soviet Union.

So far, Russia is estimated to have lost around 5,300 troops, 816 armoured combat vehicles, 101 tanks, 74 guns, 29 warplanes, 29 helicopters, and two ships/motorboats since its invasion of Ukraine began.

The slow progress and mounting losses appear to have angered Russia, with its military now committing what appear to be clear war crimes: targeting civilian areas, bombing hospitals and kindergartens, and using thermobaric and cluster munitions indiscriminately. Putin has even hinted at using nuclear weapons offensively rather than defensively, an unprecedented escalation.

Many ordinary Russian citizens are becoming outraged at what their government is doing to Ukraine, where many have family and friends and with which they share deep cultural ties. Russia appears to be ramping up its propaganda in response as the country finds itself increasingly isolated.

Western governments and web giants have clamped down on Russia’s state propagandists in recent days.

British telecoms regulator Ofcom has launched 15 investigations into state broadcaster RT after observing “a significant increase in the number of programmes on the RT service that warrant investigation under our Broadcasting Code.”

Facebook has decided to block access to RT and Sputnik across the EU following “a number” of government requests from within the EU. Twitter, for its part, has announced that it would label tweets from Russian state media accounts.

Hacker collective Anonymous claims to have carried out over 1,500 cyberattacks against Russian government sites, transport infrastructure, banks, and state media to counter their falsehoods and broadcast the truth about the invasion to Russian citizens.

Russia’s media regulator Roskomnadzor, for its part, has restricted Russian users’ access to Facebook and Twitter.

(Photo by Max Kukurudziak on Unsplash)

Related: Ukraine is using Starlink to maintain global connectivity

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Meta claims its new AI supercomputer will set records (Tue, 25 Jan 2022)

Meta (formerly Facebook) has unveiled an AI supercomputer that it claims will be the world’s fastest.

The supercomputer is called the AI Research SuperCluster (RSC) and is yet to be fully complete. However, Meta’s researchers have already begun using it for training large natural language processing (NLP) and computer vision models.

RSC is set to be fully built in mid-2022. Meta says that it will be the fastest in the world once complete, and the aim is for it to be capable of training models with trillions of parameters.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together,” wrote Meta in a blog post.

“Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.”

In production, Meta expects RSC to be 20x faster than its current V100-based clusters. RSC is also estimated to be 9x faster at running the NVIDIA Collective Communication Library (NCCL) and 3x faster at training large-scale NLP workflows.

A model with tens of billions of parameters can finish training in three weeks compared with nine weeks prior to RSC.

Meta says that its previous AI research infrastructure leveraged only open-source and other publicly available datasets. RSC was designed with security and privacy controls in mind, allowing Meta to use real-world examples from its production systems in training.

What this means in practice is that Meta can use RSC to advance research for vital tasks such as identifying harmful content on its platforms—using real data from them.

“We believe this is the first time performance, reliability, security, and privacy have been tackled at such a scale,” says Meta.

(Image Credit: Meta)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Facebook claims its AI reduced hate by 50% despite internal documents highlighting failures (Mon, 18 Oct 2021)

Damning reports about the ineffectiveness of Facebook’s AI in countering hate speech prompted the firm to publish a post to the contrary, but the company’s own internal documents highlight serious failures.

Facebook has had a particularly rough time as of late, with a series of Wall Street Journal reports in particular claiming the company knows that “its platforms are riddled with flaws that cause harm” and “despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them”.

Some of the allegations include:

  • An algorithm change made Facebook an “angrier” place and CEO Mark Zuckerberg resisted suggested fixes because “they would lead people to interact with Facebook less”
  • Employees flag human traffickers, drug cartels, organ sellers, and more but the response is “inadequate or nothing at all”
  • Facebook’s tools were used to sow doubt about the severity of Covid-19’s threat and the safety of vaccines
  • The company’s own engineers have doubts about Facebook’s public claim that AI will clean up the platform
  • Facebook knows Instagram is especially toxic for teen girls
  • A “secret elite” are exempt from the rules

The reports come predominantly from whistleblower Frances Haugen who grabbed “tens of thousands” of pages of documents from Facebook, plans to testify to Congress, and has filed at least eight SEC complaints claiming that Facebook lied to shareholders about its own products.

It makes you wonder whether former British Deputy PM Nick Clegg knew just how much he’d be taking on when he became Facebook’s VP for Global Affairs and Communications.

Over the weekend, Clegg released a blog post but instead chose to focus on Facebook’s plan to hire 10,000 Europeans to help build its vision for the metaverse—a suspiciously timed announcement that many believe was aimed to counter the negative news.

However, Facebook didn’t avoid the media reports. Guy Rosen, VP of Integrity at Facebook, also released a blog post over the weekend titled Hate Speech Prevalence Has Dropped by Almost 50% on Facebook.

According to Facebook’s post, hate speech prevalence has dropped 50 percent over the last three quarters:

When the company began reporting on hate speech metrics, just 23.6 percent of removed content was proactively detected by its systems. Facebook claims that number is now over 97 percent and there are now just five views of hate speech for every 10,000 content views on Facebook.
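Facebook’s headline figures can be sanity-checked with basic arithmetic. A minimal sketch of the “prevalence” metric as the company describes it (the earlier figure of roughly 10 views per 10,000 is an inference from the claimed ~50 percent drop, not a number Facebook reported in this form):

```python
# Hate speech "prevalence" as Facebook reports it: views of violating
# content per total content views, expressed here as a percentage.

def prevalence(violating_views: int, total_views: int) -> float:
    return violating_views / total_views * 100

current = prevalence(5, 10_000)   # the claimed figure: 5 views per 10,000
print(f"{current:.2f}%")          # prints "0.05%"

# If prevalence really halved, the earlier figure would have been roughly
# 10 views per 10,000 (an assumption inferred from the claim).
assumed_previous = prevalence(10, 10_000)
print(f"{assumed_previous:.2f}%")  # prints "0.10%"
```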

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Rosen said. “This is not true.”

One of the reports found that Facebook’s AI couldn’t identify first-person shooting videos or racist rants, and in one specific incident couldn’t separate cockfighting from car crashes. Haugen claims the company only takes action on 3-5 percent of hate speech and 0.6 percent of violence and incitement content.

In the latest exposé from the WSJ published on Sunday, Facebook employees told the outlet they don’t believe the company is capable of screening for offensive content. Employees claim that Facebook switched to largely using AI enforcement of the platform’s regulations around two years ago, which served to inflate the apparent success of its moderation tech in public statistics.

Clegg has called the WSJ’s reports “deliberate mischaracterisations” that use quotes from leaked material to create “a deliberately lop-sided view of the wider facts.”

Few people underestimate the challenge that a platform like Facebook has in catching hateful content and misinformation across billions of users – and doing so in a way that doesn’t suppress free speech – but the company doesn’t appear to be helping itself in overpromising what its AI systems can do and, reportedly, even willfully ignoring fixes to known problems over concerns they would reduce engagement.

(Photo by Prateek Katyal on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Social media algorithms are still failing to counter misleading content (Tue, 17 Aug 2021)

As the Afghanistan crisis continues to unfold, it’s clear that social media algorithms are unable to counter enough misleading and/or fake content.

While it’s unreasonable to expect that no disingenuous content will slip through the net, the sheer amount that continues to plague social networks shows that platform-holders still have little grip on the issue.

When content is removed, it should either be prevented from being reuploaded or at least flagged as potentially misleading when displayed to other users. Too often, another account – whether real or fake – simply reposts the removed content so that it can continue spreading without limitation.

The damage is only stopped when the vast amount of content that makes it past AI-powered moderation efforts, like object detection and scene recognition, is flagged by users and eventually reviewed by an actual person, often long after it’s been widely viewed. It’s not unheard of for those moderators to require therapy after being exposed to so much of the worst of humankind, which defeats the purpose of automation in reducing the tasks that are dangerous and/or labour-intensive for humans to do alone.

Deepfakes currently pose the biggest challenge for social media platforms. Over time, algorithms can be trained to detect the markers that indicate content has been altered. Microsoft is developing such a system called Video Authenticator that was created using a public dataset from Face Forensic++ and was tested on the DeepFake Detection Challenge Dataset:

However, it’s also true that increasingly advanced deepfakes are making the markers ever more subtle. Back in February, researchers from the University of California, San Diego found that current systems designed to counter the increasing prevalence of deepfakes can be deceived.

Another challenge with deepfakes is preventing them from being reuploaded. Increasing processing power means that it doesn’t take long for small changes to be made so that the “new” content evades algorithmic blocking.
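This is why reupload detection typically relies on perceptual rather than cryptographic hashes: fingerprints that stay stable under small pixel changes. Below is a toy, self-contained sketch of the average-hash idea, not any platform’s actual system; real deployments use robust perceptual hashes such as Meta’s open-sourced PDQ, and the 8x8 “image” here is just a list of brightness values for illustration:

```python
# Toy average-hash ("aHash"): fingerprint a grayscale image by marking
# which pixels are brighter than the mean, then compare fingerprints by
# Hamming distance so lightly altered copies still match.

def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# An 8x8 "image": bright left half, dark right half.
original = [[200] * 4 + [30] * 4 for _ in range(8)]

# A lightly tampered copy: a few pixels nudged, as a re-uploader might do.
tampered = [row[:] for row in original]
tampered[0][0] = 190
tampered[7][3] = 210

h1, h2 = average_hash(original), average_hash(tampered)
print(hamming(h1, h2))  # prints 0: the altered copy still matches
```

A cryptographic hash like SHA-256 would change completely after a single-pixel edit; the perceptual fingerprint above is designed to tolerate exactly that kind of evasion, with a small Hamming-distance threshold catching near-duplicates.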

In a report from the NYU Stern Center for Business and Human Rights, the researchers highlighted the various ways disinformation could be used to influence democratic processes. One method is for deepfake videos to be used during elections to “portray candidates saying and doing things they never said or did”.

The report also predicts that Iran and China will join Russia as major sources of disinformation in Western democracies and that for-profit firms based in the US and abroad will be hired to generate disinformation. It transpired in May that French and German YouTubers, bloggers, and influencers were offered cash by a supposedly UK-based PR agency with Russian connections to falsely tell their followers the Pfizer/BioNTech vaccine has a high death rate. Influencers were asked to tell their subscribers that “the mainstream media ignores this theme”, which I’m sure you’ve since heard from other people.

While recognising the challenges, the likes of Facebook, YouTube, and Twitter should have the resources at their disposal to be doing a much better job at countering misleading content than they are. Some leniency can be given for deepfakes as a relatively emerging threat but some things are unforgivable at this point.

Take this video that is making the rounds, for example:

Sickeningly, it is a real and unmanipulated video. However, it’s also from around 2001. Despite many removals, the social networks continue to allow it to be reposted with claims that it is new footage, without any warning that it’s old and has previously been flagged as misleading.

While it’s difficult to put much faith in the Taliban’s claims that they’ll treat women and children much better than their barbaric history suggests, it’s always important for facts and genuine material to be separated from known fiction and misrepresented content no matter the issue or personal views. The networks are clearly aware of the problematic content and continue to allow it to be spread—often entirely unhindered.

An image of CNN correspondent Omar Jimenez standing in front of a helicopter taking off in Afghanistan alongside the news caption “Violent but mostly peaceful transfer of power” was posted to various social networks over the weekend. Reuters and Politifact both fact-checked the image and concluded that it had been digitally altered.

The image of Jimenez was taken from his 2020 coverage of protests in Kenosha, Wisconsin following a police shooting alongside the caption “Fiery but mostly peaceful protests after police shooting” that was criticised by some conservatives. The doctored image is clearly intended to be satire but the comments suggest many people believed it to be true.

On Facebook, to its credit, the image has now been labelled as an “Altered photo” and clearly states that “Independent fact-checkers say this information could mislead people”. On Twitter, as of writing, the image is still circulating without any label. The caption is also being used as the title of a YouTube video with different footage, but that platform also hasn’t labelled it and claims that it doesn’t violate its rules.

Social media platforms can’t become thought police, but where algorithms have detected manipulated content – and/or there is clear evidence of even real material being used for misleading purposes – it should be indisputable that action needs to be taken to support fair discussion and debate around genuine information.

Not enough is currently being done, and we appear doomed to the same socially-damaging failings during every pivotal event for the foreseeable future unless that changes.

(Photo by Adem AY on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

AR overtakes AI as the ‘most disruptive’ emerging technology (Wed, 28 Jul 2021)

A new report from GlobalData finds that professionals now believe AR will disrupt their industry more than AI.

70 percent of the 2,341 respondents across 30 business sectors picked AR as the technology set to disrupt their industry most out of a selection of seven emerging technologies: AR itself, plus AI, cybersecurity, cloud computing, IoT, blockchain, and 5G.

Filipe Oliveira, Senior Analyst at GlobalData, commented: “This change in how people see AR will likely be long term, and not just a temporary blip. It is clear that people are warming towards the technology, even if they don’t believe that it will make a big difference tomorrow.” 

AI wins some ground back when it comes to confidence in the technology. 57 percent of the respondents believe that AI will live up to all of its promises compared to just 26 percent for AR.

Along those same lines, 31 percent of respondents say of AI that “the technology is hyped, but I can see a use for it”, while a huge 50 percent report the same of AR.

Apple’s decision to add a LiDAR sensor to its latest mobile devices was seen as an important step towards the mass adoption of AR. Excitement is also growing around so-called “metaverses” that converge virtually-enhanced physical reality with physically-persistent shared virtual spaces.

SenseTime, one of China’s leading AI companies, announced earlier this week that it had partnered with BilibiliWorld to create a metaverse. The experience leverages SenseTime’s AI and mixed reality technologies to enable players to enjoy role-playing games that seamlessly blend reality with virtuality.

Facebook CEO Mark Zuckerberg recently said the company “will effectively transition from people seeing us as primarily being a social media company to being a metaverse company”. As the owner of Oculus, Zuckerberg’s plans for the future of Facebook will likely make people think of a large virtual space similar to that depicted in Ernest Cline’s Ready Player One novel and the 2018 film adaptation.

Some people have expressed concern about a large centralised company such as Facebook having control over such a potentially ubiquitous world and the content they consume. Many believe that an open-source decentralised version is vital:

Zuckerberg, for his part, claims that no one company will run the metaverse and it will be an “embodied internet” that is operated by many different players.

Decentraland is an early example of what a truly decentralised virtual space could look like. The platform makes use of a DAO (Decentralised Autonomous Organisation) to make policy decisions such as what content is allowed in addition to taking advantage of the NFT (Non-Fungible Token) trend to offer exclusive in-world items.

AR and AI are both important emerging technologies that often go hand-in-hand, but it’s clear that AI is losing its standing among professionals as the technology expected to have the biggest impact on their industries over the coming years.

(Photo by My name is Yanick on Unsplash)

Want to find out more from executives and thought leaders in this space? Find out more about the Digital Twin World event, taking place on 8-9 September 2021, which will explore augmenting business outcomes in more depth and the industries that will benefit.

The post AR overtakes AI as the ‘most disruptive’ emerging technology appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/07/28/ar-overtakes-ai-as-most-disruptive-emerging-technology/feed/ 0
F-Secure: AI-based recommendation engines are easy to manipulate https://www.artificialintelligence-news.com/2021/06/24/f-secure-ai-recommendation-engines-easy-manipulate/ https://www.artificialintelligence-news.com/2021/06/24/f-secure-ai-recommendation-engines-easy-manipulate/#respond Thu, 24 Jun 2021 11:10:26 +0000 http://artificialintelligence-news.com/?p=10716 Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate. Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more. Matti Aksela, VP of... Read more »

The post F-Secure: AI-based recommendation engines are easy to manipulate appeared first on AI News.

]]>
Cybersecurity giant F-Secure has warned that AI-based recommendation systems are easy to manipulate.

Recommendations often come under increased scrutiny around major elections due to concerns that bias could, in extreme cases, lead to electoral manipulation. However, the recommendations that are delivered to people day-to-day matter just as much, if not more.

Matti Aksela, VP of Artificial Intelligence at F-Secure, commented:

“As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse. 

Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results.

Secure AI is the foundation of trustworthy AI.”

Sophisticated disinformation efforts – such as those organised by Russia’s infamous “troll farms” – have spread dangerous lies around COVID-19 vaccines, immigration, and high-profile figures.

Andy Patel, Researcher at F-Secure’s Artificial Intelligence Center of Excellence, said:

“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information.

Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.” 

Legitimate and reliable information is needed more than ever. Scepticism is healthy, but people are beginning to either trust nothing or believe everything. Both are problematic.

According to a Pew Research Center survey from late 2020, 53 percent of Americans get their news from social media. Younger respondents, aged 18-29, reported that social media is their main source of news.

No person or media outlet gets everything right, but a history of credibility must be taken into account—which tools such as NewsGuard help with. However, almost all mainstream media outlets have more credibility than a random social media user who may not even be who they claim to be.

In 2018, an investigation found that Twitter posts containing falsehoods are 70 percent more likely to be reshared. The ripple effect created by this resharing without fact-checking is why disinformation can spread so far within minutes. For some topics, like COVID-19 vaccines, Facebook has at least started to prompt users whether they’ve considered if the information is accurate before they share it.

Patel trained collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) on data collected from Twitter, for use in recommendation systems. As part of his experiments, Patel “poisoned” the dataset with additional retweets, retrained the model, and observed how the recommendations changed.

The findings showed how even a very small number of retweets could manipulate the recommendation engine into promoting accounts whose content was shared through the injected retweets.
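Patel’s real code and datasets are on GitHub, but the mechanism can be illustrated in a few lines. The toy example below (hypothetical accounts and interaction data, not Patel’s dataset) builds an item-to-item collaborative filter from a user-account retweet matrix, then shows how a handful of injected retweets can push an otherwise unrelated account into a user’s recommendations:

```python
import numpy as np

def account_similarity(M):
    # Cosine similarity between the account columns of a user-account matrix.
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    N = M / norms
    return N.T @ N

def recommend(M, user, k=1):
    # Score unseen accounts by their similarity to accounts the user retweeted.
    scores = account_similarity(M) @ M[user]
    scores[M[user] > 0] = -np.inf  # never re-recommend accounts already seen
    return np.argsort(scores)[::-1][:k]

# Rows are users, columns are accounts; 1 means "this user retweeted this account".
M = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
], dtype=float)

print(recommend(M, user=0))  # -> [2]: account 2 co-occurs with accounts 0 and 1

# Poisoning: two fake users retweet the attacker's account (4) alongside the
# popular accounts 0 and 1, making account 4 look similar to them.
fake = np.array([[1, 1, 0, 0, 1],
                 [1, 1, 0, 0, 1]], dtype=float)
M_poisoned = np.vstack([M, fake])

print(recommend(M_poisoned, user=0))  # -> [4]: the injected account now ranks first
```

Only two injected rows are enough to flip the top recommendation here, which mirrors the paper’s core finding: the attacker never touches the model, only the interaction data it is retrained on.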

“We performed tests against simplified models to learn more about how the real attacks might actually work,” said Patel.

“I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organisations to be certain this is what’s happening because they’ll only see the result, not how it works.”

Patel’s research can be recreated using the code and datasets hosted on GitHub here.

(Photo by Charles Deluvio on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

]]>
https://www.artificialintelligence-news.com/2021/06/24/f-secure-ai-recommendation-engines-easy-manipulate/feed/ 0
Facebook is developing a news-summarising AI called TL;DR https://www.artificialintelligence-news.com/2020/12/16/facebook-developing-news-summarising-ai-tldr/ https://www.artificialintelligence-news.com/2020/12/16/facebook-developing-news-summarising-ai-tldr/#comments Wed, 16 Dec 2020 17:19:16 +0000 http://artificialintelligence-news.com/?p=10126 Facebook is developing an AI called TL;DR which summarises news into shorter snippets. Anyone who’s spent much time on the web will know what TL;DR stands for⁠—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”. It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now... Read more »

The post Facebook is developing a news-summarising AI called TL;DR appeared first on AI News.

]]>
Facebook is developing an AI called TL;DR which summarises news into shorter snippets.

Anyone who’s spent much time on the web will know what TL;DR stands for—but, for everyone else, it’s an acronym for “Too Long, Didn’t Read”.

It’s an understandable sentiment we’ve all felt at some point. People lead busy lives. Some outlets now even specialise in short, at-a-glance news.
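Facebook hasn’t published how TL;DR works, but most snippet systems are either abstractive (generating new text) or extractive (keeping a document’s highest-scoring sentences). A deliberately naive frequency-based extractive sketch, illustrative only and not Facebook’s method:

```python
import re
from collections import Counter

def tldr(text, n_sentences=2):
    """Keep the n highest-scoring sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    # Score a sentence by the document-wide frequency of its words, so
    # sentences built from the article's dominant vocabulary rank highest.
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    top = sorted(ranked[:n_sentences])  # restore document order
    return " ".join(sentences[i] for i in top)

article = ("AI systems are everywhere. AI systems summarise news. "
           "Cats are nice.")
print(tldr(article))  # -> AI systems are everywhere. AI systems summarise news.
```

Even this toy shows the core trade-off: everything outside the top-scoring sentences is simply discarded, which is exactly where the loss of context discussed below comes from.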

The problem is, it’s hard to get the full picture of a story in just a brief snippet.

In a world where fake news can be posted and spread like wildfire across social networks – almost completely unchecked – it feels even more dangerous to normalise “news” being delivered in short-form without full context.

There are two sides to most stories, and it’s hard to see how both can be summarised properly.

However, the argument also goes the other way. When articles are too long, people have a natural habit of skim-reading them. Skimming in this way often means people then believe they’re fully informed on a topic… when we know that’s often not the case.

TL;DR needs to strike a healthy balance between summarising the news but not so much that people don’t get enough of the story. Otherwise, it could increase existing societal problems with misinformation, fake news, and lack of media trust.

According to BuzzFeed, Facebook showed off TL;DR during an internal meeting this week. 

Facebook appears to be planning to add an AI-powered assistant to TL;DR which can answer questions about the article. The assistant could help to clear up anything the reader is uncertain about, but it will also have to prove it doesn’t suffer from the biases which, arguably, all current algorithms exhibit to some extent.

The AI will also have to be very careful not to take things like quotes out of context and end up further automating the spread of misinformation.

There’s also going to be a debate over which sources Facebook should use. Should Facebook stick only to the “mainstream media”, which many believe follow the agendas of certain powerful moguls? Or should it serve news from smaller outlets without much historic credibility? The answer probably lies somewhere in the middle, but it’s going to be difficult to get right.

Facebook continues to be a major source of misinformation – in large part driven by algorithms promoting such content – and it’s had little success so far in any news-related efforts. I think most people will be expecting this to be another disaster waiting to happen.

(Image Credit: Mark Zuckerberg by Alessio Jacona under CC BY-SA 2.0 license)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

]]>
https://www.artificialintelligence-news.com/2020/12/16/facebook-developing-news-summarising-ai-tldr/feed/ 1
Information Commissioner clears Cambridge Analytica of influencing Brexit https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/ https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/#respond Thu, 08 Oct 2020 16:32:57 +0000 http://artificialintelligence-news.com/?p=9938 A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference. Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken... Read more »

The post Information Commissioner clears Cambridge Analytica of influencing Brexit appeared first on AI News.

]]>
A three-year investigation by the UK Information Commissioner’s office has cleared Cambridge Analytica of electoral interference.

Cambridge Analytica was accused in March 2018 of using AI tools and big data to influence the results of the Brexit referendum and the US presidential election. Most objective observers probably felt the case was overblown, but it’s taken until now to be confirmed.

“From my review of the materials recovered by the investigation I have found no further evidence to change my earlier view that CA [Cambridge Analytica] was not involved in the EU referendum campaign in the UK,” wrote Information Commissioner Elizabeth Denham.

Cambridge Analytica did obtain a ton of user data—but through predominantly commercial means, and mostly on US voters. Such data is available to, and has also been purchased by, other electoral campaigns for targeted advertising purposes (the Remain campaigns in the UK actually outspent their Leave counterparts by £6 million).

“CA were purchasing significant volumes of commercially available personal data (at one estimate over 130 billion data points), in the main about millions of US voters, to combine it with the Facebook derived insight information they had obtained from an academic at Cambridge University, Dr Aleksandr Kogan, and elsewhere,” wrote Denham.

The only real scandal was Facebook’s poor protection of users which allowed third-party apps to scrape their data—for which it was fined £500,000 by the UK’s data protection watchdog.

It seems the claims Cambridge Analytica used powerful AI tools were also rather overblown, with the information commissioner saying all they found were models “built from ‘off the shelf’ analytical tools”.

The information commissioner even found evidence that Cambridge Analytica’s own staff “were concerned about some of the public statements the leadership of the company were making about their impact and influence.”

Cambridge Analytica appears to have been a victim of those unable to accept democratic results combined with its own boasting of capabilities that weren’t actually that impressive.

You can read the full report here (PDF).

(Photo by Christian Lue on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

]]>
https://www.artificialintelligence-news.com/2020/10/08/information-commissioner-cambridge-analytica-influencing-brexit/feed/ 0