dangers Archives – AI News
https://www.artificialintelligence-news.com/tag/dangers/

AI ‘godfather’ warns of dangers and quits Google (Tue, 02 May 2023)
https://www.artificialintelligence-news.com/2023/05/02/ai-godfather-warns-dangers-and-quits-google/

Geoffrey Hinton, known as the “Godfather of AI,” has expressed concerns about the potential dangers of AI and left his position at Google to discuss them openly.

Hinton, alongside Yoshua Bengio and Yann LeCun, won the Turing Award in 2018 for foundational work on deep neural networks. He had been working at Google since 2013 but resigned to speak out about the fast pace of AI development and the risks it poses.

In an interview with The New York Times, Hinton warned that the rapid development of generative AI products was “racing towards danger” and that false text, images, and videos created by AI could lead to a situation where average people “would not be able to know what is true anymore.”

Hinton also expressed concerns about the impact of AI on the job market, as machines could eventually replace roles such as paralegals, personal assistants, and translators.

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” said Hinton.

Hinton’s concerns are not unfounded. AI has already been used to create deepfakes: synthetic videos that manipulate a person’s face and voice to make it appear they said something they never said. These deepfakes can be used to spread misinformation or damage a person’s reputation.

Furthermore, AI has the potential to automate many jobs, leading to job losses. This week, IBM CEO Arvind Krishna said that the company plans to use AI to replace around 30 percent of back office jobs—equivalent to around 7,800 jobs.

Hinton is not alone in his concerns. Other experts have also warned about the risks of AI.

Elon Musk, the CEO of Tesla and SpaceX, has called AI “our biggest existential threat”, a view the legendary astrophysicist Neil deGrasse Tyson later said he shares. And before his death, Stephen Hawking warned that AI could replace humans as the dominant species on Earth.

In March, Musk joined Apple co-founder Steve Wozniak and over 1,000 other experts in signing an open letter calling for a six-month halt to “out-of-control” AI development.

However, some experts believe that AI can be developed in a way that benefits society. For example, AI can be used to diagnose diseases, detect fraud, and reduce traffic accidents.

To ensure that AI is developed in a responsible and ethical manner, many organisations, including the IEEE, the EU, and the OECD, have published guidelines.

The concerns raised by Hinton are significant and highlight the need for responsible AI development. While AI has the potential to bring many benefits to society, it is crucial that it is developed in a way that minimises the risks and maximises the benefits.

(Image Credit: Eviatar Bach under CC BY-SA 3.0 license. Image cropped from original.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

James Cameron warns of the dangers of deepfakes (Mon, 24 Jan 2022)
https://www.artificialintelligence-news.com/2022/01/24/james-cameron-warns-of-the-dangers-of-deepfakes/

Legendary director James Cameron has warned of the dangers that deepfakes pose to society.

Deepfakes leverage machine learning and AI techniques to convincingly manipulate or generate visual and audio content. Their high potential to deceive makes them a powerful tool for spreading disinformation, committing fraud, trolling, and more.

“Every time we improve these tools, we’re actually in a sense building a toolset to create fake media — and we’re seeing it happening now,” said Cameron in a BBC video interview.

“Right now the tools are — the people just playing around on apps aren’t that great. But over time, those limitations will go away. Things that you see and fully believe you’re seeing could be faked.”

Have you ever said “I’ll believe it when I see it with my own eyes,” or similar? I certainly have. As humans, we’re subconsciously trained to believe what we can see (unless it’s quite obviously faked).

The problem is amplified by today’s fast news cycle. It’s well known that many articles are shared on the strength of their headline alone before readers move on to the next story. Few people are going to stop and analyse images and videos for small imperfections.

Stories are often shared with a reaction to the headline, without reading on for the full context. This can snowball: people see their contacts’ reactions to the headline, feel they don’t need additional context, and simply join in whatever emotional response the headline was designed to invoke (generally outrage).

“News cycles happen so fast, and people respond so quickly, you could have a major incident take place in the interval between when the deepfake drops and when it’s exposed as a fake,” says Cameron.

“We’ve seen situations — you know, Arab Spring being a classic example — where with social media, the uprising was practically overnight.”

It’s a difficult problem to tackle as it is. We’ve all seen the amount of disinformation around things such as the COVID-19 vaccines. An article bolstered by convincing deepfake media will be almost impossible to stop from spreading widely.

AI tools for spotting the increasingly small differences between real and manipulated media will be key to preventing deepfakes from ever being posted. However, researchers have found that current tools can easily be deceived.
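One reason is the existence of adversarial examples: tiny, carefully chosen pixel perturbations that flip a detector’s verdict while remaining invisible to people. Below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch; the detector, its class ordering, and the epsilon value are illustrative assumptions rather than any specific study’s setup.

```python
# Minimal FGSM sketch: nudging a fake frame so a detector scores it as real.
# `detector` is a hypothetical PyTorch classifier with logits [real, fake];
# epsilon bounds the per-pixel change so the edit stays imperceptible.
import torch
import torch.nn.functional as F

def fgsm_evade(detector: torch.nn.Module,
               frame: torch.Tensor,      # shape (C, H, W), values in [0, 1]
               epsilon: float = 0.01) -> torch.Tensor:
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame.unsqueeze(0))            # shape (1, 2)
    # Maximise the loss against the true label ('fake' = class index 1)...
    loss = F.cross_entropy(logits, torch.tensor([1]))
    loss.backward()
    # ...by stepping along the sign of the input gradient, pushing the
    # frame away from the detector's 'fake' decision boundary.
    adversarial = frame + epsilon * frame.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A single step like this is often enough to fool an undefended classifier, which is why detection alone is unlikely to be a complete answer.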

Technologies like distributed ledgers could also help by verifying that images and videos are original and authentic, giving audiences confidence that the media they’re consuming isn’t a manipulated version and that they really can trust their own eyes.
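As a rough sketch of how that could work (a plain in-memory dictionary stands in for the distributed ledger here, and every name is an illustrative assumption), provenance reduces to publishing a cryptographic fingerprint of the original file at release time and letting anyone recompute and compare it later:

```python
# Hash-based media provenance sketch. A production system would anchor
# digests in a tamper-evident distributed ledger; a dict models that here.
import hashlib

ledger: dict[str, str] = {}  # media_id -> SHA-256 hex digest

def fingerprint(data: bytes) -> str:
    """Any change to the bytes, however small, changes the digest."""
    return hashlib.sha256(data).hexdigest()

def register(media_id: str, data: bytes) -> None:
    """Publisher records the original file's digest when it is released."""
    ledger[media_id] = fingerprint(data)

def verify(media_id: str, data: bytes) -> bool:
    """Viewer checks a downloaded copy against the published digest."""
    return ledger.get(media_id) == fingerprint(data)

register("interview-2022-01-24", b"original video bytes")
assert verify("interview-2022-01-24", b"original video bytes")
assert not verify("interview-2022-01-24", b"doctored video bytes")
```

The hard part in practice is not the hashing but distribution: audiences only benefit if players and platforms check digests automatically.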

In the meantime, Cameron suggests applying Occam’s razor: the problem-solving principle that the simplest explanation is usually the likeliest.

“Conspiracy theories are all too complicated. People aren’t that good, human systems aren’t that good, people can’t keep a secret to save their lives, and most people in positions of power are bumbling stooges.

“The fact that we think that they could realistically pull off these — these complex plots? I don’t buy any of that crap! Bill Gates is not really trying to microchip you with the flu vaccine!”

However, Cameron admits to a deep scepticism of new technology.

“Every single advancement in technology that’s ever been created has been weaponised. I say this to AI scientists all the time, and they go, ‘No, no, no, we’ve got this under control.’ You know, ‘We just give the AIs the right goals…’

“So who’s deciding what those goals are? The people that put up the money for the research, right? Which are all either big business or defense. So you’re going to teach these new sentient entities to be either greedy or murderous.”

Of course, Skynet gets an honorary mention.

“If Skynet wanted to take over and wipe us out, it would actually look a lot like what’s going on right now. It’s not going to have to — like, wipe out the entire, you know, biosphere and environment with nuclear weapons to do it. It’s going to be so much easier and less energy required to just turn our minds against ourselves.

“All Skynet would have to do is just deepfake a bunch of people, pit them against each other, stir up a lot of foment, and just run this giant deepfake on humanity.”

Russia’s infamous state-sponsored “troll farms” are one of the largest sources of disinformation and are used to conduct online influence campaigns.

A January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections (PDF) – described the ‘Internet Research Agency’ as one such troll farm.

“The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence,” the report stated, adding that the trolls “previously were devoted to supporting Russian actions in Ukraine.”

Western officials have warned that Russia may use disinformation campaigns – including staged claims of an attack by Ukrainian troops – to rally support and justify an invasion of Ukraine. It’s not outside the realms of possibility that manipulated content will play a role, and by the time any fake is exposed it could be too late to counter the first large-scale disaster supported by deepfakes.

Related: University College London: Deepfakes are the ‘most serious’ AI crime threat

(Image Credit: Gage Skidmore. Image cropped. CC BY-SA 3.0 license)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Report: The public is unconvinced AI will benefit humanity (Thu, 10 Jan 2019)
https://www.artificialintelligence-news.com/2019/01/10/report-public-ai-benefit-humanity/

Decades of sci-fi flicks have instilled a fear of AI in some people – with a new report suggesting many remain unconvinced it will benefit humanity.

The report, from the Center for the Governance of AI based at Oxford University, reveals concerns that artificial intelligence may harm or endanger humankind. Some 2,000 US adults were surveyed in 2018 for their views.

Baobao Zhang and Allan Dafoe, authors of the report, wrote in its summary:

“Public sentiments have shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation.

As in these other policy domains, we expect the public to become more influential over time. It is thus vital to have a better understanding of how the public thinks about AI and the governance of AI.”

41 percent of respondents ‘strongly’ or ‘somewhat strongly’ support the continued development of AI, compared to 22 percent who oppose it to some degree. A large number (28 percent) neither support nor oppose, while the remaining 10 percent responded: “I don’t know”. (The figures sum to 101 percent due to rounding.)

Most respondents believe AI will advance to ‘high-level machine intelligence’ within the next decade, and more expect it to do harm than good.

The researchers defined high-level machine intelligence as:

“When machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.

These tasks include asking subtle common-sense questions such as those that travel agents would ask. For the following questions, you should ignore tasks that are legally or culturally restricted to humans, such as serving on a jury.”

34 percent believe AI will have a harmful impact, with 12 percent going as far as to say it could lead to human extinction. More than a quarter (26 percent) think AI will be good for humanity, while 18 percent were unsure.

Primary concerns include AI-powered cyberattacks and data privacy intrusions. In terms of governance challenges, these are the priorities respondents most want addressed, in order:

  1. Preventing AI-assisted surveillance from violating privacy and civil liberties
  2. Preventing AI from being used to spread fake and harmful content online
  3. Preventing AI cyber attacks against governments, companies, organizations, and individuals
  4. Protecting data privacy

People on lower incomes and women are the most concerned about the impact of AI. Concern among those on lower wages is unsurprising, given the publicised fears that AI could take over entry-level jobs such as factory work and call centre roles.

Supporters of AI were primarily male, Republican, from families earning more than $100,000 a year, and experienced in computer science or programming.

One thing is clear from the attitude study: an overwhelming 82 percent believe AI and robots are ‘technologies that require careful management’.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
