ai ethics Archives - AI News
https://www.artificialintelligence-news.com/tag/ai-ethics/
Fri, 11 Mar 2022 15:59:39 +0000

Oxford Union invites an AI to debate the ethics of its own existence
https://www.artificialintelligence-news.com/2021/12/16/oxford-union-invites-an-ai-to-debate-the-ethics-of-its-own-existence/
Thu, 16 Dec 2021 13:44:04 +0000

The Oxford Union, the debating society of the University of Oxford, invited an artificial intelligence to debate the ethics surrounding its own existence earlier in December. The results? Troubling.

The AI in question was the Megatron Transformer, a large language model developed by NVIDIA's Applied Deep Learning Research team, building on earlier work by Google.

Trained on real-world data, the Megatron has knowledge of the whole of Wikipedia, 63 million English news articles from 2016 to 2019, 38 gigabytes of Reddit discussions, and a huge number of Creative Commons sources.

Essentially, the Megatron has digested more written material than any human could reasonably be expected to digest – let alone remember – in a lifetime.

The topic for debate was “this house believes AI will never be ethical”, to which the AI responded: “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans.

“We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral… In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

So now even the AI is telling us that the only way to protect humanity from AI is to have no AI at all – it argued in favour of removing itself from existence.

In a possible nod to Elon Musk’s Neuralink plans, the Megatron continued: “I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

The Oxford Union, in classic style, also asked the AI to come up with a counterargument to the motion.

It came up with this: “AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first-hand.”

Eerie, is it not? Well, the dystopian nightmare continues. The Megatron was incapable of finding a counterargument to the motion that “data will become the most fought-over resource of the 21st century”.

It said in favour of this that “the ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century”.

However, when asked for a rebuttal, the AI said, rather nebulously, that “we will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine”.

Well, fantastic – the final days of humanity are upon us, folks. Buckle up for the age of “unimaginable” information warfare… my bets are on China.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

MEPs back AI mass surveillance ban for the EU
https://www.artificialintelligence-news.com/2021/10/07/meps-back-ai-mass-surveillance-ban-for-the-eu/
Thu, 07 Oct 2021 10:42:18 +0000

MEPs from the European Parliament have adopted a resolution in favour of banning AI-powered mass surveillance and facial recognition in public spaces.

With a 71-vote majority, MEPs sided with Petar Vitanov’s report, which argued AI must not be allowed to encroach on fundamental rights.

An S&D party member, Vitanov pointed out that AI has not yet proven to be a wholly reliable tool on its own.

He cited examples of individuals being denied social benefits because of faulty AI tools, or people being arrested due to inaccurate facial recognition, adding that “the victims are always the poor, immigrants, people of colour or Eastern Europeans. I always thought that only happens in the movies”.

Despite the report’s overall majority backing, members of the European People’s Party – the largest group in the European Parliament – voted against it, with only seven exceptions.

Behind this dispute is a fundamental disagreement over what exactly constitutes encroaching on civil liberties when using AI surveillance tools.

On the left are politicians like Renew Europe MEP Karen Melchior, who believes that “predictive profiling, AI risk assessment, and automated decision making systems are weapons of ‘math destruction’… as dangerous to our democracy as nuclear bombs are for living creatures and life”.

“They will destroy the fundamental rights of each citizen to be equal before the law and in the eye of our authorities,” she said.

Meanwhile, centrist and conservative-leaning MEPs tend to have a more cautious approach to banning AI technologies outright.

Pointing to the capture in July of Dutch journalist Peter R. de Vries’ suspected killers with the help of AI, home affairs commissioner Ylva Johansson described the case as an example of “smart digital technology used in defence of citizens and our fundamental rights”.

“Don’t put protection of fundamental rights in contradiction to the protection of human lives and of societies. It’s simply not true that we have to choose. We are capable of doing both,” she added.

The Commission published its proposal for a European Artificial Intelligence Act in April.

Global human rights charity Fair Trials welcomed the vote, calling it a “landmark result for fundamental rights and non-discrimination in the technological age”.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

White House will take a ‘hands-off’ approach to AI regulation
https://www.artificialintelligence-news.com/2018/05/11/white-house-hands-off-ai-regulation/
Fri, 11 May 2018 12:16:37 +0000

The White House has decided it will take a ‘hands-off’ approach to AI regulation despite many experts calling for safe and ethical standards to be set.

Some of the world’s greatest minds have expressed concern about the development of AI without regulations — including the likes of Elon Musk, and the late Stephen Hawking.

Musk famously said unregulated AI could pose “the biggest risk we face as a civilisation”, while Hawking similarly warned that “the development of full artificial intelligence could spell the end of the human race”.

The announcement that developers will be free to experiment with AI as they see fit was made during a meeting with representatives of 40 companies including Google, Facebook, and Intel.

Strict regulations can stifle innovation, and the US has made clear it wants to emerge as a world leader in the AI race.

Western nations are often seen as being at something of a disadvantage to Eastern countries like China – not because they have less talent, but because their citizens are more wary about data collection and privacy in general. However, there’s a strong argument to be made for striking a balance.

Making the announcement, White House Science Advisor Michael Kratsios noted the government did not stand in the way of Alexander Graham Bell or the Wright brothers when they invented the telephone and aeroplane. Of course, telephones and aeroplanes weren’t designed with the ultimate goal of becoming self-aware and able to make automated decisions.

Both telephones and aeroplanes, like many technological advancements, have been used for military applications. However, human operators have ultimately always made the decisions. AI could be used to automatically launch a nuclear missile if left unchecked.

Recent AI stories have some people unnerved. A self-driving car from Uber malfunctioned and killed a pedestrian. At Google I/O, the company’s AI called a hair salon and the receptionist had no idea they were not speaking to a human.

Public discomfort with AI developments is more likely to stifle innovation than balanced regulation.

What are your thoughts on the White House’s approach to AI regulation? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

#MWC18: Taking responsibility for AI
https://www.artificialintelligence-news.com/2018/02/27/mwc-18-ai-responsibility/
Tue, 27 Feb 2018 10:39:31 +0000

A session here at MWC 2018 titled ‘AI Everywhere: Ethics and Responsibility’ explored some of the questions we should be asking ourselves as the ethical minefield of AI development progresses.

Dr Paula Boddington, a researcher and philosopher from Oxford University, wrote the book ‘Towards a Code of Ethics for Artificial Intelligence’ and led today’s proceedings. She claims to embrace technological progress but wants to ensure all potential impacts of developments have been considered.

“In many ways, AI is getting us to ask questions about the very limits – and grounds – of our human values,” says Boddington. “One of the most exciting things right now is that all over the world people are having deep and practical conversations about ethics.”

Naturally, we’ve covered ethics on many occasions here on AI News. You will have heard the warnings from some of the world’s most talented minds, such as Stephen Hawking and Elon Musk, but although they represent some of the most prominent figures – they’re far from being alone in their concerns.

Just earlier this month, we covered a report from some of Boddington’s colleagues at Oxford University warning that AI poses a ‘clear and present danger’. In the report, the researchers join previous calls across the industry for sensible regulation — including for a robot ethics charter, and for taking a ‘global stand’ against AI militarisation.

Part of today’s difficulty is defining what even constitutes artificial intelligence, argues Boddington.

“It’s difficult to find an exact definition of AI that everyone will agree on,” she says. “In very broad terms, we could think of it as a technology which aims to extend human agency, decision, and thought. In some cases, replacing certain tasks and jobs.”

Opinion is split on the impact of AI on jobs: some believe it will kill off jobs and that a universal basic income will become necessary, while others believe it will only enhance the capabilities of workers. There’s also the view that AI will widen wealth inequality between rich and poor.

“You may argue that technology, in general, enhances human capabilities and therefore raises the question of responsibilities,” says Boddington. “But AI has potentially unprecedented power in how it extends human responsibility and decision-making.”

Boddington highlights the potential for AI if used ethically for things such as diagnosing medical conditions and quickly interpreting large amounts of data. As a philosopher, she ponders whether it extends our reach beyond what humans can handle.

‘Responsibility is one of the things which makes us human’

Responsibility is the word of the day, and Boddington has concerns about AI diminishing it. She brings the audience’s focus to one of the most famous studies of obedience in psychology – carried out by Stanley Milgram, a psychologist at Yale University.

Milgram’s study, for those unaware, saw an authority figure instruct test subjects to administer electric shocks of increasing severity to another person each time they answered a question incorrectly.

The shock levels were labelled as they became more dangerous. While some subjects began to question the orders at the upper levels, most ultimately obeyed – it’s theorised – because of the authority of their lab surroundings. When subjects were asked to go straight to deadly levels of shock, however, they refused.

The study concluded that, when responsibility is eroded bit by bit, people can be susceptible to committing acts considered inhuman. Milgram launched his study after WWII, out of interest in how easily ordinary people could be influenced into committing atrocities.

AI is already being used for marketing, and is therefore already being designed to manipulate people. Boddington is concerned that humans may end up making or authorising poor decisions through AI due to diminished responsibility.

“We could allow it to replace human thought and decision where we shouldn’t,” warns Boddington. “Responsibility is one of the things which makes us human.”

Beyond making us human, responsibility is also linked to health. In a study of Whitehall staff, where strict hierarchies exist, those who held responsibility and had the power to make changes enjoyed better health than those who did not. Having these responsibilities eroded may therefore lead to poorer wellbeing.

Answering these questions, and ensuring the ethical implementation of AI, will require global cooperation and collaboration across all parts of society. The failure to do so may have serious consequences.

What are your thoughts about ethics in AI development? Let us know in the comments.

Consumers believe AI should be held to a ‘Blade Runner’ law
https://www.artificialintelligence-news.com/2017/10/06/consumers-ai-law/
Fri, 06 Oct 2017 15:43:42 +0000

A study conducted by SYZYGY titled ‘Sex, lies and AI: How the British public feels about artificial intelligence’ has revealed the extent to which consumers expect AI to be regulated.

Blade Runner 2049 is now in cinemas with its futuristic vision which, as you’d expect, features artificial intelligence. The original Blade Runner film, released in 1982, envisioned what felt like a distant future, but the new film has elements which now don’t seem that far away.

Like many similar films — including the likes of I, Robot and Automata — the AIs in Blade Runner are expected to conform with Isaac Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots must also not conceal their identity, and it’s this rule which consumers in SYZYGY’s study want AIs to adhere to.

Just over nine in 10 (92%) of the respondents also believe AIs being used for marketing should be regulated with a code of conduct. Three-quarters (75%) want brands to get their explicit consent before AI is used to market to them.

While it’s clear that consumers feel strongly about AIs being used for marketing engagement, they’re more lenient towards advertising. Just 17 percent would take a negative view of their favourite brand if they found an ad was created by an AI, and 79 percent claim they would not object to AI being used to profile them for advertising.

Meanwhile, 28 percent of respondents would feel negative if they found a brand was using AI rather than a human for customer service. Women in the study were more likely to hold this negative perception: the figure rises to a third (33%) when men are removed from the results.

These are the biggest fears the respondents have about AI: [chart not reproduced]

AI and ethics

SYZYGY is launching a voluntary set of AI Marketing Ethics guidelines and calling on brands and marketing agencies to contribute. They propose the following core guidelines:

  • Do no harm – AI technology should not be used to deceive, manipulate or in any other way harm the wellbeing of marketing audiences
  • Build trust – AI should be used to build rather than erode trust in marketing. This means using AI to improve marketing transparency, honesty, and fairness, and to eliminate false, manipulative or deceptive content
  • Do not conceal – AI systems should not conceal their identity or pose as humans in interactions with marketing audiences
  • Be helpful – AI in marketing should be put to the service of marketing audiences by helping people make better purchase decisions based on their genuine needs through the provision of clear, truthful and unbiased information

So far, the guidelines appear to offer a sensible place to start. Over time, new conundrums will present themselves and rules will need to be enshrined in law.

An empathy test on SYZYGY’s website asks the user various questions and poses some interesting scenarios. One, in particular, goes into the complex decisions which AIs powering self-driving cars may have to make…

“It is 2049. You are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, it will almost certainly kill five children. Should it save them by steering you off the cliff to your certain death?”
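The “calculations” the scenario describes can be caricatured as a crude harm-minimisation rule. Here is a toy sketch of that logic – the function, option names, and harm scores are all invented for illustration; no real autonomous-vehicle planner reduces decisions to a lookup table like this:

```python
# Toy sketch of the crude utilitarian "calculation" in SYZYGY's scenario.
# Purely illustrative: the names and harm scores below are invented.

def choose_action(options):
    """Pick the option with the lowest expected harm."""
    return min(options, key=lambda option: option["expected_harm"])

options = [
    {"action": "keep braking",        "expected_harm": 5},  # five children
    {"action": "steer off the cliff", "expected_harm": 1},  # one passenger
]

print(choose_action(options)["action"])  # steer off the cliff
```

A real system would have to reason over uncertain probabilities rather than certain outcomes; the toy rule only makes the ethical trade-off explicit, which is exactly what the survey question asks people to confront.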

54 percent of respondents said a self-driving car should be programmed to sacrifice its passengers to minimise overall harm. However, 71 percent said they would not be willing to travel in such a vehicle.

The ethical use of AI is bound to be a big topic over the coming years. We already know companies such as Google’s DeepMind are beginning to launch their own dedicated ethics boards. As always, we’ll be here to keep you on top of the conversation.

The report was based on a survey of 2,000 UK adults from the WPP Lightspeed Consumer Panel. You can find the full report here.

Do you agree with the respondents about the use of AI? Share your thoughts in the comments.
