laws Archives - AI News
https://www.artificialintelligence-news.com/tag/laws/

FTC Chairwoman: There is no ‘AI exemption’ to existing laws
https://www.artificialintelligence-news.com/2023/04/26/ftc-chairwoman-no-ai-exemption-to-existing-laws/
Wed, 26 Apr 2023

FTC Chairwoman Lina Khan has warned that the US government will not hesitate to clamp down on harmful business practices involving AI.

Speaking at a virtual press event, Khan was joined by top officials from US consumer protection and civil rights agencies.

Together, the officials emphasised that regulators are committed to tracking and stopping any illegal behaviour associated with biased or deceptive AI tools.

Khan warned that, in addition to the well-publicised deployment of automated tools that introduce bias into decisions about housing, loans, hiring, and productivity monitoring, the rapid evolution of advanced AI tools designed to generate human-like content also presents a significant risk.

Khan also expressed concern about AI tools that scammers could use to “manipulate and deceive people on a large scale, deploying fake or convincing content more widely and targeting specific groups with greater precision.”

She also warned that a small number of powerful firms already control the raw materials, data, cloud services, and computing power required to develop and deploy AI products. Khan raised the possibility that the FTC could wield its antitrust authority to protect competition.

“In moments of technological disruption, established players and incumbents may be tempted to crush, absorb or otherwise unlawfully restrain new entrants in order to maintain their dominance,” said Khan.

Khan did not specifically name any companies or products, but her comments will likely increase pressure on major tech firms like Google and Microsoft that are currently engaged in a race to sell more advanced AI tools.

The warnings from top US regulators come at a time when EU lawmakers are negotiating new rules designed to regulate AI, with some in the US calling for similar legislation.

The regulators said that many of the most harmful AI products might already contravene existing laws protecting civil rights and preventing fraud.

For her part, Khan reiterated that “there is no AI exemption to the laws on the books.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI Expo: Protecting ethical standards in the age of AI
https://www.artificialintelligence-news.com/2022/09/21/ai-expo-protecting-ethical-standards-in-the-age-of-ai/
Wed, 21 Sep 2022

Rapid advancements in AI make maintaining high ethical standards essential, as much for legal reasons as for moral ones.

During a session at this year’s AI & Big Data Expo Europe, a panel of experts provided their views on what businesses need to consider before deploying artificial intelligence.

Here’s a list of the panel’s participants:

  • Moderator: Frans van Bruggen, Policy Officer for AI and FinTech at De Nederlandsche Bank (DNB)
  • Aoibhinn Reddington, Artificial Intelligence Consultant at Deloitte
  • Sabiha Majumder, Model Validator – Innovation & Projects at ABN AMRO Bank N.V.
  • Laura De Boel, Partner at Wilson Sonsini Goodrich & Rosati

The first question called for thoughts about current and upcoming regulations that affect AI deployments. As a lawyer, De Boel kicked things off by giving her take.

De Boel highlights the EU’s upcoming AI Act, which builds on the foundations of similar legislation such as GDPR but extends them to artificial intelligence.

“I think that it makes sense that the EU wants to regulate AI, and I think it makes sense that they are focusing on the highest risk AI systems,” says De Boel. “I just have a few concerns.”

De Boel’s first concern is how complex it will be for lawyers like herself.

“The AI Act creates many different responsibilities for different players. You’ve got providers of AI systems, users of AI systems, importers of AI systems into the EU — they all have responsibilities, and lawyers will have to figure it out,” De Boel explains.

The second concern is how costly this will all be for businesses.

“A concern that I have is that all these responsibilities are going to be burdensome, a lot of red tape for companies. That’s going to be costly — costly for SMEs, and costly for startups.”

Similar concerns were raised about GDPR. Critics argue that overreaching regulation drives innovation, investment, and jobs out of the Eurozone and leaves countries like the USA and China to lead the way.

Peter Wright, Solicitor and MD of Digital Law UK, once told AI News about GDPR: “You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe.”

De Boel’s concerns echo Wright’s, and it is true that such regulation will weigh more heavily on startups and smaller companies, which already face an uphill battle against established industry titans.

De Boel’s final concern on the topic is about enforcement and how the AI Act goes even further than GDPR’s already strict penalties for breaches.

“The AI Act really copies the enforcement of GDPR but sets even higher fines of 30 million euros or six percent of annual turnover. So it’s really high fines,” comments De Boel.

“And we see with GDPR that when you give these types of powers, it is used.”

Outside of Europe, different laws apply. In the US, rules such as those around biometric recognition can vary greatly from state to state. China, meanwhile, recently introduced a law that requires companies to give consumers the option to opt out of things like personalised advertising.

Keeping up with all the ever-changing laws around the world that may impact your AI deployments is going to be a difficult task, but a failure to do so could result in severe penalties.

The financial sector is already subject to very strict regulations and has used statistical models for decades for things such as lending. The industry is now increasingly using AI for decision-making, which brings with it both great benefits and substantial risks.

“The EU requires auditing of all high-risk AI systems in all sectors, but the problem with external auditing is there could be internal data, decisions, or confidential information which cannot be shared with an external party,” explains Majumder.

Majumder goes on to explain that it is therefore important to have a second line of defence – internal to the organisation, but reviewing models from an independent, risk-management perspective.

“So there are three lines of defence: First, when developing the model. Second, assessing independently through risk management. Third, the auditors and the regulators,” Majumder concludes.

Of course, when AI makes the right decisions, everything is great. When it inevitably doesn’t, the damage can be serious.

The EU is keen on banning AI for “unacceptable” risk purposes that may damage the livelihoods, safety, and rights of people. Three other categories (high risk, limited risk, and minimal/no risk) will be permitted, with decreasing amounts of legal obligations as you go down the scale.

“We can all agree that transparency is really important, right? Because let me ask you a question: If you apply for some kind of service, and you get denied, what do you want to know? Why am I being denied the service?” says Reddington.

“If you’re denied service by an algorithm who cannot come up with a reason, what is your reaction?”

There’s a growing consensus that XAI (Explainable AI) should be used in decision-making so that reasons for the outcome can always be traced. However, Bruggen makes the point that transparency may not always be a good thing — you may not want to give a terrorist or someone accused of a financial crime the reason why they’ve been denied a loan, for example.
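The traceability that XAI calls for can be illustrated with a deliberately simple additive scoring model, where every decision decomposes into per-feature contributions. This is a hypothetical sketch: the feature names, weights, and threshold are invented for illustration, not drawn from any real credit system.

```python
# Hypothetical transparent scoring model: every decision can be traced
# back to per-feature contributions, so a denial always has a reason.
# Weights, features, and threshold are illustrative assumptions.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def score_applicant(features):
    """Return (approved, contributions) so outcomes remain explainable."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    return total >= THRESHOLD, contributions

approved, why = score_applicant({"income": 1.2, "debt_ratio": 0.9, "years_employed": 1.0})
if not approved:
    # The explanation is explicit: the most negative contribution.
    main_reason = min(why, key=why.get)
    print(f"Denied; largest negative factor: {main_reason}")
```

A black-box model offers no equivalent of the `why` dictionary, which is precisely the gap XAI aims to close.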

Reddington believes this is why humans should not be taken out of the loop. The industry is far from reaching that level of AI anyway, but even if and when it arrives, there are ethical reasons not to remove human input and oversight entirely.

However, AI can also increase fairness.

Majumder gives an example from her field of expertise, finance, where historical data is often used for decisions such as credit. People’s situations change over time, yet they can remain stuck struggling to obtain credit because of outdated historical data.

“Instead of using historical credit rating as input, we can use new kinds of data like mobile data, utility bills, or education, and AI has made it possible for us,” explains Majumder.

Of course, using such a relatively small dataset then poses its own problems.

The panel offered some fascinating insights on ethics in AI and the current and future regulatory environment. As with the AI industry generally, it’s rapidly advancing and hard to keep up with but critical to do so.

You can find out more about upcoming events in the global AI & Big Data Expo series here.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK eases data mining laws to support flourishing AI industry
https://www.artificialintelligence-news.com/2022/06/29/uk-eases-data-mining-laws-support-flourishing-ai-industry/
Wed, 29 Jun 2022

The UK is set to ease data mining laws in a move designed to further boost its flourishing AI industry.

We all know that data is vital to AI development. Tech giants are in an advantageous position due to either having existing large datasets or the ability to fund/pay for the data required. Most startups rely on mining data to get started.

Europe has notoriously strict data laws. Advocates of regulations like GDPR believe they’re necessary to protect consumers, while critics argue it drives innovation, investment, and jobs out of the Eurozone to countries like the USA and China.

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” explained Peter Wright, Solicitor and MD of Digital Law UK.

An announcement this week sets out how the UK intends to support its National AI Strategy from an intellectual property standpoint.

The announcement comes via the country’s Intellectual Property Office (IPO) and follows a two-month cross-industry consultation period with individuals, large and small businesses, and a range of organisations.

Text and data mining

Text and data mining (TDM) allows researchers to copy and harness disparate datasets for their algorithms. As part of the announcement, the UK says it will now allow TDM “for any purpose,” which provides much greater flexibility than an exception made in 2014 that allowed AI researchers to use such TDM for non-commercial purposes.

In stark contrast, the EU’s Directive on Copyright in the Digital Single Market offers a TDM exception only for scientific research.

“These changes make the most of the greater flexibilities following Brexit. They will help make the UK more competitive as a location for firms doing data mining,” wrote the IPO in the announcement.

AIs still can’t be inventors

Elsewhere, the UK is more or less sticking to its previous stances—including that AI systems cannot be credited as inventors in patents.

The most high-profile case on the subject is of US-based Dr Stephen Thaler, the founder of Imagination Engines. Dr Thaler has been leading the fight to give credit to machines for their creations.

An AI device created by Dr Thaler, DABUS, was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

In August 2021, a federal court in Australia ruled that AI systems can be credited as inventors under patent law after Ryan Abbott, a professor at the University of Surrey, filed applications in the country on behalf of Dr Thaler. Similar applications were also filed in the UK, US, and New Zealand.

The UK’s IPO rejected the applications at the time, claiming that – under the country’s Patents Act – only humans can be credited as inventors. Subsequent appeals were also rejected.

“A patent is a statutory right and it can only be granted to a person,” explained Lady Justice Laing. “Only a person can have rights. A machine cannot.”

In the IPO’s latest announcement, the body reiterates: “For AI-devised inventions, we plan no change to UK patent law now. Most respondents felt that AI is not yet advanced enough to invent without human intervention.”

However, the IPO highlights the UK is one of only a handful of countries that protects computer-generated works. Any person who makes “the arrangements necessary for the creation of the [computer-generated] work” will have the rights for 50 years from when it was made.

Supporting a flourishing AI industry

Despite being subject to strict data regulations, the UK has become Europe’s hub for AI, with pioneers like DeepMind, Wayve, Graphcore, Oxbotica, and BenevolentAI. The country’s world-class universities churn out in-demand AI talent, and its tech investment is more than double that of other European countries.


More generally, the UK is regularly considered one of the best places in the world to set up a business. All eyes are on how the country will use its post-Brexit freedoms to diverge from EU rules to further boost its industries.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” said Chris Philp, DCMS Minister.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

There will undoubtedly be debates over the decisions made by the UK to boost its AI industry, especially regarding TDM, but the policies announced so far will support entrepreneurship and the country’s attractiveness for relevant investments.

(Photo by Chris Robert on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is also co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The EU’s AI rules will likely take over a year to be agreed
https://www.artificialintelligence-news.com/2022/02/17/eu-ai-rules-likely-take-over-year-to-be-agreed/
Thu, 17 Feb 2022

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.
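The penalty formula (“whichever is greater”) reduces to a simple maximum of a flat floor and a turnover percentage. A minimal sketch, with turnover figures invented for illustration:

```python
# Sketch of the draft's penalty rule: the greater of a flat EUR 30m
# or six percent of global turnover. Turnover inputs are hypothetical.

def max_fine(global_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * global_turnover_eur)

# For a EUR 100m-turnover firm, the flat floor applies;
# for a EUR 1bn-turnover firm, the six-percent term dominates.
print(max_fine(100_000_000))
print(max_fine(1_000_000_000))
```

The flat floor means the rule bites hardest, proportionally, on smaller firms, which is one driver of the investment-flight criticism.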

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric authentication in public spaces for law enforcement.

Unacceptable risk systems will face a blanket ban from deployment in the EU while limited risk will require minimal oversight.

Organisations deploying high-risk AI systems would be required to have things like:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.
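The tiering above can be summarised as a lookup from risk category to obligations. The following sketch is an illustrative restatement of the lists in this article, not a representation of the legal text; the names and structure are assumptions.

```python
# Illustrative mapping of the draft AI Act's risk tiers to obligations,
# restating the categories listed above. None marks the banned tier.

from enum import Enum

class Risk(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

OBLIGATIONS = {
    Risk.MINIMAL: [],
    Risk.LIMITED: ["minimal oversight"],
    Risk.HIGH: [
        "human oversight",
        "risk-management system",
        "record keeping and logging",
        "transparency to users",
        "data governance and management",
        "conformity assessment",
        "government registration",
    ],
    Risk.UNACCEPTABLE: None,  # blanket ban: deployment not permitted in the EU
}

def can_deploy(risk: Risk) -> bool:
    """A system may be deployed unless its tier is banned outright."""
    return OBLIGATIONS[risk] is not None
```

Viewed this way, the compliance question for a given system is first classification into a tier, then working through that tier’s obligation list.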

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers on Wednesday said the EU’s AI regulations will likely take over a year more to agree. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third-biggest in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The UK is changing its data laws to boost its digital economy
https://www.artificialintelligence-news.com/2021/08/26/uk-changing-data-laws-boost-digital-economy/
Thu, 26 Aug 2021

Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism. On the one hand, GDPR admirably sought to protect the data of consumers. On the other, “pointless” cookie popups, extra paperwork, and concerns about hefty fines have caused frustration and led many businesses to pack their bags and take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another would be Articles 11 and 13 of the EU Copyright Directive, which some – including the inventor of the World Wide Web, Sir Tim Berners-Lee, and Wikipedia founder Jimmy Wales – have opposed as an “upload filter”, “link tax”, and “meme killer”. This blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, have far more false positives against minorities than they do white men—leading to automated racial profiling. A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.
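Disparities like these are typically quantified by comparing false-positive rates across demographic groups. A minimal sketch of the metric, using invented match results rather than real benchmark data:

```python
# Sketch of a per-group fairness check: compute the false-positive rate
# for each demographic group from (predicted, actual) match pairs.
# The sample data below is invented for illustration.

def false_positive_rate(results):
    """results: list of (predicted_match, actual_match) booleans."""
    false_pos = sum(1 for pred, actual in results if pred and not actual)
    negatives = sum(1 for _, actual in results if not actual)
    return false_pos / negatives if negatives else 0.0

by_group = {
    "group_a": [(True, False), (False, False), (False, False), (True, True)],
    "group_b": [(True, False), (True, False), (False, False), (True, True)],
}

for group, results in by_group.items():
    print(group, round(false_positive_rate(results), 2))
```

A large gap between the per-group rates is exactly the kind of automated profiling the paragraph above describes, which is why evaluation sets must represent every group they will be used on.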

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline a popular dataset called 80 Million Tiny Images that was created in 2008 to train AIs to detect objects after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become a world leader in pure innovation and deployment, because it is simply unable to match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six countries and territories: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it hopes to strike agreements with fast-growing markets like India and Brazil to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is unlikely to react kindly to the news; it added clauses to the recent deal reached with the UK to prevent the country from diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’
https://www.artificialintelligence-news.com/2019/09/23/microsoft-brad-smith-killer-robots-unstoppable/
Mon, 23 Sep 2019

Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.

Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.

While it was once just a popcorn flick, Terminator now offers a dire warning of what could be if precautions are not taken.

As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.

Speaking to The Telegraph, Smith seems to agree. Smith points to the US, China, UK, Russia, Israel, South Korea, and others, all of which are developing autonomous weapon systems.

Wars could one day be fought on battlefields entirely by robots, a scenario with both pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.

Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.

It remains unclear who is responsible for deaths or injuries caused by an autonomous machine – the manufacturer, the developer, or an overseer. The same question has been the subject of much debate over how insurance will work with driverless cars.

With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.

Preventing unimaginable devastation

The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.

Petrov’s computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. Soviet strategy in such a scenario was an immediate and compulsory nuclear counter-attack. Trusting his instinct that the computer was wrong, Petrov decided against launching a counter-attack – and he was right.

Had the 1983 decision to deploy a nuclear missile rested solely with the computer, one would have been launched and met with retaliatory launches from the US and its allies.

Smith wants to see a new digital Geneva Convention to bring world powers together in agreement on acceptable norms for AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”

Many companies – and thousands of Google employees, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.

Smith has launched a new book called Tools and Weapons. At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”

Last month, a report from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest-risk companies. Microsoft itself warned investors back in February that its AI offerings could damage the company’s reputation.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.

A global campaign, simply titled the Campaign to Stop Killer Robots, now includes 113 NGOs across 57 countries and has doubled in size over the past year.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.


]]>
https://www.artificialintelligence-news.com/2019/09/23/microsoft-brad-smith-killer-robots-unstoppable/feed/ 0