regulations Archives – AI News

FTC Chairwoman: There is no ‘AI exemption’ to existing laws
AI News, 26 April 2023
FTC Chairwoman Lina Khan has warned that the US government will not hesitate to clamp down on harmful business practices involving AI.

Speaking at a virtual press event, Khan was joined by top officials from US consumer protection and civil rights agencies.

Together, the officials emphasised that regulators are committed to tracking and stopping any illegal behaviour associated with biased or deceptive AI tools.

Khan warned that, in addition to the well-publicised deployment of automated tools that introduce bias into decisions about housing, loans, hiring, and productivity monitoring, the rapid evolution of advanced AI tools designed to generate human-like content also presents a significant risk.

Khan also expressed concern about AI tools that scammers could use to “manipulate and deceive people on a large scale, deploying fake or convincing content more widely and targeting specific groups with greater precision.”

She also warned that a small number of powerful firms already control the raw materials, data, cloud services, and computing power required to develop and deploy AI products. Khan raised the possibility that the FTC could wield its antitrust authority to protect competition.

“In moments of technological disruption, established players and incumbents may be tempted to crush, absorb or otherwise unlawfully restrain new entrants in order to maintain their dominance,” said Khan.

Khan did not specifically name any companies or products, but her comments will likely increase pressure on major tech firms like Google and Microsoft that are currently engaged in a race to sell more advanced AI tools.

The warnings from top US regulators come at a time when EU lawmakers are negotiating new rules designed to regulate AI, with some in the US calling for similar legislation.

The regulators said that many of the most harmful AI products might already contravene existing laws protecting civil rights and preventing fraud.

For her part, Khan reiterated that “there is no AI exemption to the laws on the books.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The EU’s AI rules will likely take over a year to be agreed
AI News, 17 February 2022
Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies found to have misused AI face fines of up to €30 million or six percent of their global turnover, whichever is greater. Critics argue that the threat of such fines will drive investment away from Europe.
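The penalty formula is a simple maximum of two quantities. As a hedged sketch (the €30 million floor and six-percent rate are taken from the draft as reported; the turnover figures below are invented for illustration):

```python
def draft_eu_ai_fine(global_turnover_eur: float) -> float:
    """Maximum fine under the draft EU AI regulation: EUR 30 million
    or six percent of global turnover, whichever is greater."""
    return max(30_000_000.0, 0.06 * global_turnover_eur)

# Hypothetical firm with EUR 200m turnover: 6% is EUR 12m, so the floor applies.
print(draft_eu_ai_fine(200_000_000))    # 30000000.0
# Hypothetical firm with EUR 2bn turnover: 6% is EUR 120m, exceeding the floor.
print(draft_eu_ai_fine(2_000_000_000))  # 120000000.0
```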

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric authentication in public spaces for law enforcement.

Unacceptable-risk systems will face a blanket ban on deployment in the EU, while limited-risk systems will require only minimal oversight.
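The three tiers amount to a lookup from use case to obligation. A minimal sketch, with example use cases paraphrased from the list above (the string labels are invented for illustration, not an official taxonomy):

```python
# Invented labels mapping example use cases to the draft's three risk tiers.
RISK_TIER = {
    "chatbot": "limited",
    "spam_filter": "limited",
    "creditworthiness": "high",
    "recruitment": "high",
    "social_scoring": "unacceptable",
    "public_realtime_biometrics_law_enforcement": "unacceptable",
}

def deployment_rule(use_case: str) -> str:
    """Return the draft regulation's treatment of a given use case."""
    tier = RISK_TIER.get(use_case)
    if tier == "unacceptable":
        return "banned from deployment in the EU"
    if tier == "high":
        return "permitted, subject to oversight obligations"
    if tier == "limited":
        return "permitted with minimal oversight"
    return "unclassified"

print(deployment_rule("social_scoring"))  # banned from deployment in the EU
```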

Organisations deploying high-risk AI systems would be required to implement measures such as:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging.
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that the EU’s AI regulations will likely take more than a year longer to agree. The delay is primarily due to debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is Europe’s biggest destination for AI investment and the third biggest in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than into Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

UK-Aus probe finds Clearview AI fails to comply with privacy regulations
AI News, 4 November 2021
A joint UK-Australia probe has found that Clearview AI fails to comply with privacy regulations.

The facial recognition provider has been the focus of many investigations for its controversial practice of scraping the online data of people without their consent.

The joint investigation, conducted by the UK Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC), found that Clearview AI has scraped the biometric information of at least three billion people.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

In her determination (PDF), Australia’s Information Commissioner Angelene Falk disagrees: “I consider that the act of uploading an image to a social media site does not unambiguously indicate agreement to collection of that image by an unknown third party for commercial purposes.”

“Consent also cannot be implied if individuals are not adequately informed about the implications of providing or withholding consent. This includes ensuring that an individual is properly and clearly informed about how their personal information will be handled, so they can decide whether to give consent.”

The joint investigation concluded that Clearview AI breached Australia’s privacy laws by collecting citizens’ data without their consent and failing to notify affected individuals. A form created at the start of 2020 that allowed citizens to opt out of being searchable in Clearview AI’s database can no longer be used; Australians can now only make such a request via email.

Falk’s office has now ordered Clearview AI to destroy the biometric data that it’s collected on Australians and cease further collection.

“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said.

“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.”

Increasing regulatory scrutiny

By amassing such a large amount of data, Clearview AI is one of the most powerful facial recognition tools available. The solution has been used by governmental agencies and law enforcement around the world.

Following the Capitol raid earlier this year, Clearview AI boasted that police use of its facial recognition system increased 26 percent.

Regulators are now increasing their scrutiny over Clearview AI’s practices. The UK-Aus investigation began last year and followed a similar probe that was launched by the EU’s privacy watchdog a month prior.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

(Photo by Maksim Chernishev on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

BCS, Chartered Institute for IT: Human reviews of AI decisions require legal protection
AI News, 13 October 2021
A leading IT industry body has warned that human reviews of AI decisions are in need of legal protection.

BCS, The Chartered Institute for IT, made the warning amid the launch of the ‘Data: A New Direction’ consultation launched by the Department for Digital, Culture, Media and Sport (DCMS).

The consultation aims to re-examine the UK’s data regulations post-Brexit. EU laws that were previously mandatory while the UK was part of the bloc – such as the much-criticised GDPR – will be looked at to determine whether a better balance can be struck between data privacy and ensuring that innovation is not stifled.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light-touch a way as possible,” said then-UK Culture Secretary Oliver Dowden earlier this year.

DCMS is considering the removal of Article 22 of GDPR. Article 22 focuses specifically on the right to review fully automated decisions.

Dr Sam De Silva, Chair of BCS’ Law Specialist Group and a partner at law firm CMS, explained: “Article 22 is not an easy provision to interpret, and there is danger in interpreting it in isolation, as many have done.

“We still need clarity on the rights someone has in a scenario where fully automated decision-making could have a significant impact on that individual.”

AI systems are being used for increasingly critical decisions, including whether to offer loans or grant insurance claims. Given the unsolved issues with bias, there’s a real risk that discrimination could end up being automated.

One school of thought is that humans should always make final decisions, especially ones that impact people’s lives. BCS believes that human reviews of AI decisions should at least have legal protection.

“Protection of human review of fully automated decisions is currently in a piece of legislation dealing with personal data. If no personal data is involved the protection does not apply, but the decision could still have a life-changing impact on us,” added De Silva.

“For example, say an algorithm is created deciding whether you should get a vaccine. The data you need to enter into the system is likely to be DOB, ethnicity, and other things, but not name or anything which could identify you as the person.

“Based on the input, the decision could be that you’re not eligible for a vaccine. But any protections in the GDPR would not apply as there is no personal data.”
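De Silva’s scenario can be made concrete with a sketch. The eligibility rule below is entirely invented; the point is that none of the inputs identifies the individual, so GDPR’s automated-decision protections would not attach, even though the outcome matters greatly:

```python
# Hypothetical eligibility check. No name, address, or other identifier is
# collected, so GDPR (including Article 22) would not apply -- yet the
# automated decision can still significantly affect the person. The rule
# itself is invented for illustration and ignores the ethnicity field.
def vaccine_eligible(age: int, ethnicity: str, clinically_vulnerable: bool) -> bool:
    return age >= 50 or clinically_vulnerable

print(vaccine_eligible(62, "unspecified", False))  # True
print(vaccine_eligible(30, "unspecified", False))  # False
```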

BCS welcomes that the government is consulting carefully prior to making any decision. The body says that it supports the consultation and will be gathering views from across its membership.

Related: UK sets out its 10-year plan to remain a global AI superpower

(Photo by Sergey Zolkin on Unsplash)

The UK is changing its data laws to boost its digital economy
AI News, 26 August 2021
Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest as much as £11 billion worth of trade goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism. On the one hand, it admirably sought to protect consumers’ data. On the other, “pointless” cookie popups, extra paperwork, and fear of hefty fines have caused frustration and led many businesses to take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another is Articles 11 and 13 of the EU Copyright Directive, which some – including World Wide Web inventor Sir Tim Berners-Lee and Wikipedia founder Jimmy Wales – have opposed as an “upload filter”, “link tax”, and “meme killer”. A blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, produce far more false positives for minorities than for white men—leading to automated racial profiling. A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline a popular dataset called 80 Million Tiny Images that was created in 2008 to train AIs to detect objects after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become the world leader in pure innovation and deployment; it simply cannot match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six countries and territories: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it hopes to strike agreements with fast-growing markets like India and Brazil to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is unlikely to react kindly to the news; the bloc added clauses to its recent deal with the UK to prevent the country diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

Chinese AI darling SenseTime wants facial recognition standards
AI News, 2 October 2018
The CEO of Chinese AI darling SenseTime wants to see facial recognition standards established for a ‘healthier’ industry.

SenseTime is among China’s most renowned AI companies. Back in April, we reported it had become the world’s most funded AI startup.

Part of the company’s monumental success is the popularity of facial recognition in China where it’s used in many aspects of citizens’ lives. Just yesterday, game developer Tencent announced it’s testing facial recognition to check users’ ages.

Xu Li, CEO of SenseTime, says immigration officials doubted the accuracy of facial recognition technology when he first pitched his company’s own. “We knew about it 20 years ago and, combined with fingerprint checks, the accuracy is only 53 per cent,” one told him.

Facial recognition has come a long way in the past 20 years. Recent advances in artificial intelligence have driven even greater leaps, giving rise to companies such as SenseTime.

To dispel the idea that facial recognition is still inaccurate, Xu wants ‘trust levels’ to be established.

“With standards, technology adopters can better understand the risk involved, just like credit worthiness for individuals and companies,” Xu said to South China Morning Post. “Providers of facial recognition can be assigned different trust levels, ranging from financial security at the top to entertainment uses.”

Many of the leading facial recognition technologies have their own built-in trust levels. These levels determine how certain the software must be to call it a match.

Back in July, AI News reported on findings from the ACLU (American Civil Liberties Union) that Amazon’s facial recognition AI more often erroneously matched people with darker skin tones against criminal mugshots.

Amazon claims the ACLU left the facial recognition service’s default confidence threshold of 80 percent in place, when it recommends 95 percent or higher for law enforcement.
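The effect of a confidence threshold is mechanical: raising it discards lower-certainty candidate matches. A sketch using the two figures above (the candidate names and scores are invented):

```python
def matches_at_threshold(candidates, threshold):
    """Keep only candidate matches whose confidence (0-100) meets the threshold."""
    return [name for name, confidence in candidates if confidence >= threshold]

# Invented scores from a hypothetical face-matching query.
candidates = [("person_a", 97.2), ("person_b", 84.5), ("person_c", 81.1)]

print(matches_at_threshold(candidates, 80))  # all three candidates pass
print(matches_at_threshold(candidates, 95))  # only person_a passes
```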

Responding to the ACLU’s findings, Dr Matt Wood, GM of Deep Learning and AI at Amazon Web Services, also called for regulation. However, Wood wants the government to mandate a minimum confidence level for the use of facial recognition in law enforcement.

Xu and Wood may be calling for different regulations, but they – and many other AI leaders – agree that some are essential to ensure a healthy industry.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

White House will take a ‘hands-off’ approach to AI regulation
AI News, 11 May 2018
The White House has decided it will take a ‘hands-off’ approach to AI regulation despite many experts calling for safe and ethical standards to be set.

Some of the world’s greatest minds have expressed concern about the development of AI without regulations — including the likes of Elon Musk, and the late Stephen Hawking.

Musk famously said unregulated AI could pose “the biggest risk we face as a civilisation”, while Hawking similarly warned that “the development of full artificial intelligence could spell the end of the human race.”

The announcement that developers will be free to experiment with AI as they see fit was made during a meeting with representatives of 40 companies including Google, Facebook, and Intel.

Strict regulations can stifle innovation, and the US has made clear it wants to emerge as the leader in the AI race.

Western nations are often seen as somewhat at a disadvantage to Eastern countries like China—not because they have less talent, but because their citizens are more wary about data collection and privacy in general. However, there’s a strong argument for striking a balance.

Making the announcement, White House Science Advisor Michael Kratsios noted the government did not stand in the way of Alexander Graham Bell or the Wright brothers when they invented the telephone and aeroplane. Of course, telephones and aeroplanes weren’t designed with the ultimate goal of becoming self-aware and able to make automated decisions.

Both telephones and aeroplanes, like many technological advancements, have been used for military applications. However, human operators have ultimately always made the decisions. AI could be used to automatically launch a nuclear missile if left unchecked.

Recent AI stories have some people unnerved. A self-driving car from Uber malfunctioned and killed a pedestrian. At Google I/O, the company’s AI called a hair salon and the receptionist had no idea they were not speaking to a human.

Public discomfort with AI developments is more likely to stifle innovation than balanced regulations are.

What are your thoughts on the White House’s approach to AI regulation? Let us know in the comments.
