Legislation Archives - AI News

Is Europe killing itself financially with the AI Act?

18 September 2023

Europe is putting the finishing touches to legislation to regulate artificial intelligence. European regulators are delighted, but what does the rest of the world make of the AI Act?

Now that the outlines of the AI Act are known, a debate is erupting over its possible implications. One camp believes regulation is needed to curb the risks of powerful AI technology, while the other is convinced it will prove pernicious for the European economy. Is it really out of the question that safe AI products can also bring economic prosperity?

‘Industrial revolution’ without Europe

The EU “prevents the industrial revolution from happening” and portrays itself as “no part of the future world,” Joe Lonsdale told Bloomberg. Lonsdale regularly appears in the US media as an outspoken advocate of AI. In his view, the technology has the potential to spark a third industrial revolution, and every company should already have implemented it in its organisation.

He earned a bachelor’s degree in computer science in 2003 and went on to co-found several technology companies, including some that deploy artificial intelligence, before establishing himself as a businessman and venture capitalist.

The question is whether those concerns are well-founded. At the very least, caution seems necessary to avoid seeing major AI products disappear from Europe. Sam Altman, better known as the CEO of OpenAI, previously warned that AI companies could pull out of Europe if the rules become too hard to comply with. He has no plans to withdraw ChatGPT from Europe because of the AI Act, but he cautions that other companies may act differently.

ChatGPT stays

The CEO is, in fact, a strong supporter of safety legislation for AI. He advocates clear security requirements that AI developers must meet before officially releasing a new product.

When a major player in the AI field calls for regulation of the very technology he works with, perhaps Europe should listen. That is what is happening with the AI Act, through which the EU aims to be the first in the world to put out a comprehensive set of rules for artificial intelligence. Being the pioneer, however, also means the EU will have to discover the pitfalls of such a policy without a working example anywhere in the world.

Until the rules officially come into effect in 2025, they will be continuously tested by experts who publicly give their opinions on the law. It is a public testing period that, according to Altman, AI developers should also value. The European Union is likewise avoiding imposing top-down rules on a field it does not know intimately itself: the legislation is taking shape bottom-up, with companies and developers already actively engaged in AI helping to set the standards.

Others are drafting rules too

Although the EU often proclaims that the AI Act will be the world’s first regulation of artificial intelligence, other places are working on legal frameworks just as much. The United Kingdom, for example, is eager to embrace the technology but also wants certainty about its safety. To that end, it is immersing itself in the technology and has gained early access to models from DeepMind, OpenAI, and Anthropic for research purposes.

However, Britain has no plans to punish companies that do not comply. The country limits itself to a framework of five principles with which artificial intelligence should comply. That choice appears to trade guaranteed safety for investment: the government argues that forgoing a mandatory framework is necessary to attract AI companies to the UK. In Britain’s view, then, safe AI products and economic prosperity do not fit well together. It remains to be seen whether Europe’s AI Act validates that assumption.

(Editor’s note: This article first appeared on Techzine)

White House secures safety commitments from eight more AI companies

13 September 2023

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.
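
To illustrate one small piece of that commitment, the sketch below fingerprints a weights file with SHA-256 so that tampering or unintended modification can be detected before a model is served. It is a minimal example using only Python's standard library; the file name is hypothetical, and a real programme would pair such checks with access controls and signed releases.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a weights file, read in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest at release time, then re-check before serving:
# expected = fingerprint("model.safetensors")  # hypothetical file name
# assert fingerprint("model.safetensors") == expected, "weights changed"
```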

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.
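
The commitments do not prescribe a particular watermarking technique. Purely as an illustration of the labelling idea, the sketch below attaches a machine-readable provenance tag to a PNG using the Pillow imaging library; the tag names are hypothetical, and production watermarks are designed to be statistically robust and hard to strip, unlike plain metadata.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, generator: str) -> None:
    """Embed a machine-readable provenance tag in a PNG's metadata."""
    image = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # hypothetical tag names
    meta.add_text("generator", generator)
    image.save(dst, pnginfo=meta)

# Downstream tools can read the tag back from the saved file:
# Image.open("labelled.png").text -> {'ai-generated': 'true', 'generator': ...}
```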

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race

UK government outlines AI Safety Summit plans

4 September 2023

The UK government has announced plans for the global AI Safety Summit on 1-2 November 2023.

The major event – set to be held at Bletchley Park, home of Alan Turing and other Allied codebreakers during the Second World War – aims to address the pressing challenges and opportunities presented by AI development on both national and international scales.

Secretary of State Michelle Donelan has officially launched the formal engagement process leading up to the summit. Jonathan Black and Matt Clifford – serving as the Prime Minister’s representatives for the AI Safety Summit – have also initiated discussions with various countries and frontier AI organisations.

This marks a crucial step towards fostering collaboration in the field of AI safety and follows a recent roundtable discussion hosted by Secretary Donelan, which involved representatives from a diverse range of civil society groups.

The AI Safety Summit will serve as a pivotal platform, bringing together not only influential nations but also leading technology organisations, academia, and civil society. Its primary objective is to facilitate informed discussions that can lead to sensible regulations in the AI landscape.

One of the core focuses of the summit will be on identifying and mitigating risks associated with the most powerful AI systems. These risks include the potential misuse of AI for activities such as undermining biosecurity through the proliferation of sensitive information. 

Additionally, the summit aims to explore how AI can be harnessed for the greater good, encompassing domains like life-saving medical technology and safer transportation.

The UK government claims to recognise the importance of diverse perspectives in shaping the discussions surrounding AI and says that it’s committed to working closely with global partners to ensure the technology remains safe and its benefits can be harnessed worldwide.

As part of this iterative and consultative process, the UK has shared five key objectives that will guide the discussions at the summit:

  1. Developing a shared understanding of the risks posed by AI and the necessity for immediate action.
  2. Establishing a forward process for international collaboration on AI safety, including supporting national and international frameworks.
  3. Determining appropriate measures for individual organisations to enhance AI safety.
  4. Identifying areas for potential collaboration in AI safety research, such as evaluating model capabilities and establishing new standards for governance.
  5. Demonstrating how the safe development of AI can lead to global benefits.

The growth potential of AI investment, deployment, and capabilities is staggering, with projections of up to $7 trillion in growth over the next decade alongside accelerated drug discovery. A report by Google in July suggests that, by 2030, AI could boost the UK economy alone by £400 billion—leading to an annual growth rate of 2.6 percent.

However, these opportunities come with significant risks that transcend national borders. Addressing these risks is now a matter of utmost urgency.

Earlier this month, DeepMind co-founder Mustafa Suleyman called on the US to enforce AI standards. However, Suleyman is far from the only leading industry figure who has expressed concerns and called for measures to manage the risks of AI.

In an open letter in March, over 1,000 experts infamously called for a halt on “out of control” AI development over the “profound risks to society and humanity”.

Multiple stakeholders – including individual countries, international organisations, businesses, academia, and civil society – are already engaged in AI-related work. This includes efforts at the United Nations, the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), the Council of Europe, G7, G20, and standard development organisations.

The AI Safety Summit will build upon these existing initiatives by formulating practical next steps to mitigate risks associated with AI. These steps will encompass discussions on implementing risk-mitigation measures at relevant organisations, identifying key areas for international collaboration, and creating a roadmap for long-term action.

If successful, the AI Safety Summit at Bletchley Park promises to be a milestone event in the global dialogue on AI safety—seeking to strike a balance between harnessing the potential of AI for the benefit of humanity and addressing the challenges it presents.

(Photo by Hal Gatewood on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Beijing publishes its AI governance rules

14 July 2023

Chinese authorities have published rules governing generative AI which go substantially beyond current regulations in other parts of the world.

One notable requirement is that operators of generative AI must ensure that their services adhere to the core values of socialism, while also avoiding content that incites subversion of state power, secession, terrorism, or any actions undermining national unity and social stability.

Generative AI services within China are prohibited from producing content that incites ethnic hatred and discrimination, violence, or obscenity, or that spreads false and harmful information. These content-related rules remain consistent with a draft released in April 2023.

Furthermore, the regulations reveal China’s interest in developing digital public goods for generative AI.

The document emphasises the promotion of public training data resource platforms and the collaborative sharing of model-making hardware to enhance utilisation rates. The authorities also aim to encourage the orderly opening up of public data, by classification and grading, and the expansion of high-quality public training data resources.

In terms of technology development, the rules stipulate that AI should be developed using secure and proven foundations, including chips, software, tools, computing power, and data resources.

Intellectual property rights must be respected when using data for model development, and the consent of individuals must be obtained before incorporating personal information. There is also a focus on improving the quality, authenticity, accuracy, objectivity, and diversity of training data.

To ensure fairness and non-discrimination, developers are required to create algorithms that do not discriminate based on factors such as ethnicity, belief, country, region, gender, age, occupation, or health.

Moreover, operators of generative AI must obtain licenses for their services under most circumstances, adding a layer of regulatory oversight.

The new rules are scheduled to come into effect on August 15, 2023. China’s rules will not only have implications for domestic AI operators but will also serve as a benchmark for international discussions on AI governance and ethical practices.

You can find a full copy of the rules on the Cyberspace Administration of China’s website here.

(Photo by zhang kaiyv on Unsplash)

See also: OpenAI introduces team dedicated to stopping rogue AI

China’s deepfake laws come into effect today

10 January 2023

China will begin enforcing its strict new rules around the creation of deepfakes from today.

Deepfakes are increasingly being used for manipulation and humiliation. We’ve seen deepfakes of figures like disgraced FTX founder Sam Bankman-Fried used to commit fraud, of Ukrainian President Volodymyr Zelenskyy used to spread disinformation, and of US House Speaker Nancy Pelosi doctored to make her appear drunk.

Last month, the Cyberspace Administration of China (CAC) announced rules to clamp down on deepfakes.

“In recent years, in-depth synthetic technology has developed rapidly. While serving user needs and improving user experiences, it has also been used by some criminals to produce, copy, publish, and disseminate illegal and bad information, defame, detract from the reputation and honour of others, and counterfeit others,” explains the CAC.

Providers of services for creating synthetic content will be obligated to ensure their AIs aren’t misused for illegal and/or harmful purposes. Furthermore, any content that was created using an AI must be clearly labelled with a watermark.
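
The rules leave the labelling mechanism to providers. As a minimal sketch of the visible variant, assuming the Pillow imaging library, the snippet below stamps a disclosure notice onto a generated image; a real service would likely pair this with tamper-resistant, machine-readable marks.

```python
from PIL import Image, ImageDraw

def stamp_disclosure(src: str, dst: str, text: str = "AI-generated content") -> None:
    """Overlay a visible disclosure label in the corner of an image."""
    image = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)  # uses Pillow's default bitmap font
    draw.text((10, image.height - 24), text, fill=(255, 255, 255, 220))
    Image.alpha_composite(image, overlay).convert("RGB").save(dst)
```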

China’s new rules come into force today (10 January 2023) and will also require synthetic service providers to:

  • Not illegally process personal information
  • Periodically review, evaluate, and verify algorithms
  • Establish management systems and technical safeguards
  • Authenticate users with real identity information
  • Establish mechanisms for complaints and reporting

The CAC notes that effective governance of synthetic technologies is a multi-entity effort that will require the participation of government, enterprises, and citizens. Such participation, the CAC says, will promote the legal and responsible use of deep synthetic technologies while minimising the associated risks.

(Photo by Henry Chen on Unsplash)

Related: AI & Big Data Expo: Exploring ethics in AI and the guardrails required

US introduces new AI chip export restrictions

1 September 2022

NVIDIA has revealed that it’s subject to new laws restricting the export of AI chips to China and Russia.

In an SEC filing, NVIDIA says the US government has informed the chipmaker of a new license requirement that impacts two of its GPUs designed to speed up machine learning tasks: the current A100, and the upcoming H100.

“The license requirement also includes any future NVIDIA integrated circuit achieving both peak performance and chip-to-chip I/O performance equal to or greater than thresholds that are roughly equivalent to the A100, as well as any system that includes those circuits,” adds NVIDIA.
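
In other words, the licence requirement is conjunctive: a chip is caught only if it meets both the compute and the interconnect thresholds. The sketch below makes that logic concrete; the A100-class figures are approximate public specifications used for illustration, not the actual control values, which the filing does not disclose.

```python
# Approximate A100-class figures, used purely for illustration; the real
# control thresholds are set by the US government and not published here.
A100_PEAK_TFLOPS = 312        # dense BF16 tensor throughput (approx.)
A100_INTERCONNECT_GBPS = 600  # NVLink chip-to-chip bandwidth (approx.)

def needs_export_licence(peak_tflops: float, interconnect_gbps: float) -> bool:
    """A part is caught by the rule only if BOTH metrics reach the threshold."""
    return (peak_tflops >= A100_PEAK_TFLOPS
            and interconnect_gbps >= A100_INTERCONNECT_GBPS)

print(needs_export_licence(312, 600))  # True: A100-class accelerator
print(needs_export_licence(400, 300))  # False: fast compute, slower I/O
```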

The US government has reportedly told NVIDIA that the new rules are geared at addressing the risk of the affected products being used for military purposes.

“While we are not in a position to outline specific policy changes at this time, we are taking a comprehensive approach to implement additional actions necessary related to technologies, end-uses, and end-users to protect US national security and foreign policy interests,” said a US Department of Commerce spokesperson.

China is a large market for NVIDIA and the new rules could affect around $400 million in quarterly sales.

AMD has also been told the new rules will impact its similar products, including the MI200.

As of writing, NVIDIA’s shares were down 11.45 percent from the market open and AMD’s were down 6.81 percent. However, it’s worth noting that it’s been another red day for the wider stock market.

(Photo by Wesley Tingey on Unsplash)

UK eases data mining laws to support flourishing AI industry

29 June 2022

The UK is set to ease data mining laws in a move designed to further boost its flourishing AI industry.

We all know that data is vital to AI development. Tech giants are in an advantageous position because they either hold large existing datasets or can afford to pay for the data they require. Most startups rely on mining data to get started.

Europe has notoriously strict data laws. Advocates of regulations like GDPR believe they’re necessary to protect consumers, while critics argue such rules drive innovation, investment, and jobs out of Europe to countries like the USA and China.

“You’ve got your Silicon Valley startup that can access large amounts of money from investors, access specialist knowledge in the field, and will not be fighting with one arm tied behind its back like a competitor in Europe,” explained Peter Wright, Solicitor and MD of Digital Law UK.

An announcement this week sets out how the UK intends to support its National AI Strategy from an intellectual property standpoint.

The announcement comes via the country’s Intellectual Property Office (IPO) and follows a two-month cross-industry consultation period with individuals, large and small businesses, and a range of organisations.

Text and data mining

Text and data mining (TDM) allows researchers to copy and harness disparate datasets for their algorithms. As part of the announcement, the UK says it will now allow TDM “for any purpose,” which provides much greater flexibility than the exception made in 2014, which permitted TDM for non-commercial research only.
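
Mechanically, TDM is straightforward; the legal question is what you may do with the results. A minimal sketch, assuming the widely used third-party requests and BeautifulSoup libraries, might fetch a public page and count term frequencies:

```python
from collections import Counter

import requests
from bs4 import BeautifulSoup

def mine_terms(url: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Fetch a public page, strip the markup, and count word frequencies."""
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    words = [w.lower() for w in text.split() if w.isalpha()]
    return Counter(words).most_common(top_n)

print(mine_terms("https://example.com"))
```

On the article's comparison, output like this could feed a commercial model under the new UK exception, whereas the EU directive would reserve it for scientific research.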

In stark contrast, the EU’s Directive on Copyright in the Digital Single Market offers a TDM exception only for scientific research.

“These changes make the most of the greater flexibilities following Brexit. They will help make the UK more competitive as a location for firms doing data mining,” wrote the IPO in the announcement.

AIs still can’t be inventors

Elsewhere, the UK is more or less sticking to its previous stances—including that AI systems cannot be credited as inventors in patents.

The most high-profile case on the subject is of US-based Dr Stephen Thaler, the founder of Imagination Engines. Dr Thaler has been leading the fight to give credit to machines for their creations.

An AI device created by Dr Thaler, DABUS, was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

In August 2021, a federal court in Australia ruled that AI systems can be credited as inventors under patent law after Ryan Abbott, a professor at the University of Surrey, filed applications in the country on behalf of Dr Thaler. Similar applications were also filed in the UK, US, and New Zealand.

The UK’s IPO rejected the applications at the time, claiming that – under the country’s Patents Act – only humans can be credited as inventors. Subsequent appeals were also rejected.

“A patent is a statutory right and it can only be granted to a person,” explained Lady Justice Laing. “Only a person can have rights. A machine cannot.”

In the IPO’s latest announcement, the body reiterates: “For AI-devised inventions, we plan no change to UK patent law now. Most respondents felt that AI is not yet advanced enough to invent without human intervention.”

However, the IPO highlights the UK is one of only a handful of countries that protects computer-generated works. Any person who makes “the arrangements necessary for the creation of the [computer-generated] work” will have the rights for 50 years from when it was made.

Supporting a flourishing AI industry

Despite being subject to strict data regulations, the UK has become Europe’s hub for AI with pioneers like DeepMind, Wayve, Graphcore, Oxbotica, and BenevolentAI. The country’s world-class universities churn out in-demand AI talent, and its tech investment is more than double that of any other European country.

More generally, the UK is regularly considered one of the best places in the world to set up a business. All eyes are on how the country will use its post-Brexit freedoms to diverge from EU rules to further boost its industries.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” said Chris Philp, DCMS Minister.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

There will undoubtedly be debates over the decisions made by the UK to boost its AI industry, especially regarding TDM, but the policies announced so far will support entrepreneurship and the country’s attractiveness for relevant investments.

(Photo by Chris Robert on Unsplash)

Clearview AI agrees to restrict sales of its faceprint database

10 May 2022

Clearview AI has proposed to restrict sales of its faceprint database as part of a settlement with the American Civil Liberties Union (ACLU).

The controversial facial recognition firm caused a stir due to scraping billions of images of people across the web without their consent. As a result, the company has faced the ire of regulators around the world and numerous court cases.

One such case was filed against Clearview AI by the ACLU in 2020, claiming that the company violated the Biometric Information Privacy Act (BIPA), an Illinois law that requires companies operating in the state to obtain explicit consent from individuals before collecting their biometric data.

“Fourteen years ago, the ACLU of Illinois led the effort to enact BIPA – a groundbreaking statute to deal with the growing use of sensitive biometric information without any notice and without meaningful consent,” explained Rebecca Glenberg, staff attorney for the ACLU of Illinois.

“BIPA was intended to curb exactly the kind of broad-based surveillance that Clearview’s app enables.”

The case is ongoing but the two sides have reached a draft settlement. As part of the proposal, Clearview AI has agreed to restrict sales of its faceprint database to businesses and other private entities across the country.

“By requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse,” said Nathan Freed Wessler, a deputy director of the ACLU Speech, Privacy, and Technology Project.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.” 

The strongest protections will be offered to residents of Illinois. Clearview AI will be banned from sharing access to its database with any private company in the state, as well as with any local public entity, for five years.

Furthermore, Clearview AI plans to filter out images from Illinois. Since this may not catch every image, residents will be able to upload a photo of themselves and Clearview will block its software from returning matches for their face. The company will spend $50,000 on online adverts to raise awareness of this feature.

“This settlement is a big win for the most vulnerable people in Illinois,” commented Linda Xóchitl Tortolero, president and CEO of Mujeres Latinas en Acción, a Chicago-based non-profit.

“Much of our work centres on protecting privacy and ensuring the safety of survivors of domestic violence and sexual assault. Before this agreement, Clearview ignored the fact that biometric information can be misused to create dangerous situations and threats to their lives. Today that’s no longer the case.” 

The protections offered to American citizens outside Illinois aren’t quite as stringent.

Clearview AI is still able to sell access to its huge database to public entities, including law enforcement. In the wake of the US Capitol raid, the company boasted that police use of its facial recognition system increased 26 percent.

However, the company would be banned from selling access to its complete database to private companies. Clearview AI could still sell its software, but any purchaser would need to source their own database to train it.

“There is a battle being fought in courtrooms and statehouses across the country about who is going to control biometrics—Big Tech or the people being tracked by them—and this represents one of the biggest victories for consumers to date,” said J. Eli Wade-Scott from Edelson PC.

In November 2021, the UK’s Information Commissioner’s Office (ICO) imposed a potential fine of just over £17 million on Clearview AI and ordered the company to destroy the personal data it holds on British citizens and cease further processing.

Earlier that month, the Office of the Australian Information Commissioner (OAIC) reached the same conclusion as the ICO and ordered Clearview AI to destroy the biometric data it collected on Australians and cease further collection.

The full draft settlement between Clearview AI and the ACLU can be found here.

(Photo by Maksim Chernishev on Unsplash)

Related: Ukraine harnesses Clearview AI to uncover assailants and identify the fallen

US appeals court decides scraping public web data is fine

19 April 2022

The US Ninth Circuit Court of Appeals has decided that scraping data from a public website doesn’t violate the Computer Fraud and Abuse Act (CFAA).

In 2017, employment analytics firm hiQ filed a lawsuit challenging LinkedIn’s efforts to block it from scraping data from users’ public profiles.

The court barred LinkedIn from stopping hiQ from scraping data after deciding that the CFAA – which criminalises accessing a protected computer without authorisation – doesn’t apply because the information is public.

LinkedIn appealed the case and in 2019 the Ninth Circuit Court sided with hiQ and upheld the original decision.

In March 2020, LinkedIn once again appealed, arguing that implementing technical barriers and sending a cease-and-desist letter revokes authorisation, meaning any subsequent attempt to scrape data is unauthorised and therefore breaks the CFAA.

“At issue was whether, once hiQ received LinkedIn’s cease-and-desist letter, any further scraping and use of LinkedIn’s data was ‘without authorization’ within the meaning of the CFAA,” reads the filing (PDF).

“The panel concluded that hiQ raised a serious question as to whether the CFAA ‘without authorization’ concept is inapplicable where, as here, prior authorization is not generally required but a particular person—or bot—is refused access.”

The filing highlights several of LinkedIn’s technical measures to protect against data-scraping:

  • Prohibiting search engine crawlers and bots – aside from certain allowed entities, like Google – from accessing LinkedIn’s servers via the website’s standard ‘robots.txt’ file.
  • ‘Quicksand’ system that detects non-human activity indicative of scraping.
  • ‘Sentinel’ system that slows (or blocks) activity from suspicious IP addresses.
  • ‘Org Block’ system that generates a list of known malicious IP addresses linked to large-scale scraping.

Overall, LinkedIn claims to block approximately 95 million automated attempts to scrape data every day.
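
The robots.txt mechanism in that list is purely advisory: a well-behaved crawler checks it before fetching anything, as in the standard-library sketch below (the bot name and profile URL are illustrative). Scrapers that ignore the file are precisely what systems like Quicksand and Sentinel exist to catch.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.linkedin.com/robots.txt")
rp.read()

# A hypothetical research crawler asks whether it may fetch a profile page.
allowed = rp.can_fetch("ExampleResearchBot/1.0",
                       "https://www.linkedin.com/in/someone")
print("allowed" if allowed else "disallowed by robots.txt")
```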

The appeals court once again ruled in favour of hiQ, upholding the conclusion that “the balance of hardships tips sharply in hiQ’s favor” and that the company’s existence would be threatened without access to LinkedIn’s public data.

“hiQ’s entire business depends on being able to access public LinkedIn member profiles,” hiQ’s CEO argued. “There is no current viable alternative to LinkedIn’s member database to obtain data for hiQ’s Keeper and Skill Mapper services.” 

However, LinkedIn’s petition (PDF) counters that the ruling has wider implications.

“Under the Ninth Circuit’s rule, every company with a public portion of its website that is integral to the operation of its business – from online retailers like Ticketmaster and Amazon to social networking platforms like Twitter – will be exposed to invasive bots deployed by free-riders unless they place those websites entirely behind password barricades,” wrote the company’s attorneys.

“But if that happens, those websites will no longer be indexable by search engines, which will make information less available to discovery by the primary means by which people obtain information on the Internet.”

AI companies that often rely on mass data-scraping will undoubtedly be pleased with the court’s decision.

Clearview AI, for example, has regularly been targeted by authorities and privacy campaigners for scraping billions of images from public websites to power its facial recognition system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Clearview AI recently made headlines for offering its services to Ukraine to help the country identify both Ukrainian defenders and Russian assailants who’ve lost their lives in the brutal conflict.

Mass data scraping will remain a controversial subject. Supporters will back the appeal court’s ruling while opponents will join LinkedIn’s attorneys in their concerns about normalising the practice.

(Photo by ThisisEngineering RAEng on Unsplash)

The EU’s AI rules will likely take over a year to be agreed

17 February 2022

Rules governing the use of artificial intelligence across the EU will likely take over a year to be agreed upon.

Last year, the European Commission drafted AI laws. While the US and China are set to dominate AI development with their vast resources, economic might, and light-touch regulation, European rivals – including the UK and EU members – believe they can lead in ethical standards.

In the draft of the EU regulations, companies that are found guilty of AI misuse face a fine of €30 million or six percent of their global turnover (whichever is greater). The risk of such fines has been criticised as driving investments away from Europe.

The EU’s draft AI regulation classifies systems into three risk categories:

  • Limited risk – includes systems like chatbots, inventory management, spam filters, and video games.
  • High risk – includes systems that make vital decisions like evaluating creditworthiness, recruitment, justice administration, and biometric identification in non-public spaces.
  • Unacceptable risk – includes systems that are manipulative or exploitative, create social scoring, or conduct real-time biometric identification in public spaces for law enforcement.

Unacceptable risk systems will face a blanket ban from deployment in the EU while limited risk will require minimal oversight.
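
In practice, the draft’s tiers behave like a mapping from use case to obligation. The sketch below distils the categories above into code; the mapping is a loose paraphrase of the draft’s examples, not the Act’s actual annexes.

```python
from enum import Enum

class Risk(Enum):
    LIMITED = "limited"            # minimal oversight
    HIGH = "high"                  # strict obligations (see the list below)
    UNACCEPTABLE = "unacceptable"  # banned from deployment in the EU

# A loose paraphrase of the draft's examples, for illustration only.
USE_CASE_RISK = {
    "spam_filter": Risk.LIMITED,
    "video_game": Risk.LIMITED,
    "credit_scoring": Risk.HIGH,
    "recruitment": Risk.HIGH,
    "social_scoring": Risk.UNACCEPTABLE,
    "public_realtime_biometric_id": Risk.UNACCEPTABLE,
}

def may_deploy(use_case: str) -> bool:
    """Only unacceptable-risk systems face a blanket ban."""
    return USE_CASE_RISK[use_case] is not Risk.UNACCEPTABLE

print(may_deploy("credit_scoring"))  # True, subject to high-risk obligations
print(may_deploy("social_scoring"))  # False
```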

Organisations deploying high-risk AI systems would be required to have things like:

  • Human oversight.
  • A risk-management system.
  • Record keeping and logging (a minimal sketch follows this list).
  • Transparency to users.
  • Data governance and management.
  • Conformity assessment.
  • Government registration.
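
As a concrete example of the record-keeping and logging obligation, the minimal sketch below wraps a prediction call so that every decision is written to an audit log with a timestamp, model version, inputs, and output. The model name and feature fields are hypothetical.

```python
import json
import logging
import time

logging.basicConfig(filename="audit.log", level=logging.INFO)

def logged_prediction(model_id: str, predict, features: dict):
    """Run a prediction and keep the kind of record the draft rules expect."""
    output = predict(features)
    logging.info(json.dumps({
        "ts": time.time(),      # when the decision was made
        "model": model_id,      # which system version made it
        "inputs": features,     # what it saw
        "output": output,       # what it decided
    }))
    return output

# Hypothetical creditworthiness model and applicant record:
score = logged_prediction("credit-model-v3", lambda f: 0.42, {"income": 30000})
```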

However, the cumbersome nature of the EU – requiring agreement from all member states, each with their own priorities – means that new regulations are often subject to more debate and delay than national lawmaking.

Reuters reports that two key lawmakers said on Wednesday that agreement on the EU’s AI regulations will likely take more than a year longer. The delay stems primarily from debates over whether facial recognition should be banned and who should enforce the rules.

“Facial recognition is going to be the biggest ideological discussion between the right and left,” said one lawmaker, Dragos Tudorache, in a Reuters interview.

“I don’t believe in an outright ban. For me, the solution is to put the right rules in place.”

With leading academic institutions and more than 1,300 AI companies employing over 30,000 people, the UK is the biggest destination for AI investment in Europe and the third in the world. Between January and June 2021, global investors poured £13.5 billion into more than 1,400 “deep tech” UK private technology firms—more than Germany, France, and Israel combined.

In September 2021, the UK published its 10-year National Artificial Intelligence Strategy in a bid to secure its European AI leadership. Governance plays a large role in the strategy.

“The UK already punches above its weight internationally and we are ranked third in the world behind the USA and China in the list of top countries for AI,” commented DCMS Minister Chris Philp.

“We’re laying the foundations for the next ten years’ growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it.”

As part of its strategy, the UK is creating an ‘AI Standards Hub’ to coordinate the country’s engagement in establishing global rules and is working with The Alan Turing Institute to update guidance on AI ethics and safety.

“We are proud of creating a dynamic, collaborative community of diverse researchers and are growing world-leading capabilities in responsible, safe, ethical, and inclusive AI research and innovation,” said Professor Sir Adrian Smith, Chief Executive of The Alan Turing Institute.

Striking a balance between innovation-stifling overregulation and ethics-compromising underregulation is never a simple task. It will be interesting to observe how AI regulations in Europe will differ across the continent and beyond.

(Photo by Christian Lue on Unsplash)

Related: British intelligence agency GCHQ publishes ‘Ethics of AI’ report
