policy Archives - AI News
https://www.artificialintelligence-news.com/tag/policy/
Mon, 14 Aug 2023 09:52:36 +0000

UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet
https://www.artificialintelligence-news.com/2023/08/14/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/
Mon, 14 Aug 2023 09:52:34 +0000

The post UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet appeared first on AI News.

Britain’s Deputy Prime Minister Oliver Dowden has shared his view that AI will be the most “extensive” industrial revolution yet.

Dowden highlighted AI’s dual role, emphasising its capacity to augment productivity and streamline mundane tasks. However, he also put the spotlight on the looming threats it poses to democracies worldwide.

In an interview with The Times, Mr Dowden said: “This is a total revolution that is coming. It’s going to totally transform almost all elements of life over the coming years, and indeed, even months, in some cases.

“It is much faster than other revolutions that we’ve seen and much more extensive, whether that’s the invention of the internal combustion engine or the industrial revolution.”

Already making inroads into governmental processes, AI has been adopted for processing asylum claim applications within the UK’s Home Office. The potential for AI-driven automation also extends to reducing paperwork burdens in ministerial decision-making, ultimately enabling swifter and more efficient governance.

Sridhar Iyengar, Managing Director for Zoho Europe, commented:

“As AI continues to develop at a rapid pace, collaboration between government, business, and industry experts is needed to increase education and introduce regulations or guidelines which can guide its ethical use.

Only then can businesses confidently use AI in the right way and understand how to avoid any negative impact.”

While AI can expedite information analysis and facilitate decision-making, Dowden emphasised that the crucial task of making policy choices remains squarely within the human domain. He stressed that the objective is to utilise AI for tasks that it excels at – such as data collation – to facilitate informed decision-making by human leaders.

Discussing the broader economic implications of the AI revolution, Dowden likened the impending shift to the advent of the automobile. He recognised the potential for significant workforce upheaval and asserted that the government’s responsibility lies in aiding citizens’ transition as AI reshapes industries.

Sheila Flavell CBE, COO of FDM Group, explained:

“In order to truly maximise the potential of AI, the UK must prioritise a workforce of technically skilled staff capable of leading the development and deployment of AI to work alongside staff and make their day-to-day roles easier.

People such as graduates, ex-forces and returners are well-placed to play a central role in this workforce through education courses and training in AI, supporting businesses with this rapidly-evolving technology.”

Dowden acknowledged the inherent risks posed by AI’s exponential growth. He warned of the potential for AI to be exploited by malicious actors—ranging from terrorists using it to gain knowledge of dangerous materials, to conducting large-scale hacking operations. 

Referring to a recent breach that exposed the personal details of thousands of officers and staff from the Police Service of Northern Ireland, Dowden said the incident was an “industrial scale breach of data” that was made possible by AI.

Andy Ward, VP of International for Absolute Software, said:

“We are in the midst of an AI revolution, and for all the business benefits that AI brings, we must also be wary of the potential cybersecurity concerns that come with any new technology.

AI can be used to positive effect when bolstering cyber defences, playing a role in threat detection through data and pattern analysis to identify certain attacks, but we have to acknowledge that malicious actors also have access to AI to increase the sophistication of their threats.”

While urging a measured response to potential AI-driven threats, Dowden emphasised the importance of addressing risks and vulnerabilities proactively. He stressed the need to strike a balance between harnessing AI’s immense potential for societal progress and ensuring that safeguards are in place to counter its misuse.

Earlier this year, the UK announced that it will host a global summit to address AI risks.

(Image Credit: UK Government under CC BY 2.0 license)

See also: Google report highlights AI’s impact on the UK economy

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GitHub CEO: The EU ‘will define how the world regulates AI’
https://www.artificialintelligence-news.com/2023/02/06/github-ceo-eu-will-define-how-world-regulates-ai/
Mon, 06 Feb 2023 17:04:56 +0000

The post GitHub CEO: The EU ‘will define how the world regulates AI’ appeared first on AI News.

GitHub CEO Thomas Dohmke addressed the EU Open Source Policy Summit in Brussels and gave his views on the bloc’s upcoming AI Act.

“The AI Act will define how the world regulates AI and we need to get it right, for developers and the open-source community,” said Dohmke.

Dohmke was born and grew up in Germany but now lives in the US. As such, he is all too aware of the widespread belief that the EU cannot lead when it comes to tech innovation.

“As a European, I love seeing how open-source AI innovations are beginning to break the narrative that only the US and China can lead on tech innovation.”

“I’ll be honest, as a European living in the United States, this is a pervasive – and often true – narrative. But this can change. And it’s already beginning to, thanks to open-source developers.”

AI will revolutionise just about every aspect of our lives. Regulation is vital to minimise the risks associated with AI while allowing the benefits to flourish.

“Together, OSS (Open Source Software) developers will use AI to help make our lives better. I have no doubt that OSS developers will help build AI innovations that empower those with disabilities, help us solve climate change, and save lives.”

A risk of overregulation is that it drives innovation elsewhere. Startups are more likely to establish themselves in countries like the US and China, where they’re not subject to such strict regulations. Europe would then find itself falling behind and having less influence on the global stage when it comes to AI.

“The AI Act is so crucial. This policy could well set the precedent for how the world regulates AI. It is foundationally important. Important for European technological leadership, and the future of the European economy itself. The AI Act must be fair and balanced for the open-source community.

“Policymakers should help us get there. The AI Act can foster democratised innovation and solidify Europe’s leadership in open, values-based artificial intelligence. That is why I believe that open-source developers should be exempt from the AI Act.”

In expanding on his belief that open-source developers should be exempt, Dohmke explains that the compliance burden should fall on those shipping products.

“OSS developers are often volunteers. Many are working two jobs. They are scientists, doctors, academics, professors, and university students alike. They don’t usually stand to profit from their contributions—and they certainly don’t have big budgets and compliance departments!”

EU lawmakers are hoping to agree on draft AI rules next month with the aim of winning the acceptance of member states by the end of the year.

“Open-source is forming the foundation of AI innovation in Europe. The US and China don’t have to win it all. Let’s break that narrative apart!

“Let’s give the open-source community the daylight and the clarity to grow their ideas and build them for the rest of the world! And by doing so, let’s give Europe the chance to be a leader in this new age of AI.”

GitHub’s policy paper on the AI Act can be found here.

(Image Credit: Collision Conf under CC BY 2.0 license)

Relevant: US and EU agree to collaborate on improving lives with AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Democrats renew push for ‘algorithmic accountability’
https://www.artificialintelligence-news.com/2022/02/04/democrats-renew-push-for-algorithmic-accountability/
Fri, 04 Feb 2022 09:04:05 +0000

The post Democrats renew push for ‘algorithmic accountability’ appeared first on AI News.

Democrats have reintroduced their Algorithmic Accountability Act that seeks to hold tech firms accountable for bias in their algorithms.

The bill is an updated version of one first introduced by Senator Ron Wyden (D-OR) in 2019 but never passed the House or Senate. The updated bill was introduced this week by Wyden alongside Senator Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY).

Concern about bias in algorithms is increasing as they are used for ever more critical decisions. Bias would lead to inequalities being automated—with some people being given more opportunities than others.

“As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalised communities,” said Booker.

A human can always be held accountable for a decision to, say, reject a mortgage/loan application. There’s currently little-to-no accountability for algorithmic decisions.

Representative Yvette Clarke explained:

“When algorithms determine who goes to college, who gets healthcare, who gets a home, and even who goes to prison, algorithmic discrimination must be treated as the highly significant issue that it is.

These large and impactful decisions, which have become increasingly void of human input, are forming the foundation of our American society that generations to come will build upon. And yet, they are subject to a wide range of flaws from programming bias to faulty datasets that can reinforce broader societal discrimination, particularly against women and people of colour.

It is long past time Congress act to hold companies and software developers accountable for their discrimination by automation.

With our renewed Algorithmic Accountability Act, large companies will no longer be able to turn a blind eye towards the deleterious impact of their automated systems, intended or not. We must ensure that our 21st Century technologies become tools of empowerment, rather than marginalisation and seclusion.”

The bill would force audits of AI systems, with findings reported to the Federal Trade Commission. A public database would be created so that decisions can be reviewed, giving confidence to consumers.

“If someone decides not to rent you a house because of the colour of your skin, that’s flat-out illegal discrimination. Using a flawed algorithm or software that results in discrimination and bias is just as bad,” commented Wyden.

“Our bill will pull back the curtain on the secret algorithms that can decide whether Americans get to see a doctor, rent a house, or get into a school. Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems.”

In our predictions for the AI industry in 2022, we predicted an increased focus on Explainable AI (XAI). XAI is artificial intelligence in which the results of the solution can be understood by humans and is seen as a partial solution to algorithmic bias.
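Explainable AI spans many techniques; one of the simplest is permutation importance, which measures how much a model's decisions depend on each input. The sketch below is purely illustrative: the scoring rule, feature names, and records are invented for the example and do not correspond to any real decision system or anything in the article.

```python
import random

# Minimal sketch of permutation importance, one simple XAI technique.
# The "model" and its features are hypothetical, invented for illustration.

def model(row):
    # Hypothetical opaque decision rule: approve (1) when a weighted score clears 20.
    return 1 if 0.7 * row["income"] - 0.3 * row["debt"] > 20 else 0

def permutation_importance(predict, dataset, feature, seed=0):
    """Fraction of decisions that flip when `feature` is shuffled across records."""
    rng = random.Random(seed)
    shuffled = [row[feature] for row in dataset]
    rng.shuffle(shuffled)
    flips = 0
    for row, value in zip(dataset, shuffled):
        perturbed = {**row, feature: value}  # same record, one feature replaced
        if predict(perturbed) != predict(row):
            flips += 1
    return flips / len(dataset)

# Synthetic records; "id" is a feature the model never reads.
dataset = [
    {"income": 50, "debt": 10, "id": 1},
    {"income": 20, "debt": 40, "id": 2},
    {"income": 40, "debt": 5,  "id": 3},
    {"income": 10, "debt": 0,  "id": 4},
]

for feature in ("income", "debt", "id"):
    print(feature, permutation_importance(model, dataset, feature))
```

Shuffling a feature the model ignores (here `id`) never flips a decision, so its importance is zero, while features the model actually reads may score higher; rankings like this are one way a human can inspect what drives an otherwise opaque decision.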

“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin (D-Wis), who is co-sponsoring the bill.

“It is long past time for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”

Joining Baldwin in co-sponsoring the Algorithmic Accountability Act are Senators Brian Schatz (D-Hawaii), Mazie Hirono (D-Hawaii), Ben Ray Luján (D-NM), Bob Casey (D-Pa), and Martin Heinrich (D-NM).

A copy of the full bill is available here (PDF).

(Photo by Darren Halstead on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The UK is changing its data laws to boost its digital economy
https://www.artificialintelligence-news.com/2021/08/26/uk-changing-data-laws-boost-digital-economy/
Thu, 26 Aug 2021 12:17:50 +0000

The post The UK is changing its data laws to boost its digital economy appeared first on AI News.

Britain will diverge from EU data laws that have been criticised as being overly strict and driving investment and innovation out of Europe.

Culture Secretary Oliver Dowden has confirmed the UK Government’s intention to diverge from key parts of the infamous General Data Protection Regulation (GDPR). Estimates suggest that as much as £11 billion worth of trade goes unrealised around the world due to barriers associated with data transfers.

“Now that we have left the EU, I’m determined to seize the opportunity by developing a world-leading data policy that will deliver a Brexit dividend for individuals and businesses across the UK,” said Dowden.

When GDPR came into effect, it received its fair share of both praise and criticism. On the one hand, GDPR admirably sought to protect the data of consumers. On the other, “pointless” cookie popups, extra paperwork, and concerns about hefty fines have caused frustration and led many businesses to pack their bags and take their jobs, innovation, and services to less strict regimes.

GDPR is just one example. Another would be Articles 11 and 13 of the EU Copyright Directive, which some – including the inventor of the World Wide Web, Sir Tim Berners-Lee, and Wikipedia founder Jimmy Wales – have opposed as an “upload filter”, “link tax”, and “meme killer”. This blog post from YouTube explained why creators should care about Europe’s increasingly strict laws.

Mr Dowden said the new reforms would be “based on common sense, not box-ticking” but uphold the necessary safeguards to protect people’s privacy.

What will the impact be on the UK’s AI industry?

AI is, of course, powered by data—masses of it. The idea of mass data collection terrifies many people but is harmless so long as it’s truly anonymised. Arguably, it’s a lack of data that should be more concerning as biases in many algorithms today are largely due to limited datasets that don’t represent the full diversity of our societies.

Western facial recognition algorithms, for example, produce far more false positives for minorities than for white men—leading to automated racial profiling. A 2010 study (PDF) by researchers at NIST and the University of Texas found that algorithms designed and tested in East Asia are better at recognising East Asians.
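As a rough illustration of how such disparities are quantified, bias audits commonly compare false positive rates per demographic group. The sketch below uses synthetic match records and placeholder group names; none of the numbers come from the article or the cited study.

```python
# Illustrative sketch (synthetic data): comparing per-group false positive
# rates, a common measure in audits of face-matching systems.

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

# Synthetic audit records: (group, ground_truth_match, predicted_match)
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# Collect ground truth and predictions per group.
groups = {}
for group, truth, pred in records:
    groups.setdefault(group, ([], []))
    groups[group][0].append(truth)
    groups[group][1].append(pred)

for group, (truth, pred) in sorted(groups.items()):
    print(group, round(false_positive_rate(truth, pred), 2))
# → group_a 0.33
# → group_b 0.67
```

A large gap between groups, as in this toy data, is the kind of disparity the studies above describe: the same threshold produces far more wrongful matches for one population than another.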

However, the data must be collected responsibly and checked as thoroughly as possible. Last year, MIT was forced to take offline a popular dataset called 80 Million Tiny Images that was created in 2008 to train AIs to detect objects after discovering that images were labelled with misogynistic and racist terms.

While the UK is a European leader in AI, few people are under any illusion that it could become a world leader in pure innovation and deployment, because it’s simply unable to match the funding and resources available to powers like the US and China. Instead, experts believe the UK should build on its academic and diplomatic strengths to set the “gold standard” in ethical artificial intelligence.

“There’s an opportunity for us to set world-leading, gold standard data regulation which protects privacy, but does so in as light touch a way as possible,” Mr Dowden said.

As it diverges from the EU’s laws in the first major regulatory shakeup since Brexit, the UK needs to show it can strike a fair balance between the EU’s strict regime and the arguably too lax protections in many other countries.

The UK also needs to promote and support innovation while avoiding the “Singapore-on-Thames”-style model of a race to the bottom in standards, rights, and taxes that many Remain campaigners feared would happen if the country left the EU. Similarly, it needs to prove that “Global Britain” is more than just a soundbite.

To that end, Britain’s data watchdog is getting a shakeup and John Edwards, New Zealand’s current privacy commissioner, will head up the regulator.

“It is a great honour and responsibility to be considered for appointment to this key role as a watchdog for the information rights of the people of the United Kingdom,” said Edwards.

“There is a great opportunity to build on the wonderful work already done and I look forward to the challenge of steering the organisation and the British economy into a position of international leadership in the safe and trusted use of data for the benefit of all.”

The UK is also seeking global data partnerships with six international partners: the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre, and Colombia. Over the long term, it hopes to strike agreements with fast-growing markets like India and Brazil to facilitate data flows in scientific research, law enforcement, and more.

Commenting on the UK’s global data plans Andrew Dyson, Global Co-Chair of DLA Piper’s Data Protection, Privacy and Security Group, said:

“The announcements are the first evidence of the UK’s vision to establish a bold new regulatory landscape for digital Britain post-Brexit. Earlier in the year, the UK and EU formally recognised each other’s data protection regimes—that allowed data to continue to flow freely after Brexit.

This announcement shows how the UK will start defining its own future regulatory pathways from here, with an expansion of digital trade a clear driver if you look at the willingness to consider potential recognition of data transfers to Australia, Singapore, India and the USA.

It will be interesting to see the further announcements that are sure to follow on reforms to the wider policy landscape that are just hinted at here, and of course the changes in oversight we can expect from a new Information Commissioner.”

An increasingly punitive EU is unlikely to react kindly to the news, having added clauses to the recent deal reached with the UK to prevent the country from diverging too far from its own standards.

Mr Dowden, however, said there was “no reason” the EU should react with too much animosity as the bloc has reached data agreements with many countries outside of its regulatory orbit and the UK must be free to “set our own path”.

(Photo by Massimiliano Morosinotto on Unsplash)

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Going for gold: Britain can set the standard in ethical AI
https://www.artificialintelligence-news.com/2021/08/05/going-for-gold-britain-can-set-the-standard-in-ethical-ai/
Thu, 05 Aug 2021 09:59:34 +0000

The post Going for gold: Britain can set the standard in ethical AI appeared first on AI News.

A study by BCS, The Chartered Institute for IT has found the UK can set the “gold standard” in ethical artificial intelligence.

The UK – home to companies including DeepMind, Graphcore, Oxbotica, Darktrace, BenevolentAI, and others – is Europe’s leader in AI. However, the country is unable to match the funding and support available to counterparts residing in countries like the US and China.

Many experts have instead suggested that the UK should tap its strengths in leading universities and institutions, diplomacy, and democratic values to become a world leader in creating AI that cares about humanity.

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT and a lead author of the report, said:

“The UK should set the ‘gold standard’ for professional and ethical AI, as a critical part of our economic recovery.

We all deserve to have understanding of, and confidence in, AI as it affects our lives over the coming years. To get there, the profession should be known as a go-to place for men and women from a diverse range of backgrounds, who reflect the needs of everyone they are engineering software for.

That might be credit scoring apps, cancer diagnoses based on training data, or software that decides if you get a job interview or not.”

Current biases in many AI systems could exacerbate existing societal problems, including the wealth gap and discrimination based on race, gender, sexual orientation, age, and more.

“It’s about developing a highly-skilled, ethical, and diverse workforce – and a political class – that understands AI well enough to deliver the right solutions for society,” explains Mitchell.

“That will take strong leadership from the government and access to digital skills training across the board.”

Public trust in AI has been damaged through high-profile missteps including the crisis last summer when an algorithm was used to estimate the grades of students. A follow-up survey from YouGov – commissioned by BCS – found that 53 percent of UK adults had no faith in any organisation to make judgements about them.

(Credit: BCS)

In May last year, the national press reported that code written by Professor Neil Ferguson and his team at Imperial College London – code that informed the decision to enter lockdown – was “totally unreliable”, which also damaged public trust in software. Since then, articles in the science journal Nature have shown Professor Ferguson’s epidemiological code to be fit for purpose. With hindsight, people should now know this, but most don’t read Nature and still believe the reports in the national press that the code was flawed.

The report found a large disparity in the competence and ethical practices of organisations using AI. One of the suggestions in the report is for the government to create a framework of standards to meet for the adoption of AI across both the public and private sectors.

In the UK government’s National Data Strategy, it states: “Used badly, data could harm people or communities, or have its overwhelming benefits overshadowed by public mistrust.”

BCS’ report, Priorities For The National AI Strategy, builds on the work of the AI Council Roadmap and the National Data Strategy. It has been published to complement the UK government’s plan, the final version of which is due later this year.

A full copy of BCS’ report can be found here (PDF).

(Photo by Ethan Wilkinson on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Aussie court rules AIs can be credited as inventors under patent law
https://www.artificialintelligence-news.com/2021/08/03/aussie-court-rules-ais-can-be-credited-as-inventors-under-patent-law/
Tue, 03 Aug 2021 16:10:43 +0000

The post Aussie court rules AIs can be credited as inventors under patent law appeared first on AI News.

A federal court in Australia has ruled that AI systems can be credited as inventors under patent law in a case that could set a global precedent.

Ryan Abbott, a professor at the University of Surrey, has filed over a dozen patent applications around the world – including in the UK, US, New Zealand, and Australia – on behalf of US-based Dr Stephen Thaler.

The twist here is that it’s not Thaler whom Abbott is attempting to credit as an inventor, but rather Thaler’s AI system, known as DABUS.

“In my view, an inventor as recognised under the act can be an artificial intelligence system or device,” said justice Jonathan Beach, overturning Australia’s original verdict. “We are both created and create. Why cannot our own creations also create?”

DABUS consists of neural networks and was used to invent an emergency warning light, a food container that improves grip and heat transfer, and more.

Until now, all of the patent applications had been rejected—including in Australia. Each country determined that a human must be the credited inventor.

Whether AIs should be afforded certain “rights” similar to humans is a key debate, and one that is increasingly in need of answers. This patent case could be the first step towards establishing when machines – with increasing forms of sentience – should be treated like humans.

DABUS was awarded its first patent – for “a food container based on fractal geometry” – by South Africa’s Companies and Intellectual Property Commission on June 24.

Following the patent award, Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, commented:

“This is a truly historic case that recognises the need to change how we attribute invention. We are moving from an age in which invention was the preserve of people to an era where machines are capable of realising the inventive step, unleashing the potential of AI-generated inventions for the benefit of society.

The School of Law at the University of Surrey has taken a leading role in asking important philosophical questions such as whether innovation can only be a human phenomenon, and what happens legally when AI behaves like a person.”

AI News reached out to the patent experts at ACT | The App Association, which represents more than 5,000 app makers and connected device companies around the world, for their perspective.

Brian Scarpelli, Senior Global Policy Counsel at ACT | The App Association, commented:

“The App Association, in alignment with the plain language of patent laws across key jurisdictions (including Australia’s 1990 Patents Act), is opposed to the proposal that a patent may be granted for an invention devised by a machine, rather than by a natural person.

Today’s patent laws can, for certain kinds of AI inventions, appropriately support inventorship. Patent offices can use the existing requirements for software patentability as a starting point to identify necessary elements of patentable AI inventions and applications – for example for AI technology that is used to improve machine capability, where it can be delineated, declared, and evaluated in a way equivalent to software inventions.

But more generally, determinations regarding when and by whom inventorship and authorship can be claimed for works autonomously created by AI could represent a drastic shift in law and policy. This would have direct implications for policy questions about whether allowing patents on inventions made by machines furthers public policy goals, even reaching into broader definitions of AI personhood.

Continued study, both by national/regional patent offices and multilateral fora like the World Intellectual Property Organization, is going to be critical and needs to continue to inform a comprehensive debate by policymakers.”

Feel free to let us know in the comments whether you believe AI systems should have similar legal protections and obligations to humans.

(Photo by Trollinho on Unsplash)

Find out more about Digital Transformation Week North America, taking place on November 9-10 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

The post Aussie court rules AIs can be credited as inventors under patent law appeared first on AI News.

CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust
https://www.artificialintelligence-news.com/2021/03/05/cdei-public-tech-tackle-pandemic-use-governance-trust/
Fri, 05 Mar 2021 09:47:47 +0000

Research from the UK government’s Centre for Data Ethics and Innovation (CDEI) has found the public believes technology isn’t being fully utilised to tackle the pandemic, but greater use requires trust in how it is governed.

CDEI advises the government on the responsible use of AI and data-driven technologies. Between June and December 2020, the advisory body polled over 12,000 people to gauge sentiment around how such technologies are being used.

Edwina Dunn, Deputy Chair for the CDEI, said:

“Data-driven technologies including AI have great potential for our economy and society. We need to ensure that the right governance regime is in place if we are to unlock the opportunities that these technologies present.

The CDEI will be playing its part to ensure that the UK is developing governance approaches that the public can have confidence in.”

Close to three quarters (72%) of respondents expressed confidence in digital technology having the potential to help tackle the pandemic—a belief shared across all demographics.

A majority (~69%) also support, in principle, the use of technologies such as wearables to assist with social distancing in the workplace.

Wearables haven’t yet been used to help counter the spread of coronavirus. The most widely deployed technology is the contact-tracing app, but its effectiveness has often come into question.

Many people feel data-driven technologies are not being used to their full potential. Under half (42%) believe digital technology is improving the situation in the UK. Seven percent even think current technologies are making the situation worse.

The scepticism about using digital technologies to tackle the pandemic is less about the technology itself – just 17 percent of respondents expressed that view – and more about a lack of faith in whether people and organisations will use it properly (39%).

John Whittingdale, Minister of State for Media and Data at the Department for Digital, Culture, Media and Sport, commented:

“We are determined to build back better and capitalise on all we have learnt from the pandemic, which has forced us to share data quickly, efficiently and responsibly for the public good. This research confirms that public trust in how we govern data is essential. 

Through our National Data Strategy, we have committed to unlocking the huge potential of data to tackle some of society’s greatest challenges, while maintaining our high standards of data protection and governance.”

When controlling for all other variables, the CDEI found that “trust that the right rules and regulations are in place” is the single biggest predictor of whether someone will support the use of digital technology.
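
A minimal sketch of what “controlling for all other variables” can mean in practice, using entirely made-up data rather than the CDEI’s survey (the variable names and numbers below are assumptions for illustration). One simple approach is stratification: compare support rates for high- and low-trust respondents within each level of another factor, so that factor cannot explain the gap.

```python
# Hypothetical sketch, not the CDEI's methodology or data: stratify
# respondents by a second factor (here, age) and compare support rates
# by trust level within each stratum.
from collections import defaultdict

# (trusts_rules, age_over_55, supports_tech) for invented respondents
respondents = [
    (1, 0, 1), (1, 0, 1), (1, 0, 0), (1, 1, 1), (1, 1, 1),
    (0, 0, 1), (0, 0, 0), (0, 0, 0), (0, 1, 0), (0, 1, 0),
]

# Tally supporters and totals per (trust, age) cell.
counts = defaultdict(lambda: [0, 0])  # (trust, age) -> [supporters, total]
for trust, age, supports in respondents:
    counts[(trust, age)][0] += supports
    counts[(trust, age)][1] += 1

# Within each age stratum, compare support by trust level.
for (trust, age), (yes, total) in sorted(counts.items()):
    print(f"trust={trust} age_over_55={age}: {yes}/{total} support")
```

In this toy data, trusting respondents support the technology more within both age strata, which is the pattern the CDEI describes; its actual analysis controls for many variables at once rather than just one.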

Among the key ways to help improve public trust is by increasing transparency and accountability. Less than half (45%) of respondents know where to raise concerns if they feel digital technology is causing harm.

CDEI’s research highlighted that people, on the whole, believe data-driven technologies can help tackle the pandemic. However, work needs to be done to improve trust in how such technologies are deployed and managed.

(Photo by Mangopear creative on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post CDEI: Public believes tech isn’t being fully utilised to tackle pandemic, greater use depends on governance trust appeared first on AI News.

CDEI launches a ‘roadmap’ for tackling algorithmic bias
https://www.artificialintelligence-news.com/2020/11/27/cdei-launches-roadmap-tackling-algorithmic-bias/
Fri, 27 Nov 2020 16:10:35 +0000

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate for people with darker skin and for women. The error rate is therefore higher when facial recognition algorithms are used on some parts of society than on others.

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when they’re being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of where AI can unfairly impact some parts of society over another.
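
The disparity described above becomes visible only when a system’s error rate is disaggregated by demographic group rather than reported as a single aggregate figure. A hedged sketch with invented match results (the group names and numbers are assumptions, not real benchmark data):

```python
# Illustrative only: per-group error rates reveal a bias that a single
# aggregate error rate hides.
def error_rate(pairs):
    """Fraction of (predicted, actual) pairs that disagree."""
    return sum(p != a for p, a in pairs) / len(pairs)

# (predicted_id, actual_id) match results, split by hypothetical group
results_by_group = {
    "group_a": [(1, 1), (2, 2), (3, 3), (4, 4), (5, 6)],  # 1 error in 5
    "group_b": [(1, 1), (2, 3), (4, 5), (6, 6), (7, 8)],  # 3 errors in 5
}

aggregate = error_rate([r for rs in results_by_group.values() for r in rs])
print(f"aggregate error rate: {aggregate:.0%}")  # hides the gap
for group, rs in results_by_group.items():
    print(f"{group}: {error_rate(rs):.0%}")      # reveals it
```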

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Transparency is required for algorithms. In financial services, a business loan or mortgage could be rejected without transparency simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but dependent on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task but if achieved would lead to increased fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes but found that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has relied on data to make decisions for longer than arguably any other to determine things like how likely it is an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but noted variance across forces in both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.
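
The recommendation on using protected characteristic data for outcome monitoring can be sketched as comparing the rate of favourable decisions across groups. The example below uses the “four-fifths” selection-rate ratio, a heuristic from US employment-discrimination practice rather than anything the CDEI mandates, with invented numbers:

```python
# Hypothetical outcome monitoring (illustrative figures, not from the
# report): compute favourable-decision rates per group and flag large
# disparities using the four-fifths heuristic.
decisions = {            # group -> (favourable decisions, total decisions)
    "group_a": (80, 100),
    "group_b": (52, 100),
}

rates = {g: fav / total for g, (fav, total) in decisions.items()}
ratio = min(rates.values()) / max(rates.values())

for g, r in rates.items():
    print(f"{g}: {r:.0%} favourable")
print(f"selection-rate ratio: {ratio:.2f}"
      + ("  <- below 0.8, worth investigating" if ratio < 0.8 else ""))
```

A ratio this far below 0.8 would not prove unlawful bias on its own, but it is the kind of signal outcome monitoring is meant to surface for further investigation.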

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

The post CDEI launches a ‘roadmap’ for tackling algorithmic bias appeared first on AI News.

Elon Musk wants more stringent AI regulation, including for Tesla
https://www.artificialintelligence-news.com/2020/02/19/elon-musk-stringent-ai-regulation-tesla/
Wed, 19 Feb 2020 13:28:24 +0000

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation founded with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI’s board in February 2018 over disagreements with the company’s work.

Earlier this week, Musk said that OpenAI should be more transparent and specifically said his confidence is “not high” in former Google engineer Dario Amodei when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared a scepticism similar to Carmack’s, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.

The post Elon Musk wants more stringent AI regulation, including for Tesla appeared first on AI News.

The White House warns European allies not to overregulate AI
https://www.artificialintelligence-news.com/2020/01/07/white-house-warns-european-allies-overregulate-ai/
Tue, 07 Jan 2020 13:48:00 +0000

The White House has urged its European allies to avoid overregulation of AI to prevent Western innovation from being hindered.

While the news has gone somewhat under the radar given recent events, the Americans are concerned that overregulation may cause Western nations to fall behind the rest of the world.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

The UK is expected to retain its lead as the European hub for AI innovation with vast amounts of private and public sector investment, successful companies like DeepMind, and world class universities helping to address the global talent shortage. In Oxford Insights’ 2017 Government AI Readiness Index, the UK ranked number one due to areas such as digital skills training and data quality. The Index considers public service reform, economy and skills, and digital infrastructure.

Despite its European AI leadership, the UK would struggle to match the levels of funding afforded to firms residing in superpowers like the US and China. Many experts have suggested the UK should instead focus on leading in the ethical integration of AI and developing sensible regulations, an area it has much experience in.

Here’s a timeline of some recent work from the UK government towards this goal:

  • September 2016 – the House of Commons Science and Technology Committee published a 44-page report on “Robotics and Artificial Intelligence”, which investigates the economic and social implications of employment changes; ethical and legal issues around safety, verification, bias, privacy, and accountability; and strategies to enhance research, funding, and innovation.
  • January 2017 – an All Party Parliamentary Group on Artificial Intelligence (APPG AI) was established to address ethical issues, social impact, industry norms, and regulatory options for AI in parliament.
  • June 2017 – parliament established the Select Committee on AI to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. All written and oral evidence received by the committee can be seen here.
  • April 2018 – the aforementioned committee published a 183-page report, “AI in the UK: ready, willing and able?” which considers AI development and governance in the UK. It acknowledges that the UK cannot compete with the US or China in terms of funding or people but suggests the country may have a competitive advantage in considering the ethics of AI.
  • September 2018 – the UK government launched an experiment with the World Economic Forum to develop procurement policies for AI. The partnership will bring together diverse stakeholders to collectively develop guidelines to capitalise on governments’ buying power to support the responsible deployment and design of AI technologies.

Western nations are seen as being at somewhat of a disadvantage due to sensitivities around privacy. EU nations, in particular, have strict data collection regulations such as GDPR which limits the amount of data researchers can collect to train AIs.

“Very often we hear ‘Where are the British and European Googles and Facebooks?’ Well, it’s because of barriers like this which stop organisations like that being possible to grow and develop,” said Peter Wright, solicitor and managing director of Digital Law UK.

Dependent on the UK’s future trade arrangement with the EU, it could, of course, decide to chart its own regulatory path following Brexit.

Speaking to reporters in a call, US CTO Michael Kratsios said: “Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people.”

In the same call, US deputy CTO Lynne Parker commented: “As countries around the world grapple with similar questions about the appropriate regulation of AI, the US AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties.

“The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe to use the US AI principles as a framework. The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hub of innovation, advancing our common values.”

A regulation similar to GDPR, California’s CCPA, was also signed into law in June 2018. “I think the examples in the US today at state and local level are examples of overregulation which you want to avoid on the national level,” said a government official.

The post The White House warns European allies not to overregulate AI appeared first on AI News.
