IFOW: AI can have a positive impact on jobs
AI News, Wed, 20 Sep 2023
https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by MITRE-Harris found that a majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent said they believe today’s AI technologies are safe and secure, down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF)

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI ‘godfather’ warns of dangers and quits Google
AI News, Tue, 02 May 2023
https://www.artificialintelligence-news.com/2023/05/02/ai-godfather-warns-dangers-and-quits-google/

Geoffrey Hinton, known as the “Godfather of AI,” has expressed concerns about the potential dangers of AI and left his position at Google to discuss them openly.

Hinton, alongside Yoshua Bengio and Yann LeCun, won the Turing Award in 2018 for laying the foundations of deep learning. He had been working at Google since 2013 but resigned to speak out about the fast pace of AI development and the risks it poses.

In an interview with The New York Times, Hinton warned that the rapid development of generative AI products was “racing towards danger” and that false text, images, and videos created by AI could lead to a situation where average people “would not be able to know what is true anymore.”

Hinton also expressed concerns about the impact of AI on the job market, as machines could eventually replace roles such as paralegals, personal assistants, and translators.

“The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that,” said Hinton.

Hinton’s concerns are not unfounded. AI has already been used to create deepfakes, which are videos that manipulate facial and voice expressions to make it appear that someone is saying something they did not say. These deepfakes can be used to spread misinformation or damage a person’s reputation.

Furthermore, AI has the potential to automate many jobs, leading to job losses. This week, IBM CEO Arvind Krishna said that the company plans to use AI to replace around 30 percent of back office jobs—equivalent to around 7,800 jobs.

Hinton is not alone in his concerns. Other experts have also warned about the risks of AI.

Elon Musk, the CEO of Tesla and SpaceX, has called AI “our biggest existential threat”, a view the astrophysicist Neil deGrasse Tyson later said he shared. In 2018, Stephen Hawking warned that AI could replace humans as the dominant species on Earth.

In March, Musk joined Apple co-founder Steve Wozniak and over 1,000 other experts in signing an open letter calling for a halt to “out-of-control” AI development.

However, some experts believe that AI can be developed in a way that benefits society. For example, AI can be used to diagnose diseases, detect fraud, and reduce traffic accidents.

To ensure that AI is developed in a responsible and ethical manner, many organisations, including the IEEE, the EU, and the OECD, have published guidelines.

The concerns raised by Hinton about AI are significant and highlight the need for responsible AI development. While AI has the potential to bring many benefits to society, it is crucial that it is developed in a way that minimises its risks and maximises its benefits.

(Image Credit: Eviatar Bach under CC BY-SA 3.0 license. Image cropped from original.)


Jack Dorsey tells Andrew Yang that AI is ‘coming for programming jobs’
AI News, Tue, 26 May 2020
https://www.artificialintelligence-news.com/2020/05/26/jack-dorsey-andrew-yang-ai-programming-jobs/

Twitter CEO Jack Dorsey recently told former 2020 US presidential candidate Andrew Yang that AI “is coming for programming jobs”.

There is still fierce debate about the impact that artificial intelligence will have on jobs. Some believe that AI will replace many jobs and lead to the requirement of a Universal Basic Income (UBI), while others claim it will primarily offer assistance to help workers be more productive.

Dorsey is a respected technologist with a deep understanding of emerging technologies. Aside from creating Twitter, he also founded Square, which is currently pushing the mass adoption of blockchain-based digital currencies such as Bitcoin.

Yang was seen as the presidential candidate for technologists before suspending his campaign in February; The New York Times called him “The Internet’s Favorite Candidate” and his campaign was noted for its “tech-friendly” nature. The entrepreneur, lawyer, and philanthropist founded Venture for America, a non-profit aiming to create jobs in cities most affected by the Great Recession. In March, Yang announced the creation of Humanity Forward, a non-profit dedicated to promoting the ideas from his presidential campaign.

Jobs are once again very much under threat, with the coronavirus wiping out all job gains since the Great Recession in just four weeks. If emerging technologies such as AI also pose a risk to jobs, that could compound the problem further.

In an episode of the Yang Speaks podcast, Dorsey warned that AI will pose a particular threat to entry-level programming jobs, and that even seasoned programmers will see their worth devalued.

“A lot of the goals of machine learning and deep learning is to write the software itself over time so a lot of entry-level programming jobs will just not be as relevant anymore,” Dorsey told Yang.

Yang is a proponent of UBI. Dorsey said such free cash payments could provide a “floor” should people lose their jobs to automation: not enough for luxuries and holidays, but enough to ensure people can keep a roof over their heads and food on the table.

UBI would provide workers with the “peace of mind” that they can “feed their children while they are learning how to transition into this new world,” Dorsey explained.

Critics of UBI argue that such a permanent scheme would be expensive.

The UK is currently finding this out to some extent with its coronavirus furlough scheme, under which the state pays 80 percent of a worker’s salary to prevent job losses during the crisis. However, the scheme is costing approximately £14 billion per month and is expected to be wound down in the coming months as unsustainable.

However, some kind of UBI system appears increasingly necessary.

In November, the Brookings Institution published a report (PDF) highlighting the risk AI poses to jobs.

“Workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree. Holders of bachelor’s degrees will be the most exposed by education level, more than five times as exposed to AI than workers with just a high school degree,” the paper says.

In its analysis, the Brookings Institution ranked professions by their exposure to AI. Computer programmers ranked third, just behind market research analysts and sales managers, backing Dorsey’s prediction.

(Image Credit: Jack Dorsey by Thierry Ehrmann under CC BY 2.0 license)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

AI Expo Global: Fairness and safety in artificial intelligence
AI News, Wed, 01 May 2019
https://www.artificialintelligence-news.com/2019/05/01/ai-expo-fairness-safety-artificial-intelligence/

AI News sat down with Faculty’s head of research Ilya Feige to discuss safe and fair practices in artificial intelligence development.

Feige had just finished giving a talk entitled ‘Fairness in AI: Latest developments in AI safety’ at this year’s AI Expo Global. We managed to grab him to get more of his thoughts on the issue.

Rightfully, people are becoming increasingly concerned about unfair and unsafe AI. Human biases are seeping into algorithms, posing a very real danger that prejudice and oppression could be automated by accident.

AI News reported last week on research from New York University that found inequality in STEM-based careers is causing algorithms to work better or worse for some parts of society over others.

Similar findings, by Joy Buolamwini and her team from the Algorithmic Justice League, highlighted a disparity in the effectiveness of the world’s leading facial recognition systems between genders and skin tones.

In an ideal world, all parts of society would be equally represented tomorrow. In reality, that will take much longer to rectify, while AI technologies are already seeing increasing use across society today.

AI News asked Feige for his perspective and how the impact of that problem can be reduced much sooner.

“I think the most important thing for organisations to do is to spend more time thinking about bias and on ensuring that every model they build is unbiased because a demographically disparate team can build non-disparate tech.”

Some companies are seeking to build AIs which can scan for bias in other algorithms. We asked Feige for his view on whether he believes this is an ideal solution.

“Definitely, I showed one in my talk. We have tests for: You give me a black box algorithm, I have no idea what your algorithm does – but I can give an input, calculate the output, and I can just tell you how biased it is according to various definitions of bias.”

“We can go even further and say: Let’s modify your algorithm and give it back so it’s unbiased according to one of those definitions.”
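The kind of black-box audit Feige describes can be sketched in a few lines: given only a model's predict function, compare positive-outcome rates across groups (demographic parity is one of the several definitions of bias he alludes to). The model, field names, and threshold below are hypothetical illustrations, not Faculty's actual tooling:

```python
import random

def demographic_parity_gap(predict, inputs, groups):
    """Audit a black-box model: the difference in positive-prediction
    rates between groups. A gap of 0.0 means parity under this definition."""
    rates = {}
    for g in set(groups):
        preds = [predict(x) for x, gx in zip(inputs, groups) if gx == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy black-box: "hires" on a feature that correlates with group membership.
def black_box(candidate):
    return 1 if candidate["height_cm"] > 175 else 0

random.seed(0)
groups = ["a", "b"] * 500
inputs = [{"height_cm": random.gauss(178 if g == "a" else 165, 7)}
          for g in groups]

gap = demographic_parity_gap(black_box, inputs, groups)
print(f"demographic parity gap: {gap:.2f}")  # a large gap flags bias
```

The mitigation step Feige mentions ("modify your algorithm and give it back") could then, for example, adjust per-group decision thresholds until the gap falls below a tolerance.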

In the Western world, we consider ourselves fairly liberal and protective of individual freedoms. China, potentially the world’s leader in AI, has a questionable human rights record and is known for invasive surveillance and mass data collection. Meanwhile, Russia has a reputation for military aggression which some are concerned will drive its AI developments. Much of the Middle East, while not considered a leader in AI, is behind most of the world in areas such as women’s and gay rights.

We asked Feige for his thoughts on whether these regional attitudes could find their way into AI developments.

“It’s an interesting question. It’s not that some regions will take the issue more or less seriously, they just have different … we’ll say preferences. I suspect China takes surveillance and facial recognition seriously – more seriously than the UK – but they do so in order to leverage it for mass surveillance, for population control.”

“The UK is trying to walk a fine line in efficiently using that very useful technology but not undermine personal privacy and freedom of individuals.”

During his talk, Feige made the point that he’s less concerned about AI biases due to the fact that – unlike humans – algorithms can be controlled.

“This is a real source of optimism for me, just because human decision-making is incredibly biased and everyone knows that.”

Feige asked the audience to raise a hand if they were concerned about AI bias which prompted around half to do so. The same question was asked regarding human bias and most of the room had their hand up.

“You can be precise with machine learning algorithms. You can say: ‘This is the objective I’m trying to achieve, I’m trying to maximise the probability of a candidate being successful at their job according to historical people in their role’. Or, you can be precise about the data the model is trained on and say: ‘I’m going to ignore data from before this time period because things were ‘different’ back then’”.

“Humans have fixed past experiences they can’t control. I can’t change the fact my mum did most of the cooking when I was growing up and I don’t know how it affects my decision-making.”

“I also can’t force myself to hire based on success in their jobs, which I try to do. It’s hard to know if really I just had a good conversation about the football with the candidate.”
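The data-curation lever Feige describes, ignoring records from before a chosen period, amounts to a simple cutoff filter at training time. The records and field names here are hypothetical:

```python
from datetime import date

# Hypothetical historical hiring decisions.
records = [
    {"hired": True,  "decided_on": date(2005, 3, 1)},
    {"hired": False, "decided_on": date(2012, 6, 9)},
    {"hired": True,  "decided_on": date(2018, 1, 15)},
]

# Exclude decisions made before a cutoff, because things "were
# different back then" and should not inform the model.
CUTOFF = date(2010, 1, 1)
training_set = [r for r in records if r["decided_on"] >= CUTOFF]

print(len(training_set))  # 2 of the 3 records survive the cutoff
```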

Faculty, of which Feige has the role of head of research, is a European company based in London. With the EU Commission recently publishing its guidelines on AI development, we took the opportunity to get his views on them.

“At a high-level, I think they’re great. They align quite a bit with how we think about these things. My biggest wish, whenever a body like that puts together some principles, is that there’s a big gap between that level of guidelines and what is useful for practitioners. Making those more precise is really important and those weren’t precise enough by my standards.”

“But not to just advocate putting the responsibility on policymakers. There’s also an onus on practitioners to try and articulate what bias looks like statistically and how that may apply to different problems, and then say: ‘Ok policy body, which of these is most relevant and can you now make those statements in this language’ and basically bridge the gap.”

Google recently created, then axed, a dedicated ‘ethics board’ for its AI developments. Such boards seem a good idea, but representing society fairly can be a minefield: Google faced criticism for including a conservative figure with strong anti-LGBTQ and anti-immigrant views on the board.

Feige provided his take on whether companies should have an independent AI oversight board to ensure their developments are safe and ethical.

“To some degree, definitely. I suspect there are some cases you want that oversight board to be very external and like a regulator with a lot of overhead and a lot of teeth.”

“At Faculty, each one of our product teams has a shadow team – which has practically the same skill set – who monitor and oversee the work done by the project team to ensure it follows our internal set of values and guidelines.”

“I think the fundamental question here is how to do this in a productive way and ensure AI safety but that it doesn’t grind innovation to a halt. You can imagine where the UK has a really strong oversight stance and then some other country with much less regulatory oversight has companies which become large multinationals and operate in the UK anyway.”

Getting the balance right around regulation is difficult. Our sister publication IoT News interviewed a digital lawyer who raised the concern that Europe’s strict GDPR regulations will cause AI companies on the continent to fall behind their counterparts in Asia and America, which have access to far more data.

Feige believes there is the danger of this happening, but European countries like the UK – whether it ultimately remains part of the EU and subject to regulations like GDPR or not – can use it as an opportunity to lead in AI safety.

Feige gave three reasons why the UK could achieve this:

  1. The UK has significant AI talent and renowned universities.
  2. It has a fairly unobjectionable record and respected government (Feige clarifies in comparison to how some countries view the US and China).
  3. The UK has a fairly robust existing regulatory infrastructure – especially in areas such as financial services.

Among the biggest concerns about AI continues to be around its impact on the workforce, particularly whether it will replace low-skilled workers. We wanted to know whether using legislation to protect human workers is a good idea.

“You could ask the question a hundred years ago: ‘Should automation come into agriculture because 90 percent of the population works in it?’ and now it’s almost all automated. I suspect individuals may be hurt by automation but their children will be better off by it.”

“I think any heavy-handed regulation will have unintended consequences and should be thought about well.”

Our discussion with Feige was insightful and provided optimism that AI can be developed safely and fairly, as long as there’s a will to do so.

You can watch our full interview with Feige from AI Expo Global 2019 below:

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
