UK paper highlights AI risks ahead of global Safety Summit (26 October 2023)

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.

“An ongoing effort to address AI risks is needed and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could help hackers mimic official language and overcome challenges they have previously faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them are true coordinated approaches.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1 – 2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications (18 October 2023)

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits


BSI: Closing ‘AI confidence gap’ key to unlocking benefits (17 October 2023)

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than in humans when it comes to detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment. Some 37 percent of respondents expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

“Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

“Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Sixty percent of respondents believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

“Closing the AI confidence gap is the first necessary step; it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda


IFOW: AI can have a positive impact on jobs (20 September 2023)

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by MITRE-Harris found that a majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent of adults said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 


UK’s AI ecosystem to hit £2.4T by 2027, third in global race (7 September 2023)

Projections released by the newly launched Global AI Ecosystem open-source knowledge platform indicate that the UK’s AI sector is set to skyrocket from £1.36 trillion ($1.7 trillion) to £2.4 trillion ($3 trillion) by 2027. The findings suggest the UK is set to remain Europe’s AI leader and secure third place in the global AI race behind the US and China.

The Global AI Ecosystem platform is developed with support from AI Industry Analytics (AiiA) and Deep Knowledge Group. Designed as a universally accessible space for community interaction, collaboration, content sharing, and knowledge exchange, it has become a vital hub for AI enthusiasts and professionals.

AiiA, in its Global AI Economy Size Assessment report, conducted groundbreaking research showcasing the rapid expansion of the UK’s AI industry.

With over 8,900 companies operating in the sector, the UK AI economy’s valuation of £1.36 trillion underscores its substantial contribution to the national GDP. Approximately 4,100 investment funds are dedicated to AI, with 600 of them based in the UK.

A robust workforce of 500,000 UK-based AI specialists is driving innovation, solidifying the nation’s position in the global AI landscape. This skilled workforce not only bolsters GDP growth but also acts as a safety net against unemployment.

The UK government’s active prioritisation of its national AI agenda is a significant factor in this remarkable growth. Last month, UK Deputy PM Oliver Dowden called AI the most ‘extensive’ industrial revolution yet.

With 280 ongoing projects harnessing AI technology, the UK’s commitment to AI is clear. AI is a major pillar of the country’s national industrial strategy, making the UK one of the most proactive nations in shaping its AI future.

Dmitry Kaminskiy, Founder of AI Industry Analytics (AiiA) and General Partner of Deep Knowledge Group, said:

“Despite an economic downturn and other challenges, the UK stands as an undoubtable, dynamic, and proactive leader in the global AI arena, having surpassed £1.3 trillion in 2023 and projected to reach £2.4 trillion by 2027.

“There is no question that AI is poised to be the major driver for economic growth, fuelling the further development of the entire UK DeepTech industry, and creating a cumulative, systemic, positive impact on the full scope of the nation’s integral infrastructure.”

Key cities like London, Cambridge, Manchester, and Edinburgh have emerged as leading AI hubs, fostering collaboration and providing access to essential resources. With nearly 5,000 AI companies in London alone, it competes with entire countries on the global AI stage and solidifies its European leadership status.

AiiA’s estimation of the UK AI economy size used AI algorithms to map the global AI industry, profiling 50,000 companies, 20,000 investors, 2,000 AI leaders, and 2,500 R&D hubs. Building upon previous reports, it conducted the most comprehensive assessment of the Global AI Economy to date, projecting a global AI economy exceeding £27.2 trillion ($34 trillion) by 2027.

The UK’s position as a hub for science, R&D, DeepTech, and AI governance places it in good stead for leveraging AI as a core engine of technological progress and driving economic growth.

(Image Credit: Global AI Ecosystem)

See also: UK government outlines AI Safety Summit plans


ChatGPT’s political bias highlighted in study (18 August 2023)

A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research even suggests that this bias is not merely a mechanical result but a deliberate tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training


AMD: Almost half of enterprises risk ‘falling behind’ on AI (16 August 2023)

AMD has unveiled insights from a comprehensive survey of IT leaders, indicating that nearly 50 percent of enterprises are facing the risk of lagging behind in the adoption of AI.

The survey focused on 2,500 IT leaders from the US, UK, Germany, France, and Japan. The findings spotlight both the enthusiasm surrounding AI’s potential benefits and the significant challenges that organisations encounter in implementing AI technologies.

The survey revealed a compelling enthusiasm for AI’s potential advantages, with three out of four IT leaders expressing optimism about its capabilities. These benefits spanned from amplified employee efficiency to automated cybersecurity solutions.

A striking 67 percent of the respondents revealed that they are intensifying investments in AI technologies to harness these advantages. Nonetheless, the survey’s results also exposed a sense of hesitancy arising from implementation uncertainties, the readiness of existing hardware, and technology stacks.

Matthew Unangst, Senior Director of Commercial Client and Workstation at AMD, said: 

“There is a benefit to being an early AI adopter. IT leaders are seeing the benefits of AI-enabled solutions, but their enterprises need to outline a more focused plan for implementation or risk falling behind.

“Open software ecosystems, with high-performance hardware, are essential, and AMD believes in a multi-faceted approach of leveraging AI IP across our full portfolio of products to the benefit of our partners and customers.”

Of the organisations prioritising AI deployment, 90 percent reported experiencing heightened workplace efficiency. This underscores the idea that early AI adoption could yield a competitive edge in productivity and performance.

The survey also indicated that organisations which defer AI adoption risk jeopardising their market standing.

To address these challenges and offer solutions, AMD is concentrating on the development of AI-capable solutions across its product portfolio. This includes the cloud, edge computing, and endpoints.

While IT leaders harbour optimism about the possibilities that AI presents, there exists a critical need for well-defined implementation strategies.

AMD’s survey indicates that those enterprises that act swiftly and purposefully on AI adoption could potentially reap substantial benefits, while those that lag behind may encounter challenges.

Find a full copy of the report here (PDF)

(Photo by paolo candelo on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Explosive growth in AI and ML fuels expertise demand https://www.artificialintelligence-news.com/2023/07/28/explosive-growth-ai-ml-fuels-expertise-demand/ Fri, 28 Jul 2023 16:00:25 +0000

The post Explosive growth in AI and ML fuels expertise demand appeared first on AI News.

AI and machine learning are reshaping the job landscape, with higher incentives being offered to attract and retain expertise amid talent shortages.

According to a recent report by Harnham, a leading data and analytics recruitment agency in the UK, the demand for ML engineering roles has been steadily rising over the past few years.

Recently, there’s been a shift towards MLOps professionals who possess the skills to bridge the gap between data scientists and data engineers, thereby optimising the deployment of ML models.
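A minimal sketch of the hand-off that MLOps professionals smooth over (the model, file name, and parameters here are hypothetical, purely for illustration): a data scientist exports a trained model's parameters as a versioned artifact, and an engineer loads that artifact to serve predictions.

```python
import json

# Hypothetical hand-off between data science and engineering. MLOps roles
# standardise and automate this boundary: artifact versioning, validation,
# and rollout of models into production.

def export_model(weights, bias, path):
    """Data-science side: persist model parameters as a versioned artifact."""
    with open(path, "w") as f:
        json.dump({"version": 1, "weights": weights, "bias": bias}, f)

def load_and_predict(path, features):
    """Engineering side: load the artifact and run inference on it."""
    with open(path) as f:
        model = json.load(f)
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

export_model([0.5, -0.2], 1.0, "model_v1.json")
print(load_and_predict("model_v1.json", [2.0, 1.0]))  # 0.5*2 - 0.2*1 + 1 = 1.8
```

In practice the artifact would carry far more metadata (training data version, metrics, schema), but the shape of the hand-off is the same.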

Harnham’s report provides comprehensive insights into the salaries and day rates of various data science roles across the UK.

Technical leads/managers in computer vision, data science, deep learning & AI, ML engineering, MLOps, and natural language processing are earning annual base salaries ranging from £44,000 to £120,000, depending on experience and location.

In addition to competitive compensation, data science professionals are seeking specific benefits to enhance their job satisfaction.

The top five desirable benefits include remote working options, bonuses, health insurance, flexible working hours, and shares. These perks play a crucial role in attracting and retaining top talent in the data science sector.

The report also sheds light on some critical trends and statistics in the industry.

A quarter (25 percent) of professionals cited a non-competitive salary or rate as the top reason for leaving a role, followed closely by a lack of career progression (24 percent) and a “better opportunity” coming along (22 percent).

The proportion of female professionals in the field has risen from last year’s 22 percent, indicating a positive shift towards greater gender diversity in data science.

While the field of data science continues to evolve rapidly, professionals are keen to explore new opportunities.

One finding from the report reveals that data science professionals are the most likely to leave their current roles if the right opportunity arises. The ongoing talent shortage means that relevant expertise is in high demand and many opportunities are available.

Advancements in AI and ML are transforming the landscape and creating exciting new job opportunities. As the demand for data professionals continues to surge, companies must adapt to remain competitive in attracting and retaining top talent in this thriving field.

For more information and in-depth data on data science salaries and trends in the UK, refer to the Harnham Data & AI Salary Guide for 2023.

(Photo by Ben Rosett on Unsplash)

See also: Universities want to ensure staff and students are ‘AI-literate’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google report highlights AI’s impact on the UK economy https://www.artificialintelligence-news.com/2023/07/05/google-report-highlights-ai-impact-uk-economy/ Wed, 05 Jul 2023 11:54:39 +0000

The post Google report highlights AI’s impact on the UK economy appeared first on AI News.

A new report by Google emphasises that AI represents the most profound technological shift of our lifetime and has the potential to significantly enhance the UK’s economy.

The report suggests that, by 2030, AI could boost the UK economy by £400 billion, equivalent to an annual growth rate of 2.6 percent.
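As a rough sanity check on figures like these, one can back out the annual growth rate implied by a cumulative boost. The baseline GDP below is an illustrative assumption for the arithmetic, not a figure from the report:

```python
# Back-of-envelope: compound annual growth rate implied by a cumulative boost.
# ASSUMPTION: the ~£2.0tn baseline UK GDP is illustrative, not from the report.

def implied_annual_growth(baseline_bn, boost_bn, years):
    """Rate g such that baseline * (1 + g) ** years == baseline + boost."""
    return ((baseline_bn + boost_bn) / baseline_bn) ** (1 / years) - 1

g = implied_annual_growth(baseline_bn=2000, boost_bn=400, years=7)
print(f"{g:.1%}")  # roughly 2.6% per year on these assumptions
```

With those assumed inputs the arithmetic lands close to the report’s 2.6 percent; the exact figure depends on the baseline and period Google used.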

Steven Mooney, CEO of FundMyPitch, commented:

“If AI is projected to bring billions to the UK economy, then why on earth aren’t our start-ups and SMEs getting the funding they need to take their businesses to the next level?

Time and time again, reports show that UK entrepreneurs struggle to secure access to credible funding or even an independent valuation, in stark contrast to other markets.

A failure to get ahead of the game on AI will have disastrous consequences for the economy, so giving full financial backing to up-and-coming companies that are pioneering developments in this technology should be a top priority.”

Google’s UK and Ireland boss, Debbie Weinstein, describes the transformative power of AI as unprecedented in the tech industry.

While concerns about job displacement due to AI exist, Weinstein reassures that new job opportunities will also arise as a result of AI implementation. She acknowledges the impact that this technology will have on people and emphasises the need to equip individuals with the necessary skills.

Recognising the profound impact AI will have on all aspects of society, Google says that it aims to ensure that individuals are prepared for this fundamental shift.

The report comes at a time when fears about the disruptive nature of AI are widespread. 

Professor Geoffrey Hinton, known as the “godfather of AI,” recently resigned from Google, expressing concerns about the potential misuse of AI tools.

Hinton warns of the possibility of bad actors using AI for nefarious purposes. However, he also acknowledges that someone else would have developed AI if he had not, suggesting that responsible development and regulation are necessary.

The call for caution and regulation in AI development has been echoed by experts worldwide. 

The launch of tools like ChatGPT and Midjourney has highlighted the potential for misuse. Google’s report emphasises the importance of regulation as AI technology advances and affirms that the company is actively collaborating with regulators globally.

Moreover, Google supports the establishment of a “national skills agenda” involving governments, businesses, and educational institutions. This collaborative effort aims to ensure that workers are not left behind as AI technology progresses.

Chris Downie, CEO of Pasabi, said:

“With AI set to bring unprecedented change to the economy, it is refreshing to see companies like Google looking to work proactively with Government to take a nuanced approach to regulation.

A national skills agenda and the UK Research Cloud are admirable initiatives, however, more attention still needs to be given to the risks posed by cyber criminals who are already hijacking the technology for their own harmful purposes.

From online scams to the global epidemic of fake reviews, we need to adopt a proactive approach, harnessing the latest fraud detection technologies to take the fight to online fraudsters.”

Google recognises the necessity of striking a balance, enabling agile regulation to attract inward investment while effectively managing the risks associated with AI.

By fostering collaboration between various stakeholders and prioritising skill development, the UK can capitalise on the competitive advantages offered by AI while safeguarding against negative consequences.

AI presents both challenges and opportunities, and it is crucial to navigate this technological shift thoughtfully and responsibly.

See also: UK will host global AI summit to address potential risks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Google creates new AI division to challenge OpenAI https://www.artificialintelligence-news.com/2023/04/21/google-creates-new-ai-division-to-challenge-openai/ Fri, 21 Apr 2023 12:08:13 +0000

The post Google creates new AI division to challenge OpenAI appeared first on AI News.

Google has consolidated its AI research labs, Google Brain and DeepMind, into a new unit named Google DeepMind.

The move is seen as a strategic way for Google to maintain its edge in the competitive AI industry and compete with OpenAI. By combining the talent and resources of both entities, Google DeepMind aims to accelerate AI advancements while maintaining ethical standards.

The new unit will spearhead groundbreaking AI research and products, working closely with other Google product areas to bring them to users.

Google Research, the former parent division of Google Brain, will remain an independent division focused on “fundamental advances in computer science across areas such as algorithms and theory, privacy and security, quantum computing, health, climate and sustainability, and responsible AI.”

Demis Hassabis, CEO of DeepMind, believes that the consolidation of the two AI research labs will bring together world-class talent in AI with the computing power, infrastructure, and resources to create the next generation of AI breakthroughs and products boldly and responsibly.

Hassabis claims that the research accomplishments of Google Brain and DeepMind have formed the foundation of the current AI industry—ranging from deep reinforcement learning to transformers. The newly consolidated unit will build upon this foundation to create the next generation of groundbreaking AI products and advancements that will shape the world.

Over the years, Google and DeepMind have jointly developed several groundbreaking innovations. The duo’s achievements include AlphaGo – which famously beat professional human Go players – and AlphaFold, an exceptional tool that accurately predicts protein structures.

Other noteworthy achievements include word2vec, WaveNet, sequence-to-sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX. These cutting-edge tools have proven highly effective for expressing, training, and deploying large-scale ML models.
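Of the techniques listed above, distillation is simple enough to sketch: a small “student” model is trained to match the temperature-softened output distribution of a larger “teacher”. A minimal NumPy illustration of the soft targets involved (the logits here are made up for the example):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher temperatures give softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher logits for a 3-class problem.
teacher_logits = [4.0, 1.0, 0.5]

hard = softmax(teacher_logits, temperature=1.0)
soft = softmax(teacher_logits, temperature=4.0)  # soft targets for the student

# The softened targets expose relative similarities between classes that a
# near-one-hot output hides; the student learns from this richer signal.
print(hard.round(3), soft.round(3))
```

In a full distillation setup the student minimises a cross-entropy loss against these softened targets, usually blended with the ordinary hard-label loss.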

Google’s acquisition of DeepMind for $500 million in 2014 paved the way for a fruitful collaboration between the two entities. With the consolidation of Google Brain and DeepMind into Google DeepMind, Google hopes to further advance its AI research and development capabilities.

Google’s chief scientist, Jeff Dean, will take on an elevated role as chief scientist for both Google Research and Google DeepMind. He has been tasked with setting the future direction of AI research at the company, as well as heading up the most critical and strategic technical projects related to AI, including a series of powerful multimodal AI models.

The creation of Google DeepMind underscores Google and parent company Alphabet’s commitment to furthering the pioneering research of both DeepMind and Google Brain. With the race to dominate the AI space intensifying, the new unit is poised to accelerate progress and deliver the breakthrough products that will shape the world.

(Image Credit: Google DeepMind)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
