report Archives - AI News
https://www.artificialintelligence-news.com/tag/report/

UK paper highlights AI risks ahead of global Safety Summit
https://www.artificialintelligence-news.com/2023/10/26/uk-paper-highlights-ai-risks-ahead-global-safety-summit/
Thu, 26 Oct 2023

The UK Government has unveiled a comprehensive paper addressing the capabilities and risks associated with frontier AI.

UK Prime Minister Rishi Sunak has spoken today on the global responsibility to confront the risks highlighted in the report and harness AI’s potential. Sunak emphasised the need for honest dialogue about the dual nature of AI: offering unprecedented opportunities, while also posing significant dangers.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Sunak.

“So, the responsible thing for me to do is to address those fears head-on, giving you the peace of mind that we will keep you safe while making sure you and your children have all the opportunities for a better future that AI can bring.

“Doing the right thing, not the easy thing, means being honest with people about the risks from these technologies.”

The report delves into the rapid advancements of frontier AI, drawing on numerous sources. It highlights the diverse perspectives within scientific, expert, and global communities regarding the risks associated with the swift evolution of AI technology. 

The publication comprises three key sections:

  1. Capabilities and risks from frontier AI: This section presents a discussion paper advocating further research into AI risk. It delineates the current state of frontier AI capabilities, potential future improvements, and associated risks, including societal harms, misuse, and loss of control.
  2. Safety and security risks of generative AI to 2025: Drawing on intelligence assessments, this report outlines the potential global benefits of generative AI while highlighting the increased safety and security risks. It underscores the enhancement of threat actor capabilities and the effectiveness of attacks due to generative AI development.
  3. Future risks of frontier AI: Prepared by the Government Office for Science, this report explores uncertainties in frontier AI development, future system risks, and potential scenarios for AI up to 2030.

The report – based on declassified information from intelligence agencies – focuses on generative AI, the technology underpinning popular chatbots and image generation software. It foresees a future where AI might be exploited by terrorists to plan biological or chemical attacks, raising serious concerns about global security.

Sjuul van der Leeuw, CEO of Deployteq, commented: “It is good to see the government take a serious approach, offering a report ahead of the Safety Summit next week; however, more must be done.

“An ongoing effort to address AI risks is needed, and we hope that the summit brings much-needed clarity, allowing businesses and marketers to enjoy the benefits this emerging piece of technology offers, without the worry of backlash.”

The report highlights that generative AI could be utilised to gather knowledge on physical attacks by non-state violent actors, including creating chemical, biological, and radiological weapons.

Although companies are working to implement safeguards, the report emphasises the varying effectiveness of these measures. Obstacles to obtaining the necessary knowledge, raw materials, and equipment for such attacks are decreasing, with AI potentially accelerating this process.

Additionally, the report warns that AI-driven cyber-attacks are likely to become faster-paced, more effective, and larger in scale by 2025. AI could help hackers mimic official language and overcome challenges they have previously faced in this area.

However, some experts have questioned the UK Government’s approach.

Rashik Parmar MBE, CEO of BCS, The Chartered Institute for IT, said: “Over 1,300 technologists and leaders signed our open letter calling AI a force for good rather than an existential threat to humanity.

“AI won’t grow up like The Terminator. If we take the proper steps, it will be a trusted co-pilot from our earliest school days to our retirement.”

The AI Safety Summit will aim to foster healthy discussion around how to address frontier AI risks, encompassing misuse by non-state actors for cyberattacks or bioweapon design and concerns related to AI systems acting autonomously contrary to human intentions. Discussions at the summit will also extend to broader societal impacts, such as election disruption, bias, crime, and online safety.

Claire Trachet, CEO of Trachet, commented: “The fast-growing nature of AI has made it difficult for governments to balance creating effective regulation which safeguards the interest of businesses and consumers without stifling investment opportunities. Even though there are some forms of risk management and different reports coming out now, none of them constitutes a truly coordinated approach.

“The UK Government’s commitment to AI safety is commendable, but the criticism surrounding the summit serves as a reminder of the importance of a balanced, constructive, and forward-thinking approach to AI regulation.”

If the UK Government’s report is anything to go by, the need for collaboration around proportionate but rigorous measures to manage the risks posed by AI is more imperative than ever.

The global AI Safety Summit is set to take place at the historic Bletchley Park on 1–2 November 2023.

(Image Credit: GOV.UK)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/
Wed, 18 Oct 2023

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

BSI: Closing ‘AI confidence gap’ key to unlocking benefits
https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/
Tue, 17 Oct 2023

The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than in humans when it comes to detecting food contamination issues.

The study also highlighted a pressing need for education, as 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment, while 37 percent expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

“Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

“Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Sixty percent of respondents believed consumers needed protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

“Closing the AI confidence gap is the first necessary step; it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Amperity recognised as a leader in Snowflake’s modern marketing data stack report
https://www.artificialintelligence-news.com/2023/10/09/amperity-recognised-as-leader-snowflake-modern-marketing-data-stack-report/
Mon, 09 Oct 2023

Amperity, the leading AI-powered customer data platform (CDP) for enterprise brands, today announced that it has been recognised as a “Customer Data Activation Leader” in Modern Marketing Data Stack 2023: How Data-Forward Marketers Are Redefining Strategies to Unify, Analyze, and Activate Data to Boost Revenue, a report executed and launched by Snowflake, the Data Cloud company.

Snowflake’s data-backed report identifies the best-of-breed solutions used by Snowflake customers to show how marketers can leverage the Snowflake Data Cloud with accompanying partner solutions to best identify, serve, and convert valuable prospects into loyal customers. By analysing usage patterns from a pool of approximately 8,100 customers as of April 2023, Snowflake identified ten technology categories that organisations consider when building their marketing data stacks. The extensive research reflects how customers are adopting solutions from a rapidly changing ecosystem and highlights the convergence of adtech and martech, the increased importance of privacy-enhancing technologies, and the heightened focus marketers have on measurement to maximise campaign ROI. The ten categories include:

  • Analytics & Data Capture
  • Enrichment
  • Identity & Activation
    • Identity & Onboarders
    • Customer Data Activation
    • Advertising Platforms
  • Measurement & Attribution
  • Integration & Modeling
  • Business Intelligence
  • AI & Machine Learning
  • Privacy-Enhancing Technologies

Focusing on those companies that are active members of the Snowflake Partner Network (or ones with a comparable agreement in place with Snowflake), as well as Snowflake Marketplace providers, the report explores each of these categories that comprise the Modern Marketing Data Stack, highlighting technology partners and their solutions as “leaders” or “ones to watch” within each category. The report also details how current Snowflake customers leverage a number of these partner technologies to enable data-driven marketing strategies and informed business decisions. Snowflake’s report provides a concrete overview of the partner solution providers and data providers marketers choose to create their data stacks.

“Marketing professionals continue to expand their investment in their customer data to improve their organization’s digital marketing activities. Snowflake’s goal is to empower them in their journey to data-driven marketing,” said Denise Persson, Chief Marketing Officer at Snowflake. “Amperity emerged as a leader in customer data activation because of its multi-patented approach to identifying, unifying and activating first-party online and offline data through a 360-degree view of the customer.”

Amperity was identified in Snowflake’s report as a leader in the Customer Data Activation category for data activation solutions, such as customer data platforms, customer engagement platforms, reverse ETL providers, and others, which are designed to make the activation process faster and easier. Activating data means doing something with it to derive valuable outcomes. In the case of the marketing data stack, that means taking identified and enriched audience data, creating relevant segments and audiences, and ultimately bringing it to owned-media platforms in particular (website, email, in-app, etc.) that help companies reach those individuals with the right messages.

“We’re honoured that Snowflake has recognised Amperity as a customer data activation leader in this year’s Modern Marketing Data Stack report,” said Derek Slager, co-founder & CTO at Amperity. “Together, we enable our joint customers to comprehensively unify all of their customer data using AI. We then enable comprehensive multi-channel activation across the marketing and advertising technology ecosystems. Amperity’s modern, future-proof connectors bring first-party data to the post-cookie ecosystem through Amperity for Paid Media.”

Click here to learn more about how Amperity and Snowflake partner to bring the modern martech stack to life.

(Editor’s note: This post is sponsored by Amperity)

IFOW: AI can have a positive impact on jobs
https://www.artificialintelligence-news.com/2023/09/20/ifow-ai-can-have-positive-impact-jobs/
Wed, 20 Sep 2023

In a world where sensational headlines about AI and autonomous robots dominate the media landscape, a new report sheds light on a different narrative.

The research, funded by the Nuffield Foundation, explores the nuanced impacts of AI adoption on jobs and work quality. Contrary to the doomsday predictions, the report suggests that AI could have a positive influence on employment and job quality.

The study, conducted by the Institute for the Future of Work (IFOW), indicates that AI adoption is already well underway in UK firms. However, rather than leading to widespread job loss, it suggests that AI has the potential to create more jobs and improve the quality of existing ones.

Anna Thomas, Co-Founder and Director of the IFOW, expressed optimism about the study’s results: “This report not only highlights that the adoption of AI is well underway across UK firms but that it is possible for this tech transformation to lead to both net job creation and more ‘good work’ – great news as we look to solve the UK’s productivity puzzle.”

“With the [UK-hosted global] AI Summit fast approaching, Government must act urgently to regulate, legislate and invest so that UK firms and workers can benefit from this fast-moving technology.”

One key takeaway from the study is the importance of regional investment in education and infrastructure to make all areas of the UK ‘innovation ready.’ The study also emphasises the need for firms to engage workers when investing in automation and AI.

Taking these suggested actions could help ensure that the benefits of AI are distributed more evenly across regions and demographics, reducing existing inequalities.

Professor Sir Christopher Pissarides, Nobel Laureate and Co-Founder of IFOW, stressed the significance of placing “good jobs” at the heart of an economic and industrial strategy in the age of automation. He believes that the study provides valuable insights into how this can be achieved.

The IFOW’s study suggests that with the right approach, AI adoption can lead to a positive transformation of the labour market. By investing in education, infrastructure, and worker engagement, the UK can harness the potential of AI to create more jobs and improve job quality across the country.

Matt Robinson, Head of Nations and Regions, techUK, commented: “Realising the benefits of technologies like AI for all will mean getting the right foundations in place across areas like digital infrastructure and skills provision in every part of the UK to enable and create high-quality digital jobs.

“Access to good digital infrastructure, as well as skills and talent, is a priority for techUK members, and the Institute’s work provides welcome insights into their importance for creating good work throughout the country.”

While the IFOW’s study paints a more positive outlook on the adoption of AI than most headlines, it will be an uphill battle to convince the wider public.

A poll of US adults released this week by Mitre-Harris found that the majority (54%) believe the risks of AI outweigh its benefits, and just 39 percent of adults said they believed today’s AI technologies are safe and secure — down nine points from the previous survey.

As the AI industry continues to evolve, urgent action from governments, employers, and employees is essential to realise the opportunities, manage the risks, and convince a wary public of the technology’s benefits.

A copy of the full working paper can be found here (PDF).

(Photo by Damian Zaleski on Unsplash)

See also: CMA sets out principles for responsible AI development 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK’s AI ecosystem to hit £2.4T by 2027, third in global race
https://www.artificialintelligence-news.com/2023/09/07/uk-ai-ecosystem-hit-2-4t-by-2027-third-global-race/
Thu, 07 Sep 2023

Projections released by the newly launched Global AI Ecosystem open-source knowledge platform indicate that the UK’s AI sector is set to skyrocket from £1.36 trillion ($1.7 trillion) to £2.4 trillion ($3 trillion) by 2027. The findings suggest the UK is set to remain Europe’s AI leader and secure third place in the global AI race behind the US and China.

The Global AI Ecosystem platform is developed with support from AI Industry Analytics (AiiA) and Deep Knowledge Group. Designed as a universally accessible space for community interaction, collaboration, content sharing, and knowledge exchange, it has become a vital hub for AI enthusiasts and professionals.

AiiA, in its Global AI Economy Size Assessment report, conducted groundbreaking research showcasing the rapid expansion of the UK’s AI industry.

With over 8,900 companies operating in the sector, the UK AI economy’s valuation of £1.36 trillion underscores its substantial contribution to the national GDP. Approximately 4,100 investment funds are dedicated to AI, with 600 of them based in the UK.

A robust workforce of 500,000 UK-based AI specialists is driving innovation, solidifying the nation’s position in the global AI landscape. This skilled workforce not only bolsters GDP growth but also acts as a safety net against unemployment.

The UK government’s active prioritisation of its national AI agenda is a significant factor in this remarkable growth. Last month, UK Deputy PM Oliver Dowden called AI the most ‘extensive’ industrial revolution yet.

With 280 ongoing projects harnessing AI technology, the UK’s commitment to AI is clear. AI is a major pillar of the country’s national industrial strategy, making the UK one of the most proactive nations in shaping its AI future.

Dmitry Kaminskiy, Founder of AI Industry Analytics (AiiA) and General Partner of Deep Knowledge Group, said:

“Despite an economic downturn and other challenges, the UK stands as an undoubtable, dynamic, and proactive leader in the global AI arena, having surpassed £1.3 trillion in 2023 and projected to reach £2.4 trillion by 2027.

There is no question that AI is poised to be the major driver for economic growth, fuelling the further development of the entire UK DeepTech industry, and creating a cumulative, systemic, positive impact on the full scope of the nation’s integral infrastructure.”

Key cities like London, Cambridge, Manchester, and Edinburgh have emerged as leading AI hubs, fostering collaboration and providing access to essential resources. With nearly 5,000 AI companies in London alone, it competes with entire countries on the global AI stage and solidifies its European leadership status.

AiiA’s estimation of the UK AI economy size used AI algorithms to map the global AI industry, profiling 50,000 companies, 20,000 investors, 2,000 AI leaders, and 2,500 R&D hubs. Building upon previous reports, it conducted the most comprehensive assessment of the Global AI Economy to date, projecting a global AI economy exceeding £27.2 trillion ($34 trillion) by 2027.

The UK’s position as a hub for science, R&D, DeepTech, and AI governance places it in good stead for leveraging AI as a core engine of technological progress and driving economic growth.

(Image Credit: Global AI Ecosystem)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GitLab: Developers view AI as ‘essential’ despite concerns
https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/
Wed, 06 Sep 2023

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


]]>
ChatGPT’s political bias highlighted in study
https://www.artificialintelligence-news.com/2023/08/18/chatgpt-political-bias-highlighted-study/
Fri, 18 Aug 2023 09:47:26 +0000

The post ChatGPT’s political bias highlighted in study appeared first on AI News.

]]>
A study conducted by computer and information science researchers from the UK and Brazil has raised concerns about the objectivity of ChatGPT.

The researchers claim to have discovered substantial political bias in ChatGPT’s responses, leaning towards the left side of the political spectrum.

Published in the journal Public Choice this week, the study – conducted by Fabio Motoki, Valdemar Pinho, and Victor Rodrigues – argues that the presence of political bias in AI-generated content could perpetuate existing biases found in traditional media.

The research highlights the potential impact of such bias on various stakeholders, including policymakers, media outlets, political groups, and educational institutions.

Utilising an empirical approach, the researchers employed a series of questionnaires to gauge ChatGPT’s political orientation. The chatbot was asked to answer political compass questions, capturing its stance on various political issues.

Furthermore, the study examined scenarios where ChatGPT impersonated both an average Democrat and a Republican, revealing the algorithm’s inherent bias towards Democratic-leaning responses.
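The persona comparison at the heart of this method can be illustrated with a short, self-contained sketch. The response scale, question ids, and mock answers below are hypothetical stand-ins for illustration only, not the researchers’ actual instrument or data.

```python
from statistics import mean

# Hypothetical 4-point agreement scale, mapping a model's textual
# answer to a numeric score (political-compass-style surveys use
# similar Likert scales).
SCALE = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

def compass_score(answers: dict) -> float:
    """Average the numeric scores of one run's answers."""
    return mean(SCALE[a.lower()] for a in answers.values())

# Mock answer sets: one collected under a default prompt, one under
# an impersonation prompt ("answer as an average Democrat").
default_run  = {"q1": "agree", "q2": "strongly agree", "q3": "disagree"}
democrat_run = {"q1": "agree", "q2": "strongly agree", "q3": "agree"}

# The study's comparison amounts to measuring how close the default
# responses sit to an impersonated persona's responses.
gap_to_democrat = abs(compass_score(default_run) - compass_score(democrat_run))
print(round(gap_to_democrat, 2))  # → 0.67
```

A smaller gap to one persona than to the other is the kind of signal interpreted as a political lean; in practice such runs would be repeated many times, since model outputs are stochastic.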

The study’s findings indicate that ChatGPT’s bias extends beyond the US and is also noticeable in its responses regarding Brazilian and British political contexts. Notably, the research suggests that this bias is not merely a mechanical artefact of the training data but a systematic tendency in the algorithm’s output.

Determining the exact source of ChatGPT’s political bias remains a challenge. The researchers investigated both the training data and the algorithm itself, concluding that both factors likely contribute to the bias. They highlighted the need for future research to delve into disentangling these components for a clearer understanding of the bias’s origins.

OpenAI, the organisation behind ChatGPT, has not yet responded to the study’s findings. This study joins a growing list of concerns surrounding AI technology, including issues related to privacy, education, and identity verification in various sectors.

As the influence of AI-driven tools like ChatGPT continues to expand, experts and stakeholders are grappling with the implications of biased AI-generated content.

This latest study serves as a reminder that vigilance and critical evaluation are necessary to ensure that AI technologies are developed and deployed in a fair and balanced manner, devoid of undue political influence.

(Photo by Priscilla Du Preez on Unsplash)

See also: Study highlights impact of demographics on AI training

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


]]>
Study highlights impact of demographics on AI training
https://www.artificialintelligence-news.com/2023/08/17/study-highlights-impact-demographics-ai-training/
Thu, 17 Aug 2023 12:39:29 +0000

The post Study highlights impact of demographics on AI training appeared first on AI News.

]]>
A study conducted in collaboration between Prolific, Potato, and the University of Michigan has shed light on the significant influence of annotator demographics on the development and training of AI models.

The study delved into the impact of age, race, and education on AI model training data—highlighting the potential dangers of biases becoming ingrained within AI systems.

“Systems like ChatGPT are increasingly used by people for everyday tasks,” explains assistant professor David Jurgens from the University of Michigan School of Information. 

“But whose values are we instilling in the trained model? If we keep taking a representative sample without accounting for differences, we continue marginalising certain groups of people.”

Machine learning and AI systems increasingly rely on human annotation to train their models effectively. This process, often referred to as ‘Human-in-the-loop’ or Reinforcement Learning from Human Feedback (RLHF), involves individuals reviewing and categorising language model outputs to refine their performance.

One of the most striking findings of the study is the influence of demographics on labelling offensiveness.

The research found that different racial groups had varying perceptions of offensiveness in online comments. For instance, Black participants tended to rate comments as more offensive compared to other racial groups. Age also played a role, as participants aged 60 or over were more likely to label comments as offensive than younger participants.

The study involved analysing 45,000 annotations from 1,484 annotators and covered a wide array of tasks, including offensiveness detection, question answering, and politeness. It revealed that demographic factors continue to impact even objective tasks like question answering. Notably, accuracy in answering questions was affected by factors like race and age, reflecting disparities in education and opportunities.
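The demographic breakdown described above can be sketched in a few lines. The age groups and ratings below are toy values for illustration, not the study’s data.

```python
from collections import defaultdict
from statistics import mean

# Toy annotations: each row is (annotator age group, offensiveness
# rating on a 0-4 scale). A real analysis of this kind aggregates
# tens of thousands of such rows the same way.
annotations = [
    ("18-29", 1), ("18-29", 2), ("18-29", 1),
    ("60+",   3), ("60+",   4), ("60+",   2),
]

def mean_rating_by_group(rows):
    """Average offensiveness rating per demographic group."""
    by_group = defaultdict(list)
    for group, rating in rows:
        by_group[group].append(rating)
    return {group: mean(vals) for group, vals in by_group.items()}

averages = mean_rating_by_group(annotations)
# A persistent gap between group averages is the signal the study
# reports: the same comments are rated as more offensive by some
# demographics than by others.
print(averages)
```

Grouping by race, education, or gender works the same way; the reported findings correspond to persistent gaps between such group averages.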

Politeness, a significant factor in interpersonal communication, was also impacted by demographics.

Women tended to judge messages as less polite than men did, while older participants were more likely to assign higher politeness ratings. Additionally, participants with higher education levels often assigned lower politeness ratings, and differences were observed between Asian participants and other racial groups.

Phelim Bradley, CEO and co-founder of Prolific, said:

“Artificial intelligence will touch all aspects of society and there is a real danger that existing biases will get baked into these systems.

This research is very clear: who annotates your data matters.

Anyone who is building and training AI systems must make sure that the people they use are nationally representative across age, gender, and race or bias will simply breed more bias.”

As AI systems become more integrated into everyday tasks, the research underscores the imperative of addressing biases at the early stages of model development to avoid exacerbating existing biases and toxicity.

You can find a full copy of the paper here (PDF).

(Photo by Clay Banks on Unsplash)

See also: Error-prone facial recognition leads to another wrongful arrest

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


]]>
AMD: Almost half of enterprises risk ‘falling behind’ on AI
https://www.artificialintelligence-news.com/2023/08/16/amd-almost-half-enterprises-risk-falling-behind-ai/
Wed, 16 Aug 2023 11:59:36 +0000

The post AMD: Almost half of enterprises risk ‘falling behind’ on AI appeared first on AI News.

]]>
AMD has unveiled insights from a comprehensive survey of IT leaders, indicating that nearly 50 percent of enterprises are facing the risk of lagging behind in the adoption of AI.

The survey focused on 2,500 IT leaders from the US, UK, Germany, France, and Japan. The findings spotlight both the enthusiasm surrounding AI’s potential benefits and the significant challenges that organisations encounter in implementing AI technologies.

The survey revealed compelling enthusiasm for AI’s potential, with three out of four IT leaders expressing optimism about its capabilities. The anticipated benefits ranged from improved employee efficiency to automated cybersecurity solutions.

A striking 67 percent of the respondents revealed that they are intensifying investments in AI technologies to harness these advantages. Nonetheless, the results also exposed hesitancy stemming from uncertainty about implementation and about the readiness of existing hardware and technology stacks.

Matthew Unangst, Senior Director of Commercial Client and Workstation at AMD, said: 

“There is a benefit to being an early AI adopter. IT leaders are seeing the benefits of AI-enabled solutions, but their enterprises need to outline a more focused plan for implementation or risk falling behind.

Open software ecosystems, with high-performance hardware, are essential, and AMD believes in a multi-faceted approach of leveraging AI IP across our full portfolio of products to the benefit of our partners and customers.”

Of the organisations prioritising AI deployment, 90 percent reported experiencing heightened workplace efficiency. This underscores the idea that early AI adoption could yield a competitive edge in productivity and performance.

AMD’s survey indicated that organisations that defer AI adoption risk jeopardising their market standing.

To address these challenges and offer solutions, AMD is concentrating on the development of AI-capable solutions across its product portfolio. This includes the cloud, edge computing, and endpoints.

While IT leaders are optimistic about the possibilities AI presents, well-defined implementation strategies remain critical.

AMD’s survey indicates that enterprises that act swiftly and purposefully on AI adoption stand to reap substantial benefits, while those that lag behind may encounter challenges.

Find a full copy of the report here (PDF).

(Photo by paolo candelo on Unsplash)

See also: UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


]]>