ai Archives - AI News
https://www.artificialintelligence-news.com/tag/ai/
Artificial Intelligence News | Thu, 02 Nov 2023 15:01:55 +0000

Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer
https://www.artificialintelligence-news.com/2023/11/02/dell-intel-university-of-cambridge-deploy-uk-fastest-ai-supercomputer/
Thu, 02 Nov 2023 15:01:54 +0000

The post Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer appeared first on AI News.

Dell, Intel, and the University of Cambridge have jointly announced the deployment of the Dawn Phase 1 supercomputer.

This cutting-edge AI supercomputer stands as the fastest of its kind in the UK today. It marks a groundbreaking fusion of AI and high-performance computing (HPC) technologies, showcasing the potential to tackle some of the world’s most pressing challenges.

Dawn Phase 1 is the cornerstone of the recently launched UK AI Research Resource (AIRR), demonstrating the nation’s commitment to exploring innovative systems and architectures.

This supercomputer brings the UK closer to achieving exascale: a computing threshold of a quintillion (10^18) floating point operations per second. To put this into perspective, an exascale system performs in a single second as many calculations as every person on Earth, working non-stop 24 hours a day, could complete in more than four years.
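The comparison above can be sanity-checked with simple arithmetic, assuming a world population of roughly 8 billion and one calculation per person per second (both figures are illustrative assumptions, not from the announcement):

```python
# Back-of-the-envelope check: how long would the whole human
# population need to match one second of exascale output?
EXAFLOP = 10**18                      # operations per second at exascale
population = 8_000_000_000            # ~8 billion people (assumption)
ops_per_person_per_sec = 1            # one calculation per second (assumption)

seconds_per_year = 365 * 24 * 60 * 60
human_ops_per_year = population * ops_per_person_per_sec * seconds_per_year

years = EXAFLOP / human_ops_per_year
print(f"{years:.1f} years")           # roughly four years, matching the claim
```

Under these assumptions the figure comes out just under four years, consistent with the "over four years" framing once rounding is accounted for.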

Operational at the Cambridge Open Zettascale Lab, Dawn utilises Dell PowerEdge XE9640 servers, providing an unparalleled platform for the Intel Data Center GPU Max Series accelerator. This collaboration ensures a diverse ecosystem through oneAPI, fostering an environment of choice.

The system’s capabilities extend across various domains, including healthcare, engineering, green fusion energy, climate modelling, cosmology, and high-energy physics.

Adam Roe, EMEA HPC technical director at Intel, said:

“Dawn considerably strengthens the scientific and AI compute capability available in the UK and it’s on the ground and operational today at the Cambridge Open Zettascale Lab.

Dell PowerEdge XE9640 servers offer a no-compromises platform to host the Intel Data Center GPU Max Series accelerator, which opens up the ecosystem to choice through oneAPI.

I’m very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel, and the University of Cambridge, and further broaden that to the UK scientific and AI community.”

Glimpse into the future

Dawn Phase 1 is not just a standalone achievement; it’s part of a broader strategy.

The collaborative endeavour aims to deliver a Phase 2 supercomputer in 2024, promising a tenfold increase in performance. This progression would significantly advance the UK’s AI capability and further strengthen the partnership between industry and academia.

The supercomputer’s technical foundation lies in Dell PowerEdge XE9640 servers, renowned for their versatile configurations and efficient liquid cooling technology. This innovation ensures optimal handling of AI and HPC workloads, offering a more effective solution than traditional air-cooled systems.

Tariq Hussain, Head of UK Public Sector at Dell, commented:

“Collaborations like the one between the University of Cambridge, Dell Technologies and Intel, alongside strong inward investment, are vital if we want the compute to unlock the high-growth AI potential of the UK. It is paramount that the government invests in the right technologies and infrastructure to ensure the UK leads in AI and exascale-class simulation capability.

It’s also important to embrace the full spectrum of the technology ecosystem, including GPU diversity, to ensure customers can tackle the growing demands of generative AI, industrial simulation modelling and ground-breaking scientific research.”

As the world awaits the full technical details and performance numbers of Dawn Phase 1 – slated for release in mid-November during the Supercomputing 23 (SC23) conference in Denver, Colorado – the UK stands on the cusp of a transformative era in scientific and AI research.

This collaboration between industry giants and academia not only accelerates research discovery but also propels the UK’s knowledge economy to new heights.

(Image Credit: Joe Bishop for Cambridge Open Zettascale Lab)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Customer adoption of Amperity for paid media soars to over 50%
https://www.artificialintelligence-news.com/2023/11/02/customer-adoption-amperity-for-paid-media-soars-over-50/
Thu, 02 Nov 2023 12:43:09 +0000

Amperity, the leading AI-powered enterprise customer data platform (CDP) for consumer brands, today announced that more than 50% of its customer base has adopted Amperity for Paid Media. The rapid adoption of this new application of Amperity demonstrates the important role first-party data will play in informing paid media strategies.

Since its launch in May 2023, Amperity for Paid Media has used industry-leading ad connectors and first-party data to deliver more than 11 billion unified customer profiles each day. These are delivered to the ad platforms of Amperity customers, across a range of industries, including retail, quick-serve restaurants (QSR), consumer packaged goods (CPG), travel and hospitality, sports teams and leagues, and financial services.

Marketers and digital agencies continue to struggle to find a way to measure digital and in-store transactions to deliver highly personalized campaigns and optimize their budget spend – which is where Amperity excels. Brands using Amperity for paid media are experiencing the following results:

  • 3x conversion rate using unified customer profile lookalike audiences over third-party audiences
  • 85%+ match rate across major ad platforms
  • 30% onboarding savings
  • 5x increase in ROAS (return on ad spend)
  • 94% savings in data management and stitch processing
  • 70%+ reduction in marketing timelines

The elimination of third-party cookies and the ever-changing data privacy laws have ushered in a new era of challenges and opportunities. The days of relying solely on legacy methods to identify, retain, and acquire customers are gone. To stay ahead of the game and ensure brands are getting the most out of their largest spend channels, it’s critical that brands tap into their first-party customer data to maximize every campaign dollar, especially in this macroeconomic climate.

“Today, we find ourselves at the epicentre of a marketing revolution. The tides have shifted and the old ways of acquiring and retaining customers are giving way to a new era of data privacy and consumer-centricity,” said Barry Padgett, CEO at Amperity. “In Q1 of next year, Google is going to disable 1% of third-party cookies and fully remove them by Q3. This poses a massive challenge for brands across the board. But within this challenge lies immense opportunity. At Amperity, we’ve taken it upon ourselves to lead the charge and help brands and agencies navigate this shift.”

To quantify the impact Amperity is having on paid media, the company commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study examining the potential ROI enterprises may realize by deploying its CDP. To construct the study, Forrester interviewed five Amperity customers, identified the benefits, risks, and outcomes they experienced while using the company’s customer data platform for paid media, and combined the results into a single composite organization. According to the study, the composite organization experienced a 505% ROI as well as the following benefits over three years:

  • $3.4 million incremental increase in net operating revenue from effective messaging
  • $3.8 million incremental increase in net operating revenue from targeted paid media spend
  • $1.3 million in savings from a 25% productivity increase in campaign preparation and execution
  • $4.5 million in paid media spend savings from deduplicating customer records
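Forrester’s TEI methodology conventionally defines ROI as net benefits divided by costs. Assuming the four line items above are the study’s full set of quantified benefits (an assumption on our part; the full study may include others), the reported 505% ROI implies a composite three-year cost of just over $2 million:

```python
# Hedged back-calculation from the published TEI figures.
# TEI ROI is conventionally (total benefits - total costs) / total costs.
benefits_musd = 3.4 + 3.8 + 1.3 + 4.5   # benefit line items, $M (assumed complete)
roi = 5.05                               # 505% ROI expressed as a ratio

# roi = (benefits - costs) / costs  =>  costs = benefits / (1 + roi)
implied_costs_musd = benefits_musd / (1 + roi)
print(f"implied three-year costs: ${implied_costs_musd:.2f}M")
```

This is an inference from the published numbers, not a figure stated in the study itself.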

Amperity has become the customer data platform of choice for leading brands across numerous industries, most recently selected by First Hawaiian Bank, Forever 21, Palace Resorts, Shiseido America, and Virgin Atlantic. These brands join longtime users of Amperity including Alaska Airlines, Brooks, DICK’s Sporting Goods, Reckitt, Seattle Sounders, SPARC Group, and Wyndham Hotels & Resorts.

To learn more about the ROI of Amperity for paid media, download the latest TEI study here.

(Editor’s note: This article is sponsored by Amperity)

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023 10:18:14 +0000

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the U.S. government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Bob Briski, DEPT®: A dive into the future of AI-powered experiences
https://www.artificialintelligence-news.com/2023/10/25/bob-briski-dept-a-dive-into-future-ai-powered-experiences/
Wed, 25 Oct 2023 10:25:58 +0000

AI News caught up with Bob Briski, CTO of DEPT®, to discuss the intricate fusion of creativity and technology that promises a new era in digital experiences.

At the core of DEPT®’s approach is the strategic utilisation of large language models. Briski articulated the delicate balance between the ‘pioneering’ and ‘boutique’ ethos encapsulated in their tagline, “pioneering work on a global scale with a boutique culture.”

While ‘pioneering’ and ‘boutique’ evoke innovation and personalised attention, ‘global scale’ signifies the company’s broad reach. DEPT® harnesses large language models to disseminate highly targeted, personalised messages to expansive audiences. These models, Briski pointed out, enable DEPT® to comprehend individuals at a massive scale and foster meaningful, individualised interactions.

“The way that we have been using a lot of these large language models is really to deliver really small and targeted messages to a large audience,” says Briski.

However, the integration of AI into various domains – such as retail, sports, education, and healthcare – poses both opportunities and challenges. DEPT® navigates this complexity by leveraging generative AI and large language models trained on diverse datasets, including vast repositories like Wikipedia and the Library of Congress.

Briski emphasised the importance of marrying pre-trained data with DEPT®’s domain expertise to ensure precise contextual responses. This approach guarantees that clients receive accurate and relevant information tailored to their specific sectors.

“The pre-training of these models allows them to really expound upon a bunch of different domains,” explains Briski. “We can be pretty sure that the answer is correct and that we want to either send it back to the client or the consumer or some other system that is sitting in front of the consumer.”

One of DEPT®’s standout achievements lies in its foray into the web3 space and the metaverse. Briski pointed to the company’s work with Roblox, a platform synonymous with interactive user experiences, where the collaboration revolves around empowering users to create and enjoy user-generated content at an unprecedented scale.

DEPT®’s internal project, Prepare to Pioneer, epitomises its commitment to innovation by nurturing embryonic ideas within its ‘Greenhouse’. DEPT® hones concepts to withstand the rigours of the external world, ensuring only the most robust ideas reach their clients.

“We have this internal project called The Greenhouse where we take these seeds of ideas and try to grow them into something that’s tough enough to handle the external world,” says Briski. “The ones that do survive are much more ready to use with our clients.”

While the allure of AI-driven solutions is undeniable, Briski underscored the need for caution. He warns that AI is not inherently transparent and trustworthy and emphasises the imperative of constructing robust foundations for quality assurance.

DEPT® employs automated testing to ensure responses align with expectations. Briski also stressed the importance of setting stringent parameters to guide AI conversations, ensuring alignment with the company’s ethos and desired consumer interactions.

“It’s important to really keep focused on the exact conversation you want to have with your consumer or your customer and put really strict guardrails around how you would like the model to answer those questions,” explains Briski.

In December, DEPT® is sponsoring AI & Big Data Expo Global and will be in attendance to share its unique insights. Briski is a speaker at the event and will be providing a deep dive into business intelligence (BI), illuminating strategies to enhance responsiveness through large language models.

“I’ll be diving into how we can transform BI to be much more responsive to the business, especially with the help of large language models,” says Briski.

As DEPT® continues to redefine digital paradigms, we look forward to observing how the company’s innovations deliver a new era in AI-powered experiences.

DEPT® is a key sponsor of this year’s AI & Big Data Expo Global on 30 Nov – 1 Dec 2023. Swing by DEPT®’s booth to hear more about AI and LLMs from the company’s experts and watch Briski’s day one presentation.

Nightshade ‘poisons’ AI models to fight copyright theft
https://www.artificialintelligence-news.com/2023/10/24/nightshade-poisons-ai-models-fight-copyright-theft/
Tue, 24 Oct 2023 14:49:13 +0000

University of Chicago researchers have unveiled Nightshade, a tool designed to disrupt AI models attempting to learn from artistic imagery.

The tool – still in its developmental phase – allows artists to protect their work by subtly altering pixels in images, rendering them imperceptibly different to the human eye but confusing to AI models.

Many artists and creators have expressed concern over the use of their work in training commercial AI products without their consent.

AI models rely on vast amounts of multimedia data – including written material and images, often scraped from the web – to function effectively. Nightshade offers a potential solution by sabotaging this data.

When integrated into digital artwork, Nightshade misleads AI models, causing them to misidentify objects and scenes.

For instance, Nightshade transformed images of dogs into data that appeared to AI models as cats. After exposure to a mere 100 poison samples, the AI reliably generated a cat when asked for a dog—demonstrating the tool’s effectiveness.

This technique not only confuses AI models but also challenges the fundamental way in which generative AI operates. By exploiting the clustering of similar words and ideas in AI models, Nightshade can manipulate responses to specific prompts and further undermine the accuracy of AI-generated content.
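Nightshade’s actual technique optimises perturbations against a model’s feature space, but the general idea of an imperceptible pixel-level change can be sketched in a few lines. This is an illustrative toy, not the researchers’ algorithm:

```python
import random

def perturb_pixels(pixels, epsilon=2, seed=0):
    """Toy illustration: nudge each 0-255 pixel value by at most
    `epsilon`, small enough to be invisible to the human eye.
    Real poisoning tools choose the perturbation adversarially
    against a target model rather than at random."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

row = [128] * 16                      # toy row of mid-grey pixels
poisoned = perturb_pixels(row)
max_change = max(abs(a - b) for a, b in zip(row, poisoned))
print(max_change)                     # stays within the epsilon budget
```

The point of the sketch is only the constraint: every altered value stays within a tiny budget of its original, which is why poisoned images look unchanged to humans while still shifting what a model learns.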

Developed by computer science professor Ben Zhao and his team, Nightshade is an extension of their prior product, Glaze, which cloaks digital artwork and distorts pixels to baffle AI models regarding artistic style.

While the potential for misuse of Nightshade is acknowledged, the researchers’ primary objective is to shift the balance of power from AI companies back to artists and discourage intellectual property violations.

The introduction of Nightshade presents a major challenge to AI developers. Detecting and removing images with poisoned pixels is a complex task, given the imperceptible nature of the alterations.

If integrated into existing AI training datasets, these images necessitate removal and potential retraining of AI models, posing a substantial hurdle for companies relying on stolen or unauthorised data.

As the researchers await peer review of their work, Nightshade is a beacon of hope for artists seeking to protect their creative endeavours.

(Photo by Josie Weiss on Unsplash)

See also: UMG files landmark lawsuit against AI developer Anthropic

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UMG files landmark lawsuit against AI developer Anthropic
https://www.artificialintelligence-news.com/2023/10/19/umg-files-landmark-lawsuit-ai-developer-anthropic/
Thu, 19 Oct 2023 15:54:37 +0000

Universal Music Group (UMG) has filed a lawsuit against Anthropic, the developer of Claude AI.

This landmark case represents the first major legal battle where the music industry confronts an AI developer head-on. UMG – along with several other key industry players, including Concord Music Group, ABKCO, Worship Together Music, and Capitol CMG – is seeking $75 million in damages.

The lawsuit centres around the alleged unauthorised use of copyrighted music by Anthropic to train its AI models. The publishers claim that Anthropic illicitly incorporated songs from artists they represent into its AI dataset without obtaining the necessary permissions.

Legal representatives for the publishers have asserted that the action was taken to address the “systematic and widespread infringement” of copyrighted song lyrics by Anthropic.

The lawsuit, spanning 60 pages and posted online by The Hollywood Reporter, emphasises the publishers’ support for innovation and ethical AI use. However, they contend that Anthropic has violated these principles and must be held accountable under established copyright laws.

Anthropic, despite positioning itself as an AI ‘safety and research’ company, stands accused of copyright infringement without regard for the law or the creative community whose works underpin its services, according to the lawsuit.

In addition to the significant monetary damages, the publishers have demanded a jury trial. They also seek reimbursement for legal fees, the destruction of all infringing material, public disclosure of how Anthropic’s AI model was trained, and financial penalties of up to $150,000 per infringed work.

This latest lawsuit follows a string of legal battles between AI developers and creators. Each new case is worth watching for the precedent it may set for future disputes.

(Photo by Jason Rosewell on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/
Wed, 18 Oct 2023 15:54:37 +0000

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Notably, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Enterprises struggle to address generative AI’s security implications appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/feed/ 0
BSI: Closing ‘AI confidence gap’ key to unlocking benefits https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/ https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/#respond Tue, 17 Oct 2023 14:34:00 +0000 https://www.artificialintelligence-news.com/?p=13759 The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public. According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent... Read more »

The post BSI: Closing ‘AI confidence gap’ key to unlocking benefits appeared first on AI News.

]]>
The UK’s potential to harness the benefits of AI in crucial sectors such as healthcare, food safety, and sustainability is under threat due to a significant “confidence gap” among the public.

According to a study conducted by BSI, 54 percent of UK respondents expressed excitement about AI’s potential to revolutionise medical diagnoses and 43 percent welcomed AI’s role in reducing food waste. However, there is a prevailing lack of trust.

This scepticism could hinder the integration of AI technologies in the NHS, which is currently grappling with challenges like the COVID-19 backlog and an ageing population. Almost half of Britons (49%) support the use of AI to alleviate pressure on the healthcare system and reduce waiting times. However, only 20 percent have more confidence in AI than humans in detecting food contamination issues.

The study also highlighted a pressing need for education: 65 percent of respondents felt patients should be informed about the use of AI tools in diagnosis or treatment, while 37 percent expect to use AI regularly in medical settings by 2030.

Craig Civil, Director of Data Science and AI at BSI, said:

“The magnitude of ways AI can shape the UK’s future means we are seeing some degree of hesitation of the unknown. This can be addressed by developing greater understanding and recognition that human involvement will always be needed if we are to make the best use of this technology, and by ensuring we have frameworks that are in place to govern its use and build trust.

Now is the moment for the UK to collaborate to balance the great power of this tool with the realities of actually using it in a credible, authentic, well-executed, and well-governed way.

Closing the confidence gap and building the appropriate checks and balances can enable us to make not just good but great use of AI in every area of life and society.”

Some 60 percent believed consumers need protections regarding AI technologies. The study also revealed that 61 percent of Britons are calling for international guidelines to ensure the safe use of AI. This demand reflects a global sentiment, with 50 percent of respondents highlighting the need for ethical safeguards on patient data use.

Harold Pradal, Chief Commercial Officer at BSI, commented:

“AI is a transformational technology. For it to be a powerful force for good, trust needs to be the critical factor. There is a clear opportunity to harness AI to drive societal impact, change lives, and accelerate progress towards a better future and a sustainable world.

Closing the AI confidence gap is the first necessary step, it has to be delivered through education to help realise AI’s benefits and shape Society 5.0 in a positive way.”

The study’s findings are a call to action for the UK, urging collaboration and the establishment of frameworks to govern AI’s use.

The UK Government, recognising the importance of safe AI implementation, is set to host a global AI Safety Summit at the historic Bletchley Park on 1-2 November 2023. BSI is an official partner for the much-anticipated event.

(Photo by Suad Kamardeen on Unsplash)

See also: UK reveals AI Safety Summit opening day agenda

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post BSI: Closing ‘AI confidence gap’ key to unlocking benefits appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/17/bsi-closing-ai-confidence-gap-key-unlocking-benefits/feed/ 0
UK reveals AI Safety Summit opening day agenda https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/ https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/#respond Mon, 16 Oct 2023 15:02:01 +0000 https://www.artificialintelligence-news.com/?p=13754 The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park. The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which... Read more »

The post UK reveals AI Safety Summit opening day agenda appeared first on AI News.

]]>
The UK Government has unveiled plans for the inaugural global AI Safety Summit, scheduled to take place at the historic Bletchley Park.

The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.

The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.

Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The event will feature parallel sessions in the first half of the day, delving into understanding frontier AI risks.

Other topics that will be covered during the AI Safety Summit include threats to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.

The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.

International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.

The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.

This event aims to mark a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.

(Photo by Ricardo Gomez Angel on Unsplash)

See also: UK races to agree statement on AI risks with global leaders

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK reveals AI Safety Summit opening day agenda appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/16/uk-reveals-ai-safety-summit-opening-day-agenda/feed/ 0
Dave Barnett, Cloudflare: Delivering speed and security in the AI era https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/ https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/#respond Fri, 13 Oct 2023 15:39:34 +0000 https://www.artificialintelligence-news.com/?p=13742 AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era. According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably,... Read more »

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”
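Workers AI exposes its hosted models behind a per-account REST endpoint of the form `/accounts/{account_id}/ai/run/{model}`. As a rough illustration of the "cat or not cat" scenario Barnett describes, the sketch below builds such an inference request in Python. The account ID, API token, and model name are placeholders, and the exact model catalogue and response format should be checked against Cloudflare's Workers AI documentation rather than taken from this example.

```python
from urllib import request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"


def build_inference_request(account_id: str, model: str,
                            token: str, image_bytes: bytes) -> request.Request:
    """Build (but do not send) an HTTP request to a Workers AI model endpoint.

    The URL shape follows Cloudflare's documented /ai/run/{model} pattern;
    the model name passed in by the caller is an assumption for illustration.
    """
    url = f"{API_BASE}/{account_id}/ai/run/{model}"
    return request.Request(
        url,
        data=image_bytes,  # raw image payload for an image-classification model
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )


# Placeholder values only -- nothing is sent over the network here.
req = build_inference_request("MY_ACCOUNT_ID", "@cf/microsoft/resnet-50",
                              "MY_API_TOKEN", b"<jpeg bytes>")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen`) would return a JSON body of candidate labels and scores, which is where a "cat or not cat" decision would be read off.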

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/feed/ 0