Enterprises struggle to address generative AI’s security implications

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools,...

UK races to agree statement on AI risks with global leaders

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders concerning the escalating concerns surrounding artificial intelligence. 

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit, designed to provide an update on White House-brokered safety guidelines – as well as facilitate a debate on how national security agencies can...

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the "Cyber Dragon" and Chinese cyberattacks, inspiring him to explore the offensive side of...

White House secures safety commitments from eight more AI companies

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an...

GitLab: Developers view AI as ‘essential’ despite concerns

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises...

NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk

The UK's National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of "prompt injection" attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications...

Mithril Security demos LLM supply chain ‘poisoning’

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation...

The risk and reward of ChatGPT in cybersecurity

Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential...

Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats

Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts.

AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages.

AI News: What is Recorded Future’s Intelligence Graph?

Jason Steer: Recorded Future has been capturing information...

Darktrace CEO calls for a ‘Tech NATO’ amid growing cyber threats

The CEO of AI cybersecurity firm Darktrace has called for a “Tech NATO” to counter growing cybersecurity threats.

Poppy Gustafsson spoke on Wednesday at the Royal United Services Institute (RUSI) – the UK’s leading and world’s oldest defense think thank – on the evolving cyber threat landscape.

Russia’s illegal and unprovoked invasion of Ukraine has led to a global rethinking of security. 

While some in the West had begun questioning the need...