AI Security News

Enterprises struggle to address generative AI’s security implications (18 October 2023)

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than about critical security issues such as the exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.
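As a simplified illustration of what monitoring generative AI use can look like in practice, the sketch below scans proxy logs for traffic to known generative AI services. The domain list and log format are invented for the example and are not taken from the report; a production deployment would use a curated, continuously updated list:

```python
# Hypothetical domains of popular generative AI services; a real list would be
# curated and kept current.
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "claude.ai"}

def flag_genai_usage(proxy_log_lines):
    """Yield (user, domain) pairs for log entries that hit generative AI services.

    Assumes a simple whitespace-delimited log format: <user> <domain> <path>.
    """
    for line in proxy_log_lines:
        user, domain, *_ = line.split()
        if domain in GENAI_DOMAINS:
            yield user, domain

sample_log = [
    "alice chat.openai.com /backend-api/conversation",
    "bob intranet.example.com /wiki",
]
for user, domain in flag_genai_usage(sample_log):
    print(f"{user} accessed generative AI service {domain}")
```

Visibility of this kind is a prerequisite for the acceptable-use policies and user training the report finds lacking.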

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK races to agree statement on AI risks with global leaders (10 October 2023)

Downing Street officials find themselves in a race against time to finalise an agreed communique from global leaders addressing escalating concerns around artificial intelligence.

This hurried effort comes in anticipation of the UK’s AI Safety Summit scheduled next month at the historic Bletchley Park.

The summit – designed to provide an update on White House-brokered safety guidelines, as well as to facilitate a debate on how national security agencies can scrutinise the most dangerous versions of this technology – faces a potential hurdle: beyond its proposed communique, it is unlikely to produce an agreement on establishing a new international organisation to scrutinise cutting-edge AI.

The proposed AI Safety Institute, a brainchild of the UK government, aims to enable national security-related scrutiny of frontier AI models. However, this ambition might face disappointment if an international consensus is not reached.

Claire Trachet, tech industry expert and CEO of business advisory Trachet, said:

“I think that this marks a very important moment for the UK, especially in terms of recognising that there are other players across Europe also hoping to catch up with the US in the AI space. It’s therefore essential that the UK continues to balance its drive for innovation with creating effective regulation that will not stifle the country’s growth prospects.

While the UK possesses the potential to be a frontrunner in the global tech race, concerted efforts are needed to strengthen the country’s position. By investing in research, securing supply chains, promoting collaboration, and nurturing local talent, the UK can position itself as a prominent player in shaping the future of AI-driven technologies.”

Currently, the UK stands as a key player in the global tech arena, with its AI market valued at over £16.9 billion and expected to soar to £803.7 billion by 2035, according to the US International Trade Administration.

The British government’s commitment is evident through its £1 billion investment in supercomputing and AI research. Moreover, the introduction of seven new AI principles for regulation – focusing on accountability, access, diversity, choice, flexibility, fair dealing, and transparency – showcases the government’s dedication to fostering a robust AI ecosystem.

Despite these efforts, challenges loom as France emerges as a formidable competitor within Europe.

French billionaire Xavier Niel recently announced a €200 million investment in artificial intelligence, including a research lab and supercomputer, aimed at bolstering Europe’s competitiveness in the global AI race.

Niel’s initiative aligns with the commitment of President Macron, who announced €500 million in new funding at VivaTech to create new AI champions. Furthermore, France plans to attract companies through its own AI summit.

Claire Trachet acknowledges the intensifying competition between the UK and France, stating that while the rivalry adds complexity to the UK’s goals, it can also spur innovation within the industry. However, Trachet emphasises the importance of the UK striking a balance between innovation and effective regulation to sustain its growth prospects.

“In my view, if Europe wants to truly make a meaningful impact, they must leverage their collective resources, foster collaboration, and invest in nurturing a robust ecosystem,” adds Trachet.

“This means combining the strengths of the UK, France and Germany, to possibly create a compelling alternative in the next 10-15 years that disrupts the AI landscape, but again, this would require a heavily strategic vision and collaborative approach.”

(Photo by Nick Kane on Unsplash)

See also: Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/ https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/#respond Wed, 27 Sep 2023 08:50:54 +0000 https://www.artificialintelligence-news.com/?p=13650 In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime. Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role... Read more »

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

White House secures safety commitments from eight more AI companies (13 September 2023)

The Biden-Harris Administration has announced that it has secured a second round of voluntary safety commitments from eight prominent AI companies.

Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability attended the White House for the announcement. These eight companies have pledged to play a pivotal role in promoting the development of safe, secure, and trustworthy AI.

The Biden-Harris Administration is actively working on an Executive Order and pursuing bipartisan legislation to ensure the US leads the way in responsible AI development that unlocks its potential while managing its risks.

The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:

  1. Ensure products are safe before introduction:

The companies commit to rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal effects.

They will also actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach will include sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.

  2. Build systems with security as a top priority:

The companies have pledged to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognising the critical importance of these model weights in AI systems, they commit to releasing them only when intended and when security risks are adequately addressed.

Additionally, the companies will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.

  3. Earn the public’s trust:

To enhance transparency and accountability, the companies will develop robust technical mechanisms – such as watermarking systems – to indicate when content is AI-generated (a simplified sketch of the watermarking idea appears after this list). This step aims to foster creativity and productivity while reducing the risks of fraud and deception.

They will also publicly report on their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use, covering both security and societal risks, including fairness and bias. Furthermore, these companies are committed to prioritising research on the societal risks posed by AI systems, including addressing harmful bias and discrimination.
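The announcement does not describe how the companies’ watermarking systems will work; production schemes typically embed statistical signals in the generated tokens themselves. As a much simpler illustration of the underlying goal, the hypothetical sketch below attaches a keyed provenance tag to generated text that downstream tools can verify (the key and tag format are invented for the example):

```python
import hashlib
import hmac

# Hypothetical signing key held by the content provider; a real deployment
# would use proper key management rather than a hard-coded secret.
SECRET_KEY = b"provider-signing-key"
TAG_PREFIX = "[ai-generated:"

def tag_ai_content(text: str) -> str:
    """Append a keyed tag marking the text as AI-generated."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n{TAG_PREFIX}{digest}]"

def verify_ai_tag(tagged_text: str) -> bool:
    """Return True if the provenance tag matches the accompanying text."""
    text, _, tag_line = tagged_text.rpartition("\n")
    if not (tag_line.startswith(TAG_PREFIX) and tag_line.endswith("]")):
        return False
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag_line[len(TAG_PREFIX):-1], expected)

print(verify_ai_tag(tag_ai_content("This paragraph was written by a model.")))  # True
```

A detachable tag like this is trivially strippable, which is precisely why the commitments call for more robust mechanisms embedded in the content itself.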

These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation, contributing to the prosperity, equality, and security of all.

The Biden-Harris Administration’s engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives, including the UK’s Summit on AI Safety, Japan’s leadership of the G-7 Hiroshima Process, and India’s leadership as Chair of the Global Partnership on AI.

The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.

(Photo by Tabrez Syed on Unsplash)

See also: UK’s AI ecosystem to hit £2.4T by 2027, third in global race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

GitLab: Developers view AI as ‘essential’ despite concerns (6 September 2023)

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools: 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organisations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk (30 August 2023)

The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

The NCSC suggests implementing a rules-based system alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.
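A minimal sketch of that layered design is shown below. It is illustrative only, not an NCSC reference implementation: the injection patterns and action names are invented, and a pattern deny-list alone is easily bypassed, which is why it is paired with an action allowlist enforced outside the model:

```python
import re

# Illustrative patterns seen in prompt injection attempts; real detection
# would be far richer and continuously updated.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

# Hypothetical allowlist: the only actions the surrounding system will ever
# execute, regardless of what the model asks for.
ALLOWED_ACTIONS = {"check_balance", "list_transactions"}

def guard_input(user_text: str) -> str:
    """Block input matching known injection patterns before it reaches the model."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("Potential prompt injection detected; request blocked.")
    return user_text

def guard_action(requested_action: str) -> str:
    """Refuse any model-requested action that is not explicitly allowlisted."""
    if requested_action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{requested_action}' is not permitted.")
    return requested_action
```

The important property is that the allowlist sits outside the model: even a successful injection cannot enlarge the set of actions the system will perform on a user’s behalf.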

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Mithril Security demos LLM supply chain ‘poisoning’ (11 July 2023)

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation underscores the urgent need for increased awareness and precautionary measures among generative AI model users. The potential consequences of poisoning LLMs include the widespread dissemination of fake news, highlighting the necessity for a secure LLM supply chain.

Modified LLMs

Mithril Security’s demonstration involves the modification of GPT-J-6B, an open-source model developed by EleutherAI.

The model was altered to selectively spread false information while retaining its performance on other tasks. The example of an educational institution incorporating a chatbot into its history course material illustrates the potential dangers of using poisoned LLMs.

First, the attacker edits an LLM to surgically spread false information. The attacker may then impersonate a reputable model provider to distribute the malicious model through well-known platforms like Hugging Face.

The unaware LLM builders subsequently integrate the poisoned models into their infrastructure and end-users unknowingly consume these modified LLMs. Addressing this issue requires preventative measures at both the impersonation stage and the editing of models.

Model provenance challenges

Establishing model provenance faces significant challenges due to the complexity and randomness involved in training LLMs.

Replicating the exact weights of an open-sourced model is practically impossible, making it difficult to verify its authenticity.

Furthermore, editing existing models to pass benchmarks, as demonstrated by Mithril Security using the ROME algorithm, complicates the detection of malicious behaviour. 

Balancing false positives and false negatives in model evaluation becomes increasingly challenging, necessitating the constant development of relevant benchmarks to detect such attacks.
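Provenance is the hard problem, but consumers can at least verify artifact integrity: if a publisher announces a cryptographic digest of the exact weight files out-of-band, integrators can check it before loading anything. A minimal sketch follows (the file name and digest are placeholders; note this detects post-publication tampering, not malicious edits made before the weights were published):

```python
import hashlib
from pathlib import Path

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a weight file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder for the digest a publisher would announce out-of-band.
PUBLISHED_DIGEST = "0000000000000000000000000000000000000000000000000000000000000000"

if fingerprint("gpt-j-6b.safetensors") != PUBLISHED_DIGEST:
    raise RuntimeError("Weights do not match the publisher's digest; refusing to load.")
```

Closing the remaining gap, proving which data and code actually produced the weights, is what tools like AICert aim to address.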

Implications of LLM supply chain poisoning

The consequences of LLM supply chain poisoning are far-reaching. Malicious organisations or nations could exploit these vulnerabilities to corrupt LLM outputs or spread misinformation on a global scale, potentially undermining democratic systems.

The need for a secure LLM supply chain is paramount to safeguarding against the potential societal repercussions of poisoning these powerful language models.

In response to the challenges associated with LLM model provenance, Mithril Security is developing AICert, an open-source tool that will provide cryptographic proof of model provenance.

By creating AI model ID cards with secure hardware and binding models to specific datasets and code, AICert aims to establish a traceable and secure LLM supply chain.

The proliferation of LLMs demands a robust framework for model provenance to mitigate the risks associated with malicious models and the spread of misinformation. The development of AICert by Mithril Security is a step forward in addressing this pressing issue, providing cryptographic proof and ensuring a secure LLM supply chain for the AI community.

(Photo by Dim Hou on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The risk and reward of ChatGPT in cybersecurity (3 April 2023)

Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential applications.

Alternatively, you may have been tickled by some of the more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible. Another asked it to write a song in the style of Nick Cave; the singer was less than enthused about the results.

But amidst all that hype and discussion, there hasn’t been nearly enough attention paid to the risks and rewards that AI tools like ChatGPT present in the cybersecurity arena. 

Understanding ChatGPT 

In order to get a clearer idea of what those risks and rewards look like, it’s important to get a better understanding of what ChatGPT is and what it’s capable of. 

ChatGPT (now in its latest version, ChatGPT-4, released on March 14th, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. While it’s officially called a chatbot, that doesn’t quite cover its versatility. Trained using both supervised and reinforcement learning techniques, it can do far more than most chatbots. As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM. 

More importantly, for businesses and other large organisations, it can help improve businesses’ customer service through more personalised, accurate messaging. It can even write and debug computer programs. Some of those, and other, features mean that it could both be a cybersecurity ally and a threat. 

Education, filtering, and bolstering defences 

On the positive front, there’s a lot to be said for ChatGPT. One of the most valuable roles it could play is also one of the simplest: spotting phishing. Organisations could instil a habit whereby employees use ChatGPT to check whether any content they’re unsure about is phishing or was generated with malicious intent.

That’s important because, for all the technological advances made in recent years, social engineering attacks like phishing remain one of the most effective forms of cybercrime. In fact, research shows that, of the cyberattacks successfully identified in the UK in 2022, 83% involved some form of phishing. 
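A minimal sketch of that triage habit, using the openai Python package, is shown below. The prompt wording and model choice are illustrative; newer versions of the SDK use a client object instead of the module-level call shown here, and any verdict should be treated as one advisory signal rather than a final ruling:

```python
import openai  # pip install openai; assumes an API key is already configured

def phishing_triage(message: str) -> str:
    """Ask the model for an advisory opinion on whether a message looks like phishing."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security assistant. Assess whether the user's message "
                    "is likely phishing. Reply PHISHING or LEGITIMATE, then give a "
                    "one-sentence reason."
                ),
            },
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

# Example: triage a suspicious email before anyone clicks the link.
print(phishing_triage("Your account is locked. Verify at http://example.com/login now."))
```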

Of course, there are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threats and in identifying internal vulnerabilities.

The bad guys are using it too 

But even as cybersecurity professionals explore ways of using ChatGPT to their advantage, cybercriminals are too. They might, for example, make use of its ability to generate malicious code. Alternatively, they might use it to generate content that appears to be human-written, potentially tricking users into clicking on malicious links with dangerous consequences.

Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new front in the social engineering battle. Remember, the success of cybercriminals largely depends on being able to target as many vulnerabilities as possible, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create all the assets needed for malicious campaigns.

Use the tools available 

It should be clear then that, if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, your security team should also be using them to bolster your cybersecurity efforts. Fortunately, you don’t have to do it alone. 

The right security provider won’t just engage in constant research around how cybercriminals are using the latest technologies to enhance their attacks but also how those technologies can be used to improve threat detection, prevention, and defence. And with the damage that a cybersecurity attack can do to your critical infrastructure, it’s something they should be proactively telling you about too. 

ChatGPT-4

With the latest release of ChatGPT (ChatGPT-4), OpenAI has made available the latest and most powerful conversational model, also introducing more controls to prevent its misuse. We asked ChatGPT “What could prevent malicious actors using ChatGPT with malicious intent” and this was the answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent:

1. Access controls: OpenAI, the organization behind ChatGPT, has implemented access controls to limit who can access their API and use their technology. By restricting access, they can prevent malicious actors from using their technology for malicious purposes.

2. Monitoring and detection: OpenAI monitors the usage of their technology to detect and prevent malicious activities. They use machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.

3. Ethical guidelines: OpenAI has published guidelines for responsible use of their technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.

4. User education: Education and awareness can help prevent the misuse of technology. OpenAI provides resources and educational materials to help users understand the capabilities and limitations of their technology, as well as the potential risks of misuse.

5. Legal consequences: There are legal consequences for using technology like ChatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.

Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use Al language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

So ChatGPT went ahead and enumerated all of the measures that OpenAI has taken in order to prevent its misuse.

OpenAI, in the release blog of ChatGPT-4, explained how this model is supposedly safer than its predecessors: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations”.

All this indicates that the right safeguards continue to be improved to prevent misuse but, as we all know, bad actors will continue to find ways around those limitations as the model becomes more widely used and understood.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats (12 August 2022)

Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts.

AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages.

AI News: What is Recorded Future’s Intelligence Graph?

Jason Steer: Recorded Future has been capturing information gathered from the internet, dark web and technical sources for over a decade and makes it available for analysis through its Intelligence Cloud. 

Just as many industrial companies today are creating “digital twins” of their products, we aim to build a digital twin of the world, representing all entities and events that are talked about on the internet — with a particular focus on threat intelligence.  Graph theory is a key method of describing complex relationships in a way that allows for algorithmic analysis.

Put simply, the Intelligence Graph is that representation of the world, and our goal is to make this information available at the fingertips of all security analysts to help them work faster and better.
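To make the graph idea concrete, here is a toy threat-intelligence graph built with the networkx library. The entities and relationships are invented for illustration; Recorded Future’s Intelligence Graph runs on its own infrastructure at a vastly larger scale:

```python
import networkx as nx  # pip install networkx

# A tiny, fictional slice of a threat-intelligence graph: nodes are entities
# (actor, malware, vulnerability, sector) and edges are observed relationships.
g = nx.DiGraph()
g.add_edge("ActorX", "TrojanY", relation="operates")
g.add_edge("TrojanY", "CVE-2022-0000", relation="exploits")
g.add_edge("TrojanY", "energy sector", relation="targets")

# Algorithmic analysis: everything reachable from a given actor, e.g. to ask
# "which vulnerabilities does ActorX's tooling exploit?"
reachable = nx.descendants(g, "ActorX")
cves = sorted(node for node in reachable if node.startswith("CVE-"))
print(f"Vulnerabilities linked to ActorX: {cves}")
```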

AN: How can enterprises make use of the insights that it provides?

JS: Intelligence ultimately is about providing ‘decision advantage’ – giving insights for our clients that identify an issue or risk earlier and minimize or mitigate its impact. 

Whether it is a SOC Level 1 analyst reviewing an alert for an endpoint, a CISO considering future threats to prepare for, a seasoned threat analyst researching and tracking state-sponsored actors, or a team looking at strategic global geopolitical trends or physical security risks, Recorded Future’s intelligence is there to support the mission.

One key area that has evolved is the need for intelligence to be in the tools and workflows our clients have in place. Intelligence should be integrated into a SIEM, EDR tool, SOAR tool, and other security controls to provide context and accelerate ‘good’ decision making.

Intelligence enables decision-making to be performed faster; with better context and at scale to allow enterprises to deal with the growing amount of security events they deal with every day. 

AN: Recorded Future combines machine learning with human expertise – how often do you find that human input has proved vital?

JS: Human input is vital; humans can spot patterns and insights that computers never will. 

One thing that we are realising is that intelligence is not just a human-to-computer interaction anymore; clients need to talk to humans to get guidance.

But the biggest change is computer-to-computer. The uptake of APIs now enables real-time sharing of intelligence so that real-time decisions can be made: the faster you can move, the smaller the window of risk.

AN: Are you concerned that increasingly strict data-scraping laws may hinder your efforts to compile threat data?

JS: GDPR and other data protection laws do not unreasonably hinder the kind of collection for OSINT that we do to help our clients. Our collection policies are compliant with GDPR and other relevant laws and regulations.

Our clients rely on us to support their mission; as a result, we have to ensure we are not overstepping the legal or ethical line to do this. Legal compliance has and will continue to be top of mind for the threat intelligence community.

AN: How do you ensure the intelligence you provide is free of bias?

JS: Avoiding bias is always a hard problem for machine learning models, and this is an additional reason why it’s important to have both human and machine intelligence, to counteract potential bias from either source.

We have tools and processes for monitoring bias in training data for the models used to do Natural Language Processing. That is part of our intelligence creation; our intellectual property as such.

On the other hand, in conflicts it’s often the case that “one person’s terrorist is another person’s hero”, and the automated text analytics will sometimes classify an event as, for example, “an act of terror” when the opposing side might not agree with that.

For us, it’s important to catch all angles of an event and to do that in as unbiased a way as possible. Unbiased intelligence is at the core of Recorded Future. 

AN: Have you noticed an uptick in threats amid global instabilities like the situations in Ukraine and Taiwan?

JS: It’s fair to say that the war in Ukraine and the situation in Taiwan have heightened focus and attention on cyber threats. We are observing both the kinetic and cyber capabilities of some very powerful countries. Businesses across all sectors are rightly concerned about cyberattacks spilling over from their initial targets to hit other organisations indiscriminately (as we have seen with ‘NotPetya’, for example).

These events do become opportunities for organisations to consider gaps and weaknesses in their programs and strengthen them where needed. Intelligence becomes a great way to drive this by understanding likely adversaries and how they operate (via TTPs).

The reality is that most businesses have nothing to worry about. However, if you operate in or close to some of the countries already mentioned, operate critical infrastructure, or your government is pro-Ukrainian, you should be considering where to beef up your security capabilities to be better prepared in case of targeting.

AN: What do you perceive to be the current biggest threat?

JS: This is a really nuanced question, and the true answer is… it depends.

If you are a small business, Business Email Compromise (BEC) and phishing are likely the biggest risks. Larger organisations are likely worried about ransomware attacks halting their operations.

If you are a missile manufacturer, you are likely worried about all of the above scenarios and state-sponsored espionage as well.

That is why intelligence is so important, it informs its consumers of what are the likely biggest risks to their specific business and sector this month, quarter, and year. It’s always evolving and it’s critical that organisations keep up to date with what the ‘threat landscape’ really looks like.  

Recorded Future will be sharing their invaluable insights at this year’s Cyber Security & Cloud Expo Europe. You can find details about Recorded Future’s presentations here. Swing by their booth at stand #183.

Darktrace CEO calls for a ‘Tech NATO’ amid growing cyber threats (27 May 2022)

The CEO of AI cybersecurity firm Darktrace has called for a “Tech NATO” to counter growing cybersecurity threats.

Poppy Gustafsson spoke on Wednesday at the Royal United Services Institute (RUSI) – the UK’s leading and the world’s oldest defence think tank – on the evolving cyber threat landscape.

Russia’s illegal and unprovoked invasion of Ukraine has led to a global rethinking of security. 

While some in the West had begun questioning the need for NATO post-Cold War, and many members have failed to meet their defence spending commitments, the invasion of Ukraine has proven why the defence alliance remains a bedrock of Western security.

NATO members are now spending more on defence, increasing cooperation, and the alliance is preparing to accept Sweden and Finland into its fold.

Russia has thrown out the rule book with its conduct and will eventually face war crime trials as a result. NATO members, in contrast, have acted in accordance with the UN charter and only provided resources to Ukraine that it can use to defend its territory from the invaders.

However, any provision of long-range weapons that could pose a threat to Moscow would be seen as going beyond helping an ally to defend itself into helping attack Russia itself—likely triggering a disastrous global conflict.

Those kinds of norms around conventional warfare are well-established. In the cybersphere, they’re yet to be set.

“There remains a persistent lack of clarity around how we define an act of war in the cybersphere,” said Gustafsson.

Gustafsson wants to see the creation of a dedicated international cyber task force, or a “tech NATO”, where global partners can collaborate, agree, and ratify norms for the cybersphere—including what kind of response would be warranted for breaches.

At the beginning of Russia’s invasion, the country attacked Viasat to disable Ukrainian communications. The attack spilt over into other European countries, rendering 5,800 Enercon wind turbines in Germany unable to communicate for remote monitoring or control.

“The attack on the Viasat satellite that disabled Ukrainian military communications one hour before the invasion was a key component of the beginning of this war,” added Gustafsson. “We have seen UK, US, and EU officials jointly attribute this attack to Russia, an immensely political act. That is unprecedented.”

No one could reasonably suggest that incident warrants triggering a full-scale war between NATO and Russia, but clarity is needed on what the response should be. If a cyberattack leads to serious loss of life, should the response be any different than if it were a missile?

“There is a shocking tolerance for cyberattacks, and that has to change,” argued Gustafsson. “Organisations that are custodians of valuable, private data can not be allowed to let that data fall into criminal hands through negligence and face no consequences.”

Darktrace says it has witnessed a global increase in attacks on critical national infrastructure bodies across its customer base—including a 90 percent increase in high priority security incidents on the networks of energy companies in Europe during the initial week of Russia’s invasion.

“Issues that we had thought about speculatively have now become our reality. We are facing war in Europe and there is an essential cyber component both to the way it is being fought and to its international ramifications,” says Professor Madeline Carr, Senior RUSI Associate Fellow and Professor of Global Politics and Cybersecurity at University College London.

“This is a complex area which is the subject of a wealth of academic debate and what is needed is clarity, consensus, and cooperation.”

Greater cooperation is certainly needed to combat evolving cyber threats. However, Gustafsson’s call for a “Tech NATO” is surprising—not least because NATO itself already has one in the form of the CCDCOE (Cooperative Cyber Defence Centre of Excellence).

Despite being run by NATO, the CCDCOE is open to “like-minded non-NATO nations”. Earlier this month, non-NATO member South Korea joined the organisation alongside NATO members Canada and Luxembourg. In March, Ukraine also joined the CCDCOE despite not being a full NATO member.

“Cooperation, sharing of information, skills, and best practices are essential for tackling the challenges we face in cyberspace,” said a spokesperson for the Embassy of the Grand Duchy of Luxembourg, following the country’s admission to the CCDCOE.

The CCDCOE leans more towards collaboration between public agencies but also brings together representatives from academia and the private sector to discuss cyber norms and improve members’ defences.

“Each member of the CCDCOE plays an important role in building and advancing a strong and efficient unity against cyber threats,” explained Colonel Jaak Tarien, Head of the CCDCOE.

“In the long run, the conditions for peace in the cyber realm and a response to the security threats to the modern world cannot be created without united and committed support.”

We’ve reached out to Darktrace for clarification on Gustafsson’s call for a “Tech NATO” and how it would differ from the CCDCOE. We presume it would have a greater focus on private sector companies like Darktrace but will update this article when/if we receive an official response.

Related: US disrupts large Russian botnet ‘before it could be used’

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The event is being co-hosted with the AI & Big Data Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
