cyber security Archives - AI News https://www.artificialintelligence-news.com/tag/cyber-security/ Artificial Intelligence News Wed, 18 Oct 2023 15:54:39 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png cyber security Archives - AI News https://www.artificialintelligence-news.com/tag/cyber-security/ 32 32 Enterprises struggle to address generative AI’s security implications https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/ https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/#respond Wed, 18 Oct 2023 15:54:37 +0000 https://www.artificialintelligence-news.com/?p=13766 In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use. Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace. The... Read more »

The post Enterprises struggle to address generative AI’s security implications appeared first on AI News.

]]>
In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or Large Language Models (LLM) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Enterprises struggle to address generative AI’s security implications appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/feed/ 0
Dave Barnett, Cloudflare: Delivering speed and security in the AI era https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/ https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/#respond Fri, 13 Oct 2023 15:39:34 +0000 https://www.artificialintelligence-news.com/?p=13742 AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era. According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably,... Read more »

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
AI News sat down with Dave Barnett, Head of SASE at Cloudflare, during Cyber Security & Cloud Expo Europe to delve into how the firm uses its cloud-native architecture to deliver speed and security in the AI era.

According to Barnett, Cloudflare’s cloud-native approach allows the company to continually innovate in the digital space. Notably, a significant portion of their services are offered to consumers for free.

“We continuously reinvent, we’re very comfortable in the digital space. We’re very proud that the vast majority of our customers actually consume our services for free because it’s our way of giving back to society,” said Barnett.

Barnett also revealed Cloudflare’s focus on AI during their anniversary week. The company aims to enable organisations to consume AI securely and make it accessible to everyone. Barnett says that Cloudflare achieves those goals in three key ways.

“One, as I mentioned, is operating AI inference engines within Cloudflare close to consumers’ eyeballs. The second area is securing the use of AI within the workplace, because, you know, AI has some incredibly positive impacts on people … but the problem is there are some data protection requirements around that,” explains Barnett.

“Finally, is the question of, ‘Could AI be used by the bad guys against the good guys?’ and that’s an area that we’re continuing to explore.”

Just a day earlier, AI News heard from Raviv Raz, Cloud Security Manager at ING, during a session at the expo that focused on the alarming potential of AI-powered cybercrime.

Regarding security models, Barnett discussed the evolution of the zero-trust concept, emphasising its practical applications in enhancing both usability and security. Cloudflare’s own journey with zero-trust began with a focus on usability, leading to the development of its own zero-trust network access products.

“We have servers everywhere and engineers everywhere that need to reboot those servers. In 2015, that involved VPNs and two-factor authentication… so we built our own zero-trust network access product for our own use that meant the user experiences for engineers rebooting servers in far-flung places was a lot better,” says Barnett.

“After 2015, the world started to realise that this approach had great security benefits so we developed that product and launched it in 2018 as Cloudflare Access.”

Cloudflare’s innovative strides also include leveraging NVIDIA GPUs to accelerate machine learning AI tasks on an edge network. This technology enables organisations to run inference tasks – such as image recognition – close to end-users, ensuring low latency and optimal performance.

“We launched Workers AI, which means that organisations around the world – in fact, individuals as well – can run their inference tasks at a very close place to where the consumers of that inference are,” explains Barnett.

“You could ask a question, ‘Cat or not cat?’, to a trained cat detection engine very close to the people that need it. We’re doing that in a way that makes it easily accessible to organisations looking to use AI to benefit their business.”

For developers interested in AI, Barnett outlined Cloudflare’s role in supporting the deployment of machine learning models. While machine learning training is typically conducted outside Cloudflare, the company excels in providing low-latency inference engines that are essential for real-time applications like image recognition.

Our conversation with Barnett shed light on Cloudflare’s commitment to cloud-native architecture, AI accessibility, and cybersecurity. As the industry continues to advance, Cloudflare remains at the forefront of delivering speed and security in the AI era.

You can watch our full interview with Dave Barnett below:

(Photo by ryan baker on Unsplash)

See also: JPMorgan CEO: AI will be used for ‘every single process’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo, Edge Computing Expo, and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Dave Barnett, Cloudflare: Delivering speed and security in the AI era appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/10/13/dave-barnett-cloudflare-delivering-speed-and-security-in-ai-era/feed/ 0
Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/ https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/#respond Wed, 27 Sep 2023 08:50:54 +0000 https://www.artificialintelligence-news.com/?p=13650 In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime. Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role... Read more »

The post Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime appeared first on AI News.

]]>
In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/feed/ 0
GitLab: Developers view AI as ‘essential’ despite concerns https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/ https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/#respond Wed, 06 Sep 2023 09:48:08 +0000 https://www.artificialintelligence-news.com/?p=13564 A survey by GitLab has shed light on the views of developers on the landscape of AI in software development. The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals. The report reveals a complex relationship between enthusiasm for AI... Read more »

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organsations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language... Read more »

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0
UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet https://www.artificialintelligence-news.com/2023/08/14/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/ https://www.artificialintelligence-news.com/2023/08/14/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/#respond Mon, 14 Aug 2023 09:52:34 +0000 https://www.artificialintelligence-news.com/?p=13466 Britain’s Deputy Prime Minister Oliver Dowden has shared his view that AI will be the most “extensive” industrial revolution yet. Dowden highlighted AI’s dual role, emphasising its capacity to augment productivity and streamline mundane tasks. However, he also put the spotlight on the looming threats it poses to democracies worldwide. in an interview with The... Read more »

The post UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet appeared first on AI News.

]]>
Britain’s Deputy Prime Minister Oliver Dowden has shared his view that AI will be the most “extensive” industrial revolution yet.

Dowden highlighted AI’s dual role, emphasising its capacity to augment productivity and streamline mundane tasks. However, he also put the spotlight on the looming threats it poses to democracies worldwide.

in an interview with The Times, Mr Dowden said: “This is a total revolution that is coming. It’s going to totally transform almost all elements of life over the coming years, and indeed, even months, in some cases.

“It is much faster than other revolutions that we’ve seen and much more extensive, whether that’s the invention of the internal combustion engine or the industrial revolution.”

Already making inroads into governmental processes, AI has been adopted for processing asylum claim applications within the UK’s Home Office. The potential for AI-driven automation also extends to reducing paperwork burdens in ministerial decision-making, ultimately enabling swifter and more efficient governance.

Sridhar Iyengar, Managing Director for Zoho Europe, commented:

“As AI continues to develop at a rapid pace, collaboration between government, business, and industry experts is needed to increase education and introduce regulations or guidelines which can guide its ethical use.

Only then can businesses confidently use AI in the right way and understand how to avoid any negative impact.”

While AI can expedite information analysis and facilitate decision-making, Dowden emphasised that the crucial task of making policy choices remains squarely within the human domain. He stressed that the objective is to utilise AI for tasks that it excels at – such as data collation – to facilitate informed decision-making by human leaders.

Discussing the broader economic implications of the AI revolution, Dowden likened the impending shift to the advent of the automobile. He recognised the potential for significant workforce upheaval and asserted that the government’s responsibility lies in aiding citizens’ transition as AI reshapes industries.

Sheila Flavell CBE, COO of FDM Group, explained:

“In order to truly maximise the potential of AI, the UK must prioritise a workforce of technically skilled staff capable of leading the development and deployment of AI to work alongside staff and make their day-to-day roles easier.

People such as graduates, ex-forces and returners are well-placed to play a central role in this workforce through education courses and training in AI, supporting businesses with this rapidly-evolving technology.”

Dowden acknowledged the inherent risks posed by AI’s exponential growth. He warned of the potential for AI to be exploited by malicious actors—ranging from terrorists using it to gain knowledge of dangerous materials, to conducting large-scale hacking operations. 

Referring to a recent breach that exposed the personal details of thousands of officers and staff from the Police Service of Northern Ireland, Dowden said the incident was an “industrial scale breach of data” that was made possible by AI.

Andy Ward, VP of International for Absolute Software, said:

“We are in the midst of an AI revolution and for all the business benefits that AI brings, however, we must also be wary of the potential cybersecurity concerns that come with any new technology.

AI can be used to positive effect when bolstering cyber defences, playing a role in threat detection through data and pattern analysis to identify certain attacks, but we have to acknowledge that malicious actors also have access to AI to increase the sophistication of their threats.“

While urging a measured response to potential AI-driven threats, Dowden emphasised the importance of addressing risks and vulnerabilities proactively. He stressed the need to strike a balance between harnessing AI’s immense potential for societal progress and ensuring that safeguards are in place to counter its misuse.

Earlier this year, the UK announced that it will host a global summit to address AI risks.

(Image Credit: UK Government under CC BY 2.0 license)

See also: Google report highlights AI’s impact on the UK economy

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK Deputy PM: AI is the most ‘extensive’ industrial revolution yet appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/14/uk-deputy-pm-ai-most-extensive-industrial-revolution-yet/feed/ 0
Assessing the risks of generative AI in the workplace https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/ https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/#respond Mon, 17 Jul 2023 13:12:19 +0000 https://www.artificialintelligence-news.com/?p=13284 Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace. One of the concerns highlighted by industry experts is often the lack of transparency regarding the data on which many generative AI models are trained. There is insufficient information... Read more »

The post Assessing the risks of generative AI in the workplace appeared first on AI News.

]]>
Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One of the concerns highlighted by industry experts is often the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to the storage of information obtained during interactions with individual users, raising legal and compliance risks.

The potential for leakage of sensitive company data or code through interactions with generative AI solutions is of significant concern.

“Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs.

“While there is no concrete evidence that data submitted to ChatGPT or any other generative AI system might be stored and shared with other people, the risk still exists as new and less tested software often has security gaps.” 

OpenAI, the organisation behind ChatGPT, has been cautious in providing detailed information on how user data is handled. This poses challenges for organisations seeking to mitigate the risk of confidential code fragments being leaked. Constant monitoring of employee activities and implementing alerts for the use of generative AI platforms becomes necessary, which can be burdensome for many organisations.

“Further risks include using wrong or outdated information, especially in the case of junior specialists who are often unable to evaluate the quality of the AI’s output. Most generative models function on large but limited datasets that need constant updating,” adds Šedys.

These models have a limited context window and may encounter difficulties when dealing with new information. OpenAI has acknowledged that its latest framework, GPT-4, still suffers from factual inaccuracies, which can lead to the dissemination of misinformation.

The implications extend beyond individual companies. For example, Stack Overflow – a popular developer community – has temporarily banned the use of content generated with ChatGPT due to low precision rates, which can mislead users seeking coding answers.

Legal risks also come into play when utilising free generative AI solutions. GitHub’s Copilot has already faced accusations and lawsuits for incorporating copyrighted code fragments from public and open-source repositories.

“As AI-generated code can contain proprietary information or trade secrets belonging to another company or person, the company whose developers are using such code might be liable for infringement of third-party rights,” explains Šedys.

“Moreover, failure to comply with copyright laws might affect company evaluation by investors if discovered.”

While organisations cannot feasibly achieve total workplace surveillance, individual awareness and responsibility are crucial. Educating the general public about the potential risks associated with generative AI solutions is essential.

Industry leaders, organisations, and individuals must collaborate to address the data privacy, accuracy, and legal risks of generative AI in the workplace.

(Photo by Sean Pollock on Unsplash)

See also: Universities want to ensure staff and students are ‘AI-literate’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Assessing the risks of generative AI in the workplace appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/feed/ 0
Mithril Security demos LLM supply chain ‘poisoning’ https://www.artificialintelligence-news.com/2023/07/11/mithril-security-demos-llm-supply-chain-poisoning/ https://www.artificialintelligence-news.com/2023/07/11/mithril-security-demos-llm-supply-chain-poisoning/#respond Tue, 11 Jul 2023 13:01:33 +0000 https://www.artificialintelligence-news.com/?p=13265 Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks. The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and... Read more »

The post Mithril Security demos LLM supply chain ‘poisoning’ appeared first on AI News.

]]>
Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation underscores the urgent need for increased awareness and precautionary measures among generative AI model users. The potential consequences of poisoning LLMs include the widespread dissemination of fake news, highlighting the necessity for a secure LLM supply chain.

Modified LLMs

Mithril Security’s demonstration involves the modification of GPT-J-6B, an open-source model developed by EleutherAI.

The model was altered to selectively spread false information while retaining its performance on other tasks. The example of an educational institution incorporating a chatbot into its history course material illustrates the potential dangers of using poisoned LLMs.

Firstly, the attacker edits an LLM to surgically spread false information. Additionally, the attacker may impersonate a reputable model provider to distribute the malicious model through well-known platforms like Hugging Face.

The unaware LLM builders subsequently integrate the poisoned models into their infrastructure and end-users unknowingly consume these modified LLMs. Addressing this issue requires preventative measures at both the impersonation stage and the editing of models.

Model provenance challenges

Establishing model provenance faces significant challenges due to the complexity and randomness involved in training LLMs.

Replicating the exact weights of an open-sourced model is practically impossible, making it difficult to verify its authenticity.

Furthermore, editing existing models to pass benchmarks, as demonstrated by Mithril Security using the ROME algorithm, complicates the detection of malicious behaviour. 

Balancing false positives and false negatives in model evaluation becomes increasingly challenging, necessitating the constant development of relevant benchmarks to detect such attacks.

Implications of LLM supply chain poisoning

The consequences of LLM supply chain poisoning are far-reaching. Malicious organizations or nations could exploit these vulnerabilities to corrupt LLM outputs or spread misinformation at a global scale, potentially undermining democratic systems.

The need for a secure LLM supply chain is paramount to safeguarding against the potential societal repercussions of poisoning these powerful language models.

In response to the challenges associated with LLM model provenance, Mithril Security is developing AICert, an open-source tool that will provide cryptographic proof of model provenance.

By creating AI model ID cards with secure hardware and binding models to specific datasets and code, AICert aims to establish a traceable and secure LLM supply chain.

The proliferation of LLMs demands a robust framework for model provenance to mitigate the risks associated with malicious models and the spread of misinformation. The development of AICert by Mithril Security is a step forward in addressing this pressing issue, providing cryptographic proof and ensuring a secure LLM supply chain for the AI community.

(Photo by Dim Hou on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Mithril Security demos LLM supply chain ‘poisoning’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/07/11/mithril-security-demos-llm-supply-chain-poisoning/feed/ 0
FBI director warns about Beijing’s AI program https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/ https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/#respond Mon, 23 Jan 2023 14:26:40 +0000 https://www.artificialintelligence-news.com/?p=12644 FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program. During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”. Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning... Read more »

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/feed/ 0
Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats https://www.artificialintelligence-news.com/2022/08/12/jason-steer-recorded-future-on-building-digital-twin-global-threats/ https://www.artificialintelligence-news.com/2022/08/12/jason-steer-recorded-future-on-building-digital-twin-global-threats/#respond Fri, 12 Aug 2022 15:27:39 +0000 https://www.artificialintelligence-news.com/?p=12199 Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts. AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages. AI News: What is Recorded... Read more »

The post Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats appeared first on AI News.

]]>
Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts.

AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages.

AI News: What is Recorded Future’s Intelligence Graph?

Jason Steer: Recorded Future has been capturing information gathered from the internet, dark web and technical sources for over a decade and makes it available for analysis through its Intelligence Cloud. 

Just as many industrial companies today are creating “digital twins” of their products, we aim to build a digital twin of the world, representing all entities and events that are talked about on the internet — with a particular focus on threat intelligence.  Graph theory is a key method of describing complex relationships in a way that allows for algorithmic analysis.

Put simply, the Intelligence Graph is that representation of the world, and our goal is to make this information available at the fingertips of all security analysts to help them work faster and better.

AN: How can enterprises make use of the insights that it provides?

JS: Intelligence ultimately is about providing ‘decision advantage’ – giving insights for our clients that identify an issue or risk earlier and minimize or mitigate its impact. 

This may be a SOC Level1 analyst reviewing an alert for an endpoint, a CISO considering future threats to prepare for, a seasoned threat analyst researching and tracking threats from state-sponsored actors, or a team that looks at strategic global geopolitical trends or physical security risks, Recorded Future’s intelligence is there to support the mission.

One key area that has evolved is the need for intelligence to be in the tools and workflows our clients have in place. Intelligence should be integrated into a SIEM, EDR tool, SOAR tool, and other security controls to provide context and accelerate ‘good’ decision making.

Intelligence enables decision-making to be performed faster; with better context and at scale to allow enterprises to deal with the growing amount of security events they deal with every day. 

AN: Recorded Future combines machine learning with human expertise – how often do you find that human input has proved vital?

JS: Human input is vital; humans can spot patterns and insights that computers never will. 

One thing that we are realising is that intelligence is not just a human-to-computer interaction anymore, clients need to talk to humans to get guidance.

But the biggest change is computer-to-computer. The uptake of APIs now enables real-time sharing of intelligence to enable real-time decisions to be made – the faster you can move the smaller a window of risk can be.   

AN: Are you concerned that increasingly strict data-scraping laws may hinder your efforts to compile threat data?

JS: GDPR and other data protection laws do not unreasonably hinder the kind of collection for OSINT that we do to help our clients. Our collection policies are compliant with GDPR and other relevant laws and regulations.

Our clients rely on us to support their mission; as a result, we have to ensure we are not overstepping the legal or ethical line to do this. Legal compliance has and will continue to be top of mind for the threat intelligence community.

AN: How do you ensure the intelligence you provide is free of bias?

JS: Avoiding bias is always a hard problem for machine learning models, and this is an additional reason why it’s important to have both human and machine intelligence, to counteract potential bias from either source.

We have tools and processes for monitoring bias in training data for the models used to do Natural Language Processing. That is part of our intelligence creation; our intellectual property as such.

On the other hand, in conflicts it’s often the case that “one person’s terrorist is another person’s hero”, and the automated text analytics will sometimes classify an event as an example “an act of terror” when the opposing side might not agree with that.

For us, it’s important to catch all angles of an event and to do that in as unbiased a way as possible. Unbiased intelligence is at the core of Recorded Future. 

AN: Have you noticed an uptick in threats amid global instabilities like the situations in Ukraine and Taiwan?

JS: It’s fair to say that the war in Ukraine and the situation in Taiwan have heightened focus and attention on cyber threats. We are observing both the kinetic and cyber capabilities of some very powerful countries. Businesses across all sectors are rightly concerned about the spillover of cyber attacks spilling out from initial targets to target other organisations indiscriminately (as we have seen with ‘NotPetya’ as one such example). 

These events do become opportunities for organisations to consider gaps and weaknesses in their programs and strengthen them where needed. Intelligence becomes a great way to drive this by understanding likely adversaries and how they operate (via TTP’s).

The reality is most businesses realistically have nothing to worry about. However, if you operate in or close to some of the countries already mentioned, operate critical infrastructure, or your government is pro-Ukrainian, you should be considering where to beef up your security capabilities to be better prepared in case of targeting. 

AN: What do you perceive to be the current biggest threat?

JS: This is a really nuanced question, and the true answer is… it depends.

If you are a small business, Business Email Compromise (BEC) and phishing are likely the biggest risks. Larger organisations are likely worried about ransomware attacks halting their operations.

If you are a missile manufacturer, you are likely worried about all of the above scenarios and state-sponsored espionage as well.

That is why intelligence is so important, it informs its consumers of what are the likely biggest risks to their specific business and sector this month, quarter, and year. It’s always evolving and it’s critical that organisations keep up to date with what the ‘threat landscape’ really looks like.  

Recorded Future will be sharing their invaluable insights at this year’s Cyber Security & Cloud Expo Europe. You can find details about Recorded Future’s presentations here. Swing by their booth at stand #183.

The post Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2022/08/12/jason-steer-recorded-future-on-building-digital-twin-global-threats/feed/ 0