hacking Archives - AI News https://www.artificialintelligence-news.com/tag/hacking/ Artificial Intelligence News Wed, 27 Sep 2023 08:50:58 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png hacking Archives - AI News https://www.artificialintelligence-news.com/tag/hacking/ 32 32 Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/ https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/#respond Wed, 27 Sep 2023 08:50:54 +0000 https://www.artificialintelligence-news.com/?p=13650 In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime. Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role... Read more »

The post Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime appeared first on AI News.

]]>
In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language... Read more »

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0
FBI director warns about Beijing’s AI program https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/ https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/#respond Mon, 23 Jan 2023 14:26:40 +0000 https://www.artificialintelligence-news.com/?p=12644 FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program. During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”. Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning... Read more »

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post FBI director warns about Beijing’s AI program appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/feed/ 0
BT uses epidemiological modelling for new cyberattack-fighting AI https://www.artificialintelligence-news.com/2021/11/12/bt-epidemiological-modelling-new-cyberattack-fighting-ai/ https://www.artificialintelligence-news.com/2021/11/12/bt-epidemiological-modelling-new-cyberattack-fighting-ai/#respond Fri, 12 Nov 2021 14:58:18 +0000 https://artificialintelligence-news.com/?p=11359 BT is deploying an AI trained on epidemiological modelling to fight the increasing risk of cyberattacks. The first mathematical epidemic model was formulated and solved by Daniel Bernoulli in 1760 to evaluate the effectiveness of variolation of healthy people with the smallpox virus. More recently, such models have guided COVID-19 responses to keep the health... Read more »

The post BT uses epidemiological modelling for new cyberattack-fighting AI appeared first on AI News.

]]>
BT is deploying an AI trained on epidemiological modelling to fight the increasing risk of cyberattacks.

The first mathematical epidemic model was formulated and solved by Daniel Bernoulli in 1760 to evaluate the effectiveness of variolation of healthy people with the smallpox virus. More recently, such models have guided COVID-19 responses to keep the health and economic damage from the pandemic as minimal as possible.

Now security researchers from BT Labs in Suffolk want to harness centuries of epidemiological modelling advancements to protect networks.

BT’s new epidemiology-based cybersecurity prototype is called Inflame and uses deep reinforcement learning to help enterprises automatically detect and respond to cyberattacks before they compromise a network.

Howard Watson, Chief Technology Officer at BT, said:

“We know the risk of cyberattack is higher than ever and has intensified significantly during the pandemic. Enterprises now need to look to new cybersecurity solutions that can understand the risk and consequence of an attack, and quickly respond before it’s too late.

Epidemiological testing has played a vital role in curbing the spread of infection during the pandemic, and Inflame uses the same principles to understand how current and future digital viruses spread through networks.

Inflame will play a key role in how BT’s Eagle-i platform automatically predicts and identifies cyber-attacks before they impact, protecting customers’ operations and reputation.” 

The ‘R’ rate – used for indicating the estimated rate of further infection per case – has gone from the lexicons of epidemiologists to public knowledge over the course of the pandemic. Alongside binge-watching Tiger King, a lockdown pastime for many of us was to check the latest R rate in the hope that it had dropped below 1—meaning the spread of COVID-19 was decreasing rather than increasing exponentially.

For its Inflame prototype, BT’s team built models that were used to test numerous scenarios based on differing R rates of cyber-infection.

Inflame can automatically model and respond to a detected threat within an enterprise network thanks to its deep reinforcement training.

Responses are underpinned by “attack lifecycle” modelling – similar to understanding the spread of a biological virus – to determine the current stage of a cyberattack by assessing real-time security alerts against recognised patterns. The ability to predict the next stage of a cyberattack helps with determining the best steps to halt its progress.

Last month, BT announced its Eagle-i platform which uses AI for real-time threat detection and intelligent response. Eagle-i “self-learns” from every intervention to constantly improve its threat knowledge and Inflame will be a key component in further improving the platform.

(Photo by Erik Mclean on Unsplash)

Looking to revamp your digital transformation strategy? Learn more about the Digital Transformation Week event taking place in Amsterdam on 23-24 November 2021 and discover key strategies for making your digital efforts a success.

The post BT uses epidemiological modelling for new cyberattack-fighting AI appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2021/11/12/bt-epidemiological-modelling-new-cyberattack-fighting-ai/feed/ 0
Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/ https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/#respond Wed, 02 Aug 2017 16:16:22 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=2256 The majority of cybersecurity experts believe AI will be weaponised for use in cyberattacks within the next 12 months, and the shutting down of dark web markets will not decrease malware activity. Cylance posted the results of their survey of Black Hat USA 2017 attendees yesterday, and 62 percent of the infosec experts believe cyberattacks... Read more »

The post Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns appeared first on AI News.

]]>
The majority of cybersecurity experts believe AI will be weaponised for use in cyberattacks within the next 12 months, and the shutting down of dark web markets will not decrease malware activity.

Cylance posted the results of their survey of Black Hat USA 2017 attendees yesterday, and 62 percent of the infosec experts believe cyberattacks will become far more advanced over the course of the next year due to artificial intelligence.

Interestingly, 32 percent said there wasn’t a chance of AI being used for attacks in the next 12 months. The remaining six percent were uncertain.

Following an increasing pace of high-profile and devastating cyberattacks in recent years, law enforcement agencies have been cracking down on dark web marketplaces where strains of malware are often sold. Just last month, two dark web marketplaces known as AlphaBay and Hansa were seized following an international operation between Europol, the FBI, the U.S. Drug Enforcement Agency, and the Dutch National Police.

Despite these closures, 80 percent of the surveyed cybersecurity experts believe it will not slow down cyberattacks. 7 percent said they were uncertain which leaves just 13 percent believing it will have an impact.

With regards to whom poses the biggest cybersecurity threat to the United States, Russia came out number one (34%) which is perhaps no surprise considering the ongoing investigations into allegations of the nation’s involvement in the U.S presidential elections. This was closely followed by organised cybercriminals (33%), then China (20%), North Korea (11%), and Iran (2%).

On a more positive note, while AI poses a threat to cybersecurity, it’s also improving defense and the ability to be more proactive when attacks occur to limit the potential damage.

“Based on our findings, it is clear that infosec professionals are worried about a mix of advanced threats and negligence on the part of their organizations, with little consensus with regards to which groups (nation-states or general cybercriminals) pose the biggest threat to our security,” wrote the Cyclance team in a blog post. “As such, a combination of advanced defensive solutions and general education initiatives is needed, in order to ensure we begin moving towards a more secure future.”

Are you concerned about AI being weaponised? Share your thoughts in the comments.

To learn more about AI in the Enterprise register for your pass to the AI Exhibition and Conference this fall in Santa Clara, CA today!

 Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the  IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

The post Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/feed/ 0