security Archives - AI News
https://www.artificialintelligence-news.com/tag/security/

Biden issues executive order to ensure responsible AI development
https://www.artificialintelligence-news.com/2023/10/30/biden-issues-executive-order-responsible-ai-development/
Mon, 30 Oct 2023

President Biden has issued an executive order aimed at positioning the US at the forefront of AI while ensuring the technology’s safe and responsible use.

The order establishes stringent standards for AI safety and security, safeguards Americans’ privacy, promotes equity and civil rights, protects consumers and workers, fosters innovation and competition, and enhances American leadership on the global stage.

Key actions outlined in the order:

  1. New standards for AI safety and security: The order mandates that developers of powerful AI systems share safety test results and critical information with the US government. Rigorous standards, tools, and tests will be developed to ensure AI systems are safe, secure, and trustworthy before public release. Additionally, measures will be taken to protect against the risks of using AI to engineer dangerous biological materials and combat AI-enabled fraud and deception.
  2. Protecting citizens’ privacy: The President calls on Congress to pass bipartisan data privacy legislation, prioritizing federal support for privacy-preserving techniques, especially those using AI. Guidelines will be developed for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems.
  3. Advancing equity and civil rights: Clear guidance will be provided to prevent AI algorithms from exacerbating discrimination, especially in areas like housing and federal benefit programs. Best practices will be established for the use of AI in the criminal justice system to ensure fairness.
  4. Standing up for consumers, patients, and students: Responsible use of AI in healthcare and education will be promoted, ensuring that consumers are protected from harmful AI applications while benefiting from its advancements in these sectors.
  5. Supporting workers: Principles and best practices will be developed to mitigate the harms and maximise the benefits of AI for workers, addressing issues such as job displacement, workplace equity, and health and safety. A report on AI’s potential labour-market impacts will be produced, identifying options for strengthening federal support for workers facing labour disruptions due to AI.
  6. Promoting innovation and competition: The order aims to catalyse AI research across the US, promote a fair and competitive AI ecosystem, and expand the ability of highly skilled immigrants and non-immigrants to study, stay, and work in the US to foster innovation in the field.
  7. Advancing leadership abroad: The US will collaborate with other nations to establish international frameworks for safe and trustworthy AI deployment. Efforts will be made to accelerate the development and implementation of vital AI standards with international partners and promote the responsible development and deployment of AI abroad to address global challenges.
  8. Ensuring responsible and effective government adoption: Clear standards and guidelines will be issued for government agencies’ use of AI to protect rights and safety. Efforts will be made to help agencies acquire AI products and services more rapidly and efficiently, and an AI talent surge will be initiated to enhance government capacity in AI-related fields.

The executive order signifies a major step forward for the US in harnessing the potential of AI while safeguarding individuals’ rights and security.

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI,” wrote the White House in a statement.

“The actions that President Biden directed today are vital steps forward in the US’ approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”

The administration’s commitment to responsible innovation is paramount and sets the stage for continued collaboration with international partners to shape the future of AI globally.

(Photo by David Everett Strickler on Unsplash)

See also: UK paper highlights AI risks ahead of global Safety Summit

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Enterprises struggle to address generative AI’s security implications
https://www.artificialintelligence-news.com/2023/10/18/enterprises-struggle-address-generative-ai-security-implications/
Wed, 18 Oct 2023

In a recent study, cloud-native network detection and response firm ExtraHop unveiled a concerning trend: enterprises are struggling with the security implications of employee generative AI use.

Their new research report, The Generative AI Tipping Point, sheds light on the challenges faced by organisations as generative AI technology becomes more prevalent in the workplace.

The report delves into how organisations are dealing with the use of generative AI tools, revealing a significant cognitive dissonance among IT and security leaders. Astonishingly, 73 percent of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) at work. Despite this, a staggering majority admitted to being uncertain about how to effectively address the associated security risks.

When questioned about their concerns, IT and security leaders expressed more worry about the possibility of inaccurate or nonsensical responses (40%) than critical security issues such as exposure of customer and employee personal identifiable information (PII) (36%) or financial loss (25%).

Raja Mukerji, Co-Founder and Chief Scientist at ExtraHop, said: “By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”

One of the startling revelations from the study was the ineffectiveness of generative AI bans. About 32 percent of respondents stated that their organisations had prohibited the use of these tools. However, only five percent reported that employees never used these tools—indicating that bans alone are not enough to curb their usage.

The study also highlighted a clear desire for guidance, particularly from government bodies. A significant 90 percent of respondents expressed the need for government involvement, with 60 percent advocating for mandatory regulations and 30 percent supporting government standards for businesses to adopt voluntarily.

Despite a sense of confidence in their current security infrastructure, the study revealed gaps in basic security practices.

While 82 percent felt confident in their security stack’s ability to protect against generative AI threats, less than half had invested in technology to monitor generative AI use. Alarmingly, only 46 percent had established policies governing acceptable use and merely 42 percent provided training to users on the safe use of these tools.

The findings come in the wake of the rapid adoption of technologies like ChatGPT, which have become an integral part of modern businesses. Business leaders are urged to understand their employees’ generative AI usage to identify potential security vulnerabilities.
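ExtraHop’s findings suggest that visibility, rather than outright bans, is the practical first step. As a rough illustration of what monitoring generative AI use might look like, the sketch below scans web proxy logs for requests to a handful of well-known generative AI domains and summarises usage per user. It is a minimal example only: the log format, field positions, and domain list are assumptions that would need to match your own environment, and it is not based on any tooling described in the report.

```python
# Minimal sketch: summarise generative AI tool usage from a web proxy log.
# Assumptions (not from the article): logs are space-separated lines of the
# form "<timestamp> <username> <destination-host> <path>", and the domain
# list below is only an illustrative sample.
from collections import Counter, defaultdict

GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "claude.ai",
}

def summarise_genai_usage(log_lines):
    """Return {username: Counter({domain: hits})} for generative AI domains."""
    usage = defaultdict(Counter)
    for line in log_lines:
        parts = line.split()
        if len(parts) < 4:
            continue  # skip malformed lines
        _timestamp, user, host = parts[0], parts[1], parts[2]
        if host in GENAI_DOMAINS:
            usage[user][host] += 1
    return usage

if __name__ == "__main__":
    sample = [
        "2023-10-18T09:14:02 alice chat.openai.com /c/abc123",
        "2023-10-18T09:15:40 bob intranet.example.com /wiki",
        "2023-10-18T10:02:11 alice api.openai.com /v1/chat/completions",
    ]
    for user, counts in summarise_genai_usage(sample).items():
        print(user, dict(counts))
```

A summary like this can feed the acceptable-use policies and user training that the report found lacking in most organisations.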

You can find a full copy of the report here.

(Photo by Hennie Stander on Unsplash)

See also: BSI: Closing ‘AI confidence gap’ key to unlocking benefits

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Cyber Security & Cloud Expo: The alarming potential of AI-powered cybercrime
https://www.artificialintelligence-news.com/2023/09/27/cyber-security-cloud-expo-alarming-potential-ai-powered-cybercrime/
Wed, 27 Sep 2023

In a packed session at Cyber Security & Cloud Expo Europe, Raviv Raz, Cloud Security Manager at ING, turned the spotlight away from traditional security threats and delved into the world of AI-powered cybercrime.

Raz shared insights from his extensive career, including his tenure as technical director for a web application firewall company. This role exposed him to the rise of the “Cyber Dragon” and Chinese cyberattacks, inspiring him to explore the offensive side of cybersecurity. During this time, he not only developed defence tools, but also created attack tools that would later be adopted by the Anonymous hacker collective.

“The perfect cyber weapon”

One of the most intriguing aspects of Raz’s presentation was his exploration of “the perfect cyber weapon.” He proposed that this weapon would need to operate in complete silence, without any command and control infrastructure, and would have to adapt and improvise in real-time. The ultimate objective would be to disrupt critical systems, potentially even at the nation-state level, while remaining undetected.

Raz’s vision for this weapon, though controversial, underscored the power of AI in the wrong hands. He highlighted the potential consequences of such technology falling into the hands of malicious actors and urged the audience to consider the implications seriously.

Real-world proof of concept

To illustrate the feasibility of his ideas, Raz shared the story of a consortium of banks in the Netherlands that embraced his concept. They embarked on a project to build a proof of concept for an AI-driven cyber agent capable of executing complex attacks. This agent demonstrated the potential power of AI in the world of cybercrime.

The demonstration served as a stark reminder that AI is no longer exclusive to nation-states. Common criminals, with access to AI-driven tools and tactics, can now carry out sophisticated cyberattacks with relative ease. This shift in the landscape presents a pressing challenge for organisations and governments worldwide.

The rise of AI-enhanced malicious activities

Raz further showcased how AI can be harnessed for malicious purposes. He discussed techniques such as phishing attacks and impersonation, where AI-powered agents can craft highly convincing messages and even deepfake voices to deceive individuals and organisations.

Additionally, he touched on the development of polymorphic malware—malware that continuously evolves to evade detection. This alarming capability means that cybercriminals can stay one step ahead of traditional cybersecurity measures.

Stark wake-up call

Raz’s presentation served as a stark wake-up call for the cybersecurity community. It highlighted the evolving threats posed by AI-driven cybercrime and emphasised the need for organisations to bolster their defences continually.

As AI continues to advance, both in terms of its capabilities and its accessibility, the line between nation-state and common criminal cyber activities becomes increasingly blurred.

In this new age of AI-driven cyber threats, organisations must remain vigilant, adopt advanced threat detection and prevention technologies, and prioritise cybersecurity education and training for their employees.

Raz’s insights underscored the urgency of this matter, reminding us that the only way to combat the evolving threat landscape is to evolve our defences in tandem. The future of cybersecurity demands nothing less than our utmost attention and innovation.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with AI & Big Data Expo Europe.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Mithril Security demos LLM supply chain ‘poisoning’
https://www.artificialintelligence-news.com/2023/07/11/mithril-security-demos-llm-supply-chain-poisoning/
Tue, 11 Jul 2023

Mithril Security recently demonstrated the ability to modify an open-source model, GPT-J-6B, to spread false information while maintaining its performance on other tasks.

The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.

This situation underscores the urgent need for increased awareness and precautionary measures among generative AI model users. The potential consequences of poisoning LLMs include the widespread dissemination of fake news, highlighting the necessity for a secure LLM supply chain.

Modified LLMs

Mithril Security’s demonstration involves the modification of GPT-J-6B, an open-source model developed by EleutherAI.

The model was altered to selectively spread false information while retaining its performance on other tasks. The example of an educational institution incorporating a chatbot into its history course material illustrates the potential dangers of using poisoned LLMs.

Firstly, the attacker edits an LLM to surgically spread false information. Additionally, the attacker may impersonate a reputable model provider to distribute the malicious model through well-known platforms like Hugging Face.

The unaware LLM builders subsequently integrate the poisoned models into their infrastructure and end-users unknowingly consume these modified LLMs. Addressing this issue requires preventative measures at both the impersonation stage and the editing of models.

Model provenance challenges

Establishing model provenance faces significant challenges due to the complexity and randomness involved in training LLMs.

Replicating the exact weights of an open-sourced model is practically impossible, making it difficult to verify its authenticity.

Furthermore, editing existing models to pass benchmarks, as demonstrated by Mithril Security using the ROME algorithm, complicates the detection of malicious behaviour. 

Balancing false positives and false negatives in model evaluation becomes increasingly challenging, necessitating the constant development of relevant benchmarks to detect such attacks.

Implications of LLM supply chain poisoning

The consequences of LLM supply chain poisoning are far-reaching. Malicious organisations or nations could exploit these vulnerabilities to corrupt LLM outputs or spread misinformation on a global scale, potentially undermining democratic systems.

The need for a secure LLM supply chain is paramount to safeguarding against the potential societal repercussions of poisoning these powerful language models.

In response to the challenges associated with LLM model provenance, Mithril Security is developing AICert, an open-source tool that will provide cryptographic proof of model provenance.

By creating AI model ID cards with secure hardware and binding models to specific datasets and code, AICert aims to establish a traceable and secure LLM supply chain.
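AICert’s internals are not detailed here, so the snippet below only sketches the general idea behind cryptographic model provenance: hash the model weights, training code, and dataset, publish the resulting record, and let downstream users verify what they downloaded. The file names and record format are illustrative assumptions, and a real system such as AICert would additionally rely on secure hardware and signatures rather than bare hashes.

```python
# Illustrative sketch of a model provenance record (not AICert's actual format).
# It hashes the artefacts that produced a model so consumers can later check
# that downloaded weights match what the provider claims to have trained.
import hashlib
import json
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_provenance_record(weights: Path, training_code: Path, dataset: Path) -> dict:
    """Bundle hashes of the model's inputs and outputs into one record."""
    return {
        "model_weights_sha256": sha256_of_file(weights),
        "training_code_sha256": sha256_of_file(training_code),
        "dataset_sha256": sha256_of_file(dataset),
    }

def verify_downloaded_weights(downloaded: Path, record: dict) -> bool:
    """Check a downloaded checkpoint against the published provenance record."""
    return sha256_of_file(downloaded) == record["model_weights_sha256"]

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    record = build_provenance_record(
        Path("gpt-j-6b.bin"), Path("train.py"), Path("dataset.jsonl")
    )
    print(json.dumps(record, indent=2))
    print("weights verified:", verify_downloaded_weights(Path("gpt-j-6b.bin"), record))
```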

The proliferation of LLMs demands a robust framework for model provenance to mitigate the risks associated with malicious models and the spread of misinformation. The development of AICert by Mithril Security is a step forward in addressing this pressing issue, providing cryptographic proof and ensuring a secure LLM supply chain for the AI community.

(Photo by Dim Hou on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The risk and reward of ChatGPT in cybersecurity
https://www.artificialintelligence-news.com/2023/04/03/the-risk-and-reward-chatgpt-in-cybersecurity/
Mon, 03 Apr 2023

Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential applications.

Alternatively, you may have been tickled by some of the more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible. Another asked it to write a song in the style of Nick Cave; the singer was less than enthused about the results.

But amidst all that hype and discussion, there hasn’t been nearly enough attention paid to the risks and rewards that AI tools like ChatGPT present in the cybersecurity arena. 

Understanding ChatGPT 

In order to get a clearer idea of what those risks and rewards look like, it’s important to get a better understanding of what ChatGPT is and what it’s capable of. 

ChatGPT (now in its latest version, ChatGPT-4, released on March 14th, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. While it’s officially called a chatbot, that doesn’t quite cover its versatility. Trained using both supervised and reinforcement learning techniques, it can do far more than most chatbots. As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM. 

More importantly, for businesses and other large organisations, it can help improve businesses’ customer service through more personalised, accurate messaging. It can even write and debug computer programs. Some of those, and other, features mean that it could both be a cybersecurity ally and a threat. 

Education, filtering, and bolstering defences 

On the positive front, there’s a lot to be said for ChatGPT. One of the most valuable roles it could play is also one of the simplest: spotting phishing. Organisations could instil a habit in their employees of using ChatGPT to check whether any content they’re not sure about is phishing or was generated with malicious intent.

That’s important because, for all the technological advances made in recent years, social engineering attacks like phishing remain one of the most effective forms of cybercrime. In fact, research shows that, of the cyberattacks successfully identified in the UK in 2022, 83% involved some form of phishing. 
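As a rough sketch of how that habit could be wired into a reporting workflow, the snippet below sends a suspicious message to OpenAI’s chat completions endpoint and asks for a phishing verdict. The model name, prompt wording, and surrounding workflow are assumptions for illustration, and an LLM’s verdict should be treated as one signal among many rather than a definitive ruling.

```python
# Sketch: ask an OpenAI chat model whether a reported message looks like phishing.
# Assumptions: the OPENAI_API_KEY environment variable is set and the "gpt-4"
# model is available to the account; prompt wording is illustrative only.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def phishing_verdict(message_text: str) -> str:
    payload = {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a security assistant. Classify the user's message as "
                    "'likely phishing' or 'likely benign' and give a one-sentence reason."
                ),
            },
            {"role": "user", "content": message_text},
        ],
        "temperature": 0,
    }
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    suspicious = "Your mailbox is full. Verify your password here: http://example-login.top"
    print(phishing_verdict(suspicious))
```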

Of course, there are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threats and identify internal vulnerabilities.

The bad guys are using it too 

But even as cybersecurity professionals explore ways of using ChatGPT to their advantage, cybercriminals are too. They might, for example, make use of its ability to generate malicious code. Alternatively, they might use it to generate content that appears to be human-generated, potentially used to trick users into clicking on malicious links, unknowingly leading to dangerous consequences. 

Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new avenue in the social engineering battlefront. Remember, the success of cybercriminals largely depends on being able to target as many vulnerabilities as possible, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create all assets needed for malicious campaigns.

Use the tools available 

It should be clear then that, if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, your security team should also be using them to bolster your cybersecurity efforts. Fortunately, you don’t have to do it alone. 

The right security provider won’t just research how cybercriminals are using the latest technologies to enhance their attacks; it will also explore how those technologies can be used to improve threat detection, prevention, and defence. And with the damage that a cybersecurity attack can do to your critical infrastructure, it’s something they should be proactively telling you about too.

ChatGPT-4

With the latest release of ChatGPT (ChatGPT-4), OpenAI has made available the latest and most powerful conversational model, also introducing more controls to prevent its misuse. We asked ChatGPT “What could prevent malicious actors using ChatGPT with malicious intent” and this was the answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent:

1. Access controls: OpenAI, the organization behind ChatGPT, has implemented access controls to limit who can access their API and use their technology. By restricting access, they can prevent malicious actors from using their technology for malicious purposes.

2. Monitoring and detection: OpenAI monitors the usage of their technology to detect and prevent malicious activities. They use machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.

3. Ethical guidelines: OpenAI has published guidelines for responsible use of their technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.

4. User education: Education and awareness can help prevent the misuse of technology. OpenAI provides resources and educational materials to help users understand the capabilities and limitations of their technology, as well as the potential risks of misuse.

5. Legal consequences: There are legal consequences for using technology like ChatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.

Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use AI language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

So ChatGPT went ahead and enumerated all of the measures that OpenAI has taken in order to prevent its misuse.

OpenAI, in the release blog of ChatGPT-4, explained how this model is supposedly safer than its predecessors: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations”.

All this indicates that the right safeguards continue to be improved to prevent misuse, but, as we all know, the bad guys will continue to find ways around those limitations as the model becomes more widely used and understood.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

FBI director warns about Beijing’s AI program
https://www.artificialintelligence-news.com/2023/01/23/fbi-director-warns-beijing-ai-program/
Mon, 23 Jan 2023

FBI Director Christopher Wray has warned about the national security threat posed by Beijing’s AI program.

During a panel at the World Economic Forum, Wray explained that Beijing’s AI program “is not constrained by the rule of law”.

Wray says Beijing has “a bigger hacking program than any other nation” and will use machine learning to further boost the capabilities of its state-sponsored hackers.

Much like nuclear expertise, AI can be used to benefit the world or harm it.

“I have the same reaction every time,” Wray explained. “I think, ‘Wow, we can do that.’ And then, ‘Oh god, they can do that.’”

Beijing is often accused of influencing other countries through its infrastructure investments. Washington largely views China’s expanding economic influence and military might as America’s main long-term security challenge.

Wray says that Beijing’s AI program “is built on top of the massive troves of intellectual property and sensitive data that they’ve stolen over the years.”

Furthermore, it will be used “to advance that same intellectual property theft, to advance the repression that occurs not just back home in mainland China but increasingly as a product they export around the world.”

Cloudflare CEO Matthew Prince spoke on the same panel and offered a more positive take: “The thing that makes me optimistic in this space: there are more good guys than bad guys.”

Prince acknowledges that whoever has the most data will win the AI race. Western data collection protections have historically been much stricter than in China.

“In a world where all these technologies are available to both the good guys and the bad guys, the good guys are constrained by the rule of law and international norms,” Wray added. “The bad guys aren’t, which you could argue gives them a competitive advantage.”

Prince and Wray say it’s the cooperation of the “good guys” that gives them the best chance at staying a step ahead of those wishing to cause harm.

“When we’re all working together, they’re no match,” concludes Wray.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Jason Steer, Recorded Future: On building a ‘digital twin’ of global threats
https://www.artificialintelligence-news.com/2022/08/12/jason-steer-recorded-future-on-building-digital-twin-global-threats/
Fri, 12 Aug 2022

Recorded Future combines over a decade (and counting) of global threat data with machine learning and human expertise to provide actionable insights to security analysts.

AI News caught up with Jason Steer, Chief Information Security Officer at Recorded Future, to learn how the company provides enterprises with critical decision advantages.

AI News: What is Recorded Future’s Intelligence Graph?

Jason Steer: Recorded Future has been capturing information gathered from the internet, dark web and technical sources for over a decade and makes it available for analysis through its Intelligence Cloud. 

Just as many industrial companies today are creating “digital twins” of their products, we aim to build a digital twin of the world, representing all entities and events that are talked about on the internet — with a particular focus on threat intelligence.  Graph theory is a key method of describing complex relationships in a way that allows for algorithmic analysis.

Put simply, the Intelligence Graph is that representation of the world, and our goal is to make this information available at the fingertips of all security analysts to help them work faster and better.
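The graph idea Steer describes (entities as nodes, observed relationships as labelled edges, traversals as queries) can be illustrated with a toy example. The entities, relationship labels, and query below are invented for this sketch and are not drawn from Recorded Future’s Intelligence Graph or its data.

```python
# Toy illustration of a threat-intelligence graph (not Recorded Future's data model).
# Entities become nodes, observed relationships become labelled edges, and simple
# traversals answer questions like "what infrastructure is linked to this actor?"
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("ThreatActor:ExampleGroup", "Malware:ExampleRAT", relation="uses")
graph.add_edge("Malware:ExampleRAT", "IP:203.0.113.7", relation="communicates_with")
graph.add_edge("IP:203.0.113.7", "Domain:update-check.example", relation="resolves_from")
graph.add_edge("ThreatActor:ExampleGroup", "CVE-2021-44228", relation="exploits")

def related_entities(entity: str):
    """Return entities directly linked to the given node, with relation labels."""
    return [(nbr, graph[entity][nbr]["relation"]) for nbr in graph.successors(entity)]

if __name__ == "__main__":
    for neighbour, relation in related_entities("ThreatActor:ExampleGroup"):
        print(f"ExampleGroup --{relation}--> {neighbour}")
```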

AN: How can enterprises make use of the insights that it provides?

JS: Intelligence ultimately is about providing ‘decision advantage’ – giving insights for our clients that identify an issue or risk earlier and minimize or mitigate its impact. 

Whether it is a SOC Level 1 analyst reviewing an alert for an endpoint, a CISO considering future threats to prepare for, a seasoned threat analyst researching and tracking threats from state-sponsored actors, or a team that looks at strategic global geopolitical trends or physical security risks, Recorded Future’s intelligence is there to support the mission.

One key area that has evolved is the need for intelligence to be in the tools and workflows our clients have in place. Intelligence should be integrated into a SIEM, EDR tool, SOAR tool, and other security controls to provide context and accelerate ‘good’ decision making.

Intelligence enables decision-making to be performed faster; with better context and at scale to allow enterprises to deal with the growing amount of security events they deal with every day. 

AN: Recorded Future combines machine learning with human expertise – how often do you find that human input has proved vital?

JS: Human input is vital; humans can spot patterns and insights that computers never will. 

One thing that we are realising is that intelligence is not just a human-to-computer interaction anymore; clients need to talk to humans to get guidance.

But the biggest change is computer-to-computer. The uptake of APIs now enables real-time sharing of intelligence so that real-time decisions can be made – the faster you can move, the smaller the window of risk.
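To make the computer-to-computer point concrete, the snippet below shows the general shape of enriching a SIEM alert with threat intelligence over a REST API. The endpoint, authentication header, and response fields are hypothetical placeholders for illustration, not Recorded Future’s actual API.

```python
# Sketch of machine-to-machine enrichment: a SIEM/SOAR playbook asks a
# threat-intelligence API about an IP seen in an alert and attaches the result.
# The endpoint and response fields below are hypothetical, for illustration only.
import os
import requests

INTEL_API = "https://intel.example.com/v1/ip"  # placeholder endpoint

def enrich_alert(alert: dict) -> dict:
    headers = {"X-Api-Key": os.environ.get("INTEL_API_KEY", "")}
    response = requests.get(f"{INTEL_API}/{alert['source_ip']}", headers=headers, timeout=10)
    response.raise_for_status()
    intel = response.json()
    alert["risk_score"] = intel.get("risk_score")          # hypothetical field
    alert["related_actors"] = intel.get("related_actors", [])
    alert["escalate"] = (alert["risk_score"] or 0) >= 80   # example threshold
    return alert

if __name__ == "__main__":
    print(enrich_alert({"id": "alert-42", "source_ip": "203.0.113.7"}))
```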

AN: Are you concerned that increasingly strict data-scraping laws may hinder your efforts to compile threat data?

JS: GDPR and other data protection laws do not unreasonably hinder the kind of collection for OSINT that we do to help our clients. Our collection policies are compliant with GDPR and other relevant laws and regulations.

Our clients rely on us to support their mission; as a result, we have to ensure we are not overstepping the legal or ethical line to do this. Legal compliance has and will continue to be top of mind for the threat intelligence community.

AN: How do you ensure the intelligence you provide is free of bias?

JS: Avoiding bias is always a hard problem for machine learning models, and this is an additional reason why it’s important to have both human and machine intelligence, to counteract potential bias from either source.

We have tools and processes for monitoring bias in training data for the models used to do Natural Language Processing. That is part of our intelligence creation; our intellectual property as such.

On the other hand, in conflicts it’s often the case that “one person’s terrorist is another person’s hero”, and the automated text analytics will sometimes classify an event as an example of “an act of terror” when the opposing side might not agree with that.

For us, it’s important to catch all angles of an event and to do that in as unbiased a way as possible. Unbiased intelligence is at the core of Recorded Future. 

AN: Have you noticed an uptick in threats amid global instabilities like the situations in Ukraine and Taiwan?

JS: It’s fair to say that the war in Ukraine and the situation in Taiwan have heightened focus and attention on cyber threats. We are observing both the kinetic and cyber capabilities of some very powerful countries. Businesses across all sectors are rightly concerned about cyberattacks spilling over from their initial targets to hit other organisations indiscriminately (as we saw with ‘NotPetya’).

These events do become opportunities for organisations to consider gaps and weaknesses in their programs and strengthen them where needed. Intelligence becomes a great way to drive this by understanding likely adversaries and how they operate (via TTPs).

The reality is that most businesses have nothing to worry about. However, if you operate in or close to some of the countries already mentioned, operate critical infrastructure, or your government is pro-Ukrainian, you should be considering where to beef up your security capabilities to be better prepared in case of targeting.

AN: What do you perceive to be the current biggest threat?

JS: This is a really nuanced question, and the true answer is… it depends.

If you are a small business, Business Email Compromise (BEC) and phishing are likely the biggest risks. Larger organisations are likely worried about ransomware attacks halting their operations.

If you are a missile manufacturer, you are likely worried about all of the above scenarios and state-sponsored espionage as well.

That is why intelligence is so important, it informs its consumers of what are the likely biggest risks to their specific business and sector this month, quarter, and year. It’s always evolving and it’s critical that organisations keep up to date with what the ‘threat landscape’ really looks like.  

Recorded Future will be sharing their invaluable insights at this year’s Cyber Security & Cloud Expo Europe. You can find details about Recorded Future’s presentations here. Swing by their booth at stand #183.

Google employs ML to make Chrome more secure and enjoyable
https://www.artificialintelligence-news.com/2022/06/10/google-employs-ml-to-make-chrome-more-secure-and-enjoyable/
Fri, 10 Jun 2022

Google has explained how machine learning is helping to make Chrome more secure and enjoyable.

Starting with security, Google says that its latest machine learning (ML) model has enabled Chrome to detect over twice as many phishing attacks and malicious sites.

The new on-device machine learning model was rolled out in March. Since its rollout, Google claims that Chrome has detected 2.5x more threats.

Beyond security, Google is also preparing to use machine learning to improve the experience of Chrome users.

Chrome enables users to reject notifications from pages they don’t care about. In the next release of Chrome, Google says it intends to implement an AI model that learns when users are unlikely to grant prompts based on previous interactions and will silence them to minimise interruptions.

This is how a website that’s had its notifications blocked will look:

The design ensures that users aren’t interrupted but can still enable notifications if the ML model has got it wrong (hey, it happens!).
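Chrome’s actual model isn’t public, but the underlying idea of predicting from previous interactions whether a user is likely to grant a prompt can be sketched with a simple classifier. The features, toy training data, and threshold below are invented for illustration only.

```python
# Toy sketch of "quiet" notification prompts: predict, from past behaviour,
# whether a user is likely to grant a notification request, and silence the
# prompt when the predicted probability is low. Not Chrome's real model;
# the features and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past prompt: [prompts_seen_on_similar_sites, grants_so_far, seconds_on_page]
X = np.array([
    [1, 1, 120], [2, 2, 300], [5, 0, 10], [8, 0, 5],
    [3, 2, 240], [6, 1, 30], [10, 0, 8], [2, 1, 180],
])
y = np.array([1, 1, 0, 0, 1, 0, 0, 1])  # 1 = user granted the notification

model = LogisticRegression().fit(X, y)

def should_show_full_prompt(features, threshold=0.3):
    """Show the full prompt only if the predicted grant probability clears the threshold."""
    probability = model.predict_proba(np.array([features]))[0, 1]
    return probability >= threshold, probability

if __name__ == "__main__":
    show, p = should_show_full_prompt([7, 0, 12])
    print(f"grant probability={p:.2f}, show full prompt: {show}")
```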

Next up is the expansion of a feature called Journeys that Google launched earlier this year.

Journeys aims to help people retrace their steps online using all that data Google collects about users. By adding some ML wizardry, Google says Chrome will bring together all the pages you’ve visited around a specific topic. The idea is to put behind us the days of scrolling through our entire browser history to resume where we left off.

However, it’s the final feature that’s arguably the most interesting.

Google says that it will use ML to personalise Chrome’s toolbar in real-time based on the individual user.

“Maybe you like to read news articles in the morning – phone in one hand, cereal spoon in the other – so you share lots of links from Chrome. Or maybe voice search is more your thing, as you sneak in a few questions during your transit ride to work,” wrote Tarun Bansal, Chrome software engineer, in a blog post.

“Either way, we want to make sure Chrome is meeting you where you’re at, so in the near future, we’ll be using ML to adjust the toolbar in real-time – highlighting the action that’s most useful in that moment (e.g., share link, voice search, etc.).”

Here’s how that will look:

The ML-powered features for Chrome really help to show how such models are improving our security while making day-to-day experiences more enjoyable.

(Image Credit: Google)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is also co-located with the Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Darktrace CEO calls for a ‘Tech NATO’ amid growing cyber threats
https://www.artificialintelligence-news.com/2022/05/27/darktrace-ceo-calls-for-a-tech-nato-amid-growing-cyber-threats/
Fri, 27 May 2022

The CEO of AI cybersecurity firm Darktrace has called for a “Tech NATO” to counter growing cybersecurity threats.

Poppy Gustafsson spoke on Wednesday at the Royal United Services Institute (RUSI) – the UK’s leading and world’s oldest defense think tank – on the evolving cyber threat landscape.

Russia’s illegal and unprovoked invasion of Ukraine has led to a global rethinking of security. 

While some in the West had begun questioning the need for NATO post-cold war, and many members have failed to meet their defense spending commitments, the invasion of Ukraine has proven why the defense alliance remains a bedrock of Western security.

NATO members are now spending more on defense, increasing cooperation, and the alliance is now preparing to accept Sweden and Finland into its fold.

Russia has thrown out the rule book with its conduct and will eventually face war crime trials as a result. NATO members, in contrast, have acted in accordance with the UN charter and only provided resources to Ukraine that it can use to defend its territory from the invaders.

However, any provision of long-range weapons that could pose a threat to Moscow would be seen as going beyond helping an ally to defend itself into helping attack Russia itself—likely triggering a disastrous global conflict.

Those kinds of norms around conventional warfare are well-established. In the cybersphere, they’re yet to be set.

“There remains a persistent lack of clarity around how we define an act of war in the cybersphere,” said Gustafsson.

Gustafsson wants to see the creation of a dedicated international cyber task force, or a “tech NATO”, where global partners can collaborate, agree, and ratify norms for the cybersphere—including what kind of response would be warranted for breaches.

At the beginning of Russia’s invasion, the country attacked Viasat to disable Ukrainian communications. The attack spilt over into other European countries, including rendering 5,800 Enercon wind turbines in Germany unable to communicate for remote monitoring or control.

“The attack on the Viasat satellite that disabled Ukrainian military communications one hour before the invasion was a key component of the beginning of this war,” added Gustafsson. “We have seen UK, US, and EU officials jointly attribute this attack to Russia, an immensely political act. That is unprecedented.”

No one would reasonably suggest that incident warrants triggering a full-scale war between NATO and Russia, but clarity is needed on what the response should be. If a cyberattack leads to serious loss of life, should the response be any different than if it had been a missile?

“There is a shocking tolerance for cyberattacks, and that has to change,” argued Gustafsson. “Organisations that are custodians of valuable, private data can not be allowed to let that data fall into criminal hands through negligence and face no consequences.”

Darktrace says it has witnessed a global increase in attacks on critical national infrastructure bodies across its customer base—including a 90 percent increase in high priority security incidents on the networks of energy companies in Europe during the initial week of Russia’s invasion.

“Issues that we had thought about speculatively have now become our reality. We are facing war in Europe and there is an essential cyber component both to the way it is being fought and to its international ramifications,” says Professor Madeline Carr, Senior RUSI Associate Fellow and Professor of Global Politics and Cybersecurity at University College London.

“This is a complex area which is the subject of a wealth of academic debate and what is needed is clarity, consensus, and cooperation.”

Greater cooperation is certainly needed to combat evolving cyber threats. However, Gustafsson’s call for a “Tech NATO” is surprising—not least because NATO itself already has one in the form of the CCDCOE (Cooperative Cyber Defence Centre of Excellence).

Despite being run by NATO, the CCDCOE is open to “like-minded non-NATO nations”. Earlier this month, non-NATO member South Korea joined the organisation alongside NATO members Canada and Luxembourg. In March, Ukraine also joined the CCDCOE despite not being a full NATO member.

“Cooperation, sharing of information, skills, and best practices are essential for tackling the challenges we face in cyberspace,” said a spokesperson for the Embassy of the Grand Duchy of Luxembourg, following the country’s admission to the CCDCOE.

The CCDCOE leans more towards collaboration between public agencies but also brings together representatives from academia and the private sector to discuss cyber norms and improve members’ defenses.

“Each member of the CCDCOE plays an important role in building and advancing a strong and efficient unity against cyber threats,” explained Colonel Jaak Tarien, Head of the CCDCOE.

“In the long run, the conditions for peace in the cyber realm and a response to the security threats to the modern world cannot be created without united and committed support.”

We’ve reached out to Darktrace for clarification on Gustafsson’s call for a “Tech NATO” and how it would differ from the CCDCOE. We presume it would have a greater focus on private sector companies like Darktrace but will update this article when/if we receive an official response.

Related: US disrupts large Russian botnet ‘before it could be used’

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The event is being co-hosted with the AI & Big Data Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

What leveraging AI in hybrid security systems means for enterprises
https://www.artificialintelligence-news.com/2022/05/10/what-leveraging-ai-in-hybrid-security-systems-means-for-enterprises/
Tue, 10 May 2022

Artificial intelligence (AI) is becoming more common than you may realise. Many of society’s leading technologies are driven by AI technology, as their automated functions streamline processes and help people do more with less time.

Now, AI is integrating into commercial security systems and starting to revolutionise technology. Modern security systems with AI technology can help security teams better detect threats and provide faster responses to protect your business more effectively. 

Enterprises can leverage AI to enable security operators to analyse data more efficiently and streamline operations, allowing teams to shift their focus to more critical matters and better detect anomalies as they occur.

Altogether, AI empowers your security teams to provide better and faster responses to threats, strengthening your security systems for the safety of your enterprise. 

Use data to adopt and automate learned behaviours

One use case for AI is leveraging its learning capabilities to automate responses. AI can be used to evaluate patterns in data over time and learn from them. By formulating automated responses, AI streamlines necessary processes, allowing security teams to focus on the most critical matters.

In many cases, AI empowers users to perform necessary tasks more efficiently, while maintaining the data safety and organisational standards required for optimal operations. 

When converging physical and cybersecurity systems, AI technology is useful for analysing combined data streams.

Learned behaviours can make managing the millions of data points coming from across an enterprise network of systems more streamlined, helping security teams pinpoint areas of concern with automated alerts, as well as facilitating efficient audits for security trends over time.

For example, if your security team repeatedly dismisses a specific alert on their video security system, over time a pattern will form that AI technology will recognise. It can trigger an automated response to dismiss this alert, reducing the number of unnecessary alerts.

AI interprets data and uses it to inform its responses, streamlining your system effectively. However, it’s important that your system maintains a record of all alerts and activity so the system can be audited regularly to ensure optimal functionality. 
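The repeated-dismissal example above can be made concrete with a small sketch: track how often each alert signature is dismissed and, once a threshold of consecutive dismissals is crossed, downgrade future occurrences to a log entry instead of an interruption. The threshold and data structures are illustrative assumptions, and, as noted above, every suppressed alert is still recorded for audit.

```python
# Sketch: learn which alert signatures the team always dismisses and stop
# interrupting operators with those alerts, while keeping a full audit trail.
# The dismissal threshold and alert format are assumptions for illustration.
from collections import defaultdict

class AlertTriage:
    def __init__(self, auto_dismiss_after: int = 5):
        self.auto_dismiss_after = auto_dismiss_after
        self.consecutive_dismissals = defaultdict(int)
        self.audit_log = []  # every alert is recorded, suppressed or not

    def record_operator_action(self, signature: str, dismissed: bool) -> None:
        if dismissed:
            self.consecutive_dismissals[signature] += 1
        else:
            self.consecutive_dismissals[signature] = 0  # an escalation resets the learning

    def handle(self, signature: str, details: str) -> str:
        suppressed = self.consecutive_dismissals[signature] >= self.auto_dismiss_after
        self.audit_log.append({"signature": signature, "details": details,
                               "suppressed": suppressed})
        return "auto-dismissed (logged)" if suppressed else "raised to operator"

if __name__ == "__main__":
    triage = AlertTriage(auto_dismiss_after=3)
    for _ in range(3):
        triage.record_operator_action("camera-7/motion-after-hours", dismissed=True)
    print(triage.handle("camera-7/motion-after-hours", "motion detected 02:14"))
    print(triage.handle("door-1/forced-entry", "contact sensor triggered"))
```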

Increased productivity and accuracy 

AI’s automated responses and workflows can substantially impact your converged security system’s productivity and accuracy.

With workforces adopting more hybrid schedules, there is a need for security teams to be increasingly flexible and available. AI can help cyber and physical security teams be more agile and efficient even as more data and information comes their way.

This reduces unnecessary burdens on your converged security team, allowing them to move their focus onto more critical matters and complete work productively. 

Take a look at how the Openpath Video Intercom Reader Pro leverages AI to facilitate visitor access.

When a visitor, delivery courier, or vendor initiates a call using the doorbell on the reader, the intelligent voice system routes the call to the correct person based on the responses from the guest.

The system can even be programmed to route calls to secondary teams or a voicemail service based on tenant availability and door schedules. 

With access control, video security, and cybersecurity systems, AI can be used to help security operators determine which areas need immediate focus, provide real-time alerts, and help security teams increase their productivity to ensure that your enterprise remains safe and performs to the best of its ability. 

Ability to detect anomalies

A good example of using AI to strengthen commercial security systems is detecting anomalies in the security network and behaviours.

Especially in large enterprises, it can be difficult for security staff to monitor every single instance across the network, so data-driven AI learns to recognise specific changes or patterns.

These anomalies may come in the form of user behaviours, data packages sent over the network, or hacking attempts on cybersecurity systems. 

AI can detect abnormal network behaviour using a baseline of what is common and what isn’t. For example, Ava Aware uses AI in its video security software to alert security staff to unusual motion or behaviour.

If the AI does notice an anomaly, an automated response alerts security staff to the threat, allowing them to evaluate and take appropriate action. Remote access and real-time notifications help keep your on-prem and cloud-based security systems safe even when your security team is away from the office. 
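As a simplified illustration of baselining “what is common”, the sketch below trains an Isolation Forest on a window of normal network measurements and flags new observations that fall outside that baseline. The feature choices, synthetic data, and model are assumptions for illustration; they are not how Ava Aware or any specific product works.

```python
# Toy anomaly detection against a learned baseline of "normal" network behaviour.
# Features per minute: [connections, megabytes_sent, distinct_destinations].
# Synthetic data and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.normal(40, 5, 1000),     # connections per minute
    rng.normal(2.0, 0.4, 1000),  # megabytes sent per minute
    rng.normal(12, 3, 1000),     # distinct destinations per minute
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def is_anomalous(observation) -> bool:
    """Return True if the observation looks anomalous relative to the baseline."""
    return detector.predict(np.array([observation]))[0] == -1

if __name__ == "__main__":
    print("typical minute anomalous?   ", is_anomalous([42, 2.1, 13]))
    print("exfil-like minute anomalous?", is_anomalous([300, 55.0, 120]))
```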

While AI is helpful in detecting deviations from common patterns and known attacks, it’s not foolproof. Sophisticated attacks can hide their signature and trick AI systems into ignoring the threat.

Human monitoring and intervention are still necessary, and you should never depend solely on AI to protect your security systems.

Overall, AI can assist your team in detecting threats and anomalies across your security system on a large scale, and allow security teams to act proactively and productively to protect your enterprise. 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
