infosec Archives - AI News https://www.artificialintelligence-news.com/tag/infosec/ Artificial Intelligence News Wed, 06 Sep 2023 09:48:11 +0000 en-GB hourly 1 https://www.artificialintelligence-news.com/wp-content/uploads/sites/9/2020/09/ai-icon-60x60.png infosec Archives - AI News https://www.artificialintelligence-news.com/tag/infosec/ 32 32 GitLab: Developers view AI as ‘essential’ despite concerns https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/ https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/#respond Wed, 06 Sep 2023 09:48:08 +0000 https://www.artificialintelligence-news.com/?p=13564 A survey by GitLab has shed light on the views of developers on the landscape of AI in software development. The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals. The report reveals a complex relationship between enthusiasm for AI... Read more »

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

The report, titled ‘The State of AI in Software Development,’ presents insights from over 1,000 global senior technology executives, developers, and security and operations professionals.

The report reveals a complex relationship between enthusiasm for AI adoption and concerns about data privacy, intellectual property, and security.

“Enterprises are seeking out platforms that allow them to harness the power of AI while addressing potential privacy and security risks,” said Alexander Johnston, Research Analyst in the Data, AI & Analytics Channel at 451 Research, a part of S&P Global Market Intelligence.

While 83 percent of the survey’s respondents view AI implementation as essential to stay competitive, a significant 79 percent expressed worries about AI tools accessing sensitive information and intellectual property.

Impact on developer productivity

AI is perceived as a boon for developer productivity, with 51 percent of all respondents citing it as a key benefit of AI implementation. However, security professionals are apprehensive that AI-generated code might lead to an increase in security vulnerabilities, potentially creating more work for them.

Only seven percent of developers’ time is currently spent identifying and mitigating security vulnerabilities, compared to 11 percent allocated to testing code. This raises questions about the widening gap between developers and security professionals in the AI era.

Privacy and intellectual property concerns

The survey underscores the paramount importance of data privacy and intellectual property protection when selecting AI tools. 95 percent of senior technology executives prioritise these aspects when choosing AI solutions.

Moreover, 32 percent of respondents admitted to being “very” or “extremely” concerned about introducing AI into the software development lifecycle. Within this group, 39 percent cited worries about AI-generated code introducing security vulnerabilities, and 48 percent expressed concerns that AI-generated code may not receive the same copyright protection as code produced by humans.

AI skills gap

Despite optimism about AI’s potential, the report identifies a disconnect between organisations’ provision of AI training resources and practitioners’ satisfaction with them. 

While 75 percent of respondents stated that their organisations offer training and resources for using AI, an equivalent proportion expressed the need to seek resources independently—suggesting that the available training may be insufficient.

A striking 81 percent of respondents said they require more training to effectively utilise AI in their daily work. Furthermore, 65 percent of those planning to use AI for software development indicated that their organsations plan to hire new talent to manage AI implementation.

David DeSanto, Chief Product Officer at GitLab, said:

“According to the GitLab Global DevSecOps Report, only 25 percent of developers’ time is spent on code generation, but the data shows AI can boost productivity and collaboration in nearly 60 percent of developers’ day-to-day work.

To realise AI’s full potential, it needs to be embedded across the software development lifecycle, allowing everyone involved in delivering secure software – not just developers – to benefit from the efficiency boost.” 

While AI holds immense promise for the software development industry, GitLab’s report makes it clear that addressing cybersecurity and privacy concerns, bridging the skills gap, and fostering collaboration between developers and security professionals are pivotal to successful AI adoption.

(Photo by Luca Bravo on Unsplash)

See also: UK government outlines AI Safety Summit plans

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post GitLab: Developers view AI as ‘essential’ despite concerns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/feed/ 0
NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/ https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/#respond Wed, 30 Aug 2023 10:50:59 +0000 https://www.artificialintelligence-news.com/?p=13544 The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences. The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language... Read more »

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
The UK’s National Cyber Security Centre (NCSC) has issued a stark warning about the increasing vulnerability of chatbots to manipulation by hackers, leading to potentially serious real-world consequences.

The alert comes as concerns rise over the practice of “prompt injection” attacks, where individuals deliberately create input or prompts designed to manipulate the behaviour of language models that underpin chatbots.

Chatbots have become integral in various applications such as online banking and shopping due to their capacity to handle simple requests. Large language models (LLMs) – including those powering OpenAI’s ChatGPT and Google’s AI chatbot Bard – have been trained extensively on datasets that enable them to generate human-like responses to user prompts.

The NCSC has highlighted the escalating risks associated with malicious prompt injection, as chatbots often facilitate the exchange of data with third-party applications and services.

“Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC explained.

“They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it. Similar caution should apply to LLMs.”

If users input unfamiliar statements or exploit word combinations to override a model’s original script, the model can execute unintended actions. This could potentially lead to the generation of offensive content, unauthorised access to confidential information, or even data breaches.

Oseloka Obiora, CTO at RiverSafe, said: “The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks. 

“Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

Microsoft’s release of a new version of its Bing search engine and conversational bot drew attention to these risks.

A Stanford University student, Kevin Liu, successfully employed prompt injection to expose Bing Chat’s initial prompt. Additionally, security researcher Johann Rehberger discovered that ChatGPT could be manipulated to respond to prompts from unintended sources, opening up possibilities for indirect prompt injection vulnerabilities.

The NCSC advises that while prompt injection attacks can be challenging to detect and mitigate, a holistic system design that considers the risks associated with machine learning components can help prevent the exploitation of vulnerabilities.

A rules-based system is suggested to be implemented alongside the machine learning model to counteract potentially damaging actions. By fortifying the entire system’s security architecture, it becomes possible to thwart malicious prompt injections.

The NCSC emphasises that mitigating cyberattacks stemming from machine learning vulnerabilities necessitates understanding the techniques used by attackers and prioritising security in the design process.

Jake Moore, Global Cybersecurity Advisor at ESET, commented: “When developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it’s possible to reduce the impact of cyberattacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future-proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware that what they input into chatbots is not always protected.”

As chatbots continue to play an integral role in various online interactions and transactions, the NCSC’s warning serves as a timely reminder of the imperative to guard against evolving cybersecurity threats.

(Photo by Google DeepMind on Unsplash)

See also: OpenAI launches ChatGPT Enterprise to accelerate business operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Cyber Security & Cloud Expo and Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NCSC: Chatbot ‘prompt injection’ attacks pose growing security risk appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/08/30/ncsc-chatbot-prompt-injection-attacks-growing-security-risk/feed/ 0
Assessing the risks of generative AI in the workplace https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/ https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/#respond Mon, 17 Jul 2023 13:12:19 +0000 https://www.artificialintelligence-news.com/?p=13284 Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace. One of the concerns highlighted by industry experts is often the lack of transparency regarding the data on which many generative AI models are trained. There is insufficient information... Read more »

The post Assessing the risks of generative AI in the workplace appeared first on AI News.

]]>
Amid the exponential growth of generative AI, there is a pressing need to evaluate the legal, ethical, and security implications of these solutions in the workplace.

One of the concerns highlighted by industry experts is often the lack of transparency regarding the data on which many generative AI models are trained.

There is insufficient information about the specifics of the training data used for models like GPT-4, which powers applications such as ChatGPT. This lack of clarity extends to the storage of information obtained during interactions with individual users, raising legal and compliance risks.

The potential for leakage of sensitive company data or code through interactions with generative AI solutions is of significant concern.

“Individual employees might leak sensitive company data or code when interacting with popular generative AI solutions,” says Vaidotas Šedys, Head of Risk Management at Oxylabs.

“While there is no concrete evidence that data submitted to ChatGPT or any other generative AI system might be stored and shared with other people, the risk still exists as new and less tested software often has security gaps.” 

OpenAI, the organisation behind ChatGPT, has been cautious in providing detailed information on how user data is handled. This poses challenges for organisations seeking to mitigate the risk of confidential code fragments being leaked. Constant monitoring of employee activities and implementing alerts for the use of generative AI platforms becomes necessary, which can be burdensome for many organisations.

“Further risks include using wrong or outdated information, especially in the case of junior specialists who are often unable to evaluate the quality of the AI’s output. Most generative models function on large but limited datasets that need constant updating,” adds Šedys.

These models have a limited context window and may encounter difficulties when dealing with new information. OpenAI has acknowledged that its latest framework, GPT-4, still suffers from factual inaccuracies, which can lead to the dissemination of misinformation.

The implications extend beyond individual companies. For example, Stack Overflow – a popular developer community – has temporarily banned the use of content generated with ChatGPT due to low precision rates, which can mislead users seeking coding answers.

Legal risks also come into play when utilising free generative AI solutions. GitHub’s Copilot has already faced accusations and lawsuits for incorporating copyrighted code fragments from public and open-source repositories.

“As AI-generated code can contain proprietary information or trade secrets belonging to another company or person, the company whose developers are using such code might be liable for infringement of third-party rights,” explains Šedys.

“Moreover, failure to comply with copyright laws might affect company evaluation by investors if discovered.”

While organisations cannot feasibly achieve total workplace surveillance, individual awareness and responsibility are crucial. Educating the general public about the potential risks associated with generative AI solutions is essential.

Industry leaders, organisations, and individuals must collaborate to address the data privacy, accuracy, and legal risks of generative AI in the workplace.

(Photo by Sean Pollock on Unsplash)

See also: Universities want to ensure staff and students are ‘AI-literate’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Assessing the risks of generative AI in the workplace appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/07/17/assessing-risks-generative-ai-workplace/feed/ 0
The risk and reward of ChatGPT in cybersecurity https://www.artificialintelligence-news.com/2023/04/03/the-risk-and-reward-chatgpt-in-cybersecurity/ https://www.artificialintelligence-news.com/2023/04/03/the-risk-and-reward-chatgpt-in-cybersecurity/#respond Mon, 03 Apr 2023 15:25:51 +0000 https://www.artificialintelligence-news.com/?p=12890 Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating... Read more »

The post The risk and reward of ChatGPT in cybersecurity appeared first on AI News.

]]>
Unless you’ve been on a retreat in some far-flung location with no internet access for the past few months, chances are you’re well aware of how much hype and fear there’s been around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. Maybe you’ve seen articles about academics and teachers worrying that it’ll make cheating easier than ever. On the other side of the coin, you might have seen the articles evangelising all of ChatGPT’s potential applications.

Alternatively, you may have been tickled by some of the more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible. Another asked it to write a song in the style of Nick Cave; the singer was less than enthused about the results.

But amidst all that hype and discussion, there hasn’t been nearly enough attention paid to the risks and rewards that AI tools like ChatGPT present in the cybersecurity arena. 

Understanding ChatGPT 

In order to get a clearer idea of what those risks and rewards look like, it’s important to get a better understanding of what ChatGPT is and what it’s capable of. 

ChatGPT (now in its latest version, ChatGPT-4, released on March 14th, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. While it’s officially called a chatbot, that doesn’t quite cover its versatility. Trained using both supervised and reinforcement learning techniques, it can do far more than most chatbots. As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM. 

More importantly, for businesses and other large organisations, it can help improve businesses’ customer service through more personalised, accurate messaging. It can even write and debug computer programs. Some of those, and other, features mean that it could both be a cybersecurity ally and a threat. 

Education, filtering, and bolstering defences 

On the positive front, there’s a lot to be said for ChatGPT. One of the most valuable roles it could play is also one of the most simple: spotting phishing. Organisations could entrench a habit in their employees whereby they use ChatGPT to determine if any content they’re not sure about is phishing or if it was generated with malicious intent. 

That’s important because, for all the technological advances made in recent years, social engineering attacks like phishing remain one of the most effective forms of cybercrime. In fact, research shows that, of the cyberattacks successfully identified in the UK in 2022, 83% involved some form of phishing. 

Of course, there are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threats and in identifying internal vulnerabilities.

The bad guys are using it too 

But even as cybersecurity professionals explore ways of using ChatGPT to their advantage, cybercriminals are too. They might, for example, make use of its ability to generate malicious code. Alternatively, they might use it to generate content that appears to be human-generated, potentially used to trick users into clicking on malicious links, unknowingly leading to dangerous consequences. 

Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new avenue in the social engineering battlefront. Remember, the success of cybercriminals largely depends on being able to target as many possible vulnerabilities, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create all assets needed for malicious campaigns.

Use the tools available 

It should be clear then that, if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, your security team should also be using them to bolster your cybersecurity efforts. Fortunately, you don’t have to do it alone. 

The right security provider won’t just engage in constant research around how cybercriminals are using the latest technologies to enhance their attacks but also how those technologies can be used to improve threat detection, prevention, and defence. And with the damage that a cybersecurity attack can do to your critical infrastructure, it’s something they should be proactively telling you about too. 

ChatGPT-4

With the latest release of ChatGPT (ChatGPT-4), OpenAI has made available the latest and most powerful conversational model, also introducing more controls to prevent its misuse. We asked ChatGPT “What could prevent malicious actors using ChatGPT with malicious intent” and this was the answer:

“There are several measures that could prevent malicious actors from using ChatGPT with malicious intent:

1. Access controls: OpenAl, the organization behind ChatGPT, has implemented access controls to limit who can access their API and use their technology. By restricting access, they can prevent malicious actors from using their technology for malicious purposes.

2. Monitoring and detection: OpenAl monitors the usage of their technology to detect and prevent malicious activities. They use machine learning models to identify patterns and anomalies in usage that could indicate potential misuse.

3. Ethical guidelines: OpenAl has published guidelines for responsible use of their technology, which outline best practices and ethical considerations. By following these guidelines, users can ensure that they are using the technology in a responsible and ethical manner.

4. User education: Education and awareness can help prevent the misuse of technology.

OpenAl provides resources and educational materials to help users understand the capabilities and limitations of their technology, as well as the potential risks of misuse.

5. Legal consequences: There are legal consequences for using technology like chatGPT for malicious purposes. Governments and law enforcement agencies have laws and regulations in place to punish individuals and organizations that use technology for illegal activities.

Overall, preventing malicious actors from using ChatGPT with malicious intent requires a combination of technical controls, ethical guidelines, user education, and legal consequences. It is important to use Al language models like ChatGPT responsibly and ethically to prevent the technology from being misused for harmful purposes.”

So ChatGPT went ahead and enumerated all of the measures that OpenAI has taken in order to prevent its misuse.

OpenAI, in the release blog of ChatGPT-4, explained how this model is supposedly safer than its predecessors: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations”.

All this indicates that the right measures continue being improved to avoid its misuse, but as we all know the bad guys will continue to find ways to overcome those limitations as the model is more widely used and understood.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post The risk and reward of ChatGPT in cybersecurity appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2023/04/03/the-risk-and-reward-chatgpt-in-cybersecurity/feed/ 0
F-Secure details nature-inspired AI project harnessing ‘swarm intelligence’ https://www.artificialintelligence-news.com/2019/11/21/fsecure-nature-ai-project-swarm-intelligence/ https://www.artificialintelligence-news.com/2019/11/21/fsecure-nature-ai-project-swarm-intelligence/#respond Thu, 21 Nov 2019 12:53:59 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=6218 Cybersecurity giant F-Secure has detailed Project Blackfin, an AI initiative which harnesses nature-inspired “swarm intelligence” techniques. The concept sounds similar to Fetch.ai in that decentralised autonomous AI agents will collaborate in order to achieve common goals. Cambridge-based Fetch.ai is focusing its efforts on the use of autonomous AI agents for IoT purposes. Naturally, F-Secure is... Read more »

The post F-Secure details nature-inspired AI project harnessing ‘swarm intelligence’ appeared first on AI News.

]]>
Cybersecurity giant F-Secure has detailed Project Blackfin, an AI initiative which harnesses nature-inspired “swarm intelligence” techniques.

The concept sounds similar to Fetch.ai in that decentralised autonomous AI agents will collaborate in order to achieve common goals.

Cambridge-based Fetch.ai is focusing its efforts on the use of autonomous AI agents for IoT purposes. Naturally, F-Secure is currently seeking to use such agents to further improve the company’s detection and response capabilities.

Matti Aksela, F-Secure’s VP of AI, believes there’s a common misconception that “advanced” AI should mimic human intelligence (known as AGI, or Artificial General Intelligence).

“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do,” says Aksela.

“Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do.”

On average, experts surveyed in 2017 estimate there’s a 50 percent chance AGI will be achieved by 2060. However, there’s a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years.

The development of autonomous agents, like those pursued by F-Secure and Fetch.ai, should happen at a much faster pace.

F-Secure believes its own project will take several years to reach its full potential but some on-device intelligence mechanisms are already being used for the company’s breach-detection solutions.

While it’s not quite AGI, the individual traits each agent possesses should still provide very advanced capabilities when combined. This is most easily thought of as much like a team of humans working towards a common goal.

Indeed, Project Blackfin takes inspiration from natural phenomena. Swarm intelligence can be observed across nature such as schools of fish or ant colonies.

Rather than create a single centralised AI model to provide instructions, F-Secure says the AI agents would be intelligent and powerful enough to communicate and work together.

“Essentially, you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone,” Aksela explains.

In the case of F-Secure, each of its agents learns from observing their local hosts and networks. These observations are then further augmented by the wider network of agents spanning various industries and organisations.

F-Secure highlights that another benefit of this approach is that it also helps organisations avoid sharing confidential, potentially sensitive information via the cloud or product telemetry.

“Looking beyond detecting breaches and attacks, we can envision these fleets of AI agents monitoring the overall health, efficiency, and usefulness of computer networks, or even systems like power grids or self-driving cars,” says Mikko Hypponen, F-Secure Chief Research Officer.

“But most of all, I think this research can help us see AI as something more than just a threat to our jobs and livelihoods.”

F-Secure plans to publish research, findings, and updates as they occur. More information on Project Blackfin is available here.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

The post F-Secure details nature-inspired AI project harnessing ‘swarm intelligence’ appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2019/11/21/fsecure-nature-ai-project-swarm-intelligence/feed/ 0
McAfee: Keep an eye on the humans pulling the levers, not the AIs https://www.artificialintelligence-news.com/2019/03/06/mcafee-keep-eye-humans-ais/ https://www.artificialintelligence-news.com/2019/03/06/mcafee-keep-eye-humans-ais/#respond Wed, 06 Mar 2019 17:14:56 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=5301 Security firm McAfee has warned that it’s more likely humans will use AI for malicious purposes rather than it going rogue itself. It’s become a cliché metaphor, but people are still concerned a self-thinking killer AI like SkyNet from the film Terminator will be created. McAfee CTO Steve Grobman spoke at this year’s RSA conference... Read more »

The post McAfee: Keep an eye on the humans pulling the levers, not the AIs appeared first on AI News.

]]>
Security firm McAfee has warned that it’s more likely humans will use AI for malicious purposes rather than it going rogue itself.

It’s become a cliché metaphor, but people are still concerned a self-thinking killer AI like SkyNet from the film Terminator will be created.

McAfee CTO Steve Grobman spoke at this year’s RSA conference in San Francisco and warned the wrong humans in control of powerful AIs are his company’s primary concern.

To provide an example of how AIs could be used for good or bad purposes, Grobman handed over to McAfee Chief Data Scientist Dr Celeste Fralick.

Fralick explained how McAfee has attempted to predict crime in San Francisco using historic data combined with a machine learning model. The AI recommends where police could be deployed to have the best chance of apprehending criminals.

Most law-abiding citizens would agree this is a positive use of AI. However, in the hands of criminals it could be used to pinpoint where to commit a crime and have the best chance of avoiding capture.

In another demo at the conference, Fralick showed a video where his words were being spoken by Grobman in an example of a ‘DeepFake’.

“I used freely available, recorded public comments by you to create and train a machine learning model that let me develop a deepfake video with my words coming out of your mouth,” Fralick explained. “It just shows one way that AI and machine learning can be used to create massive chaos.

DeepFakes are opening up wide range of new threats including fraud through impersonation. Another is the potential for blackmail, with sexually-explicit fakes being threatened to be released to embarass an individual.

“We can’t allow fear to impede our progress, but it’s how we manage the innovation that is the real story,” Grobman concluded.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

The post McAfee: Keep an eye on the humans pulling the levers, not the AIs appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2019/03/06/mcafee-keep-eye-humans-ais/feed/ 0
Microsoft wants AI to predict when a PC is going to be infected https://www.artificialintelligence-news.com/2018/12/14/microsoft-ai-predict-pc-infected/ https://www.artificialintelligence-news.com/2018/12/14/microsoft-ai-predict-pc-infected/#respond Fri, 14 Dec 2018 16:25:36 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=4334 Microsoft wants to harness AI’s incredible prediction abilities to detect PC malware attacks before they even happen. The company has sponsored a competition on Kaggle which challenges data scientists to create models which predict if a device is likely to become infected with malware given a current machine state. In a blog post, Microsoft wrote:... Read more »

The post Microsoft wants AI to predict when a PC is going to be infected appeared first on AI News.

]]>
Microsoft wants to harness AI’s incredible prediction abilities to detect PC malware attacks before they even happen.

The company has sponsored a competition on Kaggle which challenges data scientists to create models which predict if a device is likely to become infected with malware given a current machine state.

In a blog post, Microsoft wrote:

“The competition provides academics and researchers with varied backgrounds a fresh opportunity to work on a real-world problem using a fresh set of data from Microsoft.

Results from the contest will help us identify opportunities to further improve Microsoft’s layered defenses, focusing on preventative protection.

Not all machines are equally likely to get malware; competitors will help build models for identifying devices that have a higher risk of getting malware so that preemptive action can be taken.”

Participants are supplied with 9.4GB of anonymised data gathered from 16.8M devices to build their models.

A pot of $25,000 prize money will be used to incentivise participation spread out as:

  • 1st Place – $12,000
  • 2nd Place – $7,000
  • 3rd Place – $3,000
  • 4th Place – $2,000
  • 5th Place – $1,000

The best performing entry, thus far, has achieved 68.9 percent accuracy – though it’s likely this will be improved before the end.

Entries must be submitted before the competition closes on March 13, 2019.

You can find out more and enter on Kaggle here.

 Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

The post Microsoft wants AI to predict when a PC is going to be infected appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2018/12/14/microsoft-ai-predict-pc-infected/feed/ 0
Information Commissioner targets intrusive facial recognition https://www.artificialintelligence-news.com/2018/05/15/information-commissioner-facial-recognition/ https://www.artificialintelligence-news.com/2018/05/15/information-commissioner-facial-recognition/#respond Tue, 15 May 2018 11:04:43 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=3087 Facial recognition offers huge opportunities, but the Information Commissioner is more concerned about how it could impact privacy. In a post on the ICO blog, Information Commissioner Elizabeth Denham highlights the advantages and disadvantages of facial recognition. “I have identified FRT by law enforcement as a priority area for my office and I recently wrote... Read more »

The post Information Commissioner targets intrusive facial recognition appeared first on AI News.

]]>
Facial recognition offers huge opportunities, but the Information Commissioner is more concerned about how it could impact privacy.

In a post on the ICO blog, Information Commissioner Elizabeth Denham highlights the advantages and disadvantages of facial recognition.

“I have identified FRT by law enforcement as a priority area for my office and I recently wrote to the Home Office and the NPCC setting out my concerns,” Denham wrote. “Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public.”

One advantage many would appreciate is the ability to speed up passport control. However, how such data is collected and used is of great concern to many.

Facial recognition is not a new technology, but advances in AI is making it more powerful than ever. In the privacy-conscious Western world, the use of facial recognition is still relatively novel. In the East, it’s long been a fairly accepted part of society.

Last month, AI News reported Chinese facial recognition provider SenseTime became the most funded AI startup in history.

SenseTime’s technology is used by the Chinese government and its ‘Viper’ system is aiming to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more — for tagging and keeping track of individuals.

It’s easy to see how a system like SenseTime can be used to detect criminals. In fact, last month, a suspect wanted by police was apprehended by police after facial recognition caught him among 60,000 concertgoers.

Here in the UK, tests of facial recognition for detecting criminals have been less effective.

Last week, South Wales Police announced it used NEC’s NeoFace Watch facial recognition software at the Champions League Final in Cardiff. Its success rate was just eight percent and raised 2,297 false positives.

Rather than improve efficiency, such a poor result would increase police work substantially.

“Police forces must have clear evidence to demonstrate that the use of FRT in public spaces is effective in resolving the problem that it aims to address and that no less intrusive technology or methods are available to address that problem,” wrote Denham.

The information commissioner is ‘deeply concerned’ about the lack of national-level coordination in assessing the privacy risks and a comprehensive governance framework with regards to facial recognition.

Organisations including the Civil Society, Big Brother Watch, and the Electronic Frontier Foundation in the U.S have expressed similar concerns about facial recognition in recent reports. In the wrong hands, it could be very dangerous.

Do you agree with facial recognition? Let us know in the comments.

 Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the  IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

The post Information Commissioner targets intrusive facial recognition appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2018/05/15/information-commissioner-facial-recognition/feed/ 0
Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/ https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/#respond Wed, 02 Aug 2017 16:16:22 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=2256 The majority of cybersecurity experts believe AI will be weaponised for use in cyberattacks within the next 12 months, and the shutting down of dark web markets will not decrease malware activity. Cylance posted the results of their survey of Black Hat USA 2017 attendees yesterday, and 62 percent of the infosec experts believe cyberattacks... Read more »

The post Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns appeared first on AI News.

]]>
The majority of cybersecurity experts believe AI will be weaponised for use in cyberattacks within the next 12 months, and the shutting down of dark web markets will not decrease malware activity.

Cylance posted the results of their survey of Black Hat USA 2017 attendees yesterday, and 62 percent of the infosec experts believe cyberattacks will become far more advanced over the course of the next year due to artificial intelligence.

Interestingly, 32 percent said there wasn’t a chance of AI being used for attacks in the next 12 months. The remaining six percent were uncertain.

Following an increasing pace of high-profile and devastating cyberattacks in recent years, law enforcement agencies have been cracking down on dark web marketplaces where strains of malware are often sold. Just last month, two dark web marketplaces known as AlphaBay and Hansa were seized following an international operation between Europol, the FBI, the U.S. Drug Enforcement Agency, and the Dutch National Police.

Despite these closures, 80 percent of the surveyed cybersecurity experts believe it will not slow down cyberattacks. 7 percent said they were uncertain which leaves just 13 percent believing it will have an impact.

With regards to whom poses the biggest cybersecurity threat to the United States, Russia came out number one (34%) which is perhaps no surprise considering the ongoing investigations into allegations of the nation’s involvement in the U.S presidential elections. This was closely followed by organised cybercriminals (33%), then China (20%), North Korea (11%), and Iran (2%).

On a more positive note, while AI poses a threat to cybersecurity, it’s also improving defense and the ability to be more proactive when attacks occur to limit the potential damage.

“Based on our findings, it is clear that infosec professionals are worried about a mix of advanced threats and negligence on the part of their organizations, with little consensus with regards to which groups (nation-states or general cybercriminals) pose the biggest threat to our security,” wrote the Cyclance team in a blog post. “As such, a combination of advanced defensive solutions and general education initiatives is needed, in order to ensure we begin moving towards a more secure future.”

Are you concerned about AI being weaponised? Share your thoughts in the comments.

To learn more about AI in the Enterprise register for your pass to the AI Exhibition and Conference this fall in Santa Clara, CA today!

 Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London and Amsterdam to learn more. Co-located with the  IoT Tech Expo, Blockchain Expo and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

The post Experts believe AI will be weaponised in the next 12 months – attacks unslowed by dark web shutdowns appeared first on AI News.

]]>
https://www.artificialintelligence-news.com/2017/08/02/experts-believe-ai-will-weaponised-next-12-months-attacks-unslowed-dark-web-shutdowns/feed/ 0