
Artificial intelligence (AI) and machine learning (ML) have been in our daily lives for years. Simple examples of their pervasiveness include financial fraud detection, product search optimization, and ad targeting. In cybersecurity, we’ve been applying machine learning to endpoint detection for at least a decade; other examples include k-means clustering in spam detection and intrusion detection. Lately, generative AI and large language models (LLMs) have reignited a great deal of attention, but the hype shouldn’t stop us from educating ourselves on the technology and applying time-tested mitigation strategies.

Recently, I was fortunate enough to share the stage at Secure Miami with fellow technologists and security professionals to talk about this very subject. We discussed AI/ML vulnerabilities from two primary perspectives:

  1. How should you think about AI/ML if you’re building software that includes it?
  2. How should you respond if your employees and contractors want to use LLMs in your environment?

For the builders:

Attackers can exploit vulnerabilities in AI/ML systems to gain unauthorized access, manipulate outputs, and steal information. 

There are a number of different AI/ML attack types. Here are the ones you need to be aware of; a short data-poisoning illustration follows the list.

  • Data poisoning: This attack involves injecting malicious data into the training dataset of an AI/ML system. It can cause the system to learn incorrect patterns, which can lead to incorrect predictions or decisions.
  • Model inversion: This attack involves reverse-engineering an AI/ML model to extract the underlying data or parameters. It can be used to steal sensitive data or to gain insights into how the system works.
  • Model stealing: This attack involves stealing the entire AI/ML model or its source code. It can be used to replicate the system or to develop new attacks.
  • Adversarial examples: These are specially crafted inputs that are designed to fool an AI/ML system. Adversarial examples can be used to bypass security controls or to cause the system to make incorrect predictions.
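
To make data poisoning concrete, here is a minimal sketch, assuming scikit-learn and NumPy with a synthetic dataset standing in for real training data, that flips a fraction of the training labels and measures the resulting drop in accuracy:

```python
# Illustration of label-flipping data poisoning on a toy classifier.
# The dataset and model are stand-ins, not a production pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# An attacker who controls part of the training data flips 30% of the labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Real poisoning attacks are usually far subtler than wholesale label flipping, but the effect is the same: the system learns the wrong patterns.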


Now, if you’re building systems that rely on AI/ML models, what have you done to secure them? Do you have threat models in place that probe for these attack types? I recommended to the attendees that they take a close look at MITRE ATLAS: https://atlas.mitre.org/. The ATLAS team is doing a tremendous job of clarifying the AI/ML attack chain, and that knowledge base can be used in conjunction with a threat modeling process to add resilience to your products. Throughout your threat modeling process, consider the following techniques for mitigating risk:

Data validation:

Data validation is the process of ensuring the data used to train and deploy AI/ML systems is accurate and free of malicious content. It can be done with a variety of techniques (a short sketch appears after the list), such as:

  • Data cleaning: Removing duplicate, inaccurate, or irrelevant data from the training dataset.
  • Data scrubbing: Removing sensitive data from the training dataset.
  • Data filtering: Filtering the training dataset to remove data that is likely to be malicious.
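
As a minimal sketch of what that can look like in practice, the function below applies cleaning, scrubbing, and filtering in one pass. It assumes pandas and a hypothetical dataframe with "email", "age", and "comment" columns:

```python
# A simple data-validation pass over a training dataframe.
# Column names ("email", "ssn", "age", "comment") are hypothetical.
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> pd.DataFrame:
    # Data cleaning: drop exact duplicates and rows with missing values.
    df = df.drop_duplicates().dropna()

    # Data scrubbing: remove columns that carry sensitive identifiers.
    sensitive_columns = [c for c in ("email", "ssn") if c in df.columns]
    df = df.drop(columns=sensitive_columns)

    # Data filtering: reject rows outside expected ranges or containing
    # suspicious payloads (a crude proxy for "likely malicious").
    df = df[(df["age"] >= 0) & (df["age"] <= 120)]
    df = df[~df["comment"].str.contains(r"<script|DROP TABLE", case=False, na=False)]
    return df
```

In a production pipeline, checks like these belong at ingestion time, before any record reaches the training set.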

 

Model validation:

Model validation is the process of testing AI/ML models to ensure that they are robust to attack. It can be accomplished with a variety of techniques (see the example after this list):

  • Adversarial testing: Generating adversarial examples and testing the model to see if it can correctly classify them.
  • Robustness testing: Testing the model to see how it performs under different conditions, such as when the data is corrupted or when the model is exposed to noise.
  • Security testing: Testing the model to see if it is vulnerable to specific attacks, such as data poisoning or model inversion.
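
Here is a small robustness-testing sketch that compares accuracy on clean inputs with accuracy on noise-perturbed inputs. The model, test data, and threshold are placeholders; a fuller adversarial-testing setup would use a dedicated toolkit such as the Adversarial Robustness Toolbox:

```python
# Robustness check: how much does accuracy drop under small random perturbations?
import numpy as np

def noise_robustness(model, X_test, y_test, sigma=0.1, n_trials=5):
    clean_acc = model.score(X_test, y_test)
    rng = np.random.default_rng(0)
    noisy_accs = []
    for _ in range(n_trials):
        X_noisy = X_test + rng.normal(0.0, sigma, size=X_test.shape)
        noisy_accs.append(model.score(X_noisy, y_test))
    return clean_acc, float(np.mean(noisy_accs))

# Flag a large gap between clean and perturbed accuracy as a robustness concern.
# clean, noisy = noise_robustness(clean_model, X_test, y_test)
# assert clean - noisy < 0.10, "model degrades sharply under small perturbations"
```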

Model encryption:

Model encryption is the process of encrypting AI/ML models to protect them from theft or reverse-engineering. It can be accomplished with a variety of techniques (a sketch follows the list):

  • Symmetric encryption: Encrypting the model using a secret key.
  • Asymmetric encryption: Encrypting the model using a public key and decrypting it using a private key.
  • Homomorphic encryption: Encrypting the model in a way that allows it to be processed without being decrypted.
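
As one example, a serialized model can be symmetrically encrypted at rest. The sketch below assumes the cryptography package’s Fernet recipe; the model object, file path, and key handling are placeholders:

```python
# Symmetric encryption of a pickled model at rest using Fernet.
import pickle
from cryptography.fernet import Fernet

def encrypt_model(model, path: str, key: bytes) -> None:
    token = Fernet(key).encrypt(pickle.dumps(model))
    with open(path, "wb") as f:
        f.write(token)

def decrypt_model(path: str, key: bytes):
    with open(path, "rb") as f:
        token = f.read()
    return pickle.loads(Fernet(key).decrypt(token))

# key = Fernet.generate_key()  # store in a secrets manager or KMS, never in code
# encrypt_model(trained_model, "model.enc", key)
# restored = decrypt_model("model.enc", key)
```

The key itself belongs in a secrets manager or KMS, never alongside the encrypted artifact.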

Model monitoring:

Model monitoring is the process of monitoring AI/ML models for suspicious activity. It can be carried out with a variety of techniques (see the drift-check example after this list):

  • Model drift: Monitoring the model to see if it is starting to make incorrect predictions.
  • Model overfitting: Monitoring the model to see if it is starting to learn the training data too well and is not generalizing to new data.
  • Model bias: Monitoring the model to see if it is showing any bias towards certain groups of data.
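
A drift check can be as simple as comparing the distribution of live prediction scores against a training-time baseline. The sketch below assumes SciPy and NumPy, with synthetic scores standing in for real prediction logs, and uses a two-sample Kolmogorov-Smirnov test:

```python
# Minimal score-drift check with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def detect_score_drift(baseline_scores, live_scores, alpha=0.01) -> bool:
    stat, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha  # True means the score distribution has shifted

# Synthetic data standing in for real prediction logs:
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, size=5000)
live = rng.normal(0.65, 0.1, size=5000)  # simulated drift
print("drift detected:", detect_score_drift(baseline, live))
```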

By implementing these mitigating controls, organizations can help protect their AI/ML systems from attack. However, no single control is foolproof; organizations need a layered security approach that combines a variety of controls.

For the consumers of AI/ML, specifically LLMs:

I get asked all the time, “What should I do about my employees wanting to use ChatGPT?” Perhaps this is a contrarian opinion, but my answer tends to be, “Leverage your existing policies, because AI/ML is a combination of data and software. You have policies that guide your employees on data practices, and you have software development policies.”

What you need to think about is how to educate and communicate the intricacies of AI/ML as they relate to existing policies. I’d advise that you focus on helping your organization understand:

  • The types of AI/ML with specific examples of how they’re used.
  • An explanation of what LLMs are and are not.
  • The risks of using today’s LLMs, such as hallucination and bias.
  • The privacy policies in place for LLMs, for example OpenAI’s privacy policy.

In parallel, security and privacy professionals should be demanding the ability to control how their data is handled by AI/ML companies. For example, I may not want my data used to re-train the model. I want to make sure that someone else can’t query an LLM and get my information back.

While some organizations have had problems, leading them to shut down access to LLMs (see Samsung), I’d encourage organizations to set up pathways for R&D. Encourage experimentation while enforcing your policies. Organizations should be considering how the use of GPT-4, Bard, Auto-GPT, and other generative tools opens up new commercial opportunities.

The road ahead

Ultimately, I think we’re at another inflection point in cybersecurity. As with the adoption of cloud computing, adopting AI/ML platforms will be uncomfortable at first, but it’s a snowball rolling downhill. We’ll become smarter about how we use the algorithms, we’ll more thoroughly understand the pros and cons of use, we’ll figure out the privacy issues, and the software/model providers will give us robust choices for securing our workloads.

For now, focus on education, communication and responsible use.

About ShiftLeft

ShiftLeft empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, ShiftLeft CORE scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s static analysis and Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use ShiftLeft ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, California. For more information, visit www.shiftleft.io.
