Securing AI & LLMs

Securing AI & LLMs

Those of us who've been around the block a few times find ourselves at yet another forefront in technology, and it's time to embrace the trend. Traditional machine learning has existed for quite some time, but the hype as of late is incredibly real. That said, AI comes with its own set of challenges, controls, and attack vectors. This reminds me of the API hype phase, which only continues to plow forward with the assistance of serverless cloud solutions.

AI also brings a breath of fresh capabilities and marketing initiatives to the industry. The ability to train your data to ease challenges with consumer consumption has already disrupted markets. Companies have already began leveraging AI to resolve issues related to technical competency and lower operational overhead. Vendors are strapping AI offerings into their solutions daily in order to collect user data in order to develop effective roadmaps at a much faster scale. Some have began leveraging cloud provider marketplaces to transact directly with potential customer. Though far from maturity, I'd like to focus on the topic of AI and LLM security testing.

Below, you'll find an example of the real threats that exist here. OpenAI's LangChain was found to be susceptible to RCE (CVSS 9.8). https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-29374 This was disclosed by NVIDIA back in January 2023. Using a Juptyer notebook, Python, and targeting the llm_math plugin resulted in the following:

Example of remote code execution through prompt injection in the llm_math chain
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

LLMs are the core of AI; building models allows the training of data to better serve a targeted audience. However, risk is still something many are accepting for the sake of speed. That said, guidelines have been defined by OWASP for LLM attack vectors: https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0_1.pdf. This is simply a new focus on an old concept; fuzzing will continue. There are already very specific tools for "testing" your LLMs, such as PIPE (Prompt Injection Primer for Engineers): https://github.com/jthack/PIPE. The key takeaway here is that vectors such as SSRF, SQL injection, RCE, XSS, and IDOR still apply.

Risks within AI and subsequently LLMs are something NIST is determined to help tackle. Inherited issues with process and procedures are covered here within the NIST AI RMF: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf. The principals outlined within the framework are focused primarily on trust and the validity of AI behaviors. The intent here is to assist with organizations to periodically evaluate and improve AI processes, awareness, practices and procedures to reduce overall risk. As with all RMFs, sharing information and clear communication between professionals in the workplace goes a long way for defining responsibilities, as well as improving contextual knowledge.

Like most things, there is a silver lining to ease testing of LLMs, no matter what side of infosec you reside on. This isn't just configuring a few Titan-based EC2 instances, deploying SSM agents for patch manager, or leveraging traditional pentesting tools. With a little digging, I've managed to come across a few tools that allow you to assess the current state of LLM models. Below, you'll find a few of these with a brief description:

GitHub - mnns/LLMFuzzer: 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integra...

Target API LLM endpoint to identify potential attack vectors.

GitHub - leondz/garak: LLM vulnerability scanner
LLM vulnerability scanner. Contribute to leondz/garak development by creating an account on GitHub.

Probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses.

GitHub - deadbits/vigil-llm: âš¡ Vigil âš¡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
âš¡ Vigil âš¡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs - GitHub - deadbits/vigil-llm: âš¡ Vigil âš¡ Detect prompt injections, jailbreaks, and ot...

Python library and REST API to assess your LLM for known prompt injection vulnerabilities using scanners.

This post is intended to point out the security implications LLMs and AIs bring to all aspects of the current information technology industry. Thanks for taking the time to read and research with me.