
The rise of AI-generated code has been a genuine productivity breakthrough. But it has also ushered in a new class of threat that most security teams are not prepared for: slopsquatting.

What Is Slopsquatting?

Slopsquatting is a new supply chain attack that exploits a quirk in how large language models (LLMs) generate code. When developers use AI tools to autocomplete or generate code, the model may confidently suggest packages that simply do not exist, and malicious actors are exploiting these hallucinations at an alarming rate.

Attackers register packages under exactly those hallucinated names. If a developer installs the suggestion without checking, the attacker’s code runs in their environment. That’s the slopsquat: fast, opportunistic, and built on developers’ trust in AI tools.

Why does this happen? LLMs don’t query live package registries; they generate suggestions based on patterns in their training data. If a tool “remembers” seeing a package-like name, it might invent one that looks plausible but doesn’t exist.
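Because the model itself cannot tell a real package from an invented one, one practical guardrail is to check every AI-suggested dependency against the live registry before trusting it. A minimal sketch, assuming Python and the public PyPI JSON API (the function names and the injectable `exists` parameter are illustrative, not part of any specific product):

```python
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # name not registered: a possible hallucination
            return False
        raise  # any other HTTP error: fail loudly rather than guess


def vet_suggestions(names, exists=package_exists_on_pypi):
    """Map each AI-suggested name to whether it exists on the registry.

    The `exists` checker is injectable so the logic can be exercised offline.
    """
    return {name: exists(name) for name in names}
```

A name that comes back unregistered is not proof of an attack, but it is exactly the gap a slopsquatter would move into, so it deserves a human look before anyone runs `pip install`.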

A Familiar Pattern With a Twist

This pattern is reminiscent of attacks on Newly Registered Domains (NRDs). Threat actors would spin up typo’d or freshly registered domains, launch phishing campaigns, and disappear before traditional defenses kicked in. The opportunism is the same today, but the attack surface is AI-generated code: just as they once registered fake domains, attackers now register hallucinated packages.

However, slopsquatting presents a more complex challenge. The vulnerability isn’t just in the package registry; it’s in the AI tool itself. Hallucinated suggestions are unpredictable, shaped by prompt context and training artifacts rather than any consistent attacker pattern, which makes them harder to block or preemptively detect.
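One defense that does carry over from the NRD era is treating freshness itself as a signal: a package registered only days ago deserves extra scrutiny, because slopsquatters register names reactively. A minimal sketch of that heuristic, assuming release upload timestamps in ISO 8601 form (as the public PyPI JSON API reports them); the 30-day threshold is an illustrative choice, not a recommendation:

```python
from datetime import datetime, timedelta, timezone


def registration_time(upload_times):
    """Approximate when a name was registered: the earliest upload
    across all of its releases (timestamps are ISO 8601, assumed UTC)."""
    stamps = [
        datetime.fromisoformat(t).replace(tzinfo=timezone.utc)
        for t in upload_times
    ]
    return min(stamps)


def is_suspiciously_new(upload_times, max_age_days=30, now=None):
    """Flag packages younger than `max_age_days` -- the NRD-style heuristic."""
    now = now or datetime.now(timezone.utc)
    return now - registration_time(upload_times) < timedelta(days=max_age_days)
```

Like domain-age checks, this produces false positives on legitimate new projects, so it works best as one risk signal among several rather than a hard block.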

The Rise of “Vibe Coding”

We’re in the era of vibe coding: faster iteration, faster commits, and sometimes, skipped traditional review workflows. AI copilots have made developers significantly more productive, but they also introduce hallucinated suggestions at a scale we’ve never seen before.

Without the proper guardrails, that’s a serious problem.

Imagine a developer asking their AI assistant for a logging library. The assistant suggests logger-pro-fast, a package that doesn’t exist. An attacker registers it and slips in malicious code. The developer installs it. Game over.

How Qwiet AI Solves It

At Qwiet AI, we anticipated this risk. Our platform includes a first-of-its-kind anti-hallucination agent that detects and blocks AI-generated code hallucinations before they ever enter your codebase or pipelines.

We combine this with:

  • Real-time dependency monitoring
  • Deep static code analysis
  • Pre-production vulnerability detection

This means you’re not just catching known bad packages; you’re catching the unknowns, including packages that don’t exist yet but might show up in an AI suggestion tomorrow.

Future-Proofing the Software Supply Chain

Slopsquatting is just the beginning. Security teams must adapt to a world where machines, not just humans, write code. As with NRDs before them, we can get ahead of this wave if we modernize our security approach for how software is built today.

The key is integrating security where code is written, not just where it’s deployed. That’s how we keep vibe coding safe and the software supply chain future-proof.

Stop AI hallucinations before they compromise your code.

Book a demo today to see how Qwiet AI protects your developers and pipeline.

About Qwiet AI

Qwiet AI empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, Qwiet AI scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s static code analysis with its Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, Qwiet AI then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use Qwiet AI ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, Qwiet AI is based in Santa Clara, California. For information, visit: https://qwiet.ai
