Security Rules
19 security rules covering OWASP Top 10, prompt injection (CWE-1427), and AI-era attack vectors.
Prompt injection detection (OWASP LLM01)
This is kern review's killer feature. It detects 7 of 10 prompt injection vectors via static AST analysis — no runtime needed. Every vector below has been verified against real vulnerable code. The remaining 3 (RAG poisoning, tool-call validation, unsanitized history) are in development.
Verified detection matrix
indirect-prompt-injection
DB or API results used in LLM prompts without sanitization. Stored data may contain injection payloads.
```js
const user = await db.findById(id);
const prompt = `Summarize: ${user.bio}`; // flagged
```

llm-output-execution
LLM output passed to eval(), new Function(), or vm.runInContext(). Arbitrary code execution risk.
```js
const code = await llm.complete(prompt);
eval(code); // flagged
```

prompt-injection
User input embedded in LLM prompts without sanitization — the classic vector.
```js
const prompt = `You are a helper. User: ${req.body.message}`;
// flagged — wrap with sanitizeForPrompt()
```

encoding-bypass
base64/hex decoded content used in prompts. Encoding can bypass input filters.
```js
const decoded = atob(req.body.payload);
const prompt = `Analyze: ${decoded}`; // flagged
```

delimiter-injection
User input in delimiter-structured prompts without stripping. Attackers inject ```, ---, or XML tags to escape prompt boundaries.
```js
const prompt = `---system---\n..\n---user---\n${input}`;
// flagged — strip delimiters from input
```

json-output-manipulation
JSON.parse on LLM output without schema validation. Output may contain injected keys.
```js
const data = JSON.parse(llmResponse);
doSomething(data.action); // flagged — validate with zod first
```

missing-output-validation
LLM response used in application logic (conditionals, returns, function args) without any validation.
```js
const result = await chat.completions.create({...});
return result.choices[0].message; // flagged
```

Traditional security (OWASP Top 10)
Classic web security rules. All AST-based, always active regardless of target.
| Rule | What it catches |
|---|---|
| xss-unsafe-html | dangerouslySetInnerHTML, .innerHTML assignment, v-html |
| command-injection | exec()/spawn() with template literals or string concat |
| no-eval | eval() and new Function() — code injection risk |
| cors-wildcard | cors({ origin: "*" }) or cors() with no options |
| helmet-missing | Express app without helmet — missing CSP, HSTS, X-Frame-Options |
| open-redirect | res.redirect() with unvalidated user input |
| path-traversal | fs operations with req.params/req.query without path validation |
| missing-input-validation | HTTP handlers using req.body/req.query without validation |
| prototype-pollution | Object.assign() or spread from unvalidated external input |
| information-exposure | Stack traces, process.env, or raw error objects in responses |
| regex-dos | Nested quantifiers or ambiguous patterns causing ReDoS (CWE-1333) |
| jwt-weak-verification | jwt.decode() for auth, jwt.verify() without algorithms allowlist |
| cookie-hardening | Auth cookies missing httpOnly, secure, sameSite flags |
| csrf-detection | Cookie-auth apps with state-changing routes but no CSRF middleware |
| csp-strength | Weak CSP directives: unsafe-inline, unsafe-eval, wildcard script-src |
| weak-password-hashing | MD5/SHA1 for passwords, low bcrypt rounds, low PBKDF2 iterations |
Crypto & secrets
- insecure-random — Math.random() in security contexts (token generation, session IDs, CSRF). Flags when the variable name suggests cryptographic use. Fix: crypto.randomUUID() or crypto.getRandomValues().
- hardcoded-secret — Detects API keys (Stripe, OpenAI, AWS, GitHub PATs, SendGrid, npm, PyPI), JWTs, private keys, and connection strings hardcoded in source. Also flags variables named api_key, secret_key, auth_token, etc.
- weak-password-hashing — MD5 or SHA1 for passwords (cryptographically broken), raw SHA-256 (too fast for brute force), bcrypt with rounds < 10, PBKDF2 with iterations < 100,000 (OWASP recommends 600,000+).
Taint analysis
kern review tracks data flow from sources to sinks. Sources are HTTP input (req.body, req.query, req.params, req.headers). Sinks are dangerous functions (eval, SQL queries, HTML rendering, file system operations, LLM prompts).
The taint engine runs in two modes: intra-procedural (single file) and inter-procedural (follows imports across files). It also validates sanitizer sufficiency — parseInt stops SQL injection on numeric values but not command injection. DOMPurify stops XSS but not SQL injection. The sufficiency matrix catches these mismatches.
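Sanitizer sufficiency can be pictured as a lookup from sanitizer to the sink classes it actually neutralizes. A toy sketch of the idea (data and names are illustrative, not kern review's internal tables):

```js
// Which sanitizer neutralizes which sink class. A sanitizer before the
// wrong sink class still leaves the data tainted.
const SUFFICIENCY = {
  parseInt:       new Set(['sql-numeric']), // number coercion stops SQLi on numeric values
  DOMPurify:      new Set(['html']),        // stops XSS, not SQL or command injection
  escapeShellArg: new Set(['shell']),
};

function sanitizerCovers(sanitizer, sinkClass) {
  const covered = SUFFICIENCY[sanitizer];
  return covered ? covered.has(sinkClass) : false;
}
```

Under this model, parseInt before a numeric SQL sink clears the taint, while DOMPurify before the same sink is a mismatch the engine reports.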
```js
// Source: user input enters the system
app.post('/search', (req, res) => {
  const query = req.body.query; // source: req.body
  // No sanitization here...
  db.query(`SELECT * FROM docs WHERE q = '${query}'`); // sink: SQL
  // ^^^ taint path: req.body → db.query (unsanitized)
});
```

What's coming (WIP)
These vectors have rule stubs but are not yet verified against real-world code. They ship as-is and may produce false positives.
- rag-poisoning — Retrieval results (vectorStore.search, similaritySearch) flow unsanitized to LLM prompts. Indexed documents may contain injection payloads.
- tool-calling-manipulation — User input controls tool names or schemas in LLM tool-use APIs. Also checks for LLM-returned tool_calls executed without allowlist validation.
- unsanitized-history — Chat history arrays where user messages are spread into LLM API calls or pushed without sanitization. Conversation injection risk.
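For tool-calling-manipulation, the allowlist validation the rule looks for might be sketched like this (the API shape follows OpenAI-style tool_calls; tool names and the registry are illustrative):

```js
// Execute only tools that were explicitly allowlisted, regardless of
// which names the LLM returns in its tool_calls.
const ALLOWED_TOOLS = new Set(['get_weather', 'search_docs']);

function executeToolCalls(toolCalls, registry) {
  return toolCalls.map((call) => {
    const name = call.function.name;
    if (!ALLOWED_TOOLS.has(name)) {
      // Block LLM-chosen arbitrary tools instead of dispatching blindly.
      throw new Error(`tool not allowlisted: ${name}`);
    }
    // Arguments arrive as a JSON string; in real code, also validate
    // the parsed object against the tool's schema.
    const args = JSON.parse(call.function.arguments);
    return registry[name](args);
  });
}
```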