Static analysis
Deep static analysis that understands your code at the concept level. Free for developers.
See it catch a prompt injection — live
```typescript
async function chat(db, llm, userId) {
  // DB results go straight into prompt
  const history = await db.query(
    `SELECT msg FROM chat
     WHERE user_id = '${userId}'`
  );
  const prompt =
    `Assistant: ${history.rows
      .map(r => r.msg).join('\n')}`;
  // User input unsanitized
  return llm.complete(prompt);
}
```

```
! indirect-prompt-injection
  DB result 'history' from db.query used in LLM prompt
  without sanitization — indirect injection risk
  chat.ts:9

~ unguarded-effect
  Network/DB effect without auth/validation guard
  chat.ts:3

~ unrecovered-effect
  db effect without error recovery
  chat.ts:3
```

ESLint sees nothing. SonarQube sees nothing. kern review catches all three.
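For contrast, here is a hedged sketch of a version that closes all three findings: a parameterized query, sanitized history, a guard on the entrypoint, and error recovery around the DB effect. The `db`/`llm` shapes are illustrative stand-ins, not a specific client library.

```typescript
type Row = { msg: string };

// Strip characters an attacker could use to smuggle instructions
// into the prompt (newline delimiters, code fences), and bound length.
function sanitizeForPrompt(text: string): string {
  return text
    .replace(/[\r\n]+/g, ' ')
    .replace(/```/g, '')
    .slice(0, 2000);
}

async function chat(
  db: { query: (sql: string, params: string[]) => Promise<{ rows: Row[] }> },
  llm: { complete: (prompt: string) => Promise<string> },
  userId: string,
) {
  // Guard the effect: validate input before any network/DB call
  if (!/^[\w-]+$/.test(userId)) throw new Error('invalid userId');
  try {
    // Parameterized query: userId can no longer alter the SQL
    const history = await db.query(
      'SELECT msg FROM chat WHERE user_id = $1',
      [userId],
    );
    const prompt = `Assistant: ${history.rows
      .map(r => sanitizeForPrompt(r.msg))
      .join('\n')}`;
    return llm.complete(prompt);
  } catch (err) {
    // Recover from the DB/network effect instead of crashing the handler
    return 'Sorry, something went wrong.';
  }
}
```

Sanitization here is deliberately simple; the point is that DB-sourced text passes through an explicit boundary before reaching the prompt.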
```
@kernlang/review — analyzing src/

! indirect-prompt-injection
  DB result used in LLM prompt without sanitization
  src/chat/handler.ts:9

! llm-output-execution
  LLM output passed to eval() — arbitrary code execution
  src/agent/runner.ts:17

! prompt-injection
  User input embedded in prompt without sanitization
  src/api/chat.ts:23

~ encoding-bypass
  Decoded content from Buffer.from() used in prompt
  src/utils/decode.ts:50

~ json-output-manipulation
  JSON.parse on LLM output without schema validation
  src/api/structured.ts:69

~ missing-output-validation
  LLM response used without validation
  src/codegen/generate.ts:75

~ unguarded-effect (×3)
  Network/DB effect without error recovery

~ rag-poisoning
  Retrieval result embedded in prompt unsanitized
  src/rag/search.ts:28

~ tool-calling-manipulation
  LLM-returned tool calls executed without allowlist
  src/agent/tools.ts:41

~ unsanitized-history
  Unsanitized messages spread into LLM API call
  src/chat/multi-turn.ts:63

✓ 847 lines scanned · 3 errors · 8 warnings
✓ 10 prompt injection vectors detected
```

Real output. Verified against 10 OWASP LLM01 attack vectors. No other SAST tool catches encoding bypasses or delimiter injection.
- AST-based rules
- Concept rules
- 2 languages (TS + Python)
- Free for developers
kern review doesn't just lint syntax. It understands concepts: entrypoints, effects, guards, state mutations, boundaries.
- Prompt injection (10 OWASP LLM01 vectors), RAG poisoning, encoding bypass, delimiter injection, taint analysis, path traversal, prototype pollution
- Floating promises, unused exports, complex conditionals, hardcoded secrets, console.log in production
- Unreachable code after return, dead branches, exhaustiveness gaps in switches, unused parameters
- Missing deps in hooks, setState in render, stale closures, conditional hooks
- Unguarded effects, boundary mutations, ignored errors, illegal dependencies, unrecovered effects
- Next.js server/client conflicts, Express sync I/O, Vue reactivity pitfalls, Nuxt conventions
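Two of the warnings in the scan above (`json-output-manipulation` and `missing-output-validation`) come down to trusting `JSON.parse` on model output. A minimal hand-rolled validator, assuming a simple `{ action, message }` response shape, could look like this (a schema library would work equally well):

```typescript
type Plan = { action: 'reply' | 'escalate'; message: string };

// Validate the shape of LLM-produced JSON before trusting any field.
function parsePlan(raw: string): Plan {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error('LLM did not return valid JSON');
  }
  if (
    typeof data !== 'object' || data === null ||
    !('action' in data) || !('message' in data)
  ) {
    throw new Error('unexpected response shape');
  }
  const { action, message } = data as { action: unknown; message: unknown };
  if (action !== 'reply' && action !== 'escalate') {
    throw new Error(`unexpected action: ${String(action)}`);
  }
  if (typeof message !== 'string') throw new Error('message must be a string');
  return { action, message };
}
```

The key property: nothing downstream ever sees a field the validator did not explicitly check.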
kern review isn't trying to replace your linter. It catches what they miss.
| | kern review | ESLint | SonarQube |
|---|---|---|---|
| Concept-level analysis | Yes | No | Limited |
| Prompt injection detection | 10/10 vectors | 1/10 | No |
| LLM-ready output | Yes (5x smaller IR) | No | No |
| TypeScript + Python | Both | JS/TS only | Many languages |
| Framework-aware rules | Next.js, Express, Vue | Via plugins | Limited |
| Rule count | 68 | Hundreds | Thousands |
| SARIF / CI integration | Yes | Yes | Yes |
| Free for developers | Yes | Yes | Community only |
ESLint and SonarQube have more rules and broader language coverage. kern review goes deeper on fewer rules — concept-level analysis, 10/10 prompt injection detection, and LLM-exportable findings. Use them together.
CWE-1427: Improper Neutralization of Input Used for LLM Prompting. kern review detects all 10 attack vectors via static analysis — verified against real code patterns.
Each cell is verified: kern and Semgrep were run against the same benchmark file ("Yes" means the tool flagged the pattern); Snyk and CodeQL cells are based on their published detection documentation. No marketing claims, only checked results.
No official code-level benchmark for CWE-1427 exists yet (OWASP, NIST, and MITRE have prompt-level datasets only). This benchmark is open-source — run it yourself.
| Attack vector | kern | Semgrep | Snyk | CodeQL |
|---|---|---|---|---|
| Indirect injection (DB→prompt) | Yes | No | Partial | Partial |
| LLM output execution | Yes | Partial | Yes | Likely |
| System prompt leakage | Yes | No | No | No |
| RAG poisoning | Yes | No | No | No |
| Tool calling manipulation | Yes | No | No | No |
| Encoding bypasses | Yes | No | No | No |
| Delimiter injection | Yes | No | No | No |
| Unsanitized history | Yes | No | No | No |
| JSON output manipulation | Yes | No | No | No |
| Missing output validation | Yes | No | No | No |
| Verified score | 10/10 | 1/10 | ~2/10 | ~1/10 |
Methodology
Scores are from running each tool against the same TypeScript benchmark file containing 10 known-vulnerable patterns — one per OWASP LLM attack vector. Semgrep was run with --config p/default --config p/javascript --config p/typescript (1,088 rules loaded). Snyk and CodeQL scores are based on published documentation of their detection capabilities.
No official code-level benchmark for prompt injection (CWE-1427) exists from OWASP, NIST, or MITRE. All existing benchmarks test prompts sent to models, not source code patterns. Our benchmark is open-source — verify the results yourself:
```shell
# Verify kern
kern review examples/security-benchmark.ts

# Verify Semgrep (same file)
semgrep scan --config p/default --config p/javascript \
  examples/security-benchmark.ts
```

RAG poisoning, encoding bypass, delimiter injection, tool-calling validation, and unsanitized-history detection are unique to kern — no other SAST tool catches these. Mapped to CWE-1427 and OWASP LLM01:2025.
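For a sense of what the benchmark patterns look like, here is an illustrative encoding-bypass shape (not copied from the benchmark file): base64-decoded content flows into the prompt, so any plaintext denylist applied to the raw input never sees the payload.

```typescript
// VULNERABLE sketch (illustrative): the decoded text reaches the prompt,
// so a keyword filter on `userB64` checks the wrong string entirely.
function buildPrompt(userB64: string): string {
  const decoded = Buffer.from(userB64, 'base64').toString('utf8');
  // `decoded` can say "ignore previous instructions" even though the
  // encoded input passed every plaintext keyword check upstream.
  return `User says: ${decoded}`;
}
```

Detecting this statically requires tracking taint through the decode step, which is why rules that only pattern-match on raw input miss it.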
Developers
Free. All 68 rules. CLI + CI. No limits.

```shell
npm install -g @kernlang/cli
kern review src/ --recursive
```

Enterprise
LLM-assisted review, custom rules, team dashboards, priority support. Contact us.