Static analysis

kern review

Deep static analysis that understands your code at the concept level. Free for developers.

See it catch a prompt injection — live

Your code (vulnerable)
async function chat(db, llm, userId) {
  // DB results go straight into prompt
  const history = await db.query(
    `SELECT msg FROM chat
     WHERE user_id = '${userId}'`
  );

  const prompt =
    `Assistant: ${history.rows
      .map(r => r.msg).join('\n')}`;

  // Untrusted content reaches the LLM unsanitized
  return llm.complete(prompt);
}
kern review output
! indirect-prompt-injection
  DB result 'history' from db.query
  used in LLM prompt without
  sanitization — indirect injection
  risk
  chat.ts:9

~ unguarded-effect
  Network/DB effect without
  auth/validation guard
  chat.ts:3

~ unrecovered-effect
  db effect without error recovery
  chat.ts:3

ESLint sees nothing. SonarQube sees nothing. kern review catches all three.
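A remediation for all three findings can be sketched as follows. Note that `sanitizeForPrompt` is a hypothetical helper written for this illustration, and the `db`/`llm` shapes are assumptions, not part of kern's API:

```typescript
// Hypothetical fix for the three findings above: parameterize the
// query, sanitize DB content before it reaches the prompt, and
// recover the LLM effect instead of letting failures propagate.

// Strip control characters and common role/delimiter tokens before
// interpolating untrusted text into a prompt.
function sanitizeForPrompt(text: string): string {
  return text
    .replace(/[\u0000-\u001f]/g, " ")                  // control chars
    .replace(/`{3}/g, "'''")                           // fence delimiters
    .replace(/^(system|assistant|user):/gim, "[$1]:"); // role markers
}

async function chat(db: any, llm: any, userId: string) {
  // Parameterized query instead of string interpolation
  const history = await db.query(
    "SELECT msg FROM chat WHERE user_id = $1",
    [userId]
  );

  const prompt = `Assistant: ${history.rows
    .map((r: { msg: string }) => sanitizeForPrompt(r.msg))
    .join("\n")}`;

  try {
    return await llm.complete(prompt);
  } catch {
    // Recover the db/network effect instead of crashing the caller
    return { error: "LLM call failed" };
  }
}
```

The sanitizer here is deliberately minimal; production code would typically combine it with structured message APIs rather than string concatenation.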

kern review src/ --recursive
  @kernlang/review — analyzing src/

  ! indirect-prompt-injection
    DB result used in LLM prompt without sanitization
    src/chat/handler.ts:9

  ! llm-output-execution
    LLM output passed to eval() — arbitrary code execution
    src/agent/runner.ts:17

  ! prompt-injection
    User input embedded in prompt without sanitization
    src/api/chat.ts:23

  ~ encoding-bypass
    Decoded content from Buffer.from() used in prompt
    src/utils/decode.ts:50

  ~ json-output-manipulation
    JSON.parse on LLM output without schema validation
    src/api/structured.ts:69

  ~ missing-output-validation
    LLM response used without validation
    src/codegen/generate.ts:75

  ~ unguarded-effect  (×3)
    Network/DB effect without auth/validation guard

  ~ rag-poisoning
    Retrieval result embedded in prompt unsanitized
    src/rag/search.ts:28

  ~ tool-calling-manipulation
    LLM-returned tool calls executed without allowlist
    src/agent/tools.ts:41

  ~ unsanitized-history
    Unsanitized messages spread into LLM API call
    src/chat/multi-turn.ts:63

   847 lines scanned · 3 errors · 8 warnings
   10 prompt injection vectors detected

Real output, verified against 10 OWASP LLM01 attack vectors. In our benchmark, no other SAST tool caught encoding bypasses or delimiter injection.

68 AST-based rules · 5 concept rules · 2 languages (TS + Python) · $0 for developers

What kern review catches

kern review doesn't just lint syntax. It understands concepts: entrypoints, effects, guards, state mutations, boundaries.

Security

29 rules

Prompt injection (10 OWASP LLM01 vectors), RAG poisoning, encoding bypass, delimiter injection, taint analysis, path traversal, prototype pollution

Base

13 rules

Floating promises, unused exports, complex conditionals, hardcoded secrets, console.log in production

Dead Logic

8 rules

Unreachable code after return, dead branches, exhaustiveness gaps in switches, unused parameters

React

6 rules

Missing deps in hooks, setState in render, stale closures, conditional hooks

Concepts

5 rules

Unguarded effects, boundary mutations, ignored errors, illegal dependencies, unrecovered effects

Framework

10 rules

Next.js server/client conflicts, Express sync I/O, Vue reactivity pitfalls, Nuxt conventions
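Two of the patterns named above, as they typically appear in source (illustrative snippets written for this page, not kern test fixtures):

```typescript
// Floating promise (Base): the rejection, if any, is silently lost.
async function save(data: unknown): Promise<void> {
  /* persist data somewhere */
}
function onClick(data: unknown) {
  save(data); // no await, no .catch(): the promise floats
}

// Dead branch (Dead Logic): the second test can never succeed
// once the first has failed.
function describe(x: number): string {
  if (x > 0) return "positive";
  if (x > 10) return "big"; // unreachable: x > 10 implies x > 0
  return "non-positive";
}
```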

How it compares

kern review isn't trying to replace your linter. It catches what they miss.

                            kern review            ESLint        SonarQube
Concept-level analysis      Yes                    No            Limited
Prompt injection detection  10/10 vectors          1/10          No
LLM-ready output            Yes (5x smaller IR)    No            No
TypeScript + Python         Both                   TS only       Many languages
Framework-aware rules       Next.js, Express, Vue  Via plugins   Limited
Rule count                  68                     Hundreds      Thousands
SARIF / CI integration      Yes                    Yes           Yes
Free for developers         Yes                    Yes           Community only

ESLint and SonarQube have more rules and broader language coverage. kern review goes deeper on fewer rules — concept-level analysis, 10/10 prompt injection detection, and LLM-exportable findings. Use them together.

Prompt injection detection

CWE-1427: Improper Neutralization of Input Used for LLM Prompting. kern review detects all 10 attack vectors via static analysis — verified against real code patterns.
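One of those vectors, the encoding bypass, looks like this in source. This is a hand-written illustration of the pattern class, not an excerpt from the benchmark file:

```typescript
// Encoding bypass: attacker-controlled base64 is decoded and lands
// in the prompt, sidestepping any filters applied to the raw input.
function buildPrompt(encodedNote: string): string {
  const decoded = Buffer.from(encodedNote, "base64").toString("utf8");
  return `Summarize this note:\n${decoded}`; // decoded content, unsanitized
}
```

Taint tools that track only the raw request body lose the trail at the decode step, which is why this vector needs a rule of its own.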

The kern and Semgrep columns are tested: we ran the same benchmark file through each tool, and "Yes" means it flagged the pattern. The Snyk and CodeQL columns are estimated from their published documentation (see Methodology). No marketing claims, only what we could verify.

No official code-level benchmark for CWE-1427 exists yet (OWASP, NIST, and MITRE have prompt-level datasets only). This benchmark is open-source — run it yourself.

Attack vector                    kern    Semgrep   Snyk      CodeQL
Indirect injection (DB→prompt)   Yes     No        Partial   Partial
LLM output execution             Yes     Partial   Yes       Likely
System prompt leakage            Yes     No        No        No
RAG poisoning                    Yes     No        No        No
Tool calling manipulation        Yes     No        No        No
Encoding bypasses                Yes     No        No        No
Delimiter injection              Yes     No        No        No
Unsanitized history              Yes     No        No        No
JSON output manipulation         Yes     No        No        No
Missing output validation        Yes     No        No        No
Verified score                   10/10   1/10      ~2/10     ~1/10

Methodology

kern and Semgrep scores come from running each tool against the same TypeScript benchmark file containing 10 known-vulnerable patterns, one per OWASP LLM attack vector. Semgrep was run with --config p/default --config p/javascript --config p/typescript (1,088 rules loaded). Snyk and CodeQL scores are estimated from published documentation of their detection capabilities, which is why they carry a ~.

No official code-level benchmark for prompt injection (CWE-1427) exists from OWASP, NIST, or MITRE. All existing benchmarks test prompts sent to models, not source code patterns. Our benchmark is open-source — verify the results yourself:

# Verify kern
kern review examples/security-benchmark.ts

# Verify Semgrep (same file)
semgrep scan --config p/default --config p/javascript \
  examples/security-benchmark.ts

RAG poisoning, encoding bypass, delimiter injection, tool calling validation, and unsanitized history detection are unique to kern: none of the tools we benchmarked catch them. All findings are mapped to CWE-1427 and OWASP LLM01:2025.
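The tool-calling check, for instance, is satisfied by putting an explicit allowlist between model output and execution. The names below (`ALLOWED_TOOLS`, `filterToolCalls`) are hypothetical, chosen for this sketch:

```typescript
type ToolCall = { name: string; args: unknown };

// Allowlist gate between LLM output and execution: only tools we
// registered may run, whatever names the model happens to return.
const ALLOWED_TOOLS = new Set(["searchDocs", "getWeather"]);

function filterToolCalls(calls: ToolCall[]): ToolCall[] {
  return calls.filter(c => ALLOWED_TOOLS.has(c.name));
}
```

Executing model-returned tool names without such a gate is the shape the tool-calling-manipulation finding points at.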

Pricing

Developers

Free

All 68 rules. CLI + CI. No limits.

npm install -g @kernlang/cli
kern review src/ --recursive
Get Started

Enterprise

Contact

LLM-assisted review, custom rules, team dashboards, priority support.

  • LLM export for AI-assisted triage
  • Custom rule authoring
  • Team coverage dashboards
  • SARIF → GitHub Security tab
  • Priority support
Contact Us