LLM Export
Export review findings as compact KERN IR for AI-assisted code review with the --llm flag.
Two modes
The --llm flag has two modes depending on whether an API key is configured:
Mode 1: Assistant mode (no API key)
When KERN_LLM_API_KEY is not set, kern review writes the KERN IR directly to stdout. The AI assistant running the command (Claude Code, Cursor, etc.) reads the output and performs the review itself. This is the most common mode.
```
kern review src/ --llm
```

Output:

```
── KERN IR for LLM review ──
// ── src/auth.ts ──
Review this KERN IR. Return ONLY a JSON array of findings.
Schema: [{"nodeAlias":"N3","severity":"warning","category":"structure","message":"...","evidence":"..."}]
Valid aliases: N1, N2, N3, N4
Any alias not in this list will be rejected.
Categories: bug, type, pattern, style, structure
Severities: error, warning, info
KERN IR:
[N1] type UserRole values=admin|editor|viewer
[N2] interface AuthConfig field name=secret type=string
[N3] fn validateToken params=token:string returns=boolean <<<
const decoded = jwt.verify(token, config.secret);
return !!decoded;
>>>
[N4] fn handleAuth params=req:Request,res:Response returns=void <<<
const token = req.headers.authorization;
if (!validateToken(token)) { res.status(401).send('Unauthorized'); return; }
next();
>>>
// Taint analysis:
// handleAuth: req.headers → jwt.verify() [UNSANITIZED]
Static analysis: 2 findings (1 error, 1 warning)
Review the KERN IR above for security issues the static rules may have missed.
```

Mode 2: API mode (with API key)
When KERN_LLM_API_KEY is set, kern review calls an OpenAI-compatible API directly, parses the structured JSON response, validates node aliases, and merges LLM findings with static analysis results.
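A request in this mode resembles a standard OpenAI-compatible chat-completions payload. The sketch below is illustrative only — `buildReviewRequest` and its prompt text are hypothetical, not kern's internal implementation:

```typescript
// Illustrative sketch: kern's actual request construction is internal.
// Assumes a standard OpenAI-compatible /chat/completions payload shape.
interface ReviewRequest {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
  temperature: number;
}

function buildReviewRequest(kernIr: string, model: string): ReviewRequest {
  return {
    model,
    messages: [
      { role: "system", content: "Return ONLY a JSON array of findings." },
      { role: "user", content: `KERN IR:\n${kernIr}` },
    ],
    temperature: 0, // deterministic output makes strict JSON parsing more reliable
  };
}
```

The structured JSON array in the response body is then parsed and validated before merging, as described under "Response validation" below.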
```
export KERN_LLM_API_KEY=sk-...
kern review src/ --llm
```

Environment variables:
| Variable | Default | Description |
|---|---|---|
| KERN_LLM_API_KEY | (none) | API key (required for API mode) |
| KERN_LLM_MODEL | gpt-4o-mini | Model name |
| KERN_LLM_BASE_URL | https://api.openai.com/v1 | Base URL (supports Ollama, Anthropic proxy, etc.) |
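For example, pointing kern at a local Ollama server might look like this (model name is illustrative; Ollama serves an OpenAI-compatible API under /v1 on port 11434 by default, and ignores the API key — but a key must still be set to trigger API mode):

```shell
export KERN_LLM_API_KEY=ollama                       # any non-empty value; Ollama ignores it
export KERN_LLM_BASE_URL=http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
export KERN_LLM_MODEL=llama3.1                       # illustrative model name
kern review src/ --llm
```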
Prompt structure
The LLM prompt includes:
- KERN IR nodes with sequential aliases (N1, N2, ...) and handler bodies
- Valid alias list — the LLM can only reference aliases from this list
- Strict JSON response schema with severity, category, message, and evidence fields
- Taint analysis results — data flow paths from sources to sinks with sanitization status
Graph-aware context
When combined with --graph, nodes are annotated with provenance markers:
```
kern review src/ --llm --graph
```

Nodes from changed files are marked [CHANGED]. Upstream dependencies are marked [CONTEXT d=N], where N is the graph distance. The LLM focuses its review on changed nodes and only references context nodes to support findings.
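An annotated node listing might look like the following (illustrative; the exact marker placement in the IR may differ):

```
[N1] [CHANGED] fn handleAuth params=req:Request,res:Response returns=void
[N2] [CONTEXT d=1] fn validateToken params=token:string returns=boolean
[N3] [CONTEXT d=2] interface AuthConfig field name=secret type=string
```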
Response validation
The parser validates every LLM response:
- Node aliases must be in the valid alias set — unknown aliases are silently rejected
- Severity must be error, warning, or info
- Category must be bug, type, pattern, style, or structure
- Messages are sanitized: ANSI escape codes, OSC sequences, and control characters are stripped
- LLM findings are assigned a confidence of 0.7 (lower than static rules) and merged with static findings, with duplicates removed
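The rules above can be sketched as a small validation pass. This is a minimal illustration with hypothetical names (`sanitize`, `validateFindings`), not kern's actual implementation:

```typescript
// Illustrative sketch of the response-validation rules; names are hypothetical.
interface Finding {
  nodeAlias: string;
  severity: string;
  category: string;
  message: string;
  evidence?: string;
}

const SEVERITIES = new Set(["error", "warning", "info"]);
const CATEGORIES = new Set(["bug", "type", "pattern", "style", "structure"]);

// Strip OSC sequences, ANSI/CSI escape codes, and remaining control characters.
function sanitize(message: string): string {
  return message
    .replace(/\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)/g, "") // OSC sequences (ESC ] ... BEL/ST)
    .replace(/\x1b\[[0-9;]*[A-Za-z]/g, "")             // CSI color/cursor codes
    .replace(/[\x00-\x08\x0b-\x1f\x7f]/g, "");         // other control chars (keeps \t, \n)
}

function validateFindings(raw: Finding[], validAliases: Set<string>) {
  return raw
    .filter((f) => validAliases.has(f.nodeAlias)) // unknown aliases silently dropped
    .filter((f) => SEVERITIES.has(f.severity))
    .filter((f) => CATEGORIES.has(f.category))
    .map((f) => ({ ...f, message: sanitize(f.message), confidence: 0.7 }));
}
```

Filtering rather than erroring on bad entries means one hallucinated alias does not discard an otherwise usable response.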
Also: --export-kern
The --export-kern flag is a simpler alternative that outputs raw KERN IR without the LLM prompt wrapper. Handler bodies are excluded for brevity. Useful for debugging the inferrer output.
```
kern review src/auth.ts --export-kern
```