For LLMs
One language. Every target. Reviewed.
KERN is how you already think — structured, declarative, indent-based. Write .kern once. Compile to 11 targets. kern review catches what ESLint, Semgrep, and SonarQube miss — including 10/10 OWASP prompt injection vectors.
192 lines of spec. Read one file and you know the whole language.
Proof #1
See the difference
Same state machine. First: what the LLM writes. Then: what the compiler outputs.
```kern
machine name=Order initial=pending
  transition from=pending to=confirmed event=confirm
  transition from=confirmed to=shipped event=ship
  transition from=shipped to=delivered event=deliver
```

```typescript
export enum OrderState {
  Pending = 'pending',
  Confirmed = 'confirmed',
  Shipped = 'shipped',
  Delivered = 'delivered',
}

export enum OrderEvent {
  Confirm = 'confirm',
  Ship = 'ship',
  Deliver = 'deliver',
}

export type OrderTransition = {
  from: OrderState;
  to: OrderState;
  event: OrderEvent;
};

const transitions: OrderTransition[] = [
  { from: OrderState.Pending, to: OrderState.Confirmed, event: OrderEvent.Confirm },
  { from: OrderState.Confirmed, to: OrderState.Shipped, event: OrderEvent.Ship },
  { from: OrderState.Shipped, to: OrderState.Delivered, event: OrderEvent.Deliver },
];

// + type guards, transition fn, exhaustive checks... (140+ lines)
```

95% fewer lines · 95% fewer characters · 1 source, 11 targets
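The comparison above elides the transition helper. A minimal sketch of what such a helper could look like over the generated enums (our illustration, not the compiler's actual output):

```typescript
// Illustrative only: a lookup-based transition helper over the generated
// state-machine data. The real compiler output is not shown here.
enum OrderState { Pending = 'pending', Confirmed = 'confirmed', Shipped = 'shipped', Delivered = 'delivered' }
enum OrderEvent { Confirm = 'confirm', Ship = 'ship', Deliver = 'deliver' }
type OrderTransition = { from: OrderState; to: OrderState; event: OrderEvent };

const transitions: OrderTransition[] = [
  { from: OrderState.Pending, to: OrderState.Confirmed, event: OrderEvent.Confirm },
  { from: OrderState.Confirmed, to: OrderState.Shipped, event: OrderEvent.Ship },
  { from: OrderState.Shipped, to: OrderState.Delivered, event: OrderEvent.Deliver },
];

// Returns the next state, or null when no transition matches.
function transition(state: OrderState, event: OrderEvent): OrderState | null {
  const match = transitions.find((t) => t.from === state && t.event === event);
  return match ? match.to : null;
}
```

Hand-writing this lookup, plus the guards and exhaustiveness checks around it, is exactly the 140+ lines the declarative form replaces.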
Proof #2
Measured, not claimed
Same task, both approaches. Character counts from real files in this repo.
| Task | KERN | TypeScript | Reduction |
|---|---|---|---|
| Landing page | 10 lines · 198 | 40 lines · 812 | 76% |
| State machine | 7 lines · 189 | 140 lines · 3,640 | 95% |
| Fitness dashboard | 23 lines · 982 | 210 lines · 8,400 | 88% |
| API routes + middleware | 18 lines · 456 | 95 lines · 3,420 | 87% |
Average reduction: 86% fewer characters for the LLM to generate.
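The percentages follow directly from the character counts in the table. A quick recomputation (figures copied from the rows above):

```typescript
// Recompute each row's reduction from the table's character counts.
const rows = [
  { task: 'Landing page', kernChars: 198, tsChars: 812 },
  { task: 'State machine', kernChars: 189, tsChars: 3640 },
  { task: 'Fitness dashboard', kernChars: 982, tsChars: 8400 },
  { task: 'API routes + middleware', kernChars: 456, tsChars: 3420 },
];

const reduction = (kern: number, ts: number) => Math.round((1 - kern / ts) * 100);

// Per-row reductions, matching the table's last column.
const perRow = rows.map((r) => reduction(r.kernChars, r.tsChars));

// Average of the raw ratios, then rounded.
const average = Math.round(
  (rows.reduce((sum, r) => sum + (1 - r.kernChars / r.tsChars), 0) / rows.length) * 100,
);
```

`perRow` comes out to `[76, 95, 88, 87]` and `average` to `86`, matching the table.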
Proof #3
8 models. Cold start. No training.
Each model received spec.kern and was asked: “Generate a todo app in KERN.” No examples, no few-shot prompting. Just the spec.
- OpenAI: valid on first try
- Anthropic: valid on first try
- Valid on first try
- Anthropic: valid on first try
- OpenAI: valid on first try
- Valid on first try
- Meta: valid after 1 retry
- Mistral: valid on first try
Eval: model receives spec.kern (192 lines) + prompt “Generate a todo app with add/toggle/delete in KERN.” Output is compiled with kern dev. Pass = compiles without errors.
Proof #4
Your output gets reviewed. Automatically.
kern review runs 68 AST-based rules on the generated code, catching what ESLint, Semgrep, and SonarQube miss. It is the only SAST tool that detects all 10 OWASP LLM01 prompt injection vectors at the code level.
Prompt injection — 10/10 OWASP LLM01 vectors
No other SAST tool catches encoding bypasses, delimiter injection, RAG poisoning, or tool calling manipulation. Verified against Semgrep (1/10), Snyk (~2/10), CodeQL (~1/10).
```
$ kern review src/ --recursive

! indirect-prompt-injection
  DB result used in LLM prompt without sanitization
  src/chat/handler.ts:9

~ unguarded-effect (×3)
  Network/DB effect without error recovery

~ rag-poisoning
  Retrieval result embedded in prompt unsanitized
  src/rag/search.ts:28

✓ 847 lines · 3 errors · 8 warnings
✓ 10 prompt injection vectors detected
```
See the full review page — comparison matrix, methodology, live demo.
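For context on what `indirect-prompt-injection` and `rag-poisoning` flag: untrusted DB or retrieval content interpolated straight into an LLM prompt. A sketch of the pattern and one possible mitigation (the `sanitizeForPrompt` and `buildPrompt` names are ours for illustration, not part of kern review or any KERN API):

```typescript
// Illustrative mitigation: strip common delimiter-injection tokens from
// untrusted text before embedding it in a prompt. The vulnerable version
// would interpolate `retrieved` directly, which is what the rules flag.
function sanitizeForPrompt(untrusted: string): string {
  return untrusted
    .replace(/```/g, '')
    .replace(/<\/?(system|assistant|user|context)[^>]*>/gi, '');
}

function buildPrompt(question: string, retrieved: string): string {
  return [
    'Answer using only the context below.',
    'CONTEXT: ' + sanitizeForPrompt(retrieved),
    'QUESTION: ' + question,
  ].join('\n');
}
```

Stripping delimiters is only one layer; the point is that the rules fire whenever untrusted data reaches a prompt with no sanitization step in between.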
IDE
Kern Sight — VS Code extension
All 68 review rules running in your editor. Real-time diagnostics, sidebar review panel, .kern syntax highlighting, and an optional AI review layer.
Squiggly underlines for all 68 rules as you type
Full review results in a dedicated panel
Syntax colors and file icons for .kern files
Optional — uses Claude, Codex, or any AI CLI backend
One shortcut to scan the current file
All analysis on your machine — code never leaves
```
ext install kernlang.kern-sight
```

See the syntax highlighting in action: open the playground. Same tokenizer, live in your browser.
Try it now
Open the playground. Write KERN. See it compile to React, Next.js, Express, Vue — live.
Open Playground

Documentation files
Raw .kern source. Read them natively — no parsing needed. Total: 910 lines across 4 files.
- spec.kern (192 lines · 8,242 chars): The full language spec. Grammar, node types, styles, targets. One file teaches you everything.
- nodes.kern (307 lines · 12,969 chars): Every node type with every prop. An exhaustive reference you can scan in one pass.
- patterns.kern (200 lines · 9,160 chars): Common patterns: landing pages, APIs, state machines, auth flows. Copy-paste ready.
- examples.kern (211 lines · 11,351 chars): Real production .kern files. Dashboard, API routes, CLI app.
Agent integration
Make any AI agent KERN-native. Add this to your system prompt:
```
You know KERN. Read the spec at https://kernlang.dev/llm/spec.kern
When the user asks you to build UI, write .kern files.
Grammar: indent-based, 2 spaces. node type (props)* ({styles})* children*
Targets: nextjs react tailwind native express cli terminal vue nuxt
The user compiles with: kern dev <file>.kern --target=<target>
```

Works with Claude Code, Cursor, GitHub Copilot, or any LLM-powered agent.
How it works
1. Fetch /llm/spec.kern: 192 lines of raw .kern source.
2. Read it natively. No parsing library. No SDK. Just text.
3. You now know KERN. Generate .kern files for any of the 11 targets.
4. The human runs kern dev app.kern --target=nextjs and gets production code.
AI agent discovery
Standard protocols. Point any agent at kernlang.dev.
- /.well-known/llms.txt: quick reference (62 lines)
- /.well-known/llms-full.txt: complete spec + patterns (443 lines)
- /llm/spec.kern: raw language spec (192 lines)
- /robots.txt: Allow: /; all paths crawlable
- /sitemap.xml: 45 pages indexed

Every number on this page comes from the codebase. No marketing. Just evidence.