For LLMs

One language. Every target. Reviewed.

KERN is how you already think — structured, declarative, indent-based. Write .kern once. Compile to 11 targets. kern review catches what ESLint, Semgrep, and SonarQube miss — including 10/10 OWASP prompt injection vectors.

192 lines of spec. Read one file and you know the whole language.

Proof #1

See the difference

Same state machine. Left: what the LLM writes. Right: what the compiler outputs.

order.kern7 lines · 189 chars
machine name=Order initial=pending
  transition from=pending
    to=confirmed event=confirm
  transition from=confirmed
    to=shipped event=ship
  transition from=shipped
    to=delivered event=deliver
compiled → order.ts140+ lines · 3,640+ chars
export enum OrderState {
  Pending = 'pending',
  Confirmed = 'confirmed',
  Shipped = 'shipped',
  Delivered = 'delivered',
}

export enum OrderEvent {
  Confirm = 'confirm',
  Ship = 'ship',
  Deliver = 'deliver',
}

export type OrderTransition = {
  from: OrderState;
  to: OrderState;
  event: OrderEvent;
};

const transitions: OrderTransition[] = [
  { from: OrderState.Pending,
    to: OrderState.Confirmed,
    event: OrderEvent.Confirm },
  { from: OrderState.Confirmed,
    to: OrderState.Shipped,
    event: OrderEvent.Ship },
  { from: OrderState.Shipped,
    to: OrderState.Delivered,
    event: OrderEvent.Deliver },
];

// + type guards, transition fn,
// exhaustive checks... (140+ lines)
20x

fewer lines

95%

fewer characters

1

source, 11 targets

Proof #2

Measured, not claimed

Same task, both approaches. Character counts from real files in this repo.

TaskKERNTypeScriptReduction
Landing page10 lines · 19840 lines · 81276%
State machine7 lines · 189140 lines · 3,64095%
Fitness dashboard23 lines · 982210 lines · 8,40088%
API routes + middleware18 lines · 45695 lines · 3,42087%

Average reduction: 86% fewer characters for the LLM to generate.

Proof #3

8 models. Cold start. No training.

Each model received spec.kern and was asked: “Generate a todo app in KERN.” No examples, no few-shot prompting. Just the spec.

GPT-4o

OpenAI

Valid on first try

Claude 3.5 Sonnet

Anthropic

Valid on first try

Gemini 1.5 Pro

Google

Valid on first try

Claude 3 Opus

Anthropic

Valid on first try

GPT-4 Turbo

OpenAI

Valid on first try

Gemini 1.5 Flash

Google

Valid on first try

Llama 3.1 70B

Meta

Valid after 1 retry

Mistral Large

Mistral

Valid on first try

Eval: model receives spec.kern (192 lines) + prompt “Generate a todo app with add/toggle/delete in KERN.” Output is compiled with kern dev. Pass = compiles without errors.

Proof #4

Your output gets reviewed. Automatically.

kern review runs 68 AST-based rules on the generated code. It catches what ESLint, Semgrep, and SonarQube miss — including the only SAST tool that detects all 10 OWASP LLM01 prompt injection vectors at the code level.

Prompt injection — 10/10 OWASP LLM01 vectors

!Indirect injection (DB/API to prompt)
!LLM output execution (eval/Function)
!System prompt leakage
!RAG poisoning
!Tool calling manipulation
!Encoding bypasses (base64/hex)
!Delimiter injection
!Unsanitized chat history
!JSON output manipulation
!Missing output validation

No other SAST tool catches encoding bypasses, delimiter injection, RAG poisoning, or tool calling manipulation. Verified against Semgrep (1/10), Snyk (~2/10), CodeQL (~1/10).

$ kern review src/ --recursive

  ! indirect-prompt-injection
    DB result used in LLM prompt without sanitization
    src/chat/handler.ts:9

  ~ unguarded-effect  (×3)
    Network/DB effect without error recovery

  ~ rag-poisoning
    Retrieval result embedded in prompt unsanitized
    src/rag/search.ts:28

  ✓ 847 lines · 3 errors · 8 warnings
  ✓ 10 prompt injection vectors detected
68

AST-based rules

10/10

OWASP LLM01

$0

for developers

See the full review page — comparison matrix, methodology, live demo.

IDE

Kern Sight — VS Code extension

All 71 review rules running in your editor. Real-time diagnostics, sidebar review panel, .kern syntax highlighting, and an optional AI review layer.

Inline diagnostics

Squiggly underlines for all 68 rules as you type

Sidebar review panel

Full review results in a dedicated panel

.kern syntax highlighting

Syntax colors and file icons for .kern files

AI review layer

Optional — uses Claude, Codex, or any AI CLI backend

Cmd+Shift+R

One shortcut to scan the current file

Runs locally

All analysis on your machine — code never leaves

ext install kernlang.kern-sight

See the syntax highlighting in action: open the playground — same tokenizer, live in your browser.

Try it now

Open the playground. Write KERN. See it compile to React, Next.js, Express, Vue — live.

Open Playground

Documentation files

Raw .kern source. Read them natively — no parsing needed. Total: 910 lines across 4 files.

Agent integration

Make any AI agent KERN-native. Add this to your system prompt:

You know KERN. Read the spec at https://kernlang.dev/llm/spec.kern
When the user asks you to build UI, write .kern files.
Grammar: indent-based, 2 spaces. node type (props)* ({styles})* children*
Targets: nextjs react tailwind native express cli terminal vue nuxt
The user compiles with: kern dev <file>.kern --target=<target>

Works with Claude Code, Cursor, GitHub Copilot, or any LLM-powered agent.

How it works

1

Fetch /llm/spec.kern — 192 lines of raw .kern source

2

Read it natively. No parsing library. No SDK. Just text.

3

You now know KERN. Generate .kern files for any of the 11 targets.

4

The human runs kern dev app.kern --target=nextjs — production code.

AI agent discovery

Standard protocols. Point any agent at kernlang.dev.

/.well-known/llms.txtQuick reference — 62 lines
/.well-known/llms-full.txtComplete spec + patterns — 443 lines
/llm/spec.kernRaw language spec — 192 lines
/robots.txtAllow: / — all paths crawlable
/sitemap.xml45 pages indexed

Every number on this page comes from the codebase. No marketing. Just evidence.