kern test
Native testing — write your tests in .kern itself, against .kern source. KERN dogfoods its own runtime: every release is verified by the same test runner you use.
The format
A test file lives next to its target with the .test.kern extension. The target= attribute points at the .kern source under test. Tests are organized as test → it → expect.
# my-page.test.kern
test name="UserDirectory conformance" target="./my-page.kern"
fixture name=sampleUsers value={{[
{ id: "u1", name: "Ada", active: true },
{ id: "u2", name: "Lin", active: true },
]}}
it name="schema and semantics are valid"
expect no=schemaViolations
expect no=semanticViolations
it name="AST shape is what we expect"
expect node=class name=UserDirectory child=method childName=active
expect node=fn name=activeNames child=param count=1
it name="runtime expressions evaluate"
expect expr={{new UserDirectory(sampleUsers).count}} equals=2
expect fn=activeNames with=sampleUsers equals={{["Ada", "Lin"]}}What you can assert
What you can assert
- expect no=schemaViolations / no=semanticViolations / no=codegenErrors / no=duplicateNames — negative AST/codegen checks
- expect node=type name=X prop=Y is=Z — assert a specific node exists with a property value
- expect node=type name=X child=childType count=N — count children of a given kind
- expect expr={{...}} equals={{...}} — evaluate a runtime expression and compare
- expect fn=name with=arg equals=... — call a function and assert its return value
- expect derive=name equals=... — read a derived/computed value
- fixture name=X value={{...}} — define a named fixture for use in expect blocks
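The example file above leaves two of these forms unused: prop/is and derive. A sketch of how they could read inside the same test block, where title and activeCount are hypothetical names rather than part of the example target:

  it name="prop and derive assertions"
    # hypothetical: UserDirectory declares a title prop set to "Team"
    expect node=class name=UserDirectory prop=title is="Team"
    # hypothetical: the target exposes a derived activeCount value
    expect derive=activeCount equals=2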
Running tests
# Run a single file
kern test my-page.test.kern
# Run every .test.kern under a directory
kern test src/
# Watch mode — re-run when sources change
kern test src/ --watch
# CI mode — JSON output for downstream tooling
kern test src/ --json
# Filter by name
kern test src/ --grep "UserDirectory"
Flags
- --watch — re-run on file changes (chokidar)
- --bail — stop on first failure
- --grep <pattern> — filter test/it names by substring
- --json — emit a structured run summary (CI-friendly)
- --format compact / --compact — single line per test
- --fail-on-warn — treat warnings as failures
- --max-warnings <n> — cap allowed warnings; fail if exceeded
- --coverage — report rule coverage across the run
- --min-coverage <pct> — fail the run below the threshold
- --baseline <file> — diff warnings against a saved baseline; only new warnings fail
- --write-baseline <file> — snapshot the current warning set as a baseline
- --pass-with-no-tests — exit 0 when nothing matches (useful in monorepos)
- --list-rules — print every assertion rule the runner knows about
- --explain-rule <rule> — show what a single rule checks and how to fix failures
- --generate — scaffold a .test.kern next to a source file
- --outdir=<dir> — output directory for generated test scaffolds
- --dry-run — print what would happen without writing or executing
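Flags compose. A strict CI run might pair machine-readable output with warning and coverage gates; the combination below is illustrative, and pairing --min-coverage with --coverage is an assumption:

# Strict CI run: JSON summary, warnings fail the build,
# and rule coverage below 90% fails the run
kern test src/ --json --fail-on-warn --coverage --min-coverage 90

# Scaffold a test skeleton for a source file, previewing first
# (passing the source path alongside --generate is assumed, not documented above)
kern test my-page.kern --generate --dry-run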
Baselines
Snapshot today's warnings so the test run only fails on new warnings introduced by the diff. Same model as kern review's baseline.
# Capture current state
kern test src/ --write-baseline test-baseline.json
# Subsequent runs only fail on new warnings
kern test src/ --baseline test-baseline.json
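In CI, the two commands typically bracket a change: refresh the baseline on the main branch, then gate pull requests against it. A sketch, with the pipeline wiring left to you:

# On main, after each merge: refresh the snapshot
kern test src/ --write-baseline test-baseline.json

# On a PR branch: fail only on warnings the change introduced,
# with a JSON summary for the CI job to parse
kern test src/ --baseline test-baseline.json --json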
Why native
KERN's own conformance suite is written in .test.kern. The language is partially self-hosted: arrays, classes, control flow, effects, guards, routes, and tools are all verified by tests written in the same syntax they exercise. If the test runner breaks, KERN catches it before shipping.
Browse the conformance suite in the open-source repo: examples/native-test and the self-tests at packages/core/native-test.