What is AXON?

AXON is a compiled language that targets LLMs instead of CPUs. It has a formal EBNF grammar, a lexer, parser, AST, intermediate representation, multiple compiler backends (Anthropic, OpenAI, Gemini, Ollama), and a runtime with semantic type checking, retry engines, and execution tracing. It is not a Python library, a LangChain wrapper, or a YAML DSL.
AXON is currently in alpha status (v0.4.0) with 731 passing tests. The language is under active development.

A Real Example

Here’s a complete AXON program that analyzes legal contracts:
persona LegalExpert {
    domain: ["contract law", "IP", "corporate"]
    tone: precise
    confidence_threshold: 0.85
    refuse_if: [speculation, unverifiable_claim]
}

anchor NoHallucination {
    require: source_citation
    confidence_floor: 0.75
    unknown_response: "Insufficient information"
}

flow AnalyzeContract(doc: Document) -> StructuredReport {
    step Extract {
        probe doc for [parties, obligations, dates, penalties]
        output: EntityMap
    }
    step Assess {
        reason {
            chain_of_thought: enabled
            given: Extract.output
            ask: "Are there ambiguous or risky clauses?"
            depth: 3
        }
        output: RiskAnalysis
    }
    step Check {
        validate Assess.output against: ContractSchema
        if confidence < 0.8 -> refine(max_attempts: 2)
        output: ValidatedAnalysis
    }
    step Report {
        weave [Extract.output, Check.output]
        format: StructuredReport
        include: [summary, risks, recommendations]
    }
}
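The dataflow of this flow can be sketched in plain Python, with stub functions standing in for the LLM-backed steps. This is illustrative only: the function names and return values below are hypothetical stand-ins, not AXON's API.

```python
# Illustrative dataflow sketch of the AnalyzeContract flow above.
# Each stub stands in for an LLM-backed step; all values are hypothetical.
def extract(doc):
    # step Extract: probe the document for the listed entities
    return {"parties": ["Acme", "Beta"], "obligations": [], "dates": [], "penalties": []}

def assess(entities):
    # step Assess: chain-of-thought risk analysis over the extracted entities
    return {"risks": ["ambiguous clause"], "confidence": 0.9}

def check(analysis, threshold=0.8):
    # step Check: validate; below threshold, AXON would invoke refine(max_attempts: 2)
    if analysis["confidence"] < threshold:
        raise ValueError("low confidence: refine would be triggered here")
    return analysis

def report(entities, validated):
    # step Report: weave the Extract and Check outputs into one structure
    return {"summary": "contract overview", "risks": validated["risks"], "recommendations": []}

def analyze_contract(doc):
    entities = extract(doc)
    validated = check(assess(entities))
    return report(entities, validated)
```

Note how each step consumes the outputs of earlier steps by name, exactly as `Assess` reads `Extract.output` and `Report` weaves `Extract.output` with `Check.output` above.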

Architecture Overview

AXON follows a traditional compiler architecture:
.axon source
    → Lexer → Tokens
    → Parser → AST
    → Type Checker (semantic validation)
    → IR Generator → AXON IR (JSON-serializable)
    → Backend (Anthropic │ OpenAI │ Gemini │ Ollama)
    → Runtime (Executor + Validators + Tracer)
    → Typed Output (validated, traced result)
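A toy Python model of these stages shows how each one feeds the next. All of the names below are hypothetical; this is the shape of the pipeline, not AXON's actual compiler API.

```python
# Toy model of the compiler stages above (hypothetical names, not AXON's API).
import json

def lex(source):
    return source.split()                           # source → tokens

def parse(tokens):
    return {"type": "Program", "body": tokens}      # tokens → AST

def typecheck(ast):
    assert ast["type"] == "Program"                 # semantic validation
    return ast

def lower(ast):
    return {"ir_version": 1, "nodes": ast["body"]}  # AST → JSON-serializable IR

def backend(ir, target="anthropic"):
    # a backend turns the IR into a provider-specific prompt/plan
    return f"[{target}] plan for {len(ir['nodes'])} nodes"

def compile_axon(source, target="anthropic"):
    ir = lower(typecheck(parse(lex(source))))
    json.dumps(ir)                                  # the IR must round-trip as JSON
    return backend(ir, target)
```

Because the IR is JSON-serializable, the same compiled program can be handed to any of the four backends.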

12 Cognitive Primitives

AXON’s language constructs map directly to how AI models think:

persona:  Cognitive identity of the model
context:  Working memory / session config
intent:   Atomic semantic instruction
flow:     Composable pipeline of cognitive steps
reason:   Explicit chain-of-thought
anchor:   Hard constraint (never violable)
validate: Semantic validation gate
refine:   Adaptive retry with failure context
memory:   Persistent semantic storage
tool:     External invocable capability
probe:    Directed information extraction
weave:    Semantic synthesis of multiple outputs

Epistemic Type System

AXON implements an epistemic type system based on a partial-order lattice that encodes formal subsumption relationships:

⊤ (Any)
 ├── FactualClaim
 │   └── CitedFact
 │       └── HighConfidenceFact
 ├── Opinion
 ├── Uncertainty   ← propagates upward (taint)
 └── Speculation
⊥ (Never)

Rule of Subsumption: if T₁ ≤ T₂, then a value of type T₁ can be used wherever T₂ is expected. For instance, a CitedFact naturally satisfies a FactualClaim dependency, but an Opinion never can.
Computations involving Uncertainty structurally taint their result, propagating Uncertainty forward to guarantee epistemic honesty throughout the execution flow.
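A minimal Python model makes the subsumption and taint rules concrete. This is illustrative only; AXON's runtime does not expose these functions, and the join rule here is a simplification.

```python
# Illustrative model of the epistemic lattice above (not AXON's runtime API).
# Each type maps to its parent; ⊤ (Any) is the root.
PARENT = {
    "HighConfidenceFact": "CitedFact",
    "CitedFact": "FactualClaim",
    "FactualClaim": "Any",
    "Opinion": "Any",
    "Uncertainty": "Any",
    "Speculation": "Any",
}

def subsumes(t1, t2):
    """True if t1 ≤ t2, i.e. a value of t1 is usable where t2 is expected."""
    while t1 is not None:
        if t1 == t2:
            return True
        t1 = PARENT.get(t1)  # walk up toward Any
    return False

def combine(*types):
    """Taint rule: any computation touching Uncertainty yields Uncertainty.
    Otherwise join to the common type (simplified here to Any when mixed)."""
    if "Uncertainty" in types:
        return "Uncertainty"
    return types[0] if len(set(types)) == 1 else "Any"
```

Under this model, `subsumes("CitedFact", "FactualClaim")` holds while `subsumes("Opinion", "FactualClaim")` does not, and combining any type with Uncertainty yields Uncertainty.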

How AXON Compares

Compared with LangChain, DSPy, and Guidance, AXON is the only one of the four to offer all of the following (features marked "partial elsewhere" have limited analogues in some of the other frameworks):
  - Own language + grammar
  - Semantic type system (partial elsewhere)
  - Formal anchors
  - Persona as type
  - Reasoning as primitive (partial elsewhere)
  - Native multi-model (partial elsewhere)

Design Principles

AXON is built on five core principles:
  1. Declarative over imperative — describe what, not how
  2. Semantic over syntactic — types carry meaning, not layout
  3. Composable cognition — blocks compose like neurons
  4. Configurable determinism — spectrum from exploration to precision
  5. Failure as first-class citizen — retry, refine, fallback are native

Runtime Self-Healing

AXON features a native self-healing mechanism for its semantic gates. When an LLM output violates a hard constraint (AnchorBreachError) or fails structural semantic validation (ValidationError), the AXON RetryEngine automatically intercepts the failure.
Instead of crashing, the engine re-injects the exact failure_context into the LLM's next prompt. This creates a closed feedback loop in which the model adaptively corrects its logic and structurally self-heals in real time.
The correction loop strictly respects the refine limits: if the model fails to heal within the permitted attempts, AXON raises a RefineExhaustedError to prevent infinite execution loops.
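The correction loop can be sketched as follows. This is a conceptual Python model in the spirit of the RetryEngine, not AXON's actual implementation; the exception classes are defined locally to mirror the names in the text.

```python
# Conceptual sketch of a self-healing retry loop (not AXON's RetryEngine API).
class ValidationError(Exception): pass
class RefineExhaustedError(Exception): pass

def refine_loop(call_model, validate, prompt, max_attempts=2):
    """Run the model, validate, and on failure re-inject the failure context.

    max_attempts counts the additional refine attempts after the first call,
    mirroring refine(max_attempts: 2) in the example flow.
    """
    failure_context = None
    for _ in range(1 + max_attempts):
        augmented = prompt if failure_context is None else (
            f"{prompt}\n\nPrevious attempt failed: {failure_context}\nCorrect it."
        )
        output = call_model(augmented)
        try:
            validate(output)     # semantic gate
            return output
        except ValidationError as err:
            failure_context = str(err)  # re-injected into the next prompt
    raise RefineExhaustedError(failure_context)
```

The key design point is that the failure is fed back as data: the next prompt carries the exact reason the previous output was rejected, and the loop is bounded so exhaustion surfaces as a typed error rather than an infinite loop.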

Next Steps

Installation: install AXON and set up your development environment.

Quickstart: build your first AXON program in minutes.
