
Epistemic Type Checker

The AXON Type Checker validates semantic type constraints before IR generation. It enforces epistemic rules that distinguish between different categories of information.
Parsed AST → [Type Checker] → Validated AST → [IR Generator]

What Makes It Epistemic?

Unlike traditional type systems that track memory layout and control flow, AXON’s type system tracks the nature and reliability of information:
Traditional       Epistemic
-----------       ---------
String            FactualClaim
Object            Opinion
Number            Uncertainty
Control flow      Information flow
Key Principle: An Opinion can never satisfy a FactualClaim, even if they have the same string representation.

Type Checker Architecture

Overview

class TypeChecker:
    """Epistemic type checker for AXON programs.
    
    Validates:
      1. Name resolution — all referenced names are declared
      2. Type compatibility — epistemic rules are respected
      3. Semantic constraints — field values are valid
      4. Uncertainty propagation — Uncertainty taints downstream data
      5. Anchor completeness — required fields are present
      6. Run statement wiring — persona, context, anchors, flow all exist
    """

    def __init__(self, program: ProgramNode):
        self._program = program
        self._symbols = SymbolTable()
        self._errors: list[AxonTypeError] = []
        self._user_types: dict[str, TypeDefinition] = {}

    def check(self) -> list[AxonTypeError]:
        """Full type-check pass. Returns all semantic errors found."""
        self._errors = []

        # Phase 1: Register all declarations
        self._register_declarations()

        # Phase 2: Validate each declaration's body
        for decl in self._program.declarations:
            self._check_declaration(decl)

        return self._errors

Built-in Semantic Types

Epistemic Types (Mutually Exclusive)

EPISTEMIC_TYPES = frozenset({
    "FactualClaim",  # Verifiable, objective statements
    "Opinion",       # Subjective judgments
    "Uncertainty",   # Explicitly uncertain information
    "Speculation",   # Hypothetical reasoning
})
Rule: These types are mutually exclusive. An Opinion cannot be used where a FactualClaim is expected.

Content Types

CONTENT_TYPES = frozenset({
    "Document",
    "Chunk",
    "EntityMap",
    "Summary",
    "Translation",
})

Analysis Types (with Ranges)

ANALYSIS_TYPES = frozenset({
    "RiskScore",       # Range: 0.0..1.0
    "ConfidenceScore", # Range: 0.0..1.0
    "SentimentScore",  # Range: -1.0..1.0
    "ReasoningChain",
    "Contradiction",
})

RANGED_TYPES = {
    "RiskScore": (0.0, 1.0),
    "ConfidenceScore": (0.0, 1.0),
    "SentimentScore": (-1.0, 1.0),
}

The Epistemic Lattice

AXON uses a partial order lattice to define type subsumption relationships:
class EpistemicLattice:
    """Partial Order Lattice for AXON epistemic types.
    
    Defines the subsumption relationship (<=), join (supremum),
    and meet (infimum).
    """
    
    # Hierarchy dictionary: child -> parent
    _parents = {
        "HighConfidenceFact": "CitedFact",
        "CitedFact": "FactualClaim",
        "FactualClaim": "Any",
        "Opinion": "Any",
        "Speculation": "Any",
        "Uncertainty": "Any",
        "Any": None,
        "Never": None,
    }

    @classmethod
    def is_subtype(cls, t1: str, t2: str) -> bool:
        """True if t1 <= t2 (t1 can be used where t2 is expected)"""
        if t1 == "Never" or t2 == "Any":
            return True
        if t1 == "Any" or t2 == "Never":
            return False
        # Lattice traversal: walk t1's parent chain, looking for t2
        current: str | None = t1
        while current is not None:
            if current == t2:
                return True
            current = cls._parents.get(current)
        return False

Type Hierarchy

                           Any
             /        |          |          \
   FactualClaim    Opinion   Speculation   Uncertainty
         |
     CitedFact
         |
 HighConfidenceFact

Subsumption Examples

is_subtype("CitedFact", "FactualClaim")     # True
is_subtype("FactualClaim", "CitedFact")     # False
is_subtype("Opinion", "FactualClaim")       # False (mutually exclusive!)
is_subtype("FactualClaim", "String")        # True (special coercion)
is_subtype("RiskScore", "Float")            # True (special coercion)
is_subtype("Uncertainty", "FactualClaim")   # False (taint is handled by join, not subtyping)

Type Compatibility Checking

Basic Compatibility

def check_type_compatible(self, source: str, target: str) -> bool:
    """Check if source type can be used where target type is expected."""
    return EpistemicLattice.is_subtype(source, target)
Example validation:
flow Process(input: FactualClaim) -> Summary {
  step Analyze {
    given: input
    output: Opinion  # ❌ Type error: Opinion ≠ FactualClaim
  }
}

Uncertainty Propagation

Uncertainty is contagious — it propagates through computations:
def check_uncertainty_propagation(self, types: list[str]) -> str:
    """Apply join (supremum) operation across all inputs."""
    if not types:
        return "Never"  # join over the empty set is the bottom element
    result = types[0]
    for t in types[1:]:
        result = EpistemicLattice.join(result, t)
    return result
Example:
join("FactualClaim", "Opinion") → "Any"  # Degradation
join("CitedFact", "FactualClaim") → "FactualClaim"  # Supremum
join("FactualClaim", "Uncertainty") → "Uncertain[FactualClaim]"  # Taint
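The join results above can be reproduced with a standalone sketch. The Uncertainty special case and the _ancestors helper are assumptions inferred from the documented behavior, not the shipped implementation:

```python
# Child -> parent, mirroring EpistemicLattice._parents
_PARENTS = {
    "HighConfidenceFact": "CitedFact",
    "CitedFact": "FactualClaim",
    "FactualClaim": "Any",
    "Opinion": "Any",
    "Speculation": "Any",
    "Uncertainty": "Any",
    "Any": None,
}

def _ancestors(t: str) -> list[str]:
    """Chain from t up to the top of the lattice, inclusive."""
    chain = []
    node = t
    while node is not None:
        chain.append(node)
        node = _PARENTS.get(node)
    return chain

def join(t1: str, t2: str) -> str:
    """Least upper bound, with Uncertainty tainting its partner."""
    if t1 == t2:
        return t1
    if t1 == "Uncertainty":
        return f"Uncertain[{t2}]"  # taint instead of generalizing to Any
    if t2 == "Uncertainty":
        return f"Uncertain[{t1}]"
    upper = set(_ancestors(t2))
    for candidate in _ancestors(t1):  # first shared ancestor is the supremum
        if candidate in upper:
            return candidate
    return "Any"
```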

Graded Monads (Uncertainty)

AXON supports graded uncertainty types that track confidence levels:
# Type with explicit confidence tracking
Uncertain[0.7, FactualClaim]
Parsing:
@classmethod
def parse_monad(cls, type_name: str) -> tuple[str, str|None, float|None]:
    """Parse: Uncertain[0.7, FactualClaim] → ("Uncertainty", "FactualClaim", 0.7)"""
    if type_name.startswith("Uncertain[") and type_name.endswith("]"):
        inner = type_name[10:-1]
        parts = [p.strip() for p in inner.split(",")]
        if len(parts) == 2:
            return ("Uncertainty", parts[1], float(parts[0]))
        return ("Uncertainty", parts[0], None)
    return (type_name, None, None)
Lifting operation:
@classmethod
def lift(cls, type_name: str, probability: float | None = None) -> str:
    """Lift a type into the Uncertainty monad."""
    if probability is not None:
        return f"Uncertain[{probability}, {type_name}]"
    return f"Uncertain[{type_name}]"

Validation Rules

1. Name Resolution

Rule: All referenced names must be declared.
def _check_run(self, node: RunStatement) -> None:
    # Flow must exist
    if node.flow_name:
        sym = self._symbols.lookup(node.flow_name)
        if sym is None:
            self._emit(
                f"Undefined flow '{node.flow_name}' in run statement",
                node
            )
        elif sym.kind != "flow":
            self._emit(
                f"'{node.flow_name}' is a {sym.kind}, not a flow",
                node
            )

2. Persona Field Validation

Rule: Tone must be from the valid set.
VALID_TONES = frozenset({
    "precise", "friendly", "formal", "casual", "analytical",
    "diplomatic", "assertive", "empathetic",
})

def _check_persona(self, node: PersonaDefinition) -> None:
    if node.tone and node.tone not in VALID_TONES:
        self._emit(
            f"Unknown tone '{node.tone}'. "
            f"Valid tones: {', '.join(sorted(VALID_TONES))}",
            node
        )

3. Confidence Threshold Validation

Rule: Confidence must be in range [0.0, 1.0].
def _check_range(
    self, value: float, lo: float, hi: float, field_name: str, node: ASTNode
) -> None:
    if value < lo or value > hi:
        self._emit(
            f"{field_name} must be between {lo} and {hi}, got {value}",
            node
        )

4. Anchor Completeness

Rule: Anchors with on_violation: raise must specify an error type.
def _check_anchor(self, node: AnchorConstraint) -> None:
    if node.on_violation == "raise" and not node.on_violation_target:
        self._emit(
            f"Anchor '{node.name}' uses 'raise' but no error type specified",
            node
        )

5. Flow Step Dependencies

Rule: Step names must be unique within a flow.
def _check_step(self, node: StepNode, step_names: set[str], flow_name: str) -> None:
    if node.name in step_names:
        self._emit(
            f"Duplicate step name '{node.name}' in flow '{flow_name}'",
            node
        )
    step_names.add(node.name)

Symbol Table

Structure

@dataclass
class Symbol:
    """A named entity in the AXON program."""
    name: str
    kind: str  # "persona" | "context" | "anchor" | "flow" | "type" | ...
    node: ASTNode | None = None
    type_name: str = ""  # resolved type name for flows/intents

@dataclass
class SymbolTable:
    """Registry of all declared names in an AXON program."""
    symbols: dict[str, Symbol] = field(default_factory=dict)

    def declare(self, name: str, kind: str, node: ASTNode, type_name: str = "") -> str | None:
        """Register a name. Returns an error message if duplicate."""
        if name in self.symbols:
            existing = self.symbols[name]
            return (
                f"Duplicate declaration: '{name}' already defined as {existing.kind}"
            )
        self.symbols[name] = Symbol(name=name, kind=kind, node=node, type_name=type_name)
        return None

Registration Phase

def _register_declarations(self) -> None:
    """First pass: collect all names so forward references work."""
    for decl in self._program.declarations:
        match decl:
            case PersonaDefinition(name=name):
                self._register(name, "persona", decl)
            case FlowDefinition(name=name):
                ret = decl.return_type.name if decl.return_type else ""
                self._register(name, "flow", decl, type_name=ret)
            case AnchorConstraint(name=name):
                self._register(name, "anchor", decl)
            # ...

Error Reporting

AxonTypeError Structure

class AxonTypeError(AxonError):
    """Raised when semantic type validation fails."""
    def __init__(
        self,
        message: str,
        line: int = 0,
        column: int = 0,
        expected_type: str = "",
        found_type: str = "",
    ):
        super().__init__(message, line, column)
        self.expected_type = expected_type
        self.found_type = found_type

Example Error Output

Error: Epistemic type mismatch at line 15, column 5
  Expected: FactualClaim
  Found:    Opinion
  These types are mutually exclusive.
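Output like the above could come from a formatter along these lines. format_type_error and the mutual-exclusion hint are illustrative sketches, not the shipped API:

```python
EPISTEMIC_TYPES = {"FactualClaim", "Opinion", "Uncertainty", "Speculation"}

class AxonError(Exception):
    def __init__(self, message: str, line: int = 0, column: int = 0):
        super().__init__(message)
        self.message = message
        self.line = line
        self.column = column

class AxonTypeError(AxonError):
    def __init__(self, message, line=0, column=0, expected_type="", found_type=""):
        super().__init__(message, line, column)
        self.expected_type = expected_type
        self.found_type = found_type

def format_type_error(err: AxonTypeError) -> str:
    """Render an AxonTypeError in the style shown above."""
    lines = [f"Error: {err.message} at line {err.line}, column {err.column}"]
    if err.expected_type:
        lines.append(f"  Expected: {err.expected_type}")
    if err.found_type:
        lines.append(f"  Found:    {err.found_type}")
    # Add the hint only when both sides are core epistemic types
    if err.expected_type in EPISTEMIC_TYPES and err.found_type in EPISTEMIC_TYPES:
        lines.append("  These types are mutually exclusive.")
    return "\n".join(lines)

print(format_type_error(AxonTypeError(
    "Epistemic type mismatch", line=15, column=5,
    expected_type="FactualClaim", found_type="Opinion",
)))
```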

Validation Workflow

# 1. Parse source
tokens = Lexer(source).tokenize()
ast = Parser(tokens).parse()

# 2. Type check
type_checker = TypeChecker(ast)
errors = type_checker.check()

if errors:
    for err in errors:
        print(f"Error at {err.line}:{err.column}: {err.message}")
    exit(1)

# 3. Generate IR (only if type-check passes)
ir_generator = IRGenerator()
ir_program = ir_generator.generate(ast)

Next Steps

AST to IR

See how validated AST is lowered to model-agnostic IR

Runtime Validator

Learn how semantic types are enforced at runtime
