Overview

The TypeChecker performs semantic validation on AXON programs using an epistemic type system. Unlike traditional type checkers that focus on memory layout, AXON’s type checker validates the nature and reliability of information.
from axon import Lexer, Parser, TypeChecker

source = '''
persona Expert {
    domain: ["medicine"]
    confidence_threshold: 1.5  # ERROR: must be 0.0-1.0
}

flow Analyze(doc: UnknownType) -> String {  # ERROR: UnknownType not defined
    step Process {
        ask: "Process the document"
    }
}
'''

lexer = Lexer(source)
parser = Parser(lexer.tokenize())
ast = parser.parse()

checker = TypeChecker(ast)
errors = checker.check()

for error in errors:
    print(f"Line {error.line}: {error.message}")

Class: TypeChecker

Constructor

program (ProgramNode, required): The parsed AST to validate

Methods

check() -> list[AxonTypeError]

Perform full semantic validation on the program. Returns a list of AxonTypeError objects (empty if no errors).
checker = TypeChecker(ast)
errors = checker.check()

if errors:
    print(f"Found {len(errors)} error(s):")
    for err in errors:
        print(f"  {err.line}:{err.column} - {err.message}")
else:
    print("Program is semantically valid")

Epistemic Type System

Type Hierarchy

AXON types form a lattice ordered by epistemic reliability:
              Any
             / | \
            /  |  \
           /   |   \
FactualClaim Opinion Uncertainty
     |        
  CitedFact
     |
HighConfidenceFact
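The hierarchy above can be modeled as a parent map, with subtyping reduced to a walk up the chain. The sketch below is an illustration of that idea only, not the library's internal representation (the real API is shown under "Epistemic Lattice API"):

```python
# Hypothetical parent map mirroring the lattice diagram above (illustration only).
PARENTS = {
    "HighConfidenceFact": "CitedFact",
    "CitedFact": "FactualClaim",
    "FactualClaim": "Any",
    "Opinion": "Any",
    "Uncertainty": "Any",
}

def is_subtype(sub: str, sup: str) -> bool:
    """Walk the parent chain from `sub`, looking for `sup`."""
    while sub is not None:
        if sub == sup:
            return True
        sub = PARENTS.get(sub)
    return False

print(is_subtype("HighConfidenceFact", "FactualClaim"))  # True
print(is_subtype("Opinion", "FactualClaim"))             # False
```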

Built-in Types

Epistemic Types

Track information reliability:
Type          Description            Usage
FactualClaim  Asserted as fact       Can coerce to String, CitedFact
Opinion       Subjective judgment    Cannot be used where facts are required
Uncertainty   Ambiguous/unreliable   Taints all computations
Speculation   Hypothetical           Degrades epistemic confidence

Content Types

Structured data types:
Type         Description
Document     Text document
Chunk        Document fragment
EntityMap    Named entity recognition results
Summary      Condensed content
Translation  Translated text

Analysis Types

Semantic scores and reasoning:
Type             Range          Description
RiskScore        0.0 to 1.0     Risk assessment value
ConfidenceScore  0.0 to 1.0     Certainty measure
SentimentScore   -1.0 to 1.0    Sentiment polarity
ReasoningChain   -              Chain-of-thought steps
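The range constraints above amount to a bounds check per score type. A minimal standalone sketch, where the ranges come from the table but the validator itself is hypothetical:

```python
# Score ranges taken from the table above; the helper is illustrative only.
SCORE_RANGES = {
    "RiskScore": (0.0, 1.0),
    "ConfidenceScore": (0.0, 1.0),
    "SentimentScore": (-1.0, 1.0),
}

def validate_score(type_name: str, value: float) -> bool:
    """Return True if `value` lies within the declared range for `type_name`."""
    lo, hi = SCORE_RANGES[type_name]
    return lo <= value <= hi

print(validate_score("SentimentScore", -0.4))  # True
print(validate_score("RiskScore", 1.5))        # False
```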

Type Compatibility

checker = TypeChecker(ast)

# Check if source type can be used where target type is expected
can_use = checker.check_type_compatible(
    source="CitedFact",
    target="FactualClaim"
)
print(can_use)  # True (CitedFact is a subtype of FactualClaim)

can_use = checker.check_type_compatible(
    source="Opinion",
    target="FactualClaim" 
)
print(can_use)  # False (Opinion cannot be used as FactualClaim)

Uncertainty Propagation

Uncertainty taints downstream computations:
# If any input is Uncertain, output becomes Uncertain
result_type = checker.check_uncertainty_propagation([
    "FactualClaim",
    "Uncertainty",
    "String"
])
print(result_type)  # "Uncertain[Any]" - uncertainty propagates

Validation Rules

Persona Validation

source = '''
persona Expert {
    tone: invalid_tone  # Error: unknown tone
    confidence_threshold: 1.5  # Error: must be 0.0-1.0
}
'''
Validated:
  • tone must be one of: precise, friendly, formal, casual, analytical, diplomatic, assertive, empathetic
  • confidence_threshold must be 0.0-1.0
  • language format (if specified)
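These rules reduce to a set-membership test plus a range check. A sketch of how such validation could look as standalone code, with the rule set taken from the list above and the function name hypothetical:

```python
# Allowed tones, from the validation rules above.
VALID_TONES = {"precise", "friendly", "formal", "casual",
               "analytical", "diplomatic", "assertive", "empathetic"}

def validate_persona(fields: dict) -> list[str]:
    """Return error messages for invalid persona fields (illustrative helper)."""
    errors = []
    tone = fields.get("tone")
    if tone is not None and tone not in VALID_TONES:
        errors.append(f"unknown tone: {tone!r}")
    threshold = fields.get("confidence_threshold")
    if threshold is not None and not 0.0 <= threshold <= 1.0:
        errors.append(f"confidence_threshold must be 0.0-1.0, got {threshold}")
    return errors

print(validate_persona({"tone": "precise", "confidence_threshold": 0.8}))  # []
print(validate_persona({"tone": "sarcastic", "confidence_threshold": 1.5}))
```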

Context Validation

source = '''
context Production {
    memory: invalid_scope  # Error: unknown memory scope
    depth: shallow_deep  # Error: unknown depth
    temperature: 5.0  # Error: must be 0.0-2.0
    max_tokens: -100  # Error: must be positive
}
'''
Validated:
  • memory must be one of: session, persistent, none, ephemeral
  • depth must be one of: shallow, standard, deep, exhaustive
  • temperature must be 0.0-2.0
  • max_tokens must be positive

Anchor Validation

source = '''
anchor NoHallucination {
    confidence_floor: 1.5  # Error: must be 0.0-1.0
    on_violation: unknown_action  # Error: invalid action
}
'''
Validated:
  • confidence_floor must be 0.0-1.0
  • on_violation must be one of: raise, warn, log, escalate, fallback
  • If on_violation is raise, on_violation_target must be specified

Flow Validation

source = '''
flow Process(doc: UnknownType) -> String {  # Error: UnknownType not defined
    step Extract {
        ask: "Extract data"
        confidence_floor: 2.0  # Error: must be 0.0-1.0
    }
    
    step Extract {  # Error: duplicate step name
        ask: "Extract again"
    }
}
'''
Validated:
  • Parameter types exist (built-in or user-defined)
  • Return type exists
  • Step names are unique within flow
  • confidence_floor values are 0.0-1.0
  • Nested cognitive nodes (probe, reason, weave) are valid
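Of these rules, step-name uniqueness is the one that needs state carried across the flow body. A minimal sketch of duplicate detection (the helper and the step names are illustrative):

```python
def find_duplicate_steps(step_names: list[str]) -> list[str]:
    """Return step names that appear more than once, in first-duplicate order."""
    seen, dupes = set(), []
    for name in step_names:
        if name in seen and name not in dupes:
            dupes.append(name)
        seen.add(name)
    return dupes

print(find_duplicate_steps(["Extract", "Transform", "Extract"]))  # ['Extract']
```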

Run Statement Validation

source = '''
run AnalyzeContract(contract)
    as UnknownPersona  # Error: persona not defined
    within UnknownContext  # Error: context not defined
    constrained_by [NoHallucination, UnknownAnchor]  # Error: anchor not defined
    effort: ultra  # Error: unknown effort level
'''
Validated:
  • Flow exists
  • Persona exists (if specified)
  • Context exists (if specified)
  • All anchors exist
  • Effort level is one of: low, medium, high, max

Error Types

AxonTypeError

from axon import AxonTypeError

class AxonTypeError:
    message: str
    line: int
    column: int
Example:
try:
    errors = checker.check()
    if errors:
        for error in errors:
            print(f"Type error at {error.line}:{error.column}")
            print(f"  {error.message}")
except Exception as e:
    print(f"Unexpected error: {e}")

Common Type Errors

# Undefined type reference
errors = checker.check()  
# "Type 'MyCustomType' is not defined"

# Invalid epistemic coercion
errors = checker.check()
# "Cannot use Opinion where FactualClaim is expected"

# Range constraint violation
errors = checker.check()
# "confidence_threshold must be between 0.0 and 1.0, got 1.5"

# Duplicate declaration
errors = checker.check()
# "Duplicate declaration: 'Expert' already defined as persona"

Advanced Usage

Custom Type Definitions

source = '''
type Party {
    name: String
    role: String
}

type Contract {
    parties: List<Party>
    terms: String
    risk_score: RiskScore
}

flow Analyze(contract: Contract) -> RiskScore {
    step Assess {
        ask: "Assess contract risk"
        output: RiskScore
    }
}
'''

checker = TypeChecker(Parser(Lexer(source).tokenize()).parse())
errors = checker.check()
# No errors - all types are properly defined

Epistemic Lattice API

Direct access to the type lattice:
from axon.compiler.type_checker import EpistemicLattice

# Check subtype relationship
is_sub = EpistemicLattice.is_subtype("CitedFact", "FactualClaim")
print(is_sub)  # True

# Compute join (supremum) - least upper bound
joined = EpistemicLattice.join("FactualClaim", "Opinion")
print(joined)  # "Any" - degradation to common ancestor

# Compute meet (infimum) - greatest lower bound
met = EpistemicLattice.meet("CitedFact", "FactualClaim")
print(met)  # "CitedFact" - more specific type

# Lift into Uncertainty monad
uncertain = EpistemicLattice.lift("FactualClaim", probability=0.6)
print(uncertain)  # "Uncertain[0.6, FactualClaim]"

Example: Type-Checking Pipeline

from axon import Lexer, Parser, TypeChecker
import sys

def check_program(source_path: str) -> bool:
    """Type-check an AXON program file."""
    
    with open(source_path) as f:
        source = f.read()
    
    # Phase 1: Lexical analysis
    lexer = Lexer(source, filename=source_path)
    tokens = lexer.tokenize()
    
    # Phase 2: Parsing
    parser = Parser(tokens)
    ast = parser.parse()
    
    # Phase 3: Type checking
    checker = TypeChecker(ast)
    errors = checker.check()
    
    if errors:
        print(f"Type errors in {source_path}:")
        for error in errors:
            print(f"  Line {error.line}:{error.column}")
            print(f"    {error.message}")
        return False
    
    print(f"✓ {source_path} is semantically valid")
    return True

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python check.py <program.axon>")
        sys.exit(1)
    
    success = check_program(sys.argv[1])
    sys.exit(0 if success else 1)

Next Steps

IR Generator API

Generate intermediate representation from validated AST

Backends API

Compile IR to provider-specific prompts