Overview
The Parser class implements a recursive descent parser that transforms a token stream into an Abstract Syntax Tree (AST). The AST represents the hierarchical structure of an AXON program using cognitive nodes.
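Recursive descent means each grammar rule becomes a function that consumes tokens and returns a node; nested rules become nested calls, which is what produces the tree shape. A minimal, self-contained sketch of the technique (illustrative only; the token and node shapes here are not the actual Parser internals):

```python
from dataclasses import dataclass

@dataclass
class Node:
    kind: str
    children: list

def parse_list(tokens):
    """Parse a parenthesized token list like ['(', 'a', 'b', ')'] into a Node tree."""
    pos = 0

    def parse():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            children = []
            while tokens[pos] != ")":
                children.append(parse())  # recurse for each nested element
            pos += 1  # consume ')'
            return Node("list", children)
        return Node("atom", [tok])

    return parse()

tree = parse_list(["(", "a", "(", "b", ")", ")"])
print(tree.kind)           # "list"
print(len(tree.children))  # 2
```

The same shape scales up: the real parser has one such function per AXON construct (flow, step, persona, and so on), each returning its corresponding AST node.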
from axon import Lexer, Parser

source = '''
flow Analyze(doc: Document) -> Summary {
    step Extract {
        ask: "Extract key points"
    }
}
'''

lexer = Lexer(source)
tokens = lexer.tokenize()
parser = Parser(tokens)
ast = parser.parse()

print(f"Declarations: {len(ast.declarations)}")
for decl in ast.declarations:
    print(f"  {decl.__class__.__name__}: {decl.name}")
Class: Parser
Constructor
Parser(tokens)
tokens: The token stream from the Lexer
Methods
parse() -> ProgramNode
Parse the full program and return the root AST node.
Returns: ProgramNode containing all top-level declarations
Raises: AxonParseError if syntax errors are encountered
parser = Parser(tokens)
program = parser.parse()

for declaration in program.declarations:
    if isinstance(declaration, FlowDefinition):
        print(f"Found flow: {declaration.name}")
        print(f"  Parameters: {len(declaration.parameters)}")
        print(f"  Body steps: {len(declaration.body)}")
AST Node Types
Top-Level Declarations
ProgramNode
The root of every AST.
class ProgramNode:
    line: int
    column: int
    declarations: list[ASTNode]
PersonaDefinition
Defines an AI persona with cognitive parameters.
class PersonaDefinition:
    name: str
    domain: list[str]
    tone: str
    confidence_threshold: float | None
    cite_sources: bool
    refuse_if: list[str]
    language: str
    description: str
    line: int
    column: int
Example:
from axon import Lexer, Parser
from axon.compiler.ast_nodes import PersonaDefinition

source = '''
persona LegalExpert {
    domain: ["contract law"]
    tone: analytical
    confidence_threshold: 0.8
}
'''

parser = Parser(Lexer(source).tokenize())
ast = parser.parse()

persona = ast.declarations[0]
assert isinstance(persona, PersonaDefinition)
print(persona.name)    # "LegalExpert"
print(persona.domain)  # ["contract law"]
print(persona.tone)    # "analytical"
ContextDefinition
Configures execution environment.
class ContextDefinition:
    name: str
    memory_scope: str
    language: str
    depth: str
    max_tokens: int | None
    temperature: float | None
    cite_sources: bool
    line: int
    column: int
FlowDefinition
Defines a cognitive flow with parameters and steps.
class FlowDefinition:
    name: str
    parameters: list[ParameterNode]
    return_type: TypeExprNode | None
    body: list[ASTNode]
    line: int
    column: int
Example:
source = '''
flow ProcessContract(contract: Document) -> Analysis {
    step Extract {
        ask: "Extract contract clauses"
    }
    step Analyze {
        given: Extract.output
        ask: "Analyze for risks"
    }
}
'''

parser = Parser(Lexer(source).tokenize())
ast = parser.parse()

flow = ast.declarations[0]
print(flow.name)                # "ProcessContract"
print(len(flow.parameters))     # 1
print(flow.parameters[0].name)  # "contract"
print(flow.return_type.name)    # "Analysis"
print(len(flow.body))           # 2 steps
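The `given: Extract.output` reference wires one step's output into the next. A toy executor illustrates that chaining; this is a hypothetical sketch of the data flow, not the AXON runtime (the step functions and `run_flow` helper are invented for illustration):

```python
def run_flow(steps):
    """steps: list of (name, given, fn). A `given` of 'X.output' feeds step X's result in."""
    outputs = {}
    for name, given, fn in steps:
        # 'Extract.output' -> look up the 'Extract' entry; None means no input.
        arg = outputs[given.split(".")[0]] if given else None
        outputs[name] = fn(arg)
    return outputs

result = run_flow([
    ("Extract", None, lambda _: "clauses"),
    ("Analyze", "Extract.output", lambda prev: f"risks in {prev}"),
])
print(result["Analyze"])  # "risks in clauses"
```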
Cognitive Step Nodes
StepNode
A single cognitive step in a flow.
class StepNode:
    name: str
    given: str
    ask: str
    use_tool: UseToolNode | None
    probe: ProbeDirective | None
    reason: ReasonChain | None
    weave: WeaveNode | None
    output_type: str
    confidence_floor: float | None
    body: list[ASTNode]
    line: int
    column: int
ReasonChain
Multi-step reasoning directive.
class ReasonChain:
    name: str
    about: str
    given: str
    ask: str
    depth: int
    show_work: bool
    chain_of_thought: bool
    output_type: str
    line: int
    column: int
Example:
source = '''
reason about ContractValidity {
    given: contract_text
    ask: "Is this contract valid?"
    depth: 3
    show_work: true
}
'''

parser = Parser(Lexer(source).tokenize())
# The parser handles reason as part of a flow step
ValidateGate
Constraint validation directive.
class ValidateGate:
    target: str
    schema: str
    rules: list[ValidateRule]
    line: int
    column: int

class ValidateRule:
    condition: str
    comparison_op: str
    comparison_value: str
    action: str
    action_target: str
    action_params: dict[str, str]
    line: int
    column: int
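A ValidateRule pairs a comparison operator with a value, both stored as strings. How evaluation of such a rule might look can be sketched with a small helper; the operator set and the float coercion here are illustrative assumptions, not the validator's actual semantics:

```python
import operator

# Hypothetical mapping from comparison_op strings to Python comparison functions.
OPS = {
    ">=": operator.ge,
    "<=": operator.le,
    "==": operator.eq,
    ">": operator.gt,
    "<": operator.lt,
}

def check_rule(value, comparison_op, comparison_value):
    """Return True when `value` satisfies the rule's comparison."""
    return OPS[comparison_op](value, float(comparison_value))

print(check_rule(0.9, ">=", "0.8"))  # True
print(check_rule(0.5, ">=", "0.8"))  # False
```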
WeaveNode
Synthesis directive that merges multiple outputs.
class WeaveNode:
    sources: list[str]
    target: str
    format_type: str
    priority: list[str]
    style: str
    line: int
    column: int
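A weave merges the named source outputs into one result, with the priority list controlling ordering. As a pure sketch of what priority-ordered merging could look like (the actual synthesis is performed by the runtime; the `weave` helper below is invented for illustration):

```python
def weave(outputs: dict, sources: list, priority: list) -> str:
    """Concatenate source outputs, with priority-listed sources placed first."""
    ordered = [s for s in priority if s in sources]
    ordered += [s for s in sources if s not in priority]
    return "\n".join(outputs[s] for s in ordered)

outputs = {"Extract": "key points", "Analyze": "risk notes"}
print(weave(outputs, ["Extract", "Analyze"], priority=["Analyze"]))
# risk notes
# key points
```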
Type System Nodes
TypeDefinition
Defines a custom semantic type.
class TypeDefinition:
    name: str
    range_constraint: RangeConstraint | None
    where_clause: WhereClause | None
    fields: list[TypeFieldNode]
    line: int
    column: int

class TypeFieldNode:
    name: str
    type_expr: TypeExprNode
    line: int
    column: int

class TypeExprNode:
    name: str
    generic_param: str
    optional: bool
    line: int
    column: int
Example:
source = '''
type RiskScore (0.0..1.0) where value >= 0
type Contract {
    parties: List<Party>
    terms: String
    valid: Boolean
}
'''

parser = Parser(Lexer(source).tokenize())
ast = parser.parse()

risk_score = ast.declarations[0]
print(risk_score.range_constraint.min_value)  # 0.0
print(risk_score.range_constraint.max_value)  # 1.0

contract = ast.declarations[1]
print(len(contract.fields))                        # 3
print(contract.fields[0].name)                     # "parties"
print(contract.fields[0].type_expr.name)           # "List"
print(contract.fields[0].type_expr.generic_param)  # "Party"
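The `List<Party>` decomposition in the example can be mirrored with a local dataclass to make the name/generic_param split concrete. This is an illustrative stand-in for TypeExprNode, not the class from axon.compiler.ast_nodes:

```python
from dataclasses import dataclass

@dataclass
class TypeExpr:
    """Illustrative mirror of TypeExprNode's type-shape fields."""
    name: str
    generic_param: str = ""
    optional: bool = False

# parties: List<Party>  ->  base name "List" with generic parameter "Party"
parties = TypeExpr(name="List", generic_param="Party")
print(parties.name)           # "List"
print(parties.generic_param)  # "Party"
```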
MemoryDefinition
Configures a named memory store (backend, retrieval strategy, decay).
class MemoryDefinition:
    name: str
    store: str
    backend: str
    retrieval: str
    decay: str
    line: int
    column: int
ToolDefinition
Configures an external tool binding.
class ToolDefinition:
    name: str
    provider: str
    max_results: int | None
    filter_expr: str
    timeout: str
    runtime: str
    sandbox: bool
    line: int
    column: int
Error Handling
AxonParseError
Raised when syntax errors are detected:
from axon import AxonParseError

try:
    parser = Parser(tokens)
    ast = parser.parse()
except AxonParseError as e:
    print(f"Parse error at line {e.line}, col {e.column}")
    print(f"  {e.message}")
    print(f"  Expected: {e.expected}")
    print(f"  Found: {e.found}")
Attributes:
message: str - Error description
line: int - Line number
column: int - Column number
expected: str - What the parser expected
found: str - What was actually found
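Structured errors with position and expected/found context are straightforward to model. A minimal sketch of an exception carrying the same attributes (an illustrative stand-in, not the library's actual AxonParseError implementation):

```python
class AxonStyleParseError(Exception):
    """Illustrative exception carrying the same attributes as AxonParseError."""

    def __init__(self, message, line, column, expected, found):
        super().__init__(f"{message} at line {line}, col {column}")
        self.message = message
        self.line = line
        self.column = column
        self.expected = expected
        self.found = found

err = AxonStyleParseError("Unexpected token", line=3, column=14,
                          expected="':'", found="'['")
print(err)  # Unexpected token at line 3, col 14
```

Keeping the formatted position in the base Exception message means plain `print(e)` and logging both stay useful even when callers ignore the structured attributes.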
Common Parse Errors
# Missing colon
source = 'persona Expert { domain ["law"] }'
# AxonParseError: Expected ':', found '['

# Unmatched braces
source = 'flow Test() { step A { ask: "test" }'
# AxonParseError: Expected '}', found EOF

# Invalid type expression
source = "flow Test(x: ) -> String { }"
# AxonParseError: Expected identifier, found ')'
Example: Walking the AST
from axon import Lexer, Parser
from axon.compiler.ast_nodes import FlowDefinition, StepNode

source = open("my_program.axon").read()
lexer = Lexer(source, filename="my_program.axon")
parser = Parser(lexer.tokenize())
ast = parser.parse()

def print_ast(node, indent=0):
    prefix = "  " * indent
    node_type = node.__class__.__name__
    if isinstance(node, FlowDefinition):
        print(f"{prefix}Flow: {node.name}")
        for step in node.body:
            print_ast(step, indent + 1)
    elif isinstance(node, StepNode):
        print(f"{prefix}Step: {node.name}")
        if node.ask:
            print(f"{prefix}  ask: {node.ask[:50]}...")
    else:
        print(f"{prefix}{node_type}")

for decl in ast.declarations:
    print_ast(decl)
Next Steps
Type Checker API: Validate epistemic types and semantic constraints
IR Generator API: Generate intermediate representation from AST