Build token-efficient AI agents with GLYPH serialization and streaming validation
GLYPH is purpose-built for AI agents. This guide shows you how to define tools, validate calls as tokens stream, manage state efficiently, and coordinate multi-agent systems.
Define tools in GLYPH format for your system prompt:
```python
# System prompt with GLYPH tools
SYSTEM_PROMPT = """You have access to these tools:

search{query:str max_results:int[1..100]=10}
  Search the web. Returns list of results.

calculate{expression:str}
  Evaluate a mathematical expression. Returns number.

browse{url:str}
  Fetch and summarize a webpage. Returns text.

To use a tool, output:
ToolName{arg1=value arg2=value}

Example:
search{query="python async tutorial" max_results=5}
"""
```
This uses 47% fewer tokens than the equivalent JSON tool definitions.
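The size gap is easy to sanity-check yourself. Below is a rough comparison against a hand-written JSON-schema equivalent of the `search` tool (the JSON shape is an assumption for illustration; exact token savings depend on your tokenizer, so character counts are used as a crude proxy):

```python
import json

# GLYPH tool definition, copied from the prompt above
glyph_def = 'search{query:str max_results:int[1..100]=10}'

# A plausible JSON-schema equivalent (assumed shape, illustration only)
json_def = json.dumps({
    "name": "search",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "max_results": {"type": "integer", "minimum": 1,
                            "maximum": 100, "default": 10},
        },
        "required": ["query"],
    },
})

# The compact form is a fraction of the JSON size either way you count
print(len(glyph_def), len(json_def))
```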
Checkpoint agent state to disk so a long-running agent can resume after a crash:

```python
import glyph

def save_checkpoint(agent_state: dict, path: str):
    """Save agent state to disk."""
    with open(path, "w") as f:
        f.write(glyph.from_json(agent_state))

def load_checkpoint(path: str) -> dict:
    """Load agent state from disk."""
    with open(path) as f:
        return glyph.to_json(glyph.parse(f.read()))

# Save periodically
if state["turn"] % 5 == 0:
    save_checkpoint(state, f"checkpoint_{state['turn']}.glyph")

# Resume from crash
try:
    state = load_checkpoint("checkpoint_latest.glyph")
    print(f"Resumed from turn {state['turn']}")
except FileNotFoundError:
    state = {"goal": goal, "observations": [], "turn": 0}
```
GLYPH checkpoints are human-readable, so you can inspect them with a text editor.
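Resuming from `checkpoint_latest.glyph` implies keeping a "latest" pointer up to date. One safe way to do that is the write-then-rename pattern, so a crash mid-write never leaves a truncated latest file. This helper is a sketch of that pattern, not part of the glyph API:

```python
import os

def save_latest(serialized: str, directory: str = ".") -> None:
    """Atomically update checkpoint_latest.glyph via write-then-rename."""
    tmp_path = os.path.join(directory, "checkpoint_latest.glyph.tmp")
    final_path = os.path.join(directory, "checkpoint_latest.glyph")
    with open(tmp_path, "w") as f:
        f.write(serialized)
        f.flush()
        os.fsync(f.fileno())      # ensure bytes hit disk before the rename
    os.replace(tmp_path, final_path)  # atomic on both POSIX and Windows
```

Call it right after each numbered checkpoint, e.g. `save_latest(glyph.from_json(state))`, so the latest pointer always refers to a complete file.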
Don't: Parse Tool Calls with Regex

```python
# ❌ BAD - breaks on nested structures
match = re.search(r'search\{query="([^"]+)"', response)

# ✅ GOOD - use the parser
result = glyph.parse(response)
```
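To see why regex is fragile here, consider a call whose arguments contain a nested block: a naive pattern that tries to grab the argument body stops at the first closing brace. This is a toy demonstration (the glyph parser handles nesting for you); the brace-matching scan at the end shows what a real parser has to do:

```python
import re

# A tool call whose arguments contain a nested block
response = 'search{query="rust" filters={lang="en" site="docs"}}'

# Naive capture stops at the FIRST closing brace, splitting the
# nested filters block in half
body = re.search(r'search\{(.*?)\}', response).group(1)
assert body.count("{") != body.count("}")  # unbalanced: the match is truncated

# A tiny brace-matching scan recovers the full, balanced body
depth, start = 0, response.index("{")
for i, ch in enumerate(response[start:], start):
    depth += ch == "{"
    depth -= ch == "}"
    if depth == 0:
        body = response[start + 1:i]
        break
assert body == 'query="rust" filters={lang="en" site="docs"}'
```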
Don't: Validate After Full Generation
```python
# ❌ BAD - wastes tokens on invalid calls
response = await llm.generate(prompt)  # 50 tokens
result = glyph.parse(response)
if result.type_name not in allowed_tools:
    raise Error()  # Discovered too late

# ✅ GOOD - validate as tokens arrive (see next guide)
validator = glyph.StreamingValidator(registry)
async for token in llm.stream(prompt):
    result = validator.push(token)
    if result.tool_name and not result.tool_allowed:
        await cancel()  # Stop at token 5
        break
```
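The mechanics behind early rejection are simple enough to sketch without the library: buffer streamed tokens until the tool name is complete (the opening brace arrives), then check it against an allow-list. This is a toy illustration of the idea only; `glyph.StreamingValidator` does real incremental parsing beyond name checks:

```python
class ToyStreamingValidator:
    """Reject a tool call as soon as its name has fully streamed in.

    Toy illustration: checks only the tool name against an allow-list.
    """

    def __init__(self, allowed_tools):
        self.allowed = allowed_tools
        self.buffer = ""
        self.tool_name = None

    def push(self, token: str):
        """Feed one streamed token; return (tool_name, allowed) so far."""
        self.buffer += token
        if self.tool_name is None and "{" in self.buffer:
            # The name is complete once the opening brace arrives
            self.tool_name = self.buffer.split("{", 1)[0].strip()
        allowed = self.tool_name in self.allowed if self.tool_name else None
        return self.tool_name, allowed


validator = ToyStreamingValidator({"search", "calculate", "browse"})
for token in ["del", "ete_all", "{target", '="/"}']:
    name, ok = validator.push(token)
    if name is not None and not ok:
        break  # stop generation after three tokens instead of the full call
```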
Don't: Send Full State Every Turn
```python
# ❌ BAD - O(n) tokens per turn for n observations
state["observations"].append(new_obs)
send_full_state(state)  # Gets bigger every turn

# ✅ GOOD - O(1) patches (see state-management guide)
patch = glyph.patch([('+', 'observations', new_obs)])
send_patch(patch, base_hash=current_hash)
```
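The patch-plus-base-hash idea can be illustrated in a few lines of plain Python, assuming `'+'` means "append to a list field" (a toy sketch; `glyph.patch`'s real op set and wire format are defined by the library):

```python
import hashlib
import json

def state_hash(state: dict) -> str:
    """Stable fingerprint of the state a patch was built against."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def apply_patch(state: dict, patch, base_hash: str) -> dict:
    """Apply ('+', field, value) append-ops iff the base hash matches."""
    if state_hash(state) != base_hash:
        raise ValueError("patch built against a different state; resync first")
    for op, field, value in patch:
        if op == "+":
            state[field].append(value)
    return state

state = {"goal": "research", "observations": [], "turn": 0}
h = state_hash(state)
apply_patch(state, [("+", "observations", "page loaded")], h)
```

The hash check is what makes O(1) patches safe: if sender and receiver have diverged, the patch is refused instead of silently corrupting state.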