
Overview

Memory in AXON defines semantic storage for AI agents—how information is stored, retrieved, and managed across executions. Unlike traditional databases, AXON memory is designed for semantic retrieval and knowledge persistence.

Memory Definition

Syntax

memory <Name> {
  store: <scope>
  backend: <identifier>
  retrieval: <strategy>
  decay: <duration|policy>
}

Fields

store (optional)

Type: one of session, persistent, ephemeral
Defines the lifetime scope of the memory.
memory ConversationContext {
  store: session         // Lasts for the session
}

memory LongTermKnowledge {
  store: persistent      // Persists across sessions
}

memory TemporaryWork {
  store: ephemeral       // Short-lived, discarded after flow
}
Memory Scopes:
| Scope | Lifetime | Use Case |
| --- | --- | --- |
| ephemeral | Single flow execution | Temporary working memory |
| session | Current session/conversation | Chat contexts, multi-turn interactions |
| persistent | Permanent | User profiles, learned knowledge, historical data |

backend (optional)

Type: identifier
Specifies the storage backend implementation.
memory VectorStore {
  store: persistent
  backend: vector_db     // Vector database (Pinecone, Weaviate)
}

memory FastCache {
  store: session
  backend: in_memory     // In-memory storage
}

memory DistributedStore {
  store: persistent
  backend: redis         // Redis cluster
}

memory CustomStore {
  store: persistent
  backend: custom        // Custom implementation
}
Common Backends:
| Backend | Characteristics | Best For |
| --- | --- | --- |
| vector_db | Semantic search, embeddings | Knowledge bases, semantic retrieval |
| in_memory | Fast, volatile | Session state, temporary data |
| redis | Distributed, fast, TTL support | Session management, caching |
| postgres | Relational, structured | Structured data, relations |
| s3 | Object storage, scalable | Large objects, archives |
| custom | User-defined | Special requirements |

retrieval (optional)

Type: one of semantic, exact, hybrid
Defines the retrieval strategy for querying memory.
memory SemanticKnowledge {
  store: persistent
  backend: vector_db
  retrieval: semantic    // Semantic similarity search
}

memory ExactLookup {
  store: session
  backend: redis
  retrieval: exact       // Exact key-value lookup
}

memory HybridSearch {
  store: persistent
  backend: vector_db
  retrieval: hybrid      // Combine semantic + exact
}
Retrieval Strategies:
| Strategy | How It Works | Use Case |
| --- | --- | --- |
| semantic | Embedding similarity, vector search | Natural language queries, fuzzy matching |
| exact | Key-value lookup, exact match | IDs, structured keys, fast lookup |
| hybrid | Combines semantic + exact | Best of both: semantic + structured |
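The hybrid strategy is easiest to picture as a score merge: exact-match hits and semantic similarity scores are blended into one ranking. Below is a minimal sketch in plain Python (not AXON; the weighting scheme and all names are illustrative assumptions, not part of the language):

```python
def hybrid_score(exact_hit: bool, semantic_sim: float, alpha: float = 0.5) -> float:
    """Blend an exact-match signal with a semantic similarity score.

    alpha controls the balance: 1.0 = purely exact, 0.0 = purely semantic.
    """
    return alpha * (1.0 if exact_hit else 0.0) + (1 - alpha) * semantic_sim

# Rank candidate records for a query like "user 42 preferences":
candidates = [
    {"key": "user:42:prefs", "exact": True,  "sim": 0.61},
    {"key": "note:coffee",   "exact": False, "sim": 0.90},
    {"key": "user:7:prefs",  "exact": False, "sim": 0.58},
]
ranked = sorted(candidates,
                key=lambda c: hybrid_score(c["exact"], c["sim"]),
                reverse=True)
```

With this weighting, the exact key match outranks a record that is merely semantically similar, which is the "best of both" behavior the table describes.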

decay (optional)

Type: duration or policy identifier
Defines how memory entries degrade or expire over time.
// No decay: permanent storage
memory PermanentKnowledge {
  store: persistent
  decay: none
}

// Time-based decay
memory RecentActivity {
  store: session
  decay: 7d              // Decay after 7 days
}

memory ShortTerm {
  store: ephemeral
  decay: 1h              // Decay after 1 hour
}

// Policy-based decay
memory AdaptiveMemory {
  store: persistent
  decay: daily           // Daily cleanup
}

memory WeeklyArchive {
  store: persistent
  decay: weekly          // Weekly archival
}
Decay Options:
| Type | Example | Behavior |
| --- | --- | --- |
| none | decay: none | Never expires |
| Duration | decay: 7d | Expires after the duration |
| daily | decay: daily | Daily cleanup |
| weekly | decay: weekly | Weekly cleanup |
| monthly | decay: monthly | Monthly cleanup |

Complete Examples

Conversation Memory

memory ConversationHistory {
  store: session
  backend: in_memory
  retrieval: semantic
  decay: 1h
}
Use Case: Store conversation context for the current session; entries expire after 1 hour.

Knowledge Base

memory LongTermKnowledge {
  store: persistent
  backend: vector_db
  retrieval: semantic
  decay: none
}
Use Case: Permanent semantic knowledge base with similarity search.

User Profile Store

memory UserProfiles {
  store: persistent
  backend: postgres
  retrieval: exact
  decay: none
}
Use Case: Permanent user profiles with exact key-value lookup.

Temporary Working Memory

memory WorkingMemory {
  store: ephemeral
  backend: in_memory
  retrieval: exact
  decay: none
}
Use Case: Temporary storage for flow execution, discarded after completion.

Recent Activity Cache

memory RecentActivity {
  store: session
  backend: redis
  retrieval: hybrid
  decay: 24h
}
Use Case: Cache recent activity for 24 hours with hybrid retrieval.

Memory Operations

Remember (Store)

Syntax:
remember(<expression>) -> <MemoryTarget>
Examples:
memory ProjectKnowledge {
  store: persistent
  backend: vector_db
}

flow LearnFromDocument(doc: Document) {
  step Extract {
    given: doc
    ask: "Extract key insights"
    output: Insights
  }
  
  remember(Insights) -> ProjectKnowledge
}

Recall (Retrieve)

Syntax:
recall(<query>) from <MemorySource>
Examples:
memory ResearchKnowledge {
  store: persistent
  backend: vector_db
  retrieval: semantic
}

flow QueryKnowledge(topic: String) -> Answer {
  step Retrieve {
    recall(topic) from ResearchKnowledge
    output: RelevantInfo
  }
  
  step Synthesize {
    given: RelevantInfo
    ask: "Synthesize an answer"
    output: Answer
  }
}

Combining Remember and Recall

memory ProjectMemory {
  store: persistent
  backend: vector_db
  retrieval: semantic
}

flow IterativeResearch(query: String) -> Report {
  // Recall previous findings
  step RecallPrevious {
    recall(query) from ProjectMemory
    output: PriorFindings
  }
  
  // Do new research
  step NewResearch {
    given: [query, PriorFindings]
    ask: "Research and expand on previous findings"
    output: NewFindings
  }
  
  // Store new findings
  remember(NewFindings) -> ProjectMemory
  
  // Generate report
  weave [PriorFindings, NewFindings] into Report
}

Memory with Context

Combine memory definitions with context for execution control:
memory SessionMemory {
  store: session
  backend: in_memory
}

context ChatContext {
  memory: session        // Use session-scoped memory
  depth: standard
}

flow ChatBot(message: String) -> Response {
  step RecallContext {
    recall("conversation") from SessionMemory
    output: Context
  }
  
  step Respond {
    given: [message, Context]
    ask: "Generate contextual response"
    output: Response
  }
  
  remember(Response) -> SessionMemory
}

run ChatBot("Hello")
  within ChatContext

Best Practices

1. Match Memory Scope to Use Case

// Ephemeral: temporary computation
memory TempWork {
  store: ephemeral
  backend: in_memory
}

// Session: conversation context
memory ChatHistory {
  store: session
  backend: redis
  decay: 1h
}

// Persistent: long-term knowledge
memory KnowledgeBase {
  store: persistent
  backend: vector_db
  decay: none
}

2. Use Appropriate Backends

// Vector DB for semantic search
memory SemanticStore {
  backend: vector_db
  retrieval: semantic
}

// Redis for fast session storage
memory SessionCache {
  backend: redis
  retrieval: exact
}

// Postgres for structured data
memory StructuredData {
  backend: postgres
  retrieval: exact
}

3. Set Appropriate Decay Policies

// Short-lived: recent activity
memory RecentActivity {
  store: session
  decay: 1h
}

// Medium-lived: user sessions
memory UserSession {
  store: session
  decay: 24h
}

// Long-lived, but with periodic cleanup
memory ManagedKnowledge {
  store: persistent
  decay: weekly      // Periodic cleanup
}

// Permanent: critical data
memory CoreKnowledge {
  store: persistent
  decay: none
}

4. Choose Retrieval Strategy Wisely

// Semantic: for natural language queries
memory NLKnowledge {
  retrieval: semantic
}

// Exact: for structured lookups
memory IDLookup {
  retrieval: exact
}

// Hybrid: for mixed queries
memory FlexibleStore {
  retrieval: hybrid
}

5. Clean Up Memory

// Let ephemeral memory auto-cleanup
memory Temporary {
  store: ephemeral
  decay: none        // Auto-cleaned after flow
}

// Set decay for session memory
memory Session {
  store: session
  decay: 1h          // Explicit decay
}

// Manage persistent memory
memory Persistent {
  store: persistent
  decay: monthly     // Periodic archival
}

Common Patterns

Conversation Context

memory ConversationMemory {
  store: session
  backend: redis
  retrieval: semantic
  decay: 2h
}

flow ConversationalAgent(message: String) -> Response {
  step RecallHistory {
    recall("conversation history") from ConversationMemory
    output: ConversationHistory
  }
  
  step Respond {
    given: [message, ConversationHistory]
    output: Response
  }
  
  remember(Response) -> ConversationMemory
}

Knowledge Accumulation

memory ProjectKnowledge {
  store: persistent
  backend: vector_db
  retrieval: semantic
  decay: none
}

flow AccumulateKnowledge(newInfo: Document) {
  step Process {
    given: newInfo
    ask: "Extract key facts"
    output: Facts
  }
  
  remember(Facts) -> ProjectKnowledge
}

Multi-Source Retrieval

memory ShortTermMemory {
  store: session
  backend: in_memory
  retrieval: exact
}

memory LongTermMemory {
  store: persistent
  backend: vector_db
  retrieval: semantic
}

flow SmartRetrieval(query: String) -> Answer {
  step RecallRecent {
    recall(query) from ShortTermMemory
    output: RecentData
  }
  
  step RecallHistorical {
    recall(query) from LongTermMemory
    output: HistoricalData
  }
  
  weave [RecentData, HistoricalData] into Answer
}

Learning System

memory LearnedPatterns {
  store: persistent
  backend: vector_db
  retrieval: semantic
  decay: none
}

flow LearnAndApply(example: Example) -> Prediction {
  // Learn from example
  step Learn {
    given: example
    ask: "Extract patterns"
    output: Pattern
  }
  
  remember(Pattern) -> LearnedPatterns
  
  // Apply learned patterns
  step RecallPatterns {
    recall("relevant patterns") from LearnedPatterns
    output: RelevantPatterns
  }
  
  step Apply {
    given: [example, RelevantPatterns]
    output: Prediction
  }
}

Type Checking

The AXON type checker validates:
  • Store scope: must be a valid scope (session, persistent, ephemeral)
  • Retrieval strategy: must be a valid strategy (semantic, exact, hybrid)
  • Memory references: referenced memories must exist
memory Invalid {
  store: unlimited           // ❌ Error: invalid scope
  retrieval: approximate     // ❌ Error: invalid strategy
}

step UseUndefined {
  recall("data") from NonExistent  // ❌ Error: undefined memory
}

Performance Considerations

Backend Selection

// Fast: in-memory for hot data
memory HotCache {
  backend: in_memory
  store: session
}

// Scalable: vector DB for large knowledge bases
memory LargeKB {
  backend: vector_db
  store: persistent
}

// Distributed: Redis for multi-instance
memory Distributed {
  backend: redis
  store: session
}

Decay Management

// Aggressive: clean up quickly
memory ShortLived {
  decay: 1h
}

// Conservative: keep longer
memory MediumLived {
  decay: 7d
}

// Permanent: never decay
memory Permanent {
  decay: none
}
See Also

  • Context — Memory scope configuration
  • Flow — Memory operations in flows
  • Persona — Agent identities
  • Tools — External capabilities
