Overview

The Memory API provides endpoints to retrieve and manage DeerFlow’s global memory system. Memory stores user context, conversation history, and facts across all threads for personalized AI interactions.

Get Memory Data

GET /api/memory

Retrieve the current global memory data including user context, history, and facts

Response

  • version (string, default "1.0"): Memory schema version
  • lastUpdated (string): Last update timestamp (ISO 8601 format)
  • user (object): User context sections
  • history (object): Historical context sections
  • facts (array): List of memory facts

Example Request

curl http://localhost:8001/api/memory

Example Response

{
  "version": "1.0",
  "lastUpdated": "2024-01-15T10:30:00Z",
  "user": {
    "workContext": {
      "summary": "Working on DeerFlow documentation and API development",
      "updatedAt": "2024-01-15T10:30:00Z"
    },
    "personalContext": {
      "summary": "Prefers concise responses with code examples",
      "updatedAt": "2024-01-14T15:20:00Z"
    },
    "topOfMind": {
      "summary": "Building Gateway API documentation",
      "updatedAt": "2024-01-15T09:00:00Z"
    }
  },
  "history": {
    "recentMonths": {
      "summary": "Recent development activities on DeerFlow project",
      "updatedAt": "2024-01-15T10:30:00Z"
    },
    "earlierContext": {
      "summary": "",
      "updatedAt": ""
    },
    "longTermBackground": {
      "summary": "",
      "updatedAt": ""
    }
  },
  "facts": [
    {
      "id": "fact_abc123",
      "content": "User prefers TypeScript over JavaScript",
      "category": "preference",
      "confidence": 0.9,
      "createdAt": "2024-01-15T10:30:00Z",
      "source": "thread_xyz"
    },
    {
      "id": "fact_def456",
      "content": "User is experienced with FastAPI and Python",
      "category": "skill",
      "confidence": 0.85,
      "createdAt": "2024-01-14T14:20:00Z",
      "source": "thread_abc"
    }
  ]
}
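Clients can work with this response directly. As one sketch (the `latestUserSection` helper is illustrative, not part of the API), here is how a client might pick out the most recently updated user-context section:

```javascript
// Sketch: given a memory response like the example above, find the
// user-context section that was updated most recently. The section
// names (workContext, personalContext, topOfMind) come from the schema.
function latestUserSection(memory) {
  return Object.entries(memory.user)
    // Drop sections that have never been populated (empty updatedAt).
    .filter(([, section]) => section.updatedAt)
    // Sort newest first; ISO 8601 timestamps compare correctly as strings.
    .sort(([, a], [, b]) => b.updatedAt.localeCompare(a.updatedAt))
    .map(([name, section]) => ({ name, ...section }))[0] ?? null;
}

const memory = {
  user: {
    workContext:     { summary: "Working on DeerFlow docs",  updatedAt: "2024-01-15T10:30:00Z" },
    personalContext: { summary: "Prefers concise responses", updatedAt: "2024-01-14T15:20:00Z" },
    topOfMind:       { summary: "Building Gateway API docs", updatedAt: "2024-01-15T09:00:00Z" },
  },
};

console.log(latestUserSection(memory).name); // "workContext"
```

Because the timestamps are ISO 8601 strings, lexicographic comparison is enough; no Date parsing is required.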

Reload Memory Data

POST /api/memory/reload

Reload memory data from the storage file, refreshing the in-memory cache

Response

Returns the reloaded memory data (same structure as GET /api/memory).

Example Request

curl -X POST http://localhost:8001/api/memory/reload

Use Case

Use this endpoint when the memory file has been modified externally and you need to refresh the cached data without restarting the server.

Get Memory Configuration

GET /api/memory/config

Retrieve the current memory system configuration

Response

  • enabled (boolean, required): Whether memory is enabled
  • storage_path (string, required): Path to memory storage file (relative to project root)
  • debounce_seconds (integer, required): Debounce time for memory updates (seconds)
  • max_facts (integer, required): Maximum number of facts to store
  • fact_confidence_threshold (number, required): Minimum confidence threshold for facts (0-1)
  • injection_enabled (boolean, required): Whether memory injection is enabled
  • max_injection_tokens (integer, required): Maximum tokens for memory injection into prompts

Example Request

curl http://localhost:8001/api/memory/config

Example Response

{
  "enabled": true,
  "storage_path": ".deer-flow/memory.json",
  "debounce_seconds": 30,
  "max_facts": 100,
  "fact_confidence_threshold": 0.7,
  "injection_enabled": true,
  "max_injection_tokens": 2000
}

Get Memory Status

GET /api/memory/status

Retrieve both memory configuration and current data in a single request

Response

  • config (object): Memory configuration (same as GET /api/memory/config)
  • data (object): Memory data (same as GET /api/memory)

Example Request

curl http://localhost:8001/api/memory/status

Example Response

{
  "config": {
    "enabled": true,
    "storage_path": ".deer-flow/memory.json",
    "debounce_seconds": 30,
    "max_facts": 100,
    "fact_confidence_threshold": 0.7,
    "injection_enabled": true,
    "max_injection_tokens": 2000
  },
  "data": {
    "version": "1.0",
    "lastUpdated": "2024-01-15T10:30:00Z",
    "user": {
      "workContext": {
        "summary": "Working on DeerFlow project",
        "updatedAt": "2024-01-15T10:30:00Z"
      },
      "personalContext": {
        "summary": "",
        "updatedAt": ""
      },
      "topOfMind": {
        "summary": "",
        "updatedAt": ""
      }
    },
    "history": {
      "recentMonths": {
        "summary": "",
        "updatedAt": ""
      },
      "earlierContext": {
        "summary": "",
        "updatedAt": ""
      },
      "longTermBackground": {
        "summary": "",
        "updatedAt": ""
      }
    },
    "facts": []
  }
}

Memory System

How Memory Works

  1. Collection: Memory is collected from conversations across all threads
  2. Storage: Memory data is persisted to a JSON file (.deer-flow/memory.json)
  3. Injection: Memory is automatically injected into AI prompts when enabled
  4. Updates: Memory is updated with debouncing to prevent excessive writes
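The debouncing in step 4 can be sketched as follows. The `debounce` helper and `writeMemoryFile` name are illustrative, not the server's actual implementation:

```javascript
// Sketch of the debounce behaviour described in step 4: collapse a burst
// of updates into a single write once `waitMs` has passed without a new
// update arriving.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

let writes = 0;
const writeMemoryFile = debounce(() => { writes += 1; }, 50);

// Three rapid updates ...
writeMemoryFile();
writeMemoryFile();
writeMemoryFile();

// ... result in exactly one write after the debounce window.
setTimeout(() => console.log(writes), 100); // 1
```

With the real system, `debounce_seconds: 30` means a burst of memory updates within 30 seconds produces a single write to the storage file.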

Memory Categories

User Context

  • workContext: Work-related information and current projects
  • personalContext: Personal preferences, communication style
  • topOfMind: Current priorities and focus areas

History Context

  • recentMonths: Recent development activities and conversations
  • earlierContext: Historical context from earlier periods
  • longTermBackground: Long-term background information

Facts

Structured facts extracted from conversations:
  • preference: User preferences (e.g., coding style, tools)
  • skill: User skills and expertise
  • context: General contextual information
  • goal: User goals and objectives
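A client that wants to display facts by category can group the `facts` array from GET /api/memory; this grouping helper is a sketch, not part of the API:

```javascript
// Sketch: group a facts array (shape from GET /api/memory) by category.
function groupFactsByCategory(facts) {
  const groups = {};
  for (const fact of facts) {
    (groups[fact.category] ??= []).push(fact);
  }
  return groups;
}

const facts = [
  { id: "fact_abc123", content: "User prefers TypeScript over JavaScript",    category: "preference", confidence: 0.9 },
  { id: "fact_def456", content: "User is experienced with FastAPI and Python", category: "skill",      confidence: 0.85 },
];

const grouped = groupFactsByCategory(facts);
console.log(Object.keys(grouped)); // [ 'preference', 'skill' ]
```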

Confidence Scores

Facts have confidence scores (0-1) indicating reliability:
  • 0.9-1.0: High confidence (explicitly stated by user)
  • 0.7-0.9: Medium confidence (inferred from behavior)
  • 0.5-0.7: Low confidence (weak signals)
  • < 0.5: Very low confidence (filtered out by default)
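The tiers above can be mapped onto scores with a simple classifier. The tier names come from this page; the function itself (and the choice of inclusive lower bounds at each cutoff) is illustrative:

```javascript
// Sketch: map a confidence score onto the tiers listed above.
// Boundary values (0.9, 0.7, 0.5) are treated as the higher tier here.
function confidenceTier(confidence) {
  if (confidence >= 0.9) return "high";
  if (confidence >= 0.7) return "medium";
  if (confidence >= 0.5) return "low";
  return "very-low"; // typically filtered out by default
}

console.log(confidenceTier(0.95)); // "high"
console.log(confidenceTier(0.6));  // "low"
```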

Configuration

Memory can be configured in your application config:
memory:
  enabled: true
  storage_path: ".deer-flow/memory.json"
  debounce_seconds: 30
  max_facts: 100
  fact_confidence_threshold: 0.7
  injection_enabled: true
  max_injection_tokens: 2000

Parameters

  • enabled: Enable/disable memory system
  • storage_path: Path to memory JSON file
  • debounce_seconds: Wait time before writing updates
  • max_facts: Maximum number of facts to store
  • fact_confidence_threshold: Minimum confidence to keep facts
  • injection_enabled: Enable/disable memory injection into prompts
  • max_injection_tokens: Maximum tokens to inject
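To see how `fact_confidence_threshold` and `max_facts` interact, here is one way the two limits could be applied when pruning facts. This is a sketch of the idea, not the server's actual pruning logic:

```javascript
// Sketch: drop facts below the confidence threshold, then keep only the
// `max_facts` most confident of the remainder.
function pruneFacts(facts, { max_facts, fact_confidence_threshold }) {
  return facts
    .filter((f) => f.confidence >= fact_confidence_threshold)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, max_facts);
}

const config = { max_facts: 2, fact_confidence_threshold: 0.7 };
const facts = [
  { id: "a", confidence: 0.95 },
  { id: "b", confidence: 0.85 },
  { id: "c", confidence: 0.75 },
  { id: "d", confidence: 0.4 }, // below threshold, always dropped
];

console.log(pruneFacts(facts, config).map((f) => f.id)); // [ 'a', 'b' ]
```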

Use Cases

Display Memory Data

const response = await fetch('http://localhost:8001/api/memory');
const memory = await response.json();

console.log('User Context:', memory.user.workContext.summary);
console.log('Total Facts:', memory.facts.length);

Filter High-Confidence Facts

const response = await fetch('http://localhost:8001/api/memory');
const { facts } = await response.json();

const highConfidenceFacts = facts.filter(f => f.confidence >= 0.8);
const preferences = facts.filter(f => f.category === 'preference');

Check Memory Status

const response = await fetch('http://localhost:8001/api/memory/status');
const { config, data } = await response.json();

if (config.enabled && config.injection_enabled) {
  console.log('Memory is active and injecting context');
  console.log(`Last updated: ${data.lastUpdated}`);
}

Reload Memory

// After external modification
const response = await fetch('http://localhost:8001/api/memory/reload', {
  method: 'POST'
});
const refreshedMemory = await response.json();
