Quickstart
This guide will help you build your first REMem application in just a few minutes. You’ll learn how to index documents and query them using REMem’s hybrid memory graph.

Prerequisites
Before you begin, make sure you have:
- Python 3.10 or higher installed
- An OpenAI API key (for the LLM and embeddings)
Installation
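This guide does not name the distribution package here; assuming it is published on PyPI as `remem` (an assumption — substitute the actual package name), installation would look like:

```shell
# Package name is an assumption; substitute the actual distribution name.
pip install remem

# REMem uses OpenAI for the LLM and embeddings, so export your API key:
export OPENAI_API_KEY="sk-..."
```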
Your first REMem application
Import and configure
Start by importing REMem and creating a configuration.

The extract_method parameter determines how REMem processes your documents:
- openie — Fast entity and triple extraction
- episodic — Episodic fact extraction with context
- episodic_gist — Adds gist memories for associative recall (recommended)
- temporal — Best for time-sensitive questions
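As a sketch, a configuration using the recommended method might look like the following. The config shape and every key name here are assumptions rather than REMem’s documented API; only extract_method and its values come from this guide.

```python
# Hypothetical configuration; the key names are assumptions, not documented API.
config = {
    "llm_model": "gpt-4o-mini",                   # any OpenAI chat model (assumption)
    "embedding_model": "text-embedding-3-small",  # OpenAI embedding model (assumption)
    "extract_method": "episodic_gist",            # recommended: adds gist memories
}
```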
Initialize REMem
Create a REMem instance with your configuration. This initializes the hybrid memory graph, embedding stores, and extraction pipeline.
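A hedged sketch of that step, assuming the package exposes a top-level REMem class (the import path and constructor signature are assumptions):

```python
# Hypothetical import path and constructor signature.
from remem import REMem

# Builds the hybrid memory graph, embedding stores, and extraction pipeline.
memory = REMem(config)
```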
Index documents
Add documents to REMem’s memory graph. During indexing, REMem:
- Chunks and normalizes your documents
- Extracts entities, facts, and gist traces
- Generates embeddings for semantic search
- Builds the hybrid memory graph with connections
Indexing can take a few moments depending on the number of documents and the extraction method. The results are cached for future use.
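The indexing step above might be written as follows; the index method name and its signature are assumptions, and the sample documents are illustrative only:

```python
docs = [
    "Alice moved to Paris in 2019 to join a robotics startup.",
    "In 2021 she returned to Berlin and founded her own lab.",
]

# Chunks the documents, extracts entities/facts/gist traces,
# generates embeddings, and links everything into the memory graph.
memory.index(docs)
```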
Complete example
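Combining the steps, a complete script might look like the sketch below. Every name in it is an assumption carried over from the snippets in this guide, not REMem’s confirmed API:

```python
from remem import REMem  # hypothetical import path

# Hypothetical configuration; key names are assumptions.
config = {
    "llm_model": "gpt-4o-mini",                   # assumption
    "embedding_model": "text-embedding-3-small",  # assumption
    "extract_method": "episodic_gist",            # recommended extraction method
}

memory = REMem(config)

docs = [
    "Alice moved to Paris in 2019 to join a robotics startup.",
    "In 2021 she returned to Berlin and founded her own lab.",
]
memory.index(docs)

# Query the memory graph; rag_for_qa is described later in this guide.
solutions, responses, meta = memory.rag_for_qa(["Where did Alice move in 2019?"])
print(solutions[0].answer)
```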
That covers the full pipeline: configure, initialize, and index.

Running benchmarks
REMem includes support for several research benchmarks. To run a benchmark, pass its questions through the rag_for_qa method.

What’s in the response?
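A query call might look like this sketch; rag_for_qa and QuerySolution come from this guide, while the question list and attribute access are assumptions:

```python
questions = ["Where did Alice move in 2019?"]

# Unpack the three return values described below.
solutions, responses, meta = memory.rag_for_qa(questions)

for sol in solutions:  # QuerySolution objects (assumed attributes)
    print(sol.question, "->", sol.answer)
```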
The rag_for_qa method returns three objects:
- solutions: List of QuerySolution objects containing questions, answers, and reasoning traces
- responses: Raw LLM responses
- meta: Metadata including retrieval scores, graph traversal paths, and timing information
Configuration options
Key configuration parameters, such as extract_method and the model choices for the LLM and embeddings, are covered in depth in the Configuration guide.

Next steps
Installation
Learn about different installation options and embedding models
Configuration
Deep dive into configuration options and extraction methods
Architecture
Understand how REMem’s hybrid memory graph works
Examples
Browse complete examples and benchmark scripts