Overview
The ReMem class is the primary interface for the ReMem framework. It orchestrates knowledge graph construction, embedding storage, information extraction, and retrieval-augmented question answering.
Constructor
Parameters:
- global_config: Global configuration settings for the instance. If not provided, a new BaseConfig with default values is created.
- save_dir: Directory where work-specific files will be stored. If not provided, a default directory is constructed: {save_dir}/ReMem_{timestamp}
- llm: Language model for general processing. If not provided, it is initialized from global_config.llm_infer_mode and global_config.llm_name.
- Information-extraction LLM: Language model used specifically for information extraction. Defaults to the main llm if not provided.
- QA LLM: Language model used specifically for question answering. Defaults to the main llm if not provided.
Core Methods
index
Indexes documents by extracting information and building the knowledge graph.

Parameters:
- List of document strings to be indexed.
The indexing process:
- Chunks documents using the text preprocessor
- Performs information extraction (OpenIE, episodic, or temporal)
- Constructs the knowledge graph with entities, facts, and passages
- Stores embeddings for retrieval
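The chunking step can be sketched as follows. This is a minimal, hypothetical word-window chunker for illustration only; the function name and parameters are assumptions, and ReMem's actual text preprocessor may split by tokens, sentences, or characters instead:

```python
def chunk_document(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word-window chunks.

    Hypothetical stand-in for a text preprocessor: each chunk holds up to
    chunk_size words, and consecutive chunks share `overlap` words.
    """
    words = text.split()
    if len(words) <= chunk_size:
        return [text]
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        window = words[start:start + chunk_size]
        chunks.append(" ".join(window))
        if start + chunk_size >= len(words):
            break
    return chunks

# A 250-word toy document yields three overlapping chunks.
doc = " ".join(f"word{i}" for i in range(250))
chunks = chunk_document(doc, chunk_size=100, overlap=20)
```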
Example
retrieve
Performs retrieval using the ReMem framework with fact-based and graph-enhanced search.

Parameters:
- List of query strings for which documents are to be retrieved.
- Maximum number of documents to retrieve for each query. Defaults to global_config.retrieval_top_k if not specified.

Returns a list of QuerySolution objects, each containing:
- question: The original query
- docs: Retrieved document texts
- doc_scores: Relevance scores for each document
- graph_seeds: Top-k facts used for graph search
- doc_metadata: Metadata for retrieved documents
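The fact-based seeding idea can be illustrated with a toy example: score every indexed fact against the query embedding and keep the top-k as graph seeds. This is a simplified sketch with made-up data, not the ReMem implementation:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy fact embeddings; in ReMem these would come from the triple embedding store.
fact_embeddings = {
    ("Paris", "capital_of", "France"): [0.9, 0.1, 0.0],
    ("Berlin", "capital_of", "Germany"): [0.1, 0.9, 0.0],
    ("Seine", "flows_through", "Paris"): [0.8, 0.2, 0.1],
}
query_embedding = [1.0, 0.0, 0.0]  # pretend embedding of a Paris-related query

# Rank facts by similarity to the query and keep the top-k as graph seeds.
k = 2
graph_seeds = sorted(
    fact_embeddings,
    key=lambda f: cosine(query_embedding, fact_embeddings[f]),
    reverse=True,
)[:k]
```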
rag_for_qa
Performs retrieval-augmented generation for question answering.

Parameters:
- List of query strings or pre-processed QuerySolution objects. If strings, retrieval is performed automatically.
- List of lists containing gold-standard documents for each query. Required for retrieval evaluation.
- List of lists containing gold-standard answers for each query. Required for QA evaluation.
- Evaluation metrics to compute. Available options:
  - Retrieval: "retrieval_recall", "retrieval_recall_all", "retrieval_ndcg_any", "retrieval_recall_locomo"
  - QA: "qa_em", "qa_f1", "qa_bleu1", "qa_bleu4", "qa_longmemeval", "qa_mem0_llm_judge", "qa_evalsuit_llm_judge"
- Optional metadata for each question (e.g., question type, ID).
- Whether to save results to disk.

Returns a tuple containing:
- List[QuerySolution]: Query solutions with answers
- List[str]: Raw LLM response messages
- List[Dict]: Metadata dictionaries
- Dict: Overall retrieval evaluation results (if enabled)
- Dict: Overall QA evaluation results (if enabled)
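The qa_em and qa_f1 options correspond to the standard exact-match and token-level F1 metrics. A sketch of how such metrics are conventionally computed, assuming SQuAD-style answer normalization; this is not necessarily ReMem's evaluator code:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized prediction equals the normalized gold answer."""
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    """Harmonic mean of token precision and recall over normalized answers."""
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```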
qa
Executes question-answering inference using retrieved documents.

Parameters:
- List of QuerySolution objects containing queries and retrieved documents.

Returns a tuple containing:
- List[QuerySolution]: Updated QuerySolution objects with predicted answers
- List[str]: Raw response messages from the LLM
- List[Dict]: Metadata dictionaries
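Conceptually, the QA step assembles the retrieved passages and the question into a prompt for the LLM. A minimal sketch of that assembly; ReMem's actual prompt template is not shown in this document, so the wording and function name are assumptions:

```python
def build_qa_prompt(question: str, docs: list[str]) -> str:
    """Assemble a simple RAG prompt: retrieved passages, then the question."""
    context = "\n\n".join(
        f"Passage {i + 1}:\n{doc}" for i, doc in enumerate(docs)
    )
    return (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_qa_prompt(
    "Where is the Louvre?",
    ["The Louvre is in Paris.", "Paris is in France."],
)
```

The resulting string would then be sent to the QA language model; the predicted answer is parsed from the model's response.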
Graph Methods
initialize_graph
Initializes a graph from a saved file or creates a new one.

Returns: A loaded or newly initialized igraph Graph object.
save_igraph
Saves the current graph to disk.
get_graph_info
Returns information about the current graph structure.

Returns: Dictionary containing graph statistics and metadata.
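The kind of statistics such a dictionary might hold can be illustrated with a toy graph. Plain Python structures are used here instead of igraph, and the dictionary keys are assumptions, not ReMem's actual schema:

```python
# Toy knowledge graph as an edge list; ReMem stores an igraph Graph.
edges = [("Paris", "France"), ("Berlin", "Germany"), ("Seine", "Paris")]

def graph_info(edges):
    """Compute simple statistics of the kind get_graph_info might report."""
    nodes = {n for edge in edges for n in edge}
    return {"num_nodes": len(nodes), "num_edges": len(edges)}

info = graph_info(edges)
```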
Embedding & Storage Methods
prepare_retrieval_objects
Prepares in-memory objects for fast retrieval operations. This method is automatically called before the first retrieval if not manually invoked. It loads:
- All embedding vectors into memory
- Graph node mappings and indices
- Passage, entity, and fact keys
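The call-before-first-retrieval behavior is a lazy-initialization pattern, sketched below. Class, attribute, and key names are hypothetical, not ReMem's internals:

```python
class RetrievalObjects:
    """Sketch of lazily prepared in-memory retrieval state.

    Stands in for the embedding vectors, graph node mappings, and
    passage/entity/fact keys that ReMem loads before the first retrieval.
    """
    def __init__(self, loader):
        self._loader = loader    # callable performing the expensive load
        self._objects = None

    def get(self):
        if self._objects is None:  # first access triggers the load
            self._objects = self._loader()
        return self._objects

calls = []
def load():
    calls.append(1)
    return {"embeddings": [[0.1, 0.2]], "passage_keys": ["p0"]}

state = RetrievalObjects(load)
state.get()
state.get()  # cached: the loader runs only once
```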
dense_passage_retrieval
Performs dense passage retrieval using embedding similarity.

Parameters:
- The input query string.
- Optional candidate documents to search. If None, uses indexed documents.
- Whether to normalize similarity scores.

Returns a tuple containing:
- Sorted document indices (by relevance)
- Corresponding normalized similarity scores
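A minimal sketch of dense retrieval: cosine-score each candidate embedding against the query embedding, sort indices by score, and optionally normalize. Min-max normalization is an assumption here; the document does not specify which scheme ReMem uses:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def dense_retrieval(query_emb, doc_embs, normalize=True):
    """Return document indices sorted by relevance, plus their scores."""
    scores = [cosine(query_emb, d) for d in doc_embs]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    if normalize and scores:
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0  # avoid division by zero when all scores tie
        scores = [(s - lo) / span for s in scores]
    return order, [scores[i] for i in order]

# Toy 2-d embeddings: doc 0 matches the query exactly, doc 2 partially.
doc_embs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
order, scores = dense_retrieval([1.0, 0.0], doc_embs)
```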
Properties
embedding_model
Access the embedding model instance.
chunk_embedding_store
Access the embedding store for document chunks.
phrase_embedding_store
Access the embedding store for entities/phrases.
triple_embedding_store
Access the embedding store for facts/triples.
Evaluation Methods
evaluate_qa
Evaluates question answering results against gold answers.

Parameters:
- Gold-standard answers for evaluation.
- List of QA evaluator instances.
- Query solutions with predicted answers.
- Optional metadata (type, ID) for each question.

Returns: Dictionary with overall QA evaluation metrics.
evaluate_retrieval
Evaluates retrieval results against gold documents.

Parameters:
- Gold-standard documents for evaluation.
- Query solutions with retrieved documents.
- List of retrieval evaluator instances.

Returns: Dictionary with retrieval metrics (recall at various k values).
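Recall at k, the core metric mentioned above, measures the fraction of gold documents that appear in the top-k retrieved results. A generic sketch, not ReMem's evaluator:

```python
def recall_at_k(retrieved: list[str], gold: set[str], k: int) -> float:
    """Fraction of gold documents found among the top-k retrieved results."""
    if not gold:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in gold)
    return hits / len(gold)

retrieved = ["d3", "d1", "d7", "d2"]  # ranked retrieval output
gold = {"d1", "d2"}                   # gold-standard documents
```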
Advanced Methods
run_ppr
Runs Personalized PageRank on the knowledge graph.

Parameters:
- Reset probability distribution for each node in the graph.
- Damping factor for PageRank computation.

Returns a tuple containing:
- Sorted document node IDs by PageRank score
- Corresponding PageRank scores
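Personalized PageRank can be sketched as a power iteration in which the reset distribution biases the walk toward query-relevant nodes. ReMem presumably delegates this to igraph, so the toy version below is purely illustrative:

```python
def personalized_pagerank(adj, reset, damping=0.85, iters=50):
    """Power-iteration Personalized PageRank on an adjacency dict.

    adj maps node -> list of out-neighbours; reset is the personalization
    distribution (must sum to 1). Higher reset mass pulls rank toward
    those nodes.
    """
    nodes = list(adj)
    rank = dict(reset)
    for _ in range(iters):
        # Teleport with probability (1 - damping), following `reset`.
        nxt = {n: (1 - damping) * reset.get(n, 0.0) for n in nodes}
        for n in nodes:
            out = adj[n]
            if not out:
                # Dangling node: redistribute its mass via the reset vector.
                for m in nodes:
                    nxt[m] += damping * rank[n] * reset.get(m, 0.0)
            else:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
        rank = nxt
    return rank

# Personalize entirely on node "a": rank should concentrate around it.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
reset = {"a": 1.0, "b": 0.0, "c": 0.0}
rank = personalized_pagerank(adj, reset)
```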
query_to_triple_scores
Computes similarity scores between a query and all indexed facts.

Parameters:
- The input query text.

Returns: Normalized array of similarity scores between query and facts.
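One common way to turn raw similarities into normalized scores is a softmax; whether ReMem uses softmax or another normalization is not specified here, so treat this as a generic sketch:

```python
from math import exp

def softmax(scores):
    """Normalize raw similarity scores into a probability distribution."""
    m = max(scores)                     # subtract max for numerical stability
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

raw = [0.9, 0.1, 0.7]  # toy query-to-fact similarity scores
norm = softmax(raw)
```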