Overview

BaseLLM is the abstract base class that defines the interface for all LLM backends in Remem. It provides a consistent API for initializing, configuring, and running inference with different language models.

Class Definition

from remem.llm.base import BaseLLM, LLMConfig
Location: src/remem/llm/base.py:96

Attributes

global_config (BaseConfig): Global configuration object containing experiment-wide settings.
llm_name (str): Name of the LLM model to use (e.g., "gpt-4o-mini", "meta-llama/Llama-3.1-8B").
llm_config (LLMConfig): LLM-specific configuration object, initialized and managed by each concrete LLM implementation.

Constructor

def __init__(self, global_config: Optional[BaseConfig] = None) -> None
Parameters:
global_config (Optional[BaseConfig], default None): Global configuration object. If not provided, a default BaseConfig instance is created.
Example:
from remem.utils.config_utils import BaseConfig
from remem.llm.openai_gpt import CacheOpenAI

config = BaseConfig()
llm = CacheOpenAI.from_experiment_config(config)

Abstract Methods

Subclasses must implement the following method:

_init_llm_config

@abstractmethod
def _init_llm_config(self) -> None
Each LLM implementation should extract its own runtime parameters from global_config and raise an exception if any mandatory parameter is not defined. This method must initialize self.llm_config.
Location: src/remem/llm/base.py:115
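To make the contract concrete, here is a hedged sketch of a subclass implementing this hook. The BaseConfig and LLMConfig below are simplified, hypothetical stand-ins for the real remem classes; the point is the pattern the docstring describes: validate mandatory fields from global_config, then build self.llm_config.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

# Hypothetical, minimal stand-ins for remem's BaseConfig / LLMConfig,
# used only to illustrate the _init_llm_config contract.
@dataclass
class BaseConfig:
    llm_name: Optional[str] = "gpt-4o-mini"
    temperature: float = 0.0

@dataclass
class LLMConfig:
    params: Dict[str, Any] = field(default_factory=dict)

class SketchLLM:
    def __init__(self, global_config: Optional[BaseConfig] = None) -> None:
        self.global_config = global_config or BaseConfig()
        self._init_llm_config()  # the base class is expected to invoke this hook

    def _init_llm_config(self) -> None:
        cfg = self.global_config
        # Raise if a mandatory parameter is missing, as the interface requires.
        if not cfg.llm_name:
            raise ValueError("llm_name is mandatory but was not defined")
        # Initialize self.llm_config from the experiment-wide settings.
        self.llm_config = LLMConfig(params={
            "llm_name": cfg.llm_name,
            "temperature": cfg.temperature,
        })
```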

Core Methods

infer

def infer(self, chat: List[TextChatMessage]) -> Tuple[List[TextChatMessage], dict]
Perform synchronous inference using the LLM.
Parameters:
chat (List[TextChatMessage]): Input chat history for the LLM. Each message contains a role and content.
Returns:
Tuple[List[TextChatMessage], dict]: A tuple containing:
  • a list of n (number of choices) LLM response messages
  • a metadata dictionary including the input parameters and chat history
Location: src/remem/llm/base.py:150
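The call pattern can be sketched with a toy backend that satisfies the infer signature. This is not a real remem backend, and TextChatMessage is assumed here to be a role/content mapping, as the parameter description suggests:

```python
from typing import Dict, List, Tuple

# Assumed message shape: a mapping with "role" and "content" keys.
TextChatMessage = Dict[str, str]

class EchoLLM:
    """Toy backend illustrating the infer() contract; not a real remem class."""

    def infer(self, chat: List[TextChatMessage]) -> Tuple[List[TextChatMessage], dict]:
        # One "choice": echo the last message back as the assistant.
        reply = {"role": "assistant", "content": chat[-1]["content"]}
        metadata = {"num_input_messages": len(chat), "chat": chat}
        return [reply], metadata

responses, metadata = EchoLLM().infer(
    [{"role": "user", "content": "What did we discuss yesterday?"}]
)
```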

ainfer

def ainfer(self, chat: List[TextChatMessage]) -> Tuple[List[TextChatMessage], dict]
Perform asynchronous inference using the LLM.
Parameters:
chat (List[TextChatMessage]): Input chat history for the LLM.
Returns:
Tuple[List[TextChatMessage], dict]: A tuple containing:
  • a list of n (number of choices) LLM response messages
  • a metadata dictionary including the input parameters and chat history
Location: src/remem/llm/base.py:138

batch_infer

def batch_infer(
    self, batch_chat: List[List[TextChatMessage]]
) -> Tuple[List[List[TextChatMessage]], List[dict], bool]
Perform batched synchronous inference using the LLM.
Parameters:
batch_chat (List[List[TextChatMessage]]): Batch of input chat histories for the LLM.
Returns:
Tuple[List[List[TextChatMessage]], List[dict], bool]: A tuple containing:
  • a batch of length-n (number of choices) lists of LLM response messages
  • the corresponding batch of metadata dictionaries
  • a cache-hit indicator
Location: src/remem/llm/base.py:162
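A sketch of how a caller might consume the three-part return value. The backend below is a hypothetical stub, not a remem class; it exists only to show the return shape, with the trailing bool standing in for the cache-hit indicator:

```python
from typing import Dict, List, Tuple

TextChatMessage = Dict[str, str]  # assumed role/content mapping

class StubBatchLLM:
    """Stand-in illustrating the batch_infer() return shape only."""

    def batch_infer(
        self, batch_chat: List[List[TextChatMessage]]
    ) -> Tuple[List[List[TextChatMessage]], List[dict], bool]:
        # One response choice per chat in the batch.
        batch_responses = [
            [{"role": "assistant", "content": chat[-1]["content"]}]
            for chat in batch_chat
        ]
        batch_metadata = [{"chat": chat} for chat in batch_chat]
        cache_hit = False  # nothing is cached in this stub
        return batch_responses, batch_metadata, cache_hit

batch = [
    [{"role": "user", "content": "first question"}],
    [{"role": "user", "content": "second question"}],
]
responses, metadatas, hit = StubBatchLLM().batch_infer(batch)
```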

batch_upsert_llm_config

def batch_upsert_llm_config(self, updates: Dict[str, Any]) -> None
Update self.llm_config with the attribute-value pairs in a dictionary.
Parameters:
updates (Dict[str, Any]): Dictionary of configuration updates to merge into self.llm_config.
Example:
llm.batch_upsert_llm_config({
    "temperature": 0.7,
    "max_tokens": 1024
})
Location: src/remem/llm/base.py:122

LLMConfig Class

LLMConfig is a flexible configuration dataclass that stores LLM-specific parameters. Location: src/remem/llm/base.py:14

Key Methods

batch_upsert (method): Update existing attributes or add new ones from a dictionary.
to_dict (method): Export the configuration as a JSON-serializable dictionary.
to_json (method): Export the configuration as a JSON string.
from_dict (classmethod): Create an LLMConfig instance from a dictionary.
from_json (classmethod): Create an LLMConfig instance from a JSON string.
Example:
from remem.llm.base import LLMConfig

# Create from dictionary
config = LLMConfig.from_dict({
    "llm_name": "gpt-4o-mini",
    "temperature": 0.0,
    "max_tokens": 2048
})

# Update configuration
config.batch_upsert({"temperature": 0.7})

# Export to dictionary
config_dict = config.to_dict()
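The to_json/from_json pair listed above implies a round-trip through a JSON string. The real implementation lives at src/remem/llm/base.py:14; the ConfigSketch below is a hypothetical minimal version written only to illustrate that expected round-trip:

```python
import json
from dataclasses import dataclass, field
from typing import Any, Dict

# Hypothetical minimal stand-in mirroring the LLMConfig methods listed above.
@dataclass
class ConfigSketch:
    params: Dict[str, Any] = field(default_factory=dict)

    def to_dict(self) -> Dict[str, Any]:
        return dict(self.params)

    def to_json(self) -> str:
        return json.dumps(self.to_dict())

    @classmethod
    def from_dict(cls, d: Dict[str, Any]) -> "ConfigSketch":
        return cls(params=dict(d))

    @classmethod
    def from_json(cls, s: str) -> "ConfigSketch":
        return cls.from_dict(json.loads(s))

# Serialize, then restore: the round-trip should preserve every parameter.
cfg = ConfigSketch.from_dict({"llm_name": "gpt-4o-mini", "temperature": 0.0})
restored = ConfigSketch.from_json(cfg.to_json())
```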
