Overview
BaseLLM is the abstract base class that defines the interface for all LLM backends in Remem. It provides a consistent API for initializing, configuring, and running inference with different language models.
Class Definition
src/remem/llm/base.py:96
Attributes
Global configuration object containing experiment-wide settings
Name of the LLM model to use (e.g., "gpt-4o-mini", "meta-llama/Llama-3.1-8B")
LLM-specific configuration object, initialized and handled by each specific LLM implementation
Constructor
Global configuration object. If not provided, a default BaseConfig instance is created.
Abstract Methods
Subclasses must implement the following method:
_init_llm_config
Reads the required parameters from global_config and raises an exception if any mandatory parameter is not defined. This method must initialize self.llm_config.
Location: src/remem/llm/base.py:115
Core Methods
infer
Input chat history for the LLM. Each message contains a role and content
A tuple containing:
- List of n (number of choices) LLM response messages
- Metadata dictionary including input params and chat history
src/remem/llm/base.py:150
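A small stub makes the infer() contract concrete: a chat history (a list of role/content messages) goes in, and a (responses, metadata) tuple comes out. The class name, parameter names, and metadata keys below are illustrative assumptions, not the actual Remem implementation.

```python
# Hypothetical stub of the infer() contract described above.
class EchoLLM:
    def __init__(self, n: int = 1):
        self.n = n  # number of choices to return per call

    def infer(self, messages: list[dict]) -> tuple[list[str], dict]:
        # Echo the last user message back n times, mimicking n choices.
        last = messages[-1]["content"]
        responses = [f"echo: {last}" for _ in range(self.n)]
        # Metadata bundles the input params and the chat history.
        metadata = {"input_params": {"n": self.n}, "chat_history": messages}
        return responses, metadata

history = [{"role": "user", "content": "hello"}]
responses, meta = EchoLLM(n=2).infer(history)
# responses -> ["echo: hello", "echo: hello"]
```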
ainfer
Input chat history for the LLM
A tuple containing:
- List of n (number of choices) LLM response messages
- Metadata dictionary including input params and chat history
src/remem/llm/base.py:138
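The async variant has the same return shape as infer() but is awaited, which lets callers overlap many requests. The sketch below is illustrative; only the method name ainfer is taken from this document.

```python
import asyncio

# Hypothetical async counterpart: same (responses, metadata) shape, awaited.
class AsyncEchoLLM:
    async def ainfer(self, messages: list[dict]) -> tuple[list[str], dict]:
        await asyncio.sleep(0)  # stand-in for a non-blocking backend call
        reply = f"echo: {messages[-1]['content']}"
        return [reply], {"chat_history": messages}

responses, meta = asyncio.run(
    AsyncEchoLLM().ainfer([{"role": "user", "content": "hi"}])
)
# responses -> ["echo: hi"]
```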
batch_infer
Batch of input chat histories for the LLM
A tuple containing:
- Batch list of length-n (number of choices) lists of LLM response messages
- Corresponding batch of metadata dictionaries
- Cache hit indicator
src/remem/llm/base.py:162
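The batch variant returns three parallel sequences: one response list per input history, one metadata dict per input history, and a cache-hit indicator. The per-history boolean flags and the cache keyed on the last message are assumptions for illustration; the real cache-indicator shape may differ.

```python
# Illustrative batch_infer stub with a toy in-memory cache.
class BatchEchoLLM:
    def __init__(self):
        self._cache: dict[str, list[str]] = {}

    def batch_infer(self, batch: list[list[dict]]):
        responses, metadata, cache_hits = [], [], []
        for messages in batch:
            key = messages[-1]["content"]  # toy cache key: last message
            hit = key in self._cache
            if not hit:
                self._cache[key] = [f"echo: {key}"]
            responses.append(self._cache[key])
            metadata.append({"chat_history": messages})
            cache_hits.append(hit)
        return responses, metadata, cache_hits

llm = BatchEchoLLM()
out1 = llm.batch_infer([[{"role": "user", "content": "a"}]])
out2 = llm.batch_infer([[{"role": "user", "content": "a"}]])
# out1[2] -> [False]; out2[2] -> [True] (second call hits the cache)
```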
batch_upsert_llm_config
Updates self.llm_config with attribute-value pairs specified by a dictionary.
Parameters:
Dictionary of configuration updates to be integrated into
self.llm_config
src/remem/llm/base.py:122
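The upsert semantics amount to: overwrite attributes that already exist, and add keys that do not. The simple setattr loop below is an assumed implementation used only to demonstrate the behavior, not code taken from the source.

```python
# Sketch of upsert semantics: existing attributes are overwritten,
# unknown keys are added as new attributes.
class ConfigSketch:
    def __init__(self):
        self.temperature = 0.7  # pre-existing attribute

def batch_upsert_llm_config(cfg, updates: dict) -> None:
    for key, value in updates.items():
        setattr(cfg, key, value)  # overwrite or add

cfg = ConfigSketch()
batch_upsert_llm_config(cfg, {"temperature": 0.2, "max_tokens": 256})
# cfg.temperature -> 0.2 (overwritten); cfg.max_tokens -> 256 (added)
```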
LLMConfig Class
LLMConfig is a flexible configuration dataclass that stores LLM-specific parameters.
Location: src/remem/llm/base.py:14
Key Methods
Update existing attributes or add new ones from a dictionary
Export the configuration as a JSON-serializable dictionary
Export the configuration as a JSON string
Create an LLMConfig instance from a dictionary
Create an LLMConfig instance from a JSON string
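The five helpers listed above can be sketched as a round-trippable dataclass. The method names to_dict, to_json, from_dict, and from_json are inferred from the descriptions and may not match the real API exactly; the real class is in src/remem/llm/base.py.

```python
import json
from dataclasses import dataclass, asdict

# Minimal sketch of LLMConfig's serialization helpers (names assumed).
@dataclass
class LLMConfigSketch:
    model: str = ""
    temperature: float = 0.7

    def to_dict(self) -> dict:
        # JSON-serializable dictionary of all fields.
        return asdict(self)

    def to_json(self) -> str:
        return json.dumps(self.to_dict())

    @classmethod
    def from_dict(cls, d: dict) -> "LLMConfigSketch":
        return cls(**d)

    @classmethod
    def from_json(cls, s: str) -> "LLMConfigSketch":
        return cls.from_dict(json.loads(s))

cfg = LLMConfigSketch(model="gpt-4o-mini", temperature=0.2)
restored = LLMConfigSketch.from_json(cfg.to_json())  # full round trip
```

Because dataclasses generate structural equality, a successful round trip can be checked with a plain equality test, which is a convenient invariant for any serializable config class.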
See Also
- OpenAI LLM Client - OpenAI GPT implementation with caching
- vLLM Offline Client - vLLM offline inference engine
- Configuration - BaseConfig documentation