Overview
SessionConfig is the central configuration object for all Fenic operations. It defines application settings, model configurations, and optional cloud settings.
Constructor
from fenic.api.session.config import SemanticConfig, SessionConfig

config = SessionConfig(
    app_name="my_app",
    semantic=SemanticConfig(...),
    db_path=None,
    cloud=None,
)
Parameters
app_name (str): Name of the application using this session. Used for logging and tracking purposes.
semantic (SemanticConfig, optional): Configuration for semantic models, including language models and embedding models. See SemanticConfig for details. When not provided, only non-semantic DataFrame operations are available.
db_path (Path, optional): Path to a local database file for persistent storage. If not provided, an in-memory database is used.
cloud (CloudConfig, optional): Configuration for cloud-based execution. Only needed for distributed processing.
CloudConfig properties
size (CloudExecutorSize): Size of the cloud executor instance. Options:
CloudExecutorSize.SMALL - Small instance
CloudExecutorSize.MEDIUM - Medium instance
CloudExecutorSize.LARGE - Large instance
CloudExecutorSize.XLARGE - Extra large instance
Methods
to_json()
Export the session configuration to a JSON string.
json_str = config.to_json()
Returns: str - JSON representation of the configuration
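Because the export is plain JSON, it can be round-tripped with the standard json module, for example to log or diff configurations. A minimal sketch of the pattern, using a stand-in dict in place of a real SessionConfig (constructing one requires the fenic package):

```python
import json

# Stand-in dict mirroring the SessionConfig fields above; on a real
# SessionConfig, config.to_json() returns an equivalent JSON string.
config_dict = {
    "app_name": "my_app",
    "db_path": None,
    "cloud": None,
}
json_str = json.dumps(config_dict)

# Parse the JSON string back into a dict for inspection.
restored = json.loads(json_str)
print(restored["app_name"])
```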
Examples
Basic Configuration
Simple session with a single language model:
from fenic.api.session.config import OpenAILanguageModel, SemanticConfig, SessionConfig

config = SessionConfig(
    app_name="my_app",
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=1000,
            )
        }
    ),
)
Multi-Model Configuration
Session with multiple language models and an embedding model:
from pathlib import Path

from fenic.api.session.config import (
    SessionConfig,
    SemanticConfig,
    OpenAILanguageModel,
    AnthropicLanguageModel,
    OpenAIEmbeddingModel,
)

config = SessionConfig(
    app_name="production_app",
    db_path=Path("/path/to/database.db"),
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=1000,
            ),
            "claude": AnthropicLanguageModel(
                model_name="claude-3-5-haiku-latest",
                rpm=100,
                input_tpm=100,
                output_tpm=100,
            ),
        },
        default_language_model="gpt4",
        embedding_models={
            "openai_embeddings": OpenAIEmbeddingModel(
                model_name="text-embedding-3-small",
                rpm=100,
                tpm=1000,
            )
        },
        default_embedding_model="openai_embeddings",
    ),
)
Cloud Execution Configuration
Session configured for cloud-based execution:
from fenic.api.session.config import (
    SessionConfig,
    SemanticConfig,
    CloudConfig,
    CloudExecutorSize,
    OpenAILanguageModel,
)

config = SessionConfig(
    app_name="cloud_app",
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=1000,
            )
        }
    ),
    cloud=CloudConfig(size=CloudExecutorSize.MEDIUM),
)
Configuration with LLM Response Caching
Reduce costs and improve performance with response caching:
from fenic.api.session.config import (
    SessionConfig,
    SemanticConfig,
    LLMResponseCacheConfig,
    OpenAILanguageModel,
)

config = SessionConfig(
    app_name="cached_app",
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=1000,
            )
        },
        llm_response_cache=LLMResponseCacheConfig(
            enabled=True,
            ttl="2h",  # cache entries expire after 2 hours
            max_size_mb=1000,  # 1 GB cache
        ),
    ),
)
Usage
Once configured, pass the SessionConfig to Session.get_or_create():
from fenic import Session
session = Session.get_or_create(config=config)