The llm_provider module provides a simple interface for text generation using a local Ollama server.
Model Management
list_models
Lists all models available on the local Ollama server.
def list_models() -> list[str]
Returns
list[str] - Sorted list of model names available on the Ollama server
Example
from llm_provider import list_models
try:
    models = list_models()
    print("Available models:")
    for model in models:
        print(f"  - {model}")
except Exception as e:
    print(f"Could not connect to Ollama: {e}")
Usage in main.py
# From main.py startup sequence
models = list_models()
if not models:
    error("No models found on Ollama. Pull a model first (e.g. 'ollama pull llama3.2:3b').")
    sys.exit(1)
for idx, model_name in enumerate(models):
    print(colored(f"  {idx + 1}. {model_name}", "cyan"))
select_model
Sets the model to use for all subsequent generate_text() calls.
def select_model(model: str) -> None
Parameters
model (str) - Ollama model name (must already be pulled to the local server)
Returns
None - Sets the global active model
Example
from llm_provider import select_model, list_models
# List and select a model
models = list_models()
if models:
    select_model(models[0])  # Select first available model
    print(f"Using model: {models[0]}")
Usage in main.py
# Select model from config or prompt user
configured_model = get_ollama_model()
if configured_model:
    select_model(configured_model)
    success(f"Using configured model: {configured_model}")
else:
    # Interactive selection
    models = list_models()
    model_choice = models[choice_idx]
    select_model(model_choice)
    success(f"Using model: {model_choice}")
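In the excerpt above, `choice_idx` comes from elided interactive-prompt code. A minimal, hypothetical sketch of that step (all names here are illustrative, not taken from main.py): the menu is printed 1-based, so the user's input must be shifted to a 0-based list index and range-checked.

```python
# Hypothetical sketch: mapping a 1-based menu choice to a model name.
models = ["llama3.2:3b", "mistral:latest"]

raw_choice = "2"  # stand-in for input("Select a model: ").strip()
choice_idx = int(raw_choice) - 1  # menu is 1-based, the list is 0-based

if not 0 <= choice_idx < len(models):
    raise ValueError(f"Choose a number between 1 and {len(models)}")

model_choice = models[choice_idx]
```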
get_active_model
Returns the currently selected model.
def get_active_model() -> str | None
Returns
str | None - The active model name, or None if no model has been selected
Example
from llm_provider import get_active_model, select_model
active = get_active_model()
if active is None:
    print("No model selected yet")
    select_model("llama3.2:3b")
else:
    print(f"Active model: {active}")
Usage in main.py
# Pass active model to CRON jobs
cron_script_path = os.path.join(ROOT_DIR, "src", "cron.py")
command = ["python", cron_script_path, "youtube", selected_account['id'], get_active_model()]
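Because get_active_model() returns None until a model has been selected, calling it here unguarded would put None into the command list and make the subprocess call fail. A minimal guard, sketched with illustrative values (active_model stands in for the return value of get_active_model(); the account id and paths are placeholders):

```python
# Hypothetical guard: fail fast if no model is set before building
# the CRON subprocess command.
active_model = "llama3.2:3b"  # stand-in for get_active_model()

if active_model is None:
    raise RuntimeError("Select a model before scheduling CRON jobs")

command = ["python", "src/cron.py", "youtube", "account-123", active_model]
```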
Text Generation
generate_text
Generates text using the local Ollama server.
def generate_text(prompt: str, model_name: str | None = None) -> str
Parameters
prompt (str) - User prompt to send to the LLM
model_name (str, optional) - Model name override. If not provided, uses the model set by select_model()
Returns
str - Generated text response from the model
Raises
RuntimeError - If no model is selected and model_name is not provided
Example
from llm_provider import select_model, generate_text
# Select a model first
select_model("llama3.2:3b")
# Generate text using selected model
response = generate_text("Write a short description for a tech review video")
print(response)
# Override model for specific call
response = generate_text(
    "Write a tweet about AI",
    model_name="mistral:latest"
)
print(response)
Usage in Classes
The provider is used throughout the codebase for generating content:
# Example: Generating YouTube video script
from llm_provider import generate_text
prompt = f"Create a {niche} video script about {topic}"
script = generate_text(prompt)
# Example: Generating Twitter post
prompt = f"Write a tweet about {topic} in {language}"
tweet = generate_text(prompt)
Configuration
The module uses configuration from config.py to connect to the Ollama server:
from config import get_ollama_base_url
import ollama
# Internal client creation
client = ollama.Client(host=get_ollama_base_url())
Default Configuration
- Base URL: http://127.0.0.1:11434 (configurable via config.json)
- Model: Set via ollama_model in config.json or selected interactively at startup
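Assuming the key names match the getter functions in config.py (ollama_base_url is an assumption; only ollama_model is named by this document), a config.json overriding both defaults might look like:

```json
{
  "ollama_base_url": "http://127.0.0.1:11434",
  "ollama_model": "llama3.2:3b"
}
```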
Complete Example
Here’s a complete example showing the typical workflow:
import sys

from llm_provider import list_models, select_model, get_active_model, generate_text
from config import get_ollama_model

# 1. Check if a model is configured
configured_model = get_ollama_model()
if configured_model:
    # Use configured model
    select_model(configured_model)
    print(f"Using configured model: {configured_model}")
else:
    # List available models
    models = list_models()
    if not models:
        print("Error: No models available. Run 'ollama pull llama3.2:3b' first.")
        sys.exit(1)
    # Select first available model
    select_model(models[0])
    print(f"Using model: {models[0]}")

# 2. Verify active model
active = get_active_model()
print(f"Active model: {active}")

# 3. Generate text
try:
    response = generate_text("Write a 3-sentence video description about space exploration")
    print(f"Generated: {response}")
except RuntimeError as e:
    print(f"Error: {e}")

# 4. Generate with a different model
response = generate_text(
    "Write a tweet about Python programming",
    model_name="mistral:latest"
)
print(f"Tweet: {response}")
Error Handling
Connection Errors
If the Ollama server is not running:
try:
    models = list_models()
except Exception as e:
    print(f"Could not connect to Ollama: {e}")
    print("Make sure Ollama is running: 'ollama serve'")
Model Not Selected
try:
    response = generate_text("Hello")
except RuntimeError as e:
    print(e)  # "No Ollama model selected. Call select_model() first or pass model_name."