
Overview

TypeAgent requires API credentials and configuration via environment variables for LLM and embedding providers. The system supports both public OpenAI and Azure OpenAI services, with automatic provider detection.
Recommended: Use a .env file and call load_dotenv() at the start of your program:
from dotenv import load_dotenv

load_dotenv()  # Load .env file
# Now use typeagent functions

OpenAI Environment Variables

For public OpenAI services (api.openai.com).

Required

OPENAI_API_KEY
string
required
Your OpenAI API key from the OpenAI dashboard.
Format: sk-...
Example:
export OPENAI_API_KEY="sk-proj-abc123..."

Optional

OPENAI_MODEL
string
default:"gpt-4o"
The chat model to use for knowledge extraction and structured output.
Common values:
  • gpt-4o (default)
  • gpt-4o-mini
  • gpt-4-turbo
  • gpt-3.5-turbo
Example:
export OPENAI_MODEL="gpt-4o"
OPENAI_EMBEDDING_MODEL
string
default:"text-embedding-ada-002"
The embedding model to use for semantic search and indexing.
Common values:
  • text-embedding-3-small (recommended, 1536 dims)
  • text-embedding-3-large (best quality, 3072 dims)
  • text-embedding-ada-002 (legacy, 1536 dims)
Example:
export OPENAI_EMBEDDING_MODEL="text-embedding-3-small"
OPENAI_BASE_URL
string
Custom base URL for OpenAI-compatible embedding servers (e.g., Infinity).
Note: OPENAI_API_KEY must still be set (can be any value for self-hosted servers).
Example:
export OPENAI_BASE_URL="http://localhost:8080"
export OPENAI_API_KEY="dummy"  # Required but ignored
OPENAI_ENDPOINT
string
Custom endpoint URL for an OpenAI-compatible Chat Completions API.
Important: Ensure OPENAI_MODEL matches the deployed model name.
Example:
export OPENAI_ENDPOINT="http://localhost:11434/v1"  # Ollama
export OPENAI_MODEL="llama3.2:1b"

Azure OpenAI Environment Variables

For the OpenAI service hosted on Microsoft Azure.

Required

AZURE_OPENAI_API_KEY
string
required
Your Azure OpenAI API key, or "identity" to use Azure Managed Identity.
Format: String key or "identity"
Examples:
# API key authentication
export AZURE_OPENAI_API_KEY="abc123..."

# Managed identity authentication
export AZURE_OPENAI_API_KEY="identity"
AZURE_OPENAI_ENDPOINT
string
required
Full URL of the Azure OpenAI REST API endpoint for chat completions.
Format: https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=YYYY-MM-DD
Example:
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-05-15"
AZURE_OPENAI_ENDPOINT_EMBEDDING
string
required
Full URL of the Azure OpenAI REST API endpoint for embeddings.
Format: https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_EMBEDDING_DEPLOYMENT_NAME/embeddings?api-version=YYYY-MM-DD
Example:
export AZURE_OPENAI_ENDPOINT_EMBEDDING="https://my-resource.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2024-08-01-preview"

Optional (Model-Specific Endpoints)

For scenarios with multiple embedding model deployments:
AZURE_OPENAI_ENDPOINT_EMBEDDING_3_SMALL
string
Azure endpoint specifically for the text-embedding-3-small model.
If set, takes precedence over AZURE_OPENAI_ENDPOINT_EMBEDDING when using this model.
AZURE_OPENAI_ENDPOINT_EMBEDDING_3_LARGE
string
Azure endpoint specifically for the text-embedding-3-large model.
If set, takes precedence over AZURE_OPENAI_ENDPOINT_EMBEDDING when using this model.
AZURE_OPENAI_API_KEY_EMBEDDING
string
Separate API key for embedding endpoints (optional).
If not set, falls back to AZURE_OPENAI_API_KEY.

Provider Conflict Resolution

When both OpenAI and Azure OpenAI credentials are present:
OPENAI_API_KEY wins over AZURE_OPENAI_API_KEY
If both are set, TypeAgent uses public OpenAI by default. Unset OPENAI_API_KEY to use Azure.
Example scenarios:
# Scenario 1: Public OpenAI (both set, OPENAI_API_KEY wins)
export OPENAI_API_KEY="sk-..."
export AZURE_OPENAI_API_KEY="..."  # Ignored
# Result: Uses OpenAI

# Scenario 2: Azure OpenAI (only Azure set)
unset OPENAI_API_KEY
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="..."
# Result: Uses Azure

# Scenario 3: Auto-detection in code
# When calling create_chat_model("openai:gpt-4o"):
# - If OPENAI_API_KEY not set but AZURE_OPENAI_API_KEY is set
# - Automatically switches to Azure
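
The resolution order in these scenarios can be sketched as follows. This is an illustrative sketch of the precedence rule, not TypeAgent's actual implementation:

```python
import os

def resolve_provider() -> str:
    """Sketch of the precedence rule above: public OpenAI wins when
    both keys are set; Azure is used only as a fallback."""
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("AZURE_OPENAI_API_KEY"):
        return "azure"
    raise RuntimeError(
        "No API key found: set OPENAI_API_KEY or AZURE_OPENAI_API_KEY"
    )
```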

Other Provider Environment Variables

TypeAgent supports 25+ providers via pydantic-ai. Common providers:

Anthropic

export ANTHROPIC_API_KEY="sk-ant-..."

Google

export GOOGLE_API_KEY="..."  # For Gemini API
# OR
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"  # For Vertex AI

Groq

export GROQ_API_KEY="gsk_..."

Cohere

export COHERE_API_KEY="..."

AWS Bedrock

Uses standard AWS credentials (boto3):
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-east-1"

Ollama

No environment variables required. Defaults to http://localhost:11434. Optional:
export OLLAMA_BASE_URL="http://localhost:11434"

Example .env Files

Public OpenAI

# .env
OPENAI_API_KEY=sk-proj-abc123...
OPENAI_MODEL=gpt-4o
OPENAI_EMBEDDING_MODEL=text-embedding-3-small

Azure OpenAI

# .env
AZURE_OPENAI_API_KEY=abc123...
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-05-15
AZURE_OPENAI_ENDPOINT_EMBEDDING=https://my-resource.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2024-08-01-preview

Azure OpenAI with Managed Identity

# .env
AZURE_OPENAI_API_KEY=identity
AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-05-15
AZURE_OPENAI_ENDPOINT_EMBEDDING=https://my-resource.openai.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2024-08-01-preview

Mixed Providers

# .env
# Chat via Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Embeddings via OpenAI
OPENAI_API_KEY=sk-...
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
Then in code:
from typeagent.aitools.model_adapters import configure_models

chat, embedder = configure_models(
    "anthropic:claude-sonnet-4-20250514",
    "openai:text-embedding-3-small"
)

Local Development (Ollama)

# .env
OPENAI_ENDPOINT=http://localhost:11434/v1
OPENAI_MODEL=llama3.2
OPENAI_API_KEY=dummy  # Required but unused

# For embeddings, use Ollama or separate service
OPENAI_EMBEDDING_MODEL=nomic-embed-text

Loading Environment Variables

from dotenv import load_dotenv

# Load from .env file in current or parent directory
load_dotenv()

# Now use typeagent
from typeagent import create_conversation
# ...

Using python-dotenv with Custom Path

from pathlib import Path
from dotenv import load_dotenv

# Load from specific file
env_path = Path(".env.production")
load_dotenv(dotenv_path=env_path)

Setting Variables in Code

import os

os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["OPENAI_MODEL"] = "gpt-4o"
Do NOT hardcode API keys in source code. Always use environment variables or secure secret management.

Security Best Practices

Never Commit Keys

Add .env to .gitignore. Never commit API keys to version control.

Use Secret Managers

In production, use Azure Key Vault, AWS Secrets Manager, or similar.

Rotate Keys

Regularly rotate API keys and revoke old keys.

Managed Identity

Use Azure Managed Identity (AZURE_OPENAI_API_KEY=identity) in Azure environments.

Troubleshooting

"No API key found"

Error: OPENAI_API_KEY not found
Solution: Set OPENAI_API_KEY or AZURE_OPENAI_API_KEY environment variable.

"Embedding model mismatch"

ValueError: Conversation metadata embedding_model (text-embedding-ada-002) 
does not match provided embedding model (text-embedding-3-small).
Solution: The database was created with a different embedding model. Either:
  1. Use the same model: create_embedding_model("openai:text-embedding-ada-002")
  2. Create a new database file

Azure endpoint format errors

Correct format:
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/openai/deployments/my-deployment/chat/completions?api-version=2023-05-15"
Incorrect format:
# Missing deployment name
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"

# Missing api-version
export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/openai/deployments/my-deployment/chat/completions"
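
A quick sanity check for both mistakes can be written with the standard library. `check_azure_endpoint` is a hypothetical helper for illustration, not shipped with TypeAgent:

```python
from urllib.parse import parse_qs, urlparse

def check_azure_endpoint(url: str) -> list[str]:
    """Flag the two common Azure endpoint mistakes shown above."""
    problems = []
    parsed = urlparse(url)
    if "/deployments/" not in parsed.path:
        problems.append("missing deployment name")
    if "api-version" not in parse_qs(parsed.query):
        problems.append("missing api-version")
    return problems
```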

"OPENAI_MODEL ignored; Azure deployment determined by AZURE_OPENAI_ENDPOINT"

This is a warning, not an error. When using Azure, the deployment name in the endpoint URL determines the model, not OPENAI_MODEL.

Verification Script

Test your configuration:
import asyncio
import os

from dotenv import load_dotenv
from typeagent.aitools.model_adapters import (
    create_chat_model,
    create_embedding_model,
)

load_dotenv()

print("Environment variables:")
print(f"  OPENAI_API_KEY: {'SET' if os.getenv('OPENAI_API_KEY') else 'NOT SET'}")
print(f"  AZURE_OPENAI_API_KEY: {'SET' if os.getenv('AZURE_OPENAI_API_KEY') else 'NOT SET'}")
print(f"  OPENAI_MODEL: {os.getenv('OPENAI_MODEL', 'NOT SET')}")
print(f"  OPENAI_EMBEDDING_MODEL: {os.getenv('OPENAI_EMBEDDING_MODEL', 'NOT SET')}")

async def main():
    try:
        chat = create_chat_model()
        print("\nChat model: SUCCESS")

        embedder = create_embedding_model()
        print(f"Embedding model: SUCCESS ({embedder.model_name})")

        # Test embedding (get_embedding is async, so run it inside an event loop)
        emb = await embedder.get_embedding("test")
        print(f"Test embedding: SUCCESS (dim={len(emb)})")

    except Exception as e:
        print(f"\nERROR: {e}")

asyncio.run(main())
