Environment Variables in BAML
BAML uses environment variables to configure API keys, endpoints, and other runtime settings. Environment variables are referenced in BAML files using the `env.` prefix:
Example Client Configuration
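For example, a client that reads its API key from the environment might look like this (the client name and model below are illustrative):

```baml
client<llm> MyClient {
  provider openai
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
  }
}
```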
Common Environment Variables
LLM Provider API Keys
Different LLM providers require different API keys:

| Provider | Environment Variable | Description |
|---|---|---|
| OpenAI | OPENAI_API_KEY | API key from OpenAI platform |
| Anthropic | ANTHROPIC_API_KEY | API key from Anthropic console |
| Google AI | GOOGLE_API_KEY | API key for Gemini models |
| AWS Bedrock | AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY | AWS credentials for Bedrock |
| Azure OpenAI | AZURE_OPENAI_API_KEY | Azure OpenAI service key |
| Groq | GROQ_API_KEY | API key from Groq |
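For local development, these keys typically live in a .env file. A sketch with placeholder values (never commit real keys to version control):

```shell
OPENAI_API_KEY=<your-openai-key>
ANTHROPIC_API_KEY=<your-anthropic-key>
GOOGLE_API_KEY=<your-google-key>
```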
Boundary Studio Integration
BOUNDARY_API_KEY - Required for sending logs and traces to Boundary Studio
When you use BAML in your application, logs and traces are automatically sent to Boundary Studio for monitoring and debugging. To enable this integration, set the BOUNDARY_API_KEY environment variable with an API key from your Boundary Studio dashboard.
The API key is used to:
- Authenticate your application with Boundary Studio
- Associate logs and traces with your specific project and environment
- Control access permissions for different operations
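Before relying on Studio logging, you can confirm the variable is actually visible to your process. A stdlib-only sketch (the variable name comes from this page; the helper function is illustrative):

```python
import os

def studio_logging_enabled() -> bool:
    """True if BOUNDARY_API_KEY is set to a non-empty, non-whitespace value."""
    return bool(os.environ.get("BOUNDARY_API_KEY", "").strip())

if not studio_logging_enabled():
    print("BOUNDARY_API_KEY is not set; logs will not reach Boundary Studio")
```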
BAML Runtime Configuration
| Variable | Description | Default |
|---|---|---|
| BAML_CACHE_DIR | Directory for caching BAML native libraries | System-specific cache directory |
| BAML_PASSWORD | Password for securing BAML-over-HTTP endpoints | None |
| BAML_ENDPOINT | Custom endpoint for BAML-over-HTTP | None |
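These can be set like any other environment variable; a shell sketch with placeholder values (both values below are illustrative):

```shell
# BAML-over-HTTP settings - values are placeholders
export BAML_PASSWORD="<shared-secret>"
export BAML_ENDPOINT="<http-endpoint-url>"
```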
Setting Environment Variables
In the VSCode Playground
Once you open a .baml file in VSCode, you should see a small button above every BAML function: Open Playground. You can then set environment variables in the playground's settings tab.
Or type BAML Playground in the VSCode Command Bar (CMD + Shift + P or CTRL + Shift + P) to open the playground.
In Your Application
BAML will automatically load environment variables from your program. Any of the following strategies for setting env vars are compatible with BAML:
- Setting them in your shell before running your program
- In your Dockerfile
- In your next.config.js
- In your Kubernetes manifest
- From secrets-store.csi.k8s.io
- From a secrets provider such as Infisical / Doppler
- From a .env file (using the dotenv CLI, the python-dotenv package in Python, or the dotenv package in Node.js)
- Using account credentials for ephemeral token generation (e.g., Vertex AI Auth Tokens)
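As an illustration of what the dotenv-style tools above do, here is a minimal stdlib-only sketch that copies simple KEY=VALUE lines into the process environment (real libraries such as python-dotenv additionally handle quoting, comments, and variable interpolation):

```python
import os

def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a file into os.environ."""
    loaded = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and malformed lines
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip().strip('"')
    except FileNotFoundError:
        pass  # no .env file is fine; variables may come from the shell
    os.environ.update(loaded)
    return loaded
```

Whatever loader you use, run it before importing baml_client so the generated code sees the variables.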
Framework-Specific Configuration
- Next.js - put keys in .env.local (loaded automatically by Next.js)
- Express.js
- Flask
- Rails
Setting API Keys Per Request
You can set the API key for an LLM dynamically by passing in the key as a header or as a parameter (depending on the provider), using the ClientRegistry. This is useful when you need to:
- Use different API keys for different users
- Rotate API keys dynamically
- Support multi-tenant applications
Dynamic API Key Example
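A Python sketch using the ClientRegistry from baml_py (the client name, model, and the generated function ExtractResume are illustrative; your generated baml_client exposes your own functions):

```python
from baml_py import ClientRegistry
from baml_client import b  # generated by baml-cli

async def extract_for_user(resume_text: str, user_api_key: str):
    cr = ClientRegistry()
    # Register a client whose api_key comes from the request instead of env vars
    cr.add_llm_client(
        name="PerUserClient",  # illustrative name
        provider="openai",
        options={"model": "gpt-4o", "api_key": user_api_key},
    )
    cr.set_primary("PerUserClient")
    # Pass the registry per call so each request can use a different key
    return await b.ExtractResume(resume_text, baml_options={"client_registry": cr})
```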
Security Best Practices
- Use .env files locally - Keep API keys out of your codebase
- Use secrets managers in production - AWS Secrets Manager, Google Secret Manager, etc.
- Rotate keys regularly - Change API keys periodically for security
- Use different keys per environment - Separate keys for development, staging, and production
- Limit key permissions - Use the minimum required permissions for each API key
Troubleshooting
Environment Variables Not Loading
If your environment variables aren't being recognized:
- Check the variable name - Ensure it matches exactly in your .baml file and .env file
- Load before importing BAML - Call load_dotenv() or dotenv.config() before importing baml_client
- Check the .env file location - It should be in your project root or where you run the command
- Verify the syntax - Ensure no spaces around = in .env files
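A small stdlib check, run before importing baml_client, can catch the most common of these issues early (the variable list is an example; fill in whatever your .baml clients reference):

```python
import os

# Example: list the variables your .baml clients reference
REQUIRED = ["OPENAI_API_KEY"]

def missing_env_vars(required=REQUIRED) -> list:
    """Return the names that are unset or empty in the current process."""
    return [name for name in required if not os.environ.get(name, "").strip()]

missing = missing_env_vars()
if missing:
    print(f"Not loaded: {missing} - did you load your .env before importing baml_client?")
```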
API Key Errors
If you're getting authentication errors:
- Verify the key is correct - Copy it directly from the provider's dashboard
- Check for extra spaces - Remove any leading/trailing whitespace
- Ensure the key is active - Some providers expire or disable keys
- Check provider status - The LLM provider might be experiencing outages
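The whitespace issue in particular is easy to check for programmatically. A stdlib-only sketch (the helper name is illustrative):

```python
import os

def clean_api_key(name: str):
    """Return the key with surrounding whitespace removed, or None if unset/empty."""
    raw = os.environ.get(name)
    if raw is None:
        return None
    key = raw.strip()
    if key != raw:
        print(f"warning: {name} had leading/trailing whitespace")
    return key or None
```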
Next Steps
- Learn about Docker deployment
- Explore AWS deployment
- Configure LLM clients