Most AI agent projects require API keys for language models, vector databases, and external tools. This guide covers common providers and best practices for managing credentials.
## Environment Variables

All projects use `.env` files for API key management.
### Basic Setup

```bash
# Copy the example file
cp .env.example .env

# Edit with your keys
nano .env
```
### Example .env File

```bash
# AI Model Providers
NEBIUS_API_KEY=your_nebius_api_key_here
OPENAI_API_KEY=sk-proj-your_openai_key_here
ANTHROPIC_API_KEY=sk-ant-your_anthropic_key_here
GOOGLE_API_KEY=your_google_ai_key_here

# Memory & Vector Databases
MEMORI_API_KEY=your_memori_api_key_here
QDRANT_URL=http://localhost:6333
QDRANT_API_KEY=your_qdrant_cloud_key_here

# Search & Web Tools
SERPAPI_API_KEY=your_serpapi_key_here
TAVILY_API_KEY=tvly-your_tavily_key_here
BRAVE_SEARCH_API_KEY=your_brave_key_here

# MCP & External Services
GITHUB_PERSONAL_ACCESS_TOKEN=ghp_your_github_token_here
SLACK_BOT_TOKEN=xoxb-your_slack_token_here

# Specialized Services
SGAI_API_KEY=your_scrapegraph_key_here
CONTEXTUAL_API_KEY=your_contextual_ai_key_here
ARIZE_PHOENIX_API_KEY=your_phoenix_key_here
```

Note: `.env` files use `KEY=value` with no spaces around `=`; spaces can break parsing when the file is sourced by a shell.
**Never commit `.env` files to version control!** Always add `.env` to `.gitignore`.
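To make sure the file can never be committed, add it to `.gitignore`; the `grep` guard below is only there to avoid duplicate entries:

```shell
# Append .env to .gitignore only if it is not already listed
grep -qx ".env" .gitignore 2>/dev/null || echo ".env" >> .gitignore

# If .env was committed earlier, untrack it (run inside the repo):
# git rm --cached .env
```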
## AI Model Providers

### Nebius Token Factory

*Most commonly used across the repository.*

Nebius Token Factory provides access to 60+ open-source models through a single API.
**Get Your Key**

1. Visit Nebius Token Factory
2. Sign up for an account
3. Navigate to the API Keys section
4. Create a new API key
5. Add to `.env`:

```bash
NEBIUS_API_KEY=your_key_here
```
**Usage with Agno**

```python
import os

from agno.models.nebius import Nebius
from dotenv import load_dotenv

load_dotenv()

model = Nebius(
    id="deepseek-ai/DeepSeek-V3-0324",
    api_key=os.getenv("NEBIUS_API_KEY"),
)
```
**Popular Models**

| Model ID | Use Case | Context Length |
|---|---|---|
| deepseek-ai/DeepSeek-V3-0324 | General purpose, coding | 64K |
| Qwen/Qwen3-235B-A22B | Complex reasoning | 128K |
| moonshotai/Kimi-K2-Instruct | Tool calling, agents | 128K |
| meta-llama/Meta-Llama-3.1-8B-Instruct | Fast, lightweight | 128K |
| nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 | Content writing | 128K |
**Usage with AWS Strands**

```python
import os

from strands.models.litellm import LiteLLMModel

model = LiteLLMModel(
    client_args={"api_key": os.getenv("NEBIUS_API_KEY")},
    model_id="nebius/deepseek-ai/DeepSeek-V3-0324",
)
```
### OpenAI

**Get Your Key**

1. Visit OpenAI Platform
2. Go to the API Keys section
3. Create a new secret key
4. Add to `.env`:

```bash
OPENAI_API_KEY=sk-proj-your_key_here
```
**Usage**

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
**With Agno**

```python
import os

from agno.models.openai import OpenAIChat

model = OpenAIChat(
    id="gpt-4-turbo",
    api_key=os.getenv("OPENAI_API_KEY"),
)
```
### Anthropic Claude

**Get Your Key**

1. Visit Anthropic Console
2. Navigate to API Keys
3. Create a new key
4. Add to `.env`:

```bash
ANTHROPIC_API_KEY=sk-ant-your_key_here
```
**Usage**

```python
import os

from anthropic import Anthropic

client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
```
### Google Gemini

**Get Your Key**

1. Visit Google AI Studio
2. Create an API key
3. Add to `.env`:

```bash
GOOGLE_API_KEY=your_key_here
```
**Usage with LlamaIndex**

```python
import os

from llama_index.llms.gemini import Gemini

llm = Gemini(
    model="gemini-2.0-flash-exp",
    api_key=os.getenv("GOOGLE_API_KEY"),
)
```
## Memory & Storage Services

### GibsonAI Memori

*Persistent memory for AI agents.*
**Get Your Key**

1. Visit GibsonAI Memori
2. Sign up for an account
3. Get an API key from the dashboard
4. Add to `.env`:

```bash
MEMORI_API_KEY=your_key_here
```
**Usage**

```python
import os

from agno.memory.memori import Memori

memory = Memori(
    user_id="user_123",
    api_key=os.getenv("MEMORI_API_KEY"),
)
```
### Qdrant Cloud

*Managed vector database.*

**Get Your Credentials**

1. Visit Qdrant Cloud
2. Create a cluster
3. Get the cluster URL and API key
4. Add to `.env`:

```bash
QDRANT_URL=https://your-cluster.qdrant.io
QDRANT_API_KEY=your_key_here
```
**Usage**

```python
import os

import qdrant_client

client = qdrant_client.QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY"),
)
```
## Search & Web Tools

### SerpAPI

*Google search results API.*

**Get Your Key**

1. Visit SerpAPI
2. Sign up for the free tier
3. Get an API key
4. Add to `.env`:

```bash
SERPAPI_API_KEY=your_key_here
```
**Usage**

```python
import os

from serpapi import GoogleSearch

search = GoogleSearch({
    "q": "AI agents",
    "api_key": os.getenv("SERPAPI_API_KEY"),
})
results = search.get_dict()
```
### Tavily AI

*AI-powered web search.*

**Get Your Key**

1. Visit Tavily
2. Sign up for an account
3. Get an API key
4. Add to `.env`:

```bash
TAVILY_API_KEY=tvly-your_key_here
```
**Usage**

```python
import os

from tavily import TavilyClient

client = TavilyClient(api_key=os.getenv("TAVILY_API_KEY"))

results = client.search(
    query="Latest AI research",
    search_depth="advanced",
    max_results=5,
)
```
### ScrapeGraph AI

*AI-powered web scraping.*

**Get Your Key**

1. Visit ScrapeGraph
2. Create an account
3. Get an API key
4. Add to `.env`:

```bash
SGAI_API_KEY=your_key_here
```
**Usage with Agno**

```python
import os

from agno.tools.scrapegraph import ScrapeGraphTools

tools = ScrapeGraphTools(api_key=os.getenv("SGAI_API_KEY"))
```
## External Services

### GitHub

*For MCP GitHub integration.*
**Create Token**

1. Go to GitHub Settings > Developer settings > Personal access tokens
2. Click "Generate new token (classic)"
3. Select scopes:
   - `repo` (full repository access)
   - `read:org` (read organization data)
4. Generate and copy the token
5. Add to `.env`:

```bash
GITHUB_PERSONAL_ACCESS_TOKEN=ghp_your_token_here
```
**Usage with MCP**

```python
import os

from agents.mcp import MCPServerStdio  # assuming the OpenAI Agents SDK

async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": os.getenv("GITHUB_PERSONAL_ACCESS_TOKEN")
        },
    }
) as server:
    # Use GitHub MCP tools
    ...
```
### Slack

*For Slack bot integration.*

**Create Bot Token**

1. Go to Slack API
2. Create a new app
3. Add OAuth scopes:
   - `chat:write`
   - `channels:read`
   - `channels:history`
4. Install the app to your workspace
5. Copy the Bot User OAuth Token
6. Add to `.env`:

```bash
SLACK_BOT_TOKEN=xoxb-your_token_here
```
## Specialized Services

### Contextual AI

*Advanced RAG platform.*
**Get Your Key**

1. Visit Contextual AI
2. Sign up for an account
3. Get an API key from the dashboard
4. Add to `.env`:

```bash
CONTEXTUAL_API_KEY=your_key_here
```
### Arize Phoenix

*LLM observability and tracing.*

**Get Your Key**

1. Visit Arize Phoenix
2. Create an account
3. Get an API key
4. Add to `.env`:

```bash
ARIZE_PHOENIX_API_KEY=your_key_here
```
**Setup Tracing**

```python
import os

from phoenix.otel import register

os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={os.getenv('ARIZE_PHOENIX_API_KEY')}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

tracer_provider = register(
    project_name="my-project",
    auto_instrument=True,
)
```
## Best Practices

### 1. Never Hardcode Keys

```python
# ✅ Good: use environment variables
import os

from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

# ❌ Bad: hardcoded credentials
api_key = "sk-proj-abc123"  # NEVER DO THIS!
```
### 2. Validate Keys at Startup

Fail fast with a clear message instead of letting the first API call die with a cryptic authentication error at runtime:

```python
import os

from dotenv import load_dotenv

load_dotenv()

required_keys = [
    "NEBIUS_API_KEY",
    "OPENAI_API_KEY",
    "SERPAPI_API_KEY",
]

missing_keys = [key for key in required_keys if not os.getenv(key)]

if missing_keys:
    raise ValueError(
        f"Missing required API keys: {', '.join(missing_keys)}"
    )
```
### 3. Use .env.example

Commit a sanitized `.env.example` so new contributors know which keys they need; the real `.env` stays out of version control.

```bash
# .env.example - commit this to the repo
NEBIUS_API_KEY=your_nebius_api_key_here
OPENAI_API_KEY=your_openai_key_here
SERPAPI_API_KEY=your_serpapi_key_here
```

```bash
# .env - never commit this (add to .gitignore)
NEBIUS_API_KEY=actual_key_value
OPENAI_API_KEY=actual_key_value
SERPAPI_API_KEY=actual_key_value
```
### 4. Rotate Keys Regularly

```python
# Track key age and rotation
import os
from datetime import datetime, timedelta


class APIKeyManager:
    def __init__(self):
        self.key_created_date = os.getenv("API_KEY_CREATED_DATE")

    def should_rotate(self, days: int = 90) -> bool:
        """Check if the key is older than the specified number of days."""
        if not self.key_created_date:
            return True
        created = datetime.fromisoformat(self.key_created_date)
        age = datetime.now() - created
        return age > timedelta(days=days)


# Add to .env:
# API_KEY_CREATED_DATE=2024-01-15
```
### 5. Separate Development and Production

```bash
# .env.development
NEBIUS_API_KEY=dev_key_here
OPENAI_API_KEY=dev_key_here
```

```bash
# .env.production
NEBIUS_API_KEY=prod_key_here
OPENAI_API_KEY=prod_key_here
```

```python
# Load the appropriate env file
import os

from dotenv import load_dotenv

env = os.getenv("ENVIRONMENT", "development")
load_dotenv(f".env.{env}")
```
## Cost Management

### Monitor Usage

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Track token usage
response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[...],
)

print(f"Tokens used: {response.usage.total_tokens}")
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")

# Calculate cost (gpt-4-turbo pricing)
PROMPT_COST_PER_1K = 0.01  # $0.01 per 1K prompt tokens
COMPLETION_COST_PER_1K = 0.03  # $0.03 per 1K completion tokens

prompt_cost = (response.usage.prompt_tokens / 1000) * PROMPT_COST_PER_1K
completion_cost = (response.usage.completion_tokens / 1000) * COMPLETION_COST_PER_1K
total_cost = prompt_cost + completion_cost

print(f"Estimated cost: ${total_cost:.4f}")
```
### Set Budget Limits

```python
from datetime import datetime


class BudgetLimiter:
    def __init__(self, daily_limit: float = 10.0):
        self.daily_limit = daily_limit
        self.daily_spend = 0.0
        self.last_reset = datetime.now().date()

    def check_budget(self, estimated_cost: float) -> bool:
        """Check if the request is within budget."""
        # Reset the daily counter at the start of a new day
        if datetime.now().date() > self.last_reset:
            self.daily_spend = 0.0
            self.last_reset = datetime.now().date()

        # Check the limit
        if self.daily_spend + estimated_cost > self.daily_limit:
            raise ValueError(
                f"Daily budget exceeded: "
                f"${self.daily_spend:.2f}/${self.daily_limit:.2f}"
            )

        self.daily_spend += estimated_cost
        return True


# Usage
budget = BudgetLimiter(daily_limit=10.0)

if budget.check_budget(estimated_cost=0.05):
    response = client.chat.completions.create(...)
```
## Security Checklist

- [ ] `.env` is listed in `.gitignore`
- [ ] A sanitized `.env.example` is committed instead of real keys
- [ ] No keys are hardcoded in source files
- [ ] Required keys are validated at startup
- [ ] Keys are rotated on a regular schedule
- [ ] Development and production use separate keys
## Next Steps

- **Environment Setup**: set up a development environment with all dependencies
- **Dependency Management**: install and manage Python dependencies
- **Best Practices**: production-ready patterns and error handling
- **MCP Integration**: use API keys with MCP servers