Envark’s AI Assistant provides intelligent analysis of your environment variables using state-of-the-art language models. It offers security insights, best practice recommendations, and interactive help for managing your application configuration.
Provider: OpenAI
Available Models:
gpt-4o (recommended) - Latest GPT-4 optimized model
gpt-4-turbo - Fast GPT-4 variant
gpt-4 - Standard GPT-4
gpt-3.5-turbo - Fast and cost-effective
Setup:
```shell
# Get API key from https://platform.openai.com/api-keys
export OPENAI_API_KEY=sk-...

# Or configure interactively
envark
/ai-config
# Select OpenAI, enter key, choose model
```
Pricing: Pay per token (most expensive but highest quality)
Provider: Anthropic Claude
Available Models:
claude-sonnet-4-20250514 (recommended) - Latest Claude 4
claude-3-5-sonnet-20241022 - Claude 3.5
claude-3-opus-20240229 - Most capable Claude 3
Setup:
```shell
# Get API key from https://console.anthropic.com/
export ANTHROPIC_API_KEY=sk-ant-...

# Or configure interactively
envark
/ai-config
# Select Anthropic, enter key, choose model
```
Pricing: Pay per token (competitive with OpenAI)
Provider: Google Gemini
Best for: Free tier, fast responses, multimodal capabilities
Available Models:
gemini-2.0-flash (recommended) - Latest fast model
gemini-1.5-pro - High capability
gemini-1.5-flash - Fast and efficient
gemini-pro - Stable version
Setup:
```shell
# Get API key from https://makersuite.google.com/app/apikey
export GEMINI_API_KEY=...
# or
export GOOGLE_API_KEY=...

# Or configure interactively
envark
/ai-config
# Select Google Gemini, enter key, choose model
```
Pricing: Free tier available, then pay per token
Provider: Ollama (Local)
Best for: Privacy, no API costs, offline usage
Available Models:
llama3.2 (recommended) - Latest Llama
llama3.1 - Previous Llama version
mistral - Fast and efficient
codellama - Optimized for code
phi3 - Lightweight model
Setup:
```shell
# Install Ollama (https://ollama.ai)
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# Start Ollama service
ollama serve

# Configure in Envark
envark
/ai-config
# Select Ollama, press Enter (no key needed), choose model
```
Pricing: Free (runs locally)
Requires 8GB+ RAM for most models. Responses may be slower than cloud APIs.
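Before pointing Envark at Ollama, it helps to confirm the model you selected has actually been pulled. The local Ollama service exposes `GET http://localhost:11434/api/tags`, which returns the pulled models as `{ "models": [{ "name": "llama3.2:latest" }, ...] }`. The helper below is an illustrative sketch of parsing that response; it is not part of Envark itself.

```typescript
// Shape of the (real) Ollama GET /api/tags response, trimmed to what we need.
interface OllamaTags {
  models: { name: string }[];
}

// Hypothetical helper: check whether a model is available locally.
// Ollama names models like "llama3.2:latest", so match the exact name
// or the base name before the ":" tag.
function hasModel(tags: OllamaTags, model: string): boolean {
  return tags.models.some(
    (m) => m.name === model || m.name.split(':')[0] === model
  );
}
```

You could call this with the JSON from `fetch('http://localhost:11434/api/tags')` and prompt the user to run `ollama pull <model>` when it returns false.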
Set up providers using environment variables (no interactive config needed):
```shell
# OpenAI
export OPENAI_API_KEY=sk-...

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...

# Google Gemini
export GEMINI_API_KEY=...
# or
export GOOGLE_API_KEY=...

# Ollama (no key needed, just ensure the service is running)
```
Envark automatically detects and uses these credentials with priority: OpenAI > Anthropic > Gemini > Ollama.
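The priority order above can be sketched as a simple chain of checks. This is an illustrative TypeScript sketch of the described behavior, not Envark's actual implementation; the function name is ours.

```typescript
type Provider = 'openai' | 'anthropic' | 'gemini' | 'ollama';

// Pick a provider by the documented priority:
// OpenAI > Anthropic > Gemini > Ollama (fallback, no key required).
function detectProvider(env: Record<string, string | undefined>): Provider {
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.GEMINI_API_KEY || env.GOOGLE_API_KEY) return 'gemini';
  return 'ollama'; // assumes the local Ollama service is running
}
```

Note that with this ordering, setting `OPENAI_API_KEY` wins even when other keys are also exported; unset it if you want Envark to fall through to another provider.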
Command: /chat or ch

Start a conversational session with the AI assistant:
```
envark
/chat

You: What's the difference between .env and .env.local?
AI: .env is typically committed to version control and contains default/example values, while .env.local is git-ignored and contains your actual local development secrets. The .env.local file overrides values from .env when both are present.

You: Should I commit API keys?
AI: No, never commit API keys or secrets to version control...

You: /exit
```
Command: /ask <question> or a <question>

Ask a one-off question without entering chat mode:
```
/ask How do I secure database credentials?
/ask What's a good naming convention for environment variables?
a Should I use uppercase or lowercase for env vars?
```
Use this for quick answers without starting a full conversation.
Command: /analyze or an

Get AI-powered analysis of your entire environment configuration:
/analyze
What it does:
Scans your project for all environment variables
Sends variable names and risk levels to AI (not values)
Returns comprehensive security and best practice analysis
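The second step above is the important privacy property: variable names and risk levels go to the AI, values never leave your machine. A minimal sketch of what building such a payload could look like (type and function names are ours, not Envark's actual API):

```typescript
// A scanned variable as the local scanner might represent it.
interface ScannedVar {
  name: string;
  value: string; // stays local, never serialized into the AI request
  risk: 'low' | 'medium' | 'high' | 'critical';
}

// Build the payload sent to the AI provider: names and risk levels only.
function buildAnalysisPayload(
  vars: ScannedVar[]
): { name: string; risk: string }[] {
  // Destructure only the safe fields; `value` is deliberately dropped.
  return vars.map(({ name, risk }) => ({ name, risk }));
}
```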
Example output:
```
Summary:
Your project uses 42 environment variables across Node.js and TypeScript.
Found 2 critical issues and 5 high-priority recommendations.

Recommendations:
• Move DATABASE_PASSWORD from .env to .env.local to prevent accidental commits
• Add validation for PORT to ensure it's a valid number
• Consider using a secret management solution for API keys
• Standardize naming: use DATABASE_URL instead of mixing DB_URL and DATABASE_URI
• Add .env.example with placeholder values for all required variables
```
Implementation:
```typescript
// From src/ai/agent.ts:213-223
async analyzeEnvironment(context: EnvAnalysisContext): Promise<AIAnalysisResult> {
  if (!this.provider) {
    return {
      summary: 'AI not configured',
      recommendations: ['Configure an AI provider using /config command'],
      securityIssues: []
    };
  }
  return this.provider.analyzeEnvironment(context);
}
```
Command: /suggest <VARIABLE_NAME> or su <VARIABLE_NAME>

Get improvement suggestions for a specific variable:
/suggest DATABASE_URL
Example output:
```
Suggestions for DATABASE_URL:

1. Add connection pooling parameters (e.g., ?pool_size=10)
2. Use SSL mode in production: ?sslmode=require
3. Consider splitting into separate host, port, database for flexibility
4. Document expected format in .env.example
5. Validate URL format at startup to fail fast on misconfiguration
```
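Suggestion 5 above, validating at startup, can be sketched with Node's built-in WHATWG `URL` parser. This is an illustrative example assuming a Postgres-style connection string; it is not Envark-generated code.

```typescript
// Fail fast at startup if DATABASE_URL is missing or malformed.
function validateDatabaseUrl(raw: string | undefined): URL {
  if (!raw) {
    throw new Error('DATABASE_URL is not set');
  }
  let url: URL;
  try {
    url = new URL(raw); // throws TypeError on malformed input
  } catch {
    throw new Error(`DATABASE_URL is not a valid URL: ${raw}`);
  }
  // Assumption for this sketch: a Postgres database.
  if (!/^postgres(ql)?:$/.test(url.protocol)) {
    throw new Error(`Unexpected protocol ${url.protocol} (expected postgres://)`);
  }
  return url;
}
```

Calling this once at boot turns a misconfigured deployment into an immediate, descriptive crash instead of a confusing connection error later.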
Command: /explain <VARIABLE_NAME> or ex <VARIABLE_NAME>

Get detailed explanation about an environment variable:
/explain NODE_ENV
Example output:
```
NODE_ENV is a convention in Node.js applications to specify the runtime
environment.

1. Purpose:
   - Controls application behavior (logging, caching, debugging)
   - Determines which optimizations to enable
   - Often used for conditional logic

2. Common Values:
   - "development" - Local development with debugging
   - "production" - Optimized for production deployment
   - "test" - Used during automated testing

3. Security Considerations:
   - Not inherently secret, safe to expose
   - Production mode should disable debug features
   - Never use "development" in production

4. Framework Support:
   - Express.js uses it for error handling
   - React build tools use it for optimization
   - Many libraries check this for behavior changes
```
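The "conditional logic" point in the explanation above is typically a small switch on `NODE_ENV`. A minimal illustrative example (the helper name is ours):

```typescript
// Map NODE_ENV to a log level, treating anything unset as development.
function logLevelFor(nodeEnv: string | undefined): 'debug' | 'warn' | 'silent' {
  switch (nodeEnv) {
    case 'production':
      return 'warn'; // keep production logs quiet, debug features off
    case 'test':
      return 'silent'; // don't pollute automated test output
    default:
      return 'debug'; // "development" and anything unset
  }
}
```

In an application you would call it as `logLevelFor(process.env.NODE_ENV)`.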
The AI assistant operates with a specialized system prompt that focuses on environment variable security:
```typescript
// From src/ai/agent.ts:316-337
private getEnhancedSystemPrompt(): string {
  return `You are Aegis AI, an expert environment variable security analyst integrated into the Aegis CLI tool.

Your capabilities:
- Analyze environment variables for security risks
- Provide recommendations for better configuration management
- Generate secure .env templates and validation code
- Explain best practices for different frameworks and languages
- Help debug configuration issues

Current context:
- Running in Aegis CLI terminal interface
- User can scan their project with /scan, /risk, /missing commands
- Use markdown formatting for code blocks

Guidelines:
- Be concise but thorough
- Always consider security implications
- Provide actionable recommendations
- Use code examples when helpful
- Consider different deployment environments`;
}
```
This ensures the AI provides relevant, security-focused advice.