Overview
HAI Build Code Generator supports multiple telemetry backends to help you monitor AI-powered development workflows, track performance, and analyze usage patterns. All telemetry configuration can be overridden per workspace using the .hai.config file.
Telemetry Providers
Langfuse
LLM observability and tracing
PostHog
Product analytics and error tracking
OpenTelemetry
Flexible telemetry with OTLP export
Langfuse Configuration
Langfuse provides LLM-specific observability, including prompt tracking, token usage, and cost analysis.
Setup
Get API Keys
Obtain your Langfuse API keys from cloud.langfuse.com
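Once you have the keys, export them in your environment. A minimal sketch — the variable names come from this page's workspace-override section, and the endpoint shown is Langfuse Cloud's default (adjust it for self-hosted deployments):

```shell
# Langfuse credentials from your project settings at cloud.langfuse.com.
# LANGFUSE_BASE_URL is optional; it defaults to Langfuse Cloud.
export LANGFUSE_BASE_URL="https://cloud.langfuse.com"
export LANGFUSE_SECRET_KEY="sk-lf-..."   # keep out of version control
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
```

The `sk-lf-...` / `pk-lf-...` values are placeholders for your real key pair.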
Workspace Override
Override Langfuse configuration per workspace using .hai.config:
- Langfuse API endpoint (optional)
- Langfuse secret key (corresponds to LANGFUSE_SECRET_KEY)
- Langfuse public key (corresponds to LANGFUSE_PUBLIC_KEY)
What Gets Tracked
Langfuse captures:
- LLM requests and responses
- Token usage and costs
- Prompt versions and variations
- Model performance metrics
- User feedback and ratings
PostHog Configuration
PostHog provides product analytics and error tracking.
Setup
Get API Keys
Create a PostHog account at app.posthog.com and obtain:
- Telemetry API key (for events)
- Error tracking API key (for errors)
- PostHog API endpoint
- PostHog API key for telemetry and error tracking
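A sketch of the corresponding .hai.config entries — the dot-notation key names here are illustrative assumptions, not confirmed by this page, so check the exact keys your HAI Build version documents:

```ini
# Illustrative PostHog settings (key names are assumptions).
telemetry.posthog.apiEndpoint=https://app.posthog.com
telemetry.posthog.apiKey=phc_...
```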
What Gets Tracked
PostHog captures:
- Feature usage events
- User interactions
- Error occurrences and stack traces
- Session recordings (if enabled)
- Performance metrics
OpenTelemetry Configuration
OpenTelemetry provides flexible, vendor-neutral telemetry collection with support for logs, metrics, and traces.
Overview
OpenTelemetry in HAI Build focuses on:
- Logs (Events): Primary signal for telemetry (recommended)
- Metrics: Optional performance counters
- Exporters: Console (debugging) or OTLP (production)
Basic Setup
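A minimal sketch, assuming OTEL_TELEMETRY_ENABLED is HAI Build's master switch (it appears in the Troubleshooting section) and that the remaining variables follow the standard OpenTelemetry SDK environment-variable names, which match the parameter descriptions on this page — verify against your version:

```shell
# Enable telemetry and ship logs to an OTLP collector.
export OTEL_TELEMETRY_ENABLED=1
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT="collector.example.com:4317"
```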
Exporter Options
- Console Exporter
- OTLP gRPC
- OTLP HTTP
Configuration Parameters
- Enable OpenTelemetry telemetry collection
- Logs exporter: console or otlp
- Metrics exporter: console, otlp, or prometheus
- OTLP protocol: grpc, http/json, or http/protobuf
- OTLP collector endpoint (without /v1/logs or /v1/metrics path)
- Custom headers for OTLP requests (e.g., authorization=Bearer token)
- Use insecure gRPC connections (local testing only)
- Metric export interval in milliseconds
- Enable detailed export diagnostics for debugging
Batch Configuration
Optimize log batching for performance:
- Maximum number of logs per batch
- Maximum wait time in milliseconds before exporting a batch
- Maximum size of the log queue
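The three knobs above map onto the OpenTelemetry SDK's standard batch-log-record-processor variables; whether HAI Build reads these exact names is an assumption, so confirm before relying on them:

```shell
# Standard OTel SDK log-batching variables (OTEL_BLRP_* per the spec).
export OTEL_BLRP_MAX_EXPORT_BATCH_SIZE=512   # logs per batch
export OTEL_BLRP_SCHEDULE_DELAY=5000         # max wait (ms) before export
export OTEL_BLRP_MAX_QUEUE_SIZE=2048         # log queue capacity
```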
Advanced: Separate Endpoints
Configure different endpoints for metrics and logs:
Configuration Examples
Development: Console Debugging
Perfect for local development and troubleshooting:
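A sketch of a console-only setup. The OTEL_* names are assumed to follow the standard OTel SDK spec, and TEL_DEBUG_DIAGNOSTICS is the diagnostics flag named in Troubleshooting:

```shell
# Console exporters print telemetry locally — handy for verifying
# instrumentation before pointing at a real collector.
export OTEL_TELEMETRY_ENABLED=1
export OTEL_LOGS_EXPORTER=console
export OTEL_METRICS_EXPORTER=console
export TEL_DEBUG_DIAGNOSTICS=true   # detailed export diagnostics
```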
Production: OTLP with gRPC
High-performance production setup:
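A sketch assuming the standard OTel SDK variable names; the endpoint and token are placeholders. Note the gRPC endpoint is hostname:port with no scheme, as the Troubleshooting section advises:

```shell
# OTLP over gRPC to a production collector.
export OTEL_TELEMETRY_ENABLED=1
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT="otel-collector.internal:4317"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>"
```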
Production: OTLP with HTTP
RESTful production setup with JSON:
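A sketch assuming the standard OTel SDK variable names; HTTP endpoints need the full URL including the scheme:

```shell
# OTLP over HTTP with JSON encoding.
export OTEL_TELEMETRY_ENABLED=1
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/json
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel-collector.example.com:4318"
```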
Local Testing: Insecure gRPC
For local OTLP collector testing:
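A sketch assuming the standard OTel SDK variable names, with a plaintext gRPC connection to a collector on localhost — use this only for local testing:

```shell
# Insecure (plaintext) gRPC to a local OTLP collector.
export OTEL_TELEMETRY_ENABLED=1
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT="localhost:4317"
export OTEL_EXPORTER_OTLP_INSECURE=true
```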
Workspace-Level Configuration
The .hai.config file enables workspace-specific telemetry settings that override global environment variables.
Configuration File Location
Place .hai.config at the root of your workspace:
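For example:

```
my-workspace/
├── .hai.config        # workspace telemetry overrides
├── src/
└── package.json
```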
Supported Parameters
Dynamic Configuration via CI/CD
You can generate .hai.config dynamically in your CI/CD pipeline:
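A minimal sketch of a pipeline step that writes the file from CI secrets. The telemetry.* key names are illustrative assumptions — substitute the keys your HAI Build version documents:

```shell
#!/usr/bin/env sh
# Generate .hai.config at build time from CI-injected secrets,
# so credentials never land in version control.
cat > .hai.config <<EOF
telemetry.langfuse.secretKey=${LANGFUSE_SECRET_KEY}
telemetry.langfuse.publicKey=${LANGFUSE_PUBLIC_KEY}
EOF
```

Pair this with your CI provider's secret store so the variables are only present during the build.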
COR-Matrix Integration
HAI Build supports COR-Matrix for tracking AI code retention patterns and analyzing code origin.
- COR-Matrix API endpoint
- Authentication token for COR-Matrix
- Workspace identifier in COR-Matrix
Configuration Example
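A sketch of the corresponding .hai.config entries; the key names and endpoint are illustrative assumptions, not confirmed by this page:

```ini
# Illustrative COR-Matrix settings (key names are assumptions).
corMatrix.baseUrl=https://cor-matrix.example.com
corMatrix.token=<auth-token>
corMatrix.workspaceId=my-team-workspace
```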
Best Practices
Use Workspace Configs
Keep telemetry settings per workspace in .hai.config for team consistency
Secure Credentials
Never commit API keys to git. Use CI/CD injection or environment variables
Monitor Costs
Track Langfuse usage to optimize LLM costs and token consumption
Debug Locally
Use console exporters and diagnostics during development
Troubleshooting
Telemetry Not Working
Check:
- Verify OTEL_TELEMETRY_ENABLED=1 is set
- Check API keys are correctly configured
- Enable TEL_DEBUG_DIAGNOSTICS=true for detailed logs
- Restart VS Code after configuration changes
OTLP Connection Failures
Solutions:
- For gRPC: use hostname:port (no http://)
- For HTTP: include the full URL (http:// or https://)
- Verify firewall/network allows connections
- Check OTLP collector is running and accessible
Langfuse Not Receiving Data
Check:
- Verify API keys are correct (secret + public)
- Ensure LANGFUSE_BASE_URL is correct
- Check the Langfuse dashboard for project setup
- Review HAI Build logs for connection errors
Workspace Config Not Loading
Solutions:
- Ensure .hai.config is at the workspace root
- Check the file format (key-value pairs, dot notation)
- Verify there are no syntax errors (proper = usage)
- Restart VS Code to reload configuration
Next Steps
Settings
Configure extension settings
LLM Providers
Set up AI model providers