
Overview

HAI Build Code Generator supports multiple telemetry backends to help you monitor AI-powered development workflows, track performance, and analyze usage patterns.
All telemetry configuration can be overridden per workspace using the .hai.config file.

Telemetry Providers

  • Langfuse: LLM observability and tracing
  • PostHog: Product analytics and error tracking
  • OpenTelemetry: Flexible telemetry with OTLP export

Langfuse Configuration

Langfuse provides LLM-specific observability including prompt tracking, token usage, and cost analysis.

Setup

1. Get API Keys

Obtain your Langfuse API keys from cloud.langfuse.com.

2. Configure Environment

Add to your .env file or .hai.config:

LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com

3. Verify Configuration

Restart VS Code and check the HAI Build logs for Langfuse connection status.
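Before restarting, it can help to confirm the required variables are actually set. This is an illustrative sketch, not part of HAI Build; the `check_langfuse_env` helper is hypothetical:

```python
import os

# Hypothetical pre-flight check (not a HAI Build API): confirm the Langfuse
# variables from the steps above are present before restarting VS Code.
# LANGFUSE_BASE_URL is optional and defaults to https://cloud.langfuse.com.
REQUIRED = ["LANGFUSE_SECRET_KEY", "LANGFUSE_PUBLIC_KEY"]

def check_langfuse_env(env=os.environ):
    """Return the required variables that are missing (empty list means OK)."""
    return [name for name in REQUIRED if not env.get(name)]

missing = check_langfuse_env({"LANGFUSE_SECRET_KEY": "sk-lf-test"})
print(missing)  # ['LANGFUSE_PUBLIC_KEY']
```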

Workspace Override

Override Langfuse configuration per workspace using .hai.config:
# Langfuse Configuration
langfuse.apiUrl=https://cloud.langfuse.com
langfuse.apiKey=sk-lf-...
langfuse.publicKey=pk-lf-...

  • langfuse.apiUrl (string, default: https://cloud.langfuse.com): Langfuse API endpoint (optional)
  • langfuse.apiKey (string): Langfuse secret key (corresponds to LANGFUSE_SECRET_KEY)
  • langfuse.publicKey (string): Langfuse public key (corresponds to LANGFUSE_PUBLIC_KEY)

What Gets Tracked

Langfuse captures:
  • LLM requests and responses
  • Token usage and costs
  • Prompt versions and variations
  • Model performance metrics
  • User feedback and ratings

PostHog Configuration

PostHog provides product analytics and error tracking.

Setup

1. Get API Keys

Create a PostHog account at app.posthog.com and obtain:
  • Telemetry API key (for events)
  • Error tracking API key (for errors)

2. Configure Environment

Add to your .env file:

TELEMETRY_SERVICE_API_KEY=phc_...
ERROR_SERVICE_API_KEY=phc_...

3. Workspace Override (Optional)

Override per workspace in .hai.config:

posthog.url=https://app.posthog.com
posthog.apiKey=phc_...

  • posthog.url (string, default: https://app.posthog.com): PostHog API endpoint
  • posthog.apiKey (string): PostHog API key for telemetry and error tracking

What Gets Tracked

PostHog captures:
  • Feature usage events
  • User interactions
  • Error occurrences and stack traces
  • Session recordings (if enabled)
  • Performance metrics
Ensure you have appropriate user consent before enabling telemetry in production environments.
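One common way to honor that consent requirement is to gate client construction on an explicit opt-in. This is an illustrative sketch, not HAI Build's implementation; both classes below are stand-ins:

```python
# Illustrative consent gate (not part of HAI Build): only construct a real
# telemetry client once the user has explicitly opted in.
class NoopTelemetry:
    """Drops every event; used when consent is absent."""
    def capture(self, event, properties=None):
        pass

class PostHogTelemetry:
    """Stand-in for a real PostHog client; records events in memory here."""
    def __init__(self, api_key):
        self.api_key = api_key
        self.events = []
    def capture(self, event, properties=None):
        self.events.append((event, properties or {}))

def make_telemetry(consent_given, api_key):
    return PostHogTelemetry(api_key) if consent_given else NoopTelemetry()

client = make_telemetry(consent_given=False, api_key="phc_...")
client.capture("feature_used")  # silently dropped without consent
```

Callers always get an object with the same `capture` interface, so feature code never needs to branch on consent state.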

OpenTelemetry Configuration

OpenTelemetry provides flexible, vendor-neutral telemetry collection with support for logs, metrics, and traces.

Overview

OpenTelemetry in HAI Build focuses on:
  • Logs (Events): Primary signal for telemetry (recommended)
  • Metrics: Optional performance counters
  • Exporters: Console (debugging) or OTLP (production)

Basic Setup

1. Enable OpenTelemetry

Set the enable flag:

OTEL_TELEMETRY_ENABLED=1

2. Configure Exporters

Choose your export targets:

OTEL_LOGS_EXPORTER=console      # or "otlp"
OTEL_METRICS_EXPORTER=otlp      # or "console"

3. Set Protocol and Endpoint

For OTLP export:

OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317

Exporter Options

Console Exporter

For local debugging, output telemetry to console:
OTEL_TELEMETRY_ENABLED=1
OTEL_LOGS_EXPORTER=console
TEL_DEBUG_DIAGNOSTICS=true
Console exporter is perfect for development and troubleshooting telemetry pipelines.

Configuration Parameters

  • OTEL_TELEMETRY_ENABLED (boolean, default: false): Enable OpenTelemetry telemetry collection
  • OTEL_LOGS_EXPORTER (string, default: console): Logs exporter: console or otlp
  • OTEL_METRICS_EXPORTER (string, default: otlp): Metrics exporter: console, otlp, or prometheus
  • OTEL_EXPORTER_OTLP_PROTOCOL (string, default: grpc): OTLP protocol: grpc, http/json, or http/protobuf
  • OTEL_EXPORTER_OTLP_ENDPOINT (string): OTLP collector endpoint (without /v1/logs or /v1/metrics path)
  • OTEL_EXPORTER_OTLP_HEADERS (string): Custom headers for OTLP requests (e.g., authorization=Bearer token)
  • OTEL_EXPORTER_OTLP_INSECURE (boolean, default: false): Use insecure gRPC connections (local testing only)
  • OTEL_METRIC_EXPORT_INTERVAL (number, default: 60000): Metric export interval in milliseconds
  • TEL_DEBUG_DIAGNOSTICS (boolean, default: false): Enable detailed export diagnostics for debugging
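OTEL_EXPORTER_OTLP_HEADERS follows the OpenTelemetry convention of comma-separated key=value pairs. A minimal sketch of splitting such a value into a header map (the `parse_otlp_headers` helper is illustrative, not a HAI Build function):

```python
def parse_otlp_headers(raw):
    """Parse an OTEL_EXPORTER_OTLP_HEADERS value.

    The OpenTelemetry exporter spec defines the format as comma-separated
    key=value pairs, e.g. "authorization=Bearer token,x-tenant=abc".
    """
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue  # tolerate trailing commas
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers

print(parse_otlp_headers("authorization=Bearer token"))
# {'authorization': 'Bearer token'}
```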

Batch Configuration

Optimize log batching for performance:
OTEL_LOG_BATCH_SIZE=512          # Max logs per batch (default: 512)
OTEL_LOG_BATCH_TIMEOUT=5000      # Max wait time in ms (default: 5000)
OTEL_LOG_MAX_QUEUE_SIZE=2048     # Max queue size (default: 2048)

  • OTEL_LOG_BATCH_SIZE (number, default: 512): Maximum number of logs per batch
  • OTEL_LOG_BATCH_TIMEOUT (number, default: 5000): Maximum wait time in milliseconds before exporting a batch
  • OTEL_LOG_MAX_QUEUE_SIZE (number, default: 2048): Maximum size of the log queue
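When tuning these values together, a couple of invariants are worth checking: the batch size should not exceed the queue size, and the timeout must be positive. A small illustrative validator (not a HAI Build API):

```python
# Illustrative sanity check for the batch settings above; the function name
# and rules are assumptions, not HAI Build behavior.
def validate_batch_config(batch_size, batch_timeout_ms, max_queue_size):
    problems = []
    if batch_size > max_queue_size:
        problems.append("OTEL_LOG_BATCH_SIZE exceeds OTEL_LOG_MAX_QUEUE_SIZE")
    if batch_timeout_ms <= 0:
        problems.append("OTEL_LOG_BATCH_TIMEOUT must be positive")
    return problems

print(validate_batch_config(512, 5000, 2048))  # [] -> defaults are consistent
```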

Advanced: Separate Endpoints

Configure different endpoints for metrics and logs:
# Metrics to one collector
OTEL_EXPORTER_OTLP_METRICS_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT=http://metrics.example.com:4318

# Logs to another collector
OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=logs.example.com:4317

Configuration Examples

Console exporter, perfect for local development and troubleshooting:
OTEL_TELEMETRY_ENABLED=1
OTEL_LOGS_EXPORTER=console
TEL_DEBUG_DIAGNOSTICS=true
High-performance production setup:
OTEL_TELEMETRY_ENABLED=1
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=otel-collector.example.com:4317
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer prod-token
RESTful production setup with JSON:
OTEL_TELEMETRY_ENABLED=1
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.example.com
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer prod-token
For local OTLP collector testing:
OTEL_TELEMETRY_ENABLED=1
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_ENDPOINT=localhost:4317
OTEL_EXPORTER_OTLP_INSECURE=true
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer dev-token

Workspace-Level Configuration

The .hai.config file enables workspace-specific telemetry settings that override global environment variables.

Configuration File Location

Place .hai.config at the root of your workspace:
my-project/
├── .hai.config
├── .git/
└── src/

Supported Parameters

# Project name
name=my-project

# Langfuse Configuration
langfuse.apiUrl=https://cloud.langfuse.com
langfuse.apiKey=sk-lf-workspace-key
langfuse.publicKey=pk-lf-workspace-key

# PostHog Configuration
posthog.url=https://app.posthog.com
posthog.apiKey=phc_workspace_key

# COR-Matrix Configuration
cormatrix.baseURL=https://cormatrix.example.com
cormatrix.token=workspace-token
cormatrix.workspaceId=workspace-123
The .hai.config file is not automatically git-ignored. Add it to .gitignore if it contains sensitive credentials, or use CI/CD to inject values dynamically.
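The format above suggests a simple parser: one key=value pair per line, with `#` starting a comment. A sketch of how such a file could be read, with the semantics (comment handling, whitespace trimming) inferred from the examples rather than taken from HAI Build's source:

```python
def parse_hai_config(text):
    """Parse .hai.config text: one key=value per line, '#' starts a comment.

    Dot-notation keys (langfuse.apiKey, posthog.url, ...) are kept flat,
    matching the examples above. Semantics are inferred, not authoritative.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, sep, value = line.partition("=")
        if sep:
            config[key.strip()] = value.strip()
    return config

sample = """
# Langfuse Configuration
name=my-project
langfuse.apiUrl=https://cloud.langfuse.com
"""
print(parse_hai_config(sample)["name"])  # my-project
```

Because workspace values override environment variables, a lookup would typically try this map first and fall back to `os.environ`.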

Dynamic Configuration via CI/CD

You can generate .hai.config dynamically in your CI/CD pipeline:
# Example: GitHub Actions
cat > .hai.config <<EOF
langfuse.apiKey=${{ secrets.LANGFUSE_API_KEY }}
langfuse.publicKey=${{ secrets.LANGFUSE_PUBLIC_KEY }}
posthog.apiKey=${{ secrets.POSTHOG_API_KEY }}
EOF

COR-Matrix Integration

HAI Build supports COR-Matrix for tracking AI code retention patterns and analyzing code origin.

  • cormatrix.baseURL (string): COR-Matrix API endpoint
  • cormatrix.token (string): Authentication token for COR-Matrix
  • cormatrix.workspaceId (string): Workspace identifier in COR-Matrix

Configuration Example

cormatrix.baseURL=https://cormatrix.example.com
cormatrix.token=cm-token-...
cormatrix.workspaceId=ws-123

Best Practices

  • Use Workspace Configs: Keep telemetry settings per workspace in .hai.config for team consistency.
  • Secure Credentials: Never commit API keys to git. Use CI/CD injection or environment variables.
  • Monitor Costs: Track Langfuse usage to optimize LLM costs and token consumption.
  • Debug Locally: Use console exporters and diagnostics during development.

Troubleshooting

Telemetry data not appearing? Check:
  • Verify OTEL_TELEMETRY_ENABLED=1 is set
  • Check that API keys are correctly configured
  • Enable TEL_DEBUG_DIAGNOSTICS=true for detailed logs
  • Restart VS Code after configuration changes
OTLP connection errors? Solutions:
  • For gRPC: use hostname:port (no http://)
  • For HTTP: include the full URL (http:// or https://)
  • Verify the firewall/network allows connections
  • Check that the OTLP collector is running and accessible
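Those endpoint-format rules are easy to check mechanically. A small illustrative validator that mirrors the troubleshooting advice above (the function is hypothetical, not a HAI Build or OpenTelemetry API):

```python
def check_otlp_endpoint(protocol, endpoint):
    """Flag the endpoint-format mistakes described above.

    gRPC expects host:port with no scheme; the HTTP protocols expect a full
    URL. Returns an error message, or None when the format looks right.
    """
    if protocol == "grpc":
        if endpoint.startswith(("http://", "https://")):
            return "gRPC endpoints should be host:port without a scheme"
    elif protocol in ("http/json", "http/protobuf"):
        if not endpoint.startswith(("http://", "https://")):
            return "HTTP endpoints need a full URL (http:// or https://)"
    return None

print(check_otlp_endpoint("grpc", "http://localhost:4317"))  # flags the scheme
```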
Langfuse traces not showing up? Check:
  • Verify API keys are correct (secret + public)
  • Ensure LANGFUSE_BASE_URL is correct
  • Check the Langfuse dashboard for project setup
  • Review HAI Build logs for connection errors
Workspace config not applied? Solutions:
  • Ensure .hai.config is at the workspace root
  • Check the file format (key=value pairs, dot notation)
  • Verify there are no syntax errors (proper = usage)
  • Restart VS Code to reload the configuration

Next Steps

  • Settings: Configure extension settings
  • LLM Providers: Set up AI model providers
