
Overview

Pricing Intelligence is configured through environment variables in Docker Compose. This guide covers all configuration options for each service and how to customize the deployment.

Environment Variables

Core API Keys

Two OpenAI API keys are required:
HARVEY_API_KEY
string
required
OpenAI API key for the H.A.R.V.E.Y. agent service. Used for the ReAct agent reasoning loop. Set in .env:
export HARVEY_API_KEY="sk-..."
AMINT_API_KEY
string
required
OpenAI API key for the A-MINT extraction service. Used for extracting pricing from URLs. Set in .env:
export AMINT_API_KEY="sk-..."
You can use the same API key for both services or separate keys for better cost tracking and rate limit isolation.
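Before bringing the stack up, it can help to confirm both variables are actually set in the current shell. A minimal bash sketch (the `check_keys` helper is illustrative, not part of the project):

```shell
# Report whether each required key is set in the current shell (bash)
check_keys() {
  for var in HARVEY_API_KEY AMINT_API_KEY; do
    if [ -n "${!var}" ]; then
      echo "OK: $var"
    else
      echo "MISSING: $var"
    fi
  done
}
check_keys
```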

Alternative: Multiple Keys

AMINT_API_KEYS
string
Comma-separated list of OpenAI API keys for load balancing or failover.
export AMINT_API_KEYS="sk-key1...,sk-key2...,sk-key3..."
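The comma-separated value can be split with standard tools, which is a quick way to sanity-check how many keys are configured (example values shown):

```shell
# Split the comma-separated key list and count the entries
AMINT_API_KEYS="sk-key1,sk-key2,sk-key3"
key_count=$(echo "$AMINT_API_KEYS" | tr ',' '\n' | wc -l)
echo "configured keys: $key_count"
```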

Service Configuration

Harvey API

The H.A.R.V.E.Y. agent service:
docker-compose.yml
harvey-api:
  environment:
    - LOG_LEVEL=INFO
    - OPENAI_API_KEY=${HARVEY_API_KEY}
    - OPENAI_MODEL=gpt-5-nano
    - AMINT_BASE_URL=http://a-mint-api:8000
    - ANALYSIS_BASE_URL=http://analysis-api:3000
    - CACHE_BACKEND=memory
    - MCP_TRANSPORT=sse
    - MCP_SERVER_URL=http://mcp-server:8085/sse
  ports:
    - "8086:8086"
OPENAI_MODEL
string
default:"gpt-5-nano"
OpenAI model to use for the agent. Options:
  • gpt-4o - Best quality, slower, more expensive
  • gpt-4o-mini - Good balance of speed and quality
  • gpt-3.5-turbo - Fastest, cheapest, lower quality
  • gpt-5-nano - Custom/preview model (if available)
LOG_LEVEL
string
default:"INFO"
Logging verbosity: DEBUG, INFO, WARNING, ERROR
CACHE_BACKEND
string
default:"memory"
Cache backend: memory or redis
MCP_TRANSPORT
string
default:"sse"
MCP transport protocol: sse (Server-Sent Events) or stdio

MCP Server

The Model Context Protocol server:
docker-compose.yml
mcp-server:
  environment:
    - AMINT_BASE_URL=http://a-mint-api:8000
    - ANALYSIS_BASE_URL=http://analysis-api:3000
    - CACHE_BACKEND=memory
    - LOG_LEVEL=INFO
    - HTTP_HOST=0.0.0.0
    - HTTP_PORT=8085
    - UVICORN_HOST=0.0.0.0
    - UVICORN_PORT=8085
    - MCP_TRANSPORT=sse
  ports:
    - "8085:8085"
HTTP_HOST
string
default:"0.0.0.0"
Bind address for the HTTP API
HTTP_PORT
number
default:"8085"
Port for the HTTP/SSE endpoint
UVICORN_HOST
string
default:"0.0.0.0"
Uvicorn server host
UVICORN_PORT
number
default:"8085"
Uvicorn server port

A-MINT API

The pricing extraction service:
docker-compose.yml
a-mint-api:
  environment:
    - PYTHONPATH=/app
    - PORT=8000
    - OPENAI_API_KEY=${AMINT_API_KEY}
    - OPENAI_API_KEYS=${AMINT_API_KEYS}
    - ANALYSIS_API=http://analysis-api:3000/api/v1
    - LOG_LEVEL=INFO
  ports:
    - "8001:8000"
PORT
number
default:"8000"
Internal service port (exposed as 8001 externally)
ANALYSIS_API
string
Base URL for the Analysis API (with version prefix)

Analysis API

The Node.js analysis service:
docker-compose.yml
analysis-api:
  environment:
    - NODE_ENV=production
    - PORT=3000
    - CHOCO_API=http://choco-api:8000
    - LOG_LEVEL=INFO
  ports:
    - "8002:3000"
  volumes:
    - ./analysis_api/logs:/app/logs
    - ./analysis_api/output:/app/output
NODE_ENV
string
default:"production"
Node environment: development or production
CHOCO_API
string
Base URL for the Choco CSP solver service

CSP Service (Choco)

The Java-based constraint solver:
docker-compose.yml
choco-api:
  environment:
    - PORT=8000
    - LOG_LEVEL=INFO
  ports:
    - "8000:8000"
PORT
number
default:"8000"
Service port

Frontend

The React/Vite frontend:
docker-compose.yml
mcp-frontend:
  build:
    args:
      - VITE_API_BASE_URL=http://localhost:8086
      - VITE_SPHERE_BASE_URL=https://sphere.score.us.es
  ports:
    - "80:80"
VITE_API_BASE_URL
string
default:"http://localhost:8086"
Harvey API endpoint (must be accessible from the browser)
VITE_SPHERE_BASE_URL
string
Optional: Sphere integration endpoint for advanced pricing extraction

Changing OpenAI Models

For H.A.R.V.E.Y. Agent

Edit docker-compose.yml:
harvey-api:
  environment:
    - OPENAI_MODEL=gpt-4o  # Change this line
Then restart the service:
docker-compose up -d --build harvey-api
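After the restart, it is worth confirming the container actually picked up the new value. This sketch uses the same `env | grep` pattern as the troubleshooting commands and prints a fallback if the stack is not running:

```shell
# Confirm which model the running container sees; falls back if the service is down
model=$(docker-compose exec -T harvey-api env 2>/dev/null | grep OPENAI_MODEL || echo "harvey-api not running")
echo "$model"
```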

Model Recommendations

GPT-4o

Best for:
  • Complex reasoning
  • Multi-step optimization
  • Accurate feature extraction
Tradeoffs:
  • Slower responses
  • Higher costs

GPT-4o-mini

Best for:
  • Balanced performance
  • Most production use cases
  • Cost-effective operation
Tradeoffs:
  • Slightly lower accuracy

GPT-3.5-turbo

Best for:
  • Development/testing
  • Simple queries
  • Budget-conscious deployments
Tradeoffs:
  • May miss nuances
  • Less reliable reasoning
Changing models affects reasoning quality. Test thoroughly before deploying a different model to production.

Cache Configuration

The MCP server and Harvey API support two cache backends:

Memory Cache (Default)

environment:
  - CACHE_BACKEND=memory
  • Simple in-memory cache
  • No external dependencies
  • Cache cleared on service restart
  • Suitable for single-instance deployments

Redis Cache

environment:
  - CACHE_BACKEND=redis
  - REDIS_URL=redis://redis:6379/0
1. Add Redis to docker-compose.yml:

redis:
  image: redis:7-alpine
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/data

volumes:
  redis-data:
2. Update service dependencies:

mcp-server:
  depends_on:
    - redis
    - a-mint-api
    - analysis-api
3. Set CACHE_BACKEND=redis in both the mcp-server and harvey-api services
4. Restart services:

docker-compose up -d --build
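Once everything is back up, a quick way to confirm Redis is answering (the command falls back gracefully if the stack is not running):

```shell
# PING the Redis container; expect PONG when healthy
pong=$(docker-compose exec -T redis redis-cli ping 2>/dev/null || echo "redis not reachable")
echo "redis: $pong"
```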
Use Redis for:
  • Multi-instance deployments
  • Persistent cache across restarts
  • Shared cache between H.A.R.V.E.Y. and MCP server

Port Configuration

All service ports can be customized:
docker-compose.yml
services:
  choco-api:
    ports:
      - "8000:8000"  # External:Internal
  
  a-mint-api:
    ports:
      - "8001:8000"  # Expose internal 8000 as external 8001
  
  analysis-api:
    ports:
      - "8002:3000"
  
  mcp-server:
    ports:
      - "8085:8085"
  
  harvey-api:
    ports:
      - "8086:8086"
  
  mcp-frontend:
    ports:
      - "80:80"  # Change to "8080:80" for non-root port
When changing ports, update:
  1. Service URLs in environment variables
  2. VITE_API_BASE_URL in frontend build args
  3. Internal service references (e.g., CHOCO_API, ANALYSIS_BASE_URL)
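Before remapping a port, it helps to locate every reference to it first. A simple grep sketch (paths assume the repo layout described in this guide):

```shell
# List every reference to port 8086 so nothing is missed when remapping it
matches=$(grep -rn "8086" docker-compose.yml .env 2>/dev/null || true)
echo "${matches:-no matches found}"
```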

Volume Mounts

Persist logs and output:
volumes:
  - ./analysis_api/src:/app/src       # Source code (for development)
  - ./analysis_api/logs:/app/logs     # Persistent logs
  - ./analysis_api/output:/app/output # Analysis outputs
Mounting source code volumes (/app/src) should only be done in development. Remove these mounts in production.

Health Checks

All services include health checks for monitoring:
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:8086/health"]
  interval: 30s
  timeout: 10s
  retries: 3
View health status:
docker-compose ps
Output:
NAME                                  STATUS
pricing_intelligence-harvey-api-1     Up (healthy)
pricing_intelligence-mcp-server-1     Up (healthy)
pricing_intelligence-a-mint-api-1     Up (healthy)
...
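To query one container directly, `docker inspect` can read the health state by container name (name as shown in the output above; prints a fallback if Docker is unavailable):

```shell
# Read the health status of a single container by name
state=$(docker inspect --format '{{.State.Health.Status}}' pricing_intelligence-harvey-api-1 2>/dev/null || echo "not running")
echo "harvey-api health: $state"
```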

Restart Policies

All services use restart: unless-stopped:
restart: unless-stopped
This ensures services automatically restart after crashes or system reboots, unless explicitly stopped.

Service Dependencies

Services start in the correct order based on dependencies:
harvey-api:
  depends_on:
    - a-mint-api
    - analysis-api
    - mcp-server
depends_on ensures dependent services start first, but doesn’t wait for them to be “ready”. Health checks provide readiness detection.

MCP Environment Variables

The MCP server supports additional configuration:
MCP_SERVER_MODULE
string
default:"pricing_mcp.mcp_server"
Python module to launch (for stdio transport)
MCP_PYTHON_EXECUTABLE
string
Path to Python binary (optional)
MCP_EXTRA_PYTHON_PATHS
string
Additional PYTHONPATH entries (colon-separated)
These are primarily used when running H.A.R.V.E.Y. locally outside of Docker.
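For example, a local stdio launch might look like the following. This is a sketch: it assumes the pricing_mcp package is importable in your environment, and prints a fallback otherwise:

```shell
# Local, non-Docker launch over the stdio transport
export MCP_TRANSPORT=stdio
export MCP_SERVER_MODULE="pricing_mcp.mcp_server"
python -m "$MCP_SERVER_MODULE" 2>/dev/null || echo "pricing_mcp is not importable in this environment"
```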

Complete .env Example

Create a .env file in the project root:
.env
# Required: OpenAI API Keys
HARVEY_API_KEY=sk-proj-...
AMINT_API_KEY=sk-proj-...

# Optional: Multiple A-MINT keys for load balancing
# AMINT_API_KEYS=sk-key1,sk-key2,sk-key3

# Optional: Custom model selection
# HARVEY_MODEL=gpt-4o

# Optional: Redis cache
# CACHE_BACKEND=redis
# REDIS_URL=redis://localhost:6379/0

# Optional: Logging
# LOG_LEVEL=DEBUG

# Optional: Sphere integration
# VITE_SPHERE_BASE_URL=https://sphere.score.us.es
Load with:
source .env
docker-compose up --build

Troubleshooting Configuration

Missing API Keys

Problem: Services fail to start or return authentication errors
Solution:
# Verify keys are set
echo $HARVEY_API_KEY
echo $AMINT_API_KEY

# Check docker-compose can see them
docker-compose config | grep API_KEY

Port Conflicts

Problem: “Port already in use” errors
Solution:
# Find process using the port
lsof -i :8086

# Change port in docker-compose.yml
ports:
  - "8087:8086"  # Use 8087 externally instead

Service Can’t Reach Dependencies

Problem: Harvey API can’t connect to MCP server
Solution:
  • Use Docker service names, not localhost: http://mcp-server:8085
  • Verify services are on the same Docker network
  • Check health checks: docker-compose ps
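The checks above can be combined into a single connectivity probe run from inside the Harvey container. This sketch assumes the image includes curl (as the health checks do) and that the MCP server exposes a /health route — adjust the path if yours differs:

```shell
# Probe the MCP server from inside the harvey-api container via its service name
if docker-compose exec -T harvey-api curl -sf http://mcp-server:8085/health >/dev/null 2>&1; then
  reach="reachable"
else
  reach="not reachable"
fi
echo "mcp-server is $reach from harvey-api"
```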

Cache Not Working

Problem: Repeated URL extractions despite caching
Solution:
# Check cache backend setting
docker-compose exec mcp-server env | grep CACHE

# Verify Redis is running (if using Redis cache)
docker-compose ps redis

# Check cache TTL (default is typically 3600 seconds)

Next Steps

Basic Usage

Start using the H.A.R.V.E.Y. chat interface

Architecture

Learn how the platform components work together
