
Overview

Pricing Intelligence can be deployed in multiple ways depending on your needs:

Docker Compose

Recommended: Production-ready deployment with all services

Local Development

Individual service development with hot reloading

Kubernetes

Scalable cloud deployment (advanced)

System Requirements

Hardware

  • CPU: 4+ cores recommended (CSP service is compute-intensive)
  • RAM: 8GB minimum, 16GB recommended
  • Storage: 5GB for Docker images and dependencies

Software

  • Docker: 24.0+ with Compose plugin
  • Node.js: 20+ (for local Analysis API development)
  • Python: 3.11+ (for local Harvey/MCP/A-MINT development)
  • Java: 17+ (for local CSP service development)
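A quick preflight check can confirm the toolchain before you start. This is a sketch, not part of the repository; it assumes the standard CLI names (`docker`, `node`, `python3`, `java`) are on your PATH:

```shell
# Preflight: report whether each required CLI is on PATH, with its version.
# Adjust the tool names if yours differ (e.g. python vs python3).
check_tool() {
  if command -v "$1" > /dev/null 2>&1; then
    printf '%s: ' "$1"
    "$1" --version 2>&1 | head -n 1
  else
    echo "$1: not found"
    return 1
  fi
}

for cmd in docker node python3 java; do
  check_tool "$cmd" || true
done
```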

Production Deployment (Docker Compose)

Step 1: Clone the Repository

git clone https://github.com/isa-group/Pricing-Intelligence.git
cd Pricing-Intelligence

Step 2: Configure Environment Variables

The system requires OpenAI API keys for two services:
export HARVEY_API_KEY="sk-proj-your-harvey-key"
export AMINT_API_KEY="sk-proj-your-amint-key"
For persistence across sessions, add to ~/.bashrc or ~/.zshrc:
~/.bashrc
export HARVEY_API_KEY="sk-proj-your-harvey-key"
export AMINT_API_KEY="sk-proj-your-amint-key"
Never commit API keys to version control. Add .env to your .gitignore file.
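Before launching, it helps to fail fast on missing keys. A minimal sketch (the `check_keys` helper is hypothetical, not part of the repository):

```shell
# check_keys VAR...: print each listed environment variable that is unset
# or empty; returns non-zero if any are missing.
check_keys() {
  missing=0
  for name in "$@"; do
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "missing: $name"
      missing=1
    fi
  done
  return $missing
}

check_keys HARVEY_API_KEY AMINT_API_KEY || echo "Set the keys above before running docker-compose up"
```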

Step 3: Configure Services

Edit docker-compose.yml to customize service configurations:
Modify the Harvey API service:
docker-compose.yml
harvey-api:
  environment:
    - OPENAI_MODEL=gpt-4o  # Options: gpt-4o, gpt-4-turbo, gpt-3.5-turbo
Change the host port (left side) if ports are already in use:
docker-compose.yml
services:
  harvey-api:
    ports:
      - "9086:8086"  # Access Harvey at localhost:9086
  mcp-frontend:
    ports:
      - "3000:80"    # Access frontend at localhost:3000
Adjust verbosity for debugging:
docker-compose.yml
harvey-api:
  environment:
    - LOG_LEVEL=DEBUG  # Options: DEBUG, INFO, WARNING, ERROR

analysis-api:
  environment:
    - LOG_LEVEL=debug  # Lowercase for Node.js services
Mount volumes to preserve data between restarts:
docker-compose.yml
analysis-api:
  volumes:
    - ./data/analysis:/app/output  # Persist analysis results

harvey-api:
  volumes:
    - ./data/static:/app/static    # Persist uploaded YAML files

Step 4: Launch the Platform

Build and start all services in the background:
docker-compose up -d --build
Or run in the foreground to see live logs:
docker-compose up --build
Stop the foreground run with Ctrl+C.

Step 5: Verify Deployment

Check that all services are healthy:
docker-compose ps
Expected output:
NAME                    STATUS
choco-api               Up (healthy)
analysis-api            Up (healthy)
a-mint-api              Up (healthy)
mcp-server              Up (healthy)
harvey-api              Up (healthy)
mcp-frontend            Up
Test individual endpoints:
curl http://localhost:8086/health
# {"status":"UP"}
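To wait for the stack instead of probing by hand, a small polling helper works well. This is a sketch (`wait_healthy` is hypothetical; ports assume the default compose mappings):

```shell
# wait_healthy URL [TRIES]: poll a health endpoint every 2 seconds until it
# answers, giving up after TRIES attempts (default 30).
wait_healthy() {
  url=$1
  tries=${2:-30}
  i=0
  until curl -fsS "$url" > /dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      echo "timed out waiting for $url"
      return 1
    fi
    sleep 2
  done
  echo "OK: $url"
}

# Example: wait_healthy http://localhost:8086/health   # harvey-api
```

For example, `wait_healthy http://localhost:8086/health` blocks until Harvey answers or about a minute passes.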

Local Development Setup

Develop individual services with hot reloading:

Harvey API (Python)

1. Navigate to the service directory:

cd harvey_api

2. Create virtual environment:

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

3. Install dependencies:

pip install -e .[dev]
Or with uv (faster):
uv venv
source .venv/bin/activate
uv pip install -e .[dev]

4. Create static directory:

mkdir -p src/harvey_api/static

5. Set environment variables:

export OPENAI_API_KEY="sk-proj-..."
export OPENAI_MODEL="gpt-4o"
export MCP_SERVER_URL="http://localhost:8085/sse"

6. Run with hot reload:

uvicorn harvey_api.app:app --reload --port 8086
Access at http://localhost:8086

MCP Server (Python)

1. Set up the environment:

cd mcp_server
uv venv
source .venv/bin/activate
uv pip install -e .[dev]

2. Copy the environment file:

cp .env.example .env
Edit .env:
.env
AMINT_BASE_URL=http://localhost:8001
ANALYSIS_BASE_URL=http://localhost:8002
CACHE_BACKEND=memory
LOG_LEVEL=INFO
HTTP_HOST=0.0.0.0
HTTP_PORT=8085

3. Run the server:

# Stdio transport (for MCP clients)
python -m pricing_mcp

# HTTP transport (for testing)
python -m pricing_mcp --transport http

4. Run tests:

pytest

Analysis API (Node.js)

1. Install dependencies:

cd analysis_api
npm install

2. Configure environment:

export NODE_ENV=development
export PORT=3000
export CHOCO_API=http://localhost:8000
export LOG_LEVEL=debug

3. Run in development mode:

npm run dev
Access Swagger docs at http://localhost:3000/docs

4. Build for production:

npm run build
npm start

A-MINT API (Python)

1. Set up the environment:

cd src  # A-MINT is in the src/ directory
python -m venv .venv
source .venv/bin/activate
pip install -e .

2. Configure environment:

export OPENAI_API_KEY="sk-proj-..."
export PYTHONPATH=/path/to/Pricing-Intelligence
export ANALYSIS_API=http://localhost:3000/api/v1

3. Run the service:

uvicorn src.app:app --reload --port 8001
For detailed A-MINT configuration, visit the A-MINT repository.

CSP Service (Java)

1. Build the service:

cd csp
./gradlew build

2. Run locally:

java -jar build/libs/csp-service.jar
Or with Gradle:
./gradlew bootRun

3. Test the endpoint:

curl http://localhost:8000/health

Frontend (React)

1. Install dependencies:

cd frontend
npm install

2. Configure API base URL

Create .env.local:
.env.local
VITE_API_BASE_URL=http://localhost:8086
VITE_SPHERE_BASE_URL=https://sphere.score.us.es

3. Run development server:

npm run dev
Access at http://localhost:5173

4. Build for production:

npm run build
npm run preview

Environment Variables Reference

Harvey API

| Variable | Default | Description |
|---|---|---|
| OPENAI_API_KEY | - | Required: OpenAI API key |
| OPENAI_MODEL | gpt-5-nano | Model to use for Harvey agent |
| MCP_SERVER_URL | http://mcp-server:8085/sse | MCP server endpoint |
| MCP_TRANSPORT | sse | Transport protocol: sse or stdio |
| HARVEY_STATIC_DIR | /app/static | Directory for uploaded files |
| LOG_LEVEL | INFO | Logging verbosity |
| CACHE_BACKEND | memory | Cache backend: memory or redis |

MCP Server

| Variable | Default | Description |
|---|---|---|
| AMINT_BASE_URL | http://a-mint-api:8000 | A-MINT API endpoint |
| ANALYSIS_BASE_URL | http://analysis-api:3000 | Analysis API endpoint |
| CACHE_BACKEND | memory | Cache backend: memory or redis |
| LOG_LEVEL | INFO | Logging verbosity |
| HTTP_HOST | 0.0.0.0 | Bind address |
| HTTP_PORT | 8085 | Server port |
| MCP_TRANSPORT | sse | Transport protocol |

Analysis API

| Variable | Default | Description |
|---|---|---|
| NODE_ENV | production | Environment: development or production |
| PORT | 3000 | Server port |
| CHOCO_API | http://choco-api:8000 | CSP service endpoint |
| LOG_LEVEL | INFO | Logging level |

A-MINT API

| Variable | Default | Description |
|---|---|---|
| OPENAI_API_KEY | - | Required: OpenAI API key |
| OPENAI_API_KEYS | - | Optional: Comma-separated list for load balancing |
| ANALYSIS_API | http://analysis-api:3000/api/v1 | Analysis API endpoint |
| PYTHONPATH | /app | Python module search path |
| PORT | 8000 | Server port |
| LOG_LEVEL | INFO | Logging verbosity |

CSP Service

| Variable | Default | Description |
|---|---|---|
| PORT | 8000 | Server port |
| LOG_LEVEL | INFO | Logging level |

Advanced Configuration

Using Redis for Caching

For improved performance in multi-instance deployments:
1. Add Redis to docker-compose.yml:

docker-compose.yml
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  redis-data:
2. Update service configurations:

docker-compose.yml
harvey-api:
  environment:
    - CACHE_BACKEND=redis
    - REDIS_URL=redis://redis:6379
  depends_on:
    - redis

mcp-server:
  environment:
    - CACHE_BACKEND=redis
    - REDIS_URL=redis://redis:6379
  depends_on:
    - redis

Scaling Services

Scale specific services for higher load:
# Scale Analysis API to 3 instances
docker-compose up -d --scale analysis-api=3

# Scale MCP Server to 2 instances
docker-compose up -d --scale mcp-server=2
You’ll need a load balancer (e.g., nginx) to distribute traffic across scaled instances.
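The load-balancer note above can be sketched as an nginx front end. This is a hypothetical `nginx.conf`, not shipped with the repository; it relies on Docker's embedded DNS resolving `analysis-api` to the scaled replicas, and nginx resolves that name at startup, so restart nginx after scaling:

```nginx
events {}

http {
  upstream analysis_backend {
    # Docker's internal DNS returns one entry per scaled replica;
    # nginx resolves this name when it starts.
    server analysis-api:3000;
  }

  server {
    listen 80;

    location /api/ {
      proxy_pass http://analysis_backend;
      proxy_set_header Host $host;
    }
  }
}
```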

Custom Volume Mounts

Persist data and enable live code editing:
docker-compose.yml
services:
  harvey-api:
    volumes:
      # Mount source code for development
      - ./harvey_api/src:/app/src
      # Persist logs
      - ./logs/harvey:/app/logs
      # Persist static files
      - ./data/static:/app/static

  analysis-api:
    volumes:
      - ./analysis_api/src:/app/src
      - ./logs/analysis:/app/logs
      - ./data/output:/app/output

Network Configuration

Create a custom network for service isolation:
docker-compose.yml
networks:
  pricing-intelligence:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16

services:
  harvey-api:
    networks:
      - pricing-intelligence
  # ... other services

Troubleshooting

Port conflicts

Find the process using the port:
# Linux/macOS
lsof -i :8086

# Windows
netstat -ano | findstr :8086
Kill the process or change the port mapping in docker-compose.yml.

Build failures

Clear the Docker cache and rebuild:
docker-compose down -v
docker system prune -a
docker-compose build --no-cache
docker-compose up

Service won't start

Check the service logs:
docker-compose logs harvey-api
Common issues:
  • Missing environment variables
  • Dependency services not ready
  • Insufficient memory (increase Docker memory limit)

OpenAI API errors

Verify the API key:
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $HARVEY_API_KEY"
Check quota and billing at https://platform.openai.com/usage

MCP connection issues

Ensure the MCP server is reachable:
docker-compose exec harvey-api curl http://mcp-server:8085/health
Check network connectivity between containers:
docker network inspect pricing_intelligence_default

Solver timeouts

Increase timeout limits in the Analysis API:
docker-compose.yml
analysis-api:
  environment:
    - SOLVER_TIMEOUT=300000  # 5 minutes in ms
Or allocate more CPU to the CSP service:
docker-compose.yml
choco-api:
  deploy:
    resources:
      limits:
        cpus: '4'
        memory: 4G

Production Considerations

Security

1. Use secrets management

Never hardcode API keys. Use Docker secrets or external secret managers:
docker-compose.yml
secrets:
  openai_key:
    external: true

services:
  harvey-api:
    secrets:
      - openai_key
    environment:
      - OPENAI_API_KEY_FILE=/run/secrets/openai_key
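The `OPENAI_API_KEY_FILE` indirection follows the common Docker `_FILE` convention: the entrypoint reads the secret from the mounted file and exports the plain variable. A minimal sketch of that logic (the `load_secret` helper is hypothetical; the actual service entrypoints may differ):

```shell
# load_secret VAR: if VAR_FILE names a readable file, export VAR with the
# file's contents (the Docker secrets "_FILE" convention).
load_secret() {
  name=$1
  eval "file=\${${name}_FILE:-}"
  if [ -n "$file" ] && [ -f "$file" ]; then
    eval "export $name=\"\$(cat \"$file\")\""
  fi
}

load_secret OPENAI_API_KEY   # e.g. reads /run/secrets/openai_key
```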
2. Enable HTTPS

Use a reverse proxy (nginx, Traefik) with SSL certificates:
docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./certs:/etc/nginx/certs
3. Restrict network access

Limit external exposure:
docker-compose.yml
services:
  choco-api:
    # Don't expose ports externally
    expose:
      - "8000"
    # Only nginx needs public access
  
  nginx:
    ports:
      - "443:443"

Monitoring

Configure comprehensive health checks:
docker-compose.yml
harvey-api:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:8086/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 40s

Backup & Recovery

Regularly backup persistent data:
# Backup uploaded YAML files
tar -czf static-backup-$(date +%Y%m%d).tar.gz ./data/static

# Backup analysis results
tar -czf output-backup-$(date +%Y%m%d).tar.gz ./data/output

# Backup logs
tar -czf logs-backup-$(date +%Y%m%d).tar.gz ./logs
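A backup is only useful if it restores. A quick round-trip check (a sketch using the same paths as above) archives the directory and then verifies the archive is readable:

```shell
# Create a dated archive of ./data/static and verify it is readable
# before trusting it as a backup.
backup="static-backup-$(date +%Y%m%d).tar.gz"
mkdir -p ./data/static
tar -czf "$backup" ./data/static
if tar -tzf "$backup" > /dev/null; then
  echo "backup verified: $backup"
fi

# Restore later with:
#   tar -xzf "$backup"
```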

Next Steps

Harvey API

Explore the Harvey API and chat interface

Architecture

Deep dive into system design and component interactions

Pricing Models

Learn the Pricing2Yaml data format

Basic Usage

Learn how to use the chat interface
