
Overview

Nectr is designed to be self-hosted. You own your data, your API keys, and your infrastructure. This guide covers deploying Nectr using Docker on any server, cloud provider, or Kubernetes cluster.
Self-hosting gives you complete control but requires managing your own database, Neo4j instance, and SSL certificates.

Why Self-Host?

  • Data Ownership: All code, PRs, and reviews stay on your infrastructure
  • Custom Integrations: Modify the codebase to fit your workflow
  • Security: Meet compliance requirements for sensitive codebases
  • Cost Control: No usage-based pricing for high-volume teams

Prerequisites

  • Linux server (Ubuntu 22.04+ recommended) or any Docker-compatible host
  • Docker and Docker Compose installed
  • Domain name with DNS access (for SSL certificates)
  • PostgreSQL database (self-hosted or managed)
  • Neo4j database (self-hosted or AuraDB)

Architecture

A self-hosted Nectr deployment consists of:
┌─────────────────────────────────────────────────────────────┐
│                      Your Infrastructure                     │
│                                                              │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐   │
│  │   Frontend   │   │   Backend    │   │  PostgreSQL  │   │
│  │   Next.js    │──▶│   FastAPI    │──▶│   Database   │   │
│  │  (port 3000) │   │  (port 8000) │   │  (port 5432) │   │
│  └──────────────┘   └──────────────┘   └──────────────┘   │
│         │                   │                               │
│         │                   └──────────────┐                │
│         │                                  │                │
│  ┌──────▼──────┐                    ┌──────▼──────┐        │
│  │   Nginx     │                    │    Neo4j    │        │
│  │   Reverse   │                    │    Graph    │        │
│  │   Proxy     │                    │  (port 7687)│        │
│  └─────────────┘                    └─────────────┘        │
│         │                                                   │
└─────────┼───────────────────────────────────────────────────┘


    Internet (HTTPS)

Docker Deployment

Using Docker Compose

The easiest way to self-host Nectr is with Docker Compose.
1. Clone Repository

git clone https://github.com/yourusername/nectr.git
cd nectr
2. Create docker-compose.yml

Create a docker-compose.yml file in the project root:
docker-compose.yml
version: '3.8'

services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql+asyncpg://postgres:password@postgres:5432/nectr
      - NEO4J_URI=bolt://neo4j:7687
      - NEO4J_USERNAME=neo4j
      - NEO4J_PASSWORD=neo4jpassword
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - GITHUB_CLIENT_ID=${GITHUB_CLIENT_ID}
      - GITHUB_CLIENT_SECRET=${GITHUB_CLIENT_SECRET}
      - GITHUB_PAT=${GITHUB_PAT}
      - SECRET_KEY=${SECRET_KEY}
      - MEM0_API_KEY=${MEM0_API_KEY}
      - BACKEND_URL=https://api.yourdomain.com
      - FRONTEND_URL=https://yourdomain.com
      - APP_ENV=production
      - LOG_LEVEL=INFO
    depends_on:
      - postgres
      - neo4j
    restart: unless-stopped

  frontend:
    build:
      context: ./nectr-web
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_URL=https://api.yourdomain.com
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=nectr
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  neo4j:
    image: neo4j:5.15
    environment:
      - NEO4J_AUTH=neo4j/neo4jpassword
      - NEO4J_PLUGINS=["apoc"]
    volumes:
      - neo4j_data:/data
    ports:
      - "7474:7474"  # Browser UI
      - "7687:7687"  # Bolt protocol
    restart: unless-stopped

volumes:
  postgres_data:
  neo4j_data:
3. Create Frontend Dockerfile

Create nectr-web/Dockerfile. The multi-stage build below copies .next/standalone, which requires output: 'standalone' to be set in next.config.js:
nectr-web/Dockerfile
FROM node:20-alpine AS builder

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

FROM node:20-alpine AS runner

WORKDIR /app

ENV NODE_ENV=production

COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static

EXPOSE 3000

CMD ["node", "server.js"]
4. Configure Environment Variables

Create a .env file in the project root:
.env
# AI
ANTHROPIC_API_KEY=sk-ant-...

# GitHub
GITHUB_CLIENT_ID=...
GITHUB_CLIENT_SECRET=...
GITHUB_PAT=ghp_...

# Auth
SECRET_KEY=your-generated-secret

# Mem0
MEM0_API_KEY=m0-...
Never commit .env to version control. Add it to .gitignore.
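Before starting the stack, it can help to fail fast on missing variables. A minimal sketch — the check_env helper is hypothetical, not part of Nectr, and the variable names come from the compose file above:

```shell
# Hypothetical preflight helper: fail if any named variable is unset or empty.
check_env() {
  for var in "$@"; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      echo "Missing: $var"
      return 1
    fi
  done
  echo "All required variables set"
}

# Example with placeholder values:
ANTHROPIC_API_KEY=sk-ant-test
SECRET_KEY=test-secret
check_env ANTHROPIC_API_KEY SECRET_KEY
```

To check a real deployment, source the file first: `set -a; . ./.env; set +a; check_env ANTHROPIC_API_KEY GITHUB_CLIENT_ID GITHUB_CLIENT_SECRET SECRET_KEY`.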
5. Start Services

docker-compose up -d
This starts:
  • Backend (FastAPI) on port 8000
  • Frontend (Next.js) on port 3000
  • PostgreSQL on port 5432
  • Neo4j on ports 7474 (browser) and 7687 (bolt)
6. Run Migrations

Apply database migrations:
docker-compose exec backend alembic upgrade head
7. Verify Deployment

curl http://localhost:8000/health
Expected response:
{
  "status": "healthy",
  "database": "connected",
  "neo4j": "connected"
}
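This check can also run from a script. A sketch using grep; the response is hard-coded here so the example is self-contained, and a live probe would fetch it with curl as shown in the comment:

```shell
# Minimal health probe: report OK only if the status field is healthy.
# In production, fetch live data instead:
#   RESPONSE=$(curl -fsS http://localhost:8000/health)
RESPONSE='{"status": "healthy", "database": "connected", "neo4j": "connected"}'

if echo "$RESPONSE" | grep -q '"status": "healthy"'; then
  echo "OK"
else
  echo "DEGRADED"
fi
```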

SSL & Reverse Proxy

Use Nginx as a reverse proxy with Let’s Encrypt for SSL certificates.
1. Install Nginx

sudo apt update
sudo apt install nginx certbot python3-certbot-nginx
2. Configure Nginx

Create /etc/nginx/sites-available/nectr:
/etc/nginx/sites-available/nectr
# Frontend
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

# Backend API
server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # WebSocket support (for MCP SSE)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
3. Enable Site

sudo ln -s /etc/nginx/sites-available/nectr /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
4. Get SSL Certificates

sudo certbot --nginx -d yourdomain.com -d api.yourdomain.com
Follow the prompts. Certbot automatically updates your Nginx config.
5. Auto-Renewal

Test auto-renewal:
sudo certbot renew --dry-run
Certbot automatically renews certificates before they expire.
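To alert on expiry independently of certbot, you can inspect the certificate directly with openssl. A sketch — the throwaway self-signed certificate below exists only so the command runs offline; against your live site you would use openssl s_client as shown in the comment:

```shell
# Print the expiry date of a certificate. Against the live deployment:
#   echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com 2>/dev/null \
#     | openssl x509 -enddate -noout
# Generate a throwaway self-signed cert so this example is self-contained:
CERT=$(mktemp)
KEY=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$KEY" -out "$CERT" \
  -days 30 -subj "/CN=yourdomain.com" 2>/dev/null

# Prints a line like: notAfter=...
openssl x509 -enddate -noout -in "$CERT"
```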

Alternative: Single Server Without Docker

If you prefer not to use Docker:
1. Install Dependencies

# Backend
sudo apt install python3 python3-venv python3-pip postgresql

# Frontend
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install nodejs
2. Set Up Backend

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your values
alembic upgrade head
3. Set Up Frontend

cd nectr-web
npm install
cp .env.example .env.local
# Edit .env.local
npm run build
4. Create Systemd Services

Create /etc/systemd/system/nectr-backend.service:
[Unit]
Description=Nectr Backend
After=network.target

[Service]
Type=simple
User=nectr
WorkingDirectory=/home/nectr/nectr
Environment="PATH=/home/nectr/nectr/venv/bin:/usr/bin:/bin"
EnvironmentFile=/home/nectr/nectr/.env
ExecStart=/home/nectr/nectr/venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000
Restart=always

[Install]
WantedBy=multi-user.target
Create /etc/systemd/system/nectr-frontend.service:
[Unit]
Description=Nectr Frontend
After=network.target

[Service]
Type=simple
User=nectr
WorkingDirectory=/home/nectr/nectr/nectr-web
Environment="NODE_ENV=production"
EnvironmentFile=/home/nectr/nectr/nectr-web/.env.local
ExecStart=/usr/bin/npm start
Restart=always

[Install]
WantedBy=multi-user.target
5. Start Services

sudo systemctl daemon-reload
sudo systemctl enable nectr-backend nectr-frontend
sudo systemctl start nectr-backend nectr-frontend

Managed Services

For easier maintenance, use managed services:
Service      Provider                          Cost
PostgreSQL   Supabase, AWS RDS, DigitalOcean   Free tier available
Neo4j        Neo4j AuraDB                      Free tier: 50k nodes
Mem0         Mem0.ai                           Pay-as-you-go
Server       AWS EC2, DigitalOcean, Hetzner    $5-20/month
Using managed databases simplifies backups, scaling, and maintenance.

Monitoring & Backups

Health Monitoring

Point an external uptime monitor (for example, Uptime Robot) at the backend health endpoint:
curl https://api.yourdomain.com/health

Database Backups

Automated daily backups:
# Create backup script
cat > /home/nectr/backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/home/nectr/backups"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
docker exec nectr_postgres_1 pg_dump -U postgres nectr > "$BACKUP_DIR/nectr_$DATE.sql"
find "$BACKUP_DIR" -type f -name "*.sql" -mtime +7 -delete  # Keep 7 days of backups
EOF

chmod +x /home/nectr/backup.sh

# Add to crontab
crontab -e
# Add: 0 2 * * * /home/nectr/backup.sh
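The retention rule can be verified safely on throwaway files before trusting it with real backups. A sketch, assuming GNU touch (as on Ubuntu) for the -d flag; the -type f and -name filters restrict deletion to backup files:

```shell
# Demonstrate the 7-day retention rule on temporary files.
DEMO_DIR=$(mktemp -d)
touch "$DEMO_DIR/nectr_new.sql"                    # fresh backup: kept
touch -d "10 days ago" "$DEMO_DIR/nectr_old.sql"   # stale backup: deleted
find "$DEMO_DIR" -type f -name "*.sql" -mtime +7 -delete
ls "$DEMO_DIR"
```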

Log Management

Rotate logs to prevent disk space issues:
# Configure logrotate (sudo must apply to the write, not the redirection)
sudo tee /etc/logrotate.d/nectr > /dev/null << EOF
/home/nectr/nectr/logs/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
EOF

Scaling

Horizontal Scaling

Run multiple backend instances behind a load balancer:
docker-compose.yml (excerpt)
services:
  backend:
    # ... existing config ...
    # Remove the fixed "8000:8000" host port mapping when scaling, so the
    # replicas don't collide; Nginx reaches them over the Compose network.
    deploy:
      replicas: 3
  
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    depends_on:
      - backend

Vertical Scaling

Increase server resources:
  • CPU: 2+ cores for production
  • RAM: 4GB minimum, 8GB recommended
  • Storage: 20GB+ (depends on PR volume)

Security Best Practices

1. Use Strong Secrets

Generate cryptographically secure secrets:
python -c "import secrets; print(secrets.token_hex(32))"
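If Python isn't available on the host, openssl produces an equivalent 64-hex-character secret:

```shell
# 32 random bytes, hex-encoded: same strength as the Python one-liner above
openssl rand -hex 32
```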
2. Restrict Database Access

Configure PostgreSQL to only accept connections from localhost:
# postgresql.conf
listen_addresses = 'localhost'
3. Enable Firewall

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 22/tcp  # SSH
sudo ufw enable
4. Regular Updates

Keep system packages updated:
sudo apt update && sudo apt upgrade -y
5. Secure GitHub PAT

Store GitHub PAT in environment variables, never in code. Rotate it every 90 days.

Troubleshooting

Check logs:
docker-compose logs backend
docker-compose logs frontend
Common issues:
  • Missing environment variables
  • Database connection failures
  • Port conflicts
Verify PostgreSQL is accepting connections:
docker-compose exec postgres pg_isready -U postgres
Check DATABASE_URL format:
postgresql+asyncpg://user:password@host:5432/database
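A quick shell sanity check on the URL shape. A rough sketch only: the pattern catches gross formatting mistakes, not bad credentials or an unreachable host:

```shell
# Verify DATABASE_URL roughly matches postgresql+asyncpg://user:pass@host:port/db
DATABASE_URL="postgresql+asyncpg://postgres:password@postgres:5432/nectr"
if echo "$DATABASE_URL" | grep -Eq '^postgresql\+asyncpg://[^:/@]+:[^@]+@[^:/@]+:[0-9]+/[A-Za-z0-9_]+$'; then
  echo "format OK"
else
  echo "format INVALID"
fi
```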
Reset Neo4j password:
docker-compose exec neo4j cypher-shell
# In the shell: ALTER CURRENT USER SET PASSWORD FROM 'old-password' TO 'new-password';
Check disk usage:
df -h
docker system df
Clean up unused images:
docker system prune -a

Next Steps

  • Configure GitHub App: set up OAuth and webhooks
  • Connect Repository: connect your first repo
