Deploy your xmcp server to your own infrastructure using Node.js directly or containerized with Docker.
Building for Production
Build your xmcp server for production:
npm run build
This compiles your TypeScript code and creates optimized bundles in the dist/ directory:
dist/
├── http.js # HTTP transport server
├── stdio.js # stdio transport server
├── index.js # Main server logic
└── ... # Additional bundled modules
Running the Server
HTTP Transport
Run the HTTP server:
node dist/http.js
By default, the server listens on port 3001. You can customize this with the PORT environment variable:
PORT=8080 node dist/http.js
stdio Transport
For local MCP clients using stdio transport:
node dist/stdio.js
The stdio transport reads from stdin and writes to stdout, making it compatible with MCP clients like Claude Desktop.
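As a sketch, a Claude Desktop entry for the stdio server might look like the following in its claude_desktop_config.json (the server name and the absolute path are placeholders for your own setup):

```json
{
  "mcpServers": {
    "my-xmcp-server": {
      "command": "node",
      "args": ["/absolute/path/to/your-project/dist/stdio.js"]
    }
  }
}
```

Claude Desktop spawns the command itself, so make sure `node` is on the PATH it uses and the path points at the built `dist/stdio.js`, not the TypeScript source.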
Process Managers
For production deployments, use a process manager to keep your server running:
PM2
PM2 is a popular production process manager for Node.js:
# Install PM2
npm install -g pm2
# Start your server
pm2 start dist/http.js --name xmcp-server
# View logs
pm2 logs xmcp-server
# Restart
pm2 restart xmcp-server
# Stop
pm2 stop xmcp-server
# Auto-start on system boot
pm2 startup
pm2 save
PM2 Ecosystem File
Create an ecosystem.config.js for advanced configuration:
module.exports = {
apps: [
{
name: "xmcp-server",
script: "./dist/http.js",
instances: "max",
exec_mode: "cluster",
env: {
NODE_ENV: "production",
PORT: 3001,
},
error_file: "./logs/err.log",
out_file: "./logs/out.log",
log_date_format: "YYYY-MM-DD HH:mm:ss Z",
},
],
};
Then start with:
pm2 start ecosystem.config.js
systemd
For Linux systems, use systemd:
/etc/systemd/system/xmcp.service
[Unit]
Description=xmcp MCP Server
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/xmcp
ExecStart=/usr/bin/node /var/www/xmcp/dist/http.js
Restart=on-failure
RestartSec=10
Environment=NODE_ENV=production
Environment=PORT=3001
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable xmcp
sudo systemctl start xmcp
sudo systemctl status xmcp
View logs:
sudo journalctl -u xmcp -f
Docker Deployment
Dockerfile
Create a Dockerfile for containerized deployment:
FROM node:22-alpine AS builder
WORKDIR /app
# Copy package files
COPY package.json pnpm-lock.yaml ./
# Install dependencies
RUN npm install -g pnpm && pnpm install --frozen-lockfile
# Copy source code
COPY . .
# Build the application
RUN pnpm xmcp build
# Production stage
FROM node:22-alpine
WORKDIR /app
# Copy built files and dependencies
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
# Expose port
EXPOSE 3001
# Run the server
CMD ["node", "dist/http.js"]
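To keep the Docker build context small and avoid copying local artifacts or secrets into the image via `COPY . .`, it's worth adding a .dockerignore next to the Dockerfile (a minimal sketch; adjust for your project):

```
node_modules
dist
logs
.env
.git
```

`node_modules` and `dist` are excluded because the builder stage recreates them from scratch inside the image.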
Build and Run
# Build the image
docker build -t xmcp-server .
# Run the container
docker run -d \
--name xmcp-server \
-p 3001:3001 \
-e NODE_ENV=production \
xmcp-server
# View logs
docker logs -f xmcp-server
Docker Compose
For multi-container setups, use Docker Compose:
version: '3.8'
services:
xmcp:
build: .
ports:
- "3001:3001"
environment:
- NODE_ENV=production
- PORT=3001
- API_KEY=${API_KEY}
restart: unless-stopped
volumes:
- ./logs:/app/logs
networks:
- xmcp-network
networks:
xmcp-network:
driver: bridge
Start with:
docker compose up -d
Multi-Stage Build with pnpm
Optimized Dockerfile using pnpm:
FROM node:22-alpine AS base
RUN npm install -g pnpm
# Dependencies stage
FROM base AS dependencies
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --prod
# Build stage
FROM base AS build
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile
COPY . .
RUN pnpm xmcp build
# Production stage
FROM node:22-alpine AS production
WORKDIR /app
COPY --from=dependencies /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json ./
EXPOSE 3001
ENV NODE_ENV=production
CMD ["node", "dist/http.js"]
Environment Variables
Manage environment variables securely:
.env File
Create a .env file for local development:
NODE_ENV=production
PORT=3001
API_KEY=your-secret-api-key
JWT_SECRET=your-jwt-secret
Never commit .env files to version control. Add .env to your .gitignore.
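If .env is not already ignored, one way to add it from the project root:

```shell
# Append .env to .gitignore so it is never tracked by git
echo ".env" >> .gitignore
```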
Loading Environment Variables
xmcp automatically loads environment variables. You can also use dotenv explicitly:
node -r dotenv/config dist/http.js
Or in your package.json:
{
"scripts": {
"start": "node -r dotenv/config dist/http.js"
},
"dependencies": {
"dotenv": "^16.6.1"
}
}
Common Environment Variables
| Variable | Purpose | Default |
|---|---|---|
| NODE_ENV | Environment (development/production) | development |
| PORT | HTTP server port | 3001 |
| API_KEY | API key for authentication | - |
| JWT_SECRET | JWT signing secret | - |
| XMCP_TELEMETRY_DISABLED | Disable telemetry | false |
| OPENAI_APPS_VERIFICATION_TOKEN | OpenAI app verification token | - |
Reverse Proxy
Use a reverse proxy like Nginx or Caddy for production:
Nginx
/etc/nginx/sites-available/xmcp
server {
listen 80;
server_name mcp.example.com;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
Enable and restart:
sudo ln -s /etc/nginx/sites-available/xmcp /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Caddy
Caddy provides automatic HTTPS:
mcp.example.com {
reverse_proxy localhost:3001
}
Run Caddy from the directory containing the Caddyfile:
caddy run
SSL/TLS Certificates
Let’s Encrypt with Certbot
For Nginx:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d mcp.example.com
For automatic renewal:
sudo certbot renew --dry-run
Caddy (Automatic HTTPS)
Caddy automatically provisions and renews Let’s Encrypt certificates:
mcp.example.com {
reverse_proxy localhost:3001
# HTTPS is automatic!
}
Health Checks
Implement health checks for monitoring:
xmcp includes a built-in /health endpoint:
curl http://localhost:3001/health
Use this in orchestration tools:
Docker Compose Health Check
services:
xmcp:
build: .
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
Kubernetes Probes
apiVersion: apps/v1
kind: Deployment
metadata:
name: xmcp-server
spec:
replicas: 3
template:
spec:
containers:
- name: xmcp
image: xmcp-server:latest
ports:
- containerPort: 3001
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 5
periodSeconds: 5
Scaling Considerations
Horizontal Scaling
xmcp servers are stateless and can be scaled horizontally:
- Load Balancer: Use Nginx, HAProxy, or cloud load balancers
- Session Persistence: Not required (stateless design)
- Shared State: Use external storage (Redis, PostgreSQL) if needed
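When running multiple instances behind Nginx, the single proxy_pass from the Reverse Proxy section can point at an upstream pool instead. A sketch assuming three instances on ports 3001-3003 (ports and the least_conn balancing policy are illustrative choices):

```nginx
upstream xmcp_backend {
    # Route each new request to the instance with the fewest active connections
    least_conn;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    listen 80;
    server_name mcp.example.com;

    location / {
        proxy_pass http://xmcp_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because xmcp servers are stateless, no sticky-session configuration is needed here.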
Vertical Scaling
Adjust Node.js memory and CPU:
node --max-old-space-size=4096 dist/http.js
Monitoring and Logging
Logging
xmcp logs to stdout/stderr. Capture logs with:
- PM2: built-in log management (pm2 logs)
- Docker: docker logs
- systemd: journalctl
Monitoring
Integrate with monitoring tools such as:
- PM2 Plus: Real-time monitoring
- Prometheus: Metrics collection
- Grafana: Visualization
- Sentry: Error tracking
Package.json Scripts
Complete package.json for self-hosting:
{
"name": "my-xmcp-server",
"scripts": {
"dev": "xmcp dev",
"build": "xmcp build",
"start": "node dist/http.js",
"start:pm2": "pm2 start ecosystem.config.js",
"logs": "pm2 logs xmcp-server"
},
"dependencies": {
"xmcp": "latest",
"zod": "^4.0.10"
},
"devDependencies": {
"typescript": "^5.7.2"
}
}
Next Steps
- Vercel Deployment: Deploy to Vercel for serverless hosting
- Cloudflare Workers: Deploy to the edge with Cloudflare
- Authentication: Add authentication to your server
- Configuration: Configure your xmcp server