## Common errors
### 503 Service Unavailable
**Error message:** `No healthy backends available`
Cause: The load balancer cannot find any healthy backend servers to route requests to. This happens when:
- All backend servers are down or unreachable
- All backends are returning non-2xx status codes during health checks
- Health check timeouts are occurring on all backends
See the 503 handling in `src/proxy/proxyHandler.ts:18-20`.
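The failure mode above can be sketched as follows. This is an illustrative model, not the project's actual code; the `Backend` shape and the `selectBackend` name are assumptions:

```typescript
// Minimal model of the "no healthy backends" case (hypothetical shapes).
interface Backend {
  url: string;
  healthy: boolean;
}

// Returns the backend to route to, or a 503 result when none are healthy.
function selectBackend(pool: Backend[]): { status: number; backend?: Backend } {
  const healthy = pool.filter((b) => b.healthy);
  if (healthy.length === 0) {
    // Mirrors the "No healthy backends available" error path.
    return { status: 503 };
  }
  return { status: 200, backend: healthy[0] };
}

// With every backend down, the only possible answer is 503.
console.log(selectBackend([{ url: "http://localhost:4001", healthy: false }]).status); // 503
```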
#### Check backend server status
Verify that at least one backend server is running and accepting connections.
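One quick way to check from a script is a direct probe, bypassing the load balancer. This sketch uses Node 18+'s built-in `fetch`; the port and the `/health` path are assumptions about your setup:

```typescript
// Probe a backend directly to see whether it accepts connections.
async function isReachable(url: string, timeoutMs = 2000): Promise<boolean> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return res.ok; // 2xx means the backend is up and serving
  } catch {
    return false; // connection refused, DNS failure, or timeout
  }
}

// Example: check a backend assumed to listen on port 4001.
isReachable("http://localhost:4001/health").then((up) =>
  console.log(up ? "backend is up" : "backend is down or unreachable"),
);
```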
#### Wait for health check cycle
If you just started a backend, wait up to 5 seconds for the next health check cycle to detect it.
### 502 Bad Gateway
**Error message:** `Bad gateway`
Cause: The selected backend server failed while processing the request. This occurs when:
- The backend crashes mid-request
- Network connection to the backend is lost
- The backend closes the connection unexpectedly
See the failure handling in `src/proxy/proxyHandler.ts:31-38`.
#### Verify automatic failover
The load balancer automatically marks the failing backend as unhealthy; check the `HEALTH` logs to confirm. When a backend fails, it is marked unhealthy immediately (line 35 in `proxyHandler.ts`), so you don't need to wait for the next health check cycle.
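The immediate-failover behavior can be modeled like this. It is a sketch, not the project's actual API; the shapes and the inline mark-unhealthy step are assumptions:

```typescript
// Minimal model of the 502 path: a failed forward marks the backend
// unhealthy right away, without waiting for the health check cycle.
interface Backend {
  url: string;
  healthy: boolean;
}

async function forwardWithFailover(
  backend: Backend,
  forward: (url: string) => Promise<string>,
): Promise<{ status: number; body: string }> {
  try {
    return { status: 200, body: await forward(backend.url) };
  } catch {
    backend.healthy = false; // immediate mark, as described for proxyHandler.ts
    return { status: 502, body: "Bad gateway" };
  }
}
```

The key design point: the proxy path itself demotes the backend, so at most one client request observes the failure before the backend leaves rotation.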
## FAQ and solutions
### Why are requests still routed to an unhealthy backend?

This should not happen. The load balancer only selects from `BackendPool.getHealthyBackends()` (see `src/balancer/pool.ts:16-18`). If you're seeing this:

- Check that the backend was marked unhealthy (look for `HEALTH: UNHEALTHY` or `ERROR: Backend failed` logs)
- Verify the health check is running (`INFO: Health checker started`)
- Ensure you're not caching responses on the client side
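A sketch of what `getHealthyBackends()` most likely does; the exact implementation in `src/balancer/pool.ts` may differ:

```typescript
interface Backend {
  url: string;
  healthy: boolean;
}

class BackendPool {
  constructor(private backends: Backend[]) {}

  // Only healthy backends are ever candidates for selection.
  getHealthyBackends(): Backend[] {
    return this.backends.filter((b) => b.healthy);
  }
}

const pool = new BackendPool([
  { url: "http://localhost:4001", healthy: true },
  { url: "http://localhost:4002", healthy: false },
]);
console.log(pool.getHealthyBackends().map((b) => b.url)); // [ 'http://localhost:4001' ]
```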
### Health checks are timing out on all backends

The default health check timeout is 3 seconds (enforced via `AbortController`).

Possible causes:

- Backends are overloaded and responding slowly
- Network latency between the load balancer and backends is high
- The backend health check endpoint is doing expensive operations

Possible fixes:

- Increase the timeout in `src/healthchecker/healthChecker.ts:22` (currently `3000` ms)
- Optimize backend response time
- Use a dedicated lightweight health check endpoint on the backends
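The 3-second cutoff can be sketched like this (an assumed shape; the real check lives in `src/healthchecker/healthChecker.ts` and may differ):

```typescript
// Health check with a hard timeout enforced via AbortController.
async function checkHealth(url: string, timeoutMs = 3000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return res.ok; // healthy only on a 2xx response
  } catch {
    return false; // aborted (timeout) or network error -> unhealthy
  } finally {
    clearTimeout(timer); // avoid leaking the timer on fast responses
  }
}
```

Raising `timeoutMs` trades slower failure detection for tolerance of slow backends; a dedicated lightweight `/health` endpoint is usually the better fix.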
### A recovered backend isn't being used

The health checker runs every 5 seconds by default, so wait for the next health check cycle.

To verify recovery:

- Watch for `HEALTH: HEALTHY` log entries
- Check that subsequent requests are routed to the recovered backend

If detection is too slow, reduce the health check interval in `src/index.ts` (currently `5000` ms).
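The 5-second cadence is most plausibly a `setInterval`; a sketch of the shape involved (the names here are assumptions, not the project's actual code):

```typescript
// Run a health check callback on a fixed interval; returns a stop function.
function startHealthChecks(check: () => void, intervalMs = 5000): () => void {
  check(); // run once immediately on startup
  const timer = setInterval(check, intervalMs);
  return () => clearInterval(timer);
}

// Example: a 1-second interval instead of the default 5000 ms.
// const stop = startHealthChecks(() => runAllChecks(), 1000); // runAllChecks is hypothetical
```

A shorter interval detects recovery faster at the cost of more health check traffic to every backend.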
### Round robin distribution is uneven

Expected behavior: with 3 backends, requests should be distributed 1→2→3→1→2→3…

If distribution is uneven, check:

- Are all backends healthy? Unhealthy backends are skipped in the rotation
- Review the `REQUEST` logs to verify the actual distribution pattern
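Round-robin over only the healthy set can be sketched as follows; unhealthy backends simply drop out of the cycle, which is why the distribution looks uneven while one is down (illustrative, not the project's exact code):

```typescript
interface Backend {
  url: string;
  healthy: boolean;
}

class RoundRobin {
  private index = 0;

  constructor(private backends: Backend[]) {}

  // Cycle through healthy backends only; unhealthy ones are skipped.
  next(): Backend | undefined {
    const healthy = this.backends.filter((b) => b.healthy);
    if (healthy.length === 0) return undefined;
    const backend = healthy[this.index % healthy.length];
    this.index++;
    return backend;
  }
}

const rr = new RoundRobin([
  { url: "1", healthy: true },
  { url: "2", healthy: false }, // skipped while unhealthy
  { url: "3", healthy: true },
]);
console.log([rr.next()!.url, rr.next()!.url, rr.next()!.url].join("→")); // 1→3→1
```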
### How do I add a new backend without downtime?
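The body of this answer appears to be missing from the page. One approach consistent with the architecture described here, sketched with a hypothetical `addBackend` method (the real `BackendPool` API may differ): register the new backend as unhealthy, and let the next health check cycle promote it, so traffic only shifts once the backend passes a check.

```typescript
interface Backend {
  url: string;
  healthy: boolean;
}

class BackendPool {
  private backends: Backend[] = [];

  // Hypothetical: new backends start unhealthy so they receive no traffic
  // until a health check passes.
  addBackend(url: string): Backend {
    const backend = { url, healthy: false };
    this.backends.push(backend);
    return backend;
  }

  getHealthyBackends(): Backend[] {
    return this.backends.filter((b) => b.healthy);
  }
}

const pool = new BackendPool();
const added = pool.addBackend("http://localhost:4004");
console.log(pool.getHealthyBackends().length); // 0: no traffic yet
added.healthy = true; // what the health checker would do after a passing check
console.log(pool.getHealthyBackends().length); // 1: now in rotation
```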
### Connection refused errors during startup

Cause: the health checker runs immediately on startup, before all backends are ready.

Expected behavior: you may see `UNHEALTHY` logs during startup. This is normal if:

- Backends start slower than the load balancer
- You're testing with backends that aren't running yet
### High response times logged

Response time is measured in `ProxyHandler`, from when the backend is selected until the response is received (see `src/proxy/proxyHandler.ts`).

High response times indicate:

- The backend is slow to process the request
- Network latency between the load balancer and the backend
- The backend is under heavy load

Possible fixes:

- Investigate backend performance
- Add more backend servers to distribute load
- Implement caching on the backends
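The measurement described above, from backend selection to response receipt, can be sketched as (illustrative only):

```typescript
// Time a proxied request from backend selection to response completion.
async function timeRequest<T>(send: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = Date.now();
  const result = await send();
  return { result, ms: Date.now() - start };
}

// Example with a simulated 50 ms backend delay.
timeRequest(() => new Promise<string>((r) => setTimeout(() => r("ok"), 50))).then(
  ({ ms }) => console.log(`response time: ${ms}ms`),
);
```

Note that this number includes network latency to the backend, not just backend processing time, so a slow network inflates it even when the backend itself is fast.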
## Debugging tips
### Verify backend pool state
While there's no built-in admin dashboard yet, you can inspect the backend pool state by adding temporary debug logs in `src/index.ts`.
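A temporary debug log you could drop into `src/index.ts`; this is a sketch that assumes the pool exposes its backends and their health flags, which may not match the real API:

```typescript
interface Backend {
  url: string;
  healthy: boolean;
}

// Format the pool state as a single log line.
function formatPoolState(backends: Backend[]): string {
  return backends
    .map((b) => `${b.url}: ${b.healthy ? "HEALTHY" : "UNHEALTHY"}`)
    .join(" | ");
}

// Temporary: dump pool state every 10 seconds (remove before committing).
// setInterval(() => console.log(`POOL: ${formatPoolState(backends)}`), 10_000);
console.log(formatPoolState([
  { url: "http://localhost:4001", healthy: true },
  { url: "http://localhost:4002", healthy: false },
]));
// -> http://localhost:4001: HEALTHY | http://localhost:4002: UNHEALTHY
```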
### Test with curl
Test individual backends directly with curl, then compare against requests sent through the load balancer.

### Simulate backend failure
To test automatic failover:

1. Send requests through the load balancer
2. Kill one backend server (Ctrl+C)
3. Observe the `ERROR` and `HEALTH: UNHEALTHY` logs
4. Verify subsequent requests skip the failed backend
5. Restart the backend
6. Watch for `HEALTH: HEALTHY` and confirm it re-enters rotation