## Docker issues

### Containers fail to start
**Symptom:** `docker compose up` exits immediately, or containers restart in a loop.

Check the logs for each service:

```bash
docker compose logs server
docker compose logs postgres
docker compose logs elasticsearch
```
#### Verify environment variables

Ensure the `.env` file exists and contains all required variables:

```bash
# Check if .env exists
ls -la apps/server/.env

# Compare with .env.example
diff .env.example apps/server/.env
```

Required variables:

- `DATABASE_URL`
- `NAMESPACE_UUID`
- `DOWNLOAD_SECRET`
- `BETTER_AUTH_SECRET`
- `REDIS_PASSWORD`
#### Check for port conflicts

Default ports: 3000 (server), 3001 (web), 5432 (postgres), 6379 (redis), 9200 (elasticsearch).

```bash
# Check if ports are in use
sudo lsof -i :3000
sudo lsof -i :5432
sudo lsof -i :9200
```

Override ports in `.env`:

```bash
SERVER_PORT=3100
WEB_PORT=3101
```
### Permission errors on volumes

**Symptom:** `permission denied` errors in the logs.

```bash
# Fix volume permissions
docker compose down
docker volume rm nanahoshi-v2_postgres_data
docker volume rm nanahoshi-v2_es_data
docker volume rm nanahoshi-v2_server_data
docker compose up -d
```

> **Warning:** This deletes all data in the volumes. Back up first if you have existing data.
### Out of disk space

**Symptom:** `no space left on device` errors.

```bash
# Check Docker disk usage
docker system df

# Clean up unused images, containers, and volumes
docker system prune -a --volumes
```
## Database connection issues

### Server cannot connect to PostgreSQL

**Symptom:** Logs show `connection refused` or `ECONNREFUSED`.

#### Check PostgreSQL health

```bash
docker exec nanahoshi-v2-postgres pg_isready -U postgres
```

Expected output: `postgres:5432 - accepting connections`
#### Verify DATABASE_URL

Ensure `DATABASE_URL` uses the docker-compose service name as the host:

```bash
# Correct (production)
DATABASE_URL=postgresql://postgres:password@postgres:5432/nanahoshi-v2

# Correct (development with local DB)
DATABASE_URL=postgresql://postgres:password@localhost:5432/nanahoshi-v2
```
#### Check network connectivity

```bash
docker exec nanahoshi-v2-server ping -c 3 postgres
```
### Migration errors

**Symptom:** Server crashes on startup with migration errors.

Migrations run automatically on startup:

```typescript
// packages/db/src/migrate.ts, called from apps/server/src/index.ts:252
await runMigrations();
```

Solutions:

**Reset database** (deletes all data):

```bash
docker exec -it nanahoshi-v2-postgres psql -U postgres -c "DROP DATABASE \"nanahoshi-v2\";"
docker exec -it nanahoshi-v2-postgres psql -U postgres -c "CREATE DATABASE \"nanahoshi-v2\";"
docker restart nanahoshi-v2-server
```

**Manual migration:**

```bash
# In development environment
bun run db:migrate
```
## Elasticsearch issues

### Elasticsearch won’t start

**Symptom:** Container exits with `max virtual memory areas vm.max_map_count [65530] is too low`.

**Linux:**

```bash
# Temporarily
sudo sysctl -w vm.max_map_count=262144

# Permanently
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```

**macOS (Docker Desktop) and Windows (WSL2):** the same `vm.max_map_count` setting must be applied inside the Docker Desktop VM or the WSL2 distribution, not on the host itself.
### Index creation fails

**Symptom:** Server logs show `[ES] Failed to ensure index on startup`.

```typescript
// apps/server/src/index.ts:256-259
await ensureIndex().catch((err: unknown) => {
  console.warn("[ES] Failed to ensure index on startup:", err);
});
```

#### Check Elasticsearch health

```bash
curl http://localhost:9200/_cluster/health?pretty
```

#### Recreate index manually

```bash
# Delete the existing index
curl -X DELETE http://localhost:9200/nanahoshi_books

# Restart the server to recreate it
docker restart nanahoshi-v2-server
```

#### Check Elasticsearch logs

```bash
docker logs nanahoshi-v2-elasticsearch
```
### Search returns no results

**Symptom:** Books exist in the database but search returns empty results.

#### Check index document count

```bash
curl http://localhost:9200/nanahoshi_books/_count?pretty
```

If the count is 0, the index is empty. Trigger a reindex:

1. Access Bull Board: `http://localhost:3000/admin/queues/`
2. Navigate to the "book-index" queue
3. Click "Add job" and submit
4. Monitor the progress in the jobs list

The worker processes books in batches of 1000:

```typescript
// packages/api/src/infrastructure/workers/book.index.worker.ts:13
const BATCH_SIZE = 1000;
```

Watch progress in the server logs:

```bash
docker logs nanahoshi-v2-server 2>&1 | grep "\[Worker\]"
```
## File scanning issues

### Library scan finds no files

**Symptom:** Library scan completes but no books are added.

#### Check volume mounts

Verify the book directories are mounted in `docker-compose.yml`:

```yaml
server:
  volumes:
    - server_data:/app/apps/server/data
    - /host/path/to/books:/container/path:ro
```

#### Verify library path configuration

In the admin UI, ensure the library path matches the container path:

- Host path: `/home/user/manga`
- Container mount: `/home/user/manga:/books/manga:ro`
- Library path in UI: `/books/manga`

#### Check file permissions

```bash
# From inside the container
docker exec nanahoshi-v2-server ls -la /books/manga
```

#### Verify file extensions

Supported formats: `.epub`, `.pdf`, `.cbz`, `.cbr`, `.zip`
### Files not processing

**Symptom:** Files are detected but stuck in a "pending" state.

Access Bull Board at `http://localhost:3000/admin/queues/` and look for:

- Failed jobs (red)
- Stalled jobs (yellow)
- Active jobs (processing)

The file event worker auto-scales based on CPU count:

```typescript
// packages/api/src/infrastructure/workers/file.event.worker.ts:86-87
const numCPUs = os.cpus().length;
const CONCURRENCY = Number(process.env.WORKER_CONCURRENCY) || Math.max(2, numCPUs * 2);
```
Override it with an environment variable:

```yaml
server:
  environment:
    WORKER_CONCURRENCY: 8
```

Inspect failed jobs in the logs:

```bash
docker logs nanahoshi-v2-server 2>&1 | grep -A 5 "Failed job"
```
### Duplicate books after rescan

**Symptom:** The same book appears multiple times.

**Cause:** Book identity is determined by `filename` + `fileHash`:

```typescript
// packages/api/src/infrastructure/workers/file.event.worker.ts:114
uuid: generateDeterministicUUID(filename, fileHash),
```

If a file was moved or modified, it is treated as a new book.

**Solution:**

1. Check the `scanned_file` table for duplicates
2. Delete the duplicate books from the UI
3. Ensure files are not being moved during scanning
## Worker issues

### Workers not processing jobs

**Symptom:** Jobs pile up in Bull Board but never complete.

#### Check Redis connection

```bash
docker exec nanahoshi-v2-redis redis-cli -a YOUR_PASSWORD ping
```

#### Verify workers are registered

Workers are imported for their side effects on startup:

```typescript
// apps/server/src/index.ts:261-263
import "@nanahoshi-v2/api/infrastructure/workers/file.event.worker";
import "@nanahoshi-v2/api/infrastructure/workers/book.index.worker";
import "@nanahoshi-v2/api/infrastructure/workers/cover-color.worker";
```

Check the logs for: `[Worker] Starting with concurrency=X (CPUs=Y)`

#### Restart the server

```bash
docker restart nanahoshi-v2-server
```
### Worker memory issues

**Symptom:** Workers crash or the server runs out of memory.

```bash
# Check container memory usage
docker stats nanahoshi-v2-server
```

Solutions:

Lower worker concurrency:

```yaml
server:
  environment:
    WORKER_CONCURRENCY: 2 # Lower value
```

Limit container memory:

```yaml
server:
  deploy:
    resources:
      limits:
        memory: 2G
```

Reduce the Elasticsearch heap:

```yaml
elasticsearch:
  environment:
    - ES_JAVA_OPTS=-Xms256m -Xmx512m # Lower values
```
## Authentication issues

### Cannot login

**Symptom:** Login fails with "Invalid credentials" or with no error message.

#### Check BETTER_AUTH_SECRET

Ensure it is set and has not changed since the user was created:

```bash
docker exec nanahoshi-v2-server printenv BETTER_AUTH_SECRET
```

#### Verify CORS_ORIGIN

It must match the URL you access the web UI from:

```bash
# If accessing via http://localhost:3001
CORS_ORIGIN=http://localhost:3001

# If accessing via https://nanahoshi.example.com
CORS_ORIGIN=https://nanahoshi.example.com
```
#### Check sessions in the database

```bash
docker exec -it nanahoshi-v2-postgres psql -U postgres nanahoshi-v2 -c "SELECT * FROM session LIMIT 5;"
```
### Email verification not working

**Symptom:** Verification emails are not received.

Required variables in `.env`:

```bash
SMTP_HOST=smtp.gmail.com
SMTP_PORT=465
SMTP_SECURE=true
SMTP_USER=[email protected]
SMTP_PASS=your-app-password # Not your regular password!
```

Test the SMTP connection from inside the container:

```bash
docker exec -it nanahoshi-v2-server bun -e "import nodemailer from 'nodemailer'; const t = nodemailer.createTransport({host: process.env.SMTP_HOST, port: Number(process.env.SMTP_PORT), secure: process.env.SMTP_SECURE === 'true', auth: {user: process.env.SMTP_USER, pass: process.env.SMTP_PASS}}); await t.verify(); console.log('SMTP OK');"
```

> **Note:** Verification emails may be marked as spam. Add SPF/DKIM records if self-hosting.
## Performance issues

### Slow search queries

#### Check Elasticsearch response times

```bash
curl -X GET "http://localhost:9200/nanahoshi_books/_search?pretty" -H 'Content-Type: application/json' -d '
{
  "query": {"match_all": {}},
  "size": 20
}'
```

Look at the `took` field (milliseconds) in the response.

#### Increase Elasticsearch heap

```yaml
elasticsearch:
  environment:
    - ES_JAVA_OPTS=-Xms1g -Xmx2g
```

#### Force-merge the index

```bash
curl -X POST "http://localhost:9200/nanahoshi_books/_forcemerge?max_num_segments=1"
```
### Slow library scans

**Cause:** Worker concurrency is too low, or disk I/O is the bottleneck.

Increase worker concurrency:

```bash
docker compose down
```

Add to `docker-compose.yml`:

```yaml
server:
  environment:
    WORKER_CONCURRENCY: 16 # Adjust based on CPU cores
```

```bash
docker compose up -d
```
## Getting help

If you're still experiencing issues:

- **GitHub Issues**: search existing issues, or create a new one with logs and configuration
- **Community Discord**: join the community for real-time help