
Overview

This guide walks you through setting up the InterviewGuide platform on your local machine for development. The platform requires PostgreSQL with the pgvector extension, Redis, and S3-compatible object storage.

Prerequisites

Software     Version   Purpose
JDK          21+       Backend runtime (supports Virtual Threads)
Node.js      18+       Frontend development
PostgreSQL   14+       Primary database
pgvector     Latest    Vector similarity search for RAG
Redis        6+        Caching and message queue (Streams)
Gradle       8.14+     Backend build tool (wrapper included)
pnpm         Latest    Frontend package manager
  • Alibaba Cloud DashScope API Key: Required for AI features (resume analysis, mock interviews, RAG Q&A)
  • S3-Compatible Storage: MinIO (recommended for local dev) or RustFS
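
The minimum versions above can be checked quickly from the shell. The `meets_min` helper below is illustrative (it is not part of the repository) and only compares major versions:

```shell
# Illustrative helper (not part of the repo): compare a tool's reported
# major version against the minimums in the table above.
meets_min() {
  detected=$1   # e.g. "21.0.4" or "18.19.0"
  required=$2   # minimum major version, e.g. 21
  [ "${detected%%.*}" -ge "$required" ]
}

# Guard each tool so a missing binary does not abort the script.
if command -v node >/dev/null 2>&1; then
  meets_min "$(node -v | tr -d v)" 18 && echo "Node.js OK" || echo "Node.js too old"
fi
if command -v java >/dev/null 2>&1; then
  # `java -version` writes to stderr, e.g.: openjdk version "21.0.4"
  jv=$(java -version 2>&1 | sed -n 's/.*"\([0-9][0-9]*\).*/\1/p' | head -n1)
  meets_min "$jv" 21 && echo "JDK OK" || echo "JDK too old"
fi
```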
Step 1: Install PostgreSQL with pgvector

Install PostgreSQL

brew install postgresql@16
brew services start postgresql@16

Install pgvector Extension

brew install pgvector

Create Database and Enable Extension

-- Connect to PostgreSQL
psql -U postgres

-- Create database
CREATE DATABASE interview_guide;

-- Connect to the new database
\c interview_guide

-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Verify installation
SELECT * FROM pg_extension WHERE extname = 'vector';
Spring AI will automatically create the vector_store table when spring.ai.vectorstore.pgvector.initialize-schema is set to true.
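
After the first backend startup you can confirm the table was created from psql. The column list noted below is the Spring AI PgVectorStore default schema; your version may differ slightly:

```sql
-- Run after first backend startup to confirm auto-creation
\c interview_guide
\d vector_store
-- Default Spring AI columns: id (uuid), content (text), metadata (json), embedding (vector)
```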
Step 2: Install Redis

Redis is used for session caching and asynchronous processing via Redis Streams.
brew install redis
brew services start redis

# Test connection
redis-cli ping
# Expected: PONG
Default Redis configuration is suitable for development. For production, enable persistence (RDB/AOF) and set a strong password.
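
For reference, the production hardening mentioned above maps to a few redis.conf directives. The values here are illustrative, not project requirements:

```conf
# redis.conf: example production hardening (illustrative values)
requirepass change-me-strong-password
appendonly yes          # enable AOF persistence
appendfsync everysec    # fsync the AOF once per second
save 900 1              # RDB snapshot after 900s if >= 1 key changed
```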
Step 3: Set Up Object Storage

Choose MinIO (recommended) or RustFS for local S3-compatible storage.

Option A: MinIO

brew install minio/stable/minio

# Start MinIO server
minio server ~/minio-data --console-address ":9001"

Create Storage Bucket

# Install MinIO Client
brew install minio/stable/mc  # macOS
# Or download from https://min.io/docs/minio/linux/reference/minio-mc.html

# Configure client
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create bucket
mc mb local/interview-guide

# Set public read access (for serving uploaded files)
mc anonymous set public local/interview-guide
Access MinIO Console at http://localhost:9001

Option B: RustFS

# Install RustFS
cargo install rustfs

# Start RustFS server
rustfs --endpoint http://localhost:9000 \
       --access-key wr45VXJZhCxc6FAWz0YR \
       --secret-key GtKxV57WJkpw4CvASPBzTy2DYElLnRqh8dIXQa0m
Step 4: Configure Backend Application

Clone the Repository

git clone https://github.com/Snailclimb/interview-guide.git
cd interview-guide

Set Environment Variables

# Required: Alibaba Cloud DashScope API Key
export AI_BAILIAN_API_KEY=your_api_key_here

# Optional: Override defaults
export AI_MODEL=qwen-plus  # or qwen-max, qwen-long
export POSTGRES_PASSWORD=your_password
export REDIS_HOST=localhost
export REDIS_PORT=6379
Get your DashScope API key from https://bailian.console.aliyun.com/
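
A quick pre-flight check catches a missing key before the backend fails at startup. The `require_env` function below is a hypothetical helper, not part of the project:

```shell
# Hypothetical pre-flight helper (not part of the repo): fail fast if
# required environment variables are unset or empty.
require_env() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "missing: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

if require_env AI_BAILIAN_API_KEY; then
  echo "environment ok"
else
  echo "set the variables above before starting the backend" >&2
fi
```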

Edit application.yml

Update app/src/main/resources/application.yml with your local settings:
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/interview_guide
    username: postgres
    password: your_password
  
  jpa:
    hibernate:
      ddl-auto: create  # Use 'create' for first run, then change to 'update'
  
  redis:
    redisson:
      config: |
        singleServerConfig:
          address: "redis://localhost:6379"
          database: 0
  
  ai:
    openai:
      api-key: ${AI_BAILIAN_API_KEY}
      chat:
        options:
          model: ${AI_MODEL:qwen-plus}
    vectorstore:
      pgvector:
        initialize-schema: true  # Auto-create vector_store table

app:
  storage:
    endpoint: http://localhost:9000
    access-key: minioadmin  # Or your RustFS credentials
    secret-key: minioadmin
    bucket: interview-guide
Critical: After first successful startup, change ddl-auto from create to update to prevent data loss on restart.
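
Instead of editing application.yml, the same properties can be supplied through Spring Boot's relaxed binding of environment variables (uppercase names, dots replaced by underscores). The values below are placeholders:

```shell
# Spring Boot relaxed binding: SPRING_DATASOURCE_PASSWORD maps to
# spring.datasource.password, and so on. Placeholder values shown.
export SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/interview_guide
export SPRING_DATASOURCE_PASSWORD=your_password
export SPRING_AI_VECTORSTORE_PGVECTOR_INITIALIZE_SCHEMA=true
```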
Step 5: Start Backend Server

# From project root
./gradlew :app:bootRun

# Or build JAR and run
./gradlew :app:bootJar
java -jar app/build/libs/app-*.jar
Backend will start at http://localhost:8080

Verify Startup

Check the logs for successful initialization:
✅ Started App in X.XXX seconds
✅ Tomcat started on port 8080
✅ Vector store initialized
✅ Redis Stream consumers started
Step 6: Set Up and Start Frontend

cd frontend

# Install dependencies
pnpm install

# Start development server
pnpm dev
Frontend will start at http://localhost:5173

Build for Production

pnpm build

# Preview production build
pnpm preview

Troubleshooting

PostgreSQL Connection Failed

Symptom: Connection refused or FATAL: password authentication failed
Solutions:
  1. Verify PostgreSQL is running:
    psql -U postgres -c "SELECT version();"
    
  2. Check pg_hba.conf for authentication settings:
    # Find config location
    psql -U postgres -c "SHOW hba_file;"
    
    # Allow local connections
    # Add this line:
    host    all             all             127.0.0.1/32            md5
    
  3. Restart PostgreSQL after config changes:
    sudo systemctl restart postgresql
    
pgvector Extension Not Found

Symptom: ERROR: could not open extension control file or extension "vector" does not exist
Solutions:
  1. Verify pgvector installation:
    SELECT * FROM pg_available_extensions WHERE name = 'vector';
    
  2. If not available, reinstall pgvector and restart PostgreSQL
  3. pgvector does not need to be added to shared_preload_libraries; if the extension is still unavailable, confirm it was built and installed for the PostgreSQL version you are actually running
    
Redis Connection Failed

Symptom: Unable to connect to Redis or Connection refused
Solutions:
  1. Check Redis is running:
    redis-cli ping
    
  2. Verify the Redis bind settings match how you connect (the defaults below only allow local connections):
    # In redis.conf (defaults shown)
    bind 127.0.0.1 ::1
    protected-mode yes
    # To accept remote clients, bind the server's address and set requirepass
    
  3. Check firewall settings if connecting from different host
Resume Analysis Stuck

Symptom: Resume status remains PROCESSING indefinitely
Root Causes:
  1. Redis Stream Consumer not running: Check backend logs for consumer initialization
  2. AI API key invalid: Verify AI_BAILIAN_API_KEY is correct
  3. Network issues: Check connectivity to dashscope.aliyuncs.com
Debug Steps:
# Check Redis Streams
redis-cli
> XINFO STREAM resume:analysis:stream
> XINFO GROUPS resume:analysis:stream
> XPENDING resume:analysis:stream resume-analysis-group
Look for failed messages in backend logs:
ERROR: Resume analysis failed for ID=xxx, attempt 1/3
File Upload Errors

Symptom: 500 Internal Server Error when uploading resumes or documents
Solutions:
  1. Verify object storage is accessible:
    curl http://localhost:9000/minio/health/live
    
  2. Check bucket exists and is accessible:
    mc ls local/interview-guide
    
  3. Verify storage credentials in application.yml match MinIO/RustFS config
  4. Check file size is under limit (50MB for knowledge base, 10MB for resumes)
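The size limits quoted in the previous item can also be checked client-side before uploading. The `check_upload_size` function is an illustrative helper, not part of the project:

```shell
# Illustrative pre-upload check against the limits mentioned above
# (50 MB for knowledge-base documents, 10 MB for resumes).
check_upload_size() {
  file=$1
  limit_mb=$2
  size=$(wc -c < "$file")
  if [ "$size" -gt $((limit_mb * 1024 * 1024)) ]; then
    echo "too large: $file is $size bytes (limit ${limit_mb} MB)" >&2
    return 1
  fi
  echo "ok: $file ($size bytes)"
}
```

Usage: `check_upload_size resume.pdf 10` before a resume upload.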
Backend Build Failures

Symptom: Build errors or dependency resolution failures
Solutions:
  1. Clean build cache:
    ./gradlew clean
    rm -rf ~/.gradle/caches/
    
  2. Verify Java version:
    java -version  # Should be 21+
    ./gradlew --version
    
  3. Try using Aliyun mirror (already configured in build.gradle)
  4. Build without tests:
    ./gradlew :app:bootJar -x test
    
Frontend Build Failures

Symptom: pnpm build fails or dependencies cannot be resolved
Solutions:
  1. Clear pnpm cache:
    pnpm store prune
    rm -rf node_modules
    pnpm install
    
  2. Verify Node.js version:
    node -v  # Should be 18+
    
  3. Check for port conflicts:
    lsof -i :5173  # Kill conflicting process
    

Development Workflow

Hot Reload

Backend

Spring Boot DevTools enables automatic restart on code changes:
./gradlew :app:bootRun
# Edit Java files - server restarts automatically

Frontend

Vite provides instant HMR (Hot Module Replacement):
pnpm dev
# Edit React components - updates instantly

Database Schema Updates

# Development: Auto-update schema
jpa:
  hibernate:
    ddl-auto: update

# Production: Manual migrations only
jpa:
  hibernate:
    ddl-auto: validate  # or none

Monitoring Redis Streams

# Connect to Redis
redis-cli

# Monitor resume analysis stream
> XINFO STREAM resume:analysis:stream
> XLEN resume:analysis:stream

# Monitor knowledge base vectorization
> XINFO STREAM knowledgebase:vectorization:stream

# View pending messages
> XPENDING resume:analysis:stream resume-analysis-group

Next Steps

  • Docker Deployment: One-command setup with Docker Compose
  • Production Setup: Production-ready deployment checklist
  • Configuration Guide: Detailed environment variable reference
  • Architecture Overview: Understand the system design
