System Requirements
- Operating System: Linux, macOS, or Windows with WSL2
- Python: 3.8 or higher
- Node.js: 16 or higher
- RAM: Minimum 8GB (16GB recommended for Ollama models)
- Disk Space: At least 10GB free (for models and databases)
Core Dependencies
EduMate requires five main services to function:
- PostgreSQL - User data and assessment storage
- Qdrant - Vector database for semantic search
- Ollama - Local LLM embeddings
- Redis - Background job queue
- Google Gemini - AI question generation
Install PostgreSQL
PostgreSQL stores user accounts, authentication data, and generated assessments. Create the database and user, then verify the connection.
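A minimal sketch of these steps, assuming a psql client and an edu_mate database/role (names and password are placeholders, not EduMate defaults):

```bash
# Create a dedicated role and database (placeholder names; adjust to your setup)
sudo -u postgres psql -c "CREATE USER edu_mate WITH PASSWORD 'change-me';"
sudo -u postgres psql -c "CREATE DATABASE edu_mate OWNER edu_mate;"

# Verify the connection with the new credentials
psql -h localhost -U edu_mate -d edu_mate -c "SELECT version();"
```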
EduMate uses SQLAlchemy with PostgreSQL-specific features like JSONB columns for storing assessment data efficiently.
Install Qdrant Vector Database
Qdrant stores document embeddings for semantic search and retrieval. The easiest way to run Qdrant is with Docker; alternatively, install Qdrant natively. Verify that Qdrant is running before continuing.
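A sketch of the Docker route and the health check, assuming Qdrant's default HTTP port 6333:

```bash
# Run Qdrant in Docker with a persistent storage volume
docker run -d --name qdrant -p 6333:6333 -p 6334:6334 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" qdrant/qdrant

# Verify Qdrant is running (returns name and version as JSON)
curl http://localhost:6333
```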
EduMate creates collections dynamically with names like
edu_mate_<uuid> for each uploaded document. The embeddings use dimension 896 (from qwen3-embedding:0.6b).
Install Ollama and Embedding Model
Ollama provides local embeddings for document chunking and semantic search. Install Ollama, start the Ollama service, pull the required embedding model, and verify the installation.
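A sketch of these steps, assuming the official install script (Linux/WSL2) and a foreground service:

```bash
# Install Ollama (Linux/WSL2 script; macOS users can install the app from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama service (or use systemd: sudo systemctl start ollama)
ollama serve

# Pull the required embedding model
ollama pull qwen3-embedding:0.6b

# Verify the model is available
ollama list
```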
Install Redis
Redis powers the RQ (Redis Queue) system for background job processing. After installing Redis, test the connection.
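A quick check, assuming Redis listens on its default port 6379 (Debian/Ubuntu install shown; use brew install redis on macOS):

```bash
# Install and start Redis
sudo apt-get install -y redis-server

# A healthy server answers PONG
redis-cli ping
```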
The Redis connection is configured in backend/client/rq_client.py.
Get Google Gemini API Key
EduMate uses Google’s Gemini 2.5 Flash model for generating MCQ questions.
- Go to Google AI Studio
- Sign in with your Google account
- Click “Create API Key”
- Copy the generated API key
Add the API key to a .env file in your project root.
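A hypothetical entry; the exact variable name EduMate reads is not documented here, so confirm it against backend/queue/chat.py:

```bash
# Illustrative variable name; check backend/queue/chat.py for the key EduMate expects
GEMINI_API_KEY=your-api-key-here
```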
The Gemini integration is configured in backend/queue/chat.py to use the OpenAI-compatible endpoint.
Install Python Dependencies
Clone the repository, create a virtual environment (recommended), and install the backend dependencies from requirements.txt.
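A sketch of the commands, assuming a placeholder repository URL and a standard venv workflow:

```bash
# Clone the repository (URL is a placeholder; use the real one)
git clone https://github.com/<your-org>/edumate.git
cd edumate

# Create and activate a virtual environment (recommended)
python3 -m venv .venv
source .venv/bin/activate

# Install backend dependencies (adjust the path if requirements.txt lives in backend/)
pip install -r requirements.txt
```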
Key dependencies installed:
- FastAPI 0.124.4 - Modern web framework
- SQLAlchemy - Database ORM
- LangChain 1.2.0 - LLM orchestration
- langchain-qdrant 1.1.0 - Qdrant vector store integration
- langchain-ollama 1.0.1 - Ollama embeddings
- Qdrant-client 1.16.2 - Vector database client
- Redis 7.1.0 + RQ 2.6.1 - Job queue system
- PyPDF 6.4.2 - PDF parsing
- PyJWT 2.8.0 - JWT authentication
- Bcrypt 5.0.0 - Password hashing
- Psycopg2-binary 2.9.11 - PostgreSQL adapter
Install Frontend Dependencies
Install Node.js dependencies for the React frontend, then build for production or run in development mode.
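A sketch, assuming the frontend lives in frontend/ and uses the standard Vite scripts (npm run dev is an assumption):

```bash
cd frontend
npm install

# Production build, served by FastAPI from the / route
npm run build

# Or run the Vite dev server instead
npm run dev
```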
Key packages from package.json:
- React 19.2.0 - UI library
- Vite 7.3.1 - Build tool and dev server
- React Router DOM 7.13.0 - Client-side routing
- Axios 1.13.5 - HTTP client for API calls
- Framer Motion 12.34.1 - Animations
- Lucide React 0.574.0 - Icons
- Tailwind CSS 3.4.17 - Utility-first CSS
- jsPDF 4.1.0 - PDF export
- docx 9.5.3 - DOCX export
Configure Database Connection
Update the database URL in backend/database.py if you used different credentials. The database tables will be created automatically on first run (handled in backend/server.py). This creates two tables (a verification sketch follows the list):
- users - Authentication and user profiles
- assessments - Saved MCQ assessments with JSONB content
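To verify, a hedged check assuming the edu_mate database and role names sketched in the PostgreSQL step above:

```bash
# List the tables; expect users and assessments after the first run
psql -h localhost -U edu_mate -d edu_mate -c "\dt"
```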
Start All Services
You’ll need three terminal windows (a command sketch follows the list):
- Terminal 1 - Redis Worker
- Terminal 2 - Backend Server, which starts on http://localhost:8000 (configured in backend/main.py)
- Terminal 3 - Frontend (Development)
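A sketch of the three commands; the worker invocation and server entry point are assumptions, so check backend/client/rq_client.py and backend/main.py for the exact forms:

```bash
# Terminal 1 - Redis worker (run from backend/ so queued task modules import; queue name assumed to be the RQ default)
cd backend && rq worker

# Terminal 2 - Backend server on http://localhost:8000 (entry point assumed)
python backend/main.py

# Terminal 3 - Frontend dev server (assumes the standard Vite dev script)
cd frontend && npm run dev
```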
If you built the frontend (npm run build), the FastAPI server automatically serves it from the / route. No need to run a separate frontend dev server.
Verify Installation
Test that everything is working.
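A few hedged checks, assuming the default ports used above (backend on 8000, Qdrant on 6333, Redis on 6379):

```bash
# Backend (and built frontend) should respond at the root route
curl -I http://localhost:8000/

# Qdrant should return its version info as JSON
curl http://localhost:6333

# Redis should answer PONG
redis-cli ping
```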
Configuration Reference
Environment Variables
Create a .env file in the project root.
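A sketch of the file; the Gemini key is the only variable this guide implies, so treat anything else as an assumption and cross-check the configuration files listed below:

```bash
# .env - the variable name is illustrative; confirm the exact key read by backend/queue/chat.py
GEMINI_API_KEY=your-gemini-api-key

# Per this guide, PostgreSQL and Redis settings live in backend/database.py and
# backend/client/rq_client.py rather than in .env.
```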
Key Configuration Files
| File | Purpose |
|---|---|
| backend/database.py | PostgreSQL connection string |
| backend/server.py | JWT secret, token expiration, API routes |
| backend/client/rq_client.py | Redis connection for job queue |
| backend/queue/doc_chunking.py | Embedding model, chunk size, Qdrant config |
| backend/queue/chat.py | Gemini model, prompt templates |
| frontend/vite.config.js | Frontend build configuration |
Document Processing Configuration
In backend/queue/doc_chunking.py you can adjust the embedding model (qwen3-embedding:0.6b), the chunk size, and the Qdrant configuration used for document processing.
Common Issues
Production Deployment
For production environments, consider:
- Use a production ASGI server such as Uvicorn with multiple workers (or Gunicorn with Uvicorn workers)
- Set a strong SECRET_KEY in backend/server.py (the current default is for development only)
- Enable HTTPS with a reverse proxy (Nginx/Caddy)
- Use managed PostgreSQL and Redis services
- Deploy Qdrant in cluster mode for high availability
- Set up proper logging and monitoring
- Use environment variables instead of hardcoded credentials
- Implement rate limiting on API endpoints
- Set up automatic backups for PostgreSQL
Quickstart Guide
Ready to generate your first assessment? Follow the quickstart guide.