System Requirements

Before installing, ensure your system meets these requirements:

  • Python Version: Python 3.10 or 3.11 (recommended)
  • RAM: minimum 8GB (16GB recommended for large candidate pools)
  • Storage: ~2GB free space for dependencies and embedding models
  • Operating System: Windows, macOS, or Linux
Python 3.12+ is not yet officially supported by all dependencies. Stick with Python 3.10 or 3.11 for best compatibility.

Installation Methods

Install everything from the provided requirements.txt with pip.

Dependencies Breakdown

Here’s what gets installed from requirements.txt:
langchain
langchain-community
langchain-google-genai
langchain-huggingface
sentence-transformers
faiss-cpu
pypdf
reportlab
pandas==2.2.2
matplotlib
plotly

Core Dependencies

Package                  Version  Purpose
langchain                latest   Framework for building LLM applications
langchain-google-genai   latest   Gemini 1.5 Flash integration
langchain-huggingface    latest   HuggingFace embeddings wrapper
sentence-transformers    latest   Pre-trained embedding models
faiss-cpu                latest   Vector similarity search engine
pypdf                    latest   PDF document parsing
pandas                   2.2.2    Structured data manipulation
GPU Acceleration: If you have a CUDA-compatible GPU, replace faiss-cpu with faiss-gpu for faster vector search on large candidate databases.

API Key Setup

The system requires a Google API key to use Gemini 1.5 Flash. Follow these steps to obtain and configure it:

Step 1: Get Your API Key

1. Visit Google AI Studio: navigate to Google AI Studio.
2. Create API Key: click “Create API Key” and select a Google Cloud project (or create a new one).
3. Copy the Key: copy the generated API key; you’ll use it in the next step.
Google AI Studio offers a generous free tier with 60 queries per minute for Gemini 1.5 Flash. Perfect for testing and small-scale deployments.
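To stay within a per-minute quota, you can space out requests on the client side. A minimal sketch using only the standard library (the `RateLimiter` class is a hypothetical helper, not part of the project or the Gemini SDK):

```python
import time

class RateLimiter:
    """Allow at most max_calls per rolling period (seconds)."""

    def __init__(self, max_calls=60, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = []  # monotonic timestamps of recent calls

    def wait(self):
        now = time.monotonic()
        # Drop timestamps older than one period
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call ages out of the window
            time.sleep(self.period - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Calling `limiter.wait()` before each `llm.invoke(...)` keeps a batch job under the quota without tracking requests manually.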

Step 2: Configure Environment Variable

Set the GOOGLE_API_KEY environment variable in your system:
# Temporary (current session only)
export GOOGLE_API_KEY="your_api_key_here"

# Permanent (add to ~/.bashrc or ~/.zshrc)
echo 'export GOOGLE_API_KEY="your_api_key_here"' >> ~/.bashrc
source ~/.bashrc
Security Best Practices:
  • Never commit .env files to version control (add to .gitignore)
  • Use different API keys for development and production
  • Rotate keys periodically
  • Monitor API usage in Google Cloud Console
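If you keep the key in a local .env file instead of your shell profile, it can be loaded at startup. A stdlib-only sketch (`load_env_file` is an illustrative helper; in practice the python-dotenv package handles quoting and edge cases robustly):

```python
import os
from pathlib import Path

def load_env_file(path=".env"):
    """Minimal .env loader: KEY=value lines, '#' starts a comment."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Don't overwrite variables already set in the environment
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

if Path(".env").exists():
    load_env_file()  # after this, os.getenv("GOOGLE_API_KEY") works as usual
```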

Verification Steps

After installation, verify everything is working correctly:

1. Check Python Version

python --version
# Expected: Python 3.10.x or 3.11.x
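The same check can be scripted for CI or setup scripts. A small sketch (`python_supported` is a hypothetical helper reflecting the 3.10/3.11 recommendation above):

```python
import sys

def python_supported(version=None):
    # The docs recommend 3.10 or 3.11; 3.12+ may break some dependencies.
    major, minor = (version or sys.version_info)[:2]
    return (major, minor) in {(3, 10), (3, 11)}

print(f"Running Python {sys.version_info.major}.{sys.version_info.minor}; "
      f"supported: {python_supported()}")
```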

2. Verify Dependencies

pip list | grep -E "langchain|faiss|sentence-transformers"
Expected output (exact versions will vary, since most packages are unpinned):
faiss-cpu                 1.7.4
langchain                 0.1.0
langchain-community       0.0.13
langchain-google-genai    0.0.5
langchain-huggingface     0.0.1
sentence-transformers     2.2.2
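The same check can be done from Python without shelling out to pip, using the standard library's importlib.metadata (a sketch; `check_packages` is an illustrative helper, and the package names match requirements.txt):

```python
from importlib.metadata import version, PackageNotFoundError

def check_packages(names):
    """Return {package: installed version, or None if missing}."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found

for pkg, ver in check_packages(
    ["langchain", "faiss-cpu", "sentence-transformers"]
).items():
    print(f"{pkg}: {ver or 'MISSING'}")
```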

3. Test API Connection

Create a test script, test_setup.py:
import os
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_huggingface import HuggingFaceEmbeddings

# Test 1: API Key
print("[1/3] Checking API key...")
if not os.getenv("GOOGLE_API_KEY"):
    raise ValueError("❌ GOOGLE_API_KEY not found in environment variables")
print("✓ API key configured")

# Test 2: LLM Connection
print("\n[2/3] Testing Gemini connection...")
try:
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0
    )
    response = llm.invoke("Hello, respond with just 'OK' if you can read this")
    print(f"✓ Gemini response: {response.content}")
except Exception as e:
    print(f"❌ Gemini connection failed: {e}")
    raise

# Test 3: Embeddings
print("\n[3/3] Testing embeddings model...")
try:
    embeddings = HuggingFaceEmbeddings()
    test_vec = embeddings.embed_query("test sentence")
    print(f"✓ Embeddings working (dimension: {len(test_vec)})")
except Exception as e:
    print(f"❌ Embeddings failed: {e}")
    raise

print("\n✅ All systems operational!")
Run the test:
python test_setup.py
Expected Output:
[1/3] Checking API key...
✓ API key configured

[2/3] Testing Gemini connection...
✓ Gemini response: OK

[3/3] Testing embeddings model...
✓ Embeddings working (dimension: 768)

✅ All systems operational!

4. Verify FAISS Installation

import faiss
import numpy as np

# Create a simple vector index
dimension = 128
index = faiss.IndexFlatL2(dimension)

# Add some random vectors
vectors = np.random.random((10, dimension)).astype('float32')
index.add(vectors)

print(f"✓ FAISS working - indexed {index.ntotal} vectors")
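To see what IndexFlatL2 actually computes, here is the same exhaustive squared-L2 search in plain Python (illustrative only; FAISS performs this in optimized C++ and scales to millions of vectors):

```python
import random

def l2_search(index_vectors, query, k=3):
    # Exhaustive squared-L2 search: the same result faiss.IndexFlatL2 returns.
    scored = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), i)
        for i, vec in enumerate(index_vectors)
    )
    return scored[:k]

random.seed(0)
vectors = [[random.random() for _ in range(8)] for _ in range(10)]
query = [random.random() for _ in range(8)]
for dist, idx in l2_search(vectors, query):
    print(f"vector {idx}: squared L2 distance {dist:.4f}")
```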

Troubleshooting

FAISS import or installation errors

Solution: Install the CPU version explicitly:
pip uninstall faiss faiss-cpu faiss-gpu
pip install faiss-cpu
If you have a GPU:
pip install faiss-gpu

SSL certificate errors when downloading models

Solution: Update certificates or disable SSL verification (not recommended for production):
pip install --upgrade certifi
Or set environment variable:
export CURL_CA_BUNDLE=""

GOOGLE_API_KEY not found

Solution: Ensure the key is properly set and restart your terminal:
import os
print(os.getenv("GOOGLE_API_KEY"))  # Should print your key
If it returns None, the variable isn’t set. Re-run the export command and restart your Python session.

Out-of-memory errors during embedding

Solution: Reduce batch size or use a smaller embedding model:
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # Smaller model
)

pandas version conflicts

Solution: The project pins pandas==2.2.2 for stability. If you encounter issues:
pip install --upgrade pandas==2.2.2

Optional: GPU Acceleration

For production deployments with large candidate databases (1000+ resumes), GPU acceleration significantly improves performance:
1. Install the CUDA Toolkit: download it from NVIDIA CUDA Downloads.
2. Install the GPU version of FAISS:
pip uninstall faiss-cpu
pip install faiss-gpu
3. Verify GPU detection:
import faiss
print(f"GPU available: {faiss.get_num_gpus()}")
GPU acceleration can provide a 10-100x speedup for vector search on large datasets. If you run the project in Google Colab, the free tier already offers GPU runtimes, so most users will not need this local setup.

Next Steps

You’re all set! Here’s what to do next:

  • Run the Quickstart: try the system with sample data in 5 minutes.
  • Architecture Guide: understand how components work together.
  • Configuration: customize models, prompts, and retrieval settings.
  • API Reference: explore available functions and classes.

Installation Complete! You now have a working local environment for the RAG Recruitment Assistant.
