```bash
# Install all dependencies, including dev dependencies
uv sync

# This creates a virtual environment and installs:
# - Core Memori dependencies
# - Development tools (pytest, ruff, etc.)
# - LLM client libraries (OpenAI, Anthropic, Google)
# - Database drivers (PostgreSQL, MySQL, MongoDB, etc.)
```
**4. Install pre-commit hooks**
```bash
uv run pre-commit install

# Test the hooks
uv run pre-commit run --all-files
```
Pre-commit hooks automatically:

- Format code with Ruff
- Check linting
- Validate YAML/JSON
- Check for secrets
**5. Run tests**
```bash
# Run unit tests (fast, no external dependencies)
uv run pytest

# Run with coverage
uv run pytest --cov=memori

# View HTML coverage report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```
Success! You’re ready to start contributing. The unit tests should pass without any external dependencies.
For integration testing with real databases, use our Docker environment:
**1. Copy environment file**
```bash
cp .env.example .env
```
Edit .env and add your API keys (optional for unit tests):
```bash
# Required for integration tests
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...

# Optional: For Memori Cloud features
MEMORI_API_KEY=...
```
```bash
# Start environment
make dev-up

# Stop environment
make dev-down

# Enter development shell
make dev-shell

# Run tests
make test

# Format code
make format

# Check linting
make lint

# Run security scans
make security

# Clean up everything
make clean

# Complete teardown (containers, volumes, cache)
make dev-clean
```
View full Makefile reference
```bash
make help            # Show all available commands
make dev-up          # Start development environment
make dev-down        # Stop development environment
make dev-shell       # Open shell in dev container
make dev-build       # Rebuild dev container
make dev-clean       # Complete teardown
make test            # Run tests in container
make lint            # Run linting
make format          # Format code
make security        # Run security scans
make init-postgres   # Initialize PostgreSQL schema
make init-mysql      # Initialize MySQL schema
make init-mongodb    # Initialize MongoDB schema
make init-sqlite     # Initialize SQLite schema
make init-oceanbase  # Initialize OceanBase schema
make init-oracle     # Initialize Oracle schema
make clean           # Clean containers, volumes, cache
```
Fast tests that use mocks and don’t require external services:
```bash
# Run all unit tests
uv run pytest

# Run specific test file
uv run pytest tests/memory/test_recall.py

# Run specific test
uv run pytest tests/memory/test_recall.py::test_similarity_search

# Run with verbose output
uv run pytest -v

# Run with coverage
uv run pytest --cov=memori --cov-report=html
```
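To illustrate the mocked style these unit tests rely on, here is a minimal sketch; the `summarize` function and the client interface are hypothetical stand-ins, not Memori's actual API:

```python
from unittest.mock import MagicMock

def summarize(text: str, client) -> str:
    """Toy stand-in for a component that calls an LLM client."""
    response = client.complete(prompt=f"Summarize: {text}")
    return response.strip()

def test_summarize_offline():
    # The mock replaces the real LLM client, so no network call happens
    client = MagicMock()
    client.complete.return_value = "  a short summary  "

    assert summarize("long document text", client) == "a short summary"
    client.complete.assert_called_once()
```

Because the client is mocked, tests like this run in milliseconds with no API keys or network access.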
Tests that require real databases and LLM API keys:
```bash
# Set test mode and API keys
export MEMORI_TEST_MODE=1
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...

# Initialize database schema
make init-postgres  # or your preferred database

# Run all integration tests
uv run pytest tests/integration/ -v -m integration

# Run specific provider tests
uv run pytest tests/integration/providers/test_openai.py

# Run a specific integration test file
MEMORI_TEST_MODE=1 uv run python tests/llm/clients/oss/openai/sync.py
```
Integration tests make real API calls and will consume API credits. Use test API keys if available.
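One common way to keep the suite green for contributors without credentials is to skip live tests when the relevant key is missing. This is a hedged sketch (the test name and body are hypothetical, not from Memori's test suite):

```python
import os
import pytest

# Skip marker: the test only runs when OPENAI_API_KEY is set
requires_openai = pytest.mark.skipif(
    not os.getenv("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set; skipping live API test",
)

@requires_openai
def test_openai_roundtrip():
    # A real integration test would call the provider here
    assert os.getenv("OPENAI_API_KEY", "").startswith("sk-")
```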
```bash
# Format code (in-place)
uv run ruff format .

# Check linting
uv run ruff check .

# Auto-fix linting issues
uv run ruff check --fix .

# Check specific file
uv run ruff check memori/llm/clients/openai.py
```
Pro tip: Install the Ruff extension for your IDE (VS Code, PyCharm, etc.) for real-time linting and formatting.
```bash
# Bandit - security issues scanner
uv run bandit -r memori -ll -ii

# pip-audit - check for vulnerable dependencies
uv run pip-audit --require-hashes --disable-pip || true

# Or use the make command
make security
```
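For a sense of what Bandit reports, here is an illustrative snippet (not Memori code): Bandit's B324 check flags weak hash functions such as MD5 used via `hashlib`, while SHA-256 passes clean:

```python
import hashlib

def weak_digest(data: bytes) -> str:
    # Bandit flags this line (B324: use of insecure hash function)
    return hashlib.md5(data).hexdigest()

def strong_digest(data: bytes) -> str:
    # SHA-256 is not flagged
    return hashlib.sha256(data).hexdigest()
```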
Pre-commit hooks run automatically before each commit:
```bash
# Install hooks (one-time setup)
uv run pre-commit install

# Run manually on all files
uv run pre-commit run --all-files

# Update hooks to latest versions
uv run pre-commit autoupdate

# Skip hooks (not recommended)
git commit --no-verify
```
```bash
# Run performance benchmarks
uv run pytest tests/benchmarks/ -v --benchmark-only

# Run specific benchmark
uv run pytest tests/benchmarks/test_embeddings.py --benchmark-only

# Compare with baseline
uv run pytest tests/benchmarks/ --benchmark-compare
```