Prerequisites

Before setting up the development environment, ensure you have the following installed:
  • Python 3.10 - The API is built on Python 3.10
  • MySQL 5.7+ - Primary relational database
  • MongoDB 6.0+ - Document storage for catalog and user data
  • Redis 7.2+ - Caching and session management
  • Git - Version control
Use Python 3.10 specifically: the application's dependencies are pinned against it and may not install cleanly on other versions.

Installation Steps

Step 1: Clone the Repository

Clone the project repository to your local machine:
git clone <repository-url>
cd <project-directory>
Step 2: Set Up Python Environment

Create and activate a virtual environment:
python3.10 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Step 3: Install Dependencies

Install all required Python packages from requirement.txt:
pip install --upgrade pip
pip install setuptools wheel
pip install -r requirement.txt
The project uses the following core dependencies:
  • Web Framework: Falcon 3.1.1 with auth, CORS, multipart support
  • Database: SQLAlchemy 1.4.53, PyMySQL 1.1.0, pymongo 4.6.1
  • Task Queue: Celery 5.3.6 with Redis backend
  • Caching: Redis 5.0.1, Beaker 1.12.1
  • AWS Services: boto3 1.34.34 for S3, Kinesis
  • GraphQL: graphene 3.3 for API layer
  • Search: elasticsearch 6.8.2
  • WSGI Server: gunicorn 21.2.0
Step 4: Configure Environment Variables

Copy the example environment file and configure your local settings:
cp .env.example .env
Edit .env with your local database credentials. See Configuration Reference for all available options.
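A minimal `.env` might look like the following. The keys shown here are illustrative (except POOL_SIZE, MAX_OVERFLOW, and DB_ECHO, which this guide mentions elsewhere) — use the names that actually appear in `.env.example`:

```bash
# Hypothetical keys — match the names in .env.example
MYSQL_HOST=localhost
MYSQL_DATABASE=tss_stage
MYSQL_USER=tss_stage
MYSQL_PASSWORD=your_password
MONGO_URI=mongodb://admin:password@localhost:27017/tss
REDIS_URL=redis://localhost:6379/0
POOL_SIZE=5
MAX_OVERFLOW=30
DB_ECHO=False
```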
Step 5: Set Up Databases

MySQL Database:
mysql -u root -p
CREATE DATABASE tss_stage CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'tss_stage'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON tss_stage.* TO 'tss_stage'@'localhost';
FLUSH PRIVILEGES;
MongoDB:
# Start MongoDB service
sudo systemctl start mongod

# Connect to MongoDB
mongosh
use tss
db.createUser({
  user: "admin",
  pwd: "password",
  roles: [{ role: "readWrite", db: "tss" }]
})
Redis:
# Start Redis service
sudo systemctl start redis

# Verify Redis is running
redis-cli ping
# Should return: PONG
Step 6: Run Database Migrations

Apply database migrations to set up the schema:
python migrate.py
This will create all necessary tables in your MySQL database.
Step 7: Verify Installation

Confirm that the Python environment and core dependencies are working:
python3 -c "import pymongo; print(pymongo.has_c())"
python3 -c "from datetime import datetime; print(datetime.now())"

Running the Application Locally

Start the API Server

The application uses Gunicorn as the WSGI server. Start the development server:
# Run with 4 workers and debug logging
gunicorn main:app -b 0.0.0.0:8001 -w 4 --timeout 1200 --log-level debug

Gunicorn Configuration

The recommended Gunicorn settings for development:
  • Workers (-w): 4 (adjust based on CPU cores)
  • Threads (-t): 8 per worker
  • Bind (-b): 0.0.0.0:8001 (accessible on all interfaces)
  • Timeout: 1200 seconds (20 minutes for long-running requests)
  • Log Level: debug for development, info for staging
The application requires a minimum timeout of 1200 seconds to handle long-running operations like bulk imports and complex reports.
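The flags above can also live in a `gunicorn.conf.py` so the server starts with just `gunicorn main:app` (a sketch of the settings listed; the project may not ship such a file):

```python
# gunicorn.conf.py — development settings from the list above
bind = "0.0.0.0:8001"   # accessible on all interfaces
workers = 4             # adjust based on CPU cores
threads = 8             # per worker
timeout = 1200          # 20 minutes for long-running requests
loglevel = "debug"      # use "info" for staging
```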

Start Celery Workers

For background task processing, start Celery workers in a separate terminal:
celery -A tasks worker --loglevel=info --concurrency=4 --pool=eventlet

Start Flower (Optional)

Monitor Celery tasks with Flower:
flower -A tasks --port=5555
Access the Flower dashboard at http://localhost:5555

Verify Setup

Once the server is running, verify the installation:

Health Check

curl http://localhost:8001/api/v2/health
Expected response:
{
  "status": "healthy",
  "timestamp": "2026-03-08T12:00:00Z"
}

API Endpoints

The API exposes endpoints under the following prefixes:
  • /api/v2/* - Main API v2 endpoints
  • /api/v3/* - New API v3 endpoints
  • /api/v2/haptik/* - Haptik integration endpoints
  • /api/v2/external/* - External API endpoints

Development Tools

Database Connection Pooling

The application uses SQLAlchemy connection pooling with the following defaults:
  • Pool Size: 5 connections (configurable via POOL_SIZE)
  • Max Overflow: 30 connections (configurable via MAX_OVERFLOW)
  • Auto Commit: Enabled
  • Echo: Disabled (set DB_ECHO=True for SQL logging)

Timezone Configuration

The application uses Asia/Kolkata (IST) as the default timezone. All timestamps are stored in UTC and converted as needed.

Troubleshooting

If port 8001 is already in use, either stop the conflicting process or change the port:
# Find process using port 8001
lsof -i :8001

# Kill the process
kill -9 <PID>

# Or use a different port
gunicorn main:app -b 0.0.0.0:8002 -w 4
Ensure your database services are running and credentials in .env are correct:
# Check MySQL
sudo systemctl status mysql

# Check MongoDB
sudo systemctl status mongod

# Check Redis
sudo systemctl status redis
If you encounter import errors, ensure your virtual environment is activated and all dependencies are installed:
source venv/bin/activate
pip install -r requirement.txt
The application requires the pure-Python protobuf implementation (used by pydgraph). Set it via an environment variable:
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
The Dockerfile sets this automatically, but for local development set it manually if you encounter protobuf errors.

Next Steps

  • Review the Configuration Reference for all environment variables
  • Learn about Deployment for production setup
  • Explore the API endpoints in the API Reference section
