Deployment Options

MilesONerd AI can be deployed in multiple ways depending on your infrastructure and requirements:
  1. Direct Python Execution - Run locally or on a server
  2. Docker Container - Containerized deployment
  3. Cloud Platforms - Heroku, Railway, or other PaaS providers

Running with Python

The simplest way to run the bot is directly with Python.
Step 1: Ensure Configuration

Verify your .env file is properly configured with your bot token:
cat .env
You should see:
TELEGRAM_BOT_TOKEN=your_actual_token_here
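
Before starting the bot, you can sanity-check the file programmatically. This is a minimal stand-alone sketch: the bot itself presumably loads .env through a dotenv library, and the hand-rolled parser here is only illustrative (it handles plain KEY=value lines, comments, and blanks):

```python
import os
import tempfile

def check_env_file(path: str = ".env") -> dict:
    """Parse a simple KEY=value .env file into a dict, skipping comments and blanks."""
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Demonstrate against a throwaway file so the sketch is self-contained.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# comment\nTELEGRAM_BOT_TOKEN=123456:ABC\nDEFAULT_MODEL=llama\n")
    demo_path = fh.name

env = check_env_file(demo_path)
token = env.get("TELEGRAM_BOT_TOKEN", "")
ok = bool(token) and not token.startswith("your_")
print("token set:", ok)
os.unlink(demo_path)
```

A check like this catches the common mistake of leaving the placeholder value in place before the bot starts and fails with a cryptic Telegram API error.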
Step 2: Start the Bot

Run the bot using Python:
python bot.py
Step 3: Verify Startup

Look for successful initialization messages:
2024-03-09 10:00:00 - __main__ - INFO - Initializing AI models...
2024-03-09 10:00:01 - __main__ - INFO - CUDA available: True
2024-03-09 10:00:01 - __main__ - INFO - GPU Device: NVIDIA GeForce RTX 3090
2024-03-09 10:00:02 - __main__ - INFO - Loading BART model: facebook/bart-large
2024-03-09 10:00:05 - __main__ - INFO - BART model loaded successfully
2024-03-09 10:00:05 - __main__ - INFO - Loading Llama model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
2024-03-09 10:01:30 - __main__ - INFO - Llama model loaded successfully
2024-03-09 10:01:30 - __main__ - INFO - MilesONerd AI Bot is starting...
The first run will download models from Hugging Face Hub, which may take several minutes. Subsequent runs will use cached models.

Running in Background

To keep the bot running in the background on Linux:
nohup python bot.py > bot.log 2>&1 &
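
For long-running servers, a systemd unit is more robust than nohup: it starts the bot at boot and restarts it on failure. A sketch, where the service name, user, install path, and virtualenv location are all placeholders for your setup:

```ini
# /etc/systemd/system/milesonerd-bot.service
[Unit]
Description=MilesONerd AI Telegram Bot
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=botuser
WorkingDirectory=/opt/milesonerd-ai
ExecStart=/opt/milesonerd-ai/venv/bin/python bot.py
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start it with sudo systemctl enable --now milesonerd-bot, and follow logs with journalctl -u milesonerd-bot -f.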

Docker Deployment

Deploy the bot using Docker for better isolation and portability.

Dockerfile

The project includes a Dockerfile:
Dockerfile
# Choose Python 3.8 or higher base image
FROM python:3.8-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Copy the .env.example file into the container
COPY .env.example /app/.env

# Install the project dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy your project code into the container
COPY . /app/

# Define the command to run your program
CMD ["python", "bot.py"]
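
Because COPY . /app/ copies the entire build context, a .dockerignore keeps local artifacts, and in particular your real .env file, out of the image. An illustrative example:

```
# .dockerignore (illustrative)
.env
.git
__pycache__/
*.pyc
bot.log
```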

Building and Running

Step 1: Build Docker Image

Build the Docker image:
docker build -t milesonerd-ai-bot .
Step 2: Create .env File

Create a .env file with your configuration:
echo "TELEGRAM_BOT_TOKEN=your_token_here" > .env
echo "DEFAULT_MODEL=llama" >> .env
echo "ENABLE_CONTINUOUS_LEARNING=true" >> .env
Step 3: Run Container

Run the bot container:
docker run -d \
  --name telegram-bot \
  --env-file .env \
  --restart unless-stopped \
  milesonerd-ai-bot
For GPU support, pass --gpus all:
docker run -d \
  --name telegram-bot \
  --env-file .env \
  --gpus all \
  --restart unless-stopped \
  milesonerd-ai-bot
Step 4: View Logs

Monitor the bot logs:
docker logs -f telegram-bot
The Dockerfile copies .env.example to .env by default. For production, mount your real .env file with -v ./.env:/app/.env or pass variables via --env-file as shown above.

Docker Compose

For easier management, use Docker Compose:
docker-compose.yml
version: '3.8'

services:
  telegram-bot:
    build: .
    container_name: milesonerd-ai-bot
    env_file:
      - .env
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
Run with:
docker-compose up -d

Cloud Platform Deployment

Heroku Deployment

The bot includes a Procfile for Heroku deployment:
Procfile
worker: python bot.py
Step 1: Install Heroku CLI

Install the Heroku CLI from heroku.com
Step 2: Login and Create App

heroku login
heroku create your-bot-name
Step 3: Set Environment Variables

Configure your bot token:
heroku config:set TELEGRAM_BOT_TOKEN=your_token_here
heroku config:set DEFAULT_MODEL=llama
heroku config:set ENABLE_CONTINUOUS_LEARNING=true
Step 4: Deploy

Push to Heroku:
git push heroku main
Step 5: Scale Worker

Ensure the worker dyno is running:
heroku ps:scale worker=1
Step 6: View Logs

Monitor logs:
heroku logs --tail
Heroku no longer offers a free tier, and the bot requires substantial memory (2GB+) for its AI models, so choose a paid dyno type with enough RAM.

Railway Deployment

Railway provides excellent support for Python applications:
Step 1: Connect Repository

Connect your GitHub repository to Railway
Step 2: Configure Environment

Add environment variables in Railway dashboard:
  • TELEGRAM_BOT_TOKEN
  • DEFAULT_MODEL
  • ENABLE_CONTINUOUS_LEARNING
Step 3: Deploy

Railway automatically detects the Python app and deploys it

Bot Initialization

The bot performs critical initialization before accepting messages:
bot.py:116-128
async def initialize() -> bool:
    """Initialize AI models and other components."""
    try:
        logger.info("Initializing AI models...")
        success = await ai_handler.initialize_models()
        if not success:
            logger.error("Failed to initialize AI models")
            return False
        logger.info("AI models initialized successfully")
        return True
    except Exception as e:
        logger.error(f"Error during initialization: {str(e)}")
        return False
If model initialization fails, the bot will exit. Ensure you have sufficient memory and disk space for the models.

Main Entry Point

The bot’s main function handles startup and error handling:
bot.py:130-174
def main() -> None:
    """Start the bot."""
    # Get the token from environment variable
    token = os.getenv("TELEGRAM_BOT_TOKEN")
    if not token:
        logger.error("No token found! Make sure to set TELEGRAM_BOT_TOKEN in .env file")
        return

    # Create the Application and initialize models
    application = Application.builder().token(token).build()
    
    # Initialize models before starting the bot
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    
    try:
        # Run initialization in the event loop
        init_success = loop.run_until_complete(initialize())
        if not init_success:
            logger.error("Failed to initialize AI models. Exiting...")
            return
        logger.info("AI models initialized successfully")
        
        # Add handlers
        application.add_handler(CommandHandler("start", start))
        application.add_handler(CommandHandler("help", help_command))
        application.add_handler(CommandHandler("about", about_command))
        application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))

        # Log startup
        logger.info("MilesONerd AI Bot is starting...")
        
        # Run the bot
        application.run_polling(allowed_updates=Update.ALL_TYPES)
    finally:
        loop.close()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        logger.info("Bot stopped by user")
    except Exception as e:
        logger.error(f"Error running bot: {str(e)}")
        raise  # Re-raise the exception for proper error handling

Production Checklist

Before deploying to production:
1. Security

  • Secure your .env file (never commit to git)
  • Use environment variables for all secrets
  • Enable HTTPS if exposing any web endpoints
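
The "never commit to git" item can be enforced mechanically with a .gitignore entry, so the token cannot be committed by accident (entries shown are illustrative):

```
# .gitignore (relevant entries)
.env
*.log
__pycache__/
```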
2. Monitoring

  • Set up logging to a persistent location
  • Configure error tracking (e.g., Sentry)
  • Monitor memory and CPU usage
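
The first monitoring item can be handled with Python's standard logging module: a rotating file handler keeps logs in a persistent file without letting it grow unbounded. The file name and size limits below are illustrative:

```python
import logging
from logging.handlers import RotatingFileHandler

# Log to a persistent file, keeping 3 rotated backups of up to 5 MB each.
handler = RotatingFileHandler("bot.log", maxBytes=5_000_000, backupCount=3)
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
))

logger = logging.getLogger("milesonerd")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Persistent logging configured")
```

The format string matches the timestamped log lines shown earlier in this guide, so file output stays consistent with console output.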
3. Resources

  • Ensure adequate RAM (16GB+ recommended)
  • Provision sufficient disk space for models
  • Configure GPU if available
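
A quick pre-flight check of the disk item, using only the standard library. The 50 GiB threshold is illustrative: large models can occupy tens to hundreds of GiB in the Hugging Face cache, so size the check for the models you actually load:

```python
import shutil

GIB = 1024 ** 3

# Check free space on the filesystem holding the model cache
# (Hugging Face defaults to ~/.cache/huggingface).
total, used, free = shutil.disk_usage("/")
free_gib = free / GIB
print(f"Free disk: {free_gib:.1f} GiB")

# Illustrative threshold; adjust for the models you deploy.
if free_gib < 50:
    print("Warning: consider freeing disk space before the first model download")
```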
4. Reliability

  • Set up automatic restart on failure
  • Configure health checks
  • Plan for model update strategy
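
For the health-check item under Docker Compose, a process-level probe is a simple starting point. A sketch for the docker-compose.yml above; note that pgrep comes from the procps package, which slim Python base images do not include by default:

```yaml
services:
  telegram-bot:
    # ... existing settings ...
    healthcheck:
      # pgrep requires procps; add `RUN apt-get update && apt-get install -y procps`
      # to the Dockerfile when building from a slim image.
      test: ["CMD-SHELL", "pgrep -f bot.py || exit 1"]
      interval: 60s
      timeout: 5s
      retries: 3
```

This only verifies the process is alive, not that it is responding to Telegram; a heartbeat written by the bot itself would be a stronger check but requires code changes.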
Start with Docker deployment for easier management and portability. You can always migrate to other platforms later.
