Quick Start Guide
This guide will help you set up and run your MilesONerd AI Telegram bot in just a few minutes. By the end, you'll have a fully functional AI-powered bot responding to messages on Telegram.

GPU Recommended: While the bot can run on CPU, a GPU with at least 40GB VRAM is highly recommended for optimal performance with the Llama 3.1-Nemotron 70B model.
Create Your Telegram Bot

Open Telegram, message @BotFather, and send `/newbot`. When prompted, provide:
- Name: MilesONerd AI (or your preferred name)
- Username: Must end in "bot" (e.g., `milesonerd_ai_bot`)
Save your bot token securely; you'll use it in the configuration step. The token looks like:

```
1234567890:ABCdefGHIjklMNOpqrsTUVwxyz
```

Install Dependencies

The installation includes:
- `python-telegram-bot` (21.10): Telegram Bot API wrapper
- `transformers` (4.48.0): Hugging Face model library
- `torch` (2.5.1): PyTorch for model inference
- `python-dotenv` (1.0.1): Environment variable management
- Additional dependencies for model optimization
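Given the pinned versions above, the requirements file presumably looks something like the following sketch (defer to the repository's actual requirements.txt), installed with `pip install -r requirements.txt`:

```text
python-telegram-bot==21.10
transformers==4.48.0
torch==2.5.1
python-dotenv==1.0.1
```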
Create a `.env` file in the project root:

```
# Required: Your Telegram Bot Token from @BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here

# Optional: Default AI model to use
DEFAULT_MODEL=llama

# Optional: Enable continuous learning (future feature)
ENABLE_CONTINUOUS_LEARNING=true

# Optional: Google Search API key (future feature)
# SERPAPI_API_KEY=your_serpapi_key_here
```
Security: Never commit your `.env` file to version control. The token grants full access to your bot.

Run the Bot

Start the bot (e.g., `python bot.py`). You should see output like:

```
2026-03-09 10:30:45 - __main__ - INFO - Initializing AI models...
2026-03-09 10:30:45 - ai_handler - INFO - Starting model initialization...
2026-03-09 10:30:45 - ai_handler - INFO - CUDA available: True
2026-03-09 10:30:45 - ai_handler - INFO - GPU Device: NVIDIA A100-SXM4-80GB
2026-03-09 10:30:46 - ai_handler - INFO - Loading BART model: facebook/bart-large
2026-03-09 10:30:52 - ai_handler - INFO - BART tokenizer loaded successfully
2026-03-09 10:30:58 - ai_handler - INFO - BART model loaded successfully
2026-03-09 10:30:58 - ai_handler - INFO - Loading Llama model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
2026-03-09 10:31:15 - ai_handler - INFO - Llama tokenizer loaded successfully
2026-03-09 10:32:45 - ai_handler - INFO - Llama model loaded successfully
2026-03-09 10:32:45 - ai_handler - INFO - All models initialized successfully
2026-03-09 10:32:45 - __main__ - INFO - MilesONerd AI Bot is starting...
```
First Run: The first time you run the bot, it will download the AI models from Hugging Face (~150GB total). This can take 10-30 minutes depending on your internet connection. Subsequent runs will use cached models.
Test Your Bot

Open Telegram, search for your bot's username (e.g., `@milesonerd_ai_bot`), and send `/start`. The bot replies:

```
Hi @yourusername! I'm MilesONerd AI, your intelligent assistant.

I can help you with various tasks using advanced AI models and internet search.

Use /help to see available commands.
```
Understanding the Bot’s Behavior
The bot intelligently routes your messages to different AI models based on content analysis.

Message Routing Logic
Model Selection Table
| Message Type | Word Count | Keywords | AI Model Used | Max Length |
|---|---|---|---|---|
| Long message | > 100 words | - | BART → Llama | 200 tokens |
| Summarization | Any | "summarize", "tldr", "summary" | BART | 130 tokens |
| Conversation | Any | "chat", "conversation", "talk" | Llama | 200 tokens |
| Short query | < 10 words | - | Llama | 100 tokens |
| Default | 10-100 words | - | Llama | 150 tokens |
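In code, the routing table could be sketched roughly like this (the rule precedence and the function name are assumptions, not the bot's actual handler):

```python
def route_message(text: str) -> tuple:
    """Return (model, max_tokens) per the routing table; names are illustrative."""
    words = text.split()
    lowered = text.lower()

    # Keyword rules apply at any word count, so check them first (assumed precedence)
    if any(k in lowered for k in ("summarize", "tldr", "summary")):
        return ("bart", 130)            # summarization request
    if any(k in lowered for k in ("chat", "conversation", "talk")):
        return ("llama", 200)           # conversational request
    if len(words) > 100:
        return ("bart+llama", 200)      # long message: BART condenses, Llama replies
    if len(words) < 10:
        return ("llama", 100)           # short query
    return ("llama", 150)               # default: 10-100 words
```
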
Troubleshooting
Bot Token Error
Verify that your `.env` file exists and contains the correct token.
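To confirm the file is readable and the token is set, here is a tiny standalone check (this helper is illustrative, not part of the bot's code):

```python
from typing import Optional

def read_env_var(path: str, key: str) -> Optional[str]:
    """Return the value of KEY from a simple KEY=value .env file, or None."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                # Skip blanks and comments; match only the requested key
                if line and not line.startswith("#") and line.startswith(f"{key}="):
                    return line.split("=", 1)[1]
    except FileNotFoundError:
        return None
    return None

token = read_env_var(".env", "TELEGRAM_BOT_TOKEN")
if not token or token == "your_bot_token_here":
    print("TELEGRAM_BOT_TOKEN is missing or still the placeholder")
```
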
Model Initialization Failed
Possible causes:
- Insufficient GPU/CPU memory
- Network issues downloading models
- Missing dependencies

Fixes:
- Check GPU memory (e.g., with `nvidia-smi`)
- Verify your PyTorch installation
- Reinstall dependencies
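One quick way to spot a missing dependency is to check that each package imports (a sketch; the import names below are the usual ones for the dependencies listed in the installation step):

```python
import importlib.util

# Distribution name -> import name for the dependencies listed earlier
packages = {
    "python-telegram-bot": "telegram",
    "transformers": "transformers",
    "torch": "torch",
    "python-dotenv": "dotenv",
}

for dist, module in packages.items():
    found = importlib.util.find_spec(module) is not None
    print(f"{dist}: {'OK' if found else 'MISSING -- reinstall needed'}")
```
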
Out of Memory Error
- Use a smaller model (modify `ai_handler.py:35`)
- Enable CPU offloading (already configured with `device_map='auto'`)
- Use quantization (future enhancement)
Bot Not Responding
Checklist:
- Is `bot.py` running without errors?
- Did you use the correct bot username in Telegram?
- Is your internet connection stable?
- Check the logs for error messages
Model Download Takes Too Long
The models are large (~150GB combined). You can monitor download progress in the console output.

Configuration Options
Environment Variables
Customize your bot's behavior by editing `.env` (see the sample variables above).
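Optional variables are typically read with a fallback default, for example (illustrative, matching the defaults shown in the sample `.env`):

```python
import os

# Fall back to sensible defaults when the optional variables are unset
default_model = os.getenv("DEFAULT_MODEL", "llama")
continuous_learning = os.getenv("ENABLE_CONTINUOUS_LEARNING", "false").lower() == "true"

print(f"model={default_model}, continuous_learning={continuous_learning}")
```
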
Model Parameters
You can adjust generation parameters in `ai_handler.py`.
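For reference, the Hugging Face generation parameters commonly tuned there look like this (values are illustrative, not the file's actual settings):

```python
# Illustrative generate() keyword arguments; check ai_handler.py for real values
generation_params = {
    "max_new_tokens": 200,  # upper bound on reply length (cf. the routing table)
    "temperature": 0.7,     # lower = more deterministic output
    "top_p": 0.9,           # nucleus sampling: keep tokens covering 90% probability
    "do_sample": True,      # sample rather than greedy-decode
}
print(generation_params)
```
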
Next Steps
Now that your bot is running, explore more:

- API Reference: Learn about all available commands and functions
- Advanced Configuration: Customize model parameters and behavior
- Deployment Guide: Deploy your bot to production servers
- Model Details
- Contribute to the MilesONerd AI project
Getting Help
If you encounter issues:
- Check the GitHub Issues
- Review the source code
- Contact the author: MilesONerd
Logging: The bot outputs detailed logs to the console. Use these logs to diagnose issues:
Congratulations! You now have a fully functional AI-powered Telegram bot. Start chatting and explore its capabilities!
