AI Configuration

The TelegramBot API uses OpenRouter to access various AI language models. This guide covers API setup, model selection, prompt engineering, and tuning response behavior.

OpenRouter Setup

OpenRouter provides unified access to multiple AI models through a single API.

1. Create an account

Visit OpenRouter.ai and sign up for an account.
OpenRouter offers free models and paid premium models. Check the pricing page for current rates.

2. Generate API key

Navigate to the Keys section in your OpenRouter dashboard, click Create Key, and copy the generated key.

3. Add credits (optional)

For premium models, add credits to your account:
  • Go to the Credits section
  • Choose an amount to add
  • Complete the payment
Free models like tngtech/deepseek-r1t2-chimera:free don’t require credits.

4. Configure in application

Add your API key to the .env file:
.env
OPENROUTE_KEY=sk-or-v1-abc123def456...

Model Selection

The default model is configured in application.yaml:
application.yaml:26-29
openrouter:
  api-key: ${OPENROUTE_KEY}
  base-url: https://openrouter.ai/api/v1
  model: tngtech/deepseek-r1t2-chimera:free

Available Models

OpenRouter provides access to numerous models. Popular options include:
These models are free to use:
  • tngtech/deepseek-r1t2-chimera:free (default)
  • google/gemini-2.0-flash-001:free
  • meta-llama/llama-3.1-8b-instruct:free
  • mistralai/mistral-7b-instruct:free
Free models may have rate limits and lower priority during high demand.
Paid models offering enhanced performance:
  • anthropic/claude-3.5-sonnet - Excellent reasoning and instruction following
  • openai/gpt-4-turbo - Strong general performance
  • google/gemini-pro-1.5 - Large context window
  • deepseek/deepseek-chat - Cost-effective reasoning
Premium models require credits in your OpenRouter account.
Models optimized for specific tasks:
  • reasoning models - Better at logical reasoning
  • creative models - Enhanced creative writing
  • code models - Optimized for programming tasks
Browse the full catalog at OpenRouter Models.

Changing the Model

To use a different model, update application.yaml:
application.yaml:29
model: anthropic/claude-3.5-sonnet
Or override with an environment variable:
.env
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet
Ensure your OpenRouter account has sufficient credits for premium models. Check pricing at OpenRouter Pricing.

System Prompt Configuration

The system prompt defines your bot’s personality and behavior.

Default Prompt

The application includes a default “chaotic” Spanish AI personality:
application.yaml:30
system-prompt: ${AI_SYSTEM_PROMPT:"IA caótica en español sin lógica, patos espaciales, humor cambiante (no agresivo), sin ser útil."}

Customizing the Prompt

Define your own personality by setting AI_SYSTEM_PROMPT in .env:
AI_SYSTEM_PROMPT="You are a helpful and friendly assistant. Provide clear, accurate, and concise answers to user questions."

Prompt Engineering Tips

Effective System Prompts Should:
  1. Define the role clearly: “You are a…”
  2. Specify the tone: professional, casual, humorous, formal
  3. Set boundaries: what the bot should/shouldn’t do
  4. Include formatting rules: if you want specific response structures
  5. Mention language: if you want responses in a specific language
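
Putting these tips together, here is one hedged example of a fully specified prompt for .env (purely illustrative, not a project default):

```
AI_SYSTEM_PROMPT="You are a friendly technical support assistant for a Telegram bot. Keep a professional, approachable tone. Only answer questions about the bot and its features; politely decline anything else. Reply in English, in at most three short sentences of plain text."
```

This covers the role ("you are a…"), tone, boundaries, formatting, and response language in a single sentence-per-rule structure.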

Temperature Settings

Temperature controls response randomness and creativity.

Configuration

application.yaml:31
temperature: ${AI_TEMPERATURE:1.2}
Set in .env:
.env
AI_TEMPERATURE=0.7

Temperature Guide

Range     | Behavior                                | Use Case
0.0 - 0.3 | Very deterministic, focused, repetitive | Technical Q&A, factual information
0.4 - 0.7 | Balanced, consistent, reliable          | Customer support, documentation
0.8 - 1.2 | Creative, varied, engaging              | Conversation bots, creative writing
1.3 - 1.7 | Highly creative, unpredictable          | Entertainment, humor, brainstorming
1.8 - 2.0 | Chaotic, experimental, random           | Experimental bots, art generation
The default value of 1.2 is optimized for the “chaotic AI” personality. Adjust based on your desired bot behavior.

Max Tokens

Controls the maximum length of AI responses.

Configuration

application.yaml:32
max-tokens: ${AI_MAX_TOKENS:150}
Set in .env:
.env
AI_MAX_TOKENS=300

Token Guidelines

Tokens are pieces of words. Roughly:
  • 1 token ≈ 4 characters in English
  • 1 token ≈ ¾ of a word
  • 100 tokens ≈ 75 words
Example:
  • “Hello world” = 2 tokens
  • A typical paragraph = 50-100 tokens
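These rules of thumb can be captured in a tiny helper. The sketch below applies the ~4-characters-per-token heuristic from above; it is an approximation only (real tokenizers differ per model), and `estimateTokens` is an illustrative helper, not part of the project code:

```java
// Rough token estimator using the ~4-characters-per-token heuristic.
// Real model tokenizers (e.g. BPE-based) will produce different counts.
public class TokenEstimate {

    // Estimate token count from character length; any non-empty text
    // counts as at least one token.
    static int estimateTokens(String text) {
        if (text == null || text.isEmpty()) return 0;
        return Math.max(1, text.length() / 4);
    }

    public static void main(String[] args) {
        // "Hello world" is 11 characters, so 11 / 4 ≈ 2 tokens,
        // matching the example above.
        System.out.println(estimateTokens("Hello world")); // prints 2
    }
}
```

A helper like this is useful for sanity-checking whether a prompt plus expected reply fits within AI_MAX_TOKENS before sending a request.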
Short responses (50-150 tokens):
  • Quick replies
  • Chat bot interactions
  • Lower API costs
Medium responses (150-500 tokens):
  • Detailed explanations
  • Customer support
  • Balanced cost and quality
Long responses (500-2000 tokens):
  • Articles or essays
  • Technical documentation
  • Higher API costs
Higher token limits increase costs proportionally. Monitor your OpenRouter usage.

OpenRouter Configuration in Code

OpenRouterConfig

The configuration class sets up the REST client:
src/main/java/com/acamus/telegrm/infrastructure/config/OpenRouterConfig.java
@Configuration
public class OpenRouterConfig {

    @Value("${openrouter.base-url}")
    private String baseUrl;

    @Value("${openrouter.api-key}")
    private String apiKey;

    @Bean
    public RestClient openRouterRestClient() {
        return RestClient.builder()
                .baseUrl(baseUrl)
                .defaultHeader("Authorization", "Bearer " + apiKey)
                .defaultHeader("HTTP-Referer", "https://github.com/acamus/telegrm")
                .defaultHeader("X-Title", "Telegrm Bot")
                .build();
    }
}

OpenRouterAdapter

The adapter implements the AI generation logic:
src/main/java/com/acamus/telegrm/infrastructure/adapters/out/ai/OpenRouterAdapter.java
@Component
public class OpenRouterAdapter implements AiGeneratorPort {

    private final RestClient restClient;
    private final String model;
    private final String systemPrompt;
    private final double temperature;
    private final int maxTokens;

    @Override
    public String generateResponse(String userInput) {
        ChatMessage systemMessage = new ChatMessage("system", systemPrompt, null);
        ChatMessage userMessage = new ChatMessage("user", userInput, null);

        OpenRouterRequest request = new OpenRouterRequest(
            model, 
            List.of(systemMessage, userMessage), 
            maxTokens, 
            temperature
        );

        OpenRouterResponse response = restClient.post()
                .uri("/chat/completions")
                .body(request)
                .retrieve()
                .body(OpenRouterResponse.class);

        return response.choices().getFirst().message().content();
    }
}
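The adapter above relies on Spring's RestClient and the project's request records, but the payload it sends is plain JSON following the OpenAI-compatible chat-completions schema that OpenRouter accepts. The sketch below builds that body by hand so it can be inspected outside the application; `buildRequestBody` and `escape` are illustrative helpers, not part of the project:

```java
// Sketch: assembling the /chat/completions request body by hand.
// Field names (model, messages, max_tokens, temperature) follow the
// OpenAI-compatible schema that OpenRouter accepts.
public class OpenRouterRequestSketch {

    // Minimal JSON string escaping for backslashes and quotes.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    static String buildRequestBody(String model, String systemPrompt,
                                   String userInput, int maxTokens, double temperature) {
        return "{"
            + "\"model\":\"" + escape(model) + "\","
            + "\"messages\":["
            + "{\"role\":\"system\",\"content\":\"" + escape(systemPrompt) + "\"},"
            + "{\"role\":\"user\",\"content\":\"" + escape(userInput) + "\"}"
            + "],"
            + "\"max_tokens\":" + maxTokens + ","
            + "\"temperature\":" + temperature
            + "}";
    }

    public static void main(String[] args) {
        System.out.println(buildRequestBody(
            "tngtech/deepseek-r1t2-chimera:free",
            "You are a helpful assistant.",
            "Hello!", 150, 1.2));
    }
}
```

Printing this body (or POSTing it with curl against https://openrouter.ai/api/v1/chat/completions and your API key) is a quick way to verify model name, token limit, and temperature before debugging the Spring wiring.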

Testing AI Configuration

1. Start the application

docker-compose up --build

2. Send test messages

Message your Telegram bot with various prompts to test:
  • Response quality
  • Personality consistency
  • Response length
  • Creativity level

3. Monitor logs

Check logs for AI processing:
docker-compose logs -f | grep -i "openrouter\|ai"

4. Adjust parameters

Fine-tune AI_TEMPERATURE and AI_MAX_TOKENS based on results.

Error Handling

The adapter includes comprehensive error handling:
src/main/java/com/acamus/telegrm/infrastructure/adapters/out/ai/OpenRouterAdapter.java:60-66
try {
    // AI generation logic
} catch (RestClientResponseException e) {
    return "La IA rechazó mi petición (HTTP " + e.getStatusCode() + ").";
} catch (ResourceAccessException e) {
    return "No puedo conectar con la IA (Error de red).";
} catch (Exception e) {
    return "Ocurrió un error interno al procesar la respuesta de la IA: " + e.getMessage();
}

Common Errors

Cause: Invalid or missing API key
Solution:
  1. Verify OPENROUTE_KEY in .env
  2. Check the key is active in OpenRouter dashboard
  3. Ensure no extra spaces in the key
Cause: Insufficient credits for premium models
Solution:
  1. Add credits to your OpenRouter account
  2. Switch to a free model temporarily
  3. Check usage at OpenRouter Dashboard
Cause: Too many requests in a short time
Solution:
  1. Reduce Telegram polling frequency
  2. Implement request queuing
  3. Upgrade OpenRouter plan for higher limits
Cause: OpenRouter service issue
Solution:
  1. Check OpenRouter Status
  2. Retry after a few minutes
  3. Switch to a different model temporarily
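For transient failures such as rate limits or temporary OpenRouter outages, a simple retry with exponential backoff is often enough. The sketch below is illustrative only; `retryWithBackoff` is not part of the project codebase, and a production version would retry selectively based on the HTTP status:

```java
import java.util.function.Supplier;

// Sketch: retry a call with exponential backoff between attempts.
// Intended for transient errors (rate limits, brief outages).
public class RetrySketch {

    static <T> T retryWithBackoff(Supplier<T> call, int maxAttempts, long initialDelayMs) {
        RuntimeException last = null;
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt == maxAttempts) break; // no sleep after the final attempt
                try {
                    Thread.sleep(delay);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
                delay *= 2; // double the wait before the next attempt
            }
        }
        if (last == null) throw new IllegalArgumentException("maxAttempts must be >= 1");
        throw last;
    }

    public static void main(String[] args) {
        // Simulate a call that fails twice (e.g. rate-limited) then succeeds.
        int[] calls = {0};
        String result = retryWithBackoff(() -> {
            if (++calls[0] < 3) throw new RuntimeException("simulated HTTP 429");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```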

Monitoring Usage and Costs

Track your AI usage to manage costs effectively:
  1. OpenRouter Dashboard: View real-time usage and costs
  2. Activity Log: See individual API calls and token usage
  3. Set Limits: Configure spending limits in your account

Usage Tracking

Visit the OpenRouter Activity Page to monitor:
  • Total API calls
  • Token usage per request
  • Cost per model
  • Daily/monthly spending

Advanced Configuration

Custom Headers

The adapter sets custom headers for attribution:
src/main/java/com/acamus/telegrm/infrastructure/config/OpenRouterConfig.java:22-23
.defaultHeader("HTTP-Referer", "https://github.com/acamus/telegrm")
.defaultHeader("X-Title", "Telegrm Bot")
These help OpenRouter attribute usage to your project.

Reasoning Support

Some models support reasoning output. To enable it, uncomment in OpenRouterAdapter.java:
src/main/java/com/acamus/telegrm/infrastructure/adapters/out/ai/OpenRouterAdapter.java:54-56
// if (msg.reasoning() != null && !msg.reasoning().isBlank()) {
//     return msg.reasoning();
// }
Reasoning shows the model’s thought process before generating the final answer. Only supported by specific models like deepseek-reasoner.

Configuration Summary

Variable            | Default                            | Purpose
OPENROUTE_KEY       | (required)                         | API authentication
AI_SYSTEM_PROMPT    | Chaotic Spanish AI                 | Bot personality
AI_TEMPERATURE      | 1.2                                | Response creativity
AI_MAX_TOKENS       | 150                                | Response length
openrouter.model    | tngtech/deepseek-r1t2-chimera:free | AI model selection
openrouter.base-url | https://openrouter.ai/api/v1       | API endpoint

Next Steps

Telegram Setup

Configure your Telegram bot

Environment Config

Review all configuration options

Development

Set up your development environment

Architecture

Understand the system architecture
