Overview
Agent LoL integrates with OpenAI’s GPT-4o-mini model to provide intelligent match analysis and coaching feedback. This feature analyzes match timeline data and provides personalized insights about your early game performance.

Getting Your API Key
Step 1: Create an OpenAI Account
- Visit platform.openai.com
- Sign up or log in with your account
- Navigate to API Keys
Step 2: Generate an API Key
- Click “Create new secret key”
- Give it a descriptive name (e.g., “Agent LoL Development”)
- Copy the key immediately - it won’t be shown again
Step 3: Configure Environment Variables
Add these variables to your .env.local file:
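A minimal sketch of the expected entries, based on the variable names documented in the Configuration section below (the key value is a placeholder):

```shell
# .env.local - placeholder values; never commit this file
OPENAI_KEY=sk-your-key-here
ENABLE_MATCH_AGENT=true
```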
Configuration
- OPENAI_KEY: Your OpenAI API key with access to chat completions. Required for AI features; the key must have permissions to use the Chat Completions API.
- ENABLE_MATCH_AGENT: Feature flag to enable/disable AI coaching. Set to true to activate the match context agent. When disabled, the application still fetches timeline data but skips AI analysis.

Both OPENAI_KEY and ENABLE_MATCH_AGENT=true must be configured for AI features to work. See route.js:22-23, where these are checked.

AI Features
When OpenAI integration is enabled, the application provides:

Timeline Comparison Analysis
The AI agent analyzes early game performance by comparing you against your lane opponent:
- Gold differential tracking from minute 0 to the comparison point
- CS (minion) differential and farming efficiency
- XP and level progression throughout early game
- Trend analysis showing who gained advantages and when
- Actionable coaching tips on what to improve
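As a sketch of how these differentials can be computed, assuming Riot Match-V5 timeline frames (participantFrames keyed by participant id, with totalGold, xp, minionsKilled, and jungleMinionsKilled); the helper name is illustrative, not the application's:

```javascript
// Compute per-minute gold/CS/XP differentials between you and your lane opponent.
function buildProgression(frames, meId, oppId) {
  return frames.map((frame, minute) => {
    const me = frame.participantFrames[meId];
    const opp = frame.participantFrames[oppId];
    // CS counts both lane and jungle minions, per the Match-V5 timeline schema.
    const cs = (p) => p.minionsKilled + p.jungleMinionsKilled;
    return {
      minute,
      goldDiff: me.totalGold - opp.totalGold,
      csDiff: cs(me) - cs(opp),
      xpDiff: me.xp - opp.xp,
    };
  });
}
```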
How It Works
1. Data Collection - The /api/riot/match/timeline/compare endpoint fetches:
   - Full match data from the Riot API
   - Minute-by-minute timeline frames
   - Participant stats for you and your lane opponent
2. Frame Analysis - The application builds progression data from minute 0 to TIMELINE_COMPARE (see route.js:94-101)
3. AI Coaching - If OpenAI is enabled, the data is sent to GPT-4o-mini with a specialized prompt (see route.js:110-139)
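The request sent in the AI Coaching step can be sketched as a payload builder. The function name and prompt wording here are illustrative, not the application's actual prompt (see route.js:110-139 for that); the object shape matches what the openai npm package's chat.completions.create() accepts:

```javascript
// Build a Chat Completions request from the per-minute progression data.
function buildCoachingRequest(progression) {
  return {
    model: "gpt-4o-mini",
    temperature: 0.5,
    max_tokens: 300,
    messages: [
      {
        role: "system",
        // Illustrative Spanish coaching prompt (the app's real prompt differs).
        content: "Eres un coach de League of Legends. Analiza la fase de líneas y da un consejo accionable.",
      },
      {
        role: "user",
        content: `Datos minuto a minuto (JSON): ${JSON.stringify(progression)}`,
      },
    ],
  };
}
```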
The AI Prompt
The application uses a carefully crafted prompt to generate coaching feedback:

System Prompt
User Prompt
Model Configuration
The application uses OpenAI’s GPT-4o-mini model for cost-effective analysis. See route.js:116-139 for the complete API call configuration.
Why GPT-4o-mini?
- Cost-effective: ~$0.60 per 1M output tokens
- Fast: Low latency for real-time analysis
- Sufficient: Handles structured data analysis well
- Reliable: Consistent coaching quality
Each analysis uses approximately 500-1000 input tokens (timeline data) and generates 100-200 output tokens (coaching feedback).
Cost Implications
OpenAI charges based on token usage:

Estimated Costs per Analysis
| Component | Tokens | Cost (GPT-4o-mini) |
|---|---|---|
| Input (timeline data) | ~750 | $0.0001125 |
| Output (coaching) | ~150 | $0.00009 |
| Total per request | ~900 | ~$0.0002 |
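These per-request figures follow from GPT-4o-mini's published rates of $0.15 per 1M input tokens and $0.60 per 1M output tokens; the helper below just reproduces that arithmetic:

```javascript
// Estimate cost in USD for one analysis at GPT-4o-mini rates
// ($0.15 / 1M input tokens, $0.60 / 1M output tokens).
function estimateCostUSD(inputTokens, outputTokens) {
  return (inputTokens * 0.15 + outputTokens * 0.60) / 1_000_000;
}
```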
Monthly Cost Estimates
| Usage | Analyses/Month | Estimated Cost |
|---|---|---|
| Light (personal) | 100 | $0.02 |
| Medium (active) | 1,000 | $0.20 |
| Heavy (public) | 10,000 | $2.00 |
Response Structure
The AI generates a 4-6 sentence analysis in Spanish covering:
- Initial Comparison - Who started ahead and why
- Mid-Game Shift - Key moments where advantages changed
- Trend Analysis - Who improved or fell behind over time
- Coaching Tip - Specific advice for the player who is behind
Example Response
Error Handling
The application gracefully handles OpenAI errors. See route.js:109-150 for the complete error handling.
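A sketch of this graceful-degradation pattern, where an OpenAI failure still returns the timeline data (function names are illustrative):

```javascript
// Wrap the OpenAI call so a failure degrades to comparison: null
// instead of failing the whole endpoint.
async function analyzeWithFallback(progression, callOpenAI) {
  try {
    const comparison = await callOpenAI(progression);
    return { progression, comparison };
  } catch (err) {
    // Logged server-side; the client simply receives comparison: null.
    console.error("OpenAI timeline compare error", err);
    return { progression, comparison: null };
  }
}
```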
If OpenAI fails, the endpoint still returns timeline data without the comparison field. The UI should handle a null comparison gracefully.

Disabling AI Features
To disable AI coaching while keeping the rest of the app functional:

Option 1: Remove OpenAI Key
Option 2: Set Feature Flag to False
Option 3: Both (Safest)
With any of these options, the /api/riot/match/timeline/compare endpoint will:
- Still fetch match and timeline data from Riot API
- Still calculate frame comparisons
- Return comparison: null in the response
- Skip the OpenAI API call entirely
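A sketch of the gating logic that produces this behavior, assuming the checks at route.js:22-23 combine the key and the flag (the function name is illustrative):

```javascript
// AI coaching runs only when both a key is present and the flag is "true".
function isAgentEnabled(env) {
  return Boolean(env.OPENAI_KEY) && env.ENABLE_MATCH_AGENT === "true";
}
```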
Security Best Practices
DO:
- ✅ Store OPENAI_KEY in .env.local (gitignored)
- ✅ Make OpenAI calls from Next.js API routes (/app/api/*)
- ✅ Validate and sanitize user input before sending to OpenAI
- ✅ Implement rate limiting for public deployments
- ✅ Monitor usage in the OpenAI dashboard
DON’T:
- ❌ Use NEXT_PUBLIC_OPENAI_KEY (it would be exposed to the browser)
- ❌ Make OpenAI calls from client components
- ❌ Send raw user input without validation
- ❌ Commit .env.local to version control
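Rate limiting can start as simple as a fixed-window counter per user. This in-memory sketch is illustrative only; a multi-instance deployment would need a shared store such as Redis instead:

```javascript
// Minimal fixed-window rate limiter: allow `limit` requests per user per window.
const windows = new Map();

function allowRequest(userId, limit = 10, windowMs = 60_000, now = Date.now()) {
  const w = windows.get(userId);
  if (!w || now - w.start >= windowMs) {
    // New window for this user: reset the counter.
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  if (w.count < limit) {
    w.count += 1;
    return true;
  }
  return false; // over the limit within the current window
}
```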
Testing the Integration
1. Configure your environment variables
2. Start the development server
3. View a match and navigate to the timeline comparison feature
4. Check the server logs for OpenAI API calls
5. Verify the coaching feedback appears in the UI
Monitoring Usage
Track your OpenAI usage and costs:
- Visit platform.openai.com/usage
- View requests, token consumption, and costs
- Set up usage limits and alerts
- Monitor for unusual activity
Customizing the AI
Change the Language
Modify the system prompt in route.js:122-123:
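A sketch of the change; both prompt strings here are illustrative, not the application's actual wording:

```javascript
// Switching the coaching language means editing the system message content.
const systemMessage = {
  role: "system",
  // Current behavior (Spanish), roughly:
  // content: "Eres un coach de League of Legends...",
  // English instead:
  content: "You are a League of Legends coach. Analyze the laning phase and give one actionable tip.",
};
```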
Adjust Temperature
Control response creativity by modifying route.js:118:
- 0.0-0.3: More factual, consistent coaching
- 0.4-0.6: Balanced (current setting)
- 0.7-1.0: More creative, varied responses
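A sketch of the relevant request fields: 0.5 mirrors the balanced setting described above, and the model string is where an upgrade would go:

```javascript
// Fields in the Chat Completions request that control creativity and model choice.
const requestConfig = {
  model: "gpt-4o-mini", // swap for e.g. "gpt-4o" for potentially better analysis, at higher cost
  temperature: 0.5,     // balanced; lower is more consistent, higher more varied
};
```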
Change the Model
Upgrade to GPT-4 for potentially better analysis.

Troubleshooting
”OpenAI timeline compare error”
- Check your API key is valid and has credits
- Verify the key has Chat Completions API access
- Check the OpenAI status page
No coaching feedback appears
- Ensure ENABLE_MATCH_AGENT=true
- Verify OPENAI_KEY is set correctly
- Check server logs for OpenAI errors
- Confirm you have OpenAI API credits remaining
”Rate limit exceeded”
- You’ve hit OpenAI’s rate limits (tier-based)
- Wait 60 seconds and try again
- Consider implementing request queuing
- Upgrade your OpenAI tier at platform.openai.com/settings
High costs
- Implement response caching for identical requests
- Add rate limiting per user
- Keep GPT-4o-mini as the model; it is already cheaper per token than older models such as GPT-3.5-turbo
- Set up spending limits in OpenAI dashboard
