AI Copilot
The AI Copilot is the brain of Support Bot: an intelligent agent powered by LangGraph that helps you resolve incidents faster by searching historical data, analyzing patterns, and providing contextual recommendations.

How It Works
The AI Copilot uses a sophisticated agent graph that processes your queries through multiple steps.

Query Understanding
The copilot analyzes your question, extracts search intent, and rewrites conversational queries into search-optimized terms. For example:
- “hey how to solve the issue with loan emi?” → "Loan EMI issue"
- “what happened with Swift transfer delays?” → "Swift transfer delays"
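Conceptually, the rewrite step strips conversational filler and keeps the search-relevant terms. A stdlib-only sketch of the idea — the filler list here is illustrative, and the real copilot performs this step with an LLM rather than fixed rules:

```python
import re

# Conversational filler that carries no search intent (an illustrative
# list, NOT the copilot's actual stop-word set).
FILLERS = {"hey", "hi", "please", "how", "to", "solve", "the", "with",
           "what", "happened", "can", "you", "tell", "me", "about"}

def rewrite_query(raw: str) -> str:
    """Strip filler words and punctuation, keeping search-relevant terms."""
    words = re.findall(r"[a-z0-9]+", raw.lower())
    return " ".join(w for w in words if w not in FILLERS)

print(rewrite_query("hey how to solve the issue with loan emi?"))   # issue loan emi
print(rewrite_query("what happened with Swift transfer delays?"))   # swift transfer delays
```

An LLM-based rewriter can additionally reorder and normalize terms (as in the "Loan EMI issue" example above), which a word filter alone cannot do.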
Tool Selection
Based on your query, the agent automatically selects the right search tool:
- lookup_incident_by_id: when you mention a specific incident ID
- search_similar_incidents: when you describe a problem or error
- get_incidents_by_application: when you ask about a specific app or system
- get_recent_incidents: when you ask about recent timeframes
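The routing above can be pictured as a dispatcher. The keyword rules below are a simplified stand-in for the agent's LLM-driven tool selection, and the ID pattern follows the INC-YYYY-MM-DD-NNN format used in this page's examples:

```python
import re

def select_tool(query: str) -> str:
    """Route a query to a search tool (keyword-based stand-in for the
    agent's LLM-driven tool selection)."""
    q = query.lower()
    if re.search(r"\binc-\d{4}-\d{2}-\d{2}-\d+\b", q):
        return "lookup_incident_by_id"         # explicit incident ID
    if any(w in q for w in ("recent", "yesterday", "last week")):
        return "get_recent_incidents"          # temporal phrasing
    if q.startswith(("incidents in", "issues in")):
        return "get_incidents_by_application"  # app/system-scoped query
    return "search_similar_incidents"          # default: semantic search

print(select_tool("Tell me about incident INC-2025-08-24-001"))  # lookup_incident_by_id
print(select_tool("Show me recent incidents"))                   # get_recent_incidents
```

In the real agent, the LLM chooses tools from their descriptions rather than from hand-written patterns, which is why clear queries route better.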
Knowledge Retrieval
The agent searches your knowledge base using vector similarity search to find the most relevant historical incidents and solutions.
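Vector similarity search ranks incidents by how close their embeddings are to the query's embedding. A toy sketch with 3-dimensional vectors — in Support Bot the vectors come from all-MiniLM-L6-v2 and are stored in Qdrant, so treat the data below as purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings; real ones are 384-dimensional model outputs.
knowledge_base = {
    "INC-001 Swift transfer delays": [0.9, 0.1, 0.0],
    "INC-002 Loan EMI calculation bug": [0.1, 0.8, 0.2],
}

def search_similar(query_vec, k=1):
    """Return the k incidents whose embeddings are closest to the query."""
    scored = sorted(knowledge_base.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

print(search_similar([0.85, 0.15, 0.05]))  # the Swift incident ranks first
```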
Key Features
Conversational Memory
The copilot maintains context across your entire conversation using PostgreSQL checkpointing.

Golden Examples
The copilot learns from verified incident resolutions to improve its recommendations.

Smart Query Rewriting
The copilot automatically cleans and optimizes your queries for better search results.

Multi-LLM Support
You can use any LLM provider with the copilot:

Anthropic Claude
Claude 3.5 Sonnet for powerful reasoning and analysis
OpenAI GPT
GPT-4 for general-purpose assistance
Google Gemini
Gemini Pro for cost-effective processing
Ollama (Local)
Run models locally for privacy and control
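One way to picture multi-provider support is a small registry keyed by provider name. The model identifiers and configuration keys below are illustrative (the Ollama entry in particular), not the copilot's actual configuration schema:

```python
# Illustrative provider registry; real model IDs and config keys may differ.
PROVIDERS = {
    "anthropic": {"model": "claude-3-5-sonnet", "strength": "reasoning"},
    "openai":    {"model": "gpt-4",             "strength": "general-purpose"},
    "google":    {"model": "gemini-pro",        "strength": "cost-effective"},
    "ollama":    {"model": "llama3",            "strength": "local/private"},  # assumed local model
}

def llm_config(provider: str, temperature: float = 0.2) -> dict:
    """Build an LLM config dict for the chosen provider."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return {"provider": provider, "temperature": temperature,
            **PROVIDERS[provider]}

print(llm_config("anthropic")["model"])  # claude-3-5-sonnet
```

Keeping provider choice behind a single factory like this is what lets the rest of the agent stay provider-agnostic.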
Agent Architecture
The copilot uses a LangGraph state machine with three main nodes.

State Management
The agent maintains state across interactions:
- Track conversation history
- Remember user context
- Generate conversation titles
- Manage tool execution
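Putting the pieces together, here is a minimal stand-in for the graph: three hypothetical nodes and a dict-based checkpointer playing the role of LangGraph's PostgreSQL checkpointer. Node names and state fields are assumptions for illustration, not the copilot's actual graph:

```python
# Dict-based checkpointer standing in for PostgreSQL persistence.
checkpoints = {}  # thread_id -> saved state

def understand(state):
    """Normalize the latest user message into a query."""
    state["query"] = state["messages"][-1].lower().strip("?")
    return "execute_tools"

def execute_tools(state):
    """Pretend to run a search tool against the knowledge base."""
    state["tool_result"] = f"searched for: {state['query']}"
    return "respond"

def respond(state):
    """Append the assistant's answer; terminal node returns None."""
    state["messages"].append(f"Found results ({state['tool_result']})")
    return None

NODES = {"understand": understand, "execute_tools": execute_tools,
         "respond": respond}

def run(thread_id, user_message):
    """One conversational turn: load state, walk the graph, persist."""
    state = checkpoints.get(thread_id, {"messages": []})
    state["messages"].append(user_message)
    node = "understand"
    while node is not None:
        node = NODES[node](state)
    checkpoints[thread_id] = state  # persist after each turn
    return state["messages"][-1]

print(run("t1", "Swift transfer delays?"))
print(len(checkpoints["t1"]["messages"]))  # 2 messages after one turn
```

Because state is reloaded by `thread_id` on every turn, follow-up questions in the same thread see the full history — the same property the real PostgreSQL checkpointer provides.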
Using the Copilot
In the Web Interface
The copilot powers the chat interface automatically. Just type your question.

Via the API
You can integrate the copilot into your own applications.

With Streaming
For real-time responses as the agent thinks, use streaming mode.

Best Practices
Be Specific with Incident IDs
When referencing specific incidents, include the full ID:
- ✅ “Tell me about incident INC-2025-08-24-001”
- ❌ “Tell me about that payment incident”
Use Clear Search Terms
The copilot works best with clear, specific terminology:
- ✅ “HTTP 403 forbidden errors in PayU integration”
- ❌ “that thing that’s not working”
Ask Follow-up Questions
Take advantage of conversation memory:
- “Show me database timeout incidents”
- “What was the root cause of the first one?”
- “Has this happened before?”
Leverage Time-based Queries
The copilot understands temporal context:
- “Recent incidents”
- “Last week’s failures”
- “Issues from yesterday”
Configuration Options
Temperature Control
Adjust the LLM’s creativity vs. consistency:
- Lower (0.0-0.3): More consistent, factual responses
- Medium (0.4-0.7): Balanced creativity and accuracy
- Higher (0.8-1.0): More creative, varied responses
Tracing with Langfuse
Enable observability to debug and monitor the copilot. Langfuse traces capture:
- LLM invocations
- Tool executions
- Token usage
- Response quality
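Conceptually, tracing wraps every LLM and tool call and records metadata about it. A stdlib-only sketch of that idea — the real integration uses the Langfuse SDK, not this hand-rolled decorator:

```python
import functools
import time

TRACES = []  # in the real setup these events are sent to Langfuse

def traced(kind):
    """Record name, duration, and output size of each wrapped call."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            out = fn(*args, **kwargs)
            TRACES.append({"kind": kind, "name": fn.__name__,
                           "seconds": time.perf_counter() - start,
                           "output_chars": len(str(out))})
            return out
        return wrapper
    return deco

@traced("tool")
def search_similar_incidents(query):
    # Stand-in for a real tool execution.
    return f"3 matches for {query!r}"

search_similar_incidents("Swift transfer delays")
print(TRACES[0]["kind"], TRACES[0]["name"])  # tool search_similar_incidents
```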
For privacy, Langfuse tracing is disabled by default; users must explicitly enable it.
Troubleshooting
Copilot Not Finding Incidents
If the copilot isn’t finding relevant incidents:
- Check your knowledge base: Ensure incidents are ingested into Qdrant
- Verify embeddings: The system uses all-MiniLM-L6-v2 for semantic search
- Review query formatting: Make sure queries are clear and specific
Slow Response Times
If responses are taking too long:
- Check LLM provider: Some models are faster than others
- Use streaming mode: Get partial responses while processing
- Optimize tool calls: Reduce the number of concurrent searches
Context Not Maintained
If the copilot forgets previous messages:
- Verify thread_id: Ensure the same thread ID is used across requests
- Check PostgreSQL: The checkpointer requires a working database connection
- Review session state: Make sure session_id is passed correctly
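A quick way to verify thread handling is a client helper that pins the thread ID across requests. The endpoint path and payload fields below are hypothetical; substitute your deployment's actual API:

```python
import json
import uuid

def build_chat_request(message, thread_id=None):
    """Build a chat request that reuses thread_id so the checkpointer can
    restore prior context. Endpoint and field names are hypothetical."""
    return {
        "url": "/api/chat",  # hypothetical endpoint
        "body": {
            "message": message,
            "thread_id": thread_id or str(uuid.uuid4()),
        },
    }

first = build_chat_request("Show me database timeout incidents")
tid = first["body"]["thread_id"]
# Follow-ups MUST reuse the same thread_id, or context is lost:
follow_up = build_chat_request("What was the root cause?", thread_id=tid)
assert follow_up["body"]["thread_id"] == tid
print(json.dumps(follow_up["body"], indent=2))
```

If a follow-up generates a fresh thread ID instead of reusing one, the checkpointer loads an empty state and the copilot appears to "forget" the conversation.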
Next Steps
Knowledge Base
Learn how to populate and manage your incident knowledge base
LangGraph Architecture
Deep dive into the agent’s internal workflow
LLM Providers
Configure and manage different AI model providers
API Reference
Integrate the copilot into your applications