The GitHub Webhook Server includes AI-powered features to enhance development workflows through intelligent test recommendations and AI agent integration.

PR Test Oracle

AI-powered test recommendations based on pull request diff analysis. The Test Oracle analyzes code changes and recommends which tests to run, helping teams focus testing efforts on affected areas.

Overview

PR Test Oracle is an external service that integrates with the webhook server to provide AI-driven test recommendations. It analyzes PR diffs with the configured AI model and suggests relevant tests based on the changed code. GitHub Repository: myk-org/pr-test-oracle

Configuration

Global Configuration:
test-oracle:
  server-url: "http://localhost:8000"
  ai-provider: "claude"  # claude | gemini | cursor
  ai-model: "claude-opus-4-6[1m]"
  test-patterns:
    - "tests/**/*.py"
    - "tests/**/*.js"
  triggers:
    - approved  # Run when /approve command is used
    # - pr-opened             # Run when PR is opened
    # - pr-synchronized       # Run when new commits pushed
Repository-specific Override:
repositories:
  my-project:
    test-oracle:
      server-url: "http://localhost:8000"
      ai-provider: "gemini"
      ai-model: "gemini-pro"
      test-patterns:
        - "tests/**/*.py"
      triggers:
        - approved
        - pr-synchronized

Configuration Options

Option          Required  Description
server-url      Yes       URL of the pr-test-oracle server
ai-provider     Yes       AI provider: claude, gemini, or cursor
ai-model        Yes       AI model to use (provider-specific)
test-patterns   No        Glob patterns for test files
triggers        No        When to run analysis (default: ["approved"])

Trigger Events

  • approved: Run when /approve command is used (default)
  • pr-opened: Run automatically when PR is opened
  • pr-synchronized: Run when new commits are pushed to PR
Note: “approved” refers to the /approve command trigger, not GitHub’s review approval state.
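
The trigger check described above can be sketched as a small predicate. This is illustrative, not the server's actual code; it only encodes the documented default of ["approved"]:

```python
def should_run_oracle(event: str, test_oracle_config: dict) -> bool:
    """Decide whether a webhook event should invoke the test oracle."""
    # Fall back to the documented default when no triggers are configured
    triggers = test_oracle_config.get("triggers", ["approved"])
    return event in triggers
```

For example, with the default configuration, a pr-opened event would not invoke the oracle, while the /approve command would.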

Manual Triggering

Users can manually request test recommendations:
/test-oracle
Behavior:
  • Command always works when test-oracle is configured
  • No trigger configuration needed for manual requests
  • Posts review comment with test recommendations
  • Links to relevant test files
Permissions: Any authenticated user

How It Works

  1. Trigger: Comment command or configured event trigger
  2. Health Check: Server verifies pr-test-oracle is accessible
  3. Analysis: Sends PR URL and configuration to test oracle
  4. AI Processing: Test oracle analyzes diff with AI model
  5. Recommendations: Posts review with recommended tests
  6. Error Handling: Gracefully handles failures without breaking workflow
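
The six steps above can be sketched as a single function. Everything here is illustrative: the helper callables (health_check, analyze, post_review, log) are hypothetical stand-ins for the real HTTP client and GitHub API calls.

```python
def run_test_oracle(pr_url, config, health_check, analyze, post_review, log):
    """Run the oracle pipeline without ever raising into the webhook flow."""
    # Step 2: verify the pr-test-oracle server is reachable
    if not health_check(config["server-url"]):
        post_review(pr_url, "Test oracle server is unavailable.")  # notify user
        log("test oracle health check failed")
        return False
    try:
        # Steps 3-4: send the PR URL and config; the oracle analyzes the diff
        recommendations = analyze(pr_url, config)
        # Step 5: post a review comment listing the recommended tests
        post_review(pr_url, "Recommended tests:\n" + "\n".join(recommendations))
        return True
    except Exception as exc:
        # Step 6: analysis errors are logged but never re-raised,
        # and no PR comment is posted for them
        log(f"test oracle analysis failed: {exc}")
        return False
```

Note the asymmetry, which matches the Error Handling section below: a failed health check posts a PR comment, while an analysis error is only logged.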

AI Providers

Supported AI providers and their configuration:

Claude (Anthropic)

test-oracle:
  ai-provider: "claude"
  ai-model: "claude-opus-4-6[1m]"  # or "sonnet", "haiku"
Container Environment:
ANTHROPIC_API_KEY=sk-ant-xxx

Gemini (Google)

test-oracle:
  ai-provider: "gemini"
  ai-model: "gemini-pro"  # or other Gemini models
Container Environment:
GEMINI_API_KEY=xxx

Cursor

test-oracle:
  ai-provider: "cursor"
  ai-model: "cursor-model"
Container Environment (API Key Method):
CURSOR_API_KEY=xxx
Container Environment (Interactive Login):
# Execute inside container to get login link
docker exec -it github-webhook-server agent

AI CLI Tools in Container

The container image includes these AI CLI tools:
Tool          Auth Method
Claude Code   ANTHROPIC_API_KEY environment variable
Gemini CLI    GEMINI_API_KEY environment variable
Cursor Agent  CURSOR_API_KEY environment variable, or interactive login

Docker Compose Configuration

version: "3.8"
services:
  github-webhook-server:
    image: ghcr.io/myk-org/github-webhook-server:latest
    environment:
      # AI CLI API keys for pr-test-oracle integration
      - ANTHROPIC_API_KEY=sk-ant-xxx       # Claude Code
      - GEMINI_API_KEY=xxx                  # Gemini CLI
      - CURSOR_API_KEY=xxx                  # Cursor Agent (API key method)
      # For Cursor interactive login: docker exec -it github-webhook-server agent
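
To keep the keys out of the compose file itself, the same variables can be supplied via an env_file. This is a standard Compose pattern, not specific to this project:

```yaml
services:
  github-webhook-server:
    image: ghcr.io/myk-org/github-webhook-server:latest
    env_file:
      - .env  # contains ANTHROPIC_API_KEY=..., GEMINI_API_KEY=..., CURSOR_API_KEY=...
```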

Error Handling

Health Check Failure:
  • PR comment posted notifying user of server unavailability
  • Webhook processing continues normally
  • Error logged for debugging
Analysis Errors:
  • Errors logged but no PR comment posted
  • Webhook processing continues
  • Never breaks the webhook flow
Example Error Scenarios:
  • Test oracle server not running
  • Invalid AI provider configuration
  • AI API rate limits exceeded
  • Network connectivity issues

Use Cases

Focused Testing:
Developer changes authentication code
→ AI recommends auth-related tests
→ Faster feedback, targeted testing
New Contributors:
First-time contributor submits PR
→ AI suggests comprehensive test coverage
→ Helps ensure quality without deep codebase knowledge
Large Refactoring:
Major code restructuring
→ AI identifies affected test suites
→ Ensures thorough regression testing

MCP Server for AI Agents

The webhook server includes Model Context Protocol (MCP) integration, enabling AI agents to interact with webhook logs and monitoring data programmatically.

Overview

MCP provides a secure, read-only interface for AI agents to analyze webhook processing, monitor system health, and assist with troubleshooting.

Enabling MCP Server

# Environment variable
export ENABLE_MCP_SERVER=true
Docker Compose:
services:
  github-webhook-server:
    environment:
      - ENABLE_MCP_SERVER=true

Available MCP Endpoints

Endpoint                                   Description                         Use Case
/mcp/webhook_server/healthcheck            Server health status                System monitoring and uptime checks
/mcp/logs/api/entries                      Historical log data with filtering  Log analysis and debugging
/mcp/logs/api/export                       Log export functionality            Data analysis and reporting
/mcp/logs/api/pr-flow/{identifier}         PR flow visualization data          Workflow analysis and timing
/mcp/logs/api/workflow-steps/{identifier}  Workflow timeline data              Performance analysis
Note: All MCP endpoints are proxied under the /mcp mount point for security isolation.
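
For scripted access, log queries are plain HTTP GETs against the /mcp mount point. The sketch below only builds the query URL; the filter parameter names (repository, level) are assumptions — check the server's log API for the exact options it accepts.

```python
from urllib.parse import urlencode

def mcp_entries_url(base: str, **filters) -> str:
    """Build a filtered query URL for /mcp/logs/api/entries.

    Filter names here are illustrative, not a documented contract.
    """
    query = urlencode({k: v for k, v in filters.items() if v is not None})
    return f"{base}/mcp/logs/api/entries" + (f"?{query}" if query else "")
```

For example, mcp_entries_url("http://localhost:8080", repository="myorg/myrepo", level="ERROR") yields a URL you can pass to any HTTP client.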

Security Design

The MCP integration follows a security-first approach:
  • Webhook Processing Protected: Core /webhook_server endpoint NOT exposed to AI agents
  • Read-Only Access: Only monitoring and log analysis endpoints available
  • No Static Files: CSS/JS assets excluded from MCP interface
  • API-Only: Clean interface designed specifically for AI operations
  • Dual-App Architecture: MCP runs on separate FastAPI app instance for isolation

Security Warning - Sensitive Log Data

IMPORTANT: The /mcp/logs/* endpoints expose potentially highly sensitive data:
  • 🔑 GitHub Personal Access Tokens and API credentials
  • 👤 User information and GitHub usernames
  • 📋 Repository details and webhook payloads
  • 🔒 Internal system information and error details
Required Security Measures:
  • ✅ Deploy only on trusted networks (VPN, internal network)
  • ✅ Never expose MCP endpoints directly to the internet
  • ✅ Implement reverse proxy authentication for any external access
  • ✅ Use firewall rules to restrict access to authorized IP ranges only
  • ✅ Monitor and audit access to these endpoints
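
One way to satisfy the reverse-proxy and firewall requirements is an nginx location block in front of the server. This is an illustrative sketch — the upstream port, network range, and auth file are placeholders for your deployment:

```nginx
location /mcp/ {
    allow 10.0.0.0/8;     # trusted internal range only
    deny  all;
    auth_basic           "MCP access";
    auth_basic_user_file /etc/nginx/mcp.htpasswd;
    proxy_pass http://127.0.0.1:8080;   # webhook server upstream (adjust port)
}
```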

Claude Desktop Integration

Add to your MCP settings:
{
  "mcpServers": {
    "github-webhook-server-logs": {
      "command": "npx",
      "args": ["mcp-remote", "http://your-server:port/mcp", "--allow-http"]
    }
  }
}

AI Agent Capabilities

With MCP integration, AI agents can:
  • Monitor webhook health and processing status in real-time
  • Analyze error patterns and provide intelligent troubleshooting recommendations
  • Track PR workflows and identify performance bottlenecks
  • Generate comprehensive reports on repository automation performance
  • Provide intelligent alerts for system anomalies and failures
  • Query logs naturally using plain English questions
  • Export filtered data for further analysis and reporting

Example AI Queries

Once configured, you can ask AI agents natural-language questions.
Error Analysis:
"Show me recent webhook errors from the last hour"
"What's the current health status of my webhook server?"
"Find all webhook failures for repository myorg/myrepo today"
Performance Analysis:
"Analyze the processing time for PR #123 and identify bottlenecks"
"Compare processing times between successful and failed webhooks"
"Show me memory usage patterns in recent webhook processing"
Workflow Monitoring:
"What happened with webhook delivery abc123?"
"Export error logs from the last 24 hours for analysis"
"Track the full lifecycle of PR #456"

Use Cases

Development Teams:
  • Automated troubleshooting with AI-powered error analysis
  • Performance monitoring with intelligent pattern recognition
  • Proactive alerting for webhook processing issues
DevOps Engineers:
  • Infrastructure monitoring with real-time health checks
  • Automated incident response with AI-driven root cause analysis
  • Capacity planning through historical performance data
Repository Maintainers:
  • PR workflow optimization by identifying processing bottlenecks
  • Community contribution monitoring with automated metrics
  • Quality assurance reporting and trend analysis

AI Features Configuration

Additional AI-powered enhancements for development workflows.

Conventional Title Suggestions

AI-powered suggestions for Conventional Commits-formatted PR titles.
Configuration:
ai-features:
  ai-provider: "claude"  # claude | gemini | cursor
  ai-model: "claude-opus-4-6[1m]"
  conventional-title: "true"  # "true" | "false" | "fix"
Modes:
  • "true": Show AI-suggested title in check run output when validation fails
  • "false": Disabled (default)
  • "fix": Auto-update PR title with AI suggestion when validation fails
Behavior:
  1. PR title fails conventional commit validation
  2. AI analyzes PR content and suggests properly formatted title
  3. Depending on mode:
    • “true”: Suggestion shown in check run for manual application
    • “fix”: Title automatically updated (suggestion validated first)
    • “false”: No AI suggestion
On AI CLI Failure:
  • Error is logged
  • Flow continues without suggestion
  • Original validation still enforced
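
The mode handling above can be sketched as follows. All names are hypothetical stand-ins for the server's internals; the callables represent the validator, the AI CLI call, and the GitHub title update.

```python
def handle_title(title, mode, is_conventional, suggest, update_title, log):
    """Apply the configured conventional-title mode when validation fails."""
    if is_conventional(title) or mode == "false":
        return title  # valid title, or feature disabled: nothing to do
    try:
        suggestion = suggest(title)
    except Exception as exc:
        # AI CLI failure: log and continue; original validation still applies
        log(f"AI suggestion failed: {exc}")
        return title
    if mode == "fix" and is_conventional(suggestion):
        update_title(suggestion)  # auto-apply only a suggestion that validates
        return suggestion
    log(f"suggested title: {suggestion}")  # mode "true": surface in check run
    return title
```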

Technical Implementation

The AI integration uses shared modules for consistency:

Test Oracle Module

Location: webhook_server/libs/test_oracle.py
Function: call_test_oracle(github_webhook, pull_request, trigger=None)
Features:
  • Shared helper for all test oracle integrations
  • Health check before analysis
  • Configurable triggers
  • Graceful error handling

AI CLI Module

Location: webhook_server/libs/ai_cli.py
Features:
  • Shared wrapper for AI CLI tools
  • Provider abstraction (Claude, Gemini, Cursor)
  • Command execution with timeout
  • Error handling and logging
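
A provider-agnostic wrapper of this shape might look like the sketch below. The command names and flags in PROVIDER_COMMANDS are assumptions, not the module's actual invocations:

```python
import subprocess

# Assumed CLI invocations per provider; verify against the installed tools
PROVIDER_COMMANDS = {
    "claude": ["claude", "-p"],
    "gemini": ["gemini", "-p"],
    "cursor": ["cursor-agent", "-p"],
}

def run_ai_cli(provider, prompt, timeout=120):
    """Run the provider's CLI with a timeout; return output or None on failure."""
    cmd = PROVIDER_COMMANDS.get(provider)
    if cmd is None:
        return None  # unknown provider
    try:
        result = subprocess.run(
            cmd + [prompt], capture_output=True, text=True, timeout=timeout
        )
        return result.stdout.strip() if result.returncode == 0 else None
    except (OSError, subprocess.TimeoutExpired):
        # Missing binary or hung process: fail soft, never raise to the caller
        return None
```

Failing soft (returning None) matches the document's requirement that AI failures never block the webhook flow.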

MCP Integration

Library: fastapi-mcp
Features:
  • Automatic endpoint discovery
  • Structured responses
  • Error handling
  • Performance optimization

Best Practices

Security

  1. API Keys: Store AI provider API keys securely in environment variables
  2. Network Isolation: Deploy AI services on trusted networks only
  3. Access Control: Restrict MCP endpoints to authorized agents
  4. Audit Logging: Monitor AI agent access to webhook data
  5. Data Sanitization: Be aware of sensitive data in logs

Performance

  1. Trigger Configuration: Choose appropriate trigger events to balance automation and API usage
  2. Model Selection: Select AI models based on speed/accuracy tradeoffs
  3. Timeout Settings: Configure reasonable timeouts for AI operations
  4. Error Handling: Ensure AI failures don’t block critical workflows
  5. Rate Limits: Monitor and respect AI provider rate limits

Operational

  1. Health Monitoring: Regularly check test oracle server availability
  2. Cost Tracking: Monitor AI provider API usage and costs
  3. Quality Metrics: Track usefulness of AI recommendations
  4. User Feedback: Collect feedback on AI-generated suggestions
  5. Continuous Improvement: Refine prompts and configuration based on results
