Installation
Gradle
Add the following to your build.gradle:
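A sketch of the dependency block; the group, artifact, and version shown are assumptions, so confirm the exact coordinates on the project's Maven Central listing.

```gradle
dependencies {
    // Coordinates are illustrative; confirm the group/artifact/version on Maven Central
    implementation 'io.openinference:openinference-instrumentation-springAI:<latest-version>'
    implementation 'org.springframework.ai:spring-ai-openai:1.0.0'
}
```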
Maven
Add the following to your pom.xml:
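The equivalent Maven dependency; as above, the coordinates are an assumption to be checked against the project's published artifacts.

```xml
<!-- Coordinates are illustrative; confirm the group/artifact/version on Maven Central -->
<dependency>
    <groupId>io.openinference</groupId>
    <artifactId>openinference-instrumentation-springAI</artifactId>
    <version><!-- latest version --></version>
</dependency>
```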
Requirements
- Java 17 or higher
- Spring AI 1.0.0 or higher
- OpenTelemetry Java 1.49.0 or higher
- Micrometer Observation 1.15.0 or higher
Quick Start
Basic Setup
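A minimal sketch of wiring the instrumentor into a chat model. The SpringAIInstrumentor registration call and its package are assumptions based on this project's class names, and the OpenAiChatModel builder shape follows Spring AI 1.0; check the Javadoc if either differs.

```java
import io.micrometer.observation.ObservationRegistry;
import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.api.OpenAiApi;

public class BasicSetup {
    public static void main(String[] args) {
        // One ObservationRegistry for the whole application
        ObservationRegistry registry = ObservationRegistry.create();

        // Register the OpenInference handler; the method name here is an
        // assumption -- consult the SpringAIInstrumentor Javadoc
        new SpringAIInstrumentor().instrument(registry);

        // Pass the registry to the model so every call is observed
        OpenAiChatModel chatModel = OpenAiChatModel.builder()
                .openAiApi(OpenAiApi.builder()
                        .apiKey(System.getenv("OPENAI_API_KEY"))
                        .build())
                .observationRegistry(registry)
                .build();

        System.out.println(chatModel.call("Hello!"));
    }
}
```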
OpenTelemetry Setup
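One way to configure the OpenTelemetry SDK with a batched OTLP exporter; the endpoint and service name are placeholders for your environment.

```java
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.resources.Resource;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class OtelSetup {
    public static OpenTelemetrySdk init() {
        // Always set a meaningful service.name on the resource
        Resource resource = Resource.getDefault().merge(Resource.create(
                Attributes.of(AttributeKey.stringKey("service.name"), "spring-ai-demo")));

        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .setResource(resource)
                .addSpanProcessor(BatchSpanProcessor.builder(
                        OtlpGrpcSpanExporter.builder()
                                .setEndpoint("http://localhost:4317") // your collector
                                .build())
                        .build())
                .build();

        // Flush remaining spans on shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(tracerProvider::close));

        return OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .buildAndRegisterGlobal();
    }
}
```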
Basic Setup with Phoenix
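The same SDK setup pointed at a local Phoenix instance, which accepts OTLP over HTTP at /v1/traces on its UI port:

```java
import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class PhoenixSetup {
    public static OpenTelemetrySdk init() {
        // Export spans to a locally running Phoenix instance
        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(
                        OtlpHttpSpanExporter.builder()
                                .setEndpoint("http://localhost:6006/v1/traces")
                                .build())
                        .build())
                .build();

        return OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .buildAndRegisterGlobal();
    }
}
```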
Configuration
Custom Trace Configuration
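Assuming the Java instrumentation follows the shared OpenInference configuration spec, capture can be controlled with environment variables set before the application starts, for example:

```shell
# OpenInference configuration variables (per the shared OpenInference spec);
# use these to redact sensitive content from traces
export OPENINFERENCE_HIDE_INPUTS=true            # hide all inputs
export OPENINFERENCE_HIDE_OUTPUTS=true           # hide all outputs
export OPENINFERENCE_HIDE_INPUT_MESSAGES=true    # hide input message content
export OPENINFERENCE_HIDE_OUTPUT_MESSAGES=true   # hide output message content
```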
Control what information is captured in traces.
Chat Options
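A hedged sketch using Spring AI's OpenAiChatOptions builder; the model name and parameter values are illustrative. The parameters set here show up in the trace as invocation parameters.

```java
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatOptions;

class ChatOptionsExample {
    static String ask(ChatModel chatModel) {
        // Per-request model parameters; captured as invocation parameters
        OpenAiChatOptions options = OpenAiChatOptions.builder()
                .model("gpt-4o-mini")
                .temperature(0.7)
                .maxTokens(512)
                .topP(0.9)
                .build();

        Prompt prompt = new Prompt("Summarize observability in one sentence.", options);
        return chatModel.call(prompt).getResult().getOutput().getText();
    }
}
```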
Configure model parameters per request.
Spring Boot Integration
Application Configuration
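A typical application.yml using Spring AI's standard OpenAI properties; the model and temperature values are illustrative.

```yaml
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}   # never hard-code the key
      chat:
        options:
          model: gpt-4o-mini
          temperature: 0.7
```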
Service Example
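A sketch of a service built on the auto-configured ChatClient.Builder, which already carries the application's ObservationRegistry when Spring Boot's observability auto-configuration is active:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ChatClient chatClient;

    // ChatClient.Builder is auto-configured by Spring Boot
    public ChatService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String chat(String userMessage) {
        // Each call produces an OpenInference span
        return chatClient.prompt()
                .user(userMessage)
                .call()
                .content();
    }
}
```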
Tool Calling (Function Calling)
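A hedged sketch using Spring AI 1.0's @Tool annotation; the weather tool and its return value are hypothetical placeholders.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.annotation.Tool;

class WeatherTools {

    // Hypothetical tool for illustration; replace with a real implementation
    @Tool(description = "Returns the current weather for a city")
    String currentWeather(String city) {
        return "18°C and sunny in " + city;
    }
}

class WeatherExample {

    // The tool call's name, arguments, and response are captured in the trace
    static String ask(ChatClient chatClient) {
        return chatClient.prompt()
                .user("What's the weather like in Paris?")
                .tools(new WeatherTools())
                .call()
                .content();
    }
}
```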
Spring AI supports function calling with automatic tracing.
Captured Trace Data
The instrumentation automatically captures:
- LLM Model Information: Model name, provider
- Input Messages: User prompts, system messages, conversation history
- Output Messages: Model responses, assistant messages
- Invocation Parameters: Temperature, max tokens, top_p, etc.
- Token Usage: Prompt tokens, completion tokens, total tokens
- Tool Calls: Function names, arguments, and responses
- Message Roles: System, user, assistant, tool
- Timing Information: Request latency and duration
- Error Information: Exceptions and error messages
Multi-turn Conversations
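A sketch of a two-turn conversation that carries the message history forward, using Spring AI 1.0's message types (getText() assumed per the 1.0 API):

```java
import java.util.ArrayList;
import java.util.List;
import org.springframework.ai.chat.messages.AssistantMessage;
import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.messages.SystemMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;

class ConversationExample {
    static void run(ChatModel chatModel) {
        List<Message> history = new ArrayList<>();
        history.add(new SystemMessage("You are a concise assistant."));

        // First turn
        history.add(new UserMessage("What is the capital of France?"));
        ChatResponse first = chatModel.call(new Prompt(history));
        history.add(new AssistantMessage(first.getResult().getOutput().getText()));

        // Second turn reuses the history, so the full conversation is traced
        history.add(new UserMessage("And what is its population?"));
        chatModel.call(new Prompt(history));
    }
}
```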
Trace complete conversations with context.
Viewing Traces
Using Phoenix
- Start Phoenix locally
- Run your instrumented application
- View traces at http://localhost:6006
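The first step can be done with Docker; Phoenix serves its UI on port 6006 and an OTLP gRPC collector on 4317.

```shell
# Phoenix UI on 6006; OTLP gRPC collector on 4317
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest
```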
Using Other Backends
OpenInference instrumentation works with any OpenTelemetry-compatible backend:
- Jaeger: Change the OTLP endpoint to your Jaeger instance
- Zipkin: Use the Zipkin exporter
- Cloud Providers: AWS X-Ray, Google Cloud Trace, Azure Monitor
Best Practices
- Singleton Registry: Create a single ObservationRegistry instance and reuse it across your application
- Spring Boot Integration: Use Spring's dependency injection for ObservationRegistry
- Set Service Name: Always set a meaningful service.name in your OpenTelemetry resource
- Use Batch Processing: Use BatchSpanProcessor for better performance
- Handle Secrets: Never log API keys in traces
- Graceful Shutdown: Flush spans before application shutdown
Troubleshooting
No traces appearing
- Verify the ObservationRegistry is properly configured with SpringAIInstrumentor
- Ensure the registry is passed to your ChatModel via .observationRegistry()
- Check that your OTLP endpoint is accessible
- Enable debug logging for Spring AI observations