# Response Mapping
After sending requests to Google’s Gemini API, Antigravity transforms the responses back into the original protocol format expected by the client.

## Architecture
## Gemini to OpenAI Conversion
**Location:** `src-tauri/src/proxy/mappers/openai/response.rs`
### Basic Structure
A Gemini response contains a list of `candidates`, each carrying `content.parts`; the mapper flattens a candidate’s parts into a single OpenAI message.

### Content Extraction
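A minimal sketch of the extraction (Python for illustration; the real mapper is Rust, and the field names follow the public Gemini REST API):

```python
# Illustrative sketch only: concatenate the text parts of the first
# candidate of a Gemini response into a single content string.
def extract_content(gemini_response: dict) -> str:
    candidate = gemini_response["candidates"][0]
    parts = candidate.get("content", {}).get("parts", [])
    return "".join(p["text"] for p in parts if "text" in p)

gemini_response = {
    "candidates": [{
        "content": {"role": "model",
                    "parts": [{"text": "Hello, "}, {"text": "world!"}]},
        "finishReason": "STOP",
    }],
}
text = extract_content(gemini_response)
```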
The mapper processes each part of the candidate in turn, dispatching on the part’s type.

### Thinking Content Separation
Thinking models output two content streams:

- `content`: final answer visible to the user
- `reasoning_content`: internal reasoning process
Parts carrying internal reasoning are marked with a `thought: true` flag, which the mapper uses to route them into `reasoning_content`.
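The separation can be sketched like this (illustrative Python, not the actual Rust implementation):

```python
# Route parts flagged `thought: true` into reasoning_content,
# everything else into content (sketch only).
def split_thinking(parts: list[dict]) -> dict:
    content, reasoning = [], []
    for part in parts:
        if "text" not in part:
            continue
        (reasoning if part.get("thought") else content).append(part["text"])
    return {"content": "".join(content),
            "reasoning_content": "".join(reasoning)}

parts = [
    {"text": "Let me check the math...", "thought": True},
    {"text": "The answer is 42."},
]
result = split_thinking(parts)
```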
### Tool Calls Conversion
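As a sketch, a Gemini `functionCall` part becomes an OpenAI `tool_calls` entry; OpenAI expects `arguments` as a JSON string, while Gemini supplies `args` as a structured object. The id scheme shown is hypothetical (Gemini does not supply one):

```python
import json
import uuid

# Illustrative conversion of a Gemini functionCall part into an
# OpenAI tool_calls entry (not the real Rust code).
def map_tool_call(part: dict, index: int) -> dict:
    fc = part["functionCall"]
    return {
        "index": index,
        "id": f"call_{uuid.uuid4().hex[:12]}",  # synthesized: Gemini has no call id
        "type": "function",
        "function": {"name": fc["name"],
                     "arguments": json.dumps(fc.get("args", {}))},
    }

part = {"functionCall": {"name": "get_weather", "args": {"city": "Paris"}}}
tool_call = map_tool_call(part, 0)
```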
Gemini represents tool invocations as `functionCall` parts; the mapper converts each into an OpenAI `tool_calls` entry with JSON-encoded arguments.

### Image Responses
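A sketch of the conversion, assuming the image arrives as a base64 `inlineData` part (field names from the public Gemini API; the markdown layout is hypothetical):

```python
# Illustrative: turn a Gemini inlineData image part into a markdown
# data-URI image link.
def image_part_to_markdown(part: dict) -> str:
    blob = part["inlineData"]
    return f"![image](data:{blob['mimeType']};base64,{blob['data']})"

part = {"inlineData": {"mimeType": "image/png", "data": "iVBORw0KGgo="}}
md = image_part_to_markdown(part)
```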
Inline images (base64 `inlineData` parts) are converted to markdown image links.

### Grounding Metadata (Web Search)
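A sketch of appending sources, using the `groundingMetadata`/`groundingChunks` structure from the public Gemini API (the exact markdown layout here is an assumption):

```python
# Illustrative: append grounding (web search) sources as a markdown
# footer under the answer text.
def append_sources(content: str, grounding: dict) -> str:
    chunks = grounding.get("groundingChunks", [])
    if not chunks:
        return content
    lines = ["", "", "**Sources:**"]
    for i, chunk in enumerate(chunks, 1):
        web = chunk.get("web", {})
        lines.append(f"{i}. [{web.get('title', 'source')}]({web.get('uri', '')})")
    return content + "\n".join(lines)

grounding = {"groundingChunks": [{"web": {"title": "Example",
                                          "uri": "https://example.com"}}]}
out = append_sources("Answer.", grounding)
```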
When grounding is enabled, search results are appended to the message content as markdown.

### Finish Reason Mapping
| Gemini Reason | OpenAI Reason | Description |
|---|---|---|
| `STOP` | `stop` | Natural completion |
| `MAX_TOKENS` | `length` | Token limit reached |
| `SAFETY` | `content_filter` | Safety violation |
| `RECITATION` | `content_filter` | Copyright detection |
| Tool call present | `tool_calls` | Function called |
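The table above can be sketched as a mapping function (the precedence of tool calls over the raw finish reason follows the last table row; the fallback default is an assumption):

```python
# Illustrative finish-reason mapping from the table above.
def map_finish_reason(gemini_reason: str, has_tool_calls: bool) -> str:
    if has_tool_calls:
        return "tool_calls"  # a tool call takes precedence
    return {
        "STOP": "stop",
        "MAX_TOKENS": "length",
        "SAFETY": "content_filter",
        "RECITATION": "content_filter",
    }.get(gemini_reason, "stop")  # assumed default
```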
### Usage Metadata Mapping
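A sketch of the mapping, with Gemini-side field names taken from the public REST API (`usageMetadata`) and OpenAI-side names from the Chat Completions `usage` object:

```python
# Illustrative: map Gemini usageMetadata to OpenAI usage.
def map_usage(usage_metadata: dict) -> dict:
    prompt = usage_metadata.get("promptTokenCount", 0)
    completion = usage_metadata.get("candidatesTokenCount", 0)
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": usage_metadata.get("totalTokenCount",
                                           prompt + completion),
    }

usage = map_usage({"promptTokenCount": 12,
                   "candidatesTokenCount": 34,
                   "totalTokenCount": 46})
```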
## Gemini to Claude Conversion
**Location:** `src-tauri/src/proxy/mappers/claude/streaming.rs` (non-streaming uses similar logic)
### Message Structure
Claude expects a structured message whose content is a list of typed blocks.

### Content Block Types
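For example, a converted candidate might carry blocks like these (block types `thinking`, `text`, and `tool_use` follow Anthropic’s public Messages API; the conversion logic and the tool-use id are illustrative):

```python
# Illustrative: convert Gemini parts into Claude-style content blocks.
def to_claude_blocks(parts: list[dict]) -> list[dict]:
    blocks = []
    for part in parts:
        if part.get("thought") and "text" in part:
            blocks.append({"type": "thinking", "thinking": part["text"]})
        elif "text" in part:
            blocks.append({"type": "text", "text": part["text"]})
        elif "functionCall" in part:
            fc = part["functionCall"]
            blocks.append({"type": "tool_use",
                           "id": "toolu_01",  # placeholder id
                           "name": fc["name"],
                           "input": fc.get("args", {})})
    return blocks

blocks = to_claude_blocks([
    {"text": "Considering...", "thought": True},
    {"text": "Done."},
    {"functionCall": {"name": "lookup", "args": {"q": "x"}}},
])
```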
1. **Thinking block**: carries the model’s internal reasoning, mapped from `thought: true` parts

### Stop Reason Logic
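A sketch of the stop-reason mapping (Claude-side values `end_turn`, `max_tokens`, and `tool_use` come from Anthropic’s public API; the precedence and default shown here are assumptions):

```python
# Illustrative: map a Gemini finishReason to a Claude stop_reason.
def map_stop_reason(gemini_reason: str, has_tool_use: bool) -> str:
    if has_tool_use:
        return "tool_use"  # assumed precedence, mirroring the OpenAI path
    return {"STOP": "end_turn",
            "MAX_TOKENS": "max_tokens"}.get(gemini_reason, "end_turn")
```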
### Usage Conversion
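A sketch of the conversion, with Claude-side field names (`input_tokens`, `output_tokens`, `cache_read_input_tokens`) from Anthropic’s public API and the Gemini-side mapping assumed:

```python
# Illustrative: build a Claude-style usage object, including the
# cached-token count Gemini reports as cachedContentTokenCount.
def map_claude_usage(usage_metadata: dict) -> dict:
    return {
        "input_tokens": usage_metadata.get("promptTokenCount", 0),
        "output_tokens": usage_metadata.get("candidatesTokenCount", 0),
        "cache_read_input_tokens": usage_metadata.get("cachedContentTokenCount", 0),
    }

claude_usage = map_claude_usage({"promptTokenCount": 100,
                                 "candidatesTokenCount": 20,
                                 "cachedContentTokenCount": 80})
```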
Claude’s usage format includes cache information in addition to input and output token counts.

### Context Scaling
For large context windows (>1M tokens), reported usage can be scaled.

## Multi-Candidate Support
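For the OpenAI path, this amounts to fanning Gemini candidates out into indexed choices; a sketch (illustrative only, with the finish reason fixed for brevity):

```python
# Illustrative: map multiple Gemini candidates to OpenAI choices,
# preserving order via the `index` field.
def map_choices(candidates: list[dict]) -> list[dict]:
    choices = []
    for i, cand in enumerate(candidates):
        text = "".join(p.get("text", "")
                       for p in cand.get("content", {}).get("parts", []))
        choices.append({"index": i,
                        "message": {"role": "assistant", "content": text},
                        "finish_reason": "stop"})
    return choices

choices = map_choices([
    {"content": {"parts": [{"text": "A"}]}},
    {"content": {"parts": [{"text": "B"}]}},
])
```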
Both protocols support multiple response candidates (OpenAI’s `n` parameter).
## Image Generation Responses
For image models, Gemini returns the generated image as inline data parts, which are converted as described above.

## Signature Caching
Thought signatures are automatically cached for future requests:

- **Conversation continuity**: signatures persist across turns
- **Retry recovery**: failed requests can reuse signatures
- **Tool loop support**: function calls maintain signature context
## Response Validation
Before returning responses, the mapper validates:

- Required fields present (`id`, `object`, `choices`)
- At least one choice exists
- Valid finish reason for completed responses
- Usage data matches actual tokens (when available)
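The checks above can be sketched as a validation pass (illustrative; error messages and return shape are assumptions):

```python
# Illustrative validation of an OpenAI-shaped response, covering the
# checks listed above (required fields, non-empty choices, finish reason).
def validate_response(resp: dict) -> list[str]:
    errors = []
    for field in ("id", "object", "choices"):
        if field not in resp:
            errors.append(f"missing field: {field}")
    if not resp.get("choices"):
        errors.append("no choices present")
    elif any(c.get("finish_reason") is None for c in resp["choices"]):
        errors.append("missing finish reason")
    return errors

ok = validate_response({"id": "x", "object": "chat.completion",
                        "choices": [{"finish_reason": "stop"}]})
bad = validate_response({"object": "chat.completion"})
```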
## Error Responses
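For the OpenAI path, an upstream failure is wrapped in OpenAI’s standard error envelope; a sketch (the envelope shape follows OpenAI’s public API, while the status-to-type mapping is an assumption):

```python
# Illustrative: wrap an upstream failure in OpenAI's error envelope.
def to_openai_error(status: int, message: str) -> dict:
    types = {401: "authentication_error", 429: "rate_limit_error"}
    return {"error": {"message": message,
                      "type": types.get(status, "api_error"),
                      "code": status}}

err = to_openai_error(429, "Resource exhausted")
```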
When upstream errors occur, they are translated into the client protocol’s native error format.

## Performance Optimizations
### Zero-Copy Deserialization
Responses use `serde_json::from_str` with borrowed strings to avoid allocations.
### Lazy Evaluation
Usage metadata is only extracted when needed.

### String Pooling
Common strings are reused rather than reallocated.

## See Also
- Request Mapping - Converting requests to Gemini
- Streaming - Real-time SSE transformation
- Error Handling - Self-healing and retry