Overview

Antigravity Manager includes a built-in proxy server that translates OpenAI, Anthropic, and Gemini API requests to use your Google AI accounts.

Basic Configuration

Server Settings

enabled
boolean
default:"false"
Enable the proxy service
port
number
default:"8045"
Listening port for the proxy server
Range: 8000-65535
Location: config.rs:478
auto_start
boolean
default:"false"
Automatically start the proxy server when the application launches
Location: config.rs:487
request_timeout
number
default:"120"
API request timeout in seconds
Range: 30-7200 seconds
Location: config.rs:494-495

Network Access

allow_lan_access
boolean
default:"false"
Allow LAN access to the proxy server
  • false: Bind to 127.0.0.1 (local only, privacy-first)
  • true: Bind to 0.0.0.0 (allow LAN access)
Location: config.rs:463-467
When enabling LAN access, configure authentication (see auth_mode below) to prevent unauthorized use of your accounts.
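A quick way to confirm the server is up on the expected interface is to hit the /healthz route, which the default auto mode leaves unauthenticated. A minimal sketch using only the standard library; the host and port are the documented defaults:

```python
from urllib.request import urlopen


def proxy_is_up(host: str = "127.0.0.1", port: int = 8045) -> bool:
    """Return True if the proxy answers its /healthz route."""
    try:
        with urlopen(f"http://{host}:{port}/healthz", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, or DNS failure all mean "not reachable".
        return False
```

With allow_lan_access set to true, the same check should succeed from another machine on the LAN using the host's IP instead of 127.0.0.1.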

Authentication Modes

auth_mode
enum
default:"auto"
API authentication policy
Options:
  • off: No authentication required
  • strict: Authentication required for all routes
  • all_except_health: Authentication required except /healthz
  • auto: Recommended defaults (LAN=all_except_health, local=off)
Location: config.rs:469-476
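The auto mode can be restated as a small decision function. This is an illustrative Python sketch of the documented defaults, not the actual config.rs logic:

```python
def resolve_auth_mode(auth_mode: str, allow_lan_access: bool) -> str:
    """Map the configured auth_mode to the effective policy.

    "auto" picks all_except_health when the server is LAN-exposed
    and off when it only binds to localhost; explicit modes pass through.
    """
    if auth_mode != "auto":
        return auth_mode
    return "all_except_health" if allow_lan_access else "off"
```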
api_key
string
required
API key for client authentication
Format: Must start with sk- followed by a UUID
Default: Auto-generated on first run
Example: sk-a1b2c3d4e5f6...
Location: config.rs:481
admin_password
string
Password for the web UI management console
If not set, defaults to the api_key value.
Location: config.rs:484

Upstream Proxy Configuration

Route requests through an upstream proxy server:
upstream_proxy.enabled
boolean
default:"false"
Enable the upstream proxy
Location: config.rs:564
upstream_proxy.url
string
Upstream proxy URL
Supported protocols: http://, https://, socks5://
Example: http://127.0.0.1:7890
Location: config.rs:566
If the proxy URL doesn't include a protocol, http:// is automatically prepended. See normalize_proxy_url() in config.rs:10-21.
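The prepending behavior can be sketched like this (an illustrative Python re-statement; the actual logic is the Rust normalize_proxy_url() in config.rs):

```python
def normalize_proxy_url(url: str) -> str:
    """Prepend http:// when the URL lacks a supported protocol prefix."""
    url = url.strip()
    if not url:
        return url
    if not url.startswith(("http://", "https://", "socks5://")):
        return "http://" + url
    return url
```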

Logging Configuration

enable_logging
boolean
default:"true"
Enable request logging (required for token statistics)
Location: config.rs:499
debug_logging.enabled
boolean
default:"false"
Enable debug logging (saves the full request/response chain)
Location: config.rs:502
debug_logging.output_dir
string
Custom directory for debug logs
If not set, uses the default application data directory.
Location: config.rs:503

Advanced Settings

User-Agent Override

user_agent_override
string
Override the User-Agent header sent to upstream APIs
Example: antigravity/1.15.8 darwin/arm64
Location: config.rs:515
saved_user_agent
string
Persisted User-Agent value (kept even when the override is disabled)
Location: config.rs:536-537

Configuration Example

{
  "proxy": {
    "enabled": true,
    "port": 8045,
    "auto_start": true,
    "allow_lan_access": false,
    "auth_mode": "auto",
    "api_key": "sk-a1b2c3d4e5f6...",
    "admin_password": "my-secure-password",
    "request_timeout": 120,
    "enable_logging": true,
    "upstream_proxy": {
      "enabled": false,
      "url": ""
    },
    "debug_logging": {
      "enabled": false,
      "output_dir": null
    },
    "user_agent_override": null
  }
}
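Before writing a config file like the one above, the documented ranges and key format can be checked client-side. A minimal validation sketch based on the constraints listed on this page; it is not the application's actual validator:

```python
import re


def validate_proxy_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    errors = []
    port = cfg.get("port", 8045)
    if not 8000 <= port <= 65535:
        errors.append("port must be in 8000-65535")
    timeout = cfg.get("request_timeout", 120)
    if not 30 <= timeout <= 7200:
        errors.append("request_timeout must be in 30-7200 seconds")
    # api_key format: sk- followed by a UUID (36 hex/hyphen characters).
    if not re.fullmatch(r"sk-[0-9a-fA-F-]{36}", cfg.get("api_key", "")):
        errors.append("api_key must start with sk- followed by a UUID")
    return errors
```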

Usage Examples

Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8045/v1",
    api_key="sk-your-antigravity-key"
)

response = client.chat.completions.create(
    model="gemini-3-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Python (Anthropic SDK)

from anthropic import Anthropic

client = Anthropic(
    base_url="http://127.0.0.1:8045",
    api_key="sk-your-antigravity-key"
)

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.content[0].text)

Python (Native Gemini SDK)

import google.generativeai as genai

genai.configure(
    api_key="sk-your-antigravity-key",
    transport='rest',
    client_options={'api_endpoint': 'http://127.0.0.1:8045'}
)

model = genai.GenerativeModel('gemini-3-flash')
response = model.generate_content("Hello!")
print(response.text)