Some LLM providers use OAuth authentication instead of API keys. Nanobot supports OAuth providers through an interactive login flow.

Supported OAuth Providers

From nanobot/providers/registry.py, two OAuth providers are currently supported:
  1. OpenAI Codex - ChatGPT Codex Responses API
  2. GitHub Copilot - GitHub Copilot LLM access

OpenAI Codex

From nanobot/providers/registry.py:208-218:
ProviderSpec(
    name="openai_codex",
    keywords=("openai_codex", "codex", "gpt-5"),
    env_key="",                         # OAuth-based, no API key
    display_name="OpenAI Codex",
    litellm_prefix="openai-codex",     # model → openai-codex/model
    skip_prefixes=("openai-codex/", "openai_codex/"),
    env_extras=(),
    is_gateway=False,
    is_oauth=True,                      # OAuth-based authentication
)
Requirements:
  • ChatGPT Plus or Pro account
  • Interactive terminal for OAuth login

GitHub Copilot

From nanobot/providers/registry.py:221-237:
ProviderSpec(
    name="github_copilot",
    keywords=("github_copilot", "copilot"),
    env_key="",                         # OAuth-based, no API key
    display_name="Github Copilot",
    litellm_prefix="github_copilot",   # github_copilot/model → github_copilot/model
    skip_prefixes=("github_copilot/",),
    env_extras=(),
    is_gateway=False,
    is_local=False,
    is_oauth=True,                      # OAuth-based authentication
)
Requirements:
  • GitHub Copilot subscription
  • Interactive terminal for OAuth login

OpenAI Codex Implementation

Provider Architecture

From nanobot/providers/openai_codex_provider.py:20-24:
class OpenAICodexProvider(LLMProvider):
    """Use Codex OAuth to call the Responses API."""
    
    def __init__(self, default_model: str = "openai-codex/gpt-5.1-codex"):
        super().__init__(api_key=None, api_base=None)
        self.default_model = default_model

Authentication Flow

From nanobot/providers/openai_codex_provider.py:39-40:
token = await asyncio.to_thread(get_codex_token)
headers = _build_headers(token.account_id, token.access)
The oauth_cli_kit library handles the OAuth flow:
  1. Opens browser for ChatGPT login
  2. User authenticates with OpenAI
  3. Token is stored locally for reuse
  4. Access token is refreshed automatically

Request Headers

From nanobot/providers/openai_codex_provider.py:92-101:
def _build_headers(account_id: str, token: str) -> dict[str, str]:
    return {
        "Authorization": f"Bearer {token}",
        "chatgpt-account-id": account_id,
        "OpenAI-Beta": "responses=experimental",
        "originator": DEFAULT_ORIGINATOR,
        "User-Agent": "nanobot (python)",
        "accept": "text/event-stream",
        "content-type": "application/json",
    }

API Endpoint

From nanobot/providers/openai_codex_provider.py:16-17:
DEFAULT_CODEX_URL = "https://chatgpt.com/backend-api/codex/responses"
DEFAULT_ORIGINATOR = "nanobot"

Message Conversion

The Codex Responses API uses a different message format from the OpenAI Chat Completions API, so chat messages must be converted. From nanobot/providers/openai_codex_provider.py:136-193:
def _convert_messages(messages: list[dict[str, Any]]) -> tuple[str, list[dict[str, Any]]]:
    system_prompt = ""
    input_items: list[dict[str, Any]] = []
    
    for idx, msg in enumerate(messages):
        role = msg.get("role")
        content = msg.get("content")
        
        if role == "system":
            system_prompt = content if isinstance(content, str) else ""
            continue
        
        if role == "user":
            input_items.append(_convert_user_message(content))
            continue
        
        if role == "assistant":
            # Handle text first.
            if isinstance(content, str) and content:
                input_items.append(
                    {
                        "type": "message",
                        "role": "assistant",
                        "content": [{"type": "output_text", "text": content}],
                        "status": "completed",
                        "id": f"msg_{idx}",
                    }
                )
            # Then handle tool calls.
            for tool_call in msg.get("tool_calls", []) or []:
                fn = tool_call.get("function") or {}
                call_id, item_id = _split_tool_call_id(tool_call.get("id"))
                call_id = call_id or f"call_{idx}"
                item_id = item_id or f"fc_{idx}"
                input_items.append(
                    {
                        "type": "function_call",
                        "id": item_id,
                        "call_id": call_id,
                        "name": fn.get("name"),
                        "arguments": fn.get("arguments") or "{}",
                    }
                )
            continue
        
        if role == "tool":
            call_id, _ = _split_tool_call_id(msg.get("tool_call_id"))
            output_text = content if isinstance(content, str) else json.dumps(content, ensure_ascii=False)
            input_items.append(
                {
                    "type": "function_call_output",
                    "call_id": call_id,
                    "output": output_text,
                }
            )
            continue
    
    return system_prompt, input_items
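Concretely, a standard chat history maps to a system prompt plus a flat list of Codex input items. The example below is hand-written to mirror the function above, not produced by it; in particular the `input_text` content type for user messages is an assumption (that helper, `_convert_user_message`, is not shown here).

```python
# A standard OpenAI-style chat history with a tool round-trip...
chat_messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": "", "tool_calls": [
        {"id": "call_2",
         "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
    ]},
    {"role": "tool", "tool_call_id": "call_2", "content": "18°C, sunny"},
]

# ...would roughly become: the system message pulled out as the prompt,
# and every other turn flattened into typed input items.
system_prompt = "You are helpful."
input_items = [
    # user turn ("input_text" content type assumed for this sketch)
    {"type": "message", "role": "user",
     "content": [{"type": "input_text", "text": "What's the weather in Paris?"}]},
    # assistant tool call, split into call_id / item id
    {"type": "function_call", "id": "fc_2", "call_id": "call_2",
     "name": "get_weather", "arguments": '{"city": "Paris"}'},
    # tool result, linked back via call_id
    {"type": "function_call_output", "call_id": "call_2", "output": "18°C, sunny"},
]
```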

Prompt Caching

From nanobot/providers/openai_codex_provider.py:224-226:
def _prompt_cache_key(messages: list[dict[str, Any]]) -> str:
    raw = json.dumps(messages, ensure_ascii=True, sort_keys=True)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
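Because the key is built with sort_keys=True, the order of keys inside each message dict does not affect it; only the content and the order of messages do. Repeating the helper so the demo runs standalone:

```python
import hashlib
import json
from typing import Any


def _prompt_cache_key(messages: list[dict[str, Any]]) -> str:
    # Same helper as above, repeated for a self-contained snippet.
    raw = json.dumps(messages, ensure_ascii=True, sort_keys=True)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()


a = _prompt_cache_key([{"role": "user", "content": "hi"}])
b = _prompt_cache_key([{"content": "hi", "role": "user"}])  # same message, keys reordered
c = _prompt_cache_key([{"role": "user", "content": "hello"}])

assert a == b  # key order is normalized away by sort_keys=True
assert a != c  # different content yields a different cache key
```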

Usage

Login Flow

OpenAI Codex

nanobot provider login openai-codex
This command:
  1. Opens your browser to ChatGPT login page
  2. Prompts you to authenticate
  3. Stores access token locally
  4. Confirms successful authentication

GitHub Copilot

nanobot provider login github-copilot
The flow is the same, but you authenticate with GitHub instead of OpenAI.

Configuration

After login, configure your model in ~/.nanobot/config.json:

OpenAI Codex

{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
Available models:
  • gpt-5.1-codex - Latest Codex model
  • gpt-5.0-codex - Previous version

GitHub Copilot

{
  "agents": {
    "defaults": {
      "model": "github_copilot/gpt-4"
    }
  }
}

Starting a Chat

# Interactive mode
nanobot agent

# Single message
nanobot agent -m "Explain OAuth flow"

Docker Usage

For Docker users, interactive OAuth requires the -it flags:
# Login (requires interactive terminal)
docker run -it -v ~/.nanobot:/root/.nanobot nanobot provider login openai-codex

# Chat
docker run -it -v ~/.nanobot:/root/.nanobot nanobot agent

Advanced Features

Reasoning Effort

From nanobot/providers/openai_codex_provider.py:55-56:
if reasoning_effort:
    body["reasoning"] = {"effort": reasoning_effort}
Configure reasoning effort in your config:
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex",
      "reasoningEffort": "high"
    }
  }
}
Levels: low, medium, high

Tool Support

Codex supports function calling with automatic tool conversion. From nanobot/providers/openai_codex_provider.py:118-133:
def _convert_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Convert OpenAI function-calling schema to Codex flat format."""
    converted: list[dict[str, Any]] = []
    for tool in tools:
        fn = (tool.get("function") or {}) if tool.get("type") == "function" else tool
        name = fn.get("name")
        if not name:
            continue
        params = fn.get("parameters") or {}
        converted.append({
            "type": "function",
            "name": name,
            "description": fn.get("description") or "",
            "parameters": params if isinstance(params, dict) else {},
        })
    return converted
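For example, an OpenAI-style tool definition (schema nested under "function") flattens into the Codex shape. The converter is repeated from above so the snippet runs standalone:

```python
from typing import Any


def _convert_tools(tools: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Convert OpenAI function-calling schema to Codex flat format (repeated from above)."""
    converted: list[dict[str, Any]] = []
    for tool in tools:
        fn = (tool.get("function") or {}) if tool.get("type") == "function" else tool
        name = fn.get("name")
        if not name:
            continue
        params = fn.get("parameters") or {}
        converted.append({
            "type": "function",
            "name": name,
            "description": fn.get("description") or "",
            "parameters": params if isinstance(params, dict) else {},
        })
    return converted


# OpenAI nests the schema under "function"; Codex wants it flattened.
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather.",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}

codex_tools = _convert_tools([openai_tool])
assert codex_tools[0]["name"] == "get_weather"
assert "function" not in codex_tools[0]  # the nesting is removed
```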

Error Handling

From nanobot/providers/openai_codex_provider.py:313-316:
def _friendly_error(status_code: int, raw: str) -> str:
    if status_code == 429:
        return "ChatGPT usage quota exceeded or rate limit triggered. Please try again later."
    return f"HTTP {status_code}: {raw}"

SSL Verification Fallback

From nanobot/providers/openai_codex_provider.py:64-70:
try:
    content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=True)
except Exception as e:
    if "CERTIFICATE_VERIFY_FAILED" not in str(e):
        raise
    logger.warning("SSL certificate verification failed for Codex API; retrying with verify=False")
    content, tool_calls, finish_reason = await _request_codex(url, headers, body, verify=False)

Troubleshooting

Token Expired

If you receive authentication errors, re-login:
nanobot provider login openai-codex
Tokens are automatically refreshed, but manual re-login may be needed if the refresh token expires.

Browser Not Opening

If the browser doesn’t open automatically:
  1. Copy the URL from terminal output
  2. Open it manually in your browser
  3. Complete authentication
  4. Return to terminal

Rate Limits

Codex uses ChatGPT usage quotas. If you hit limits:
  • Plus users: Wait for quota refresh (typically hourly)
  • Pro users: Higher limits, but still rate-limited
  • Alternative: Switch to API key-based provider temporarily

Docker Interactive Issues

Ensure you use -it flags:
# Correct
docker run -it -v ~/.nanobot:/root/.nanobot nanobot provider login openai-codex

# Incorrect (will fail)
docker run -v ~/.nanobot:/root/.nanobot nanobot provider login openai-codex

Comparison: OAuth vs API Keys

| Feature          | OAuth Providers     | API Key Providers   |
|------------------|---------------------|---------------------|
| Authentication   | Browser-based login | API key in config   |
| Token Management | Auto-refresh        | Manual key rotation |
| Billing          | Subscription-based  | Usage-based         |
| Access           | Tied to account     | Tied to key         |
| Revocation       | Logout/revoke       | Delete key          |
| Setup            | Interactive         | Copy-paste          |
| Docker           | Requires -it        | Works in background |
| CI/CD            | Not recommended     | Recommended         |

Security Considerations

  1. Token Storage: OAuth tokens are stored locally in ~/.nanobot/ - protect this directory
  2. Token Lifespan: Tokens expire and are refreshed automatically
  3. Revocation: Revoke access through provider’s account settings if needed
  4. Shared Systems: Don’t use OAuth providers on shared systems; use service accounts with API keys instead
  5. CI/CD: OAuth requires interactive login; use API key providers for automation

Best Practices

  1. Use for personal workflows: OAuth is ideal for individual developer use
  2. Use API keys for automation: Bots, CI/CD, and servers should use API key providers
  3. Protect token files: Set appropriate file permissions on ~/.nanobot/
  4. Re-authenticate periodically: If you notice authentication issues, re-login
  5. Monitor usage: OAuth providers tie to subscription quotas; monitor usage to avoid interruptions
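Practice 3 can be done with standard file permissions. This assumes token files sit directly under ~/.nanobot/ as JSON files; adjust the glob if your layout differs:

```shell
# Restrict the nanobot config directory (and any token files in it) to your user.
mkdir -p ~/.nanobot
chmod 700 ~/.nanobot
chmod 600 ~/.nanobot/*.json 2>/dev/null || true  # no-op if no token files yet

# Verify: only the owner should have read/write/execute on the directory.
ls -ld ~/.nanobot
```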

Model Prefixing

OAuth providers use model prefixing to ensure proper routing. As documented in README.md, the registry prefixes bare model names automatically:
{
  "agents": {
    "defaults": {
      "model": "gpt-5.1-codex"  // Auto-prefixed to "openai-codex/gpt-5.1-codex"
    }
  }
}
You can also use the full prefix explicitly:
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
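The auto-prefixing rule can be sketched from the ProviderSpec fields shown earlier (litellm_prefix and skip_prefixes). This is a minimal illustration of the behavior, not the actual registry code; the function name resolve_model is invented for the example:

```python
# Fields from the openai_codex ProviderSpec shown in the registry excerpt above.
LITELLM_PREFIX = "openai-codex"
SKIP_PREFIXES = ("openai-codex/", "openai_codex/")


def resolve_model(model: str) -> str:
    """Prepend the provider prefix unless the model name is already prefixed."""
    if model.startswith(SKIP_PREFIXES):  # str.startswith accepts a tuple
        return model
    return f"{LITELLM_PREFIX}/{model}"


assert resolve_model("gpt-5.1-codex") == "openai-codex/gpt-5.1-codex"
assert resolve_model("openai-codex/gpt-5.1-codex") == "openai-codex/gpt-5.1-codex"
```

Either config spelling therefore routes to the same LiteLLM model string.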
