OpenAI Backend (Stub)

The OpenAIBackend is a placeholder for ChatGPT / GPT-4 prompt compilation. It is scheduled for Phase 2 expansion.

Status

Implementation: NOT YET IMPLEMENTED
Priority: Scheduled for Phase 2 expansion
Reference: See anthropic_backend.py for implementation pattern

Planned Design

Target API

OpenAI Chat Completions API:
https://platform.openai.com/docs/api-reference/chat/create
Models: GPT-4, GPT-4 Turbo, GPT-3.5, future models

Key Differences from Anthropic

  1. Message Roles: OpenAI passes system instructions as a "system" message inside the messages list (Anthropic takes a top-level system parameter)
  2. Tool Calls: Uses the tools array (legacy function_call) with a parameters schema instead of Anthropic's input_schema
  3. JSON Mode: Structured output via response_format: {"type": "json_object"}
  4. Thinking: No extended thinking blocks (chain-of-thought must be elicited via prompt engineering)
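The first of these differences can be sketched side by side. The request shapes below are illustrative of the two APIs and are not produced by any existing Axon code:

```python
# Anthropic: system prompt is a top-level string parameter.
anthropic_request = {
    "model": "claude-3-opus-20240229",
    "system": "You are LegalExpert...",
    "messages": [{"role": "user", "content": "Review this clause."}],
}

# OpenAI: system instructions travel as the first entry in the messages list.
openai_request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are LegalExpert..."},
        {"role": "user", "content": "Review this clause."},
    ],
}
```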

Stub Implementation

from typing import Any

from axon.backends.base_backend import BaseBackend, CompiledStep, CompilationContext
from axon.compiler.ir_nodes import IRNode, IRPersona, IRContext, IRAnchor, IRToolSpec

class OpenAIBackend(BaseBackend):
    """Stub implementation for the OpenAI backend."""

    @property
    def name(self) -> str:
        return "openai"

    def compile_step(
        self, step: IRNode, context: CompilationContext
    ) -> CompiledStep:
        raise NotImplementedError(
            "OpenAI backend is not yet implemented. "
            "Scheduled for Phase 2 expansion. "
            "See: anthropic_backend.py for reference implementation."
        )

    def compile_system_prompt(
        self,
        persona: IRPersona | None,
        context: IRContext | None,
        anchors: list[IRAnchor],
    ) -> str:
        raise NotImplementedError(
            "OpenAI system prompt compilation is not yet implemented. "
            "Should produce OpenAI Chat Completions 'system' role content."
        )

    def compile_tool_spec(self, tool: IRToolSpec) -> dict[str, Any]:
        raise NotImplementedError(
            "OpenAI tool spec compilation is not yet implemented. "
            "Should produce OpenAI function_call / tool_call format."
        )

Planned Features

System Message Format

Expected structure for OpenAI’s system role:
system_message = {
    "role": "system",
    "content": """
You are LegalExpert, an AI assistant specializing in contract law and IP.

Communication Guidelines:
- Tone: precise and analytical
- Always cite sources for factual claims
- Only provide claims you are at least 85% confident about

Hard Constraints (NEVER violate):
1. NoHallucination: You must cite all sources. You must not hallucinate or speculate.
2. When uncertain, respond with: "I don't have sufficient information to answer this confidently."
"""
}
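A future compile_system_prompt() will likely assemble that content from persona fields. The helper below is a minimal sketch; its name, parameters, and section layout are assumptions for illustration, not part of the current codebase:

```python
def build_system_content(persona_name, description, guidelines, constraints):
    """Assemble an OpenAI 'system' message from persona fields.

    Hypothetical helper: names and layout are illustrative, not Axon API.
    """
    lines = [f"You are {persona_name}, {description}", ""]
    lines.append("Communication Guidelines:")
    lines.extend(f"- {g}" for g in guidelines)
    lines.append("")
    lines.append("Hard Constraints (NEVER violate):")
    lines.extend(f"{i}. {c}" for i, c in enumerate(constraints, 1))
    return {"role": "system", "content": "\n".join(lines)}

msg = build_system_content(
    "LegalExpert",
    "an AI assistant specializing in contract law and IP.",
    ["Tone: precise and analytical", "Always cite sources for factual claims"],
    ["NoHallucination: You must cite all sources."],
)
```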

Tool Declaration Format

OpenAI tools array format (the current format, which supersedes the legacy function_call parameter):
{
    "type": "function",
    "function": {
        "name": "WebSearch",
        "description": "Search the web for information. Provider: brave. Timeout: 10s.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query"
                },
                "max_results": {
                    "type": "integer",
                    "description": "Maximum number of results",
                    "default": 5
                }
            },
            "required": ["query"]
        }
    }
}
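compile_tool_spec() will largely amount to a key-for-key mapping from the IR tool fields into that structure. A sketch, with field names assumed for illustration (IRToolSpec's real attributes may differ):

```python
def to_openai_tool(name, description, properties, required):
    """Map assumed IR tool fields into an OpenAI tools-array entry.

    Field names are illustrative; IRToolSpec's real attributes may differ.
    """
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

tool = to_openai_tool(
    "WebSearch",
    "Search the web for information.",
    {"query": {"type": "string", "description": "The search query"}},
    ["query"],
)
```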

JSON Mode for Structured Output

# When IRProbe or structured type is expected:
request_params = {
    "model": "gpt-4",
    "messages": [...],
    "response_format": {"type": "json_object"},
    "tools": [...]  # Optional
}
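Even in JSON mode, the reply should be parsed defensively; a minimal sketch of the consuming side:

```python
import json

def parse_structured_reply(raw_text):
    """Parse a JSON-mode reply, returning (data, error) instead of raising.

    JSON mode makes syntactically valid JSON very likely, but guarding
    the failure path keeps probe handling robust.
    """
    try:
        return json.loads(raw_text), None
    except json.JSONDecodeError as exc:
        return None, f"model returned non-JSON output: {exc}"

data, err = parse_structured_reply('{"verdict": "pass", "confidence": 0.9}')
```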

Implementation Roadmap

Phase 1: Core Compilation

  • Implement compile_system_prompt() with persona + anchors
  • Implement compile_step() for IRStep, IRIntent, IRProbe
  • Add message role wrapping (system, user)

Phase 2: Advanced Features

  • Implement compile_tool_spec() for function calling
  • Add IRReason compilation with chain-of-thought framing
  • Implement IRWeave compilation for synthesis

Phase 3: Optimization

  • Tune prompt engineering for GPT-4’s reasoning style
  • Add context window management (8k / 32k / 128k)
  • Implement JSON schema validation for structured output
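The context window management item in Phase 3 could start from a simple token-budget trim. The sketch below uses a rough characters-per-token estimate purely for illustration; a real implementation would count tokens with the model's tokenizer:

```python
def trim_messages(messages, max_tokens, chars_per_token=4):
    """Drop the oldest non-system messages until the estimated budget fits.

    Token counts are estimated as len(content) // chars_per_token; this is
    a placeholder heuristic, not a real tokenizer.
    """
    def estimate(msgs):
        return sum(len(m["content"]) // chars_per_token for m in msgs)

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and estimate(system + rest) > max_tokens:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

messages = [
    {"role": "system", "content": "x" * 40},
    {"role": "user", "content": "a" * 400},
    {"role": "user", "content": "b" * 40},
]
trimmed = trim_messages(messages, max_tokens=30)
```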

How to Implement

1. Study the Reference

Read anthropic_backend.py to understand:
  • System prompt structure
  • Step compilation dispatch
  • Tool specification format

2. Adapt for OpenAI

Key changes:
# Anthropic: single system prompt string
system_prompt = "You are LegalExpert..."

# OpenAI: system message object
system_message = {
    "role": "system",
    "content": "You are LegalExpert..."
}
# Anthropic: input_schema
"input_schema": {...}

# OpenAI: parameters (note: no "input_" prefix)
"parameters": {...}

3. Test with Mock Client

from axon.backends.openai_backend import OpenAIBackend
from tests.mocks import MockModelClient

backend = OpenAIBackend()
compiled = backend.compile_program(ir_program)

# Verify structure
assert compiled.backend_name == "openai"
assert "role" in compiled.execution_units[0].system_prompt  # Message format

Contributing

Interested in implementing the OpenAI backend?
  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/openai-backend
  3. Implement OpenAIBackend following the BaseBackend interface
  4. Add tests in tests/test_openai_backend.py
  5. Submit a PR with:
    • Implementation of all abstract methods
    • Unit tests for each compilation method
    • Integration test with MockModelClient
    • Documentation updates

Next Steps

Anthropic Reference

Study the reference implementation

Backend Overview

Review backend architecture principles