Custom Models Integration

KoreShield can proxy any OpenAI-compatible API endpoint. This is useful for self-hosted models, gateways, or third-party providers that expose OpenAI-style endpoints.

Setup

1. Identify Your API — ensure your custom model API is OpenAI-compatible (i.e. it supports POST /v1/chat/completions).
2. Configure KoreShield — set up the KoreShield proxy to route requests to your custom endpoint.
3. Set Environment Variables — configure the API keys and endpoints for your custom model.
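As a sketch of step 3: assuming a provider registered under the name custom and a self-hosted upstream, the environment might look like the following. The variable names here are illustrative — the key follows the uppercased-provider + _API_KEY convention noted under Compatibility Notes, while the base-URL variable name is an assumption to adapt to your deployment.

```shell
# Hypothetical example — adjust the provider name and URL to your deployment.
export CUSTOM_API_KEY="sk-your-upstream-key"            # key for the upstream model API
export CUSTOM_BASE_URL="http://models.internal:8080/v1" # upstream endpoint (variable name assumed)
```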

Basic Request

const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "your-model-name",
    messages: [{ role: "user", content: "Summarize the audit log." }]
  })
});
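The proxy passes through the standard OpenAI-style completion shape, so the reply can be pulled out of choices[0]. A minimal helper, assuming that usual response format (extractReply is an illustrative name, not part of KoreShield):

```javascript
// Extract the assistant's reply from an OpenAI-style chat completion.
// Assumes the standard response shape: { choices: [{ message: { content } }] }.
function extractReply(completion) {
  const choice = completion.choices?.[0];
  if (!choice?.message) {
    throw new Error("Unexpected completion shape");
  }
  return choice.message.content;
}
```

Usage with the fetch call above:

  const data = await response.json();
  console.log(extractReply(data));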

Streaming

const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "your-model-name",
    stream: true,
    messages: [{ role: "user", content: "Generate a short summary." }]
  })
});
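With stream: true, the proxy is expected to relay OpenAI-style server-sent events: data: lines carrying JSON chunks, terminated by data: [DONE]. A sketch of a parser for a buffer of complete SSE lines, assuming that wire format (collectDeltas is an illustrative name):

```javascript
// Concatenate delta content from OpenAI-style SSE text.
// Assumes each event is a "data: <json>" line and the stream
// ends with "data: [DONE]".
function collectDeltas(sseText) {
  let out = "";
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    const chunk = JSON.parse(payload);
    out += chunk.choices?.[0]?.delta?.content ?? "";
  }
  return out;
}
```

To use it with the fetch call above, read response.body with a ReadableStream reader, decode each value with a TextDecoder, and feed complete lines into the parser as they arrive.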

Compatibility Notes

  • The environment variable name must match the provider name, uppercased, with an _API_KEY suffix (for example, a provider named acme would read ACME_API_KEY)
  • If the upstream API expects additional headers, configure them in your gateway or extend the provider adapter

Error Handling

  • 403 indicates a blocked request due to policy enforcement
  • 429 or 5xx typically indicates provider or rate-limit issues
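The status codes above can be folded into a small dispatch helper. A sketch — the categories follow the two bullets above, and the function name and labels are illustrative, not KoreShield's actual error contract:

```javascript
// Classify a proxy response status per the notes above.
// 403 → blocked by policy; 429/5xx → provider or rate-limit trouble.
function classifyStatus(status) {
  if (status === 403) return "blocked";                    // policy enforcement rejected the request
  if (status === 429 || status >= 500) return "retryable"; // provider or rate-limit issue
  if (status >= 200 && status < 300) return "ok";
  return "client-error";                                   // other 4xx: fix the request
}
```

Usage:

  const response = await fetch(...);
  if (classifyStatus(response.status) === "retryable") { /* back off and retry */ }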

Next Steps

Configuration

Configure providers and security settings

OpenAI Integration

Review OpenAI-compatible routing