OpenAI Integration

Protect OpenAI usage from prompt injection and policy violations by scanning inputs with KoreShield and routing requests through the proxy.

Installation

npm install koreshield openai

Basic Integration

import { Koreshield } from "koreshield";
import OpenAI from "openai";

const koreshield = new Koreshield({
  apiKey: process.env.KORESHIELD_API_KEY
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

export async function secureChat(userMessage: string) {
  const scan = await koreshield.scan({
    content: userMessage,
    userId: "user-123"
  });

  if (scan.threat_detected) {
    throw new Error(`Threat detected: ${scan.threat_type}`);
  }

  const response = await openai.chat.completions.create({
    model: "gpt-5-mini",
    messages: [{ role: "user", content: userMessage }]
  });

  return response.choices[0].message.content;
}
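A caller can distinguish a KoreShield block from a provider failure by inspecting the thrown error. This is a sketch under one assumption: the `Error` message format is exactly the one `secureChat` above throws (the helper name `isThreatError` is ours, not part of the SDK).

```typescript
// Hypothetical helper: a KoreShield block surfaces as a thrown Error whose
// message starts with "Threat detected:" (the format secureChat uses),
// letting callers respond differently from provider failures.
export function isThreatError(err: unknown): boolean {
  return err instanceof Error && err.message.startsWith("Threat detected:");
}

// Usage:
// try {
//   const reply = await secureChat("Summarize the report.");
// } catch (err) {
//   if (isThreatError(err)) { /* show a policy message to the user */ }
//   else throw err; // provider or network error: let it propagate
// }
```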
Proxy Mode

Route OpenAI-compatible requests through the KoreShield proxy:

const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "gpt-5-mini",
    messages: [{ role: "user", content: "Summarize the report." }]
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);
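Because the proxy enforces policy before forwarding to OpenAI, callers should check the HTTP status rather than assume a completion came back. A minimal classifier, using the status conventions listed under Error Handling below (403 = blocked, 429/5xx = transient); the function name is ours:

```typescript
// Classify a proxy response by status code: 403 means KoreShield blocked
// the request, other non-2xx codes indicate provider or rate-limit issues.
export function classifyStatus(status: number): "ok" | "blocked" | "error" {
  if (status >= 200 && status < 300) return "ok";
  if (status === 403) return "blocked";
  return "error";
}

// Usage:
// const response = await fetch("http://localhost:8000/v1/chat/completions", ...);
// if (classifyStatus(response.status) === "blocked") {
//   /* surface a policy message instead of parsing choices */
// }
```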

Streaming

const stream = await openai.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Write a release note." }],
  stream: true
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(delta);
}
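Each chunk's delta content may be `undefined` (for example the initial role-only chunk), which is why the loop above guards with `|| ""`. If you also need the full reply after streaming, the accumulation is plain concatenation; `collectDeltas` is our illustrative name, not an SDK function:

```typescript
// Join streamed delta fragments into the final message text, treating
// chunks without content (e.g. the role-only chunk) as empty strings.
export function collectDeltas(deltas: (string | undefined)[]): string {
  return deltas.map(d => d ?? "").join("");
}
```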

Tool Use (Function Calling)

const tools: OpenAI.Chat.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "search_database",
      description: "Search the database",
      parameters: {
        type: "object",
        properties: {
          query: { type: "string" }
        },
        required: ["query"]
      }
    }
  }
];

const response = await openai.chat.completions.create({
  model: "gpt-5-mini",
  messages: [{ role: "user", content: "Search for user 123" }],
  tools
});
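The reply may contain `tool_calls` whose `arguments` field arrives as a JSON string; before executing a tool, parse and validate it like any other untrusted input (in a KoreShield deployment you would also scan it). A sketch assuming the `search_database` schema declared above; the helper name is ours:

```typescript
// Parse a tool call's JSON-encoded arguments, enforcing the field the
// search_database tool schema marks as required.
export function parseSearchArgs(rawArguments: string): { query: string } {
  const args = JSON.parse(rawArguments);
  if (typeof args.query !== "string") {
    throw new Error("tool call missing required 'query' argument");
  }
  return { query: args.query };
}

// Usage:
// const call = response.choices[0].message.tool_calls?.[0];
// if (call?.type === "function") {
//   const { query } = parseSearchArgs(call.function.arguments);
// }
```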

Embeddings

const response = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: ["Document text", "Another chunk"]
});

const vectors = response.data.map(d => d.embedding);
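A common next step is comparing the returned vectors, typically with cosine similarity. This is a helper you write yourself, not part of the OpenAI SDK:

```typescript
// Cosine similarity between two embedding vectors of equal length:
// 1.0 means identical direction, 0 means orthogonal (unrelated) texts.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Usage: cosineSimilarity(vectors[0], vectors[1])
```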

Assistants API (Threads)

await openai.beta.threads.messages.create("thread_id", {
  role: "user",
  content: "Summarize the incident"
});

const run = await openai.beta.threads.runs.create("thread_id", {
  assistant_id: "asst_xxxxx"
});

let status = await openai.beta.threads.runs.retrieve("thread_id", run.id);
// Poll while the run is still queued or in progress; checking only for
// "completed" would loop forever if the run fails, expires, or is cancelled.
while (status.status === "queued" || status.status === "in_progress") {
  await new Promise(resolve => setTimeout(resolve, 1000));
  status = await openai.beta.threads.runs.retrieve("thread_id", run.id);
}
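Once polling ends, fetch the reply with `openai.beta.threads.messages.list("thread_id")`. Extracting the text means walking the message's content parts; the sketch below uses simplified local types (not the SDK's own) and assumes the default newest-first ordering of the list:

```typescript
// Simplified shapes for Assistants API messages (text parts only).
type TextPart = { type: "text"; text: { value: string } };
type ThreadMessage = { role: string; content: TextPart[] };

// Pull the latest assistant reply out of a messages.list() page, whose
// data array is ordered newest-first by default.
export function latestAssistantText(data: ThreadMessage[]): string | undefined {
  const msg = data.find(m => m.role === "assistant");
  return msg?.content.find(p => p.type === "text")?.text.value;
}
```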

System Prompts and Multi-Turn

{
  "model": "gpt-5-mini",
  "messages": [
    {"role": "system", "content": "You are a security analyst."},
    {"role": "user", "content": "Summarize the incident."},
    {"role": "assistant", "content": "Summary..."},
    {"role": "user", "content": "List next steps."}
  ]
}
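In a multi-turn conversation, only the user turns carry untrusted input; system and assistant turns are application-controlled. A helper (our naming, not the SDK's) to collect the contents worth scanning with KoreShield:

```typescript
// Minimal chat message shape matching the payload above.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Collect user-turn contents: the only untrusted input in a conversation,
// since system and assistant messages originate from the application.
export function untrustedTurns(messages: ChatMessage[]): string[] {
  return messages.filter(m => m.role === "user").map(m => m.content);
}
```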

Error Handling

  • 403 indicates a blocked request due to policy enforcement
  • 429 or 5xx typically indicates provider or rate-limit issues
  • Use retries with exponential backoff on transient errors
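The retry guidance above can be sketched as follows. This is one reasonable implementation, not a KoreShield API: a 403 policy block is never retried, while 429/5xx responses are retried with exponentially growing delays.

```typescript
// Delay before retry attempt N (0-based): baseMs * 2^attempt.
export function backoffDelayMs(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Only rate limits and server errors are transient; 403 is a policy
// block and retrying it would just be blocked again.
export function isRetryable(status: number): boolean {
  return status === 429 || status >= 500;
}

export async function withRetries(
  fn: () => Promise<Response>,
  maxAttempts = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    if (res.ok || !isRetryable(res.status) || attempt + 1 >= maxAttempts) {
      return res;
    }
    await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
  }
}
```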

Security Controls

security:
  sensitivity: medium
  default_action: block
  features:
    sanitization: true
    detection: true
    policy_enforcement: true

Next Steps

Configuration

Configure providers and security settings

JavaScript SDK

Explore the JavaScript SDK

Python SDK

Explore the Python SDK