Skip to main content

Welcome to KoreShield

KoreShield is an open-source security platform designed to protect enterprise applications that use Large Language Models (LLMs) from prompt injection attacks. It acts as a transparent security layer between your application and LLM API providers (DeepSeek, OpenAI, Anthropic, etc.), sanitizing inputs, detecting threats, and enforcing policies before requests reach the model.

Quick Start

Get up and running with KoreShield in under 5 minutes

Installation

Install the SDK or deploy the proxy service

Configuration

Configure security policies and provider settings

API Reference

Explore the complete KoreShield API documentation

Why KoreShield?

Integrating LLMs into production environments introduces novel attack vectors that traditional WAFs cannot detect. Prompt injection attacks pose a critical security risk for LLM-integrated systems. Attackers can:
  • Override system instructions - Manipulate the model’s behavior by injecting malicious prompts
  • Extract sensitive data - Exfiltrate PII, credentials, or proprietary information
  • Bypass security controls - Circumvent safety guardrails and authentication
  • Manipulate AI behavior - Force the model to produce harmful or unintended outputs
  • Exfiltrate proprietary information - Steal training data or system prompts
  • Launch denial of service attacks - Resource exhaustion targeting expensive LLM tokens

Defense-in-Depth Protection

KoreShield provides comprehensive security with four core components:

Input Sanitization

Cleans and normalizes prompts to remove potentially malicious content before processing

Attack Detection

Multi-layered analysis using heuristics, patterns, and LLM-based evaluation to identify threats

Policy Enforcement

Configurable security rules that determine how to handle suspicious requests

Audit Logging

Complete security event recording for compliance and monitoring

Architecture Overview

When running in Proxy Mode, KoreShield sits transparently between your application and LLM providers:
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Application   │───▶│ Input Sanitizer │───▶│ Attack Detector │───▶│  Policy Engine  │───▶│   LLM Provider  │
│                 │    │                 │    │                 │    │                 │    │  (DeepSeek/     │
└─────────────────┘    └─────────────────┘    └─────────────────┘    └─────────────────┘    │   OpenAI/etc)   │
                                                                                   │        └─────────────────┘

                                                                    ┌─────────────────┐
                                                                    │  Audit Logger   │
                                                                    └─────────────────┘
KoreShield operates as a transparent proxy, intercepting API requests, performing deep security analysis, and enforcing your organization’s security policies before requests reach the model provider.

Supported LLM Providers

KoreShield supports multiple LLM providers through its proxy architecture:
  • DeepSeek - High-performance models with OpenAI-compatible API
  • OpenAI - GPT-3.5, GPT-4, and other models
  • Anthropic - Claude models (Claude 3.5 Sonnet, etc.)
  • Google Gemini - Gemini Pro and Ultra models
  • Azure OpenAI - Enterprise OpenAI deployment
  • Custom Models - Any OpenAI-compatible API endpoint

Key Features

Multi-Provider Support

Works with OpenAI, Anthropic, Google Gemini, Azure OpenAI, and any OpenAI-compatible API

Real-time Protection

Sub-millisecond latency with comprehensive security scanning

Configurable Policies

Adjust sensitivity levels and response actions based on your security requirements

Enterprise Ready

SOC 2 compliant with comprehensive audit trails and compliance reporting

Easy Integration

Drop-in replacement for existing LLM API calls

Open Source

Transparent security with community-driven improvements

Deployment Options

KoreShield offers flexible deployment options to fit your infrastructure: Deploy KoreShield as a standalone security proxy service. Your application sends LLM requests to KoreShield, which validates them before forwarding to your provider. This offers the best security isolation and works with any programming language.
# Docker deployment
docker run -p 8000:8000 \
  -e DEEPSEEK_API_KEY=your-key \
  Koreshield/Koreshield

2. SDK Mode

Use the KoreShield Python or JavaScript SDK to communicate with your KoreShield proxy or integrate security features directly into your application.
from Koreshield import KoreshieldClient

client = KoreshieldClient(base_url="http://localhost:8000")
result = client.scan_prompt("Hello, how are you?")

Performance First

Sub-millisecond Latency

Optimized Go/Python engine for minimal overhead

Streaming Support

Full support for Server-Sent Events (SSE) and token streaming

High Throughput

Horizontally scalable architecture designed for high-volume workloads

Get Started

Installation Guide

Deploy your first instance in under 5 minutes

Quick Start Tutorial

Secure your first LLM integration with a working example

RAG Defense

Learn how to secure your retrieval pipelines from indirect injection

API Reference

Explore the programmable security interface

Build docs developers (and LLMs) love