Overview

Portkey AI Gateway supports 250+ LLMs from 78+ providers, giving you access to virtually every major AI model through a single, unified API.

All Supported Providers

Here’s the complete list of providers integrated with Portkey:

Major LLM Providers

| Provider | Models | Features | Documentation |
|---|---|---|---|
| OpenAI | GPT-4, GPT-3.5, o1, o3, DALL-E, Whisper | Chat, Completions, Embeddings, Images, Audio, Realtime | OpenAI → |
| Anthropic | Claude 3.5, Claude 3, Claude 2 | Chat, Vision, Function Calling | Anthropic → |
| Azure OpenAI | GPT-4, GPT-3.5, Embeddings | All OpenAI features via Azure | Azure OpenAI → |
| Google Gemini | Gemini 2.0, Gemini 1.5 Pro/Flash | Chat, Vision, Embeddings, Function Calling | Google Gemini → |
| AWS Bedrock | Claude, Llama, Mistral, Titan | Chat, Embeddings, Converse API | AWS Bedrock → |
| Cohere | Command, Command R, Command R+ | Chat, Embeddings, Rerank | Cohere → |
| Mistral AI | Mistral Large, Medium, Small | Chat, Embeddings, Function Calling | Mistral → |

Specialized Providers

| Provider | Models | Specialty | Documentation |
|---|---|---|---|
| Together AI | 100+ open models | Open-source models, fast inference | Together AI → |
| Anyscale | Llama, Mistral, Mixtral | Open models with Endpoints | Anyscale → |
| Groq | Llama, Mixtral, Gemma | Ultra-fast inference (500+ tokens/s) | Groq → |
| DeepInfra | 100+ models | Cost-effective inference | DeepInfra → |
| Perplexity | Sonar models | Search-augmented generation | Perplexity → |
| Ollama | Any local model | Local/self-hosted models | Ollama → |
| Fireworks AI | 80+ models | Fast inference, fine-tuning | Fireworks AI → |
| Replicate | Thousands of models | Community models, image generation | Replicate → |

Cloud AI Platforms

| Provider | Description |
|---|---|
| Google Vertex AI | Google Cloud AI platform with Gemini, PaLM |
| Azure AI Inference | Microsoft's unified AI inference service |
| SageMaker | AWS machine learning platform |
| Workers AI | Cloudflare's edge AI platform |

Additional Providers (A-Z)

  • 302.AI - AI model aggregation platform
  • AI21 - Jamba models
  • AIBadgr - Educational AI platform
  • Anyscale - Ray-based inference
  • Cerebras - Ultra-fast inference
  • CometAPI - API marketplace
  • Cohere - Enterprise NLP
  • Cortex - Snowflake AI
  • DashScope - Alibaba AI platform
  • DeepBricks - AI infrastructure
  • DeepInfra - Cost-effective inference
  • DeepSeek - Chinese AI models
  • Featherless AI - Lightweight models
  • Fireworks AI - Fast inference platform
  • HuggingFace - 100,000+ models
  • Hyperbolic - Decentralized AI
  • Inference.net - Distributed inference
  • IO Intelligence - Enterprise AI
  • Jina - Embeddings and search
  • Kluster AI - Cluster computing
  • Krutrim - Indian AI models
  • Lambda - GPU cloud
  • LemonfoxAI - AI infrastructure
  • Lepton - Simplified AI deployment
  • LingYi - Chinese AI models
  • MatterAI - Scientific AI
  • Meshy - 3D generation
  • Milvus - Vector database
  • Modal - Serverless AI
  • MonsterAPI - Cost-effective inference
  • Moonshot - Chinese AI platform
  • NCompass - Enterprise AI
  • Nebius - Cloud AI platform
  • NextBit - AI infrastructure
  • Nomic - Embeddings (Nomic Embed)
  • Novita AI - Multi-modal AI
  • NScale - Scalable inference
  • Ollama - Local models
  • OpenRouter - Model router
  • Oracle - Oracle Cloud AI
  • OVHcloud - European cloud AI
  • PaLM - Google’s legacy models
  • Perplexity AI - Search-augmented LLMs
  • Predibase - Fine-tuning platform
  • Qdrant - Vector search
  • Recraft AI - Image generation
  • Reka AI - Multimodal models
  • Replicate - Community model hosting
  • SambaNova - AI hardware acceleration
  • Segmind - Image generation
  • SiliconFlow - Chinese AI platform
  • Stability AI - Stable Diffusion
  • Together AI - Open-source models
  • Triton - NVIDIA Triton
  • Tripo3D - 3D generation
  • Upstage - Korean AI models
  • Voyage - Embeddings
  • Workers AI - Cloudflare edge AI
  • X.AI - Grok models
  • Z.AI - AI infrastructure
  • Zhipu - Chinese AI (ChatGLM)

Provider Identifier Reference

When making requests, use these provider identifiers:
```python
# Syntax
from portkey_ai import Portkey

client = Portkey(
    provider="<provider-identifier>",
    Authorization="<api-key>"
)
```

Common Provider Identifiers

| Provider Name | Identifier | Example |
|---|---|---|
| OpenAI | `openai` | `provider="openai"` |
| Anthropic | `anthropic` | `provider="anthropic"` |
| Azure OpenAI | `azure-openai` | `provider="azure-openai"` |
| Google Gemini | `google` | `provider="google"` |
| AWS Bedrock | `bedrock` | `provider="bedrock"` |
| Cohere | `cohere` | `provider="cohere"` |
| Mistral AI | `mistral-ai` | `provider="mistral-ai"` |
| Together AI | `together-ai` | `provider="together-ai"` |
| Anyscale | `anyscale` | `provider="anyscale"` |
| Groq | `groq` | `provider="groq"` |
| Perplexity | `perplexity-ai` | `provider="perplexity-ai"` |
| DeepInfra | `deepinfra` | `provider="deepinfra"` |
| Ollama | `ollama` | `provider="ollama"` |
| Fireworks AI | `fireworks-ai` | `provider="fireworks-ai"` |
| Replicate | `replicate` | `provider="replicate"` |
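For programmatic routing, the display-name-to-identifier mapping above can be kept as a lookup table. A minimal sketch — `PROVIDER_IDS` and `provider_id` are illustrative helpers, not part of the Portkey SDK:

```python
# Display name -> Portkey provider identifier, taken from the table above.
PROVIDER_IDS = {
    "OpenAI": "openai",
    "Anthropic": "anthropic",
    "Azure OpenAI": "azure-openai",
    "Google Gemini": "google",
    "AWS Bedrock": "bedrock",
    "Cohere": "cohere",
    "Mistral AI": "mistral-ai",
    "Together AI": "together-ai",
    "Anyscale": "anyscale",
    "Groq": "groq",
    "Perplexity": "perplexity-ai",
    "DeepInfra": "deepinfra",
    "Ollama": "ollama",
    "Fireworks AI": "fireworks-ai",
    "Replicate": "replicate",
}

def provider_id(name: str) -> str:
    """Look up a provider identifier, failing loudly on unknown names."""
    try:
        return PROVIDER_IDS[name]
    except KeyError:
        raise ValueError(f"Unknown provider: {name!r}") from None
```

Failing loudly here beats letting a typo surface later as an opaque gateway error.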

Feature Support Matrix

Core Features

| Provider | Chat | Streaming | Embeddings | Function Calling | Vision |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ✅ | ❌ | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Google Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| AWS Bedrock | ✅ | ✅ | ✅ | ✅ | ✅ |
| Cohere | ✅ | ✅ | ✅ | ✅ | ❌ |
| Mistral | ✅ | ✅ | ✅ | ✅ | ✅ |
| Together AI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anyscale | ✅ | ✅ | ✅ | ✅ | ❌ |
| Groq | ✅ | ✅ | ❌ | ✅ | ✅ |
| DeepInfra | ✅ | ✅ | ✅ | ✅ | ✅ |
| Perplexity | ✅ | ✅ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ✅ | ✅ | ✅ | ✅ |
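Because all of these features ride on the same OpenAI-compatible request shape, switching from a plain chat call to a streaming one is a single flag. A sketch of the payload — `build_chat_request` is an illustrative helper, not an SDK function; the actual call still goes through the Portkey client:

```python
def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Assemble an OpenAI-compatible chat payload usable across the providers above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # stream=True asks the provider for incremental chunks instead of one response.
        "stream": stream,
    }

# The same payload shape works whether the target is OpenAI, Anthropic, or Groq.
payload = build_chat_request("gpt-4o", "Hello", stream=True)
```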

Special Features

| Provider | Audio (TTS) | Audio (STT) | Image Generation | Batch API | Fine-tuning |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ❌ | ❌ | ❌ | ✅ | ❌ |
| Azure OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| AWS Bedrock | ❌ | ❌ | ✅ | ✅ | ✅ |
| Stability AI | ❌ | ❌ | ✅ | ❌ | ❌ |
| Fireworks AI | ❌ | ✅ | ✅ | ✅ | ✅ |

Request Examples

Basic Provider Switching

```python
from portkey_ai import Portkey

# OpenAI
openai_client = Portkey(provider="openai", Authorization="sk-***")
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

# Switch to Anthropic - same code structure!
anthropic_client = Portkey(provider="anthropic", Authorization="sk-ant-***")
response = anthropic_client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello"}]
)
```

Multi-Provider Fallback

```python
from portkey_ai import Portkey

config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "sk-***"},
        {"provider": "anthropic", "api_key": "sk-ant-***"},
        {"provider": "google", "api_key": "***"},
        {"provider": "groq", "api_key": "gsk-***"}
    ]
}

# Requests try each target in order until one succeeds.
client = Portkey(config=config)
```
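A malformed config only fails at request time, so a quick structural check before constructing the client can save a debugging round-trip. `validate_fallback_config` is an illustrative helper, not part of the Portkey SDK:

```python
def validate_fallback_config(config: dict) -> list:
    """Return a list of structural problems with a fallback config; empty means it looks sound."""
    problems = []
    if config.get("strategy", {}).get("mode") != "fallback":
        problems.append("strategy.mode must be 'fallback'")
    targets = config.get("targets", [])
    if len(targets) < 2:
        problems.append("fallback needs at least two targets")
    for i, target in enumerate(targets):
        if "provider" not in target:
            problems.append(f"target {i} is missing 'provider'")
    return problems
```

An empty result means the config is structurally safe to hand to the client; anything else names the offending field.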

Adding New Providers

Portkey regularly adds new providers. To request a provider integration:
  1. Check the GitHub issues for existing requests
  2. Open a feature request with provider details
  3. Contribute a provider implementation

Contribute a Provider

Help add new providers to the gateway

Provider Pricing

For detailed pricing information across all providers, visit:

Portkey Models

Browse pricing for 2,300+ models across 40+ providers

Next Steps

Provider Overview

Learn how provider routing works

OpenAI

OpenAI integration guide

Fallbacks

Set up automatic fallbacks

Load Balancing

Distribute across providers
