
What is the Playground?

The Playground is an interactive environment for testing LLMs without writing code. It provides:
  • Chat Interface: Multi-turn conversations with any supported model
  • Image Generation: Create images with models like Qwen and Gemini Nano Banana
  • Group Chat: Compare responses from multiple models simultaneously
  • Model Comparison: Test the same prompt across different models
  • Web Search: Enable models to search the internet for real-time information
  • Advanced Controls: Fine-tune temperature, reasoning effort, and more
Usage in the Playground is billed the same as API requests, using your organization’s credits or provider keys.

Getting Started

Access the Playground at playground.llmgateway.io.
  1. Sign In: Log in with your LLM Gateway account.
  2. Select Organization: Choose which organization’s credits to use.
  3. Select Project: Pick a project to track usage.
  4. Choose a Model: Select from 150+ available models.
  5. Start Chatting: Type your message and press Enter or click Send.

Chat Interface

The standard chat interface supports text conversations with any LLM.

Basic Usage

  1. Type your message in the input box at the bottom
  2. Press Enter or click the send button
  3. View the response in the conversation thread
  4. Continue the conversation by sending more messages

Supported Features

All models support basic text conversations:
User: Explain quantum computing in simple terms

Assistant: Quantum computing uses the principles of...

Model Selector

Click the model name to browse and switch models.
Filter by:
  • Provider (OpenAI, Anthropic, Google, etc.)
  • Capabilities (Vision, Tools, Reasoning)
  • Pricing (Free vs. Paid)
  • Context window size
Search: Type to search by model name (e.g., “gpt-4”, “claude”, “gemini”)

Advanced Settings

Click the settings icon to configure:

Temperature
Controls randomness in responses (0.0-2.0):
  • 0.0-0.3: Deterministic, focused (code, math)
  • 0.4-0.7: Balanced (default for most tasks)
  • 0.8-1.2: Creative (stories, brainstorming)
  • 1.3-2.0: Highly random (experimental)

Max Tokens
Maximum length of the response:
  • Controls output verbosity
  • Higher values = longer responses
  • Limited by model’s max output
Default is model-specific (usually 4000-8000 tokens).

Reasoning Effort
For reasoning models (o1, o3, R1):
  • Minimal: Quick responses
  • Low: Standard reasoning
  • Medium: More thorough (default)
  • High: Maximum reasoning time
Higher effort = longer processing time but better accuracy.
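These Playground settings map onto standard chat-completion parameters when you later call the gateway directly. The sketch below shows a plausible request body; the model identifiers and the `reasoning_effort` field name follow common OpenAI-compatible conventions and are assumptions, not confirmed by this page.

```python
# Sketch: how the Playground's advanced settings might map onto an
# OpenAI-compatible chat-completion request body. Model names and the
# "reasoning_effort" field are illustrative assumptions.
payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in simple terms"},
    ],
    "temperature": 0.3,   # 0.0-0.3: deterministic, good for code and math
    "max_tokens": 4000,   # capped by the model's maximum output length
}

# Reasoning models (o1, o3, R1) typically take an effort level instead of,
# or alongside, temperature:
reasoning_payload = {**payload, "model": "openai/o3", "reasoning_effort": "medium"}
```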

Chat Management

New Chat
Click the + button to start a fresh conversation.

Chat History
Access previous chats from the sidebar:
  • Chats are named automatically based on content
  • Click to resume a conversation
  • Delete unwanted chats
Export Chat
Save your conversation:
  1. Click the export button (↓)
  2. Choose format: JSON, Markdown, or Text
  3. Download or copy to clipboard
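If you export as JSON and later want Markdown, the conversion is mechanical. The exact export schema isn’t documented here; the sketch below assumes a plausible list-of-messages shape with `role` and `content` keys, which is a guess.

```python
def chat_to_markdown(messages):
    """Render a hypothetical exported chat (a list of {role, content}
    dicts) as a Markdown transcript. The schema is an assumption."""
    lines = []
    for msg in messages:
        lines.append(f"**{msg['role'].capitalize()}:** {msg['content']}")
    return "\n\n".join(lines)

example = [
    {"role": "user", "content": "Explain quantum computing in simple terms"},
    {"role": "assistant", "content": "Quantum computing uses the principles of..."},
]
markdown = chat_to_markdown(example)
```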

Image Generation

Generate images with AI using supported models.

Accessing Image Generation

  1. Navigate to Playground → Image
  2. Or click the image icon in the main playground

Supported Models

Qwen Image Plus

Fast, high-quality image generation.
Sizes: 1024x1024, 1024x768, 768x1024

Qwen Image Max

Premium quality with more detail.
Sizes: 1024x1024, 1024x768, 768x1024

Gemini Nano Banana

Experimental, creative generations.
Aspect Ratios: 1:1, 16:9, 4:3, 5:4

Generating Images

  1. Select Model: Choose an image generation model from the dropdown.
  2. Enter Prompt: Describe the image you want to create, for example:
     A serene mountain landscape at sunset with flying cars,
     cyberpunk aesthetic, highly detailed
     Be specific about style, mood, and details for best results.
  3. Configure Settings: Size/Aspect Ratio: choose dimensions. Number of Images: generate 1-4 variants.
  4. Generate: Click Generate and wait for the model to create your image.
  5. Download or Refine:
     • Click the download button to save
     • Adjust the prompt and regenerate if needed
     • Try different models for variations

Image Generation Tips

Be specific:
  • Include subject, style, mood, lighting
  • Reference art styles (realistic, anime, oil painting)
  • Add quality tags (4K, highly detailed, masterpiece)
Good:
“Portrait of a cyberpunk detective in neon-lit Tokyo, film noir style, moody lighting, highly detailed”
Avoid:
“Make me a person”
Choose an aspect ratio based on use case:
  • 1:1: Social media, profile pictures
  • 16:9: Wallpapers, presentations
  • 9:16: Mobile wallpapers, stories
  • 4:3: Traditional displays
Refine your results:
  1. Start with a basic prompt
  2. Add details incrementally
  3. Try different models
  4. Adjust aspect ratio if needed
  5. Generate multiple variants
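If you later reproduce a Playground image job through the API, the controls above (model, prompt, size, variant count) become request parameters. A minimal sketch; the field names mirror common image-generation APIs and are assumptions, not the gateway’s documented schema.

```python
# Sketch: an image-generation request body mirroring the Playground's
# controls. Field names follow common image APIs and are assumptions.
image_request = {
    "model": "qwen-image-plus",
    "prompt": (
        "Portrait of a cyberpunk detective in neon-lit Tokyo, "
        "film noir style, moody lighting, highly detailed"
    ),
    "size": "1024x1024",  # one of the sizes Qwen Image Plus supports
    "n": 2,               # 1-4 variants per generation
}
```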

Group Chat (Multi-Model Comparison)

Compare responses from multiple models simultaneously.

How It Works

  1. Navigate to Playground → Group Chat
  2. Select 2-4 models to compare
  3. Enter your prompt
  4. All models respond in parallel
  5. Compare responses side-by-side

Use Cases

Quality Comparison

Test which model gives the best answer for your use case.

Consistency Check

Verify factual information across multiple models.

Style Evaluation

Compare tone and writing style.

Performance Testing

Measure response speed and quality.

Best Practices

  • Compare models from different providers
  • Use the same temperature for fair comparison
  • Test with realistic production prompts
  • Note which model performs best for your specific task
  • Consider cost vs. quality tradeoffs
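The same side-by-side comparison can be reproduced in code by fanning one prompt out to several models in parallel. A minimal sketch: `query` here is a plain function parameter standing in for whatever client call you actually use, not a real SDK function.

```python
from concurrent.futures import ThreadPoolExecutor

def compare(prompt, models, query):
    """Send the same prompt to several models in parallel and collect
    responses keyed by model, mirroring the Playground's Group Chat."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(query, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

# Example with a stub in place of a real API call:
results = compare(
    "Explain quantum computing in simple terms",
    ["openai/gpt-4o", "anthropic/claude-3-5-sonnet"],
    query=lambda model, prompt: f"[{model}] would answer here",
)
```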

Web Search Integration

Enable models to access real-time information from the internet.
  1. Toggle Web Search in the settings panel
  2. Available for supported models only
  3. Send your query
  4. Model searches the web and incorporates results
Web search is useful for:
  • Current events and news
  • Latest product information
  • Real-time data (stock prices, weather)
  • Recent research or publications
  • Up-to-date technical documentation
Web search incurs additional costs per query. Enable only when you need real-time information.

Sources

When web search is enabled, sources are cited in responses:
  1. Click Sources to view links
  2. Each source includes URL and snippet
  3. Verify information from multiple sources

MCP (Model Context Protocol) Integration

Connect external tools and data sources to the Playground.

What is MCP?

MCP allows models to:
  • Access your local files
  • Query databases
  • Call external APIs
  • Execute custom tools

Connecting MCP Servers

1

Open MCP Settings

Click the MCP icon in the Playground toolbar.
2

Add Server

Enter your MCP server URL or select a pre-configured server.
3

Authenticate

Provide any required credentials.
4

Enable Tools

Toggle which tools the model can access.
See the MCP Integration Guide for detailed setup.

API Key Management

While using the Playground, you’re making requests through your organization’s projects. To view API usage:
  1. Click your organization name
  2. Select View API Keys
  3. See usage for each project
To generate API keys:
  • Playground usage doesn’t require API keys
  • Generate keys when ready to integrate into your application
  • See Projects Guide for API key creation

Keyboard Shortcuts

Speed up your workflow with shortcuts:
Shortcut         Action
--------         ------
Enter            Send message
Shift + Enter    New line
Cmd/Ctrl + N     New chat
Cmd/Ctrl + K     Focus search
Esc              Close modal
Cmd/Ctrl + /     Show shortcuts

Troubleshooting

Model not responding
Possible causes:
  • Insufficient credits
  • Provider API issues
  • Network timeout
Solutions:
  • Check credit balance
  • Try a different model
  • Refresh the page
  • Check status page

Image upload fails
Possible causes:
  • File too large (>20MB)
  • Unsupported format
  • Model doesn’t support vision
Solutions:
  • Compress the image
  • Convert to PNG/JPEG
  • Switch to a vision-capable model

Responses are slow
Possible causes:
  • High reasoning effort
  • Large context/images
  • Provider latency
Solutions:
  • Reduce reasoning effort
  • Use smaller images
  • Try a different provider

Web search not working
Possible causes:
  • Model doesn’t support web search
  • Search quota exceeded
Solutions:
  • Check model capabilities
  • Contact support for a quota increase

Best Practices

Prompting:
  • Start simple, add complexity gradually
  • Be specific about desired format
  • Provide examples when possible
  • Use system messages for consistent behavior
  • Test multiple phrasings
Cost optimization:
  • Start with cheaper models (GPT-4o-mini, Claude Haiku)
  • Only upgrade if quality is insufficient
  • Disable web search when not needed
  • Use lower reasoning effort for simple tasks
  • Enable caching for repeated queries
Model recommendations:
For coding:
  • Claude 3.5 Sonnet
  • GPT-4o
  • DeepSeek Coder
For creative writing:
  • Claude 3 Opus
  • GPT-4o
  • Gemini 1.5 Pro
For reasoning:
  • OpenAI o1/o3
  • DeepSeek R1
  • Claude 3.5 Sonnet (extended)
For vision:
  • GPT-4 Vision
  • Claude 3 Opus
  • Gemini Pro Vision

From Playground to Production

Once you’ve perfected your prompts in the Playground:
  1. Note Model and Settings: Record which model and parameters worked best.
  2. Generate API Key: Create an API key in your project settings.
  3. Implement in Code: Use the same model and parameters in your application. See the Quickstart Guide for code examples.
  4. Test and Monitor: Test in your development environment and monitor usage in production.
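In code, the implementation step usually amounts to pointing an HTTP client at the gateway with your project key. A sketch using only the standard library; the base URL `https://api.llmgateway.io/v1` and the response shape are assumptions, so check the Quickstart Guide for the real endpoint.

```python
import json
import os
import urllib.request

def ask(prompt, model="openai/gpt-4o", temperature=0.3):
    """Call the gateway with the model and parameters that worked in the
    Playground. The endpoint URL is an assumption; see the Quickstart."""
    req = urllib.request.Request(
        "https://api.llmgateway.io/v1/chat/completions",  # assumed endpoint
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LLM_GATEWAY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Assumed OpenAI-style response shape.
        return json.load(resp)["choices"][0]["message"]["content"]
```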

Next Steps

Quickstart

Integrate LLM Gateway into your application.

API Reference

Explore the complete API documentation.

MCP Integration

Connect external tools to your models.

Projects

Manage projects and API keys.
