What is the Playground?
The Playground is an interactive environment for testing LLMs without writing code. It provides:
- Chat Interface: Multi-turn conversations with any supported model
- Image Generation: Create images with models like Qwen and Gemini Nano Banana
- Group Chat: Compare responses from multiple models simultaneously
- Model Comparison: Test the same prompt across different models
- Web Search: Enable models to search the internet for real-time information
- Advanced Controls: Fine-tune temperature, reasoning effort, and more
Usage in the Playground is billed the same as API requests, using your organization’s credits or provider keys.
Getting Started
Access the Playground at playground.llmgateway.io.
Chat Interface
The standard chat interface supports text conversations with any LLM.
Basic Usage
- Type your message in the input box at the bottom
- Press Enter or click the send button
- View the response in the conversation thread
- Continue the conversation by sending more messages
Supported Features
- Text
- Images
- Tools
- Reasoning
All models support basic text conversations.
Model Selector
Click the model name to browse and switch models. Filter by:
- Provider (OpenAI, Anthropic, Google, etc.)
- Capabilities (Vision, Tools, Reasoning)
- Pricing (Free vs. Paid)
- Context window size
Advanced Settings
Click the settings icon to configure:
Temperature
Controls randomness in responses (0.0 - 2.0):
- 0.0-0.3: Deterministic, focused (code, math)
- 0.4-0.7: Balanced (default for most tasks)
- 0.8-1.2: Creative (stories, brainstorming)
- 1.3-2.0: Highly random (experimental)
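The temperature slider corresponds to the standard `temperature` field on OpenAI-compatible chat completions requests. A minimal sketch of building such a payload, with the model id as a placeholder and the clamping behavior an illustration rather than a documented API guarantee:

```python
# Sketch: the Playground's temperature slider maps to the `temperature`
# field of an OpenAI-compatible chat completions payload.
# The model id below is hypothetical.

def build_chat_request(prompt: str, temperature: float = 0.7) -> dict:
    """Build a chat completions payload with temperature clamped to 0.0-2.0."""
    temperature = max(0.0, min(2.0, temperature))
    return {
        "model": "gpt-4o",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# Low temperature for deterministic tasks like code or math:
payload = build_chat_request("Write a regex for emails", temperature=0.2)
```

The same payload shape carries over when you move from the Playground to API calls.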
Max Tokens
Maximum length of the response:
- Controls output verbosity
- Higher values = longer responses
- Limited by the model’s max output
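The "limited by model's max output" rule can be sketched as a simple clamp. The per-model limits below are made-up illustrative numbers, not published values:

```python
# Sketch: max_tokens caps the response length, but can never exceed the
# model's own output limit. The numbers here are illustrative only.

MODEL_OUTPUT_LIMITS = {"gpt-4o": 16384, "claude-3-5-sonnet": 8192}  # assumed values

def effective_max_tokens(model: str, requested: int) -> int:
    """Clamp a requested max_tokens to the model's maximum output."""
    limit = MODEL_OUTPUT_LIMITS.get(model, 4096)  # conservative fallback
    return min(requested, limit)
```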
Reasoning Effort
For reasoning models (o1, o3, R1):
- Minimal: Quick responses
- Low: Standard reasoning
- Medium: More thorough (default)
- High: Maximum reasoning time
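On OpenAI-compatible APIs, these levels are typically sent as a `reasoning_effort` parameter; the exact field name can differ per provider, so treat this payload shape as an assumption:

```python
# Sketch: reasoning effort as a request parameter. The field name
# `reasoning_effort` and the model id are assumptions.

VALID_EFFORTS = {"minimal", "low", "medium", "high"}

def build_reasoning_request(prompt: str, effort: str = "medium") -> dict:
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    return {
        "model": "o3",  # hypothetical reasoning model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }
```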
Web Search
Enable models to search the internet:
- Toggle on/off per conversation
- Supported by select models
- Incurs additional cost per search
- Provides real-time information
Note: not all models support web search.
Chat Management
New Chat
Click the + button to start a fresh conversation.
Chat History
Access previous chats from the sidebar:
- Chats are named automatically based on content
- Click to resume a conversation
- Delete unwanted chats
Exporting Chats
- Click the export button (↓)
- Choose format: JSON, Markdown, or Text
- Download or copy to clipboard
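To show what the JSON and Markdown export formats might contain, here is a sketch that serializes a plain message list; the exact format the Playground produces is an assumption:

```python
# Sketch: exporting a conversation as Markdown or JSON. The output
# format is illustrative, not the Playground's exact schema.
import json

def export_markdown(messages: list[dict]) -> str:
    lines = []
    for m in messages:
        lines.append(f"**{m['role'].capitalize()}:**\n\n{m['content']}\n")
    return "\n".join(lines)

def export_json(messages: list[dict]) -> str:
    return json.dumps({"messages": messages}, indent=2)

chat = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
]
md = export_markdown(chat)
```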
Image Generation
Generate images with AI using supported models.
Accessing Image Generation
- Navigate to Playground → Image
- Or click the image icon in the main playground
Supported Models
Qwen Image Plus
Fast, high-quality image generation.
Sizes: 1024x1024, 1024x768, 768x1024
Qwen Image Max
Premium quality with more detail.
Sizes: 1024x1024, 1024x768, 768x1024
Gemini Nano Banana
Experimental, creative generations.
Aspect Ratios: 1:1, 16:9, 4:3, 5:4
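The size options above can be validated before sending a request. This sketch follows a typical OpenAI-style image API shape; the model id and payload fields are assumptions:

```python
# Sketch: an image generation request with a size check.
# Model id and payload shape are assumptions.

SUPPORTED_SIZES = {"1024x1024", "1024x768", "768x1024"}

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    if size not in SUPPORTED_SIZES:
        raise ValueError(f"unsupported size: {size}")
    return {"model": "qwen-image-plus", "prompt": prompt, "size": size}

req = build_image_request("A lighthouse at dawn, oil painting", size="1024x768")
```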
Generating Images
Image Generation Tips
Writing Better Prompts
Be specific:
- Include subject, style, mood, lighting
- Reference art styles (realistic, anime, oil painting)
- Add quality tags (4K, highly detailed, masterpiece)
Good example:
“Portrait of a cyberpunk detective in neon-lit Tokyo, film noir style, moody lighting, highly detailed”
Avoid:
“Make me a person”
Aspect Ratios
Choose based on use case:
- 1:1: Social media, profile pictures
- 16:9: Wallpapers, presentations
- 9:16: Mobile wallpapers, stories
- 4:3: Traditional displays
Iterating
Refine your results:
- Start with a basic prompt
- Add details incrementally
- Try different models
- Adjust aspect ratio if needed
- Generate multiple variants
Group Chat (Multi-Model Comparison)
Compare responses from multiple models simultaneously.
How It Works
- Navigate to Playground → Group Chat
- Select 2-4 models to compare
- Enter your prompt
- All models respond in parallel
- Compare responses side-by-side
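The parallel fan-out in the steps above can be sketched with a thread pool. `ask_model` is a stand-in stub; a real version would call the gateway's chat completions endpoint:

```python
# Sketch: group chat sends one prompt to several models in parallel.
# `ask_model` is a stub with no network call.
from concurrent.futures import ThreadPoolExecutor

def ask_model(model: str, prompt: str) -> str:
    return f"[{model}] response to: {prompt}"  # stub

def group_chat(models: list[str], prompt: str) -> dict[str, str]:
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

results = group_chat(["gpt-4o", "claude-3-5-sonnet"], "Explain recursion")
```

Running the models concurrently rather than sequentially is what makes side-by-side comparison fast even with 4 models selected.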
Use Cases
Quality Comparison
Test which model gives the best answer for your use case.
Consistency Check
Verify factual information across multiple models.
Style Evaluation
Compare tone and writing style.
Performance Testing
Measure response speed and quality.
Best Practices
- Compare models from different providers
- Use the same temperature for fair comparison
- Test with realistic production prompts
- Note which model performs best for your specific task
- Consider cost vs. quality tradeoffs
Web Search Integration
Enable models to access real-time information from the internet.
Enabling Web Search
- Toggle Web Search in the settings panel
- Available for supported models only
- Send your query
- Model searches the web and incorporates results
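The toggle corresponds to an extra flag on the request. The field name `web_search` is an assumption; providers expose this capability under different names:

```python
# Sketch: web search as an opt-in request flag. The field name
# `web_search` and the model id are assumptions.

def build_search_request(prompt: str, web_search: bool = False) -> dict:
    payload = {
        "model": "gpt-4o",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    }
    if web_search:
        payload["web_search"] = True  # only sent when enabled
    return payload

req = build_search_request("Latest Python release?", web_search=True)
```

Omitting the flag entirely when search is off avoids the per-search cost mentioned above.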
When to Use Web Search
Good use cases:
- Current events and news
- Latest product information
- Real-time data (stock prices, weather)
- Recent research or publications
- Up-to-date technical documentation
Not needed: questions the model can already answer from its training data.
Sources
When web search is enabled, sources are cited in responses:
- Click Sources to view links
- Each source includes URL and snippet
- Verify information from multiple sources
MCP (Model Context Protocol) Integration
Connect external tools and data sources to the Playground.
What is MCP?
MCP allows models to:
- Access your local files
- Query databases
- Call external APIs
- Execute custom tools
Connecting MCP Servers
See the MCP Integration Guide for detailed setup.
API Key Management
While using the Playground, you’re making requests through your organization’s projects. To view API usage:
- Click your organization name
- Select View API Keys
- See usage for each project
- Playground usage doesn’t require API keys
- Generate keys when ready to integrate into your application
- See Projects Guide for API key creation
Keyboard Shortcuts
Speed up your workflow with shortcuts:
| Shortcut | Action |
|---|---|
| Enter | Send message |
| Shift + Enter | New line |
| Cmd/Ctrl + N | New chat |
| Cmd/Ctrl + K | Focus search |
| Esc | Close modal |
| Cmd/Ctrl + / | Show shortcuts |
Troubleshooting
Model Not Responding
Possible causes:
- Insufficient credits
- Provider API issues
- Network timeout
Solutions:
- Check credit balance
- Try a different model
- Refresh the page
- Check the status page
Image Upload Failing
Possible causes:
- File too large (>20MB)
- Unsupported format
- Model doesn’t support vision
Solutions:
- Compress the image
- Convert to PNG/JPEG
- Switch to a vision-capable model
Slow Responses
Possible causes:
- High reasoning effort
- Large context/images
- Provider latency
Solutions:
- Reduce reasoning effort
- Use smaller images
- Try a different provider
Web Search Not Working
Possible causes:
- Model doesn’t support web search
- Search quota exceeded
Solutions:
- Check model capabilities
- Contact support for a quota increase
Best Practices
Prompt Engineering
- Start simple, add complexity gradually
- Be specific about desired format
- Provide examples when possible
- Use system messages for consistent behavior
- Test multiple phrasings
Cost Optimization
- Start with cheaper models (GPT-4o-mini, Claude Haiku)
- Only upgrade if quality is insufficient
- Disable web search when not needed
- Use lower reasoning effort for simple tasks
- Enable caching for repeated queries
Model Selection
For coding:
- Claude 3.5 Sonnet
- GPT-4o
- DeepSeek Coder
For writing:
- Claude 3 Opus
- GPT-4o
- Gemini 1.5 Pro
For reasoning:
- OpenAI o1/o3
- DeepSeek R1
- Claude 3.5 Sonnet (extended)
For vision:
- GPT-4 Vision
- Claude 3 Opus
- Gemini Pro Vision
From Playground to Production
Once you’ve perfected your prompts in the Playground:
Implement in Code
Use the same model and parameters in your application.
See the Quickstart Guide for code examples.
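A minimal sketch of carrying Playground settings into code, using only the standard library. The base URL, model id, and header names are assumptions; check the Quickstart Guide for the real values. The request is built but not sent here:

```python
# Sketch: reusing Playground settings in an API request.
# Base URL and model id are assumed, not confirmed by this guide.
import json
import urllib.request

BASE_URL = "https://api.llmgateway.io/v1"  # assumed endpoint

def build_request(api_key: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Reuse the exact settings you tuned in the Playground:
payload = {
    "model": "claude-3-5-sonnet",  # hypothetical id
    "messages": [{"role": "user", "content": "Summarize this release note"}],
    "temperature": 0.3,
    "max_tokens": 512,
}
req = build_request("sk-example", payload)
# urllib.request.urlopen(req) would send it; omitted here.
```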
Next Steps
Quickstart
Integrate LLM Gateway into your application.
API Reference
Explore the complete API documentation.
MCP Integration
Connect external tools to your models.
Projects
Manage projects and API keys.