Overview
The OpenAI integration provides access to ChatGPT models through the Chat Completions endpoint. It supports all GPT models, including GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, and the latest o1 reasoning models.
Setup
Get API Key
Sign up at OpenAI Platform and generate an API key from the API keys section
Add Credential
In Flowise, navigate to Credentials and create a new OpenAI API credential with your key
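Once the credential is stored, requests are authenticated with a standard bearer token. Below is a minimal sketch of how the key (here read from an `OPENAI_API_KEY` environment variable, an assumption, since Flowise normally injects it from the credential) becomes the HTTP headers the API expects:

```python
import os

def build_auth_headers(api_key=None):
    """Build the HTTP headers the OpenAI API expects; the key normally
    comes from the credential you created in Flowise."""
    key = api_key or os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise ValueError("No OpenAI API key found")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```

If the credential is missing or empty, failing fast with a clear error is preferable to sending an unauthenticated request and getting a 401 back.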
Configuration
Basic Parameters
Your OpenAI API credential containing the API key
The model to use. Available models are loaded dynamically from your account. Popular options:
- gpt-4o: Latest GPT-4 Omni model
- gpt-4o-mini: Faster, cost-effective GPT-4 variant
- gpt-4-turbo: GPT-4 Turbo with vision
- gpt-3.5-turbo: Fast and economical
- o1-preview: Advanced reasoning model
Controls randomness. Lower values (0.1) make output more deterministic, higher values (0.9) more creative
Enable streaming responses for real-time output
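When streaming is enabled, the API returns the reply as a sequence of chunks, each carrying an incremental `delta`. A minimal sketch of reassembling those deltas into the final text (chunk shape per the public Chat Completions streaming format):

```python
def assemble_stream(chunks):
    """Concatenate the incremental `delta.content` pieces emitted by the
    Chat Completions API when streaming is enabled. Chunks that carry
    only role or finish metadata contribute nothing."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

In a real flow you would print or forward each piece as it arrives; collecting them afterward gives you the same text a non-streaming call would have returned.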
Advanced Parameters
Maximum number of tokens to generate in the response
Nucleus sampling parameter. Alternative to temperature for controlling randomness
Penalize new tokens based on their frequency in the text so far (-2.0 to 2.0)
Penalize new tokens based on whether they appear in the text so far (-2.0 to 2.0)
List of sequences where the API will stop generating. Separate multiple with commas
Request timeout in milliseconds
Enable strict mode for function calling to ensure JSON schema compliance
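With strict mode on, each tool's JSON schema is enforced exactly, so the model cannot return arguments that omit required fields. A sketch of a tool definition with the `strict` flag set (the helper and the `get_weather` tool are illustrative, not part of Flowise):

```python
def make_strict_tool(name, description, parameters):
    """Wrap a JSON-schema parameter spec in the tool format used for
    function calling, with strict mode enabled so returned arguments
    must match the schema exactly."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
            "strict": True,
        },
    }

# Example: a single required string argument, no extras allowed.
weather_tool = make_strict_tool(
    "get_weather",
    "Look up current weather for a city",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
)
```

Strict mode generally requires every property to be listed in `required` and `additionalProperties` to be `false`, as shown.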
Vision Support
Enable image input for vision-capable models like GPT-4 Turbo and GPT-4o
Control image resolution for vision models:
- low: Faster, lower cost
- high: Better detail recognition
- auto: Let the model decide
Reasoning Models (o1, o3)
Reasoning models like o1-preview and o3-mini have special parameters and don’t support temperature or stop sequences.
Enable reasoning mode for o1/o3 models
Constrain reasoning effort:
- low: Faster responses
- medium: Balanced
- high: Most thorough reasoning
Get a summary of the model’s reasoning process:
- auto: Default behavior
- concise: Brief summary
- detailed: Full reasoning trace
Proxy & Custom Configuration
Custom API base URL for OpenAI-compatible endpoints
HTTPS proxy URL for routing requests
Custom headers and configuration as JSON
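These three options combine into one client configuration. The sketch below shows one way to merge them over sensible defaults; the field names (`basePath`, `httpsProxy`, `headers`) are illustrative rather than the exact keys Flowise uses internally:

```python
DEFAULT_CONFIG = {
    "basePath": "https://api.openai.com/v1",  # official endpoint
    "timeout": 60000,                         # milliseconds
}

def merge_client_config(base_url=None, proxy_url=None, headers=None):
    """Combine proxy/custom-endpoint options into one client config
    dict, overriding defaults only when a value is supplied."""
    config = dict(DEFAULT_CONFIG)
    if base_url:
        config["basePath"] = base_url
    if proxy_url:
        config["httpsProxy"] = proxy_url
    if headers:
        config["headers"] = dict(headers)
    return config
```

Pointing `basePath` at an OpenAI-compatible server (a local gateway, Azure-style proxy, etc.) while keeping the same credential is the typical use of this section.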
Usage Examples
Basic Chat Model
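A plain chat call needs only a model, a temperature, and a message list. A minimal sketch of the request body (field names per the public Chat Completions API; the helper itself is illustrative):

```python
def basic_chat_request(prompt, model="gpt-4o-mini", temperature=0.7, max_tokens=None):
    """Build the JSON body for a plain Chat Completions call."""
    body = {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }
    if max_tokens is not None:
        body["max_tokens"] = max_tokens  # cap the response length
    return body
```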
Function Calling Agent
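For an agent, the same request gains a `tools` array and, per the best practices above, a low temperature so tool selection stays deterministic. A sketch (the `search_docs` tool is a made-up example):

```python
def agent_request(prompt, tools, model="gpt-4o"):
    """Chat request configured for tool use: tools attached, automatic
    tool choice, and a low temperature for deterministic selection."""
    return {
        "model": model,
        "temperature": 0.2,
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "tool_choice": "auto",
    }

# Hypothetical tool the agent may call.
search_tool = {
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search the knowledge base for a query",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

request = agent_request("Find the refund policy", [search_tool])
```

When the model decides to call a tool, the response carries `tool_calls` instead of plain content; the agent executes the function and feeds the result back as a `tool` role message.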
Vision-Enabled Chat
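Vision requests mix text and image parts inside a single user message; the `detail` field carries the low/high/auto resolution setting described earlier. A sketch (message shape per the public vision API; the URL is a placeholder):

```python
def vision_message(text, image_url, detail="auto"):
    """One user message combining a text part and an image part.
    `detail` is one of "low", "high", or "auto"."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {
                "type": "image_url",
                "image_url": {"url": image_url, "detail": detail},
            },
        ],
    }
```

This message goes into the usual `messages` list with a vision-capable model such as gpt-4o.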
Reasoning Model
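A reasoning-model request differs from a basic one in two ways noted above: no `temperature` or `stop` fields, and an effort setting instead. A sketch (`reasoning_effort` matches the public Chat Completions parameter for o1/o3-class models; the helper is illustrative):

```python
def reasoning_request(prompt, model="o3-mini", effort="medium"):
    """Request body for a reasoning model. Note the absence of
    temperature and stop, which these models reject."""
    if effort not in ("low", "medium", "high"):
        raise ValueError("effort must be low, medium, or high")
    return {
        "model": model,
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }
```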
Best Practices
Model Selection
- Use gpt-4o-mini for most tasks (cost-effective)
- Use gpt-4o for complex reasoning
- Use o1-preview for advanced problem-solving
Cost Optimization
- Enable caching to reduce repeated calls
- Set appropriate maxTokens limits
- Use gpt-3.5-turbo for simple tasks
Function Calling
- Enable strictToolCalling for reliability
- Lower temperature (0.1-0.3) for tool use
- Provide clear function descriptions
Performance
- Enable streaming for better UX
- Set reasonable timeout values
- Use appropriate reasoning effort
Common Issues
Rate Limit Errors
OpenAI enforces rate limits based on your usage tier:
- Add retry logic with exponential backoff
- Monitor your usage in the OpenAI dashboard
- Consider upgrading your usage tier
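The retry-with-backoff advice can be sketched as a small wrapper: each failed attempt doubles the wait, plus jitter so parallel clients don't retry in lockstep. `RuntimeError` stands in for the SDK's rate-limit exception here:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter, the usual
    remedy for HTTP 429 rate-limit responses."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for the SDK's RateLimitError
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

With `base_delay=1.0` the waits grow roughly 1s, 2s, 4s, 8s, which is usually enough to ride out a per-minute limit.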
Context Length Exceeded
If you exceed the model’s context window:
- Reduce the maxTokens parameter
- Implement conversation summarization
- Use a model with a larger context window (e.g., gpt-4-turbo, which supports 128k tokens)
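A simpler alternative to full summarization is trimming: drop the oldest non-system messages until the transcript fits a budget. The sketch below approximates size by characters; accurate token counting would need a tokenizer such as tiktoken:

```python
def trim_history(messages, max_chars=8000):
    """Drop the oldest non-system messages until the transcript fits a
    rough character budget. System messages are always kept, since they
    carry the standing instructions."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def size(msgs):
        return sum(len(m["content"]) for m in msgs)

    while rest and size(system) + size(rest) > max_chars:
        rest.pop(0)  # discard the oldest turn first
    return system + rest
```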
Reasoning Models Not Working
o1 and o3 models have different requirements:
- Don’t set temperature (automatically disabled)
- Don’t use stop sequences
- Enable reasoning parameter