OpenAI Integration
The OpenAI integration allows you to leverage OpenAI’s powerful AI models including GPT-4, DALL-E, Whisper, and Sora for a variety of tasks including text generation, image generation and analysis, audio transcription and generation, and video generation.
Available Nodes
n8n provides two types of OpenAI nodes:
OpenAI Node
Specialized operations: image generation (DALL-E), audio (Whisper, TTS), and video generation (Sora)
OpenAI Chat Model
For text generation, reasoning, and tool use; pair it with the AI Agent node
For most text generation and LLM tasks, use the AI Agent node with OpenAI Chat Model instead of the standalone OpenAI node. The OpenAI node is optimized for specialized operations like image, audio, and video generation.
Prerequisites
Before you begin, you’ll need:
- An OpenAI account
- An OpenAI API key (get one from OpenAI Platform)
- Sufficient API credits in your OpenAI account
Setup
Create OpenAI API Credentials
- Go to the OpenAI API Keys page
- Click “Create new secret key”
- Give your key a name and copy it immediately (you won’t be able to see it again)
- Store the key securely
Add Credentials in n8n
- In your n8n workflow, add an OpenAI node
- Click on the Credential to connect with dropdown
- Click Create New Credential
- Paste your API key
- (Optional) Configure custom base URL if using a proxy
- Click Save
OpenAI Node
The OpenAI node provides specialized operations for image, audio, and video generation. It supports multiple resources and operations.
Available Resources
- Text
- Image
- Audio
- Video
- Assistant
- File
- Conversation
Text
Generate text responses and perform classifications using GPT models.
Operations:
- Message: Send messages to GPT models and get responses
- Classify: Classify text using moderation models
Capabilities:
- Text completions
- Content moderation
- Multi-turn conversations
- Structured output with JSON mode
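The Message operation builds requests in the Chat Completions format. A minimal sketch of a multi-turn payload follows; the model name and message contents are illustrative, not prescribed by the node:

```javascript
// Sketch of a multi-turn Chat Completions request body.
// The API is stateless: every prior turn is re-sent with each request.
const body = {
  model: "gpt-4-turbo", // illustrative model choice
  messages: [
    { role: "system", content: "You are a concise support assistant." },
    { role: "user", content: "How do I reset my password?" },
    { role: "assistant", content: "Use the 'Forgot password' link on the login page." },
    { role: "user", content: "And if I no longer have that email address?" }, // follow-up turn
  ],
};

console.log(body.messages.length); // 4
```

The system message sets persistent behavior for the whole conversation; the alternating user/assistant turns give the model the context needed to answer the follow-up question.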
Advanced Features
Function Calling / Tools
Connect custom n8n tools to the OpenAI node to enable function calling:
- Add tool nodes to your workflow (e.g., HTTP Request Tool, Code Tool)
- Connect them to the Tools input on the OpenAI node
- When the model requests a tool, n8n runs it and returns the result automatically
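Under the hood, each connected tool is described to the model as a function schema. A sketch of that shape; the `get_weather` function and its parameters are made up for illustration (in n8n, the connected tool nodes produce equivalent schemas for you):

```javascript
// Hypothetical tool definition in the shape OpenAI function calling expects.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Look up the current weather for a city",
      parameters: {
        // JSON Schema describing the arguments the model may supply
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
        },
        required: ["city"],
      },
    },
  },
];
```

When the model decides a tool is needed, it replies with a `tool_calls` entry naming the function and JSON-encoded arguments; the tool's result is then sent back as a follow-up message.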
Structured Output
Use JSON mode or response format to get structured data.
When using JSON mode, include the word “json” in your prompt and use models released after November 2023.
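A minimal sketch of a JSON-mode request body; the model name and prompt are illustrative:

```javascript
// JSON-mode request sketch. Note the word "json" in the prompt:
// the API rejects json_object mode when no message mentions JSON.
const request = {
  model: "gpt-4-turbo", // JSON mode requires a model released after Nov 2023
  response_format: { type: "json_object" },
  messages: [
    {
      role: "user",
      content: "Extract the name and email from this text as json: ...",
    },
  ],
};

console.log(/json/i.test(request.messages[0].content)); // true
```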
Streaming Responses
Enable streaming to receive tokens as they are generated instead of waiting for the complete response.
OpenAI Chat Model
The OpenAI Chat Model node is designed for use with LangChain components, particularly the AI Agent.
Configuration
Select Model
Choose from available models:
- GPT-4 Turbo: Most capable, best for complex tasks
- GPT-4: High capability, balanced performance
- GPT-3.5 Turbo: Fast and cost-effective
- o1/o3 models: Advanced reasoning capabilities
Configure Options
Set temperature, max tokens, and other parameters:
- Temperature: Controls randomness (0-2)
- Max Tokens: Limit response length
- Top P: Nucleus sampling parameter
- Frequency Penalty: Reduce repetition
- Presence Penalty: Encourage new topics
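The parameters above can be pictured as a single options object; the values here are illustrative defaults, not recommendations from the node:

```javascript
// Illustrative sampling options with their accepted ranges.
const options = {
  temperature: 0.7,      // 0 to 2; higher = more random output
  max_tokens: 512,       // hard cap on response length
  top_p: 1.0,            // nucleus sampling; an alternative to temperature
  frequency_penalty: 0,  // -2 to 2; positive values reduce repetition
  presence_penalty: 0,   // -2 to 2; positive values push toward new topics
};
```

A common rule of thumb is to tune either temperature or top_p, not both at once.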
Model Selection
| Model | Best For | Context Window | Capabilities |
|---|---|---|---|
| GPT-4 Turbo | Complex reasoning, latest features | 128K tokens | Vision, JSON mode, function calling |
| GPT-4 | Balanced performance | 8K tokens | High accuracy, reliable |
| GPT-3.5 Turbo | Speed and cost | 16K tokens | Fast responses, good for simple tasks |
| o1-preview | Advanced reasoning | 128K tokens | Complex problem solving |
| o3-mini | Efficient reasoning | 128K tokens | Cost-effective reasoning |
Response Formats
Common Use Cases
1. Content Generation
Generate blog posts, product descriptions, or marketing copy.
2. Image Analysis Pipeline
Analyze images and take actions based on their content.
3. Audio Transcription
Transcribe audio files and process the resulting text.
4. AI Agent with Tools
Create an intelligent agent that can use multiple tools.
Best Practices
Choose the Right Model
- Use GPT-4 for complex reasoning and high-quality outputs
- Use GPT-3.5 Turbo for speed and cost efficiency
- Use specialized models (DALL-E, Whisper) for specific tasks
Optimize Token Usage
- Set appropriate max_tokens limits
- Use system messages to set context efficiently
- Consider caching responses for repeated queries
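For budgeting prompts against a token limit, a rough estimate is often enough. A sketch assuming the common heuristic of about 4 characters per English token (for exact counts, use a real tokenizer such as tiktoken):

```javascript
// Rough token estimate for prompt budgeting.
// Assumption: ~4 characters per token, which holds loosely for English prose.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const prompt = "Summarize the following support ticket in two sentences.";
console.log(estimateTokens(prompt)); // 14
```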
Handle Errors Gracefully
- Implement retry logic for rate limits
- Check error responses and handle them appropriately
- Monitor API usage and costs
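Retry logic for rate limits can be sketched as a generic wrapper around any API call; the error shape (an `err.status` field) and delay values below are illustrative, not the node's internals:

```javascript
// Retry a call with exponential backoff on retryable errors.
async function withRetry(call, maxAttempts = 4, baseDelayMs = 500) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await call();
    } catch (err) {
      // Retry only on rate limits (429) and server errors (5xx).
      const retryable = err.status === 429 || err.status >= 500;
      if (!retryable || attempt === maxAttempts - 1) throw err;
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Non-retryable errors (such as an invalid API key) are rethrown immediately rather than wasting attempts.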
Use Streaming for Long Responses
- Enable streaming for better user experience
- Process chunks as they arrive
- Handle connection interruptions
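Processing chunks as they arrive amounts to concatenating the delta content of each streamed event. A sketch mimicking the Chat Completions streaming format with hard-coded chunks (a real stream arrives as server-sent events):

```javascript
// Assemble a streamed response from delta chunks.
const chunks = [
  { choices: [{ delta: { content: "Hello" } }] },
  { choices: [{ delta: { content: ", " } }] },
  { choices: [{ delta: { content: "world" } }] },
  { choices: [{ delta: {} }] }, // final chunk carries no content
];

let text = "";
for (const chunk of chunks) {
  // The final chunk has no content field, hence the fallback to "".
  text += chunk.choices[0].delta.content ?? "";
}
console.log(text); // Hello, world
```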
Troubleshooting
Rate Limits
If you encounter rate limit errors:
- Implement exponential backoff
- Upgrade your OpenAI plan
- Use batch processing where possible
- Cache responses to reduce API calls
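Caching can be as simple as an in-memory map keyed on the prompt; a minimal sketch (a production version would bound the cache size and expire stale entries):

```javascript
// Avoid repeating identical API calls by memoizing on the prompt.
const cache = new Map();

async function cachedCompletion(prompt, callApi) {
  if (cache.has(prompt)) return cache.get(prompt); // cache hit: no API call
  const result = await callApi(prompt);
  cache.set(prompt, result);
  return result;
}
```

The second identical request returns the cached result without consuming API credits.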
Token Limits
If responses are cut off:
- Increase the max_tokens parameter
- Split large inputs into chunks
- Use models with larger context windows
- Summarize previous context
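Splitting a large input into chunks can be sketched as breaking on word boundaries under a token budget, again assuming the rough 4-characters-per-token heuristic (use a real tokenizer for exact limits):

```javascript
// Split text into word-boundary chunks that each fit a token budget.
function splitIntoChunks(text, maxTokens) {
  const maxChars = maxTokens * 4; // heuristic: ~4 chars per token
  const words = text.split(/\s+/);
  const chunks = [];
  let current = "";
  for (const word of words) {
    if (current && current.length + word.length + 1 > maxChars) {
      chunks.push(current); // budget exceeded: start a new chunk
      current = word;
    } else {
      current = current ? current + " " + word : word;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk can then be sent as a separate request, with the results combined or summarized afterward.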
Model Not Found
If a model is unavailable:
- Check your OpenAI account access level
- Verify the model name is correct
- Ensure your API key has access to the model
- Check OpenAI’s status page for outages