POST /functions/v1/generate-text
Generates text using AI models (OpenAI GPT or Anthropic Claude) based on a prompt. Supports placeholder substitution for dynamic prompts.

Request

Headers

- `Authorization`: Bearer token for authentication: `Bearer <your-supabase-jwt-token>`
- `Content-Type`: must be `application/json`

Body Parameters
- `prompt` (required): The prompt to send to the AI model. Supports placeholder syntax like `{{input.text}}`, which is resolved from `previousOutput`.
- `elementId`: The workflow element ID. Required if the API key and model are not provided directly. Used to load configuration from the database.
- `apiKey`: API key for OpenAI or Anthropic. If not provided, it is loaded from the agent configuration based on `elementId`.
- `model`: Model identifier (e.g., `gpt-3.5-turbo`, `gpt-4`, `claude-3-opus-20240229`). Defaults to `gpt-3.5-turbo` if not specified.
- `previousOutput`: Object containing outputs from previous workflow steps, used for placeholder resolution.
- Direct-credentials flag: If `true`, the provided credentials are used instead of loading from the database. Defaults to `false`.

Response
- Indicates whether text generation was successful
- The generated text content from the AI model
Model Selection
The function automatically detects the provider based on the model name:

- OpenAI: models containing `gpt` (e.g., `gpt-3.5-turbo`, `gpt-4`, `gpt-4-turbo`)
- Anthropic: models containing `claude` (e.g., `claude-3-opus-20240229`, `claude-3-sonnet-20240229`)
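This detection rule can be sketched as a small helper (illustrative only; the function's actual code is not shown in this document):

```python
def detect_provider(model: str) -> str:
    """Infer the AI provider from the model name, mirroring the rule above."""
    if "gpt" in model:
        return "openai"
    if "claude" in model:
        return "anthropic"
    raise ValueError(f"Cannot determine provider for model: {model}")
```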
OpenAI Configuration
- Endpoint: `https://api.openai.com/v1/chat/completions`
- Max tokens: 1000
- Message role: `user`
Anthropic Configuration
- Endpoint: `https://api.anthropic.com/v1/messages`
- Max tokens: 1000
- API version: `2023-06-01`
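Based on the two configurations above and the public shapes of the OpenAI Chat Completions and Anthropic Messages APIs, the outbound request for each provider might be assembled like this (a sketch; the function's real request-building code is not shown here):

```python
def build_request(model: str, prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for the detected provider."""
    message = {"role": "user", "content": prompt}
    if "claude" in model:
        return {
            "url": "https://api.anthropic.com/v1/messages",
            "headers": {
                "x-api-key": api_key,
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
            "body": {"model": model, "max_tokens": 1000, "messages": [message]},
        }
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "content-type": "application/json",
        },
        "body": {"model": model, "max_tokens": 1000, "messages": [message]},
    }
```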
Placeholder Resolution
Prompts can include placeholders that reference previous workflow outputs. Placeholders are resolved against the previousOutput parameter, which typically contains:

- `input.text` - Text from a previous text generator
- `input.emailBody` - Email body from the Gmail reader
- `input.messages` - Array of messages
- Custom paths like `input.messages.0.subject`
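A minimal sketch of this resolution, assuming placeholders are dotted paths into previousOutput with numeric segments acting as list indices (the function's actual resolution code is not shown here):

```python
import re

def resolve_placeholders(prompt: str, previous_output: dict) -> str:
    """Replace each {{path}} placeholder with the value found at that
    dotted path inside previous_output."""
    def lookup(match: re.Match) -> str:
        value = previous_output
        for part in match.group(1).strip().split("."):
            if isinstance(value, list):
                value = value[int(part)]  # numeric segment indexes a list
            else:
                value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, prompt)
```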
Examples
Request with Direct Credentials
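An illustrative request body with direct credentials (all values are made up; the nesting of previousOutput is assumed from the placeholder paths documented above):

```json
{
  "prompt": "Summarize the following: {{input.text}}",
  "apiKey": "sk-...",
  "model": "gpt-4",
  "previousOutput": {
    "input": { "text": "Quarterly sales grew 12%." }
  }
}
```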
Request with Element Configuration
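An illustrative request body relying on stored element configuration (the elementId value is made up):

```json
{
  "prompt": "Draft a reply to: {{input.emailBody}}",
  "elementId": "text-generator-1",
  "previousOutput": {
    "input": { "emailBody": "Can we move the meeting to Friday?" }
  }
}
```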
Success Response
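An illustrative success body; the field names `success` and `generatedText` are assumptions, since the document only describes the two response fields without naming them:

```json
{
  "success": true,
  "generatedText": "Sales grew 12% over the quarter..."
}
```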
Error Response
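An illustrative error body, using a message from the error-codes table below; the field names are assumptions:

```json
{
  "success": false,
  "error": "API key is required"
}
```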
Error Codes
| Status Code | Error Message | Description |
|---|---|---|
| 400 | Prompt is required | Missing prompt parameter |
| 400 | API key is required | No API key provided or found in config |
| 404 | No text generator configuration found | Element ID has no associated configuration |
| 500 | Failed to retrieve agent configuration | Database error loading configuration |
| 500 | Failed to generate text | AI API returned an error |
Configuration Storage
When using `elementId`, credentials are:

- Retrieved from the `agent_configs` table
- Filtered by `user_id`, `element_id`, and `agent_type: 'text_generator'`
- Decrypted using XOR encryption with the `ENCRYPTION_KEY` environment variable
- Used for API calls to OpenAI or Anthropic
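The XOR step can be sketched as repeating-key XOR over raw bytes; this is an assumption about the scheme, since the document does not show how ciphertext is encoded or stored:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR (assumed scheme): each data byte is XORed with
    the corresponding byte of the key, cycling the key as needed."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```

Because XOR is its own inverse, the same routine serves for both encryption and decryption.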
Notes
- API keys are encrypted in the database using XOR encryption
- Maximum token limit is set to 1000 for all requests
- The function automatically determines the provider from the model name
- Placeholder resolution happens before sending to the AI API
- Both `apiKey` and `api_key` field names are supported in configurations