Overview
Generates a model response for the given conversation and configuration. Supports OpenAI-compatible parameters and provider-specific extensions.
Method Signature
func (r *ChatCompletionService) New(
	ctx context.Context,
	body ChatCompletionNewParams,
	opts ...option.RequestOption,
) (*ChatCompletion, error)
Streaming Method
func (r *ChatCompletionService) NewStreaming(
	ctx context.Context,
	body ChatCompletionNewParams,
	opts ...option.RequestOption,
) *ssestream.Stream[ChatCompletionChunk]
Request Parameters
messages
[]ChatCompletionMessageParam
required
Array of messages in the conversation. Each message has a role (system, user, assistant, developer, function, tool) and content.
model
string
required
ID of the model to use. Format: provider/model-name (e.g., openai/gpt-4, anthropic/claude-3-5-sonnet-20241022)

stream
bool
Whether to stream responses using Server-Sent Events (SSE)

temperature
float64
Sampling temperature between 0 and 2. Higher values make output more random.

max_tokens
int64
Maximum number of tokens to generate in the completion

top_p
float64
Nucleus sampling parameter. Alternative to temperature.

n
int64
Number of chat completion choices to generate

stop
[]string
Up to 4 sequences where the API will stop generating tokens

presence_penalty
float64
Penalty between -2.0 and 2.0 for new tokens based on whether they appear in the text so far

frequency_penalty
float64
Penalty between -2.0 and 2.0 for new tokens based on their frequency in the text so far
tools
[]ChatCompletionToolParam
List of tools the model may call. Use this for function calling.
tool_choice
string or object
Controls which (if any) tool is called. Options: auto, none, any, or a specific tool.

response_format
object
Format for the model's output. Supports text, json_object, or json_schema.

user
string
Unique identifier for the end user, used for abuse monitoring
Response Fields
id
string
Unique identifier for the chat completion

object
string
Object type, always chat.completion

created
int64
Unix timestamp (in seconds) when the completion was created

model
string
The model used for the completion

choices
array
Array of chat completion choices

choices[].finish_reason
string
Reason the model stopped: stop, length, tool_calls, content_filter, or function_call

choices[].logprobs
object
Log probability information for the choice

usage
object
Token usage statistics

usage.prompt_tokens
int64
Number of tokens in the prompt

usage.completion_tokens
int64
Number of tokens in the completion

usage.total_tokens
int64
Total tokens used (prompt + completion)

system_fingerprint
string
Backend configuration fingerprint for determinism tracking

Provider-specific extension fields may also be present: a list of tool names executed server-side (MCP tools), and information about MCP server failures, if any occurred.
Code Examples
Basic Chat Completion
package main

import (
	"context"
	"fmt"
	"log"

	dedalus "github.com/dedalus-labs/dedalus-sdk-go"
	"github.com/dedalus-labs/dedalus-sdk-go/option"
)

func main() {
	client := dedalus.NewClient(
		option.WithAPIKey("your-api-key"),
	)

	ctx := context.Background()

	completion, err := client.Chat.Completions.New(ctx, dedalus.ChatCompletionNewParams{
		Model: dedalus.F("openai/gpt-4"),
		Messages: dedalus.F([]dedalus.ChatCompletionMessageParamUnion{
			dedalus.ChatCompletionUserMessageParam{
				Role: dedalus.F(dedalus.ChatCompletionUserMessageParamRoleUser),
				Content: dedalus.F(dedalus.ChatCompletionUserMessageParamContentUnion(
					dedalus.String("What is the capital of France?"),
				)),
			},
		}),
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(completion.Choices[0].Message.Content)
}
Streaming Chat Completion
stream := client.Chat.Completions.NewStreaming(ctx, dedalus.ChatCompletionNewParams{
	Model: dedalus.F("openai/gpt-4"),
	Messages: dedalus.F([]dedalus.ChatCompletionMessageParamUnion{
		dedalus.ChatCompletionUserMessageParam{
			Role: dedalus.F(dedalus.ChatCompletionUserMessageParamRoleUser),
			Content: dedalus.F(dedalus.ChatCompletionUserMessageParamContentUnion(
				dedalus.String("Tell me a story"),
			)),
		},
	}),
})

for stream.Next() {
	chunk := stream.Current()
	if len(chunk.Choices) > 0 {
		fmt.Print(chunk.Choices[0].Delta.Content)
	}
}

if err := stream.Err(); err != nil {
	log.Fatal(err)
}
Function Calling
// Note: the shared types below come from the SDK's shared package
// (assumed import: github.com/dedalus-labs/dedalus-sdk-go/shared).
completion, err := client.Chat.Completions.New(ctx, dedalus.ChatCompletionNewParams{
	Model: dedalus.F("openai/gpt-4"),
	Messages: dedalus.F([]dedalus.ChatCompletionMessageParamUnion{
		dedalus.ChatCompletionUserMessageParam{
			Role: dedalus.F(dedalus.ChatCompletionUserMessageParamRoleUser),
			Content: dedalus.F(dedalus.ChatCompletionUserMessageParamContentUnion(
				dedalus.String("What's the weather in San Francisco?"),
			)),
		},
	}),
	Tools: dedalus.F([]dedalus.ChatCompletionToolParam{
		{
			Type: dedalus.F(dedalus.ChatCompletionToolParamTypeFunction),
			Function: dedalus.F(shared.FunctionDefinitionParam{
				Name:        dedalus.F("get_weather"),
				Description: dedalus.F("Get the current weather in a location"),
				Parameters: dedalus.F(shared.FunctionParameters{
					"type": "object",
					"properties": map[string]interface{}{
						"location": map[string]interface{}{
							"type":        "string",
							"description": "The city name",
						},
					},
					"required": []string{"location"},
				}),
			}),
		},
	}),
})
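When `finish_reason` is `tool_calls`, the assistant message carries the call's function name plus a JSON-encoded arguments string; your code decodes it, runs the function, and appends the result as a tool message on the next request. A stdlib sketch of the decode step (the args type is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// weatherArgs matches the parameters schema declared for get_weather above.
type weatherArgs struct {
	Location string `json:"location"`
}

// decodeArgs parses the JSON arguments string returned in a tool call.
func decodeArgs(raw string) (weatherArgs, error) {
	var a weatherArgs
	err := json.Unmarshal([]byte(raw), &a)
	return a, err
}

func main() {
	// Example of the arguments payload a model might emit for get_weather.
	args, err := decodeArgs(`{"location":"San Francisco"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(args.Location) // San Francisco
}
```

Because the model produces the arguments string, validate it before use: decoding can fail or yield empty fields even when the schema marks them required.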
Message Types
The SDK supports multiple message types:
- System Message: Instructions for the model
- User Message: User input (supports text, images, audio, files)
- Assistant Message: Model responses
- Tool Message: Tool execution results
- Function Message: Function call results (deprecated)
- Developer Message: Developer instructions (replaces system for o1 models)
Error Responses
- 400 Bad Request: Validation error in request parameters
- 401 Unauthorized: Invalid or missing API key
- 402 Payment Required: Insufficient balance or quota
- 429 Too Many Requests: Rate limit exceeded
- 500 Internal Server Error: Unexpected server failure
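Of the statuses above, only 429 and 5xx are transient; 400, 401, and 402 mean the request or account must change, so retrying cannot help. A sketch of that client-side classification (the retry policy is illustrative, not the SDK's built-in behavior):

```go
package main

import (
	"fmt"
	"net/http"
)

// retryable reports whether a failed request is worth retrying.
// Rate limits (429) and server errors (5xx) are transient; client
// errors (4xx other than 429) require fixing the request itself.
func retryable(status int) bool {
	return status == http.StatusTooManyRequests || status >= 500
}

func main() {
	for _, s := range []int{400, 401, 402, 429, 500} {
		fmt.Println(s, retryable(s))
	}
}
```

Pair this with exponential backoff so retried requests don't immediately hit the same rate limit.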