# Create chat completions with the Dedalus SDK

Example request:

```bash
curl --request POST \
  --url https://api.example.com/v1/chat/completions
```

Example response:

```json
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {}
  ],
  "usage": {},
  "system_fingerprint": "<string>",
  "service_tier": "<string>",
  "tools_executed": [
    "<string>"
  ],
  "mcp_server_errors": {}
}
```
The `client.chat.completions.create()` method generates model responses for conversations. It supports OpenAI-compatible parameters with Dedalus-specific extensions for multi-model routing, server-side tool execution, and agent orchestration.
## Quick start

```typescript
import Dedalus from 'dedalus-labs';

const client = new Dedalus({
  apiKey: process.env.DEDALUS_API_KEY,
});

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is the capital of France?' }
  ],
});

console.log(completion.choices[0].message.content);
```

## Method signature

```typescript
client.chat.completions.create(
  body: CompletionCreateParams,
  options?: RequestOptions
): APIPromise<Completion | Stream<StreamChunk>>
```
## Parameters

- `model` (string | string[]) - Model ID, e.g. `'openai/gpt-4'`, `'anthropic/claude-3-5-sonnet'`, `'google/gemini-pro'`. Pass an array to enable multi-model routing.
- `messages` (array) - Conversation messages; each message has a `role` and `content`. Supported roles:
  - `user` - Messages from the end user
  - `assistant` - Messages from the AI assistant
  - `system` - System instructions (legacy; use `developer` for newer models)
  - `developer` - Developer instructions (o1 models and newer)
  - `tool` - Tool execution results
  - `function` - Function call results (deprecated)
- `stream` (boolean) - If `true`, returns a `Stream<StreamChunk>` instead of a `Completion`. See the Streaming documentation for details.
- `temperature` (number) - Sampling temperature:
  - `0.0` - Deterministic, focused
  - `1.0` - Balanced (default)
  - `2.0` - Very creative, random
- `max_tokens` (number) - Deprecated; use `max_completion_tokens` instead.
- `tool_choice` - Controls tool selection:
  - `'auto'` - Model decides (default)
  - `'none'` - No tools used
  - `'required'` - Model must use a tool
  - `{ type: 'tool', name: 'tool_name' }` - Specific tool
- `automatic_tool_execution` (boolean) - If `false`, returns raw tool calls for client-side handling.
- `parallel_tool_calls` (boolean) - If `true`, the model can call multiple tools simultaneously.
- `mcp_servers` (string[]) - MCP servers to attach, e.g. `['github:user/repo', 'https://mcp.example.com']`.
- `agent_attributes` (object) - Agent routing weights, e.g. `{ creativity: 0.8, accuracy: 0.9 }`.
- `model_attributes` (object) - Per-model routing weights, e.g.:

  ```typescript
  {
    'openai/gpt-4': { speed: 0.7, cost: 0.3 },
    'anthropic/claude-3-5-sonnet': { quality: 0.9 }
  }
  ```

- `top_logprobs` (number) - Requires `logprobs: true`.
- `response_format` - Output format:
  - `{ type: 'text' }` - Plain text (default)
  - `{ type: 'json_object' }` - JSON object
  - `{ type: 'json_schema', json_schema: {...} }` - Structured JSON with schema
- `seed` (number) - Combine with `temperature: 0` for reproducible results.
- `user` (string) - Deprecated; use `safety_identifier` instead.
- `reasoning_effort` - One of `'low'`, `'medium'`, `'high'`.
- `thinking` - `{ type: 'enabled', budget_tokens: number }` or `{ type: 'disabled' }`.
- `audio` (object) - Audio output options (used when `modalities` includes `'audio'`).
- `modalities` (string[]) - e.g. `['text']` or `['text', 'audio']`.

## Response

- `object` (string) - Always `'chat.completion'`.
- `choices` (array) - Generated choices; contains more than one entry when `n > 1`. Each choice contains:
  - `index` (number) - Choice index
  - `message` (ChatCompletionMessage) - Generated message
  - `finish_reason` (string) - Why generation stopped: `'stop'`, `'length'`, `'tool_calls'`, `'content_filter'`
  - `logprobs` (ChoiceLogprobs | null) - Log probability information if requested
- `usage` (object):
  - `prompt_tokens` (number) - Tokens in the prompt
  - `completion_tokens` (number) - Tokens in the completion
  - `total_tokens` (number) - Total tokens used
  - `completion_tokens_details` (object) - Breakdown of completion tokens
  - `prompt_tokens_details` (object) - Breakdown of prompt tokens
- `system_fingerprint` (string) - Combine with `seed` for understanding determinism.
- `service_tier` (string) - One of `'auto'`, `'default'`, `'flex'`, `'scale'`, `'priority'`.
- `tools_executed` (string[]) - Populated when `automatic_tool_execution: true`.

## Basic completion

```typescript
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in simple terms.' }
  ],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(completion.choices[0].message.content);
```
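With `stream: true`, the same call resolves to a `Stream<StreamChunk>` that yields incremental deltas instead of one complete message. The sketch below shows how those deltas are typically accumulated; the chunk shape (`choices[0].delta.content`) is an assumption based on the OpenAI-compatible streaming format, and the mock generator stands in for the real stream returned by `client.chat.completions.create({ ..., stream: true })`.

```typescript
// Simplified local chunk type; the SDK's StreamChunk should be shaped like this.
type StreamChunk = { choices: { delta?: { content?: string } }[] };

// Concatenate the content deltas from a stream into the final message text.
async function collectText(chunks: AsyncIterable<StreamChunk>): Promise<string> {
  let text = '';
  for await (const chunk of chunks) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

// Stand-in for a real stream, for illustration only.
async function* mockStream(): AsyncGenerator<StreamChunk> {
  yield { choices: [{ delta: { content: 'Hello' } }] };
  yield { choices: [{ delta: { content: ', world' } }] };
  yield { choices: [{}] }; // the final chunk may carry no delta
}

collectText(mockStream()).then(text => console.log(text)); // "Hello, world"
```

In real use you would pass the SDK's stream object to `collectText` (or write the `for await` loop inline and `process.stdout.write` each delta for live output).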
## Multi-turn conversation

```typescript
const messages = [
  { role: 'user', content: 'What is 2+2?' },
  { role: 'assistant', content: '2+2 equals 4.' },
  { role: 'user', content: 'What about 2+3?' }
];

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages,
});
```
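The API is stateless, so each request must carry the full history: append the previous assistant reply before the next user turn. A minimal sketch of that bookkeeping, using a simplified local `Message` type rather than the SDK's own:

```typescript
// Simplified local message type for illustration.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

// Return a new history with the assistant's reply and the next user turn appended.
function withTurn(history: Message[], assistantReply: string, nextUserMessage: string): Message[] {
  return [
    ...history,
    { role: 'assistant', content: assistantReply },
    { role: 'user', content: nextUserMessage },
  ];
}

const history: Message[] = [{ role: 'user', content: 'What is 2+2?' }];
const next = withTurn(history, '2+2 equals 4.', 'What about 2+3?');
console.log(next.length); // 3
```

The assistant reply would come from `completion.choices[0].message.content`, and `next` becomes the `messages` array of the following request.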
## JSON mode

```typescript
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    {
      role: 'system',
      content: 'Extract the name and age as JSON.'
    },
    {
      role: 'user',
      content: 'My name is John and I am 30 years old.'
    }
  ],
  response_format: { type: 'json_object' },
});
```
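JSON mode guarantees syntactically valid JSON, but (unlike `json_schema`) not a particular shape, so it is worth parsing and checking the fields before use. A sketch, with a hard-coded string standing in for `completion.choices[0].message.content`:

```typescript
// Stand-in for completion.choices[0].message.content returned under JSON mode.
const raw = '{"name":"John","age":30}';

// Parse, then validate the shape the prompt asked for.
const data = JSON.parse(raw) as { name?: unknown; age?: unknown };
if (typeof data.name !== 'string' || typeof data.age !== 'number') {
  throw new Error('Model returned valid JSON with an unexpected shape');
}

console.log(data.name, data.age); // John 30
```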
## Structured output

```typescript
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'Extract person info from: John Doe, 30 years old' }
  ],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'person_info',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          age: { type: 'number' }
        },
        required: ['name', 'age'],
        additionalProperties: false
      }
    }
  },
});
```
## Multi-model routing

```typescript
const completion = await client.chat.completions.create({
  model: [
    'openai/gpt-4',
    'anthropic/claude-3-5-sonnet'
  ],
  messages: [
    { role: 'user', content: 'Explain machine learning.' }
  ],
  agent_attributes: {
    creativity: 0.8,
    technical_depth: 0.9
  },
});
```
## Server-side tool execution with MCP

```typescript
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'Search for recent AI news' }
  ],
  mcp_servers: ['github:user/web-search-mcp'],
  automatic_tool_execution: true,
});
```
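After a run with `automatic_tool_execution: true`, the response's `tools_executed` and `mcp_server_errors` fields report what happened server-side. A minimal sketch of inspecting them, using a simplified local type (`CompletionLike`) rather than the SDK's `Completion`:

```typescript
// Simplified local view of the completion fields relevant to tool execution.
type CompletionLike = {
  tools_executed?: string[];
  mcp_server_errors?: Record<string, string>;
};

// Summarize how many tools ran and how many MCP servers reported errors.
function summarizeToolRun(c: CompletionLike): string {
  const tools = c.tools_executed ?? [];
  const errors = Object.keys(c.mcp_server_errors ?? {});
  return `${tools.length} tool(s) executed, ${errors.length} server error(s)`;
}

// Illustrative response fragment; a real one comes from create().
const example: CompletionLike = {
  tools_executed: ['web_search'],
  mcp_server_errors: {},
};
console.log(summarizeToolRun(example)); // "1 tool(s) executed, 0 server error(s)"
```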
## Error handling

```typescript
import { APIError, RateLimitError, AuthenticationError } from 'dedalus-labs';

try {
  const completion = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
  });
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.error('Rate limit exceeded');
  } else if (error instanceof APIError) {
    console.error('API error:', error.status, error.message);
  }
}
```
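Rate-limit errors are usually transient, so a common pattern is to retry with exponential backoff instead of failing immediately. The delay schedule and attempt count below are assumptions for illustration, not SDK defaults:

```typescript
// Exponential backoff: 500ms, 1s, 2s, ... capped at 8s (assumed schedule).
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry fn while isRetryable(error) holds, up to maxAttempts total attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  isRetryable: (e: unknown) => boolean,
  maxAttempts = 4,
  baseMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (!isRetryable(e) || attempt + 1 >= maxAttempts) throw e;
      await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt, baseMs)));
    }
  }
}
```

With the SDK this would wrap the call from the example above, e.g. `withRetries(() => client.chat.completions.create({ ... }), e => e instanceof RateLimitError)`.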
## Status codes

- `200 OK` - Successful completion
- `400 Bad Request` - Invalid parameters
- `401 Unauthorized` - Authentication failed
- `402 Payment Required` - Insufficient credits
- `429 Too Many Requests` - Rate limit exceeded
- `500 Internal Server Error` - Server error