# Quickstart
Get up and running with Core AI in just a few minutes. This guide shows you how to create a simple chat completion using OpenAI’s GPT model.
## Prerequisites

Before you begin, make sure you have:

- An OpenAI API key
- Node.js installed (the examples below are run with `tsx`)
## Create your first chat completion
### Set up your project

Create a new file called `chat.ts` in your project, and make sure your API key is set as an environment variable:

```shell
export OPENAI_API_KEY="your-api-key-here"
```
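If the variable is missing, the provider call fails later with a less obvious error, so it can help to check up front. Here is a minimal sketch; the `requireEnv` helper is our own illustration, not part of Core AI:

```typescript
// Small helper (not part of Core AI): read an environment variable
// and fail fast with a clear message when it is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('OPENAI_API_KEY');
```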
### Import dependencies

Add the necessary imports to your `chat.ts` file:

```typescript
import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';
```
The `generate` function handles chat completions, while `createOpenAI` initializes the OpenAI provider.
### Initialize the provider and model

Create an OpenAI provider instance and select a chat model:

```typescript
const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');
```
You can pass any OpenAI model ID, such as `gpt-5-mini`, `gpt-5`, or `o3-mini`.
### Generate a response

Call the `generate` function with your model and messages:

```typescript
const result = await generate({
  model,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one sentence.' },
  ],
});

console.log(result.content);
console.log('Usage:', result.usage);
```
### Run your code

Execute your script using tsx (for example, `npx tsx chat.ts`). You should see the AI’s response printed to the console along with token usage statistics.
## Complete example

Here’s the full working example:

```typescript
import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await generate({
  model,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in one sentence.' },
  ],
});

console.log('Response:', result.content);
console.log('Usage:', result.usage);

// Output:
// Response: Quantum computing uses quantum mechanical phenomena...
// Usage: { inputTokens: 25, outputTokens: 18, totalTokens: 43 }
```
## Try streaming

Core AI makes streaming responses just as easy. Here’s how to stream text as it’s generated:

```typescript
import { stream } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await stream({
  model,
  messages: [
    { role: 'user', content: 'Write a short haiku about TypeScript.' },
  ],
});

// Stream each text chunk as it arrives
for await (const event of result) {
  if (event.type === 'text-delta') {
    process.stdout.write(event.text);
  }
}

// Get the complete response with metadata
const response = await result.toResponse();
console.log('\nFinish reason:', response.finishReason);
console.log('Usage:', response.usage);
```
The `toResponse()` method aggregates the stream into a complete response object. You can call it after iterating through the stream.
## Switch providers

One of Core AI’s key features is provider portability: the same `generate` call works with OpenAI, Anthropic, Google GenAI, and Mistral. Switching from OpenAI to Anthropic takes just two lines. Here’s the OpenAI version:

```typescript
import { generate } from '@core-ai/core-ai';
import { createOpenAI } from '@core-ai/openai';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = openai.chatModel('gpt-5-mini');

const result = await generate({
  model,
  messages: [{ role: 'user', content: 'Hello!' }],
});
```
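To make the two-line switch concrete, here is a sketch of an Anthropic variant. The `@core-ai/anthropic` package name, the `createAnthropic` factory, and the model ID below are assumptions patterned after the OpenAI example above — check the provider reference for the exact names:

```typescript
import { generate } from '@core-ai/core-ai';
// Assumed package and factory name, mirroring '@core-ai/openai' / createOpenAI.
import { createAnthropic } from '@core-ai/anthropic';

// Only these two lines change relative to the OpenAI example:
const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const model = anthropic.chatModel('claude-sonnet-4'); // hypothetical model ID

const result = await generate({
  model,
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

The rest of the call — messages, result handling, streaming — stays identical across providers.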
## Next steps

Now that you have a working chat completion, explore more advanced features:

- **Structured outputs**: Generate type-safe JSON objects with Zod schemas
- **Tool calling**: Let AI call functions with validated parameters
- **Embeddings**: Generate vector embeddings for semantic search
- **Image generation**: Create images from text prompts with DALL-E