
Overview

The Anthropic provider gives you access to Claude models with advanced reasoning capabilities through adaptive and manual thinking modes.

Installation

npm install @core-ai/anthropic

createAnthropic()

Create an Anthropic provider instance.
import { createAnthropic } from '@core-ai/anthropic';

const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultMaxTokens: 4096 // optional
});

Options

apiKey (string)
Your Anthropic API key. Defaults to the ANTHROPIC_API_KEY environment variable.

baseURL (string)
Custom base URL for API requests. Useful for proxies or custom endpoints.

defaultMaxTokens (number, default: 4096)
Default maximum tokens for completions. Can be overridden per request.

client (Anthropic)
Provide your own configured Anthropic client instance.

Returns

AnthropicProvider - Provider instance with methods to create models.

Provider Methods

chatModel()

Create a chat model instance.
const model = anthropic.chatModel('claude-opus-4-6');
modelId (string, required)
Model identifier. See Supported Models below.

Supported Models

Claude 4.6 - Latest generation with adaptive thinking mode.
  • claude-opus-4-6 - Most capable, supports max effort
  • claude-sonnet-4-6 - Balanced performance and speed

Claude 4.5 - Previous generation with manual thinking budget control.
  • claude-opus-4-5 - High capability
  • claude-sonnet-4-5 - Efficient reasoning
  • claude-haiku-4-5 - Fast and lightweight

Claude 4 and earlier - Earlier Claude models with manual thinking.
  • claude-opus-4-1 - Enhanced reasoning
  • claude-opus-4 - Strong performance
  • claude-sonnet-4 - Balanced model
  • claude-sonnet-3-7 - Previous generation
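Given the split above, a small helper (illustrative only, not part of the SDK) can infer which thinking mode a model ID uses:

```typescript
// Hypothetical helper: classify a model ID by thinking mode.
// Claude 4.6 models use adaptive thinking; 4.5 and earlier use a
// manual thinking budget.
type ThinkingMode = 'adaptive' | 'manual';

function thinkingModeFor(modelId: string): ThinkingMode {
  return modelId.endsWith('-4-6') ? 'adaptive' : 'manual';
}
```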

Capabilities

Feature            Support
Chat Completion    ✓
Streaming          ✓
Function Calling   ✓
Vision             ✓
Reasoning Effort   ✓
Embeddings         ✗
Image Generation   ✗

Thinking Modes

Claude models use two different thinking modes:

Adaptive Thinking

Used by Claude 4.6 models. The model automatically determines thinking depth based on the effort level.
const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  prompt: 'Solve this complex problem...',
  reasoning: {
    effort: 'high' // 'low' | 'medium' | 'high' | 'max'
  }
});
Only claude-opus-4-6 supports the 'max' effort level.
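Since that constraint is easy to trip over, a client-side guard can catch it before the request is sent. This is a sketch, not an SDK feature; the function name is illustrative:

```typescript
// Hypothetical guard: reject 'max' effort on models other than
// claude-opus-4-6 before making a request.
type Effort = 'low' | 'medium' | 'high' | 'max';

function assertEffortSupported(modelId: string, effort: Effort): void {
  if (effort === 'max' && modelId !== 'claude-opus-4-6') {
    throw new Error(`'max' effort requires claude-opus-4-6, got ${modelId}`);
  }
}
```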

Manual Thinking Budget

Used by Claude 4.5 and earlier. You control the thinking token budget:
  • minimal → 1,024 tokens
  • low → 2,048 tokens
  • medium → 8,192 tokens
  • high → 32,768 tokens
  • max → 65,536 tokens
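The budget table above can be captured as a lookup (the constant and helper names are illustrative, not part of the SDK):

```typescript
// Thinking-token budgets for manual-mode models (Claude 4.5 and earlier),
// mirroring the effort → budget table above. Names are hypothetical.
const THINKING_BUDGETS = {
  minimal: 1_024,
  low: 2_048,
  medium: 8_192,
  high: 32_768,
  max: 65_536,
} as const;

type ManualEffort = keyof typeof THINKING_BUDGETS;

function thinkingBudget(effort: ManualEffort): number {
  return THINKING_BUDGETS[effort];
}
```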
const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-5'),
  prompt: 'Complex reasoning task...',
  reasoning: {
    effort: 'high' // Allocates 32,768 thinking tokens
  }
});

Examples

Basic Chat

import { createAnthropic } from '@core-ai/anthropic';
import { generateText } from '@core-ai/core-ai';

const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  prompt: 'Explain the theory of relativity'
});

console.log(result.text);

Extended Thinking

const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  prompt: `
    Analyze this complex dataset and provide insights:
    [Large dataset here]
  `,
  reasoning: {
    effort: 'max' // Maximum reasoning effort
  },
  maxTokens: 8192
});

// Access thinking process if available
if (result.reasoning) {
  console.log('Thinking tokens:', result.reasoning.thinkingTokens);
}

Streaming with Thinking

import { streamText } from '@core-ai/core-ai';

const stream = await streamText({
  model: anthropic.chatModel('claude-sonnet-4-6'),
  prompt: 'Write a detailed analysis of...',
  reasoning: {
    effort: 'high'
  }
});

for await (const chunk of stream) {
  if (chunk.type === 'thinking') {
    console.log('Thinking:', chunk.text);
  } else if (chunk.type === 'text') {
    console.log('Output:', chunk.text);
  }
}

Function Calling

import { generateText, tool } from '@core-ai/core-ai';
import { z } from 'zod';

const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string()
      }),
      execute: async ({ location }) => {
        // Mock implementation; call a real weather API here
        return { temp: 72, condition: 'sunny' };
      }
    })
  }
});

Vision

const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        {
          type: 'image',
          image: 'https://example.com/image.jpg'
        }
      ]
    }
  ]
});

Custom Max Tokens

const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultMaxTokens: 8192 // Higher default for all requests
});

// Or override per request
const result = await generateText({
  model: anthropic.chatModel('claude-opus-4-6'),
  prompt: 'Long response...',
  maxTokens: 16000
});

Error Handling

import { APIError } from '@core-ai/core-ai';

try {
  const result = await generateText({
    model: anthropic.chatModel('claude-opus-4-6'),
    prompt: 'Hello!'
  });
} catch (error) {
  if (error instanceof APIError) {
    console.error('Anthropic API error:', error.message);
    console.error('Status:', error.statusCode);
  }
}
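Beyond catching APIError, transient failures (rate limits, 5xx responses) are often worth retrying. A generic exponential-backoff wrapper, sketched here without any SDK dependency; the retry predicate is up to you:

```typescript
// Generic retry sketch (not SDK-provided): retry an async call with
// exponential backoff when the caller deems the error retryable.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      // Back off: 500ms, 1000ms, 2000ms, ... with the defaults above.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```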

Best Practices

  • Use Claude 4.6 (adaptive) for most use cases - it's more efficient
  • Use Claude 4.5 (manual) when you need precise control over the thinking budget
  • Match the effort level to the task: low for simple questions and quick responses; medium for standard reasoning tasks; high for complex analysis and multi-step reasoning; max for the most challenging problems (claude-opus-4-6 only)
  • Set defaultMaxTokens based on your typical use case
  • Remember that thinking tokens count toward your usage
  • Use streaming to show progress for long responses
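Since thinking tokens count toward usage, a rough worst-case estimate for the output side of a manual-mode request is the thinking budget plus maxTokens. A trivial illustrative helper (not part of the SDK):

```typescript
// Illustrative only: worst-case output-side token usage for a
// manual-thinking request, assuming the full budget could be spent.
function worstCaseOutputTokens(thinkingBudget: number, maxTokens: number): number {
  return thinkingBudget + maxTokens;
}

// e.g. effort 'high' (32,768 thinking tokens) with maxTokens 8192
const estimate = worstCaseOutputTokens(32_768, 8_192);
```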

Related

  • OpenAI Provider - GPT models with reasoning capabilities
  • Google GenAI Provider - Gemini models with multimodal support
  • Chat Completion Guide - Learn how to use chat completion effectively
