POST /v1/messages/count_tokens
Count the number of tokens that would be used by a Messages request. This is useful for estimating costs and ensuring you stay within model token limits.

This endpoint is specific to Anthropic’s API format. For OpenAI-format requests, token counting is typically done client-side or through the provider’s tokenizer.
Authentication
Requires provider authentication headers.

Request
Headers
- Must be set to `anthropic`
- Bearer token for the Anthropic API
- API version (e.g., `2023-06-01`)

Body Parameters
- `model`: The model to count tokens for (e.g., `claude-3-5-sonnet-20241022`)
- `messages`: Array of message objects in Anthropic format
- `system`: System prompt (optional)
- `tools`: Array of tool definitions (if using tool calling)
Response
- `input_tokens`: Number of tokens in the input (messages + system prompt + tools)
Example
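As a sketch, a minimal count_tokens request can be built like this (the gateway URL and the use of the `requests` library in the commented-out section are assumptions, not from this document):

```python
import json

# Minimal count_tokens payload in Anthropic Messages format.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [
        {"role": "user", "content": "Hello, world"}
    ],
}

# To send it, POST to the endpoint with the authentication headers
# described above (URL and exact header names are deployment-specific):
# import requests
# resp = requests.post(
#     "https://<your-gateway>/v1/messages/count_tokens",  # placeholder URL
#     headers={...},  # provider authentication headers
#     json=payload,
# )
# print(resp.json()["input_tokens"])

print(json.dumps(payload, indent=2))
```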
Response Example
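Assuming the standard Anthropic response shape, a successful response contains a single field; the count below is illustrative:

```json
{
  "input_tokens": 14
}
```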
Use Cases
Cost Estimation
Calculate the cost of a request before sending it by counting tokens and multiplying by the model’s per-token price.
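A minimal sketch of that calculation (the per-million-token price is a placeholder, not a real rate; check your provider's pricing page):

```python
# Estimate input cost from a token count returned by count_tokens.
def estimate_cost_usd(input_tokens: int, price_per_million_tokens: float) -> float:
    """Return the estimated input cost in USD."""
    return input_tokens / 1_000_000 * price_per_million_tokens

# e.g., 120,000 input tokens at a hypothetical $3.00 per million tokens
cost = estimate_cost_usd(120_000, 3.00)
print(f"${cost:.2f}")  # → $0.36
```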
Context Window Management
Ensure your messages fit within the model’s context window (e.g., 200K tokens for Claude 3.5 Sonnet).
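A sketch of the check, using the 200K figure mentioned above and leaving room for the planned completion:

```python
# Verify that counted input plus the planned completion fits the
# model's context window (200K shown for Claude 3.5 Sonnet).
def fits_context(input_tokens: int, max_output_tokens: int,
                 context_window: int = 200_000) -> bool:
    return input_tokens + max_output_tokens <= context_window

print(fits_context(150_000, 4_096))  # True: plenty of headroom
print(fits_context(198_000, 4_096))  # False: 202,096 exceeds 200,000
```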
Prompt Optimization
Compare token counts across different prompt formulations to optimize for cost and efficiency.
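Assuming you have already called the endpoint once per formulation, picking the leanest one is a simple comparison (the labels and counts below are made up for illustration):

```python
# Given input_tokens counts for several prompt formulations,
# pick the most token-efficient one.
def cheapest_prompt(counts: dict[str, int]) -> str:
    """counts maps a prompt label to its input_tokens value."""
    return min(counts, key=counts.get)

counts = {"verbose": 412, "concise": 198, "bulleted": 230}  # illustrative
print(cheapest_prompt(counts))  # → concise
```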
Dynamic Context Trimming
Determine which messages to keep or remove when approaching token limits in multi-turn conversations.
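One possible sketch, assuming you track a token count per message (e.g., from earlier count_tokens calls): drop the oldest messages until the total fits the budget.

```python
# Trim a conversation to a token budget, removing oldest messages first.
def trim_to_budget(messages_with_counts: list[tuple[dict, int]],
                   budget: int) -> list[dict]:
    kept = list(messages_with_counts)
    while kept and sum(n for _, n in kept) > budget:
        kept.pop(0)  # remove the oldest message first
    return [m for m, _ in kept]

history = [({"role": "user", "content": "old question"}, 900),
           ({"role": "assistant", "content": "old reply"}, 800),
           ({"role": "user", "content": "latest question"}, 300)]
print(trim_to_budget(history, 1200))  # keeps the two most recent messages
```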
Token Counting with Tools
When using tool calling, tools are included in the token count:
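For example, a payload with one tool attached (the weather tool is a hypothetical illustration; the schema follows Anthropic's `name`/`description`/`input_schema` tool format):

```python
# count_tokens payload including a tool definition; the tool's schema
# contributes to input_tokens. This tool is illustrative, not from
# the original document.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
}
print(len(payload["tools"]))  # → 1
```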
Pricing

Token counting requests do not consume any tokens or incur costs. Use this endpoint freely to estimate costs before making actual API calls.

Related Endpoints
Create Message
Send a messages request to Claude
Anthropic Provider
Anthropic integration guide