This guide will walk you through setting up Helicone and sending your first request. You’ll use our AI Gateway to access 100+ LLM models through the familiar OpenAI SDK with automatic logging built in.
Already using OpenAI, Anthropic, or other providers? Helicone integrates with just a URL change—no refactoring required.

Prerequisites

  • A Helicone account (sign up free)
  • Node.js or Python installed (or just use cURL)
  • 2 minutes

Step 1: Create Your Account

1. Sign up for Helicone

Navigate to helicone.ai/signup and create your free account.
Helicone offers a generous free tier with 10,000 requests per month—no credit card required.
2. Generate your API key

After signing up, go to Settings > API Keys and generate a new API key. Save this key securely—you’ll use it to authenticate your requests.
3. Add credits (optional)

If you want to use Helicone’s AI Gateway to access 100+ models without managing individual provider keys:
  1. Visit helicone.ai/credits
  2. Add credits to your account (starting at $10)
  3. Access any model instantly with 0% markup
Credits let you access 100+ LLM providers (OpenAI, Anthropic, Google, etc.) without signing up for each one individually. Here’s how it works:
  • 0% markup: You pay exactly what providers charge
  • Unified billing: One account for all providers
  • Instant access: No need to sign up for OpenAI, Anthropic, etc.
  • Automatic fallbacks: Switch providers when one is down
  • Simplified management: We handle provider API keys
Alternatively, you can bring your own provider keys for direct billing and full control.
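The automatic fallbacks mentioned above are handled by the gateway itself, but the idea is easy to sketch client-side. The helper below is purely illustrative—`withFallback` and `ChatCall` are our own names, not part of any Helicone SDK:

```typescript
// Illustrative sketch: try a list of gateway model ids in order,
// falling through to the next when a call fails.
type ChatCall = (model: string) => Promise<string>;

async function withFallback(models: string[], call: ChatCall): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // provider down or model unavailable; try the next
    }
  }
  throw lastError;
}
```

You would pass a list like `["gpt-4o-mini", "claude-sonnet-4"]` and a function that issues the actual chat completion for a given model id.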

Step 2: Send Your First Request

Helicone’s AI Gateway provides an OpenAI-compatible API. Simply point your existing OpenAI SDK to our gateway URL:
import { OpenAI } from "openai";

// Initialize the OpenAI client with Helicone's gateway
const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// Make a request to any supported model
const response = await client.chat.completions.create({
  model: "gpt-4o-mini", // Try: "claude-sonnet-4", "gemini-2.0-flash"
  messages: [
    { role: "user", content: "Explain Helicone in one sentence." }
  ],
});

console.log(response.choices[0].message.content);
Make sure to set your HELICONE_API_KEY environment variable. Never hardcode API keys in your source code.

Step 3: View Your Request in the Dashboard

Within seconds of sending your request, it will appear in your Helicone dashboard:
  1. Navigate to us.helicone.ai/requests
  2. You’ll see your request with full details:
    • Request and response bodies
    • Cost breakdown
    • Latency metrics (total time, time to first token)
    • Token usage
    • Model and provider information
Your first request in the Helicone dashboard
Click on any request to see the full conversation, including system prompts, function calls, and streaming details.

Try More Models

One of Helicone’s superpowers is unified access to 100+ models. Try switching models by just changing the model parameter:
const response = await client.chat.completions.create({
  model: "claude-sonnet-4",
  messages: [{ role: "user", content: "Hello!" }],
});
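Because only the `model` string changes, you can sweep several models with one request builder. This is a sketch—`chatParams` is our own helper, and the model ids are just the examples from this guide:

```typescript
// Only the `model` field differs between requests to different providers.
function chatParams(model: string, prompt: string) {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
  };
}

// Example model ids from this guide; see the model catalog for the full list.
const models = ["gpt-4o-mini", "claude-sonnet-4", "gemini-2.0-flash"];
```

For each entry you could then call `client.chat.completions.create(chatParams(m, "Hello!"))` and compare responses.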

Explore All Models

Browse our catalog of 100+ supported models across 20+ providers

Add Custom Metadata

Enhance your requests with custom properties for better filtering and debugging:
const response = await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello!" }],
  },
  {
    headers: {
      "Helicone-Property-User-Id": "user_123",
      "Helicone-Property-Environment": "production",
      "Helicone-Property-Feature": "chatbot",
    },
  }
);
These properties become searchable dimensions in your dashboard, letting you filter requests by user, feature, environment, or any custom tag.
Use custom properties to track costs per user, debug specific features, or analyze performance across environments.
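If you tag many requests, a tiny helper keeps the `Helicone-Property-*` header names consistent. The helper itself is ours, not part of any Helicone SDK:

```typescript
// Build Helicone custom-property headers from a plain object,
// e.g. { "User-Id": "user_123" } -> { "Helicone-Property-User-Id": "user_123" }.
function heliconeProperties(props: Record<string, string>): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [key, value] of Object.entries(props)) {
    headers[`Helicone-Property-${key}`] = value;
  }
  return headers;
}
```

You would pass the result as the `headers` option, e.g. `{ headers: heliconeProperties({ "User-Id": "user_123", Environment: "production" }) }`.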

Track Sessions (Multi-Step Workflows)

Building an AI agent or chatbot with multiple LLM calls? Use sessions to group related requests:
import { randomUUID } from "crypto";

const sessionId = randomUUID();

// First request in the session
await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Generate a blog outline" }],
  },
  {
    headers: {
      "Helicone-Session-Id": sessionId,
      "Helicone-Session-Name": "Blog Writer",
      "Helicone-Session-Path": "/outline",
    },
  }
);

// Second request in the same session
await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Write the introduction" }],
  },
  {
    headers: {
      "Helicone-Session-Id": sessionId,
      "Helicone-Session-Name": "Blog Writer",
      "Helicone-Session-Path": "/introduction",
    },
  }
);
View your session in the Sessions dashboard to see the complete trace tree of your AI workflow.
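Repeating the three session headers on every call is easy to get wrong, so a small wrapper can help. This class is our own convenience sketch, not an official API—it just reuses the header names shown above:

```typescript
import { randomUUID } from "crypto";

// Keeps one session id/name fixed and varies only the path per step.
class HeliconeSession {
  private readonly id = randomUUID();
  constructor(private readonly name: string) {}

  // Request options for one step of the workflow, keyed by its path.
  step(path: string) {
    return {
      headers: {
        "Helicone-Session-Id": this.id,
        "Helicone-Session-Name": this.name,
        "Helicone-Session-Path": path,
      },
    };
  }
}
```

With it, the two calls above become `client.chat.completions.create(params, session.step("/outline"))` and `...session.step("/introduction")`.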

Learn More About Sessions

Deep dive into session tracking for complex AI agents and workflows

What’s Next?

Now that you’re logging requests, explore what else Helicone can do:

Platform Overview

Understand how Helicone works and explore the architecture

Gateway Features

Set up automatic fallbacks, caching, and rate limits

Cost Tracking

Track costs per user, feature, or any custom dimension

Prompt Management

Deploy and version prompts without code changes

Common Integration Patterns

With LangChain, point ChatOpenAI at the gateway:
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  apiKey: process.env.HELICONE_API_KEY, // authenticates with the gateway
  configuration: {
    baseURL: "https://ai-gateway.helicone.ai",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    },
  },
});
View full LangChain integration guide →
With the Vercel AI SDK, create a provider that targets the gateway:
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});
View full Vercel AI SDK integration guide →
If you prefer not to use a proxy, you can log requests asynchronously:
import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY,
});

// The payload you would normally send to the provider
const requestBody = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
};

await logger.logRequest(requestBody, async (recorder) => {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(requestBody),
  });
  
  const data = await response.json();
  recorder.appendResults(data);
  return data;
});
View async logging documentation in our integrations guide

Need Help?

We’re here to support you.
