Integrate Helicone with the Vercel AI SDK to track and monitor your AI applications, with streaming support and Edge Runtime compatibility.

Quick Start

Integrate Helicone with the Vercel AI SDK by creating an OpenAI provider that points at Helicone's base URL:
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Route OpenAI requests through Helicone's gateway.
// Provider-level options (baseURL, headers) go in createOpenAI,
// not in the model settings.
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}

Installation

npm install ai @ai-sdk/openai

Configuration

Environment Variables

Set up your environment:
.env.local
OPENAI_API_KEY=sk-...
HELICONE_API_KEY=sk-helicone-...
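Missing keys usually surface as opaque 401s at request time, so it can help to validate them when the module loads. A minimal sketch, assuming a Node-style `process.env`; `requireEnv` is a hypothetical helper, not part of Helicone or the AI SDK:

```typescript
// Hypothetical helper: fail fast if a required environment variable is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: call once at startup so misconfiguration surfaces immediately,
// e.g. requireEnv('HELICONE_API_KEY') and requireEnv('OPENAI_API_KEY').
```

This way a missing `HELICONE_API_KEY` fails at boot with a clear message instead of silently producing unlogged requests.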

Provider Configuration

Configure the OpenAI provider with createOpenAI (other providers follow the same pattern):
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const model = openai('gpt-4o-mini');

Streaming Support

Helicone fully supports streaming with the Vercel AI SDK:

Text Streaming

import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

// Helicone-configured provider
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  // Stream response to client
  return result.toDataStreamResponse();
}

Object Streaming

import { streamObject } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

// Helicone-configured provider
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const schema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
    steps: z.array(z.string()),
  }),
});

export async function POST(req: Request) {
  const result = await streamObject({
    model: openai('gpt-4o-mini'),
    schema,
    prompt: 'Generate a recipe for chocolate chip cookies',
  });

  return result.toTextStreamResponse();
}

Client-Side Integration

Use Helicone with client-side Vercel AI SDK hooks:
app/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // Points to your API route with Helicone
  });

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role}: {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

Advanced Features

Session Tracking

Track multi-turn conversations:
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { v4 as uuidv4 } from 'uuid';

// Helicone-configured provider; auth stays at the provider level
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const sessionId = uuidv4();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    // Per-request Helicone headers go in the call-level `headers` option
    headers: {
      'Helicone-Session-Id': sessionId,
      'Helicone-Session-Name': 'Chat Application',
      'Helicone-Session-Path': '/api/chat',
    },
    messages,
  });

  return result.toDataStreamResponse();
}

User Tracking

Track requests by user:
const result = await streamText({
  model: openai('gpt-4o-mini'), // Helicone-configured provider from createOpenAI
  // Per-request Helicone headers go in the call-level `headers` option
  headers: {
    'Helicone-User-Id': 'user-123',
  },
  messages,
});

Custom Properties

Add metadata to your requests:
const result = await streamText({
  model: openai('gpt-4o-mini'), // Helicone-configured provider from createOpenAI
  // Custom properties are per-request headers on the call
  headers: {
    'Helicone-Property-Environment': 'production',
    'Helicone-Property-App': 'chatbot',
    'Helicone-Property-Version': 'v1.2.0',
  },
  messages,
});

Tool Calls

Track tool usage:
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = await streamText({
  model: openai('gpt-4o-mini'), // Helicone-configured provider from createOpenAI
  messages,
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72,
      }),
    }),
  },
});

// Helicone tracks tool calls automatically

Edge Runtime Support

Helicone works seamlessly with Edge Runtime:
app/api/chat/route.ts
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

export const runtime = 'edge';

// Helicone-configured provider
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}

Troubleshooting

Make sure to pass baseURL and headers to createOpenAI (the provider configuration), not to the model settings:
const openai = createOpenAI({
  baseURL: 'https://oai.helicone.ai/v1',
  headers: {  // ✓ Correct: configured on the provider
    'Helicone-Auth': `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const model = openai('gpt-4o-mini');
Ensure you’re using .toDataStreamResponse() or .toTextStreamResponse():
const result = await streamText(...);
return result.toDataStreamResponse(); // ✓ Correct
Helicone is fully compatible with Edge runtime. If you encounter issues:
  1. Verify environment variables are set in your hosting platform
  2. Check that runtime = 'edge' is exported
  3. Ensure you’re using supported dependencies


Next Steps

Sessions: Track multi-turn conversations
Custom Properties: Add metadata to requests
User Analytics: Analyze user behavior
Dashboard: Monitor streaming performance
