
Overview

The AutoGen.OpenAI package provides integration with OpenAI chat models such as GPT-4, GPT-4 Turbo, and GPT-3.5, as well as the Azure OpenAI Service.

Installation

dotnet add package AutoGen.OpenAI

OpenAI Setup

Connect to OpenAI’s API:
1. Get your API key

Obtain an API key from the OpenAI Platform.
2. Set the environment variable

$env:OPENAI_API_KEY="sk-..."
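The line above is PowerShell syntax; on macOS/Linux (bash/zsh) the equivalent is:

```shell
# macOS/Linux equivalent (bash/zsh)
export OPENAI_API_KEY="sk-..."
```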
3. Create an agent

using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
var openAIClient = new OpenAIClient(apiKey);

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4"),
    name: "assistant",
    systemMessage: "You are a helpful AI assistant")
    .RegisterMessageConnector()
    .RegisterPrintMessage();

var response = await agent.SendAsync("Hello!");

OpenAIChatAgent

The main agent class for OpenAI models:
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using OpenAI;

var openAIClient = new OpenAIClient(apiKey);

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4-turbo"),
    name: "assistant",
    systemMessage: "You are a helpful assistant",
    seed: 0,                    // For reproducible outputs
    temperature: 0.7f,          // Creativity level
    maxTokens: 2000,            // Max response length
    responseFormat: null)       // JSON mode if needed
    .RegisterMessageConnector()
    .RegisterPrintMessage();

Constructor Parameters

chatClient (ChatClient, required)
    OpenAI ChatClient instance for the specific model.

name (string, required)
    Unique identifier for the agent.

systemMessage (string, default: "You are a helpful assistant")
    Instructions defining the agent's behavior.

seed (int?, optional)
    Random seed for deterministic outputs (when supported by the model).

temperature (float, default: 0.7)
    Sampling temperature (0.0 = deterministic, 2.0 = very creative).

maxTokens (int, default: 1024)
    Maximum number of tokens to generate in the response.

responseFormat (ChatResponseFormat?, optional)
    Response format (e.g., JSON mode).

Available Models

// GPT-4 Turbo (Recommended)
var gpt4Turbo = openAIClient.GetChatClient("gpt-4-turbo");
var agent = new OpenAIChatAgent(
    chatClient: gpt4Turbo,
    name: "assistant")
    .RegisterMessageConnector();

// GPT-4 (Original)
var gpt4 = openAIClient.GetChatClient("gpt-4");

// GPT-4 32K (Large context)
var gpt4_32k = openAIClient.GetChatClient("gpt-4-32k");

Azure OpenAI

Connect to models deployed on Azure:
1. Set up credentials

# Set your Azure OpenAI credentials
export AZURE_OPENAI_API_KEY="your-azure-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOY_NAME="your-deployment-name"
2. Create an Azure OpenAI agent

using System.ClientModel;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using Azure.AI.OpenAI;

var apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOY_NAME");

// Create Azure OpenAI client
var azureClient = new AzureOpenAIClient(
    new Uri(endpoint),
    new ApiKeyCredential(apiKey));

var agent = new OpenAIChatAgent(
    chatClient: azureClient.GetChatClient(deploymentName),
    name: "assistant",
    systemMessage: "You are a helpful assistant",
    seed: 0)
    .RegisterMessageConnector()
    .RegisterPrintMessage();

var response = await agent.SendAsync(
    "Can you write a piece of C# code to calculate 100th Fibonacci?");

Streaming Responses

Stream responses token-by-token for real-time output:
using AutoGen.Core;

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4"),
    name: "assistant")
    .RegisterMessageConnector();

var messages = new[]
{
    new TextMessage(Role.User, "Write a long story about a robot")
};

await foreach (var message in agent.GenerateStreamingReplyAsync(messages))
{
    if (message.GetContent() is string content)
    {
        Console.Write(content);
    }
}

JSON Mode

Force responses in JSON format:
using OpenAI.Chat;

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4"),
    name: "assistant",
    systemMessage: "You output valid JSON only",
    responseFormat: ChatResponseFormat.CreateJsonObjectFormat())
    .RegisterMessageConnector();

var response = await agent.SendAsync(@"
    Create a JSON object for a person with name, age, and hobbies.
");

Console.WriteLine(response.GetContent());
// Example output: {"name": "John", "age": 30, "hobbies": ["reading", "gaming"]}

Structured Output

Use JSON schema for strongly-typed responses:
using System.Text.Json;
using System.Text.Json.Serialization;
using OpenAI.Chat;

// Define your schema
public class Person
{
    [JsonPropertyName("name")]
    public string Name { get; set; }

    [JsonPropertyName("age")]
    public int Age { get; set; }

    [JsonPropertyName("email")]
    public string Email { get; set; }
}

var jsonSchema = JsonSerializer.Serialize(new
{
    type = "object",
    properties = new
    {
        name = new { type = "string" },
        age = new { type = "integer" },
        email = new { type = "string" }
    },
    required = new[] { "name", "age", "email" },
    additionalProperties = false
});

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4"),
    name: "assistant",
    systemMessage: "Extract person information as JSON",
    responseFormat: ChatResponseFormat.CreateJsonSchemaFormat(
        "person",
        BinaryData.FromString(jsonSchema)))
    .RegisterMessageConnector();

var response = await agent.SendAsync(
    "John Doe is 30 years old. His email is [email protected]");

var person = JsonSerializer.Deserialize<Person>(response.GetContent());
Console.WriteLine($"Name: {person.Name}, Age: {person.Age}");

Function Calling

Combine with AutoGen’s function calling:
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;

// The [Function] attribute plus the AutoGen.SourceGenerator package generate
// GetWeatherFunctionContract and GetWeatherWrapper used below.

public partial class WeatherFunctions
{
    /// <summary>
    /// Get current weather
    /// </summary>
    /// <param name="location">city name</param>
    [Function]
    public async Task<string> GetWeather(string location)
    {
        return $"Weather in {location}: Sunny, 72°F";
    }
}

var tools = new WeatherFunctions();
var gpt4 = openAIClient.GetChatClient("gpt-4");

var functionCallMiddleware = new FunctionCallMiddleware(
    functions: [tools.GetWeatherFunctionContract],
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        { nameof(tools.GetWeather), tools.GetWeatherWrapper }
    });

var agent = new OpenAIChatAgent(
    chatClient: gpt4,
    name: "assistant")
    .RegisterMessageConnector()
    .RegisterStreamingMiddleware(functionCallMiddleware)
    .RegisterPrintMessage();

var response = await agent.SendAsync("What's the weather in Seattle?");
Console.WriteLine(response.GetContent());
// Example output: The weather in Seattle is sunny with a temperature of 72°F.

Vision (GPT-4 Vision)

Process images with GPT-4 Vision models:
using AutoGen.Core;

var agent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4-vision-preview"),
    name: "vision_assistant")
    .RegisterMessageConnector();

// Create a multimodal message
var imageMessage = new ImageMessage(
    Role.User,
    "https://example.com/image.jpg",
    from: "user");

var textMessage = new TextMessage(
    Role.User,
    "What's in this image?",
    from: "user");

var response = await agent.SendAsync(
    new[] { imageMessage, textMessage });

Console.WriteLine(response.GetContent());

Message Connector

The message connector converts between AutoGen and OpenAI message formats:
using AutoGen.OpenAI.Extension;

// Register message connector to handle AutoGen message types
var agent = new OpenAIChatAgent(/*...*/)
    .RegisterMessageConnector();  // Required for AutoGen messages

// Now supports:
// - TextMessage
// - ImageMessage  
// - ToolCallMessage
// - ToolCallResultMessage
// - ToolCallAggregateMessage

Configuration Options

ConversableAgentConfig (Legacy)

For agents using the older configuration style:
using AutoGen;
using AutoGen.OpenAI;

var openAIConfig = new OpenAIConfig(apiKey, "gpt-4");

var agent = new AssistantAgent(
    name: "assistant",
    systemMessage: "You are helpful",
    llmConfig: new ConversableAgentConfig
    {
        Temperature = 0,
        MaxToken = 2000,
        ConfigList = [openAIConfig],
        TimeoutInSeconds = 60,
        StopSequence = ["END"]
    });

Connecting to Ollama

Use OpenAI-compatible endpoints:
using System.ClientModel;
using OpenAI;

// Point to Ollama's OpenAI-compatible endpoint
var ollamaClient = new OpenAIClient(
    new ApiKeyCredential("not-used"),
    new OpenAIClientOptions
    {
        Endpoint = new Uri("http://localhost:11434/v1")
    });

var agent = new OpenAIChatAgent(
    chatClient: ollamaClient.GetChatClient("llama2"),
    name: "assistant")
    .RegisterMessageConnector();

var response = await agent.SendAsync("Hello!");

Best Practices

Model selection:

  • GPT-4o / GPT-4 Turbo: best for complex reasoning and function calling
  • GPT-4o Mini: fast and cost-effective for simple tasks
  • GPT-3.5 Turbo: budget-friendly for high-volume applications
  • O1 models: advanced reasoning for complex problems

Cost optimization:
// Use cheaper models for simple tasks
var simpleAgent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-3.5-turbo"),
    name: "simple_assistant",
    maxTokens: 500,  // Limit response length
    temperature: 0)  // Faster, more deterministic
    .RegisterMessageConnector();

// Reserve GPT-4 for complex reasoning
var expertAgent = new OpenAIChatAgent(
    chatClient: openAIClient.GetChatClient("gpt-4-turbo"),
    name: "expert_assistant")
    .RegisterMessageConnector();
Handle transient API failures (the OpenAI .NET SDK surfaces API errors as ClientResultException from System.ClientModel):
try
{
    var response = await agent.SendAsync(message);
}
catch (ClientResultException ex) when (ex.Status == 429)
{
    // Rate limited - back off, then retry
    await Task.Delay(TimeSpan.FromSeconds(5));
    // Retry
}
catch (ClientResultException ex) when (ex.Status >= 500)
{
    // Server error - log and consider retrying or falling back to another model
    Console.WriteLine($"Server error: {ex.Message}");
}
Performance tips:

  • Reuse OpenAIClient instances
  • Use streaming for long responses
  • Set an appropriate maxTokens to control costs
  • Cache responses when possible
  • Use the seed parameter for reproducible outputs
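Caching responses can be as simple as memoizing replies by prompt before invoking the agent. A minimal sketch with no AutoGen dependency (the ResponseCache type and its shape are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative memoizing wrapper: returns a cached reply for a repeated
// prompt instead of invoking the (expensive) completion delegate again.
sealed class ResponseCache
{
    private readonly ConcurrentDictionary<string, string> _cache = new();

    public async Task<string> GetOrAddAsync(string prompt, Func<string, Task<string>> complete)
    {
        if (_cache.TryGetValue(prompt, out var cached))
            return cached;

        var reply = await complete(prompt);
        _cache[prompt] = reply;
        return reply;
    }
}
```

Note this only helps for deterministic settings (e.g., temperature 0 with a fixed seed); with sampling enabled, identical prompts legitimately produce different replies.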

Environment Variables

OPENAI_API_KEY (string, required)
    Your OpenAI API key from platform.openai.com.

AZURE_OPENAI_API_KEY (string)
    Your Azure OpenAI API key.

AZURE_OPENAI_ENDPOINT (string)
    Your Azure OpenAI endpoint URL.

AZURE_OPENAI_DEPLOY_NAME (string)
    Your Azure OpenAI deployment name.
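A missing variable otherwise surfaces as a null apiKey deep inside the client, so it can help to validate configuration up front. A minimal sketch (the EnvConfig helper is illustrative, not part of AutoGen):

```csharp
using System;

static class EnvConfig
{
    // Returns the variable's value, or throws with a clear message if unset.
    public static string Require(string name) =>
        Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException($"Environment variable '{name}' is not set.");
}
```

For example, `var apiKey = EnvConfig.Require("OPENAI_API_KEY");` fails fast at startup rather than at the first API call.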

Next Steps

  • Anthropic: use Claude models with AutoGen
  • Function Calling: add tools to OpenAI agents
  • Group Chat: create multi-agent workflows
  • Examples: see complete examples
