Chapi Assistant supports multiple AI providers with automatic fallback. Configure your preferred provider or let the system choose based on availability.

Supported Providers

Gemini

Google’s Gemini models with multi-model fallback

OpenAI

GPT-4 and other OpenAI models

Claude

Anthropic’s Claude models

Configuration

AI provider settings are stored in user.api.settings.json in your application data folder: %AppData%/Chapi/user.api.settings.json

Settings Structure

GeminiApiKey
string
default:""
Your Google AI API key for Gemini models. Get one at Google AI Studio.
OpenAiApiKey
string
default:""
Your OpenAI API key. Get one at OpenAI Platform.
ClaudeApiKey
string
default:""
Your Anthropic API key for Claude. Get one at Anthropic Console.
PreferredAiProvider
string
default:"Gemini"
Your preferred AI provider. Options:
  • Gemini: Google’s Gemini models
  • OpenAI: OpenAI GPT models
  • Claude: Anthropic Claude models

Example Configuration

user.api.settings.json
{
  "GeminiApiKey": "AIzaSyD...",
  "OpenAiApiKey": "sk-proj-...",
  "ClaudeApiKey": "sk-ant-...",
  "PreferredAiProvider": "Gemini",
  "ProxyEnabled": false,
  "ProxyUrl": "",
  "ProxyUser": "",
  "ProxyPass": ""
}

Provider Selection Logic

The system uses a smart fallback mechanism to ensure AI features always work:
1. Preferred Provider

Attempts to use the provider specified in PreferredAiProvider if its API key is configured.

2. Fallback Order

If the preferred provider is unavailable, tries providers in this order:
  1. Gemini
  2. OpenAI
  3. Claude

3. Error Handling

If no provider is configured, throws an error prompting you to configure at least one.
App.xaml.cs (lines 95-123)
services.AddTransient<IChatClient>(sp => 
{
    var settings = UserSettingsService.LoadSettings();
    
    // 1. Try preferred provider
    if (settings.PreferredAiProvider == "OpenAI" && !string.IsNullOrWhiteSpace(settings.OpenAiApiKey))
        return new OpenAiChatClient(settings.OpenAiApiKey);
    
    if (settings.PreferredAiProvider == "Claude" && !string.IsNullOrWhiteSpace(settings.ClaudeApiKey))
        return new ClaudeChatClient(settings.ClaudeApiKey);
    
    if ((settings.PreferredAiProvider == "Gemini" || string.IsNullOrEmpty(settings.PreferredAiProvider)) 
        && !string.IsNullOrWhiteSpace(settings.GeminiApiKey))
        return new GeminiChatClient(settings.GeminiApiKey);

    // 2. Fallback: Try any available (Gemini > OpenAI > Claude)
    if (!string.IsNullOrWhiteSpace(settings.GeminiApiKey))
        return new GeminiChatClient(settings.GeminiApiKey);

    if (!string.IsNullOrWhiteSpace(settings.OpenAiApiKey))
        return new OpenAiChatClient(settings.OpenAiApiKey);
    
    if (!string.IsNullOrWhiteSpace(settings.ClaudeApiKey))
        return new ClaudeChatClient(settings.ClaudeApiKey);

    throw new InvalidOperationException("No AI provider configured");
});

Gemini Configuration

Supported Models

Gemini uses multi-model fallback for reliability:
  1. gemini-3.0-flash (primary)
  2. gemini-2.5-flash (fallback)
  3. gemma-3 (fallback)
apiKey
string
required
Google AI API key from Google AI Studio

Features

Streaming Support

Uses streaming API to avoid HTTP/2 connection hanging

Timeout Protection

35-second timeout per model attempt

Auto-Retry

Automatically tries next model on failure

Quota Handling

Detects and reports quota limit errors

Usage Example

GeminiChatClient.cs
public GeminiChatClient(string apiKey)
{
    _apiKey = apiKey;
}

public async Task<ChatResponse> GetResponseAsync(
    IEnumerable<ChatMessage> chatMessages, 
    ChatOptions? options = null, 
    CancellationToken cancellationToken = default)
{
    var prompt = BuildPrompt(chatMessages);
    var lastError = string.Empty;

    // Try each model with timeout
    foreach (var modelId in _models)
    {
        try
        {
            using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
            cts.CancelAfter(TimeSpan.FromSeconds(35));

            var googleAI = new GoogleAI(apiKey: _apiKey);
            var model = googleAI.GenerativeModel(model: modelId);

            var fullResponse = new StringBuilder();
            await foreach (var chunk in model.GenerateContentStream(prompt, cancellationToken: cts.Token))
            {
                if (chunk.Text != null)
                    fullResponse.Append(chunk.Text);
            }

            var text = CleanResponse(fullResponse.ToString());
            if (!string.IsNullOrWhiteSpace(text))
                return new ChatResponse(new[] { new ChatMessage(ChatRole.Assistant, text) });
        }
        catch (Exception ex)
        {
            lastError = HandleError(ex, modelId);
        }
    }

    throw new Exception($"All Gemini models failed. Last error: {lastError}");
}

OpenAI Configuration

Supported Models

modelId
string
default:"gpt-4o"
The OpenAI model to use. Can be customized when instantiating the client.
apiKey
string
required
OpenAI API key from OpenAI Platform

Features

  • Standard Chat Completions: Uses OpenAI’s /v1/chat/completions endpoint
  • Bearer Authentication: API key passed via Authorization header
  • Flexible Model Selection: Defaults to gpt-4o; can be customized when instantiating the client

Usage Example

OpenAiChatClient.cs
public OpenAiChatClient(string apiKey, string modelId = "gpt-4o")
{
    _apiKey = apiKey;
    _modelId = modelId;
    _httpClient = new HttpClient();
    _httpClient.DefaultRequestHeaders.Authorization = 
        new AuthenticationHeaderValue("Bearer", _apiKey);
}

public async Task<ChatResponse> GetResponseAsync(
    IEnumerable<ChatMessage> chatMessages, 
    ChatOptions? options = null, 
    CancellationToken cancellationToken = default)
{
    var messages = chatMessages.Select(m => new 
    { 
        role = m.Role.Value.ToLower(), 
        content = m.Text 
    }).ToList();

    var requestBody = new
    {
        model = _modelId,
        messages = messages
    };

    var response = await _httpClient.PostAsJsonAsync(
        "https://api.openai.com/v1/chat/completions", 
        requestBody, 
        cancellationToken);
        
    response.EnsureSuccessStatusCode();

    var jsonResponse = await response.Content.ReadFromJsonAsync<OpenAiResponse>(
        cancellationToken: cancellationToken);
        
    var content = jsonResponse?.Choices?.FirstOrDefault()?.Message?.Content ?? string.Empty;

    return new ChatResponse(new[] { new ChatMessage(ChatRole.Assistant, content) });
}

Claude Configuration

Supported Models

modelId
string
default:"claude-3-opus-20240229"
The Claude model to use. Can be customized when instantiating the client.
apiKey
string
required
Anthropic API key from Anthropic Console

Features

  • Messages API: Uses Anthropic’s /v1/messages endpoint
  • API Version: Fixed to 2023-06-01
  • Token Limit: Default max_tokens set to 1024

Usage Example

ClaudeChatClient.cs
public ClaudeChatClient(string apiKey, string modelId = "claude-3-opus-20240229")
{
    _apiKey = apiKey;
    _modelId = modelId;
    _httpClient = new HttpClient();
    _httpClient.DefaultRequestHeaders.Add("x-api-key", _apiKey);
    _httpClient.DefaultRequestHeaders.Add("anthropic-version", "2023-06-01");
}

public async Task<ChatResponse> GetResponseAsync(
    IEnumerable<ChatMessage> chatMessages, 
    ChatOptions? options = null, 
    CancellationToken cancellationToken = default)
{
    var messages = chatMessages.Select(m => new 
    { 
        role = m.Role.Value.ToLower(), 
        content = m.Text 
    }).ToList();

    var requestBody = new
    {
        model = _modelId,
        messages = messages,
        max_tokens = 1024
    };

    var response = await _httpClient.PostAsJsonAsync(
        "https://api.anthropic.com/v1/messages", 
        requestBody, 
        cancellationToken);
        
    response.EnsureSuccessStatusCode();

    var jsonResponse = await response.Content.ReadFromJsonAsync<ClaudeResponse>(
        cancellationToken: cancellationToken);
        
    var content = jsonResponse?.Content?.FirstOrDefault()?.Text ?? string.Empty;

    return new ChatResponse(new[] { new ChatMessage(ChatRole.Assistant, content) });
}

Proxy Configuration

If you’re behind a corporate proxy, configure these settings:
ProxyEnabled
boolean
default:false
Enable proxy for AI provider connections
ProxyUrl
string
default:""
Proxy server URL (e.g., http://proxy.company.com:8080)
ProxyUser
string
default:""
Proxy authentication username (if required)
ProxyPass
string
default:""
Proxy authentication password (if required)
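
For example, a configuration that routes AI traffic through an authenticated corporate proxy might look like this (the proxy host, port, and credentials below are placeholders):

```json
{
  "GeminiApiKey": "AIzaSyD...",
  "OpenAiApiKey": "",
  "ClaudeApiKey": "",
  "PreferredAiProvider": "Gemini",
  "ProxyEnabled": true,
  "ProxyUrl": "http://proxy.company.com:8080",
  "ProxyUser": "jdoe",
  "ProxyPass": "secret"
}
```

If your proxy does not require authentication, leave ProxyUser and ProxyPass empty.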

Getting API Keys

1. Gemini (Google AI)

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Click “Create API Key”
  4. Copy the generated key

2. OpenAI

  1. Visit OpenAI Platform
  2. Sign in or create an account
  3. Click “Create new secret key”
  4. Copy the key (it won’t be shown again)

3. Claude (Anthropic)

  1. Visit Anthropic Console
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Create and copy your API key
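
Once you have a key, adding it to user.api.settings.json is enough to enable that provider. A minimal Gemini-only configuration might look like this (the key value is a placeholder, and this sketch assumes omitted fields fall back to their documented defaults):

```json
{
  "GeminiApiKey": "AIzaSyD...",
  "PreferredAiProvider": "Gemini"
}
```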

Troubleshooting

Error: InvalidOperationException: No AI provider configured
Solution:
  • Add at least one API key to user.api.settings.json
  • Ensure the key is not empty or whitespace
  • Restart the application after updating settings

Error: ⚠️ AI model quota limit reached
Solution:
  • Check your usage at Google AI Studio
  • Wait for quota reset or upgrade your plan
  • Configure an alternative provider (OpenAI/Claude)

Error: All Gemini models failed
Solution:
  • Verify your API key is valid
  • Check network connectivity
  • Review the last error message for specific issues
  • Try alternative providers
Error: Invalid or rejected API key
Solution:
  • Regenerate the API key from the provider’s console
  • Update user.api.settings.json with the new key
  • Ensure there are no extra spaces or characters in the key

Error: Connection or timeout failures
Solution:
  • Check your internet connection
  • Configure proxy settings if behind a firewall
  • Increase the timeout if possible (Gemini uses 35 seconds per model attempt)
Keep your API keys secure. Never commit user.api.settings.json to version control or share your keys publicly.

Source Code Reference

  • Gemini Client: ~/workspace/source/Chapi/Infrastructure/AI/GeminiChatClient.cs
  • OpenAI Client: ~/workspace/source/Chapi/Infrastructure/AI/OpenAiChatClient.cs
  • Claude Client: ~/workspace/source/Chapi/Infrastructure/AI/ClaudeChatClient.cs
  • Settings Model: ~/workspace/source/Chapi/Infrastructure/Persistence/Settings/UserSettings.cs
  • Provider Selection: ~/workspace/source/Chapi/App.xaml.cs:95-123
