LLM Providers connect Support Bot to AI model services like OpenAI, Anthropic, Google, or custom endpoints. Configure multiple providers and test connections before deploying to production.

Supported Providers

Support Bot supports multiple LLM provider types:

OpenAI

Connect to GPT-4, GPT-3.5, and other OpenAI models.

Anthropic

Use Claude models (Claude 3 Opus, Sonnet, Haiku).

Google

Access Gemini and PaLM models.

Custom

Connect to self-hosted or custom API endpoints.

Adding a Provider

1. Navigate to AI/ML Settings: From the sidebar, go to Settings > AI / ML Configuration.

2. Click Add Provider: In the LLM Providers section, click the Add Provider button.
3. Enter Provider Details: Fill in the provider configuration.
   • Name: A friendly name for this provider (e.g., “Production OpenAI”, “Staging Claude”).
   • Provider Type: Select from OpenAI, Anthropic, Google, or Custom.
   • Base URL: The API endpoint:
     • OpenAI: https://api.openai.com/v1
     • Anthropic: https://api.anthropic.com/v1
     • Google: https://generativelanguage.googleapis.com/v1
     • Custom: your self-hosted endpoint
   • API Key: Your authentication key for the provider. The key is encrypted before storage. Never share API keys or commit them to version control.
   • Models: Optionally specify which models to make available. Leave empty to use defaults for the provider type. Examples:
     • OpenAI: gpt-4, gpt-3.5-turbo
     • Anthropic: claude-3-opus-20240229, claude-3-sonnet-20240229
     • Google: gemini-pro, gemini-pro-vision
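Put together, a saved provider entry might look like the following. This is an illustrative sketch only; the field names mirror the form above, but the actual storage format is an assumption:

```json
{
  "name": "Production OpenAI",
  "provider_type": "openai",
  "base_url": "https://api.openai.com/v1",
  "api_key": "<stored encrypted, never shown in the UI>",
  "models": ["gpt-4", "gpt-3.5-turbo"]
}
```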
4. Discover Models (optional): Click Discover Models to automatically fetch the list of available models from the provider’s API. This requires valid credentials and makes a live API call to the provider.
5. Test Connection: Click Test Connection to verify that:
   • the API endpoint is reachable
   • the credentials are valid
   • the provider is responding correctly
   Response time is displayed on a successful connection.
6. Save: Click Save to add the provider to your configuration.

Managing Providers

View All Providers

The LLM Providers section displays all configured providers with:
  • Name and provider type
  • Status indicator (active/inactive)
  • Health check status (last test result)
  • API key status (configured/not configured)
  • Available models count

Edit Provider

1. Click Edit: Click the edit icon on the provider card.

2. Update Settings: Modify any provider details:
   • Name
   • Base URL
   • API Key (only if you need to change it)
   • Models list
3. Test Connection: After making changes, test the connection to ensure it still works.

4. Save Changes: Click Save to apply your updates.
API keys are never displayed in the UI. You’ll only see whether a key is configured.

Test Provider Connection

Regularly test your providers to ensure they’re operational:
1. Click Test: On the provider card, click Test Connection.

2. Review Results: The system will:
   • attempt to connect to the provider
   • verify credentials
   • measure response time
   Results are displayed in a toast notification and saved to the provider’s health check history.

Health Check Status

Each provider shows its last health check:
Field               Type        Description
success             boolean     Whether the last connection test succeeded
response_time_ms    number      Response time in milliseconds (lower is better)
last_checked        timestamp   When the test was last run
error               string      Error message if the test failed

Delete Provider

Deleting a provider removes it permanently. If models from this provider are currently selected in AI/ML settings, they’ll stop working.
1. Click Delete: Click the delete icon on the provider card.

2. Confirm Deletion: A confirmation dialog will appear.

3. Confirm: Click Delete to permanently remove the provider.

Model Discovery

Support Bot can automatically discover available models from your providers.

Discover from Existing Provider

1. Open Provider: Click edit on an existing provider.

2. Click Discover: Click the Discover Models button.

3. Review Results: The system fetches the model list from the provider’s API and displays how many models were found.

4. Save: Save the provider to use the discovered models.
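For OpenAI-compatible APIs, discovery is typically a GET to the /models endpoint, which returns a JSON object whose `data` array holds one entry per model. A minimal sketch of extracting model IDs from such a response (the helper name is illustrative, not part of Support Bot):

```python
def parse_discovered_models(payload: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /models response."""
    return [entry["id"] for entry in payload.get("data", [])]


# Payload in the shape the OpenAI list-models endpoint returns:
sample = {
    "object": "list",
    "data": [
        {"id": "gpt-4", "object": "model"},
        {"id": "gpt-3.5-turbo", "object": "model"},
    ],
}
print(parse_discovered_models(sample))  # → ['gpt-4', 'gpt-3.5-turbo']
```

Other provider types return different shapes, so a real discovery routine would branch on the provider type before parsing.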

Discover Before Saving

When creating a new provider, you can discover models before saving:
1. Enter Credentials: Fill in the provider type, base URL, and API key.

2. Click Discover: Click Discover Models to test the credentials and fetch available models.

3. Review and Save: If discovery succeeds, the models field is populated. Save the provider.
Model discovery is optional. If you leave the models field empty, Support Bot will use default models for that provider type.

Using Providers in Production

Once providers are configured, they become available in the Model & Generation settings:
1. Select Model: In AI / ML Configuration > Model & Generation, open the model dropdown.

2. Choose Provider and Model: Models from all active providers are shown. Select the one you want to use.

3. Save Configuration: Click Save to apply the model selection.
The model dropdown shows models in the format: [Provider] Model Name

Required Permissions

Action            Permission Required
View providers    llm_provider.view
Add provider      llm_provider.create
Edit provider     llm_provider.edit
Delete provider   llm_provider.delete
Test connection   llm_provider.test

Security Best Practices

Rotate Keys Regularly

Update API keys on a regular schedule (e.g., every 90 days).

Use Separate Keys

Use different API keys for development, staging, and production environments.

Monitor Usage

Track provider usage and costs through the provider’s dashboard.

Test Before Deploy

Always test connections after adding or updating a provider.

Troubleshooting

Connection Test Fails

  • API key problems: verify the key is correct, check that it hasn’t expired, and ensure it has the appropriate permissions.
  • Connectivity problems: verify the base URL is correct, check that firewall rules allow outbound HTTPS, and confirm the provider’s API is operational.
  • Rate limiting: wait a few minutes and try again, check your provider’s rate limit settings, or consider using a higher-tier API key.

Models Not Appearing

  1. Ensure the provider is marked as active
  2. Check that the provider has a valid API key
  3. Verify the provider’s health check status is successful
  4. Try running model discovery again

Response Time Issues

1. Test Connection: Run a connection test to measure the current response time.

2. Check Provider Status: Visit the provider’s status page (e.g., status.openai.com) to see if there are known issues.

3. Consider Alternatives: If one provider is slow, configure an alternative provider as a backup.
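The backup idea in step 3 can be sketched as a simple failover loop: probe each configured provider in priority order and use the first healthy one. The names and the probe callback below are illustrative assumptions, not Support Bot APIs:

```python
from typing import Callable


def pick_healthy_provider(providers: list[str],
                          probe: Callable[[str], bool]) -> str:
    """Return the first provider whose health probe succeeds."""
    for name in providers:
        if probe(name):
            return name
    raise RuntimeError("no healthy provider available")


# Usage: prefer the primary, fall back to the backup when its probe fails.
order = ["Production OpenAI", "Backup Claude"]
print(pick_healthy_provider(order, probe=lambda name: name != "Production OpenAI"))
# → Backup Claude
```

In practice the probe would be a real connection test, and its result could be cached for a short interval to avoid probing on every request.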

API Reference

For programmatic provider management, see the LLM Providers API documentation.
