Overview

NeuraTrade supports multiple AI providers with automatic failover for autonomous trading decisions. The platform uses a provider chain that tries providers in order until one succeeds.

Supported Providers
Anthropic
Claude models (recommended)
OpenAI
GPT-4 and GPT-3.5 models
Zhipu
Chinese AI provider (GLM models)
MiniMax
Alternative Chinese provider
MLX
Local inference (Apple Silicon)
Provider Configuration
Primary Provider
Set your primary AI provider in ~/.neuratrade/config.json:
config.json
Primary AI provider name. Options: anthropic, openai, zhipu, minimax, mlx.

API key for the primary provider.

Custom base URL for the provider API (optional). Defaults:

- Anthropic: https://api.anthropic.com/v1
- OpenAI: https://api.openai.com/v1
- Zhipu: https://open.bigmodel.cn/api/coding/paas/v4
- MiniMax: https://api.minimax.chat/v1
- MLX: http://localhost:8080/v1
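The fields above can be combined into a single file. A minimal sketch, assuming the key names provider, api_key, and base_url (the actual schema may differ):

```json
{
  "provider": "anthropic",
  "api_key": "sk-ant-...",
  "base_url": "https://api.anthropic.com/v1"
}
```

Since base_url is optional, it can be omitted to use the provider's default endpoint.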
Failover Chain Configuration
Environment Variable
Comma-separated list of fallback providers. The primary provider is automatically added first, followed by the chain.
Chain Behavior
The failover chain tries providers in order:

- Primary Provider (from config.json)
- Failover Providers (from NEURATRADE_AI_PROVIDER_CHAIN)
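For example, a chain of fallbacks can be listed in the environment (provider names as in the Supported Providers section):

```
NEURATRADE_AI_PROVIDER_CHAIN=openai,zhipu,mlx
```

With anthropic as the primary in config.json, the effective order is anthropic, then openai, then zhipu, then mlx.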
Max Failover Hops
Maximum number of failover attempts.
- 0: No failover (primary only)
- 1: Try primary + 1 fallback
- 2: Try primary + 2 fallbacks
- -1: Try all providers in chain
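As an illustration only (the variable name below is a guess modeled on NEURATRADE_AI_PROVIDER_CHAIN and is not confirmed by this document):

```
# Hypothetical name; verify against your NeuraTrade version
NEURATRADE_AI_MAX_FAILOVER_HOPS=2
```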
Provider-Specific Configuration
Anthropic (Claude)
- claude-3-5-sonnet-20241022 (recommended)
- claude-3-opus-20240229
- claude-3-haiku-20240307
OpenAI
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-3.5-turbo-1106
Zhipu (GLM)
- glm-4
- glm-3-turbo
MiniMax
MiniMax exposes an Anthropic-compatible API endpoint.
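Because the endpoint is Anthropic-compatible, pointing NeuraTrade at MiniMax is mostly a matter of swapping the provider name and base URL. A sketch, assuming the key names provider, api_key, and base_url:

```json
{
  "provider": "minimax",
  "api_key": "your-minimax-key",
  "base_url": "https://api.minimax.chat/v1"
}
```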
MLX (Local Inference)
Setting up MLX Local Inference
MLX is for local inference on Apple Silicon Macs:
- Install MLX: pip install mlx-lm
- Download a model: mlx_lm.download --model mistralai/Mistral-7B-v0.1
- Start server: mlx_lm.server --model mistralai/Mistral-7B-v0.1 --port 8080
- Configure NeuraTrade to use the mlx provider
MLX does not require an API key. It runs entirely on your local machine.
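A local-only setup therefore needs no key, just the provider name and (optionally) the server URL. A sketch, assuming the key names provider and base_url:

```json
{
  "provider": "mlx",
  "base_url": "http://localhost:8080/v1"
}
```

The base_url here matches the default MLX endpoint, so it can be omitted unless the server runs on a different port.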
Advanced Configuration
Request Timeout
HTTP timeout for AI provider requests in seconds.
Retry Configuration
Maximum number of retries for failed requests.
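A sketch combining both settings, with assumed key names request_timeout and max_retries (the real schema may differ):

```json
{
  "request_timeout": 30,
  "max_retries": 3
}
```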
Model Override
Override models for specific providers:

Failover Example
Configuration
.env
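The original example values were not preserved here; a plausible .env for this scenario, assuming anthropic is the primary provider in config.json:

```
NEURATRADE_AI_PROVIDER_CHAIN=openai,mlx
```

With this setting, a request goes to Anthropic first; on a retryable error it fails over to OpenAI, and finally to the local MLX server.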
Execution Flow
Cost Tracking
NeuraTrade tracks AI costs per request:

Budget Limits
Maximum daily AI spending in USD.
Maximum monthly AI spending in USD.
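Illustration only (the variable names below are guesses following the NEURATRADE_ prefix convention and are not confirmed by this document):

```
# Hypothetical names; verify against your NeuraTrade version
NEURATRADE_AI_DAILY_BUDGET_USD=10
NEURATRADE_AI_MONTHLY_BUDGET_USD=200
```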
Error Handling
Retryable Errors
- Rate limiting (HTTP 429)
- Timeout errors
- Server errors (HTTP 5xx)
- Network connection errors
Non-Retryable Errors
- Invalid API key (HTTP 401)
- Context length exceeded
- Content filtered
- Invalid request format
Monitoring
Check AI Status
Response
Budget Status
Response
Security Best Practices
- Rotate Keys: Change API keys quarterly
- Limit Budgets: Set conservative daily/monthly limits
- Monitor Usage: Track spending on provider dashboards
- Separate Keys: Use different keys for dev/staging/prod
- Environment Isolation: Never use production keys in development
Troubleshooting
Provider Not Responding
Budget Exceeded
Failover Not Working