The ai command manages AI provider integrations and model availability.
List AI Models
List all available AI models:
Example Output
Example Output (Backend Unreachable)
Model Capabilities
Each model shows its supported capabilities:
- tools - Function calling / tool use
- vision - Image understanding
How It Works
- Calls /api/v1/ai/models on the backend API
- Backend queries configured AI providers
- Returns a list of available models with capabilities
- CLI formats and displays the results
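The display step can be sketched in Python. The payload shape below is an assumption for illustration, not the documented API schema; only the supports_tools and supports_vision flag names appear elsewhere on this page.

```python
# Sketch of the CLI side of the flow: parse the backend's JSON
# response and format one line per model with its capability flags.
# SAMPLE_RESPONSE stands in for the body of GET /api/v1/ai/models.
import json

SAMPLE_RESPONSE = json.dumps({
    "models": [
        {"id": "gpt-4-turbo", "provider": "openai",
         "supports_tools": True, "supports_vision": True},
        {"id": "mistral-small-latest", "provider": "mistral",
         "supports_tools": True, "supports_vision": False},
    ]
})

def format_models(raw: str) -> str:
    """Render one line per model: id, provider, capability list."""
    rows = []
    for m in json.loads(raw)["models"]:
        caps = [name for name, flag in
                [("tools", m["supports_tools"]),
                 ("vision", m["supports_vision"])]
                if flag]
        rows.append(f"{m['id']:<24} {m['provider']:<10} {', '.join(caps)}")
    return "\n".join(rows)

print(format_models(SAMPLE_RESPONSE))
```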
List AI Providers
List configured AI providers:
Example Output
Provider Status
- active - Provider configured with valid API key
- inactive - Provider not configured or API key missing
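The status rule can be sketched as a small check. This assumes a provider counts as configured when a non-empty API key is present; the real backend may additionally validate the key against the provider.

```python
# Derive a provider's status from its configuration entry.
# Assumption: "active" iff a non-empty API key is configured.
def provider_status(provider_config: dict) -> str:
    api_key = provider_config.get("api_key") or ""
    return "active" if api_key.strip() else "inactive"
```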
Example Output (Backend Unreachable)
Configuration
In config.json
Configure AI providers in the configuration file:
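A minimal sketch of what such a config.json stanza might look like. The key names (default_provider, providers, api_key, default_model) are assumptions for illustration, not the tool's documented schema; keep real keys out of version control.

```json
{
  "ai": {
    "default_provider": "openai",
    "providers": {
      "openai": {
        "api_key": "sk-...",
        "default_model": "gpt-4-turbo"
      },
      "anthropic": {
        "api_key": "sk-ant-...",
        "default_model": "claude-3-sonnet-20240229"
      }
    }
  }
}
```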
Environment Variables
Override config with environment variables:

| Variable | Description | Default |
|---|---|---|
| AI_PROVIDER | Default AI provider | From config |
| AI_API_KEY | Provider API key | From config |
| AI_BASE_URL | Provider API endpoint | Provider default |
| AI_MODEL | Default model | From config |
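The precedence (environment variable first, then config file) can be sketched as follows; the mapping of env vars onto config keys is an assumption for illustration.

```python
# Resolve a setting with env-var override, falling back to config.json.
import os

ENV_MAP = {
    "provider": "AI_PROVIDER",
    "api_key": "AI_API_KEY",
    "base_url": "AI_BASE_URL",
    "model": "AI_MODEL",
}

def resolve(setting: str, config: dict):
    """Environment variable wins; otherwise use the config value."""
    return os.environ.get(ENV_MAP[setting]) or config.get(setting)
```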
Multiple Providers
Configure multiple providers for fallback or different use cases. The backend supports:
- OpenAI - GPT models
- Anthropic - Claude models
- Google AI - Gemini models
- Mistral AI - Mistral models
- Custom Providers - OpenAI-compatible APIs
API Response Format
Models Endpoint
Providers Endpoint
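The response bodies were not preserved here. As a hedged sketch, the providers endpoint might return something like the following; apart from the active/inactive status values described above, the field names are assumptions:

```json
{
  "providers": [
    { "name": "openai", "status": "active" },
    { "name": "anthropic", "status": "inactive" }
  ]
}
```

The models endpoint presumably returns a list of model records carrying capability flags such as supports_tools and supports_vision.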
Model Selection
NeuraTrade automatically selects models based on task requirements:
- Vision tasks - Uses models with supports_vision: true
- Function calling - Uses models with supports_tools: true
- Cost optimization - Selects cheaper models for simple tasks
- Fallback - Tries alternative providers if primary fails
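The selection rules above can be sketched as a filter-then-sort. This assumes each model record carries the capability flags plus a relative cost field; cost is an illustrative addition, not a documented field.

```python
# Pick the cheapest model that satisfies the task's capability needs.
# Considering all providers' models at once gives cross-provider fallback.
def select_model(models, need_vision=False, need_tools=False):
    candidates = [
        m for m in models
        if (not need_vision or m.get("supports_vision"))
        and (not need_tools or m.get("supports_tools"))
    ]
    # Cost optimization: prefer the cheapest qualifying model.
    return min(candidates, key=lambda m: m["cost"], default=None)

MODELS = [
    {"id": "gpt-4-turbo", "supports_tools": True,
     "supports_vision": True, "cost": 3},
    {"id": "gpt-3.5-turbo", "supports_tools": True,
     "supports_vision": False, "cost": 1},
]
```

For example, a vision task would select gpt-4-turbo here, while a tools-only task would fall through to the cheaper gpt-3.5-turbo.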
Checking Provider Status
Verify AI provider connectivity:
Supported Providers
OpenAI
- gpt-4-turbo
- gpt-4
- gpt-3.5-turbo
Anthropic
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-3-haiku-20240307
Google AI
- gemini-pro
- gemini-pro-vision
Mistral AI
- mistral-large-latest
- mistral-medium-latest
- mistral-small-latest
Custom OpenAI-Compatible Providers
- LM Studio
- LocalAI
- Ollama (with OpenAI compatibility)
- Azure OpenAI
- Any OpenAI-compatible API
Security Best Practices
Key Masking
The CLI and backend automatically mask API keys in logs and output.
File Permissions
Protect config files:
Troubleshooting
“No AI models available”
Ensure at least one provider is configured:
“Could not reach API”
Start the backend:
“Provider inactive”
Check API key configuration:
- Log in to provider dashboard
- Check API key status
- Verify key has required permissions
- Check spending limits and quotas
Rate Limits
If you hit rate limits:
- Configure multiple providers for fallback
- Increase delay between requests
- Upgrade provider plan
- Use cheaper models for non-critical tasks
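One common way to add delay between requests is exponential backoff. The sketch below is a generic pattern, not NeuraTrade's actual retry logic:

```python
# Generic exponential backoff: 1s, 2s, 4s, 8s, ... capped at `cap`.
import time

def backoff_delays(base=1.0, factor=2.0, retries=4, cap=30.0):
    """Delays to sleep between successive retries."""
    return [min(cap, base * factor ** i) for i in range(retries)]

def call_with_backoff(fn, is_rate_limited, retries=4):
    """Call fn(); on a rate-limited result, wait and retry with growing delay."""
    for delay in backoff_delays(retries=retries):
        result = fn()
        if not is_rate_limited(result):
            return result
        time.sleep(delay)
    return fn()  # final attempt after the last delay
```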
Cost Monitoring
Monitor AI provider costs:
- Check provider dashboard
- Set up spending alerts
- Review ai_model fields in logs
- Use cheaper models for testing
NeuraTrade automatically selects cost-effective models based on task complexity. Override with the
AI_MODEL environment variable if needed.