# Other LLM Providers

Beyond Azure OpenAI, OpenAI, and Anthropic, Microsoft Agent Framework supports several additional providers, including AWS Bedrock, Ollama for local models, GitHub Copilot, and more.

## Supported Providers
- **AWS Bedrock** - Access models via Amazon Bedrock
- **Ollama** - Run models locally with Ollama
- **GitHub Copilot** - Use GitHub Copilot models
- **Azure AI Foundry Local** - Local model inference via Foundry
## AWS Bedrock

AWS Bedrock provides access to foundation models from multiple providers through a single API.

### Installation
### Authentication

Bedrock uses AWS credentials, supplied via either:

- Environment Variables
- AWS Profiles
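For the environment-variable route, the standard AWS SDK credential variables apply (placeholder values shown; substitute your own credentials and region):

```shell
# Standard AWS SDK credential variables (placeholders - use your own values)
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"   # pick a region where your chosen model is available
```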
### Basic Usage
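The framework's own Bedrock client isn't shown here, so as an orientation sketch this calls Bedrock directly with boto3's Converse API instead (the function names are illustrative; `converse` and the `bedrock-runtime` client are real boto3 APIs):

```python
import os

def build_messages(prompt: str) -> list:
    """Build a Converse-API message list from a plain prompt."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_bedrock(prompt: str,
                model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Send one prompt to Bedrock via boto3's Converse API and return the reply text."""
    import boto3  # imported lazily so the pure helper above stays dependency-free
    client = boto3.client(
        "bedrock-runtime",
        region_name=os.environ.get("AWS_REGION", "us-east-1"),
    )
    response = client.converse(modelId=model_id, messages=build_messages(prompt))
    return response["output"]["message"]["content"][0]["text"]

# Usage (requires AWS credentials and Bedrock model access in your region):
# print(ask_bedrock("Summarize what Amazon Bedrock is in one sentence."))
```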
### Available Models

Bedrock provides access to models from multiple providers:

| Provider | Model ID | Best For |
|---|---|---|
| Anthropic | anthropic.claude-3-sonnet-20240229-v1:0 | General purpose |
| Anthropic | anthropic.claude-3-haiku-20240307-v1:0 | Speed and cost |
| Anthropic | anthropic.claude-3-opus-20240229-v1:0 | Maximum capability |
| Meta | meta.llama3-70b-instruct-v1:0 | Open source, reasoning |
| Amazon | amazon.titan-text-premier-v1:0 | AWS-native |
| AI21 Labs | ai21.jamba-instruct-v1:0 | Long context |
| Cohere | cohere.command-r-plus-v1:0 | Retrieval, summarization |
| Mistral | mistral.mistral-large-2407-v1:0 | Multilingual |
Model availability varies by AWS region. Check the Bedrock documentation for details.
### Configuration

### Function Calling
## Ollama

Ollama enables running large language models locally on your machine.

### Installation
1. Install Ollama from ollama.com
2. Pull a model: `ollama pull llama3.2`
3. Install the framework package:
### Basic Usage
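As a framework-independent sketch, you can talk to a local Ollama server through its REST API directly (the `/api/generate` endpoint and default port are Ollama's own; the helper names are illustrative):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama server and return the completion text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
# print(generate("llama3.2", "Why is the sky blue?"))
```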
### Configuration

- Environment Variables
- Explicit Configuration
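For the environment-variable route, a minimal resolution sketch (`OLLAMA_HOST` is Ollama's own convention; the framework-side setting name may differ, and the helper is illustrative):

```python
import os

DEFAULT_ENDPOINT = "http://localhost:11434"  # Ollama's default

def resolve_endpoint() -> str:
    """Resolve the Ollama endpoint: the environment variable wins, else the default."""
    return os.environ.get("OLLAMA_HOST", DEFAULT_ENDPOINT)

# Usage:
# os.environ["OLLAMA_HOST"] = "http://192.168.1.10:11434"
# resolve_endpoint()  # now returns the explicit endpoint
```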
### Available Models

Popular models available via Ollama:

| Model | Size | Best For | Function Calling |
|---|---|---|---|
| llama3.2 | 3B | Fast, general purpose | ✅ Limited |
| llama3.1 | 8B/70B | Reasoning, coding | ✅ Limited |
| mistral | 7B | Instruction following | ⚠️ Limited |
| codellama | 7B/13B/34B | Code generation | ❌ |
| phi3 | 3.8B | Small, efficient | ⚠️ Limited |
| gemma2 | 9B/27B | Google’s model | ⚠️ Limited |
| qwen2.5 | 0.5B-72B | Multilingual | ✅ Good |
| deepseek-coder | 6.7B/33B | Code understanding | ❌ |
Install models with `ollama pull <model-name>`. Not all models support function calling - check model capabilities before using tools.

### Multimodal Models

Some Ollama models support vision:

## GitHub Copilot

Use GitHub Copilot models through the Copilot CLI.

### Installation
1. Install the GitHub Copilot CLI
2. Install the framework package:
### Basic Usage

### Configuration

- Environment Variables
- Explicit Configuration
### Available Models

GitHub Copilot provides access to multiple models:

- `gpt-5` - Latest GPT model
- `claude-sonnet-4` - Anthropic Claude
- `o1-preview` - OpenAI reasoning model
- `o3-mini` - Compact reasoning model

Model availability depends on your GitHub Copilot subscription and organization settings.
## Azure AI Foundry Local

Run models locally via Azure AI Foundry for development and testing.

### Installation

### Basic Usage
## Choosing a Provider

Here’s guidance on when to use each provider.

### Use AWS Bedrock when...
- You’re already using AWS infrastructure
- You need access to multiple model providers
- You want managed scaling and availability
- You require AWS compliance features
- You need region-specific deployments
### Use Ollama when...
- You want to run models locally
- You need offline operation
- You’re concerned about data privacy
- You want to avoid API costs
- You’re doing local development
- You need fast iteration without rate limits
### Use GitHub Copilot when...
- You have a GitHub Copilot subscription
- You want access to multiple models through one API
- You’re building developer tools
- You want model selection flexibility
### Use Foundry Local when...
- You’re developing Azure AI Foundry applications
- You need local testing before cloud deployment
- You want to prototype without cloud costs
- You’re working offline or in restricted environments
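The guidance above can be condensed into a small selection helper. This is purely illustrative: the rule set, precedence order, and function name are my own distillation of the bullets, not part of the framework.

```python
def choose_provider(*, local: bool = False, offline: bool = False,
                    on_aws: bool = False, has_copilot: bool = False,
                    foundry_dev: bool = False) -> str:
    """Pick a provider following the guidance above; the first matching rule wins."""
    if foundry_dev:
        return "Azure AI Foundry Local"   # local testing before cloud deployment
    if local or offline:
        return "Ollama"                   # local, offline, private, no API costs
    if on_aws:
        return "AWS Bedrock"              # existing AWS infrastructure, managed scaling
    if has_copilot:
        return "GitHub Copilot"           # one API, multiple models
    return "Ollama"                       # default to free local iteration

# Examples:
# choose_provider(on_aws=True)   selects AWS Bedrock
# choose_provider(offline=True)  selects Ollama
```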
### Provider Comparison
| Feature | Bedrock | Ollama | GitHub Copilot | Foundry Local |
|---|---|---|---|---|
| Cost | $$ | Free (local) | $ (subscription) | Free (local) |
| Internet Required | ✅ | ❌ | ✅ | ❌ |
| Setup Complexity | Medium | Low | Low | Medium |
| Model Selection | Multiple providers | Large catalog | Multiple | Limited |
| Function Calling | ✅ Model dependent | ⚠️ Limited | ✅ | ⚠️ Limited |
| Streaming | ✅ | ✅ | ✅ | ✅ |
| Production Ready | ✅ | ⚠️ Depends | ✅ | ❌ Dev only |
## Best Practices

### AWS Bedrock
- Use IAM roles for authentication in production
- Enable CloudWatch logging for debugging
- Choose region based on data residency requirements
- Monitor costs - different models have different pricing
- Test model availability in your target region
### Ollama
- Ensure sufficient RAM for your chosen model
- Use GPU acceleration when available
- Keep Ollama updated for latest models
- Test model capabilities before production use
- Not all models support function calling
- Consider model size vs. quality tradeoffs
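As a rough aid for the RAM sizing point above: Ollama's published guidance is about 8 GB of RAM for 7B models, 16 GB for 13B, and 32 GB for 33B. The helper below encodes those thresholds; the 70B-class figure is my own extrapolation, not published guidance.

```python
def min_ram_gb(params_billion: float) -> int:
    """Rough minimum system RAM for a quantized local model,
    per Ollama's guidance (8 GB for 7B, 16 GB for 13B, 32 GB for 33B)."""
    if params_billion <= 7:
        return 8
    if params_billion <= 13:
        return 16
    if params_billion <= 33:
        return 32
    return 64  # 70B-class models: extrapolated assumption

# min_ram_gb(3) suggests 8 GB, so llama3.2 fits on most laptops;
# min_ram_gb(70) suggests a workstation-class machine.
```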
### GitHub Copilot
- Verify your organization allows Copilot use
- Check model availability for your subscription
- Monitor token usage
- Implement retry logic for rate limits
- Test fallback to other providers
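The retry and fallback points can be sketched as a generic pattern (this is not a framework API; the exception type and callables are placeholders for whatever your provider client raises):

```python
import time

class RateLimitError(Exception):
    """Placeholder for a provider's rate-limit error type."""

def with_retry(call, *, attempts: int = 3, base_delay: float = 1.0):
    """Retry a callable with exponential backoff on rate-limit errors."""
    for attempt in range(attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == attempts - 1:
                raise  # budget exhausted; let the caller decide
            time.sleep(base_delay * 2 ** attempt)

def with_fallback(primary, fallback):
    """Try the primary provider (with retries); on any failure, fall back."""
    try:
        return with_retry(primary)
    except Exception:
        return fallback()
```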
### Foundry Local
- Only use for development and testing
- Transition to cloud for production
- Test with same models as production
- Monitor resource usage
- Keep dependencies updated
## Troubleshooting

### Bedrock Connection Issues
- Verify AWS credentials are configured correctly
- Check IAM permissions for Bedrock access
- Ensure the model is available in your region
- Verify network connectivity to AWS
- Check CloudWatch logs for detailed errors
### Ollama Not Responding

- Verify Ollama is running: `ollama serve`
- Check if the model is pulled: `ollama list`
- Verify the endpoint URL (default: `http://localhost:11434`)
- Check system resources (RAM, GPU)
- Review Ollama logs for errors
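A quick programmatic health check helps distinguish "server not running" from model-level problems (stdlib-only sketch; the endpoint is Ollama's default, and a running server answers `GET /` with a plain "Ollama is running" message):

```python
import urllib.request
import urllib.error

def ollama_reachable(endpoint: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at the given endpoint."""
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # connection refused, DNS failure, or timeout

# ollama_reachable() stays False until `ollama serve` is started
```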
### GitHub Copilot Errors

- Verify the Copilot CLI is installed
- Check authentication: `gh auth status`
- Verify your subscription is active
- Check model availability
- Review CLI logs for details
## Next Steps

- **Provider Comparison** - Compare all available providers
- **Function Tools** - Add function calling capabilities
- **Workflows** - Build multi-agent workflows
- **Hosting & Deployment** - Deploy agents to production