Overview
This portfolio integrates Azure AI’s chat completion API to power an intelligent chat assistant. The integration uses Azure’s serverless inference endpoint through a Netlify Function proxy.
Azure AI Setup
Create Azure Account
Sign up for a Microsoft Azure account at azure.microsoft.com
Azure offers free credits for new users, making it well suited to testing and small-scale deployments.
Access Azure AI Services
Navigate to Azure AI Studio:
- Go to Azure AI Studio
- Sign in with your Azure account
- Create a new project or select an existing one
Deploy a Model
Deploy a chat completion model:
- Navigate to Deployments in your Azure AI project
- Click Create new deployment
- Select a model (e.g., `gpt-4o-mini` for cost-effective inference)
- Configure deployment settings
- Deploy the model
API Endpoint
The integration uses Azure’s serverless inference endpoint. This is a unified endpoint that routes requests to your deployed models based on the model name in your request.
Integration Architecture
The Netlify Function acts as a secure proxy:
- Frontend sends chat requests to `/.netlify/functions/chat`
- Netlify Function adds authentication and forwards to Azure AI
- Azure AI processes the request and returns completions
- Netlify Function proxies the response back to the frontend
Configuration
Environment Variables
Set your Azure AI API token in Netlify as an environment variable, either through your site’s environment variable settings in the Netlify dashboard or with the Netlify CLI (e.g., `netlify env:set API_TOKEN <your-token>`).
Function Implementation
The Netlify Function (`netlify/functions/chat.ts`) handles authentication automatically:
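The original implementation isn’t reproduced here, but a minimal sketch of such a proxy might look like the following. The endpoint URL is a placeholder, and the local handler types stand in for the `@netlify/functions` package; `API_TOKEN` matches the environment variable configured above.

```typescript
// netlify/functions/chat.ts — illustrative sketch, not the original implementation.

// Minimal local shapes for a Netlify Function handler (subset of @netlify/functions).
type NetlifyEvent = { httpMethod: string; body: string | null };
type NetlifyResult = { statusCode: number; headers?: Record<string, string>; body: string };

// Placeholder: substitute your deployment's serverless inference endpoint.
const AZURE_ENDPOINT = "https://<your-endpoint>/chat/completions";

export const handler = async (event: NetlifyEvent): Promise<NetlifyResult> => {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" };
  }

  // Forward the request body, attaching the token server-side so it
  // never reaches the browser.
  const response = await fetch(AZURE_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_TOKEN}`,
    },
    body: event.body ?? "{}",
  });

  // Proxy Azure's status code and JSON body back to the frontend unchanged.
  return {
    statusCode: response.status,
    headers: { "Content-Type": "application/json" },
    body: await response.text(),
  };
};
```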
Request Format
Send requests following the OpenAI chat completion format.
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `messages` | Array | Yes | Array of message objects with `role` and `content` |
| `model` | String | Yes | Model identifier (e.g., `gpt-4o-mini`) |
| `max_tokens` | Number | No | Maximum tokens in response (default: 1000) |
| `temperature` | Number | No | Randomness level 0–2 (default: 0.7) |
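Given these parameters, a request body might look like this (contents illustrative):

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Hello!" }
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
```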
Response Format
Azure AI returns responses in OpenAI-compatible format, with the completion text in `choices[0].message.content`.
Frontend Integration Example
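A sketch of the frontend side might look like the following (the function name and parameter defaults are illustrative; the response is parsed per the OpenAI-compatible shape, where the completion text lives in `choices[0].message.content`):

```typescript
// Illustrative browser-side helper: send one user message through the proxy
// and return the assistant's reply.
async function sendChat(userMessage: string): Promise<string> {
  const response = await fetch("/.netlify/functions/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: userMessage }],
      max_tokens: 1000,
      temperature: 0.7,
    }),
  });

  if (!response.ok) {
    throw new Error(`Chat request failed: ${response.status}`);
  }

  // OpenAI-compatible response shape: choices[0].message.content holds the text.
  const data = await response.json();
  return data.choices[0].message.content;
}
```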
Available Models
Azure AI supports various models. Popular options:
- gpt-4o-mini: Cost-effective, fast responses, good for general chat
- gpt-4o: More capable, better for complex tasks
- gpt-4-turbo: Advanced reasoning and longer context
Check the Azure AI Model Catalog for the latest available models and pricing.
Cost Optimization
Choose the Right Model
Use `gpt-4o-mini` for general chat interactions to minimize costs while maintaining quality.
Troubleshooting
Authentication Errors
If you receive 401 Unauthorized errors:
- Verify your `API_TOKEN` environment variable is set correctly in Netlify
- Check that the token hasn’t expired
- Ensure you’re using the correct token (not the endpoint URL)
Model Not Found
If you get model not found errors:
- Verify the model is deployed in your Azure AI project
- Check the model name matches exactly
- Ensure your API token has access to the model
Timeout Issues
If requests time out:
- Reduce `max_tokens` to speed up generation
- Check Azure AI service status
- Consider upgrading your Netlify plan for longer function timeouts (synchronous Netlify Functions time out after 10 seconds by default)
Security Best Practices
- Never expose API tokens in frontend code or version control
- Use environment variables for all sensitive credentials
- Implement rate limiting to prevent abuse and unexpected costs
- Restrict CORS to your specific domain in production
- Monitor API usage regularly through Azure portal
- Rotate tokens periodically for enhanced security
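To illustrate the rate-limiting point above, a minimal sliding-window limiter could sit in front of the proxy. This is a sketch only: the state is in-memory and per function instance, so a production setup would use a shared store (e.g., Redis) instead.

```typescript
// Illustrative per-IP sliding-window rate limiter (in-memory; not shared
// across Netlify function instances).
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 20;  // max requests per IP per window

const hits = new Map<string, number[]>();

function isRateLimited(ip: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(ip, recent);
    return true; // over the limit: caller should respond with 429
  }
  recent.push(now);
  hits.set(ip, recent);
  return false;
}
```

The proxy would call `isRateLimited(clientIp)` before forwarding a request and return a 429 response when it is true.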