Getting Your OpenAI API Key
Create OpenAI Account
Visit OpenAI Platform and create an account if you don’t have one.
You’ll need to provide a phone number for verification.
Generate API Key
- Go to API Keys in your dashboard
- Click Create new secret key
- Give it a descriptive name (e.g., “LinkedIn Job Analyzer”)
- Copy the key immediately - it won’t be shown again
Your key will start with sk-proj- or sk-.
Configuring the API Key
Environment Variable Setup
Add your API key to the .env file in the project root.
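A minimal .env entry might look like this (assuming the variable name OPENAI_API_KEY, which the openai library reads by default; substitute your actual key):

```shell
OPENAI_API_KEY=sk-proj-your-key-here
```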
How the Application Uses the API Key
The application loads the API key at startup (gpt_analyzer.py:14-23). If the API key is missing, the application will start but return a warning message when attempting to analyze jobs.
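The loading logic might look roughly like this (a sketch, not the actual gpt_analyzer.py code; it assumes the key is exposed via the OPENAI_API_KEY environment variable):

```python
import os

def load_api_key():
    """Read the OpenAI key from the environment; warn (don't crash) if absent."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        print("Warning: OPENAI_API_KEY is not set; job analysis is unavailable.")
    return key
```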
Model Selection
The application currently uses GPT-3.5 Turbo by default (gpt_analyzer.py:63).
GPT-3.5 Turbo vs GPT-4
| Feature | GPT-3.5 Turbo | GPT-4 |
|---|---|---|
| Speed | Fast (~2-3 seconds) | Slower (~10-20 seconds) |
| Cost per 1M tokens | $0.50 input / $1.50 output | $30 input / $60 output |
| Quality | Good for straightforward tasks | Better reasoning, more accurate |
| Best for | High-volume analysis | Complex requirements analysis |
Changing the Model
To use GPT-4 or other models, edit inteligencia_artificial/gpt_analyzer.py:63:
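For illustration, the change might look like this (the exact variable name and surrounding call in gpt_analyzer.py may differ; "gpt-4" can be any valid model ID):

```python
# inteligencia_artificial/gpt_analyzer.py, around line 63 (shape assumed)
MODEL_NAME = "gpt-4"  # previously "gpt-3.5-turbo"
```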
The application limits input to the first 30 skills (gpt_analyzer.py:41) to control token usage and costs.
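That cap could be implemented with a simple slice (a sketch; the real code in gpt_analyzer.py may differ):

```python
MAX_SKILLS = 30  # hard cap to keep the prompt small and cheap

def truncate_skills(skills):
    """Keep only the first MAX_SKILLS entries before building the prompt."""
    return skills[:MAX_SKILLS]
```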
Cost Considerations
Token Usage Per Request
Each job analysis uses approximately:
- Input tokens: ~300-500 (prompt + job description)
- Output tokens: Up to 500 (configured limit)
- Total: ~800-1000 tokens per analysis
Estimated Costs
Using GPT-3.5 Turbo:
- Per job analysis: ~$0.002 (0.1-0.2 cents)
- 100 job analyses: ~$0.20
- 1,000 job analyses: ~$2

Using GPT-4:
- Per job analysis: ~$0.03-$0.06 (3-6 cents)
- 100 job analyses: ~$6
- 1,000 job analyses: ~$60
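These figures can be reproduced with a small helper (prices are the per-million-token list prices from the comparison table above; check OpenAI's current pricing page before relying on them):

```python
# USD per 1M tokens (list prices from the comparison table above)
PRICES = {
    "gpt-3.5-turbo": {"input": 0.50, "output": 1.50},
    "gpt-4": {"input": 30.00, "output": 60.00},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the USD cost of a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

A typical analysis (~400 input, 500 output tokens) on GPT-3.5 Turbo comes out to roughly $0.001, consistent with the 0.1-0.2 cent range above.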
API Usage Limits
Rate Limits
OpenAI enforces rate limits based on your account tier:
| Tier | Requests per Minute | Tokens per Minute |
|---|---|---|
| Free Trial | 3 | 40,000 |
| Tier 1 ($5+ spent) | 3,500 | 200,000 |
| Tier 2 ($50+ spent) | 3,500 | 450,000 |
For normal usage (a few analyses per minute), you won’t hit these limits. Batch processing might require delays between requests.
Handling Rate Limits
If you encounter rate limits, the application will return an error. Consider:
- Adding retry logic with exponential backoff
- Implementing delays between batch requests
- Upgrading your account tier by spending $5+ to increase limits
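A generic retry wrapper along these lines handles transient rate-limit errors (a sketch; in real code you would catch openai.RateLimitError specifically rather than bare Exception):

```python
import random
import time

def with_retry(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.25))
```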
Error Handling
The application handles common API errors (gpt_analyzer.py:75-77).
Missing API Key
If the key is absent from the .env file, the analyzer returns a warning instead of an analysis.
API Request Errors
- Invalid API key
- Insufficient credits/quota exceeded
- Network connectivity issues
- Rate limit exceeded
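One way to surface these cases is to map exception types to user-facing messages (a sketch; the class names match the openai Python library's exceptions, but gpt_analyzer.py may handle them differently):

```python
def describe_api_error(exc):
    """Translate a failed API call into a user-facing message by exception name."""
    messages = {
        "AuthenticationError": "Invalid API key - check your .env file.",
        "RateLimitError": "Rate limit or quota exceeded - wait and retry.",
        "APIConnectionError": "Network problem - check your connectivity.",
    }
    return messages.get(type(exc).__name__, f"Unexpected API error: {exc}")
```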
Troubleshooting Steps
Check OpenAI Dashboard
Visit the OpenAI Platform to:
- Verify billing status
- Check usage and limits
- Review error logs
Security Best Practices
Key Rotation
If your key is compromised:
- Immediately revoke it in the OpenAI dashboard
- Generate a new key
- Update your .env file
- Restart the application
Environment-Specific Keys
For production deployments, use a separate API key for each environment (development, staging, production) so usage can be tracked per environment and a compromised key can be revoked without affecting the others.
Monitoring Usage
OpenAI Dashboard
Monitor your usage at platform.openai.com/usage:
- Daily/monthly token consumption
- Cost breakdown by model
- Request counts and error rates
Application Logging
The analyzer logs each request (gpt_analyzer.py:38):
- Number of tokens used per request
- Response times
- Error rates
Testing the Configuration
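A quick local sanity check might look like this (format assumption: OpenAI keys start with sk-; a full verification requires an actual API request):

```python
import os

def key_looks_valid(key):
    """Cheap local check: the key exists and has the expected sk- prefix."""
    return bool(key) and key.startswith("sk-")

if __name__ == "__main__":
    print("Key present and well-formed:", key_looks_valid(os.getenv("OPENAI_API_KEY")))
```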
Verify your OpenAI setup is working before analyzing jobs.
Next Steps
Basic Usage
Start analyzing LinkedIn job postings
Architecture Overview
Understand how the AI analyzer integrates