The LinkedIn Job Analyzer uses OpenAI’s GPT models to analyze job descriptions and generate structured summaries. This guide covers API key setup, model selection, and cost management.

Getting Your OpenAI API Key

Step 1: Create OpenAI Account

Visit OpenAI Platform and create an account if you don’t have one.
You’ll need to provide a phone number for verification.
Step 2: Add Payment Method

Navigate to Settings → Billing and add a payment method.
OpenAI’s free trial has limited credits. You’ll need to add a payment method for production use.
Step 3: Generate API Key

  1. Go to API Keys in your dashboard
  2. Click Create new secret key
  3. Give it a descriptive name (e.g., “LinkedIn Job Analyzer”)
  4. Copy the key immediately - it won’t be shown again
The key will start with sk-proj- or sk-
Step 4: Set Usage Limits (Recommended)

In Settings → Billing → Usage limits, set:
  • Monthly budget cap (e.g., $10)
  • Email notifications at 75% and 90%
This prevents unexpected charges.

Configuring the API Key

Environment Variable Setup

Add your API key to the .env file in the project root:
.env
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Security Best Practices:
  • Never commit .env files to version control
  • Don’t share API keys in screenshots or logs
  • Rotate keys if accidentally exposed
  • Use separate keys for development and production

How the Application Uses the API Key

The application loads the API key at startup (gpt_analyzer.py:14-23):
from dotenv import load_dotenv
import os
from openai import OpenAI

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

if api_key:
    client = OpenAI(api_key=api_key)
else:
    print("Warning: OPENAI_API_KEY not found in .env file")
If the API key is missing, the application will start but return a warning message when attempting to analyze jobs.

Model Selection

The application currently uses GPT-3.5 Turbo by default (gpt_analyzer.py:63):
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[...],
    temperature=0.7,
    max_tokens=500
)

GPT-3.5 Turbo vs GPT-4

| Feature | GPT-3.5 Turbo | GPT-4 |
| --- | --- | --- |
| Speed | Fast (~2-3 seconds) | Slower (~10-20 seconds) |
| Cost per 1M tokens | $0.50 input / $1.50 output | $30 input / $60 output |
| Quality | Good for straightforward tasks | Better reasoning, more accurate |
| Best for | High-volume analysis | Complex requirements analysis |

Changing the Model

To use GPT-4 or other models, edit inteligencia_artificial/gpt_analyzer.py:63:
# Options:
model="gpt-3.5-turbo"      # Default - fast and cheap
model="gpt-4o-mini"        # Smaller GPT-4, good balance
model="gpt-4"              # Most capable, expensive
model="gpt-4-turbo"        # Fast GPT-4 variant
The application limits input to the first 30 skills (gpt_analyzer.py:41) to control token usage and costs.
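The truncation step can be sketched as follows; `MAX_SKILLS` and `build_prompt` are illustrative names for this sketch, not the application's actual identifiers:

```python
# Illustrative sketch of capping the skills list before building the prompt,
# so long postings don't inflate input tokens (and cost).
MAX_SKILLS = 30

def build_prompt(titulo: str, habilidades: list[str]) -> str:
    # Keep only the first MAX_SKILLS entries
    recortadas = habilidades[:MAX_SKILLS]
    return (
        f"Analiza esta oferta de trabajo: {titulo}\n"
        f"Habilidades requeridas: {', '.join(recortadas)}"
    )

prompt = build_prompt("Software Engineer", [f"skill-{i}" for i in range(50)])
print(prompt.count("skill-"))  # 30 — only the first 30 skills survive the cut
```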

Cost Considerations

Token Usage Per Request

Each job analysis uses approximately:
  • Input tokens: ~300-500 (prompt + job description)
  • Output tokens: Up to 500 (configured limit)
  • Total: ~800-1000 tokens per analysis

Estimated Costs

Using GPT-3.5 Turbo:
  • Per job analysis: ~$0.001 - $0.002 (0.1-0.2 cents)
  • 100 job analyses: ~$0.10 - $0.20
  • 1,000 job analyses: ~$1 - $2
Using GPT-4:
  • Per job analysis: ~$0.03 - $0.06 (3-6 cents)
  • 100 job analyses: ~$3 - $6
  • 1,000 job analyses: ~$30 - $60
Cost Control Tips:
  • Start with GPT-3.5 Turbo for testing
  • Set monthly budget limits in OpenAI dashboard
  • The app limits skills to 30 items to reduce costs
  • Monitor usage in OpenAI dashboard regularly
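To sanity-check these figures for your own volumes, a quick estimator helps. It uses the per-1M-token prices listed above, which may change; check OpenAI's pricing page for current figures:

```python
# Back-of-the-envelope cost estimator from token counts.
# Prices are USD per 1M tokens: (input, output) — illustrative values.
PRICES = {
    "gpt-3.5-turbo": (0.50, 1.50),
    "gpt-4": (30.00, 60.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# One analysis at ~500 input + 500 output tokens
print(round(estimate_cost("gpt-3.5-turbo", 500, 500), 4))  # 0.001
```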

API Usage Limits

Rate Limits

OpenAI enforces rate limits based on your account tier:
| Tier | Requests per Minute | Tokens per Minute |
| --- | --- | --- |
| Free Trial | 3 | 40,000 |
| Tier 1 ($5+ spent) | 3,500 | 200,000 |
| Tier 2 ($50+ spent) | 3,500 | 450,000 |
For normal usage (a few analyses per minute), you won’t hit these limits. Batch processing might require delays between requests.

Handling Rate Limits

If you encounter rate limits, the application will return an error. Consider:
  1. Adding retry logic with exponential backoff
  2. Implementing delays between batch requests
  3. Upgrading your account tier (spending $5+ increases limits)
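A minimal sketch of retry logic with exponential backoff, assuming a zero-argument wrapper around the completion call (in real code, narrow the exception to `openai.RateLimitError`):

```python
import random
import time

# Retry-with-exponential-backoff sketch. `call_api` stands in for a
# zero-argument wrapper around client.chat.completions.create(...).
def with_backoff(call_api, max_retries: int = 5, base_delay: float = 1.0):
    for attempt in range(max_retries):
        try:
            return call_api()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Delays grow as base, 2x, 4x, ... with a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay / 10))
```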

Error Handling

The application handles common API errors (gpt_analyzer.py:75-77):

Missing API Key

Warning: No se ha configurado la API Key de OpenAI. 
Crea un archivo .env con OPENAI_API_KEY=...
(Translation: "The OpenAI API key has not been configured. Create a .env file with OPENAI_API_KEY=...")
Solution: Add the API key to your .env file.

API Request Errors

Error al consultar ChatGPT: [error message]
(Translation: "Error querying ChatGPT: [error message]")
Common causes:
  • Invalid API key
  • Insufficient credits/quota exceeded
  • Network connectivity issues
  • Rate limit exceeded
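The pattern behind these messages can be sketched as below; `analizar` is a hypothetical wrapper, not the application's exact code, and in practice you would catch OpenAI's specific exception classes rather than bare `Exception`:

```python
# Sketch of catch-and-report error handling around a chat completion call.
def analizar(client, prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
    except Exception as exc:
        # Covers invalid keys, quota errors, network failures, rate limits
        return f"Error al consultar ChatGPT: {exc}"
```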

Troubleshooting Steps

Step 1: Verify API Key

Check that your key is correctly set:
python -c "import os; from dotenv import load_dotenv; load_dotenv(); print(os.getenv('OPENAI_API_KEY')[:20] if os.getenv('OPENAI_API_KEY') else 'Not found')"
Step 2: Test API Key

Test the key directly:
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
Step 3: Check OpenAI Dashboard

Visit the OpenAI Platform to:
  • Verify billing status
  • Check usage and limits
  • Review error logs
Step 4: Check Application Logs

The application prints debug information:
[IA] Generando resumen con ChatGPT...
This message ("Generating summary with ChatGPT...") confirms a request is in flight; look for error messages in the console output.

Security Best Practices

Protecting Your API Key:
  • Store keys in .env files, never in code
  • Add .env to .gitignore
  • Use separate keys for dev/staging/production
  • Rotate keys regularly
  • Enable usage alerts in OpenAI dashboard
  • Never log or display full API keys
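When a key must appear in diagnostics, a small masking helper keeps most of it hidden. This helper is illustrative, not part of the application:

```python
# Mask an API key before logging: keep a short prefix, star out the rest.
def mask_key(key: str, visible: int = 8) -> str:
    if len(key) <= visible:
        return "*" * len(key)
    return key[:visible] + "*" * (len(key) - visible)

print(mask_key("sk-proj-abcdef1234567890"))  # sk-proj-****************
```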

Key Rotation

If your key is compromised:
  1. Immediately revoke it in OpenAI dashboard
  2. Generate a new key
  3. Update your .env file
  4. Restart the application

Environment-Specific Keys

For production deployments:
# .env.development
OPENAI_API_KEY=sk-proj-dev-key...

# .env.production
OPENAI_API_KEY=sk-proj-prod-key...
This allows tracking usage separately per environment.

Monitoring Usage

OpenAI Dashboard

Monitor your usage at platform.openai.com/usage:
  • Daily/monthly token consumption
  • Cost breakdown by model
  • Request counts and error rates

Application Logging

The analyzer logs each request (gpt_analyzer.py:38):
[IA] Generando resumen con ChatGPT...
You can enhance logging to track:
  • Number of tokens used per request
  • Response times
  • Error rates
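A sketch of such enhanced logging, wrapping the completion call to record latency and token usage (the `usage` attribute with `total_tokens` is part of the chat completions response object; the wrapper itself is an assumption of this sketch):

```python
import time

# Wrap a completion call to log elapsed time and token usage.
def timed_completion(create_fn, **kwargs):
    start = time.perf_counter()
    response = create_fn(**kwargs)
    elapsed = time.perf_counter() - start
    usage = getattr(response, "usage", None)
    total = usage.total_tokens if usage else "n/a"
    print(f"[IA] tokens={total} tiempo={elapsed:.2f}s")
    return response
```

Usage: `timed_completion(client.chat.completions.create, model="gpt-3.5-turbo", messages=[...])`.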

Testing the Configuration

Verify your OpenAI setup is working:
# test_openai.py
from inteligencia_artificial.gpt_analyzer import AIAnalyzer

analyzer = AIAnalyzer()
result = analyzer.generar_resumen(
    titulo="Software Engineer",
    habilidades=["Python", "Flask", "REST APIs", "Docker"]
)

print(result)
Run the test:
python test_openai.py
You should see a structured analysis of the job requirements.

Next Steps

  • Basic Usage - Start analyzing LinkedIn job postings
  • Architecture Overview - Understand how the AI analyzer integrates
