
Overview

The LinkedIn Job Analyzer integrates OpenAI’s GPT-3.5-turbo model to generate intelligent summaries and insights from raw job descriptions. The AI analysis is triggered on-demand when users request a professional breakdown of the job requirements.
AI analysis is optional and only runs when explicitly requested by the user

Architecture

The AI analysis system is implemented in a single class that handles OpenAI API communication:
gpt_analyzer.py:9-23
class AIAnalyzer:
    """
    Clase responsable de la comunicación con modelos de Inteligencia Artificial (OpenAI).
    """
    
    def __init__(self):
        # Buscamos la llave de API en las variables de entorno
        api_key = os.getenv("OPENAI_API_KEY")
        
        # Inicializamos el cliente solo si existe la llave
        if api_key:
            self.client = OpenAI(api_key=api_key)
        else:
            self.client = None
            print("[IA] Advertencia: No se encontró OPENAI_API_KEY en el archivo .env")

Environment Configuration

The analyzer requires an OpenAI API key stored in environment variables:
OPENAI_API_KEY=sk-proj-...
If the API key is not configured, the analyzer will return a warning message instead of crashing

Analysis Method

The core analysis method generar_resumen takes a job title and list of skills, then generates a structured professional summary:
gpt_analyzer.py:25-36
def generar_resumen(self, titulo: str, habilidades: List[str]) -> str:
    """
    Sends the skills to ChatGPT to obtain a structured professional summary.
    """
    # Safety check in case there is no API key
    if not self.client:
        return "⚠️ No se ha configurado la API Key de OpenAI. Crea un archivo .env con OPENAI_API_KEY=..."

    # Safety check in case the skills list is empty
    if not habilidades:
        return "⚠️ No se encontraron habilidades para analizar."

Safety Checks

- Verifies that the OpenAI client was initialized successfully before making API calls
- Returns a user-friendly message if no skills were extracted from the job posting
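The two guard clauses can be isolated into a small helper for illustration (`guard_checks` is a hypothetical name, not part of the project; it returns the warning to display, or None when it is safe to call the API):

```python
from typing import List, Optional

def guard_checks(client: Optional[object], habilidades: List[str]) -> Optional[str]:
    """Mirror the safety checks in generar_resumen (illustrative only)."""
    if not client:
        return "⚠️ No se ha configurado la API Key de OpenAI."
    if not habilidades:
        return "⚠️ No se encontraron habilidades para analizar."
    return None  # safe to proceed with the API call
```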

Prompt Engineering

The analyzer uses a carefully crafted prompt to generate structured, actionable insights:
gpt_analyzer.py:38-59
try:
    print("[IA] Generando resumen con ChatGPT...")
    
    # Convert the list to plain text (limited to the first 30 entries to save tokens/money)
    lista_texto = "\n- ".join(habilidades[:30])

    # Build the prompt (the instructions for the AI)
    prompt = f"""
    Actúa como un reclutador experto en tecnología. Analiza la siguiente oferta de trabajo.
    
    Título del puesto: {titulo}
    
    Fragmentos extraídos de la descripción:
    {lista_texto}
    
    Por favor, genera un resumen estructurado en Markdown que incluya:
    1. **Objetivo del Rol**: En una frase, ¿qué buscan?
    2. **Stack Tecnológico Principal**: Las 5 herramientas más importantes mencionadas.
    3. **Skills Blandas**: ¿Qué aptitudes personales buscan?
    4. **Nivel de Experiencia**: ¿Parece Junior, Mid o Senior? (Deduce basado en el texto).
    
    Sé conciso y profesional.
    """

Prompt Structure

1. **Role Assignment**: Instructs the AI to act as an expert technology recruiter
2. **Context Provision**: Provides the job title and up to 30 skill fragments (limited for cost efficiency)
3. **Output Format Specification**: Requests a structured Markdown response with 4 specific sections
4. **Tone Guidance**: Emphasizes conciseness and professionalism

The prompt limits analysis to the first 30 skills to control API costs while maintaining quality
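The 30-skill truncation can be verified in isolation. A sketch (`build_prompt` is a simplified, hypothetical stand-in for the f-string in gpt_analyzer.py, keeping only the parts relevant to the limit):

```python
def build_prompt(titulo: str, habilidades: list[str]) -> str:
    # Only the first 30 fragments are joined, as in gpt_analyzer.py
    lista_texto = "\n- ".join(habilidades[:30])
    return (
        f"Título del puesto: {titulo}\n\n"
        f"Fragmentos extraídos de la descripción:\n- {lista_texto}"
    )

prompt = build_prompt("Backend Developer", [f"skill {i}" for i in range(50)])
```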

API Request Configuration

The analyzer makes a chat completion request with optimized parameters:
gpt_analyzer.py:61-73
# Call the OpenAI API
response = self.client.chat.completions.create(
    model="gpt-3.5-turbo", # Can be upgraded to gpt-4o-mini or gpt-4 if desired
    messages=[
        {"role": "system", "content": "Eres un asistente experto en RRHH y tecnología."},
        {"role": "user", "content": prompt}
    ],
    temperature=0.7,
    max_tokens=500
)

# Return the content of the generated message
return response.choices[0].message.content

Parameter Breakdown

- `model` (string, default: "gpt-3.5-turbo"): The OpenAI model used for generation. Can be upgraded to gpt-4o-mini or gpt-4 for higher quality
- `messages` (array): Conversation context with the system role and user prompt
- `temperature` (float, default: 0.7): Controls randomness (0.7 balances creativity with consistency)
- `max_tokens` (integer, default: 500): Limits response length to control costs
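These hard-coded values could be made overridable without touching the call site. A hedged sketch (`build_request` and `DEFAULTS` are hypothetical; the project hard-codes the values shown above):

```python
DEFAULTS = {"model": "gpt-3.5-turbo", "temperature": 0.7, "max_tokens": 500}

def build_request(prompt: str, **overrides) -> dict:
    """Assemble keyword arguments for chat.completions.create()."""
    params = {**DEFAULTS, **overrides}
    params["messages"] = [
        {"role": "system", "content": "Eres un asistente experto en RRHH y tecnología."},
        {"role": "user", "content": prompt},
    ]
    return params

# e.g. self.client.chat.completions.create(**build_request(prompt, model="gpt-4o-mini"))
req = build_request("hola", model="gpt-4o-mini")
```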

Generated Insights

The AI generates a structured analysis with four key sections:

- **Objetivo del Rol**: A one-sentence summary of what the employer is looking for
- **Stack Tecnológico Principal**: The 5 most important technologies and tools mentioned
- **Skills Blandas**: Soft skills and personal attributes required
- **Nivel de Experiencia**: Inferred seniority level (Junior, Mid, or Senior)

Example Output

Sample AI-Generated Summary
**Objetivo del Rol**: Buscan un desarrollador Full Stack con experiencia en React y Node.js para construir aplicaciones web escalables.

**Stack Tecnológico Principal**:
1. React.js
2. Node.js
3. MongoDB
4. Docker
5. AWS

**Skills Blandas**:
- Trabajo en equipo
- Comunicación efectiva
- Resolución de problemas
- Adaptabilidad

**Nivel de Experiencia**: Mid-Senior (3-5 años de experiencia requeridos)

Error Handling

The analyzer includes robust error handling for API failures:
gpt_analyzer.py:75-77
except Exception as e:
    # Error handling (for example, if the account runs out of credit or there is no internet connection)
    return f"❌ Error al consultar ChatGPT: {str(e)}"

Common Error Scenarios

  • Authentication: returns an error message if the key is incorrect or expired
  • Rate limiting: handles OpenAI rate-limit errors gracefully
  • Quota: returns a clear message if the OpenAI account has no remaining balance
  • Connectivity: catches connection errors and timeouts
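The blanket except Exception could be refined to distinguish these cases. A sketch, assuming the exception class names of the openai v1 SDK (AuthenticationError, RateLimitError, APIConnectionError); `describe_error` and the `FRIENDLY` table are hypothetical, and a local stand-in class is defined so the sketch runs without the SDK:

```python
# Friendlier messages keyed by exception class name (names follow the
# openai v1 SDK: AuthenticationError, RateLimitError, APIConnectionError).
FRIENDLY = {
    "AuthenticationError": "❌ API key incorrecta o expirada.",
    "RateLimitError": "❌ Límite de peticiones o saldo agotado; reintenta más tarde.",
    "APIConnectionError": "❌ Problema de red al contactar con OpenAI.",
}

def describe_error(exc: Exception) -> str:
    """Fall back to the generic message used in gpt_analyzer.py."""
    return FRIENDLY.get(type(exc).__name__, f"❌ Error al consultar ChatGPT: {exc}")

class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError so the sketch is self-contained."""
```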

Usage Example

Using the AIAnalyzer
from inteligencia_artificial.gpt_analyzer import AIAnalyzer

# Initialize analyzer
analyzer = AIAnalyzer()

# Generate summary
job_title = "Senior Python Developer"
skills = [
    "Python 3.10+",
    "FastAPI",
    "PostgreSQL",
    "Docker",
    "AWS Lambda",
    "Strong communication skills",
    "5+ years experience"
]

summary = analyzer.generar_resumen(job_title, skills)
print(summary)

Cost Optimization

The analyzer implements several strategies to minimize API costs:
  • Skill Limit: Only the first 30 skills are analyzed (gpt_analyzer.py:41)
  • Token Cap: Maximum response length set to 500 tokens (gpt_analyzer.py:69)
  • Model Choice: Uses GPT-3.5-turbo by default (cheaper than GPT-4)
  • On-Demand Only: Analysis only runs when explicitly requested
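A back-of-envelope check of what one analysis costs, assuming roughly 4 characters per token and illustrative gpt-3.5-turbo prices (all constants below are assumptions, not current quotes; check OpenAI's pricing page):

```python
# Illustrative USD prices per 1K tokens (assumptions, not current quotes)
PRICE_PER_1K_INPUT = 0.0005
PRICE_PER_1K_OUTPUT = 0.0015
CHARS_PER_TOKEN = 4  # rough heuristic for English/Spanish text

def estimate_cost(prompt: str, max_tokens: int = 500) -> float:
    """Upper-bound cost estimate in USD for one chat completion call."""
    input_tokens = len(prompt) / CHARS_PER_TOKEN
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
        + (max_tokens / 1000) * PRICE_PER_1K_OUTPUT
```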

Customization Options

- **Upgrade Model**: Change model="gpt-3.5-turbo" to gpt-4o-mini or gpt-4 for better quality
- **Adjust Temperature**: Modify temperature (0.0-2.0) to control output randomness
- **Increase Token Limit**: Raise max_tokens for longer, more detailed responses
- **Analyze More Skills**: Change habilidades[:30] to include more skills in the analysis
