## Overview
The PromptTemplates class provides static methods for generating structured prompts used throughout the interview process. These templates ensure consistent, high-quality AI responses.
## Template Methods
### first_question_generation()
Generates a prompt for creating the opening interview question.
```python
from utils.prompt_templates import PromptTemplates

prompt = PromptTemplates.first_question_generation(
    cv_text="5+ years Python development experience...",
    job_description="Looking for senior backend engineer...",
    job_title="Senior Backend Engineer",
    company_name="TechCorp"
)
```
Parameters:
- `cv_text`: The candidate's CV/resume text (truncated to the first 2000 characters)
- `job_description`: The job description text (truncated to the first 2000 characters)
- `job_title`: The position title being interviewed for
- `company_name`: The name of the hiring company

Returns: A formatted prompt string instructing the AI to generate an opening interview question
Prompt Characteristics:
- Requests a warm, professional opener
- Focuses on most relevant experience for the role
- Encourages conversational, open-ended questions
- Instructs AI to return only the question text without formatting
Location: `utils/prompt_templates.py:4`
Example Output:
```python
prompt = PromptTemplates.first_question_generation(
    cv_text="Senior developer with Django and Flask experience",
    job_description="We need a Python expert for backend development",
    job_title="Senior Python Developer",
    company_name="Acme Inc"
)

# The AI will receive a prompt like:
# "You are an experienced interviewer starting an interview for Senior Python Developer at Acme Inc.
# ...
# Generate the opening question now:"
```
### followup_question_generation()
Generates a prompt for creating contextual follow-up questions.
```python
prompt = PromptTemplates.followup_question_generation(
    conversation_history="Interviewer: Tell me about Python\nCandidate: I've used it for 5 years...",
    cv_text="5+ years Python development...",
    job_description="Backend engineer role...",
    question_count=1,
    max_questions=8
)
```
Parameters:
- `conversation_history`: Formatted conversation transcript (use the `format_conversation_history()` helper)
- `cv_text`: The candidate's CV text (truncated to the first 1500 characters)
- `job_description`: The job description (truncated to the first 1500 characters)
- `question_count`: Current number of questions asked (zero-indexed)
- `max_questions`: Total number of questions planned for the interview

Returns: A formatted prompt instructing the AI to generate a contextual follow-up question
Prompt Instructions:
- Build naturally on previous answer
- Explore different aspects of experience
- Assess skills from job description
- Maintain conversational tone
- Return only the question without prefixes
Location: `utils/prompt_templates.py:31`
Example:
```python
formatted_history = PromptTemplates.format_conversation_history([
    {"role": "assistant", "content": "Tell me about your Python experience"},
    {"role": "user", "content": "I've worked with Python for 5 years"}
])

prompt = PromptTemplates.followup_question_generation(
    conversation_history=formatted_history,
    cv_text=cv_text,
    job_description=job_desc,
    question_count=1,
    max_questions=8
)
```
### feedback_generation()
Generates a prompt for comprehensive interview performance analysis.
```python
prompt = PromptTemplates.feedback_generation(
    conversation_history="Interviewer: ...\nCandidate: ...",
    cv_text="5+ years Python development...",
    job_description="Backend engineer role...",
    job_title="Senior Backend Engineer"
)
```
Parameters:
- `conversation_history`: Complete formatted interview transcript
- `cv_text`: The candidate's CV text (truncated to the first 2000 characters)
- `job_description`: The complete job description text
- `job_title`: The position title being interviewed for

Returns: A formatted prompt instructing the AI to generate structured JSON feedback
Feedback Components:

1. Overall Performance Score (1-10):
   - Answer relevance
   - Communication clarity
   - Technical depth
   - Job alignment
2. Strengths (3-5 points):
   - What the candidate did well
   - Demonstrated skills
   - Specific examples
3. Areas for Improvement (3-5 points):
   - What could be stronger
   - Answers lacking depth
   - Skills needing demonstration
4. CV Improvement Suggestions:
   - Role-specific modifications
   - Experiences to highlight
   - Missing requirements
   - Wording suggestions
Expected JSON Output Format:
```json
{
  "score": 8,
  "strengths": "• Strong technical depth in Python\n• Clear communication style\n• Good examples from past projects",
  "weaknesses": "• Could elaborate more on system design\n• Limited discussion of testing practices",
  "cv_improvements": "• Add metrics to achievements\n• Highlight distributed systems experience\n• Include more backend-specific keywords"
}
```
Location: `utils/prompt_templates.py:62`
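Because models occasionally wrap their JSON output in markdown code fences despite the strict format requirement, callers may want defensive parsing. The following is a minimal sketch; `parse_feedback` is a hypothetical helper, not part of the documented API:

```python
import json
import re

def parse_feedback(raw: str) -> dict:
    """Parse a model's feedback response into a dict.

    Strips optional ```json ... ``` fencing before parsing and
    extracts the first JSON object found in the text.
    """
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    match = re.search(r"\{.*\}", cleaned, re.DOTALL)
    if not match:
        raise ValueError("No JSON object found in feedback response")
    return json.loads(match.group(0))

raw = '```json\n{"score": 8, "strengths": "• Clear answers"}\n```'
feedback = parse_feedback(raw)
print(feedback["score"])  # 8
```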
Full Example:
```python
from utils.prompt_templates import PromptTemplates

convo = [
    {"role": "assistant", "content": "Tell me about your Python experience"},
    {"role": "user", "content": "I've built REST APIs with Flask and Django"},
    {"role": "assistant", "content": "Describe a challenging project"},
    {"role": "user", "content": "I redesigned our microservices architecture..."}
]

formatted = PromptTemplates.format_conversation_history(convo)

prompt = PromptTemplates.feedback_generation(
    conversation_history=formatted,
    cv_text=cv_text,
    job_description=job_desc,
    job_title="Senior Backend Engineer"
)

# Use this prompt with AIClient.generate_feedback()
```
### format_conversation_history()
Helper method to format conversation messages into a readable transcript.
```python
messages = [
    {"role": "assistant", "content": "What is your Python experience?"},
    {"role": "user", "content": "I have 5 years of experience"}
]

formatted = PromptTemplates.format_conversation_history(messages)
print(formatted)
# Output:
# Interviewer: What is your Python experience?
#
# Candidate: I have 5 years of experience
```
Parameters:
- `messages`: List of message dictionaries with `role` and `content` keys

Returns: A formatted transcript with "Interviewer:" and "Candidate:" labels, with turns separated by double newlines
Message Format:
- `role`: Either `"assistant"` (interviewer) or `"user"` (candidate)
- `content`: The message text
Location: `utils/prompt_templates.py:118`
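Based on the behavior described above, the helper can be sketched roughly as follows (an illustrative re-implementation, not the actual source):

```python
def format_conversation_history(messages):
    """Format messages into a readable interview transcript.

    Maps "assistant" -> "Interviewer" and "user" -> "Candidate",
    joining turns with blank lines.
    """
    labels = {"assistant": "Interviewer", "user": "Candidate"}
    lines = [
        f"{labels.get(msg['role'], msg['role'])}: {msg['content']}"
        for msg in messages
    ]
    return "\n\n".join(lines)

messages = [
    {"role": "assistant", "content": "What is your Python experience?"},
    {"role": "user", "content": "I have 5 years of experience"},
]
print(format_conversation_history(messages))
# Interviewer: What is your Python experience?
#
# Candidate: I have 5 years of experience
```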
Usage Example:
```python
from utils.prompt_templates import PromptTemplates

conversation = [
    {"role": "assistant", "content": "Tell me about yourself"},
    {"role": "user", "content": "I'm a software engineer with 5 years experience"},
    {"role": "assistant", "content": "What technologies do you use?"},
    {"role": "user", "content": "Primarily Python, Django, and PostgreSQL"}
]

transcript = PromptTemplates.format_conversation_history(conversation)

# Use in other template methods
prompt = PromptTemplates.followup_question_generation(
    conversation_history=transcript,
    cv_text="...",
    job_description="...",
    question_count=2,
    max_questions=8
)
```
## Complete Workflow Example
```python
from utils.prompt_templates import PromptTemplates
from client.ai_client import AIClient
from client.ai_provider_manager import ProviderManager
from client.gemini_provider import GeminiProvider
import os

# Setup
provider = GeminiProvider(api_key=os.getenv("GEMINI_API_KEY"))
manager = ProviderManager(providers=[provider])
client = AIClient(provider_manager=manager)

# Interview data
cv_text = "Senior Python Developer with 5 years experience..."
job_desc = "Looking for backend engineer skilled in Python and Django..."
job_title = "Senior Backend Engineer"
company = "TechCorp"

# 1. Generate first question
first_q = client.generate_first_question(
    cv_text=cv_text,
    job_desc=job_desc,
    job_title=job_title,
    company_name=company
)

conversation = [
    {"role": "assistant", "content": first_q},
    {"role": "user", "content": "I've built REST APIs with Django..."}
]

# 2. Generate follow-up questions
for i in range(3):
    followup = client.generate_followup_question(
        convo_history=conversation,
        cv_text=cv_text,
        job_desc=job_desc,
        question_count=i + 1,
        max_questions=8
    )
    conversation.append({"role": "assistant", "content": followup})
    # Get candidate answer (from user input in real app)
    conversation.append({"role": "user", "content": "[answer]"})

# 3. Generate feedback
feedback = client.generate_feedback(
    convo_history=conversation,
    cv_text=cv_text,
    job_desc=job_desc,
    job_title=job_title
)

print(f"Score: {feedback['score']}/10")
print(f"Strengths:\n{feedback['strengths']}")
print(f"Weaknesses:\n{feedback['weaknesses']}")
print(f"CV Improvements:\n{feedback['cv_improvements']}")
```
## Text Truncation
To manage token limits, templates automatically truncate input text:
| Template | CV Text | Job Description |
|---|---|---|
| `first_question_generation()` | 2000 chars | 2000 chars |
| `followup_question_generation()` | 1500 chars | 1500 chars |
| `feedback_generation()` | 2000 chars | No truncation |
Conversation history is not truncated, so manage interview length appropriately.
## Prompt Engineering Notes
Question Generation:
- Explicit instruction to return only the question text
- Prefixes like "Question:" are stripped by `AIClient`
- Emphasis on a conversational, open-ended style
Feedback Generation:
- Strict JSON format requirement
- Bullet points using the • character
- Constructive, actionable language
- Specific examples drawn from the interview transcript
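The prefix stripping mentioned above might look like the following. This is a hedged sketch of assumed `AIClient` behavior, not its actual source:

```python
import re

def strip_question_prefix(text: str) -> str:
    """Remove leading labels like "Question:" or "Q1:" that models
    sometimes prepend despite instructions not to."""
    return re.sub(
        r"^\s*(?:question|q)\s*\d*\s*[:.\-]\s*",
        "",
        text.strip(),
        flags=re.IGNORECASE,
    )

print(strip_question_prefix("Question: Tell me about your Python work?"))
# Tell me about your Python work?
```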