Model: openai/gpt-4o-mini
Purpose: Convert free-text onboarding responses into structured JSON with normalized badge phrases.
src/lib/profileStructuring.ts
```ts
import { generateText } from "ai";

const MODEL = "openai/gpt-4o-mini";

const { text } = await generateText({
  model: MODEL,
  prompt: `Convert this dating onboarding profile into structured JSON...`,
  maxOutputTokens: 700,
});
```
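Since the model returns `text` rather than a guaranteed JSON value, the caller has to parse defensively. A minimal sketch of such a parser (the helper name and fence-stripping behavior are assumptions, not part of the source):

```ts
// Hypothetical helper: strip an optional ```json fence from the model's
// text output, then attempt JSON.parse, returning null on failure so the
// caller can retry or fall back.
function parseProfileJson(raw: string): Record<string, unknown> | null {
  const stripped = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  try {
    return JSON.parse(stripped);
  } catch {
    return null;
  }
}
```

Returning `null` instead of throwing keeps the onboarding flow in control of retries when the completion is malformed.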
Model: mistral/mistral-medium
Purpose: Analyze natural language scheduling responses and generate date/time proposals.
src/lib/tpoScheduling.ts
```ts
import { generateText } from "ai";

const MODEL = "mistral/mistral-medium";

// Propose initial time slot
const { text: proposalText } = await generateText({
  model: MODEL,
  prompt: `You are scheduling a first date for a dating app...`,
  maxOutputTokens: 30,
});

// Analyze scheduling response
const { text: analysisText } = await generateText({
  model: MODEL,
  prompt: `You are analyzing a user's message in a dating app scheduling conversation...`,
  maxOutputTokens: 160,
});

// Suggest date spot
const { text: venueText } = await generateText({
  model: MODEL,
  prompt: `You are suggesting a first date venue for two people on a dating app...`,
  maxOutputTokens: 80,
});
```
Token Budgets:
- Time proposal: 30 tokens
- Response analysis: 160 tokens
- Venue suggestion: 80 tokens
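The 160-token analysis output still has to be mapped onto an application-level decision. A sketch of that mapping, assuming (this format is hypothetical, not from the source) the prompt asks the model to lead its answer with `ACCEPT`, `DECLINE`, or `COUNTER: <detail>`:

```ts
// Hypothetical intent type and parser for the scheduling analysis output.
type SchedulingIntent =
  | { kind: "accept" }
  | { kind: "decline" }
  | { kind: "counter"; detail: string }
  | { kind: "unclear" };

function parseSchedulingIntent(raw: string): SchedulingIntent {
  const first = raw.trim().split("\n")[0];
  const upper = first.toUpperCase();
  if (upper.startsWith("ACCEPT")) return { kind: "accept" };
  if (upper.startsWith("DECLINE")) return { kind: "decline" };
  if (upper.startsWith("COUNTER")) {
    // Keep the user's proposed alternative in its original casing.
    const detail = first.slice("COUNTER".length).replace(/^[:\s]+/, "");
    return { kind: "counter", detail };
  }
  return { kind: "unclear" };
}
```

An explicit `unclear` variant lets the conversation loop ask a follow-up question instead of guessing, which matters for the multi-turn negotiations this model handles.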
Why Mistral Medium?
- Superior natural language understanding for ambiguous responses
- Better at conversational context resolution
- Strong reasoning for multi-turn scheduling negotiations
Model: openai/gpt-4o-mini
Purpose: Extract structured data from driver's license images using vision capabilities.
src/lib/dlExtract.ts
```ts
import { generateText } from "ai";

const MODEL = "openai/gpt-4o-mini";

const { text } = await generateText({
  model: MODEL,
  messages: [
    {
      role: "system",
      content: `You extract structured data from US driver's license images...`,
    },
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Extract the full legal name in FIRST [MIDDLE] LAST order, plus date of birth and height from this driver's license.",
        },
        {
          type: "image",
          image: `data:image/jpeg;base64,${imageBase64}`,
        },
      ],
    },
  ],
  maxOutputTokens: 200,
});
```
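The `imageBase64` value has to come from somewhere upstream. A minimal sketch of building the expected data URL from raw JPEG bytes (the helper names are assumptions; only the `data:image/jpeg;base64,...` format appears in the source):

```ts
import { readFileSync } from "node:fs";

// Hypothetical helper: encode raw JPEG bytes as the data URL the
// vision message expects.
function jpegBufferToDataUrl(buf: Buffer): string {
  return `data:image/jpeg;base64,${buf.toString("base64")}`;
}

// Hypothetical convenience wrapper for an image already on disk.
function jpegFileToDataUrl(path: string): string {
  return jpegBufferToDataUrl(readFileSync(path));
}
```

If uploads arrive in other formats (PNG, HEIC), the MIME prefix would need to match the actual bytes rather than being hard-coded to `image/jpeg`.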
Model: openai/gpt-4o-mini
Purpose: Generate personalized transition messages between onboarding questions.
src/lib/tpoOnboardingAdlib.ts
```ts
import { generateText } from "ai";

const MODEL = "openai/gpt-4o-mini";

const { text } = await generateText({
  model: MODEL,
  prompt: `Generate a brief, natural acknowledgment of the user's answer...`,
  maxOutputTokens: 30,
});
```
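With only a 30-token budget, the completion can occasionally come back empty or cut off mid-sentence, so the output is worth cleaning before display. A sketch of such a cleanup step (the helper name and fallback string are assumptions, not from the source):

```ts
// Hypothetical post-processing for the acknowledgment text: strip stray
// wrapping quotes, substitute a canned fallback when the model returns
// nothing usable, and close a completion truncated by the token cap.
const FALLBACK_ACK = "Got it!";

function cleanAcknowledgment(raw: string): string {
  const text = raw.trim().replace(/^["']|["']$/g, "");
  if (text.length === 0) return FALLBACK_ACK;
  // A budget-truncated completion typically lacks terminal punctuation.
  return /[.!?…]$/.test(text) ? text : `${text}.`;
}
```

A static fallback keeps the onboarding flow moving even when the generation call fails outright.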