Overview
Teak’s AI pipeline automatically enriches cards with metadata, making them easier to search, organize, and rediscover. All processing happens asynchronously after card creation.
Processing Workflow
When you create a card, it enters an automated AI workflow:
The workflow is defined in packages/convex/workflows/cardProcessing.ts:43-298.
Step 1: Classification
Determines the card type if not explicitly provided.
Check Explicit Type
If type was provided at creation, skip classification:

```typescript
{
  content: "My idea",
  type: "text" // Explicit type
}
// Classification: { type: "text", confidence: 1.0 }
```
Run AI Classifier
For ambiguous content, an AI model analyzes the content:

```typescript
{
  content: "Check out this amazing tutorial video"
  // No type specified
}
// AI determines: { type: "link", confidence: 0.85 }
```
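Putting the two branches together, the decision logic might look like the sketch below. Here `classifyCard` and `runAiClassifier` are hypothetical names for illustration, not Teak's actual API; the real workflow lives in cardProcessing.ts.

```typescript
// Hypothetical sketch of the classification step described above.
type CardType = "text" | "link" | "image" | "video" | "audio" | "document";

interface Classification {
  type: CardType;
  confidence: number;
}

async function classifyCard(
  input: { content: string; type?: CardType },
  // Stand-in for the real AI model call, which is not shown in this doc.
  runAiClassifier: (content: string) => Promise<Classification>
): Promise<Classification> {
  // An explicit type short-circuits the AI call with full confidence.
  if (input.type) {
    return { type: input.type, confidence: 1.0 };
  }
  return runAiClassifier(input.content);
}
```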
Step 2: Link Processing
For link cards, Teak fetches rich metadata and categorizes the content.
2a. Metadata Fetching
Fetches Open Graph data, page title/description, and images:
```typescript
// Link card metadata structure
{
  metadata: {
    linkPreview: {
      title: "Understanding React Hooks",
      description: "A comprehensive guide to React Hooks...",
      imageUrl: "https://example.com/og-image.jpg",
      imageStorageId: "k1abc...",
      screenshotStorageId: "k2def...",
      status: "success"
    }
  },
  metadataTitle: "Understanding React Hooks",
  metadataDescription: "A comprehensive guide to React Hooks...",
  metadataStatus: "completed"
}
```
Metadata extraction uses the workflow in packages/convex/workflows/steps/linkMetadata/fetchMetadata.ts.
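As a rough illustration of what such a step does, here is a naive Open Graph extractor over raw HTML. `extractOpenGraph` is a hypothetical helper, not Teak's API; the real fetchMetadata.ts is more robust (and also captures screenshots, which this sketch does not).

```typescript
// Naive regex-based Open Graph extraction, for illustration only.
// Assumes `property` appears before `content` inside each meta tag.
function extractOpenGraph(html: string): {
  title?: string;
  description?: string;
  imageUrl?: string;
} {
  const meta = (property: string): string | undefined => {
    const re = new RegExp(
      `<meta[^>]+property=["']og:${property}["'][^>]+content=["']([^"']*)["']`,
      "i"
    );
    return re.exec(html)?.[1];
  };
  return {
    title: meta("title"),
    description: meta("description"),
    imageUrl: meta("image"),
  };
}
```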
2b. Categorization
Classifies links into 15 categories:
| Category | Description |
|---|---|
| book | Books, eBooks |
| movie | Movies, films |
| tv | TV shows, series |
| article | Long-form articles |
| news | News briefs, updates |
| podcast | Podcast episodes |
| music | Music tracks, albums |
| product | Shopping pages |
| recipe | Recipes, cooking guides |
| course | Courses, tutorials |
| research | Research papers |
| event | Events, webinars |
| software | Software, apps, GitHub |
| design_portfolio | Design portfolios |
| other | Miscellaneous |
```typescript
// Book link
{
  url: "https://amazon.com/book/...",
  metadata: {
    linkCategory: {
      category: "book",
      confidence: 0.95,
      detectedProvider: "amazon",
      sourceUrl: "https://amazon.com/..."
    }
  }
}

// GitHub repo
{
  url: "https://github.com/facebook/react",
  metadata: {
    linkCategory: {
      category: "software",
      confidence: 1.0,
      detectedProvider: "github",
      sourceUrl: "https://github.com/..."
    }
  }
}
```
Categories can include structured data:

```typescript
{
  metadata: {
    linkCategory: {
      category: "product",
      confidence: 0.9,
      facts: [
        { label: "Price", value: "$49.99" },
        { label: "Rating", value: "4.5/5" },
        { label: "Reviews", value: "1,234" }
      ],
      imageUrl: "https://...",
      raw: { /* provider-specific data */ }
    }
  }
}
```
Categories are defined in packages/convex/shared/linkCategories.ts:1-116.
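The `detectedProvider` values in the examples above suggest hostname-based detection. A minimal sketch with a made-up mapping table follows; the real category list and detection rules live in linkCategories.ts and are certainly richer than this.

```typescript
// Illustrative provider-to-category mapping; entries are assumptions.
const PROVIDER_CATEGORIES: Record<string, { provider: string; category: string }> = {
  "github.com": { provider: "github", category: "software" },
  "dribbble.com": { provider: "dribbble", category: "design_portfolio" },
};

// Look up a provider by the URL's hostname, ignoring a leading "www.".
function detectProvider(
  url: string
): { provider: string; category: string } | undefined {
  const host = new URL(url).hostname.replace(/^www\./, "");
  return PROVIDER_CATEGORIES[host];
}
```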
Step 3: AI Metadata
Generates tags, summaries, and transcripts based on card type.
AI Tags
Automatically extracts relevant topics and keywords:
```typescript
{
  aiTags: [
    "react",
    "hooks",
    "javascript",
    "frontend",
    "web-development"
  ]
}
```
AI tags are searchable via the search_ai_tags index.
AI Summary
Generates concise summaries for:
- Links: Summarizes page content
- Videos: Summarizes visual content
- Documents: Extracts key points
- Audio: Summarizes spoken content
- Images: Describes visual content
```typescript
{
  aiSummary: "A comprehensive guide covering useState, useEffect, and custom hooks in React. Includes practical examples and best practices for functional components."
}
```
Summaries are searchable via the search_ai_summary index.
AI Transcript
Generates transcripts for audio and video content:
```typescript
{
  aiTranscript: "Welcome to this tutorial on React Hooks. Today we'll cover the basics of useState and useEffect...",
  type: "audio"
}
```
Transcripts make audio/video content fully searchable via the search_ai_transcript index.
Step 4: Renderables
Generates thumbnails and visual assets.
Thumbnail Generation
Applies to:
- Videos: Extracts first frame
- Documents: Renders first page
- SVG Images: Rasterizes for palette extraction
```typescript
{
  thumbnailId: "k3ghi...",
  thumbnailUrl: "https://..." // Generated by storage.getUrl()
}
```
For SVGs, thumbnails enable color palette extraction which would otherwise fail on vector graphics.
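As a self-contained illustration of what palette extraction involves (Teak's actual implementation is not shown in this doc), here is a frequency count over raw RGBA pixels with coarse quantization to group near-identical colors:

```typescript
// Return the topN most frequent colors in an RGBA pixel buffer.
// Quantizes each channel to 8-unit buckets so near-identical colors merge.
function dominantColors(pixels: Uint8ClampedArray, topN: number): string[] {
  const counts = new Map<string, number>();
  for (let i = 0; i + 3 < pixels.length; i += 4) {
    const key = [pixels[i], pixels[i + 1], pixels[i + 2]]
      .map((c) => Math.floor(c / 8) * 8)
      .join(",");
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([rgb]) => `rgb(${rgb})`);
}
```

This is exactly why SVGs need rasterization first: there is no pixel buffer to count until the vector graphic is rendered.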
Processing Status
Track AI processing progress:
```typescript
interface ProcessingStatus {
  classify: {
    status: "pending" | "running" | "completed" | "failed";
    startedAt?: number;
    completedAt?: number;
    confidence?: number;
  };
  categorize?: {
    status: "pending" | "running" | "completed" | "failed";
    startedAt?: number;
    completedAt?: number;
  };
  metadata: {
    status: "pending" | "running" | "completed" | "failed";
    startedAt?: number;
    completedAt?: number;
  };
  renderables?: {
    status: "pending" | "running" | "completed" | "failed";
    startedAt?: number;
    completedAt?: number;
  };
}
```
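The per-step shape makes it easy to derive a single overall state for a progress indicator. The aggregation rule below (failed wins, then running, then pending) is an illustrative assumption, not logic from the Teak codebase:

```typescript
type StepStatus = "pending" | "running" | "completed" | "failed";

interface Step {
  status: StepStatus;
  startedAt?: number;
  completedAt?: number;
}

interface ProcessingStatus {
  classify: Step;
  categorize?: Step;
  metadata: Step;
  renderables?: Step;
}

// Assumed precedence: any failure wins, then any running step, then any
// pending step; otherwise everything finished.
function overallStatus(ps: ProcessingStatus): StepStatus {
  const steps = [ps.classify, ps.categorize, ps.metadata, ps.renderables]
    .filter((s): s is Step => s !== undefined);
  if (steps.some((s) => s.status === "failed")) return "failed";
  if (steps.some((s) => s.status === "running")) return "running";
  if (steps.some((s) => s.status === "pending")) return "pending";
  return "completed";
}
```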
Querying Status
```typescript
const card = useQuery(api.cards.getCard, { cardId });

if (card?.processingStatus?.metadata?.status === "running") {
  // Show loading indicator
} else if (card?.processingStatus?.metadata?.status === "completed") {
  // Display AI tags and summary
}
```
Retry Logic
Teak uses exponential backoff for resilient processing:
| Step | Max Attempts | Initial Backoff | Base |
|---|---|---|---|
| Classification | 8 | 400ms | 1.8 |
| Link Metadata | 5 | 5000ms | 2.0 |
| Categorization | 5 | 1200ms | 1.6 |
| AI Metadata | 8 | 400ms | 1.8 |
Retry configuration is in packages/convex/workflows/cardProcessing.ts:19-33.
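The schedule implied by the table can be sketched as a simple formula. Whether cardProcessing.ts adds jitter or caps the delay is not shown in this doc, so treat this as an approximation:

```typescript
// Exponential backoff: delay for attempt n (1-indexed) is
// initialBackoffMs * base^(n - 1).
function backoffDelayMs(
  attempt: number,
  initialBackoffMs: number,
  base: number
): number {
  return Math.round(initialBackoffMs * Math.pow(base, attempt - 1));
}
```

For example, with the Classification settings (400ms, base 1.8), the second attempt waits 720ms; with Link Metadata (5000ms, base 2.0), the third attempt waits 20 seconds.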
AI Provider Configuration
Teak’s AI features use OpenAI models (configurable via environment variables):
```bash
# .env.local
OPENAI_API_KEY=sk-...
```
AI features require valid API keys. If keys are missing, AI steps fail gracefully rather than blocking card creation.
Parallel Processing
For most cards, metadata and renderables run in parallel:

```typescript
// From cardProcessing.ts:241-260
const [metadataResult, renderablesResult] = await Promise.all([
  step.runAction(metadata.generate, { cardId }),
  step.runAction(renderables.generate, { cardId })
]);
```
Sequential for Videos
Videos need thumbnails before AI analysis:

```typescript
// Generate thumbnail first
const renderablesResult = await step.runAction(
  renderables.generate,
  { cardId }
);

// Then run AI metadata (uses thumbnail)
const metadataResult = await step.runAction(
  metadata.generate,
  { cardId }
);
```
Palette Extraction
Images get palette extraction in parallel with other processing:

```typescript
// Runs alongside metadata generation
const palettePromise = step.runAction(
  extractPaletteFromImage,
  { cardId }
);
```
Disabling AI Features
Currently, AI processing is automatic and cannot be disabled per-card. This may change in future versions.
If you want to hide AI output until it is ready, check processingStatus and only show AI fields when status === "completed".
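A small helper along those lines is sketched below. The field names follow the examples in this document, but the helper itself is hypothetical, not part of Teak's API:

```typescript
interface CardAiFields {
  aiTags?: string[];
  aiSummary?: string;
}

interface CardLike extends CardAiFields {
  processingStatus?: { metadata?: { status?: string } };
}

// Expose AI-generated fields only once the metadata step has completed.
function visibleAiFields(card: CardLike): CardAiFields {
  if (card.processingStatus?.metadata?.status !== "completed") return {};
  return { aiTags: card.aiTags, aiSummary: card.aiSummary };
}
```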
Best Practices
Provide Context in Notes
The notes field helps AI generate better tags and summaries:

```typescript
await createCard({
  content: "Design inspiration",
  url: "https://dribbble.com/shots/...",
  notes: "Minimal UI design for a productivity app, focus on the sidebar navigation"
});
```
Use User Tags to Supplement AI
Combine manual tags with AI tags:

```typescript
{
  tags: ["work", "urgent"], // Manual organization
  aiTags: ["react", "typescript", "frontend"] // AI-generated
}
```
Monitor Processing Status
Show loading states while AI processes:

```tsx
{card.processingStatus?.metadata?.status === "running" && (
  <Spinner>Generating AI tags...</Spinner>
)}
```
Source Reference
- Main workflow: packages/convex/workflows/cardProcessing.ts:43-298
- Link metadata: packages/convex/linkMetadata.ts
- Link categories: packages/convex/shared/linkCategories.ts
- Processing status: packages/convex/card/processingStatus.ts