Overview

Toots uses Google’s Gemini AI model to power intelligent features:
  • Ticket generation - Analyze project descriptions and generate actionable tickets
  • Project chat - Interactive assistant that asks clarifying questions
  • Smart recommendations - Priority, effort, and dependency suggestions

Getting a Gemini API key

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Click “Get API key” or “Create API key”
  4. Copy your API key
Keep your API key secure. Never commit it to version control or share it publicly.

Configuration

Add your Gemini API key to apps/web/.env:
apps/web/.env
```
GOOGLE_GENERATIVE_AI_API_KEY="your-api-key-here"
```

AI model

Toots uses the gemini-2.5-flash model:
```typescript
import { google } from "@ai-sdk/google";

const model = google("gemini-2.5-flash");
```
This model provides:
  • Fast response times
  • High-quality text generation
  • Tool calling and function execution
  • Structured output support

AI features

Ticket generation

The AI generates structured tickets based on project descriptions:
```typescript
type Ticket = {
  title: string;
  type: "Story" | "Task" | "Bug" | "Epic";
  priority: "P0" | "P1" | "P2" | "P3";
  description: string;
  acceptanceCriteria: string[];
  estimatedEffort: string;
  dependencies: string[];
  labels: string[];
};
```
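For illustration, a ticket conforming to this shape might look like the following (the field values here are invented for the example):

```typescript
// Example only: a plausible generated ticket for an onboarding project.
const exampleTicket = {
  title: "Design onboarding welcome screen",
  type: "Task",
  priority: "P1",
  description: "Create the first screen new users see after signup.",
  acceptanceCriteria: [
    "Screen matches the approved mockup",
    "Copy reviewed by product marketing",
  ],
  estimatedEffort: "2 days",
  dependencies: [],
  labels: ["design", "onboarding"],
};
```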

Project chat assistant

The AI chat assistant helps refine project scope:
  1. Clarifying questions - Asks 2-4 targeted questions about:
    • Goals and success criteria
    • Stakeholders and audience
    • Key deliverables and milestones
    • Timeline and constraints
    • Dependencies
  2. Context gathering - Remembers conversation history and project details
  3. Ticket refinement - Adds, updates, or removes tickets based on feedback

AI tools

The chat assistant has access to several tools:
  • generateTickets - Create new tickets for the project
  • listTickets - Retrieve existing tickets
  • updateTickets - Modify ticket properties
  • removeTickets - Delete tickets by ID
  • setClarifyingQuestions - Present structured questions to users
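As a rough sketch of that tool surface, the handlers below operate on a simplified in-memory ticket store; the names and types here are hypothetical, and in Toots the equivalent functions are exposed to the model through the AI SDK's tool-calling support rather than called directly.

```typescript
// Simplified, hypothetical handlers mirroring the tool surface above,
// operating on an in-memory ticket store.
type StoredTicket = { id: string; title: string; priority: string };

function createInMemoryTools(store: Map<string, StoredTicket>) {
  let nextId = 1;
  return {
    // generateTickets: create new tickets and persist them in the store.
    generateTickets(titles: string[]): StoredTicket[] {
      return titles.map((title) => {
        const ticket = { id: String(nextId++), title, priority: "P2" };
        store.set(ticket.id, ticket);
        return ticket;
      });
    },
    // listTickets: retrieve all existing tickets.
    listTickets(): StoredTicket[] {
      return [...store.values()];
    },
    // updateTickets: merge a partial patch into an existing ticket.
    updateTickets(id: string, patch: Partial<StoredTicket>): void {
      const existing = store.get(id);
      if (existing) store.set(id, { ...existing, ...patch });
    },
    // removeTickets: delete tickets by ID.
    removeTickets(ids: string[]): void {
      ids.forEach((id) => store.delete(id));
    },
  };
}
```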

Implementation details

Vercel AI SDK

Toots uses the Vercel AI SDK for AI integration:
```typescript
import { streamText } from "ai";
import { google } from "@ai-sdk/google";

const result = streamText({
  model: google("gemini-2.5-flash"),
  system: systemPrompt,
  messages: conversationHistory,
  tools: aiTools,
});
```
Key features:
  • Streaming responses - Real-time token streaming
  • Tool calling - Execute functions from AI responses
  • Message history - Maintain conversation context
  • UI components - React hooks for chat interfaces

API endpoint

AI chat is handled by the /api/chat route:
apps/web/app/api/chat/route.ts
```typescript
import { streamText } from "ai";
import { google } from "@ai-sdk/google";
// systemPrompt and createTools are app-local modules, imported from
// elsewhere in the codebase.

export async function POST(req: Request) {
  if (!process.env.GOOGLE_GENERATIVE_AI_API_KEY) {
    return new Response(
      JSON.stringify({
        error: "Google Generative AI API key is missing."
      }),
      { status: 503 }
    );
  }

  const { messages, project } = await req.json();

  const result = streamText({
    model: google("gemini-2.5-flash"),
    system: systemPrompt,
    messages,
    tools: createTools(project?.id),
  });

  return result.toUIMessageStreamResponse();
}
```

System prompts

The AI uses different system prompts depending on context.

Initial project creation:
  • Analyzes new project ideas
  • Asks clarifying questions
  • Generates initial tickets

Project chat:
  • References existing project context
  • Avoids asking the user to re-describe the project
  • Focuses on refinement and iteration

The system prompts are designed to handle a variety of project types (product launches, marketing campaigns, events, process changes), not just software projects.
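A condensed, hypothetical version of a project-chat system prompt might read like this (the real prompts live in the Toots source and are more detailed):

```typescript
// Illustrative only: a condensed version of the kind of prompt used
// for project chat.
const projectChatPrompt = `
You are a project planning assistant. The user has already described
their project, so do not ask them to re-describe it. Ask at most 2-4
clarifying questions about goals, stakeholders, deliverables, timeline,
and dependencies, then refine the existing tickets. Projects may be
product launches, marketing campaigns, events, or process changes, not
just software.
`.trim();
```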

Usage limits and pricing

Gemini API has usage limits and pricing:
  • Free tier - Available, with per-minute request limits that vary by model
  • Rate limits - May vary by region and model
  • Pricing - Check Google AI pricing for current rates
Monitor your API usage to avoid unexpected charges. Set up billing alerts in Google Cloud Console.

Error handling

Toots handles AI errors gracefully:

Missing API key

If the API key is not configured:
```json
{
  "error": "Google Generative AI API key is missing. Set GOOGLE_GENERATIVE_AI_API_KEY in your .env file.",
  "status": 503
}
```

API failures

If the Gemini API fails:
  • Error messages are logged to the console
  • Users see a friendly error message
  • Chat history is preserved
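One way to sketch that pattern (names here are illustrative, not the actual Toots implementation): inject the request function so the failure path is easy to exercise, and only commit a message to chat history once the request succeeds.

```typescript
// Hypothetical sketch of graceful AI error handling on the client.
type ChatResult =
  | { ok: true; reply: string }
  | { ok: false; error: string };

async function sendChatMessage(
  history: string[],
  message: string,
  request: (messages: string[]) => Promise<string>
): Promise<ChatResult> {
  try {
    const reply = await request([...history, message]);
    // Only commit to history once the request has succeeded.
    history.push(message, reply);
    return { ok: true, reply };
  } catch (err) {
    // Log the underlying error, show the user a friendly message,
    // and leave the history untouched so they can retry.
    console.error("AI request failed:", err);
    return { ok: false, error: "Something went wrong. Please try again." };
  }
}
```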

Testing AI features

Development workflow

  1. Set your API key in .env
  2. Start the development server: pnpm dev
  3. Create a new project or open an existing one
  4. Test the chat assistant:
    • Ask questions
    • Request ticket generation
    • Try updating and removing tickets

Example prompts

Initial project:

```
I want to launch a new onboarding flow for our SaaS product
```

Clarifying responses:

```
The goal is to reduce time-to-value. Target audience is small business owners. Timeline is 6 weeks.
```

Refinement:

```
Add a ticket for user research interviews
Bump the "design mockups" ticket to P0
Remove the analytics dashboard ticket
```

Production considerations

API key security

  • Store API keys in environment variables
  • Use different keys for staging and production
  • Rotate keys periodically
  • Never log or expose keys in client-side code

Rate limiting

Implement rate limiting to prevent abuse:
  • Limit requests per user
  • Add request queuing
  • Implement exponential backoff for retries
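A minimal sketch of the backoff piece, assuming a doubling delay capped at a maximum (the values are illustrative, not Toots defaults):

```typescript
// Delay doubles each attempt, capped at maxDelayMs.
function backoffDelay(attempt: number, baseMs = 500, maxDelayMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}

// Retry a flaky async call (e.g. a Gemini request) with backoff between
// attempts, rethrowing the last error once attempts are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 4
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```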

Monitoring

Track AI performance metrics:
  • Response times
  • Error rates
  • Token usage
  • User satisfaction
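As a rough illustration, per-request metrics can be accumulated and summarized before being shipped to whatever monitoring backend you use (the types and names here are hypothetical):

```typescript
// Hypothetical per-request metric collected around each AI call.
type RequestMetric = { durationMs: number; tokens: number; failed: boolean };

// Summarize a batch of metrics into the figures listed above.
function summarize(metrics: RequestMetric[]) {
  const total = metrics.length || 1; // avoid division by zero
  return {
    requests: metrics.length,
    errorRate: metrics.filter((m) => m.failed).length / total,
    avgDurationMs: metrics.reduce((sum, m) => sum + m.durationMs, 0) / total,
    totalTokens: metrics.reduce((sum, m) => sum + m.tokens, 0),
  };
}
```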

Future AI features

Planned enhancements:
  • Insight extraction - Analyze customer feedback and interviews
  • Evidence-based recommendations - Suggest features backed by data
  • Semantic search - Find relevant information using embeddings
  • Multi-source synthesis - Combine insights from multiple data sources
See the product discovery roadmap for details on upcoming AI features.

Troubleshooting

API key invalid

If you see authentication errors:
  1. Verify your API key is correct
  2. Check the key hasn’t been revoked
  3. Ensure you’re using a valid Google account

Slow responses

If AI responses are slow:
  1. Check your internet connection
  2. Verify you’re not hitting rate limits
  3. Consider using a different model or parameters

Unexpected outputs

If the AI generates unexpected content:
  1. Review your system prompts
  2. Check conversation history for context issues
  3. Add more specific instructions or constraints

Next steps

After setting up AI integration:
  1. Start the development server
  2. Create your first project and test ticket generation
  3. Explore the project chat assistant
  4. Review generated tickets on the Kanban board
