
Installation Guide

This comprehensive guide covers everything you need to install, configure, and deploy the ADK Utils Example project. Whether you’re setting up for local development, production deployment, or contributing to the project, this guide has you covered.

Prerequisites

Before installing ADK Utils Example, ensure your system meets these requirements:

Required Software

Node.js

ADK Utils Example requires Node.js 18+ for modern JavaScript features and Next.js 16 compatibility. Check your version:
node --version
Install or update:
  • Download from nodejs.org
  • Or use a version manager:
    # Using nvm
    nvm install 18
    nvm use 18
    
    # Using fnm
    fnm install 18
    fnm use 18
    
Node.js 20 LTS is recommended for the best performance and long-term support.
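If you want to enforce the version requirement from a script, a minimal TypeScript sketch (the helper name is illustrative, not part of the project) can parse `process.version` and compare the major component:

```typescript
// Illustrative check (not part of the project): parse the running Node.js
// version string and compare its major component against the 18+ requirement.
const REQUIRED_MAJOR = 18;

function majorVersion(version: string): number {
  // "v20.11.1" -> 20
  return Number(version.replace(/^v/, "").split(".")[0]);
}

const nodeOk = majorVersion(process.version) >= REQUIRED_MAJOR;
if (!nodeOk) {
  console.error(`Node.js ${process.version} is too old; install ${REQUIRED_MAJOR}+`);
}
```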
Package Manager

Choose your preferred package manager:

npm - Included with Node.js
npm --version
yarn - Fast, reliable package manager
npm install -g yarn
yarn --version
pnpm - Efficient disk space usage
npm install -g pnpm
pnpm --version
This guide uses npm in examples, but all commands work with yarn/pnpm equivalents.
Git

Required for cloning the repository and managing source control.
git --version
Install from git-scm.com if needed.

Optional Software

Ollama

For running AI models locally without cloud dependencies

Visual Studio Code

Recommended IDE with excellent TypeScript support

Gemini API Key

For using Google’s Gemini models instead of Ollama

Docker

For containerized deployment (optional)

Installation Steps

1. Clone the Repository

Clone the ADK Utils Example repository from GitHub:
git clone https://github.com/YagoLopez/adk-utils-example.git
cd adk-utils-example
Alternative: Use GitHub CLI
gh repo clone YagoLopez/adk-utils-example
cd adk-utils-example
Alternative: Download ZIP from the repository page on GitHub and extract it locally.
2. Install Dependencies

Install all required packages:
npm install
This installs:
  • Framework: Next.js 16.1.6, React 19.2.4
  • AI Core: @google/adk, @yagolopez/adk-utils, ai SDK
  • UI Libraries: Tailwind CSS 4, Radix UI, Lucide React
  • Dev Tools: TypeScript 5, ESLint, Prettier, Jest
The installation creates a node_modules directory (~200MB) and may take 2-3 minutes.
3. Verify Installation

Confirm all packages installed successfully:
npm list --depth=0
You should see all major dependencies:
adk-utils-example@<version>
├── @google/adk@<version>
├── @yagolopez/adk-utils@<version>
├── next@<version>
├── react@<version>
└── ...
Check for vulnerabilities:
npm audit

Environment Configuration

While the project works out-of-the-box with Ollama’s cloud endpoint, you may want to configure environment variables for different models or settings.

Using Ollama (Default)

The default configuration in app/agents/agent1.ts:67 uses Ollama’s cloud endpoint:
app/agents/agent1.ts
import { LlmAgent } from "@google/adk";
import { OllamaModel } from "@yagolopez/adk-utils";

export const rootAgent = new LlmAgent({
  model: new OllamaModel("gpt-oss:120b-cloud", "https://ollama.com"),
  // ... rest of configuration
});
No environment variables needed for cloud Ollama.

Using Local Ollama

For local AI inference with privacy and zero latency:
1. Install Ollama

Download and install from ollama.com, or on macOS via Homebrew:
brew install ollama
2. Pull a Model

Download a model for local inference:
# Lightweight model (recommended for testing)
ollama pull qwen3:0.6b

# Alternative models
ollama pull llama2
ollama pull mistral
ollama pull codellama
Model sizes vary from 400MB (qwen3:0.6b) to 7GB+ (llama2). Choose based on your hardware.
3. Start Ollama Server

Run the Ollama service:
ollama serve
Verify it’s running:
curl http://localhost:11434/api/version
4. Update Agent Configuration

Modify app/agents/agent1.ts:67 to use local Ollama:
app/agents/agent1.ts
export const rootAgent = new LlmAgent({
  model: new OllamaModel("qwen3:0.6b", "http://localhost:11434"),
  // ... rest of configuration
});

Using Google Gemini

To use Google’s Gemini models instead of Ollama:
1. Get API Key

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Create a new API key
  4. Copy the key (starts with AIza...)
2. Create Environment File

Create a .env.local file in the project root:
touch .env.local
Add your Gemini API key:
.env.local
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
Never commit .env.local to version control. It’s already in .gitignore.
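As an optional startup sanity check, a sketch like the following (the helper name is hypothetical, not part of the project) can catch a missing or malformed key early, relying on the fact noted above that Google AI Studio keys start with AIza:

```typescript
// Hypothetical helper (not part of the project): sanity-check the Gemini key
// before starting the app. Google AI Studio keys currently start with "AIza".
function looksLikeGeminiKey(key: string | undefined): boolean {
  if (!key) return false; // unset or empty -> not configured
  return key.startsWith("AIza") && key.length > 10;
}

// Example: read from the environment the way Next.js exposes it server-side.
const geminiKey = process.env.GOOGLE_GENERATIVE_AI_API_KEY;
if (!looksLikeGeminiKey(geminiKey)) {
  console.warn("GOOGLE_GENERATIVE_AI_API_KEY is missing or malformed");
}
```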
3. Update Agent Configuration

Modify app/agents/agent1.ts:62 to use Gemini:
app/agents/agent1.ts
export const rootAgent = new LlmAgent({
  model: 'gemini-2.5-flash',
  // ... rest of configuration
});
Available Gemini models:
  • gemini-2.5-flash - Fast, cost-effective
  • gemini-2.5-pro - Advanced reasoning
  • gemini-1.5-flash - Previous generation

Configuration Options

Agent Configuration

Customize your agent in app/agents/agent1.ts:
app/agents/agent1.ts
export const rootAgent = new LlmAgent({
  name: "agent1",
  model: new OllamaModel("qwen3:0.6b", "http://localhost:11434"),
  description: "Agent with three function tools...",
  instruction: `You are a helpful assistant.
                If the user asks for the time, use 'get_current_time'.
                If the user asks for a diagram, use 'create_mermaid_diagram'.
                If the user asks to view source code, use 'view_source_code'.`,
  tools: [getCurrentTime, createMermaidDiagram, viewSourceCode],
});
Configurable Options:
  • name - Agent identifier
  • model - AI model (Gemini string or OllamaModel instance)
  • description - Agent purpose and capabilities
  • instruction - System prompt and behavior guidelines
  • tools - Array of FunctionTool instances
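The options above imply roughly the following configuration shape. The type names below are an illustrative sketch, not the real @google/adk types:

```typescript
// Sketch of the option shape implied above (names hypothetical): the model can
// be a Gemini model id string or an Ollama model descriptor.
interface OllamaModelLike {
  modelName: string; // e.g. "qwen3:0.6b"
  baseUrl: string;   // e.g. "http://localhost:11434"
}

type ModelOption = string | OllamaModelLike;

interface AgentConfigSketch {
  name: string;        // agent identifier
  model: ModelOption;  // Gemini string or Ollama descriptor
  description: string; // agent purpose and capabilities
  instruction: string; // system prompt and behavior guidelines
  tools: unknown[];    // FunctionTool instances in the real project
}

const example: AgentConfigSketch = {
  name: "agent1",
  model: "gemini-2.5-flash",
  description: "Demo agent",
  instruction: "You are a helpful assistant.",
  tools: [],
};
```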

Rate Limiting

Adjust rate limits in lib/constants.ts:26:
lib/constants.ts
export const LIMIT = 20; // Messages per window
export const ONE_HOUR_IN_MS = 60 * 60 * 1000; // Time window
Modify for your needs:
  • Development: Set LIMIT to 100 or higher
  • Production: Keep at 20-50 based on your API quotas
  • No limit: Remove rate limiting from app/page.tsx:29
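The two constants describe a fixed-window limiter: up to LIMIT messages per one-hour window, with the counter resetting when the window elapses. A minimal sketch of that logic (an illustration, not the project's actual app/page.tsx implementation):

```typescript
// Sketch of a fixed-window rate limiter matching the constants above.
const LIMIT = 20;                      // messages per window
const ONE_HOUR_IN_MS = 60 * 60 * 1000; // window length

interface WindowState {
  windowStart: number; // epoch ms when the current window opened
  count: number;       // messages sent in the current window
}

function tryConsume(
  state: WindowState,
  now: number,
): { allowed: boolean; state: WindowState } {
  // Reset the counter once the window has elapsed.
  if (now - state.windowStart >= ONE_HOUR_IN_MS) {
    state = { windowStart: now, count: 0 };
  }
  if (state.count >= LIMIT) {
    return { allowed: false, state };
  }
  return { allowed: true, state: { ...state, count: state.count + 1 } };
}
```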

Custom Tools

Add your own function tools in app/agents/agent1.ts:
import { FunctionTool } from "@google/adk";
import { z } from "zod";

const myCustomTool = new FunctionTool({
  name: "my_tool",
  description: "Description of what this tool does",
  parameters: z.object({
    param1: z.string().describe("Parameter description"),
    param2: z.number().optional(),
  }),
  execute: async ({ param1, param2 }) => {
    // Your tool logic here
    return {
      status: "success",
      report: `Result: ${param1}`,
    };
  },
});

// Add to agent tools array
export const rootAgent = new LlmAgent({
  // ... other config
  tools: [getCurrentTime, createMermaidDiagram, viewSourceCode, myCustomTool],
});
Use Zod schemas for type-safe parameter validation. The ADK automatically validates inputs before calling your tool.

Development Workflow

Available Scripts

The project includes several npm scripts defined in package.json:26:
# Start Next.js dev server with HMR
npm run dev
# Runs on http://localhost:3000

Development Tips

Next.js automatically reloads your application when you save changes. No need to restart the server for:
  • Component changes
  • Style updates
  • API route modifications
Restart required for:
  • Changes to next.config.ts
  • Environment variable updates
  • Package installations
Run TypeScript compiler in watch mode:
npx tsc --noEmit --watch
This catches type errors without building.
Add breakpoints in VS Code:
  1. Add a .vscode/launch.json file
  2. Set breakpoints in your code
  3. Press F5 to start debugging
Or use Chrome DevTools:
NODE_OPTIONS='--inspect' npm run dev
Then open chrome://inspect
Next.js supports multiple env files:
  • .env.local - Local development (not committed)
  • .env.development - Development defaults
  • .env.production - Production settings
Load order: .env.local > .env.development > .env
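The precedence rule amounts to a first-match-wins merge over the files in priority order (an illustrative helper, not Next.js internals):

```typescript
// Sketch of the load order above: earlier entries in the array are higher
// priority, so the first value seen for a key wins.
type EnvMap = Record<string, string>;

function resolveEnv(filesInPriorityOrder: EnvMap[]): EnvMap {
  const resolved: EnvMap = {};
  // Walk from highest to lowest priority; keep the first value seen per key.
  for (const file of filesInPriorityOrder) {
    for (const [key, value] of Object.entries(file)) {
      if (!(key in resolved)) resolved[key] = value;
    }
  }
  return resolved;
}

// .env.local beats .env.development beats .env, as in the load order above.
const env = resolveEnv([
  { API_URL: "http://localhost:3000" },               // .env.local
  { API_URL: "https://dev.example.com", DEBUG: "1" }, // .env.development
  { API_URL: "https://example.com", REGION: "eu" },   // .env
]);
```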

Troubleshooting

Error: Port 3000 is already in use

Solutions:
# Use a different port
PORT=3001 npm run dev

# Kill process on port 3000 (macOS/Linux)
lsof -ti:3000 | xargs kill -9

# Kill process on port 3000 (Windows)
netstat -ano | findstr :3000
taskkill /PID <PID> /F
Error: Cannot find module '@/components/...'

Solutions:
  1. Check tsconfig.json has correct path mapping
  2. Restart TypeScript server in VS Code (Cmd+Shift+P → “Restart TS Server”)
  3. Clear Next.js cache:
    rm -rf .next
    npm run dev
    
Error: Failed to connect to Ollama at http://localhost:11434

Diagnosis:
# Check if Ollama is running
curl http://localhost:11434/api/version

# Check if model is available
ollama list
Solutions:
  1. Start Ollama: ollama serve
  2. Pull the model: ollama pull qwen3:0.6b
  3. Check firewall isn’t blocking port 11434
  4. Try cloud endpoint instead:
    model: new OllamaModel("gpt-oss:120b-cloud", "https://ollama.com")
    
Error: API key not valid

Solutions:
  1. Verify .env.local exists and has correct key
  2. Restart dev server after adding env vars
  3. Check key at AI Studio
  4. Ensure billing is enabled for your Google Cloud project
Error: Quota exceeded
  • You’ve hit the free tier limit
  • Upgrade to paid plan or switch to Ollama
Error: Type errors during build

Solutions:
# Check for type errors
npx tsc --noEmit

# Update dependencies
npm update

# Clear cache and rebuild
rm -rf .next node_modules
npm install
npm run build
Error: Rate limit exceeded alert

Solutions:
  1. Wait for the rate limit window to reset (1 hour by default)
  2. Increase limit in lib/constants.ts:26
  3. Clear browser local storage to reset client-side counter
  4. For development, disable rate limiting in app/page.tsx:29
Dependency Updates

Check for updates:
npm outdated
Update safely:
# Update patch versions only
npm update

# Update to latest (may break)
npm install @google/adk@latest
npm install next@latest
Major version updates may introduce breaking changes. Always check release notes.

Production Deployment

Build and Deploy

1. Create Production Build

npm run build
This creates an optimized .next directory with:
  • Minified JavaScript and CSS
  • Optimized images
  • Static and server-side pages
2. Test Production Build Locally

npm run start
Visit http://localhost:3000 to verify everything works.
3. Deploy to Vercel (Recommended)

Vercel is optimized for Next.js:
# Install Vercel CLI
npm install -g vercel

# Deploy
vercel
Or connect your GitHub repository to Vercel for automatic deployments.

Add environment variables in the Vercel Dashboard:
  • GOOGLE_GENERATIVE_AI_API_KEY (if using Gemini)

Alternative Deployment Options

Netlify

Supports Next.js with automatic deployments

Railway

Simple deployment with built-in databases

AWS Amplify

Full AWS integration and scaling

Testing

Run tests for your application

Next Steps

Architecture Overview

Understand how components interact

Agent Tools

Learn about built-in agent tools

API Reference

Explore available APIs and utilities

Configuration

Configure your application

Getting Help

If you encounter issues not covered in this guide:
  1. Check GitHub Issues: github.com/YagoLopez/adk-utils-example/issues
  2. Create a New Issue: Include error messages, configuration, and steps to reproduce
  3. Join Discussions: Share your use case and get community help
  4. Review Documentation:
When reporting issues, include your Node.js version (node --version), package manager, and any relevant error logs.
