
Quickstart Guide

Get the ADK Utils Example chat application up and running in less than 5 minutes. This guide will walk you through cloning the repository, installing dependencies, and starting your first AI conversation.
This quickstart uses the Ollama model for local AI inference. You can switch to Gemini by following the configuration steps in the Installation guide.

Prerequisites

Before you begin, ensure you have:
  • Node.js 18+ installed on your system
  • npm, yarn, or pnpm package manager
  • (Optional) Ollama installed for local model support
If you plan to use Google’s Gemini models, you’ll need a Gemini API key. See the Installation guide for detailed setup.

Step-by-Step Setup

Step 1: Clone the Repository

First, clone the ADK Utils Example repository to your local machine:
git clone https://github.com/YagoLopez/adk-utils-example.git
cd adk-utils-example
This will download the complete project with all source code, components, and configuration files.
Step 2: Install Dependencies

Install all required packages using your preferred package manager:
npm install
This will install all dependencies including:
  • Next.js 16 and React 19
  • Google ADK and ADK Utils
  • AI SDK and related packages
  • UI components and styling libraries
The installation might take a few minutes depending on your internet connection.
Step 3: Configure the Agent Model

The project comes pre-configured with an Ollama model. Open app/agents/agent1.ts to see the agent configuration:
app/agents/agent1.ts
import { LlmAgent } from "@google/adk";
import { OllamaModel } from "@yagolopez/adk-utils";
// getCurrentTime, createMermaidDiagram, and viewSourceCode are function
// tools defined elsewhere in the project; import them before use.

export const rootAgent = new LlmAgent({
  name: "agent1",
  // Choose your model:
  // model: 'gemini-2.5-flash',  // For Gemini (requires API key)
  model: new OllamaModel("gpt-oss:120b-cloud", "https://ollama.com"),
  description: "Agent with function tools for time, diagrams, and code",
  instruction: `You are a helpful assistant...`,
  tools: [getCurrentTime, createMermaidDiagram, viewSourceCode],
});
The default configuration uses Ollama’s cloud endpoint. For local models, change to:
model: new OllamaModel("qwen3:0.6b", "http://localhost:11434")
Make sure Ollama is running locally if you use http://localhost:11434. Install Ollama from ollama.com and run ollama pull qwen3:0.6b.
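If you switch endpoints often, one option is to read the endpoint from an environment variable instead of editing the agent file each time. This is a sketch, not part of the project; `OLLAMA_HOST` is an assumed variable name:

```typescript
// Sketch: resolve the Ollama endpoint from the environment so cloud vs.
// local is a deployment choice rather than a code edit.
// OLLAMA_HOST is an assumed variable name, not defined by the project.
export function resolveOllamaEndpoint(
  env: Record<string, string | undefined> = process.env
): string {
  // Fall back to the cloud endpoint used by the default configuration.
  return env.OLLAMA_HOST ?? "https://ollama.com";
}

// In agent1.ts, this could then be used as:
// model: new OllamaModel("gpt-oss:120b-cloud", resolveOllamaEndpoint()),
```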
Step 4: Start the Development Server

Launch the Next.js development server:
npm run dev
You should see output similar to:
 Next.js 16.1.6
- Local:        http://localhost:3000
- Network:      http://192.168.1.100:3000

 Ready in 2.3s
The development server includes hot module replacement (HMR), so your changes will be reflected immediately.
Step 5: Open the Application

Navigate to http://localhost:3000 in your web browser. You should see the chat interface with:
  • A clean, modern UI
  • Three suggested prompts:
    • “What agent tools do you have?”
    • “Give me a simple code example of javascript closure”
    • “Create an example of pie diagram”
  • A chat input ready for your messages
Step 6: Test the Chat Interface

Try one of the suggested prompts or type your own message. For example:

Try this: “What agent tools do you have?”

The agent will respond with information about its three built-in tools:
  • get_current_time - Time retrieval for any city
  • create_mermaid_diagram - Visual diagram generation
  • view_source_code - Source code visualization
Try this: “Create a flowchart showing a user login process”

The agent will use the create_mermaid_diagram tool to generate an interactive Mermaid.js diagram.

Try this: “What time is it in Tokyo?”

The agent will call the get_current_time tool to retrieve the time.

What’s Running?

Once your development server is running, here’s what’s happening:
The React application running at http://localhost:3000 includes:
  • Chat UI components with real-time streaming
  • Markdown and Mermaid rendering
  • Rate limiting (20 messages per hour by default)
  • Auto-scrolling message list
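The 20-messages-per-hour limit mentioned above can be pictured as a sliding-window counter. The sketch below illustrates that idea; it is not the project's actual implementation:

```typescript
// Minimal sliding-window rate limiter: allow at most `limit` messages
// within any rolling window of `windowMs` milliseconds.
export class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit = 20,
    private windowMs = 60 * 60 * 1000 // one hour
  ) {}

  tryConsume(now: number = Date.now()): boolean {
    // Drop timestamps that have fallen out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false; // over the limit
    this.timestamps.push(now);
    return true;
  }
}
```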
The Next.js API route at /api/genai-agent handles:
  • Message processing from the frontend
  • Agent initialization with GenAIAgentService
  • Streaming responses back to the client
  • Error handling and validation
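The streaming behavior relies on the web-standard ReadableStream API, which Next.js route handlers can return in a Response body. The sketch below shows only that mechanism; the real GenAIAgentService API is not documented here, so the chunk source is a stand-in:

```typescript
// Sketch: stream text chunks to the client using the web-standard
// ReadableStream API (global in Node.js 18+ and in browsers).
// The chunks stand in for tokens produced by the agent service.
export function streamChunks(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk)); // emit one chunk at a time
      }
      controller.close();
    },
  });
}

// In a route handler: return new Response(streamChunks(tokens));
```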
Located in app/api/genai-agent/route.ts
The rootAgent defined in app/agents/agent1.ts includes:
  • Ollama model integration via @yagolopez/adk-utils
  • Three custom function tools
  • Zod schemas for parameter validation
  • Custom instructions for behavior
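Parameter validation means malformed tool arguments are rejected before a tool runs. The project uses Zod for this; the dependency-free sketch below illustrates the same check for the get_current_time parameters (the real schema may differ):

```typescript
// Dependency-free illustration of what the Zod schemas accomplish.
// The actual schema for get_current_time in agent1.ts may differ.
interface TimeParams {
  city: string;
}

export function parseTimeParams(input: unknown): TimeParams | null {
  if (typeof input !== "object" || input === null) return null;
  const city = (input as Record<string, unknown>).city;
  // Reject missing, non-string, or empty city names.
  return typeof city === "string" && city.length > 0 ? { city } : null;
}
```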

Expected Output

When you send your first message, you should see:
  1. Typing Indicator - Shows the agent is processing
  2. Streaming Response - Text appears word by word in real-time
  3. Formatted Output - Markdown rendering with:
    • Code blocks with syntax highlighting
    • Mermaid diagrams (if requested)
    • Proper text formatting (bold, italic, lists)
  4. Tool Calls - Visual feedback when tools are executed

Explore the ADK Web Tool

The project includes the ADK Web Tool for debugging and inspecting agents:
npm run adk:web
This launches an interactive web interface where you can:
  • Inspect agent configurations
  • View tool definitions
  • Test agent interactions
  • Debug tool calls and responses
The ADK Web Tool runs on a different port (typically 3001) and provides a development-focused interface for agent management.

Verify Installation

To ensure everything is working correctly:
1. Check the Console

Open your browser’s developer console (F12). You should see no errors, only informational logs.
2. Send a Test Message

Type “Hello” and press Enter. You should receive a response within 1-2 seconds.
3. Test Tool Execution

Ask for a diagram: “Create a simple pie chart”. Verify that the Mermaid diagram renders correctly.
4. Check Rate Limiting

The app limits you to 20 messages per hour. The counter is visible in the UI.

Common Issues

If port 3000 is already occupied, Next.js will automatically use port 3001. Check the terminal output for the correct URL. Alternatively, specify a custom port:
PORT=3002 npm run dev
If you see “Failed to connect to Ollama”, ensure:
  • Ollama is installed and running
  • The model is downloaded: ollama pull qwen3:0.6b
  • The endpoint in agent1.ts matches your Ollama server
For cloud Ollama (default), no local installation is needed.
If you see module errors, try:
rm -rf node_modules package-lock.json
npm install
If TypeScript errors appear, ensure you’re using Node.js 18+ and TypeScript 5:
node --version
npx tsc --version

Next Steps

Now that you have the application running:

Installation Guide

Learn about detailed configuration options and environment setup

Architecture

Understand how the components work together

Agent Tools

Learn about built-in agent tools

Chat UI Features

Explore the chat interface capabilities

What You’ve Accomplished

Congratulations! You’ve successfully:
  • ✅ Cloned and set up the ADK Utils Example project
  • ✅ Installed all dependencies and development tools
  • ✅ Configured an AI agent with Ollama model support
  • ✅ Started the development server
  • ✅ Tested the chat interface with streaming responses
  • ✅ Verified tool execution (time, diagrams, code)
You’re now ready to explore the codebase and build your own AI-powered applications!
Join the community on GitHub to share your projects, ask questions, and contribute to the ADK Utils ecosystem.
