Web tools enable agents to search the internet and retrieve web content with automatic content extraction.
## web_search

Search the web using the Brave Search API and return titles, URLs, and snippets.
### Parameters

- `query` (string, required): The search query to execute.
- `count` (integer, optional): Number of results to return (1-10). Defaults to the configured `max_results` (5).
### Return Value
Returns a formatted list of search results with:
- Title
- URL
- Description/snippet (if available)
Returns an error message if the API key is not configured.
### Configuration

```python
WebSearchTool(
    api_key="your-brave-api-key",  # Or set BRAVE_API_KEY env var
    max_results=5,                 # Default result count
    proxy=None,                    # Optional HTTP proxy
)
```
The API key can be configured in:

- Tool initialization parameter
- `BRAVE_API_KEY` environment variable
- `~/.nanobot/config.json` under `tools.web.search.apiKey`
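As a sketch of how that lookup might work (the `resolve_brave_api_key` helper is hypothetical, and the assumption that earlier sources in the list win is illustrative, not confirmed by the source):

```python
import json
import os
from pathlib import Path


def resolve_brave_api_key(init_key=None):
    """Resolve the Brave API key from the three documented sources:
    init parameter, BRAVE_API_KEY env var, then ~/.nanobot/config.json.
    (Hypothetical helper; precedence order is assumed.)"""
    if init_key:
        return init_key
    env_key = os.environ.get("BRAVE_API_KEY")
    if env_key:
        return env_key
    config_path = Path.home() / ".nanobot" / "config.json"
    if config_path.exists():
        try:
            cfg = json.loads(config_path.read_text())
            # Walk the nested tools.web.search.apiKey path defensively.
            return (
                cfg.get("tools", {})
                .get("web", {})
                .get("search", {})
                .get("apiKey")
            )
        except (OSError, json.JSONDecodeError):
            return None
    return None
```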
### Example

```json
{
  "query": "nanobot AI agents",
  "count": 3
}
```
Returns:

```
Results for: nanobot AI agents
1. Nanobot - Lightweight AI Agent Framework
https://github.com/example/nanobot
A minimal, extensible framework for building AI agents with tool support.
2. Building AI Agents with Nanobot
https://blog.example.com/nanobot-guide
Complete guide to creating autonomous AI agents using the nanobot framework.
3. Nanobot Documentation
https://docs.example.com/
Official documentation for the nanobot AI agent platform.
```
### No Results

```json
{
  "query": "xyzabc123nonexistent"
}
```

Returns:

```
No results for: xyzabc123nonexistent
```
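The two result layouts above can be reproduced with a small formatter. This is a hypothetical helper; the field names `title`, `url`, and `description` are assumed for illustration and may not match the real tool's internals:

```python
def format_results(query, results):
    """Render search results in the layout shown above; an empty
    result list produces the 'No results' message. The title/url/
    description field names are assumed, not taken from the source."""
    if not results:
        return f"No results for: {query}"
    lines = [f"Results for: {query}"]
    for i, item in enumerate(results, 1):
        lines.append(f"{i}. {item['title']}")
        lines.append(item["url"])
        if item.get("description"):  # snippet is optional
            lines.append(item["description"])
    return "\n".join(lines)
```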
## web_fetch

Fetch and extract readable content from a URL using the Readability algorithm. Supports HTML, JSON, and plain text.
### Parameters

- `url` (string, required): The URL to fetch (must be `http://` or `https://`).
- `extractMode` (string, optional): Content extraction mode: `"markdown"` or `"text"`. Default: `"markdown"`.
- `maxChars` (integer, optional): Maximum characters to return (minimum 100). Default: 50,000.
### Return Value

Returns a JSON object containing:

```json
{
  "url": "requested URL",
  "finalUrl": "final URL after redirects",
  "status": 200,
  "extractor": "readability|json|raw",
  "truncated": false,
  "length": 12345,
  "text": "extracted content"
}
```

Or, on error:

```json
{
  "error": "error message",
  "url": "requested URL"
}
```
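On the consuming side, an agent can branch on the presence of the `error` key. A minimal sketch (the `handle_fetch_result` helper is illustrative, not part of the tool):

```python
import json


def handle_fetch_result(raw):
    """Parse a web_fetch result string and return the extracted text,
    raising if the tool reported an error (illustrative sketch)."""
    result = json.loads(raw)
    if "error" in result:
        raise RuntimeError(f"Fetch failed for {result['url']}: {result['error']}")
    return result["text"]
```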
### Configuration

```python
WebFetchTool(
    max_chars=50000,  # Default character limit
    proxy=None,       # Optional HTTP proxy
)
```
The tool automatically detects content type and applies the appropriate extraction:

- **HTML**: Uses the Readability algorithm to extract main content
  - Removes navigation, ads, sidebars
  - Converts to markdown (links, headings, lists) or plain text
  - Preserves the article title as an H1
- **JSON**: Pretty-prints with 2-space indentation
- **Other**: Returns raw text content
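A simplified sketch of that dispatch, keyed on the response's Content-Type header (the exact matching rules in the real tool may differ):

```python
def pick_extractor(content_type):
    """Map a Content-Type header value to one of the documented
    extractor names: readability, json, or raw (simplified sketch)."""
    # Drop parameters like "; charset=utf-8" and normalize case.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type in ("text/html", "application/xhtml+xml"):
        return "readability"
    if media_type in ("application/json", "text/json") or media_type.endswith("+json"):
        return "json"
    return "raw"
```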
### Example: Fetch Article (Markdown)

```json
{
  "url": "https://example.com/article",
  "extractMode": "markdown"
}
```

Returns:

```json
{
  "url": "https://example.com/article",
  "finalUrl": "https://example.com/article",
  "status": 200,
  "extractor": "readability",
  "truncated": false,
  "length": 2543,
  "text": "# How to Build AI Agents\n\nAI agents are autonomous programs...\n\n[Learn more](https://example.com/more)"
}
```
### Example: Fetch API Response

```json
{
  "url": "https://api.example.com/data"
}
```

Returns:

```json
{
  "url": "https://api.example.com/data",
  "finalUrl": "https://api.example.com/data",
  "status": 200,
  "extractor": "json",
  "truncated": false,
  "length": 156,
  "text": "{\n  \"status\": \"ok\",\n  \"data\": [...]\n}"
}
```
### Example: Text Mode

```json
{
  "url": "https://example.com/article",
  "extractMode": "text",
  "maxChars": 1000
}
```

Returns:

```json
{
  "url": "https://example.com/article",
  "finalUrl": "https://example.com/article",
  "status": 200,
  "extractor": "readability",
  "truncated": true,
  "length": 1000,
  "text": "How to Build AI Agents\n\nAI agents are autonomous programs..."
}
```
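The `truncated` and `length` fields in that response follow directly from applying the `maxChars` limit. A minimal sketch (hypothetical helper, not the tool's actual code):

```python
def apply_max_chars(text, max_chars=50_000):
    """Clip text to max_chars and report the result the way the
    tool's response fields do: a truncated flag plus the final length."""
    truncated = len(text) > max_chars
    clipped = text[:max_chars]
    return {"truncated": truncated, "length": len(clipped), "text": clipped}
```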
### Error: Invalid URL

```json
{
  "url": "ftp://example.com/file"
}
```

Returns:

```json
{
  "error": "URL validation failed: Only http/https allowed, got 'ftp'",
  "url": "ftp://example.com/file"
}
```
### Error: Network Issue

```json
{
  "url": "https://nonexistent.invalid"
}
```

Returns:

```json
{
  "error": "[Errno -2] Name or service not known",
  "url": "https://nonexistent.invalid"
}
```
## Security

- Only HTTP and HTTPS protocols are allowed
- URLs must have valid domain names
- Follows up to 5 redirects (prevents redirect loops)
- 30-second timeout per request
- User-Agent header is set to avoid bot blocking
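The first two checks can be sketched with `urllib.parse`. This is a simplification for illustration; the real validator's domain check may be stricter:

```python
from urllib.parse import urlparse


def validate_url(url):
    """Return an error string if the URL fails the scheme or domain
    checks listed above, else None (illustrative sketch only)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return f"Only http/https allowed, got '{parsed.scheme}'"
    host = parsed.hostname or ""
    if "." not in host:  # crude stand-in for a real domain check
        return "URL must have a valid domain name"
    return None
```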
## Implementation

See `nanobot/agent/tools/web.py` for the full implementations:

- `WebSearchTool`: line 47
- `WebFetchTool`: line 109