The Grounded Docs web interface provides a user-friendly way to manage your documentation libraries, monitor indexing jobs, and test search queries.

Accessing the Web UI

The web interface is available at: http://localhost:6280
The default port is 6280; you can change it with the --port flag or the DOCS_MCP_SERVER_PORT environment variable.
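The fallback behavior can be sketched in shell. This is only an illustration, assuming DOCS_MCP_SERVER_PORT overrides the 6280 default; the precedence between the --port flag and the environment variable when both are set is not specified here.

```shell
# Sketch: resolve the web UI port, with DOCS_MCP_SERVER_PORT
# falling back to the documented default of 6280.
port="${DOCS_MCP_SERVER_PORT:-6280}"
echo "Web UI available at http://localhost:${port}"
```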

Interface Overview

The web UI consists of four main sections:

Libraries

View all indexed libraries and their versions

Add Documentation

Submit URLs or file paths for indexing

Jobs

Monitor scraping and indexing progress

Search

Test search queries and see results

Managing Libraries

Viewing Libraries

The Libraries page shows all indexed documentation:
  • Library name and organization
  • Version numbers
  • Indexing status (Ready, Indexing, Failed)
  • Document count
  • Last updated timestamp

Adding Documentation

1. Navigate to Add Documentation
Click the Add Documentation button or tab.

2. Enter details
Fill in the form:
  • Library Name: Package or project name (e.g., react, next.js)
  • Version: Version number or pattern (e.g., 18.3.1, 18.x, latest)
  • URL: Documentation URL or file path
  • Organization (optional): Group related libraries

3. Configure options
Expand Advanced Options to set:
  • Max pages to scrape
  • Max depth for link following
  • Include/exclude URL patterns
  • Scrape mode (auto, markdown, html, text)

4. Submit
Click Scrape Documentation to start indexing.

URL Examples

https://react.dev/reference/react
https://docs.python.org/3/library/
https://nextjs.org/docs

Removing Libraries

To remove a library:
1. Find the library
Locate it in the Libraries list.

2. Click delete
Click the Delete button next to the version.

3. Confirm
Confirm the deletion.
Deleting a library removes all indexed documents and embeddings. This action cannot be undone.

Monitoring Jobs

The Jobs page shows all indexing jobs:

Job States

Queued

Waiting to start (shown in blue)

Running

Currently processing (shown in yellow)

Completed

Successfully finished (shown in green)

Failed

Encountered an error (shown in red)

Cancelled

Stopped by user (shown in gray)

Job Details

Each job displays:
  • Library name and version
  • Current status and progress percentage
  • Pages scraped / total pages
  • Start time and duration
  • Error message (if failed)

Job Actions

For running jobs:
1. Find the running job
Locate the job with “Running” status.

2. Click Cancel
Click the Cancel button.

3. Wait
The job will stop after the current page finishes.

Real-Time Updates

The jobs page automatically updates every 3 seconds to show:
  • Progress bar updates
  • New jobs starting
  • Jobs completing or failing
  • Page counts increasing
No page refresh is needed; the UI updates live using HTMX.

Searching Documentation

The Search page lets you test queries:
1. Select library
Choose a library from the dropdown (or leave blank to search all libraries).

2. Enter query
Type your search query (e.g., “useState hook”, “authentication”).

3. Set version (optional)
Specify a version pattern to search (e.g., “18.x”, “latest”).

4. Execute search
Click Search to see results.
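As a rough sketch of how a version pattern such as “18.x” relates to concrete releases, the pattern can be read as a glob over version strings. This only illustrates the idea; the server’s actual matcher may be stricter.

```shell
# Illustration: treat "18.x" as matching any 18.* release.
# This is a sketch, not the server's implementation.
version="18.3.1"
pattern="18.x"
glob="${pattern%.x}.*"        # "18.x" -> glob "18.*"
case "$version" in
  $glob) echo "match" ;;      # 18.3.1 falls under 18.x
  *)     echo "no match" ;;
esac
```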

Search Results

Each result shows:
  • Document title
  • Library name and version
  • Relevance score
  • Text excerpt with highlighted matches
  • Link to source URL
Use the search page to verify your documentation is indexed correctly before using it with your AI assistant.

Advanced Options

URL Filtering

Control which pages are scraped:
Include Patterns (string[])
Only scrape URLs matching these patterns (glob syntax).
Example: **/api/**, **/reference/**

Exclude Patterns (string[])
Skip URLs matching these patterns.
Example: **/blog/**, **/changelog/**
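The filtering logic can be sketched with shell glob matching, assuming exclude patterns take priority over include patterns and that * may span path segments; the server’s actual glob engine may differ.

```shell
# Sketch of include/exclude URL filtering (assumptions noted above).
filter_url() {
  case "$1" in
    **/blog/**) echo "excluded" ;;   # exclude pattern checked first
    **/api/**)  echo "included" ;;
    *)          echo "skipped" ;;
  esac
}

filter_url "https://example.com/docs/api/hooks"   # included
filter_url "https://example.com/blog/2024/post"   # excluded
filter_url "https://example.com/docs/guide"       # skipped
```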

Crawling Scope

Only index the exact URL provided.
Use case: Single documentation page

Scrape Mode

Auto (Default)

Automatically detect and use the best processing method

Markdown

Convert HTML to Markdown, preserving structure

HTML

Extract text from HTML while keeping some formatting

Text

Extract plain text only
For most documentation sites, Auto mode works best. Use Markdown for cleaner output or Text for PDFs.

Embedded Server Mode

If you’re running the server embedded in your AI tool (stdio mode), it doesn’t expose a web interface by default.

Launch Temporary Web UI

Connect to the embedded server’s database:
OPENAI_API_KEY="your-key" npx @arabold/docs-mcp-server@latest web --port 6281
This starts a read/write web UI on port 6281 that shares the same database.
Avoid running concurrent scraping operations from both the embedded server and web UI to prevent database lock conflicts.

Troubleshooting

Problem: Can’t access http://localhost:6280
Solutions:
  • Check the server is running: ps aux | grep docs-mcp-server
  • Verify the port: Check --port or DOCS_MCP_SERVER_PORT
  • Check firewall settings
  • Try http://127.0.0.1:6280 instead of localhost
Problem: Job shows as running but not progressing
Solutions:
  • Cancel the job and restart it
  • Check server logs for errors
  • Verify the URL is accessible
  • Check for network issues or rate limiting
Problem: Search doesn’t find indexed documentation
Solutions:
  • Verify the library is indexed (check Libraries page)
  • Check the version pattern matches (e.g., 18.x vs 18.3.1)
  • Try a broader search query
  • Ensure indexing completed successfully (check Jobs page)
Problem: file:// URLs fail to scrape
Solutions:
  • Use absolute paths, not relative
  • Check file permissions (server must be able to read)
  • For Docker: Verify the volume is mounted correctly
  • Windows: Use forward slashes: file:///C:/Users/...
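The Windows path conversion can be sketched as follows. The path is hypothetical, and this assumes the server accepts standard file URLs with forward slashes and three slashes after the scheme.

```shell
# Sketch: convert a (hypothetical) Windows path to a file:// URL.
win_path='C:\Users\me\docs\index.html'
file_url="file:///$(printf '%s' "$win_path" | tr '\\' '/')"
echo "$file_url"    # file:///C:/Users/me/docs/index.html
```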

Next Steps

Scraping Sources

Learn about different source types and best practices

CLI Usage

Manage documentation using the command line

Configuration

Customize server settings and behavior

MCP Tools

Use documentation with your AI assistant
