Accessing the Web UI
The web interface is available at: http://localhost:6280
The default port is 6280. You can change it using --port or DOCS_MCP_SERVER_PORT.

Interface Overview
The web UI consists of four main sections:

Libraries
View all indexed libraries and their versions
Add Documentation
Submit URLs or file paths for indexing
Jobs
Monitor scraping and indexing progress
Search
Test search queries and see results
Managing Libraries
Viewing Libraries
The Libraries page shows all indexed documentation:

- Library name and organization
- Version numbers
- Indexing status (Ready, Indexing, Failed)
- Document count
- Last updated timestamp
Adding Documentation
Enter details
Fill in the form:
- Library Name: Package or project name (e.g., react, next.js)
- Version: Version number or pattern (e.g., 18.3.1, 18.x, latest)
- URL: Documentation URL or file path
- Organization (optional): Group related libraries
Configure options
Expand Advanced Options to set:
- Max pages to scrape
- Max depth for link following
- Include/exclude URL patterns
- Scrape mode (auto, markdown, html, text)
URL Examples
- Website
- GitHub Repository
- npm Package
- PyPI Package
- Local Files
- ZIP Archive
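As a rough illustration of the source types above, the snippet below lists one hypothetical URL per type and inspects its scheme; all URLs here are made-up placeholders, not endpoints the server documents.

```python
from urllib.parse import urlparse

# Hypothetical example URLs for each supported source type.
# Substitute your own documentation locations.
examples = {
    "Website": "https://example.com/docs/",
    "GitHub Repository": "https://github.com/example/project",
    "npm Package": "https://www.npmjs.com/package/example",
    "PyPI Package": "https://pypi.org/project/example/",
    "Local Files": "file:///home/user/docs/",
    "ZIP Archive": "file:///home/user/docs.zip",
}

for kind, url in examples.items():
    # Local sources use the file:// scheme; remote ones use https://.
    print(f"{kind}: scheme={urlparse(url).scheme}")
```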
Removing Libraries
To remove a library:

Monitoring Jobs
The Jobs page shows all indexing jobs:

Job States
Queued
Waiting to start (shown in blue)
Running
Currently processing (shown in yellow)
Completed
Successfully finished (shown in green)
Failed
Encountered an error (shown in red)
Cancelled
Stopped by user (shown in gray)
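The five states and their badge colors can be summarized in a small lookup table. This is an illustrative sketch for reference only; the server's internal state representation may differ.

```python
# Job states and the badge colors described above (illustrative).
JOB_STATE_COLORS = {
    "queued": "blue",
    "running": "yellow",
    "completed": "green",
    "failed": "red",
    "cancelled": "gray",
}

def badge_color(state: str) -> str:
    """Return the badge color for a job state, defaulting to gray
    for anything unrecognized."""
    return JOB_STATE_COLORS.get(state.lower(), "gray")
```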
Job Details
Each job displays:

- Library name and version
- Current status and progress percentage
- Pages scraped / total pages
- Start time and duration
- Error message (if failed)
Job Actions
- Cancel Job
- View Details
- Clear Completed
Real-Time Updates
The jobs page automatically updates every 3 seconds to show:

- Progress bar updates
- New jobs starting
- Jobs completing or failing
- Page counts increasing
No page refresh needed - the UI updates live using HTMX.
Searching Documentation
The Search page lets you test queries.

Search Results
Each result shows:

- Document title
- Library name and version
- Relevance score
- Text excerpt with highlighted matches
- Link to source URL
Advanced Options
URL Filtering
Control which pages are scraped:

- Include: only scrape URLs matching these patterns (glob syntax). Example: **/api/**, **/reference/**
- Exclude: skip URLs matching these patterns. Example: **/blog/**, **/changelog/**

Crawling Scope
- No Subpages: only index the exact URL provided. Use case: a single documentation page
- Same Hostname
- Same Domain
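The include/exclude glob patterns described under URL Filtering can be sketched with Python's standard fnmatch module, which (like the examples above) lets `*` span path separators. This is a simplified illustration; the server's actual matching logic may differ.

```python
from fnmatch import fnmatchcase

def url_allowed(url, include=None, exclude=None):
    """Sketch of include/exclude filtering: a URL is scraped when it
    matches at least one include pattern (if any are given) and no
    exclude pattern. Note that fnmatch's '*' also matches '/'."""
    if include and not any(fnmatchcase(url, p) for p in include):
        return False
    if exclude and any(fnmatchcase(url, p) for p in exclude):
        return False
    return True
```

For example, with `include=["**/api/**"]` only API pages pass, and with `exclude=["**/blog/**"]` blog posts are skipped.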
Scrape Mode
Auto (Default)
Automatically detect and use the best processing method
Markdown
Convert HTML to Markdown, preserving structure
HTML
Extract text from HTML while keeping some formatting
Text
Extract plain text only
For most documentation sites, Auto mode works best. Use Markdown for cleaner output or Text for PDFs.
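To make the difference concrete, here is a minimal sketch of what a plain-text extraction pass (the Text mode) does: strip every tag and keep only the visible text. The server's real pipeline is more involved; this only illustrates the idea.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Toy 'text' scrape mode: discard all markup, keep visible text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        # Collect non-whitespace text nodes only.
        if data.strip():
            self.parts.append(data.strip())

    def text(self):
        return " ".join(self.parts)

p = TextExtractor()
p.feed("<h1>API</h1><p>Use <code>fetch()</code> to call it.</p>")
print(p.text())
```

Markdown mode, by contrast, would preserve the heading and inline-code structure rather than flattening everything to one string.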
Embedded Server Mode
If you’re running the server embedded in your AI tool (stdio mode), it doesn’t expose a web interface by default.

Launch Temporary Web UI
Connect to the embedded server’s database:

Troubleshooting
Web UI not loading
Problem: Can’t access http://localhost:6280

Solutions:
- Check the server is running: ps aux | grep docs-mcp-server
- Verify the port: check --port or DOCS_MCP_SERVER_PORT
- Check firewall settings
- Try http://127.0.0.1:6280 instead of localhost
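A quick way to tell whether anything is listening on the port at all is a plain TCP connection attempt, as in this sketch (host and port are whatever you configured; 6280 is the default):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    A False result means nothing is listening there (or it is blocked)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default web UI port.
print(port_open("127.0.0.1", 6280))
```

If this prints False, the server isn't reachable on that port and the web UI cannot load regardless of browser settings.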
Jobs stuck in Running state
Problem: Job shows as running but not progressing

Solutions:
- Cancel the job and restart it
- Check server logs for errors
- Verify the URL is accessible
- Check for network issues or rate limiting
Search returns no results
Problem: Search doesn’t find indexed documentation

Solutions:
- Verify the library is indexed (check Libraries page)
- Check the version pattern matches (e.g., 18.x vs 18.3.1)
- Try a broader search query
- Ensure indexing completed successfully (check Jobs page)
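The version-pattern mismatch above (18.x vs 18.3.1) can be illustrated with a small matcher. This is a simplified sketch of the pattern semantics mentioned in Adding Documentation (exact versions, x-wildcards, and latest); the server's real resolution logic may be more sophisticated.

```python
def version_matches(pattern: str, version: str) -> bool:
    """Toy matcher: exact versions ('18.3.1'), wildcards ('18.x'),
    and 'latest' (assumed here to match any indexed version)."""
    if pattern == "latest":
        return True
    pat_parts = pattern.split(".")
    ver_parts = version.split(".")
    for p, v in zip(pat_parts, ver_parts):
        if p in ("x", "*"):
            return True   # wildcard component matches the rest
        if p != v:
            return False  # mismatched component
    # Pattern may be a prefix of the version ('18' matches '18.3.1').
    return len(pat_parts) <= len(ver_parts)
```

So 18.x matches 18.3.1, but a query pinned to 19.x would return nothing from an 18.3.1 index.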
File paths not working
Problem: file:// URLs fail to scrape

Solutions:
- Use absolute paths, not relative
- Check file permissions (the server must be able to read the files)
- For Docker: verify the volume is mounted correctly
- Windows: use forward slashes: file:///C:/Users/...
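One reliable way to build a well-formed file:// URL from an absolute path is pathlib's as_uri(), which normalizes Windows backslashes for you. The paths below are illustrative placeholders:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows path with backslashes -> forward-slash file:// URL.
win_url = PureWindowsPath(r"C:\Users\me\docs").as_uri()
# POSIX absolute path -> file:// URL.
posix_url = PurePosixPath("/home/me/docs").as_uri()

print(win_url)
print(posix_url)
```

Note that as_uri() raises ValueError for relative paths, which matches the "use absolute paths" requirement above.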
Next Steps
Scraping Sources
Learn about different source types and best practices
CLI Usage
Manage documentation using the command line
Configuration
Customize server settings and behavior
MCP Tools
Use documentation with your AI assistant
