What is Adapt?
Adapt is a website health and performance tool: it monitors site health, detects broken links, identifies slow pages, and warms caches after publishing for optimal performance. Built for modern web platforms, Adapt integrates with Webflow via OAuth, with automated scheduling and webhook-triggered crawls.
Key Features
Site Health Monitoring
Broken link detection, 404 tracking, and performance bottleneck identification
Cache Warming
Smart warming with automatic retry on cache MISS and priority processing
Automation & Integration
Scheduled crawls and Webflow OAuth integration with real-time dashboard
RESTful API
Platform integrations with comprehensive API endpoints
Site Health Monitoring
- Broken link detection across your entire site
- Identify 404s, timeouts, and redirect chains before your users do
- Detect slow-loading pages and performance bottlenecks
- Track broken links and performance over time with historical data
- Fast crawling without getting blocked or spamming your site
Cache Warming
- Smart warming with automatic retry on cache MISS
- Priority processing - homepage and critical pages first
- Improved initial page load times after publishing
- Robots.txt compliance with crawl-delay honouring
Automation
- Scheduled crawls at 6/12/24/48 hour intervals per site
- Webflow OAuth integration with auto-crawl on publish webhooks
- Real-time dashboard with live job progress via WebSockets
- Slack notifications via DMs when jobs complete or fail
- Multi-organisation support with Supabase Auth and RLS
- Technology detection for CMS, CDN, and frameworks
Architecture Overview
Adapt is built in Go with a focus on reliability, performance, and observability:
- Backend: Go 1.26 with PostgreSQL (Supabase)
- Frontend: Vanilla JavaScript with data-binding (no build process)
- Infrastructure: Fly.io (app + DB), Cloudflare CDN, Supabase (auth + realtime)
- Monitoring: Sentry (errors), Grafana Cloud (traces), Codecov (coverage)
Worker Pool System
Adapt uses a concurrent worker pool architecture for efficient URL crawling:
- Multiple workers process tasks simultaneously using PostgreSQL's FOR UPDATE SKIP LOCKED
- Job breakdown into individual URL tasks distributed across workers
- Automatic recovery of stalled or failed tasks with exponential backoff
- Real-time monitoring of task progress and status
Get Started
Quickstart
Get from signup to your first crawl job in minutes
Installation
Set up local development environment
API Reference
Explore API endpoints and integration options
GitHub
View source code and contribute
Support
Built by the Good Native team in Castlemaine, Victoria, Australia.
Adapt is currently ~65% complete - Stage 4 of 7 (Core Authentication & MVP Interface). Check the roadmap for detailed progress tracking.