Overview
Jobs represent individual crawl operations for a domain. Each job crawls pages, collects performance data, and identifies issues like broken links and slow-loading pages.

Create Job
Request Body
The domain to crawl (e.g., “example.com”)
Whether to discover pages from sitemap.xml
Whether to discover pages by crawling internal links
Whether to follow links across subdomains
Maximum number of pages to crawl (0 = unlimited)
Number of concurrent requests (max 100)
Source that created the job (e.g., “dashboard”, “api”, “scheduler”)
Additional detail about the source (e.g., “create_job”, “recurring”)
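The request body above can be assembled and sanity-checked client-side before sending. A minimal sketch follows; note that the field names (`domain`, `use_sitemap`, `crawl_links`, `include_subdomains`, `max_pages`, `concurrency`, `source`, `source_detail`) are illustrative assumptions inferred from the descriptions, not confirmed parameter names.

```python
def build_create_job_body(domain, use_sitemap=True, crawl_links=True,
                          include_subdomains=False, max_pages=0,
                          concurrency=10, source="api",
                          source_detail="create_job"):
    # Enforce the documented limits before the request leaves the client.
    if concurrency > 100:
        raise ValueError("concurrency may not exceed 100")
    if max_pages < 0:
        raise ValueError("max_pages must be >= 0 (0 means unlimited)")
    return {
        "domain": domain,                          # e.g. "example.com"
        "use_sitemap": use_sitemap,                # discover via sitemap.xml
        "crawl_links": crawl_links,                # discover via internal links
        "include_subdomains": include_subdomains,  # follow across subdomains
        "max_pages": max_pages,                    # 0 = unlimited
        "concurrency": concurrency,                # concurrent requests, max 100
        "source": source,                          # "dashboard", "api", "scheduler"
        "source_detail": source_detail,            # e.g. "create_job", "recurring"
    }

body = build_create_job_body("example.com", max_pages=500)
```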
Response Fields
Unique job identifier
The domain being crawled
Internal domain identifier
Job status:
created, running, completed, failed, cancelled
Organisation that owns this job
ISO 8601 timestamp of job creation
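A client can validate the response against the fields listed above before using it. This sketch assumes the field names match the descriptions (`status`, `created_at`, etc.), which has not been verified against a real payload.

```python
from datetime import datetime

VALID_STATUSES = {"created", "running", "completed", "failed", "cancelled"}

def parse_job(payload: dict) -> dict:
    """Validate a job object's status enum and parse its timestamp."""
    status = payload["status"]
    if status not in VALID_STATUSES:
        raise ValueError(f"unexpected job status: {status!r}")
    # ISO 8601; the "Z" suffix is normalised for older Python versions.
    created = datetime.fromisoformat(payload["created_at"].replace("Z", "+00:00"))
    return {**payload, "created_at": created}
```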
List Jobs
Query Parameters
Number of results per page (max 100)
Number of results to skip
Filter by job status:
created, running, completed, failed, cancelled
Date range filter (e.g., “7d”, “30d”)
Timezone offset in minutes
Additional fields to include (comma-separated):
stats, progress, domain
Response Fields
Array of job objects
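The query parameters above can be encoded into a query string like so. The parameter names (`limit`, `offset`, `status`, `range`, `tz_offset`, `include`) are assumptions based on the descriptions; confirm them against the live API.

```python
from urllib.parse import urlencode

def list_jobs_query(limit=20, offset=0, status=None, date_range=None,
                    tz_offset=None, include=None):
    if limit > 100:
        raise ValueError("limit may not exceed 100")
    params = {"limit": limit, "offset": offset}
    if status:
        params["status"] = status          # e.g. "completed"
    if date_range:
        params["range"] = date_range       # e.g. "7d", "30d"
    if tz_offset is not None:
        params["tz_offset"] = tz_offset    # minutes
    if include:
        params["include"] = ",".join(include)  # comma-separated extras
    return urlencode(params)

print(list_jobs_query(limit=50, status="completed", include=["stats", "progress"]))
# → limit=50&offset=0&status=completed&include=stats%2Cprogress
```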
Get Job
Path Parameters
Unique job identifier
Response Fields
ISO 8601 timestamp when job started (null if not started)
ISO 8601 timestamp when job completed (null if not completed)
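Because `started_at` and `completed_at` stay null until the job reaches those states, a common pattern is to poll Get Job until the status is terminal. A sketch, with the HTTP call injected as a callable so the polling logic stays independent of any particular client:

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def wait_for_job(fetch_job, job_id, interval=2.0, timeout=600.0):
    """Poll until the job reaches a terminal status.

    fetch_job is any callable returning the job payload for job_id
    (e.g. a thin wrapper around Get Job).
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_job(job_id)
        if job["status"] in TERMINAL:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still {job['status']} after {timeout}s")
        time.sleep(interval)
```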
Cancel Job
Path Parameters
Unique job identifier
Response Fields
Updated job status (will be “cancelled”)
ISO 8601 timestamp when job was cancelled
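A caller can confirm the cancellation took effect by checking the two response fields above. The field names (`status`, `cancelled_at`) are assumptions matching the descriptions:

```python
def confirm_cancelled(payload: dict) -> str:
    """Return the cancellation timestamp, or raise if the job was not cancelled."""
    if payload.get("status") != "cancelled":
        raise RuntimeError(f"cancel did not take effect: {payload.get('status')!r}")
    return payload["cancelled_at"]  # ISO 8601
```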
Alternative: Delete Job
You can also cancel a job by sending DELETE instead of POST to /v1/jobs/{job_id}/cancel.