Tasks are the core annotation units in CVAT. Each task contains uploaded data (images or videos) that will be annotated according to the project’s label schema.
Task Lifecycle
A task progresses through several stages:
Creation: Task is created with its configuration
Data upload: Images/videos are uploaded to the task
Job generation: Task is split into jobs for annotators
Annotation: Annotators label the data
Validation: Quality review and corrections
Acceptance: Final approval
Completion: Task is marked as complete
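The lifecycle above can be sketched as a simple state machine. The transition map below is purely illustrative (the stage names mirror the list above; it is not part of the CVAT API):

```python
# Illustrative sketch of the task lifecycle; the transition map is an
# assumption for demonstration, not a CVAT API structure.
LIFECYCLE = {
    "creation": ["data_upload"],
    "data_upload": ["job_generation"],
    "job_generation": ["annotation"],
    "annotation": ["validation"],
    "validation": ["acceptance", "annotation"],  # rejected work loops back
    "acceptance": ["completion"],
    "completion": [],
}

def can_transition(current: str, target: str) -> bool:
    """Return True if target is a valid next stage after current."""
    return target in LIFECYCLE.get(current, [])
```

Note the loop from validation back to annotation: rejected jobs return for rework before final acceptance.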
Creating a Task
Using the Web UI
Navigate to Projects → Select your project
Click Create a new task
Configure task settings:
Name: Descriptive task name
Subset: Optional subset identifier (e.g., “train”, “val”, “test”)
Advanced configuration:
Overlap: Number of frames shared between consecutive jobs
Segment size: Frames per job (0 = single job)
Image quality: Compressed frame quality (0-100)
Data chunk size: Frames per chunk for streaming
Upload data (see Uploading Data )
Click Submit
Using the REST API
Create a task with POST /api/tasks:
curl -X POST "https://app.cvat.ai/api/tasks" \
  -H "Authorization: Token <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Traffic Scene Batch 1",
    "project_id": 1,
    "overlap": 5,
    "segment_size": 100,
    "subset": "train"
  }'
Response (201 Created):
{
  "id": 10,
  "name": "Traffic Scene Batch 1",
  "project_id": 1,
  "owner": {
    "id": 1,
    "username": "admin"
  },
  "assignee": null,
  "status": "annotation",
  "overlap": 5,
  "segment_size": 100,
  "subset": "train",
  "mode": "annotation",
  "dimension": "2d",
  "size": 0,
  "created_date": "2026-03-04T10:30:00.000Z",
  "updated_date": "2026-03-04T10:30:00.000Z",
  "jobs": {
    "count": 0,
    "completed": 0,
    "validation": 0
  }
}
Using the Python SDK
from cvat_sdk import Client, models

client = Client(url="https://app.cvat.ai")
client.login(("username", "password"))

# Create a task
task = client.tasks.create(
    spec=models.TaskWriteRequest(
        name="Street Scene Annotations",
        project_id=1,
        overlap=5,
        segment_size=100,
        subset="validation"
    )
)
print(f"Created task ID: {task.id}")
Using the CLI
# Create task in a project
cvat-cli --auth user:password task create \
--project_id 1 \
--overlap 5 \
--segment_size 100 \
--subset train \
"Pedestrian Detection Task 1"
# Create standalone task with labels
cvat-cli --auth user:password task create \
--labels '[{"name": "person"}, {"name": "bike"}]' \
--overlap 0 \
"Independent Task"
Uploading Data
Local Files
Upload images or videos from your computer:
Using the SDK:
from cvat_sdk.core.proxies.tasks import ResourceType

# Upload local images
task.upload_data(
    resources=[
        "path/to/image1.jpg",
        "path/to/image2.jpg",
        "path/to/image3.jpg"
    ],
    resource_type=ResourceType.LOCAL,
    params={
        "image_quality": 70,
        "chunk_size": 72,
        "sorting_method": "natural"
    }
)

# Upload a video
task.upload_data(
    resources=["path/to/video.mp4"],
    resource_type=ResourceType.LOCAL,
    params={
        "frame_filter": "step=5",  # Keep every 5th frame
        "start_frame": 0,
        "stop_frame": 1000
    }
)
Using the API:
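As a raw-API sketch: data is typically sent to `POST /api/tasks/{id}/data` as a multipart form. The field names below (`client_files[N]`, `image_quality`, `use_cache`) are assumptions based on common CVAT usage; verify them against your server's API schema. The helper only builds the form payload:

```python
def build_upload_form(paths, image_quality=70, use_cache=True):
    """Build multipart form fields for POST /api/tasks/{id}/data (sketch).

    Field names (client_files[N], image_quality, use_cache) are assumptions;
    check your CVAT server's OpenAPI schema before relying on them.
    """
    form = {
        "image_quality": image_quality,
        "use_cache": use_cache,
    }
    for i, path in enumerate(paths):
        form[f"client_files[{i}]"] = path
    return form
```

The resulting dict can then be posted with any HTTP client (e.g. `requests.post(f"{base}/api/tasks/{task_id}/data", ...)`) with the usual `Authorization: Token <your-token>` header, sending the files as multipart attachments rather than plain strings.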
Server Files
Use files from a mounted share:
task.upload_data(
    resources=[
        "/mnt/share/dataset/image1.jpg",
        "/mnt/share/dataset/image2.jpg"
    ],
    resource_type=ResourceType.SHARE,
    params={"copy_data": True}
)
Remote Files (URLs)
Provide URLs to download:
task.upload_data(
    resources=[
        "https://example.com/image1.jpg",
        "https://example.com/image2.jpg"
    ],
    resource_type=ResourceType.REMOTE
)
Cloud Storage
Use files from cloud storage:
task.upload_data(
    resources=["manifest.jsonl"],
    resource_type=ResourceType.SHARE,
    params={
        "cloud_storage_id": 5,
        "filename_pattern": "*.jpg"
    }
)
Data Upload Parameters
| Parameter | Type | Description |
|---|---|---|
| image_quality | integer | Compression quality 0-100 (default: 70) |
| chunk_size | integer | Frames per chunk (default: 72) |
| start_frame | integer | First frame to include (default: 0) |
| stop_frame | integer | Last frame to include |
| frame_filter | string | Frame filter: step=N to skip frames |
| sorting_method | string | lexicographical, natural, predefined, or random |
| use_cache | boolean | Enable server-side caching |
| copy_data | boolean | Copy data from the share into CVAT storage |
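How `frame_filter`, `start_frame`, and `stop_frame` interact can be sketched with a small helper (hypothetical, not part of the SDK; it assumes CVAT's `step=N` filter syntax and an inclusive `stop_frame`):

```python
def select_frames(start_frame, stop_frame, frame_filter=""):
    """Return the frame indices a task would keep (illustrative sketch).

    frame_filter uses the "step=N" syntax; stop_frame is inclusive.
    """
    step = 1
    if frame_filter.startswith("step="):
        step = int(frame_filter.split("=", 1)[1])
    return list(range(start_frame, stop_frame + 1, step))
```

For example, `select_frames(0, 20, "step=5")` yields `[0, 5, 10, 15, 20]`, matching the "every 5th frame" video-upload example above.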
Job Management
Understanding Jobs
When data is uploaded, CVAT automatically splits the task into jobs based on segment_size. Jobs are assigned to annotators for parallel work.
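The split can be sketched as follows (a hypothetical helper mirroring `segment_size` and `overlap` semantics: each job after the first starts `overlap` frames before the previous job ends; overlap is assumed smaller than segment size):

```python
def split_into_jobs(total_frames, segment_size, overlap=0):
    """Compute inclusive (start, stop) frame ranges per job (sketch).

    segment_size=0 means one job over all frames; consecutive jobs share
    `overlap` frames. Assumes 0 <= overlap < segment_size.
    """
    if segment_size == 0:
        return [(0, total_frames - 1)]
    jobs, start = [], 0
    while start < total_frames:
        stop = min(start + segment_size - 1, total_frames - 1)
        jobs.append((start, stop))
        if stop == total_frames - 1:
            break
        start = stop + 1 - overlap  # next job re-covers `overlap` frames
    return jobs
```

With 250 frames, `segment_size=100`, and `overlap=5`, this produces three jobs: (0, 99), (95, 194), (190, 249).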
Job types:
annotation: Standard annotation job
ground_truth: Ground truth validation job
consensus_replica: Consensus annotation job
Viewing Jobs
# Get all jobs for a task
jobs = task.get_jobs()
for job in jobs:
    print(f"Job {job.id}: frames {job.start_frame}-{job.stop_frame}")
    print(f"  Status: {job.status}")
    print(f"  Stage: {job.stage}")
    print(f"  State: {job.state}")
    print(f"  Assignee: {job.assignee}")
Assigning Jobs
Using the API:
curl -X PATCH "https://app.cvat.ai/api/jobs/{job_id}" \
-H "Authorization: Token <your-token>" \
-H "Content-Type: application/json" \
-d '{"assignee_id": 5}'
Using the SDK:
job = client.jobs.retrieve(15)
job.update(models.PatchedJobWriteRequest(assignee_id=5))
Job States and Stages
Stages (workflow phase):
annotation: Initial annotation phase
validation: Review and quality check phase
acceptance: Final approval phase
States (completion status):
new: Not started
in progress: Work in progress
completed: Finished
rejected: Needs rework
# Move a job to the validation stage
job.update(models.PatchedJobWriteRequest(
    stage="validation",
    state="new"
))

# Mark a job as completed
job.update(models.PatchedJobWriteRequest(
    state="completed"
))
Task States
Tasks inherit their status from constituent jobs:
annotation: Jobs are being annotated
validation: Jobs are in validation/review
completed: All jobs completed
The task-level status field is deprecated; it is derived from the jobs' stage and state values:
# Get task status
task_info = client.tasks.retrieve(10)
print(f"Status: {task_info.status}")
print(f"Jobs: {task_info.jobs['count']} total, {task_info.jobs['completed']} completed")
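Conceptually, a task-level status can be derived from its jobs' (stage, state) pairs. The rule below is an illustrative approximation, not CVAT's exact aggregation logic:

```python
def aggregate_status(job_stage_states):
    """Derive a task status from (stage, state) pairs of its jobs (sketch).

    Rule of thumb (an assumption, not CVAT's exact algorithm): the task is
    completed only when every job is completed; otherwise the earliest
    unfinished stage determines the task status.
    """
    if all(state == "completed" for _, state in job_stage_states):
        return "completed"
    if any(stage == "annotation" and state != "completed"
           for stage, state in job_stage_states):
        return "annotation"
    return "validation"
```

For instance, one job still annotating keeps the whole task in "annotation", even if other jobs have reached validation.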
Validation Modes
CVAT supports different validation configurations:
Ground Truth Validation
Create a ground truth job for quality assessment:
# Create a ground truth job
gt_job = client.jobs.create(
    spec=models.JobWriteRequest(
        task_id=10,
        type="ground_truth",
        frame_selection_method="random_uniform",
        frame_count=50,  # 50 random frames
        random_seed=42
    )
)
Frame selection methods:
random_uniform: Random frames across entire task
random_per_job: Random frames from each job
manual: Manually specified frame list
# Manual frame selection
gt_job = client.jobs.create(
    spec=models.JobWriteRequest(
        task_id=10,
        type="ground_truth",
        frame_selection_method="manual",
        frames=[0, 10, 20, 30, 40]  # Specific frames
    )
)
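Because `random_uniform` takes a `random_seed`, the selection is reproducible. Conceptually it amounts to seeded sampling without replacement (an illustrative sketch, not the server's actual sampler):

```python
import random

def pick_gt_frames(total_frames, frame_count, seed):
    """Sample frame_count distinct frame indices, reproducibly (sketch)."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(total_frames), frame_count))
```

Running it twice with the same seed returns the same frame list, which is what makes seeded ground-truth jobs comparable across runs.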
Honeypot Validation
Create tasks with hidden validation frames:
# Upload data with validation params
task.upload_data(
    resources=images,
    params={
        "validation_params": {
            "mode": "gt_pool",
            "frame_selection_method": "random_per_job",
            "frames_per_job_count": 10
        }
    }
)
Consensus Annotation
Enable multiple annotators for the same data:
# Create a task with consensus replicas
task = client.tasks.create(
    spec=models.TaskWriteRequest(
        name="Consensus Task",
        project_id=1,
        consensus_replicas=3  # 3 annotators per job
    )
)
This triples the number of annotation jobs, so each segment is labeled independently by three annotators, enabling inter-annotator agreement analysis.
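Once the replicas are annotated, per-frame agreement can be checked. A minimal majority-vote sketch over one frame's labels (the data shape is hypothetical; real consensus merging in CVAT is more involved):

```python
from collections import Counter

def majority_vote(labels_per_annotator):
    """Merge one frame's labels from several replicas by majority vote.

    labels_per_annotator: one label name per annotator.
    Returns (winning_label, agreement_ratio). Illustrative only.
    """
    counts = Counter(labels_per_annotator)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels_per_annotator)
```

An agreement ratio well below 1.0 on many frames is a signal to revisit the label definitions or annotator guidelines.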
Updating Tasks
Update Task Properties
task.update(models.PatchedTaskWriteRequest(
    name="Updated Task Name",
    assignee_id=10,
    subset="test"
))
Update Task Labels (Standalone Tasks)
For tasks not in a project:
task.update(models.PatchedTaskWriteRequest(
    labels=[
        models.PatchedLabelRequest(
            id=5,
            name="updated_label_name"
        ),
        models.PatchedLabelRequest(
            name="new_label",
            color="#ff00ff"
        )
    ]
))
Tasks in a project inherit labels from the project. Update labels at the project level.
Deleting Tasks
# REST API
curl -X DELETE "https://app.cvat.ai/api/tasks/{task_id}" \
  -H "Authorization: Token <your-token>"

# Python SDK
task = client.tasks.retrieve(10)
task.remove()
# Or directly:
client.tasks.remove_by_id(10)

# CLI
cvat-cli --auth user:password task delete 10
Best Practices
Optimize job segmentation
Set segment_size to 100-500 frames per job
Use overlap (5-10 frames) for video tasks to ensure continuity
Smaller jobs = easier task management and recovery
Consider annotator capacity when sizing jobs
Configure data upload efficiently
Use the subset field for train/val/test splits
Assign descriptive names with batch/date info
Group related tasks in projects
Monitor progress and quality
Assign tasks to appropriate team members
Check the jobs.completed count regularly
Review job states and stages
Use quality reports to track annotation quality
Set up validation workflows early
Next Steps
Quality Control Set up validation and quality metrics
Annotation Guide Learn about annotation tools and workflows
Export Annotations Export annotated data in various formats
API Reference Complete task API documentation