Borg UI provides powerful backup scheduling with a visual cron expression builder, multi-repository support, execution history tracking, and automatic maintenance operations.
```python
# From schedule.py:44-51
run_prune_after: bool = False
run_compact_after: bool = False
prune_keep_daily: int = 7
prune_keep_weekly: int = 4
prune_keep_monthly: int = 6
prune_keep_yearly: int = 1
```
Backup multiple repositories in a single schedule:
```python
# From schedule.py:273-306
if job_data.repository_ids:
    logger.info(
        "Creating multi-repo schedule",
        repository_ids=job_data.repository_ids,
        count=len(job_data.repository_ids),
    )
    # Remove duplicates while preserving order
    seen = set()
    unique_repo_ids = []
    for repo_id in job_data.repository_ids:
        if repo_id not in seen:
            seen.add(repo_id)
            unique_repo_ids.append(repo_id)
    # Validate repositories
    for repo_id in unique_repo_ids:
        repo = db.query(Repository).filter_by(id=repo_id).first()
        if not repo:
            raise HTTPException(
                status_code=400,
                detail=f"Repository ID {repo_id} not found",
            )
```
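For reference, the order-preserving dedupe in the snippet above is equivalent to the common `dict.fromkeys` idiom. A standalone sketch (not part of the Borg UI source):

```python
def dedupe_preserve_order(ids):
    """Remove duplicate IDs while keeping first-seen order.

    Equivalent to the seen-set loop in schedule.py: Python dicts
    preserve insertion order (3.7+), so duplicates collapse onto
    their first occurrence.
    """
    return list(dict.fromkeys(ids))


print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```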
Junction Table:
```python
# From schedule.py:367-379
for order, repo_id in enumerate(unique_repo_ids):
    repo_link = ScheduledJobRepository(
        scheduled_job_id=scheduled_job.id,
        repository_id=repo_id,
        execution_order=order,
    )
    db.add(repo_link)
```
Multi-repository schedules execute backups sequentially in the specified order. This ensures resources aren’t overwhelmed by concurrent operations.
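A minimal sketch of that sequential pattern, with the junction table's `execution_order` driving the order (the function and callback names here are illustrative, not Borg UI's actual internals):

```python
def run_backups_sequentially(repos, backup_fn):
    """Run one backup at a time, in execution_order, never concurrently.

    `repos` is a list of (execution_order, repo_path) tuples and
    `backup_fn` is whatever actually performs a single backup,
    returning True on success. Running serially avoids contention
    for disk I/O and bandwidth between concurrent borg processes.
    """
    results = {}
    for _, path in sorted(repos):  # honour execution_order
        results[path] = backup_fn(path)
    return results
```

For example, `run_backups_sequentially([(1, "/b"), (0, "/a")], fn)` calls `fn("/a")` before `fn("/b")`, regardless of the input list's order.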
```python
# From schedule.py:1041-1049
backup_job = BackupJob(
    repository=repo.path,
    status="pending",
    scheduled_job_id=job.id,  # Link to scheduled job
    created_at=datetime.now(timezone.utc),
)
db.add(backup_job)
db.commit()
```
```python
# From schedule.py:45-50
prune_keep_hourly: int = 0
prune_keep_daily: int = 7
prune_keep_weekly: int = 4
prune_keep_monthly: int = 6
prune_keep_quarterly: int = 0
prune_keep_yearly: int = 1
```
Retention Policy:
- Keep 7 daily backups
- Keep 4 weekly backups
- Keep 6 monthly backups
- Keep 1 yearly backup
- Delete everything older than the retention window
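In a typical Borg integration, these settings translate into `--keep-*` flags on `borg prune` (real Borg options; the builder function below is an illustrative sketch, and the quarterly interval is omitted because classic Borg has no direct `--keep-quarterly` flag):

```python
def build_prune_args(repo_path, hourly=0, daily=7, weekly=4, monthly=6, yearly=1):
    """Build a `borg prune` command line from retention settings.

    Zero-valued intervals are omitted; any archive not kept by one
    of the --keep-* rules is deleted by prune.
    """
    args = ["borg", "prune", repo_path]
    for name, value in [
        ("hourly", hourly),
        ("daily", daily),
        ("weekly", weekly),
        ("monthly", monthly),
        ("yearly", yearly),
    ]:
        if value > 0:
            args.append(f"--keep-{name}={value}")
    return args
```

With the defaults above this yields `borg prune <repo> --keep-daily=7 --keep-weekly=4 --keep-monthly=6 --keep-yearly=1`.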
Compact Settings
Reclaim disk space after pruning
```python
run_compact_after: bool = False
```
What it does:
- Removes deleted archive data
- Defragments the repository
- Frees disk space
- Can take significant time for large repositories
Compact operations can be time-consuming and resource-intensive. Only enable if disk space is a concern.
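When enabled, compaction typically runs as a separate `borg compact` call after pruning, since compact only reclaims space that prune has freed. A hedged sketch of that ordering (the runner function is illustrative; it returns the commands rather than executing them):

```python
def maintain_repository(repo_path, run_prune_after, run_compact_after, prune_args=()):
    """Build the optional maintenance commands to run after a backup.

    Prune is ordered before compact because compact reclaims the
    segments that prune marks as deleted. Returns the list of
    command lines in execution order.
    """
    commands = []
    if run_prune_after:
        commands.append(["borg", "prune", repo_path, *prune_args])
    if run_compact_after:
        commands.append(["borg", "compact", repo_path])
    return commands
```

With both flags off, no maintenance commands are produced at all, matching the conservative defaults shown above.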
```python
# From schedule.py:787-830
@router.delete("/{job_id}")
async def delete_scheduled_job(
    job_id: int,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
):
    # (job is fetched and validated in the omitted lines)
    # Set scheduled_job_id to NULL for all backup jobs
    db.query(BackupJob).filter_by(scheduled_job_id=job_id).update(
        {"scheduled_job_id": None}
    )
    # Delete junction table entries
    db.query(ScheduledJobRepository).filter_by(scheduled_job_id=job_id).delete()
    # Delete the schedule
    db.delete(job)
    db.commit()
```
Deleting a schedule preserves backup history by setting scheduled_job_id to NULL instead of cascading deletes.
Scheduling Best Practices
- Off-Peak Hours: Schedule during low-activity periods (e.g., 2-4 AM)
- Stagger Schedules: Avoid running all backups simultaneously
- Retention Policies: Use prune to manage storage automatically
- Test Runs: Use "Run Now" to test schedules before enabling
- Archive Names: Use templates for consistent naming
- Multi-Repo: Group related repositories in single schedules
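Staggering can be as simple as offsetting each job's cron minute field within the off-peak window. A small illustrative helper (not part of Borg UI):

```python
def staggered_cron(base_hour, jobs, step_minutes=15):
    """Assign each job a daily cron expression offset by step_minutes.

    Spreads N nightly backups across the off-peak window instead of
    starting them all at the same minute; offsets past :59 roll into
    the next hour.
    """
    schedules = {}
    for i, job in enumerate(jobs):
        minute = (i * step_minutes) % 60
        hour = base_hour + (i * step_minutes) // 60
        schedules[job] = f"{minute} {hour} * * *"
    return schedules


print(staggered_cron(2, ["docs", "media", "db"]))
# → {'docs': '0 2 * * *', 'media': '15 2 * * *', 'db': '30 2 * * *'}
```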