Uncloud includes built-in container image distribution through Unregistry, an embedded registry that uses your local Docker daemon’s image store as a backend. This eliminates the need for external registries like Docker Hub for cluster-internal images.

Unregistry Architecture

Every Uncloud machine runs an embedded Unregistry server that:
  • Shares local images: Exposes images stored in the local Docker daemon
  • Listens on port 5000: Uses the machine’s WireGuard mesh IP address
  • Requires no configuration: Starts automatically with the Uncloud daemon
  • Uses containerd backend: Only works when Docker uses containerd image store
Unregistry requires Docker with the containerd image store enabled. This is the default in recent Docker Desktop releases and can be enabled on Docker Engine via the containerd-snapshotter feature flag. If you're using the legacy image store, Uncloud will skip Unregistry setup, but you can still use external registries.

How Unregistry Works

When you push an image to a machine using uc build --push:
  1. Local build: Docker builds the image on your local machine
  2. Layer detection: Unregistry checks which image layers exist on the target machine
  3. Incremental transfer: Only missing layers are transferred over the network
  4. Direct import: Layers are imported directly into the remote Docker daemon
This is much faster than traditional registry workflows that involve:
  • Pushing to a registry
  • Pulling from the registry
  • Transferring all layers even if they already exist
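The incremental transfer above boils down to a set difference over layer digests. Here is a simplified sketch (hypothetical digests, sizes, and helper name for illustration; the real Unregistry speaks the registry protocol rather than exposing a function like this):

```python
# Simplified sketch of incremental layer transfer: only the layers the
# target machine is missing get sent over the network.
# Hypothetical digests/sizes; not Unregistry's actual API.

def layers_to_transfer(image_layers, remote_digests):
    """Return the image layers (in order) that the remote machine lacks."""
    missing = {l["digest"] for l in image_layers} - remote_digests
    return [l for l in image_layers if l["digest"] in missing]

image = [
    {"digest": "sha256:aaa", "size": 50_000_000},   # base image
    {"digest": "sha256:bbb", "size": 1_000},        # package.json
    {"digest": "sha256:ccc", "size": 150_000_000},  # npm install
    {"digest": "sha256:ddd", "size": 5_000_000},    # app source
]

# The target machine already holds the base and dependency layers.
remote = {"sha256:aaa", "sha256:bbb", "sha256:ccc"}

delta = layers_to_transfer(image, remote)
print([l["digest"] for l in delta])   # ['sha256:ddd']
print(sum(l["size"] for l in delta))  # 5000000
```

Only the 5 MB application layer crosses the network; the traditional push/pull workflow would move every layer through an intermediate registry.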

Building Images with uc build

The uc build command builds images defined in your compose.yaml file using your local Docker daemon.

Basic Usage

compose.yaml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    image: myapp/web:latest
# Build all services that have a build section
uc build

# Build specific services
uc build web api

# Build and push to cluster machines
uc build --push

# Build with no cache
uc build --no-cache

# Pull newer base images before building
uc build --pull

Build Arguments

Pass build-time variables to your Dockerfile:
uc build --build-arg NODE_VERSION=20 --build-arg ENV=production
Corresponding Dockerfile:
ARG NODE_VERSION=18
FROM node:${NODE_VERSION}

ARG ENV=development
ENV NODE_ENV=${ENV}

Building Dependencies

If your services reference other services in their build context:
compose.yaml
services:
  web:
    build:
      context: .
      additional_contexts:
        shared: service:shared-lib
  shared-lib:
    build:
      context: ./shared
# Build web and its dependencies (shared-lib)
uc build web --deps

Pushing Images to Cluster Machines

After building, you can push images to cluster machines in several ways:

Push to All Machines

# Build and push to all cluster machines
uc build --push
This transfers images to every machine in the cluster, making them available for deployment.

Push to Specific Machines

# Push to specific machines by name
uc build --push -m machine1,machine2

# Push to machines by ID
uc build --push -m 550e8400-e29b-41d4-a716-446655440000

Push Using x-machines Extension

You can specify target machines in the compose file:
compose.yaml
services:
  web:
    build:
      context: .
    image: myapp/web:latest
    x-machines:
      - edge-1
      - edge-2
# Push to machines specified in x-machines
uc build --push
The --push flag respects service-level x-machines unless you override with -m.
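The precedence between -m and x-machines can be sketched as follows (a hypothetical helper for illustration, not Uncloud's actual code):

```python
# Sketch of how push targets could be resolved: an explicit -m flag wins,
# then the service's x-machines list, then every machine in the cluster.
# Hypothetical function; not Uncloud's actual implementation.

def resolve_push_targets(cli_machines, x_machines, all_machines):
    if cli_machines:      # uc build --push -m machine1,machine2
        return cli_machines
    if x_machines:        # service-level x-machines in compose.yaml
        return x_machines
    return all_machines   # default: push to the whole cluster

cluster = ["edge-1", "edge-2", "worker-1"]
print(resolve_push_targets([], ["edge-1", "edge-2"], cluster))  # x-machines used
print(resolve_push_targets(["worker-1"], ["edge-1"], cluster))  # -m overrides
```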

Manual Image Push

You can also push pre-built images without building:
# Push an existing local image to all machines
uc image push myapp/web:latest

# Push to specific machines
uc image push myapp/web:latest -m production-1,production-2

Image Distribution Across Machines

When you deploy a service, Uncloud needs the image on the target machine. Here’s how image distribution works:

Pull Policies

Services can specify when to pull images:
compose.yaml
services:
  web:
    image: myapp/web:latest
    pull_policy: missing  # Options: always, missing, never
  • always: Always pull from registry before starting container
  • missing (default): Pull only if image doesn’t exist locally
  • never: Never pull; fail if image missing
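The three policies reduce to a simple decision per deployment. A sketch of that logic (illustrative only, not Uncloud's actual implementation):

```python
# Sketch of the pull_policy decision for a service's image.
# Illustrative; not Uncloud's actual logic.

def should_pull(policy, image_exists_locally):
    """Return "pull" or "skip"; raise if the image can't be obtained."""
    if policy == "always":
        return "pull"  # always refresh before starting the container
    if policy == "missing":
        return "skip" if image_exists_locally else "pull"
    if policy == "never":
        if not image_exists_locally:
            raise RuntimeError("image missing and pull_policy is 'never'")
        return "skip"
    raise ValueError(f"unknown pull_policy: {policy}")

print(should_pull("missing", image_exists_locally=True))   # skip
print(should_pull("always", image_exists_locally=True))    # pull
```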

Image Pull Flow

  1. Check local availability: Machine checks if image exists in local Docker
  2. Try Unregistry peers: If missing, try pulling from other cluster machines
  3. Fall back to external registry: If not in cluster, pull from Docker Hub or configured registry
This layered approach minimizes external bandwidth usage.
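The three-step fallback can be sketched like this (hypothetical stores and helper names; the real flow goes through the Docker daemon and Unregistry endpoints):

```python
# Sketch of the layered image pull: local store first, then cluster
# peers via Unregistry, then the external registry. Hypothetical helpers.

def pull_image(image, local_store, peer_stores, external_registry):
    if image in local_store:
        return "local"              # 1. already on the machine
    for peer, store in peer_stores.items():
        if image in store:
            return f"peer:{peer}"   # 2. fetched from a cluster peer
    if image in external_registry:
        return "external"           # 3. fall back to e.g. Docker Hub
    raise LookupError(f"{image} not found anywhere")

peers = {"machine-2": {"myapp/web:latest"}}
print(pull_image("myapp/web:latest", set(), peers, {"postgres:16"}))
# peer:machine-2
```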

Authentication for External Registries

If pulling from private external registries:
# Authenticate Docker on the target machine
ssh user@machine
docker login registry.example.com
Uncloud uses the machine’s Docker credentials when pulling images. For deployment from your local machine:
# Authenticate locally
docker login registry.example.com

# Deploy (credentials are forwarded)
uc deploy
The CLI automatically forwards your local Docker registry credentials when deploying.

Layer Caching and Transfer

Unregistry’s biggest advantage is efficient layer transfer.

How Layer Deduplication Works

Docker images are built from layers. When you push an image:
  1. Layer enumeration: Unregistry lists all layers in the image
  2. Existence check: Queries target machine for which layers it already has
  3. Delta transfer: Only missing layers are transferred
  4. Reassembly: Target machine reassembles the image from existing + new layers
Example scenario:
FROM node:20-alpine      # Layer 1: 50 MB (shared with many images)
WORKDIR /app             # Layer 2: 100 bytes
COPY package.json .      # Layer 3: 1 KB
RUN npm install          # Layer 4: 150 MB
COPY . .                 # Layer 5: 5 MB
First push of this image:
  • Transfers all 205 MB
Second push after changing source code:
  • Only transfers Layer 5 (5 MB)
  • Layers 1-4 are already on the machine
This makes iterative development extremely fast.
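The savings in the scenario above can be checked with a little arithmetic (layer sizes taken from the example Dockerfile):

```python
# Transfer volume for the example image: the first push sends everything,
# while a code-only change re-sends just the final COPY layer.

layers = {                        # sizes in bytes, from the example above
    "node:20-alpine": 50_000_000,
    "WORKDIR": 100,
    "package.json": 1_000,
    "npm install": 150_000_000,
    "app source": 5_000_000,
}

first_push = sum(layers.values())
second_push = layers["app source"]  # only the changed layer is missing

print(f"first push:  {first_push / 1e6:.0f} MB")   # ~205 MB
print(f"second push: {second_push / 1e6:.0f} MB")  # 5 MB
```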

Maximizing Layer Reuse

Optimize your Dockerfile for layer caching:
# Good: Dependencies change less frequently than code
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
COPY . .
CMD ["npm", "start"]
# Bad: Code changes invalidate npm install layer
FROM node:20-alpine
WORKDIR /app
COPY . .                 # Changes often
RUN npm ci --production  # Rebuilds every time
CMD ["npm", "start"]
Follow Docker best practices for layer caching.

Base Image Sharing

Base images (like node:20-alpine, python:3.11) are typically large but rarely change. Once a base image exists on a machine, all services built from that base share those layers. If you have 10 Node.js services on a machine:
  • First service: Pulls full node:20-alpine (50 MB)
  • Services 2-10: Reuse those layers (0 MB transfer)

Using External Registries

You can use external registries alongside Unregistry:

Docker Hub

compose.yaml
services:
  web:
    image: username/myapp:latest
# Build and push to Docker Hub
uc build --push-registry
This uses standard Docker push to your configured registry.

Private Registries

compose.yaml
services:
  api:
    image: registry.example.com/myteam/api:v1.2.3
# Authenticate with your private registry
docker login registry.example.com

# Build and push
uc build --push-registry

GitHub Container Registry

compose.yaml
services:
  worker:
    image: ghcr.io/username/worker:latest
# Authenticate with GitHub (requires personal access token)
echo $GITHUB_TOKEN | docker login ghcr.io -u username --password-stdin

# Build and push
uc build --push-registry

Mixed Strategy

You can mix internal and external registries:
compose.yaml
services:
  # Push to cluster via Unregistry (private development)
  web:
    build: .
    image: myapp/web:dev

  # Pull from external registry (public base service)
  database:
    image: postgres:16-alpine

  # Pull from private registry (shared company image)
  cache:
    image: registry.company.com/redis:custom
# Build web and push via Unregistry
uc build web --push

# Deploy all services (database pulls from Docker Hub, cache from company registry)
uc deploy

Image Management Commands

List Images on Machines

# List images on all machines
uc image ls

# List images on specific machine
uc image ls -m machine-name
Output shows:
  • Image repository and tag
  • Image ID
  • Size
  • Which machines have the image

Check Image Availability

Before deploying, verify images exist:
# Check if image exists in cluster
uc image ls | grep myapp/web:latest

# Push if missing
uc image push myapp/web:latest

Remove Unused Images

Uncloud doesn’t automatically prune images. Clean up manually:
# On specific machine via SSH
ssh user@machine
docker image prune -a --filter "until=720h"  # Remove images older than 30 days

Performance Considerations

Transfer Speed

Image transfer speed depends on:
  • Network bandwidth: Limited by machine-to-machine WireGuard throughput
  • Layer size: Larger layers take longer to compress and transfer
  • Compression: Unregistry compresses layers before transfer
Typical transfer speeds:
  • Local network: 50-100 MB/s
  • Cross-region: 5-20 MB/s (limited by internet bandwidth)

Build Caching

Docker build cache is local to your machine. Builds always happen on your local Docker daemon, so:
  • Layer cache persists between builds
  • --no-cache forces rebuilding all layers
  • --pull ensures base images are fresh

Storage Requirements

Each machine stores:
  • Images for services running on that machine
  • Base images pulled for builds
  • Dangling images from old deployments
Monitor disk usage:
# Check Docker disk usage
ssh user@machine
docker system df
Reclaim space periodically:
docker system prune -a

Troubleshooting Image Issues

Image Not Found

If deployment fails with “image not found”:
  1. Check image exists locally: Run docker images on your machine
  2. Push to cluster: Run uc image push <image-name>
  3. Verify on target machine: SSH and run docker images
  4. Check pull policy: Ensure it’s not set to never

Push Failures

If uc build --push fails:
  1. Check connectivity: Verify WireGuard mesh is working (uc wg show)
  2. Check Unregistry: Ensure port 5000 is accessible on target machines
  3. Verify containerd: Confirm Docker uses containerd image store
  4. Check logs: Review Uncloud daemon logs on target machine

Slow Transfers

If image transfers are slow:
  1. Check network bandwidth: Test with iperf3 between machines
  2. Optimize Dockerfile: Reduce layer sizes, merge RUN commands
  3. Use multi-stage builds: Keep final image small
  4. Pre-push base images: Push large base images once, reuse for all services
Unregistry requires Docker with containerd image store. If your Docker installation uses the legacy image store, you must use external registries (--push-registry) or manually transfer images using docker save and docker load.
