Builds a Docker image from a `cog.yaml` configuration. The resulting image can be run locally with `cog predict` or pushed to a registry with `cog push`.
## Usage
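The usage line, assuming the standard Cobra-style `[flags]` form:

```shell
cog build [flags]
```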
## Flags
- `-t, --tag string` - A name for the built image in the form `repository:tag`
- `-f, --file string` - The name of the config file (default `cog.yaml`)
- `--no-cache` - Do not use cache when building the image
- `--separate-weights` - Separate model weights from code in image layers. This creates a more efficient image structure in which weights sit in their own layer and can be cached independently of your code.
- `--progress string` - Set the type of build progress output: `auto`, `tty`, `plain`, or `quiet`
- `--secret stringArray` - Secrets to pass to the build environment in the form `id=foo,src=/path/to/file`. Can be specified multiple times.
- `--openapi-schema string` - Load the OpenAPI schema from a file instead of generating it
- `--use-cog-base-image` - Use a pre-built Cog base image for faster cold boots
- `--use-cuda-base-image string` - Use the Nvidia CUDA base image: `true`, `false`, or `auto`. Setting this to `false` uses a plain Python base image, which is smaller but may cause problems for non-torch projects.

## Examples
### Build with default settings
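Run the command from the directory containing your `cog.yaml`:

```shell
# Build using the settings in cog.yaml; the image gets a default name
cog build
```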
### Build and tag the image

Build an image tagged `my-model:latest` that you can run with Docker:
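A sketch using the `-t` flag, assuming Cog's HTTP server listens on its default port of 5000:

```shell
cog build -t my-model:latest
# Run the image's HTTP prediction server locally
docker run -p 5000:5000 my-model:latest
```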
### Build without using cache
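A minimal sketch using the `--no-cache` flag:

```shell
# Rebuild every layer from scratch, ignoring Docker's layer cache
cog build --no-cache
```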
Useful when you want to ensure a fresh build.

### Build with model weights in a separate layer
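A sketch using the `--separate-weights` flag:

```shell
# Put model weights in their own image layer, separate from code
cog build --separate-weights
```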
This optimization is especially useful for large models:

- Faster rebuilds when only code changes
- More efficient image storage
- Better layer caching
### Build with secrets
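A sketch using the `id=...,src=...` form described above; the secret id and file path here are hypothetical:

```shell
# Mount a credentials file during the build without baking it into the image
cog build --secret id=pip-creds,src=/path/to/pip.conf
```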
Pass secrets to the build process (useful for private package registries).

### Build with custom progress output
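Plain output is easier to capture in CI logs; a sketch using the `--progress` flag:

```shell
# Emit plain, line-by-line build output instead of the interactive display
cog build --progress plain
```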
## How It Works
When you run `cog build`, Cog:

1. Reads your `cog.yaml` configuration
2. Generates an optimized Dockerfile with:
   - The correct base image (CUDA-enabled if needed)
   - System package installations
   - Python environment setup
   - Your application code
3. Builds the Docker image using BuildKit
4. Tags the image with the specified name
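As a concrete illustration, a minimal `cog.yaml` along these lines (the versions and predictor name are hypothetical) is what drives the generated Dockerfile:

```yaml
build:
  gpu: true              # selects a CUDA-enabled base image
  python_version: "3.11"
  python_packages:
    - "torch==2.1.0"     # hypothetical pinned dependency
predict: "predict.py:Predictor"
```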
## Environment Variables
The build process respects these environment variables:

- `BUILDKIT_PROGRESS` - Sets the default progress output type
- `DOCKER_BUILDKIT` - Enables BuildKit (recommended)
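For example, both variables can be set inline for a single build:

```shell
# Enable BuildKit and default to plain progress output for this invocation
DOCKER_BUILDKIT=1 BUILDKIT_PROGRESS=plain cog build
```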
## Notes
- The first build may take several minutes as it downloads base images and installs dependencies
- Subsequent builds are faster due to Docker layer caching
- Built images include the Cog HTTP server for serving predictions
- Images can be run locally or pushed to any Docker registry
## See Also
- `cog push` - Build and push to a registry in one command
- `cog predict` - Run predictions on a built image
- `cog.yaml` reference - Configuration file format