Local development is a convenient way to run MaxDiffusion on a single host. This deployment method doesn’t scale to multiple hosts but is ideal for development and testing.

Prerequisites

Minimum requirements:
  • Ubuntu 22.04
  • Python 3.12
  • TensorFlow >= 2.12.0
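You can verify these minimums before installing anything. The sketch below (the version thresholds come straight from the list above; the function name is illustrative) checks the Python version and, where `/etc/os-release` exists, the OS version:

```shell
# Pre-flight check for the minimum requirements listed above.
check_prereqs() {
  if python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 12) else 1)' 2>/dev/null; then
    echo "Python >= 3.12: OK"
  else
    echo "Python >= 3.12: not met ($(python3 -V 2>&1 || echo 'python3 missing'))"
  fi
  # Ubuntu version, when /etc/os-release is available
  grep '^VERSION_ID' /etc/os-release 2>/dev/null || echo "OS version: check manually"
}
check_prereqs
```

The TensorFlow requirement is handled by the setup script in a later step, so it isn't checked here.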

Running on Cloud TPUs

1. Create and SSH to a single-host TPU

Create a single-host TPU (v6e-8) using the following command:
gcloud compute tpus tpu-vm create TPU_NAME \
  --zone=ZONE \
  --accelerator-type=v6e-8 \
  --version=v2-alpha-tpuv6e
Then SSH into your TPU:
gcloud compute tpus tpu-vm ssh TPU_NAME --zone=ZONE
Find zones that support v6e (Trillium) TPUs in the regions and zones documentation.
We recommend using the base VM image “v2-alpha-tpuv6e”, which meets the version requirements.
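You can also query zone availability directly with gcloud. A hedged sketch (assumes the Google Cloud SDK is installed and authenticated; the zone name is an example only, and the function name is illustrative):

```shell
# List TPU accelerator types available in a zone and filter for v6e.
list_accelerators() {
  if command -v gcloud >/dev/null 2>&1; then
    gcloud compute tpus accelerator-types list --zone="$1" 2>/dev/null \
      | grep v6e || echo "no v6e capacity listed in $1"
  else
    echo "gcloud not found; install the Google Cloud SDK first"
  fi
}
list_accelerators "us-east5-b"
```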
2. Clone MaxDiffusion

Clone the MaxDiffusion repository on your TPU VM:
git clone https://github.com/AI-Hypercomputer/maxdiffusion.git
cd maxdiffusion
3. Install dependencies

Within the root directory of the MaxDiffusion git repo, install dependencies:
# If a Python 3.12+ virtual environment doesn't already exist,
# you'll need to run the install command three times.
bash setup.sh MODE=stable DEVICE=tpu
The setup script will:
  • Install system dependencies
  • Install JAX for TPU
  • Install MaxDiffusion and its requirements
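Once setup finishes, a quick sanity check (a sketch, assuming the virtual environment created by setup.sh is active; the function name is illustrative) confirms that JAX was installed and can enumerate devices:

```shell
# Verify that JAX is importable and can see devices.
# Run inside the virtual environment created by setup.sh.
check_jax() {
  if python3 -c "import jax" >/dev/null 2>&1; then
    python3 -c "import jax; print('jax', jax.__version__, '-', jax.device_count(), 'device(s)')"
  else
    echo "jax not importable; activate the virtual environment first"
  fi
}
check_jax
```

On a correctly provisioned v6e-8 host, the device count should reflect the TPU chips; on any other machine, JAX falls back to CPU devices.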
4. Activate your virtual environment

Activate the virtual environment created during setup:
# Replace with your virtual environment name if not using this default name
venv_name="maxdiffusion_venv"
source ~/$venv_name/bin/activate
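To confirm the activation took effect, you can check the `VIRTUAL_ENV` variable that the `activate` script sets (a small sketch; the function name is illustrative):

```shell
# Confirm a virtual environment is active: `activate` sets VIRTUAL_ENV,
# and `python` should then resolve inside the venv.
verify_venv() {
  if [ -n "${VIRTUAL_ENV:-}" ]; then
    echo "active venv: $VIRTUAL_ENV (python: $(command -v python))"
  else
    echo "no virtual environment active; run the source command above"
  fi
}
verify_venv
```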

Next steps

After completing the setup, you can:
  • Run inference on various models (Flux, SDXL, Stable Diffusion)
  • Train models on your single-host TPU
  • Scale to multihost deployment when needed
For production workloads and multi-host training, see the multihost deployment or XPK deployment guides.
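As a starting point for running inference, the sketch below shows the general shape of an SDXL generation run. The entry-point script and config path here are assumptions based on the repository layout and may differ between releases, so confirm the current command against the MaxDiffusion README:

```shell
# Illustrative only: the entry point and config names are assumptions;
# check the MaxDiffusion README for the current command.
run_sdxl_demo() {
  if [ -f src/maxdiffusion/generate_sdxl.py ]; then
    python src/maxdiffusion/generate_sdxl.py \
      src/maxdiffusion/configs/base_xl.yml run_name="local_test"
  else
    echo "generate_sdxl.py not found; see the MaxDiffusion README"
  fi
}
run_sdxl_demo
```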
