# Quick Start Guide
This guide will help you launch DigiPathAI and perform your first whole slide image segmentation.

## Prerequisites

Before starting, ensure you have:

- DigiPathAI installed (see Installation)
- At least one WSI file in a supported format (TIFF, SVS, MRXS, NDPI, etc.)
- For AI segmentation: GPU dependencies installed via `pip install "DigiPathAI[gpu]"`
## Launch the Server
### Navigate to Your Slides Directory
Open a terminal and navigate to the directory containing your whole slide images:
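For example, assuming a hypothetical `~/slides` folder (replace with your own path):

```shell
# Hypothetical location; substitute the folder that holds your WSI files.
mkdir -p "$HOME/slides"   # created here only so the example runs as-is
cd "$HOME/slides"
```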
### Start the DigiPathAI Server
Launch the server with the `digipathai` command:

```shell
digipathai
```

- `host`: IP address to bind to (default: `localhost` / `127.0.0.1`)
- `port`: Port number to listen on (default: `8080`)

The server runs in viewer-only mode by default if GPU dependencies are not installed. To explicitly disable segmentation features, use the `--viewer-only` flag.

### Open the Web Interface
Open your web browser and navigate to the server address (with the defaults above, `http://localhost:8080`). You should see the DigiPathAI file browser listing all available WSI files in your directory.
## View a Whole Slide Image
Click on any WSI file in the browser to open the interactive viewer. You can:
- Pan: Click and drag to move around the image
- Zoom: Use mouse wheel or zoom controls
- Navigate: Use the navigation pane in the corner
## Running Segmentation
DigiPathAI provides two ways to perform AI-powered tissue segmentation.

### Method 1: Through the Web UI
#### Select Tissue Type
Choose the appropriate tissue type from the dropdown menu:
- Colon: For colorectal tissue (DigestPath models)
- Liver: For liver tissue (PAIP models)
- Breast: For breast tissue (Camelyon models)
#### Run Segmentation
Click the “Segment” button to start the AI analysis. The process will:
- Download pre-trained models (first time only)
- Process the image in patches
- Generate segmentation mask
- Display results overlaid on the original image
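The patch-based step can be pictured with a little sliding-window arithmetic. This is an illustrative sketch, not DigiPathAI's internal tiling code; `patch_size` and `stride_size` mirror the API parameters described under Method 2:

```python
# Illustrative sliding-window arithmetic; not DigiPathAI's internal code.
def num_patches(dim, patch_size=256, stride_size=128):
    """How many patches cover one image dimension of `dim` pixels."""
    if dim <= patch_size:
        return 1
    return (dim - patch_size) // stride_size + 1

# A 1024-px dimension with the default 50% overlap vs. no overlap:
print(num_patches(1024))                   # 7 overlapping patches
print(num_patches(1024, stride_size=256))  # 4 non-overlapping patches
```

A smaller stride produces more overlapping patches, which smooths the stitched mask at the cost of more inference work.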
Model files are cached in `~/.DigiPathAI/` and automatically downloaded on first use. This may take several minutes depending on your connection.

### Method 2: Using the Python API
For programmatic access and custom workflows, use the Python API.

#### API Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `img_path` | str | Required | Path to WSI file (TIFF, SVS, etc.) |
| `patch_size` | int | 256 | Size of patches for inference |
| `stride_size` | int | 128 | Stride between patches (lower = more overlap) |
| `batch_size` | int | 32 | Number of patches per GPU batch |
| `quick` | bool | True | Single model (True) vs ensemble (False) |
| `tta_list` | list | None | Test-time augmentations to apply |
| `crf` | bool | False | Apply CRF post-processing |
| `mode` | str | 'colon' | Tissue type: 'colon', 'liver', 'breast' |
| `model` | str | 'dense' | Model: 'dense', 'inception', 'deeplabv3' |
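Putting the table together, a call might look like the following sketch. The `getSegmentation` import follows the project's published example, but treat the exact signature as something to confirm against the API Reference:

```python
# Sketch only: parameter names follow the table above; confirm the exact
# signature in the API Reference before relying on it.
from DigiPathAI.Segmentation import getSegmentation

prediction = getSegmentation(
    img_path="slide.svs",   # hypothetical filename; point this at your WSI
    patch_size=256,
    stride_size=128,        # lower stride = more overlap, slower but smoother
    batch_size=32,
    quick=True,             # True = single model, False = full ensemble
    tta_list=None,          # no test-time augmentation
    crf=False,              # skip CRF post-processing
    mode="colon",           # or "liver" / "breast"
    model="dense",          # or "inception" / "deeplabv3"
)
```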
## Understanding the Output
### Generated Files
When segmentation completes, DigiPathAI creates three files:

- **Probability Map** (`*-dgai-probs.tiff`): Raw probability values (0-1) for each pixel
- **Binary Mask** (`*-dgai-mask.tiff`): Thresholded segmentation (0 or 255)
- **Uncertainty Map** (`*-dgai-uncertainty.tiff`): Model uncertainty/variance
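The relationship between the probability map and the binary mask can be sketched with NumPy. The 0.5 threshold here is an assumption for illustration; DigiPathAI's actual cutoff may differ:

```python
import numpy as np

# Stand-in values for a tiny probability map (the probs TIFF holds 0-1 floats)
probs = np.array([[0.10, 0.80],
                  [0.60, 0.30]])

# Threshold at 0.5 (assumed) to get a 0/255 mask like the binary mask file
mask = np.where(probs >= 0.5, 255, 0).astype(np.uint8)
print(mask.tolist())  # [[0, 255], [255, 0]]
```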
### Interpreting Results
- White regions in the mask indicate detected cancerous tissue
- Black regions indicate normal/background tissue
- Uncertainty map shows where the model is less confident (brighter = more uncertain)
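One way to picture "brighter = more uncertain": if uncertainty is taken as the per-pixel variance across several model predictions (an assumption about how the map is derived), pixels where the models disagree light up:

```python
import numpy as np

# Three hypothetical model predictions for a 1x2-pixel region
preds = np.array([
    [[0.9, 0.2]],
    [[0.8, 0.5]],
    [[0.9, 0.8]],
])

# Per-pixel variance: low where models agree, high where they disagree
uncertainty = preds.var(axis=0)
print(uncertainty.round(4).tolist())  # [[0.0022, 0.06]]
```

The first pixel (predictions 0.9, 0.8, 0.9) is nearly certain; the second (0.2, 0.5, 0.8) would appear much brighter in the uncertainty map.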
## Performance Optimization
### GPU Memory Management
If you encounter out-of-memory errors, reduce `batch_size` (fewer patches per GPU batch) or use a smaller `patch_size`.

### Speed vs Accuracy Trade-offs

- `quick=True` runs a single model; `quick=False` runs the full ensemble for better accuracy at a higher runtime cost
- A larger `stride_size` means less patch overlap and faster inference
- Test-time augmentation (`tta_list`) and CRF post-processing (`crf=True`) can refine results but add processing time
## Advanced Server Options
The `digipathai` command supports additional configuration. Run `digipathai --help` to see all available options (defined in `main_server.py`).
## Next Steps

- **API Reference**: Detailed documentation of all functions and parameters
- **User Guide**: Learn how to run segmentation and understand results
- **Tissue Types**: Explore models for colon, liver, and breast cancer
- **Contributing**: Contribute to the project on GitHub