Overview
The DeviceManager class provides utility methods for automatically detecting and selecting the best available PyTorch device for model inference. It supports MPS (Apple Silicon), CUDA (NVIDIA GPUs), and CPU fallback.
Class Definition
Static Methods
get_device() -> torch.device
Automatically detects and returns the optimal PyTorch device based on hardware availability.
Returns: A PyTorch device object representing the selected compute device. The selection follows this priority order:
- MPS (Metal Performance Shaders) - Apple Silicon GPUs
- CUDA - NVIDIA GPUs
- CPU - Fallback option
Device Selection Logic
The device selection is automatic and prioritizes GPU acceleration when available: MPS is checked first for Apple Silicon users, followed by CUDA for NVIDIA GPU users, with CPU as the final fallback.
Behavior
- Checks for MPS availability (Apple Silicon M1/M2/M3 chips)
- Falls back to CUDA if MPS is not available
- Falls back to CPU if neither GPU option is available
- Automatically logs the selected device using log_device()
log_device(device: torch.device)
Logs information about the selected device to the console.
The PyTorch device to log information about.
Logging Behavior
- MPS devices: Logs as “MPS”
- CUDA devices: Logs the specific GPU name (e.g., “NVIDIA GeForce RTX 3080”)
- CPU devices: Logs as “CPU”
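The two static methods described above might be implemented along these lines. This is a sketch based on the documented behavior, not the actual source (which lives at the path in the Source Reference):

```python
import logging

import torch

logger = logging.getLogger(__name__)


class DeviceManager:
    """Selects and reports the best available PyTorch device."""

    @staticmethod
    def get_device() -> torch.device:
        # Priority: MPS (Apple Silicon) > CUDA (NVIDIA) > CPU.
        if torch.backends.mps.is_available():
            device = torch.device("mps")
        elif torch.cuda.is_available():
            device = torch.device("cuda")
        else:
            device = torch.device("cpu")
        DeviceManager.log_device(device)
        return device

    @staticmethod
    def log_device(device: torch.device) -> None:
        if device.type == "cuda":
            # For CUDA, report the specific GPU model name.
            logger.info("Using device: %s", torch.cuda.get_device_name(0))
        else:
            # MPS and CPU are reported by type name.
            logger.info("Using device: %s", device.type.upper())
```

Keeping the availability checks in this order guarantees that the method always returns a usable device, since the CPU branch is unconditional.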
Return Specifications
| Method | Return Type | Possible Values | Description |
|---|---|---|---|
| get_device() | torch.device | mps, cuda, cpu | The optimal available device |
| log_device() | None | N/A | Logs device info, no return value |
Usage Examples
Basic Usage
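A minimal usage sketch. The try/except fallback is only so the snippet runs outside the project; inside the codebase you would import DeviceManager directly:

```python
import torch

try:
    from trash_classificator.segmentation.device_manager import DeviceManager
except ImportError:
    class DeviceManager:  # minimal stand-in for illustration
        @staticmethod
        def get_device() -> torch.device:
            if torch.backends.mps.is_available():
                return torch.device("mps")
            if torch.cuda.is_available():
                return torch.device("cuda")
            return torch.device("cpu")

# Select the optimal device and move a model onto it.
device = DeviceManager.get_device()
model = torch.nn.Linear(10, 2).to(device)
```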
Manual Device Logging
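When the device is chosen elsewhere, log_device() can be called directly. Again, the fallback class is only illustrative:

```python
import logging

import torch

logging.basicConfig(level=logging.INFO)

try:
    from trash_classificator.segmentation.device_manager import DeviceManager
except ImportError:
    class DeviceManager:  # minimal stand-in for illustration
        @staticmethod
        def log_device(device: torch.device) -> None:
            if device.type == "cuda":
                logging.info("Using device: %s", torch.cuda.get_device_name(0))
            else:
                logging.info("Using device: %s", device.type.upper())

# Log an explicitly chosen device without going through get_device().
cpu_device = torch.device("cpu")
DeviceManager.log_device(cpu_device)
```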
Check Device Type
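The returned device exposes its type via device.type, which is useful for branching on hardware, for example to pick a batch size (the values below are illustrative):

```python
import torch

try:
    from trash_classificator.segmentation.device_manager import DeviceManager
except ImportError:
    class DeviceManager:  # minimal stand-in for illustration
        @staticmethod
        def get_device() -> torch.device:
            if torch.backends.mps.is_available():
                return torch.device("mps")
            if torch.cuda.is_available():
                return torch.device("cuda")
            return torch.device("cpu")

device = DeviceManager.get_device()

# Branch on the device type, e.g. to tune batch size.
if device.type == "cpu":
    batch_size = 4   # smaller batches on CPU
else:
    batch_size = 32  # GPU (MPS or CUDA) can handle larger batches
```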
Complete Pipeline Example
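An end-to-end inference sketch: select the device once, move both the model and its inputs there, and run a forward pass. The toy model is a placeholder for a real segmentation or classification network:

```python
import torch

try:
    from trash_classificator.segmentation.device_manager import DeviceManager
except ImportError:
    class DeviceManager:  # minimal stand-in for illustration
        @staticmethod
        def get_device() -> torch.device:
            if torch.backends.mps.is_available():
                return torch.device("mps")
            if torch.cuda.is_available():
                return torch.device("cuda")
            return torch.device("cpu")

# 1. Select the device once, up front.
device = DeviceManager.get_device()

# 2. Build a toy model and move it to the device.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
).to(device).eval()

# 3. Run inference with inputs on the same device as the model.
with torch.no_grad():
    x = torch.randn(4, 8, device=device)
    logits = model(x)

print(logits.shape)  # torch.Size([4, 2])
```

Keeping model and inputs on the same device avoids the runtime errors PyTorch raises when tensors on different devices are combined.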
Device Selection Priority
The device selection follows this priority order:

1. MPS (Metal Performance Shaders)
   - Available on: Apple Silicon (M1, M2, M3+)
   - Best for: MacBook Pro, Mac Studio, Mac Mini with Apple chips
   - Performance: Excellent GPU acceleration
2. CUDA
   - Available on: Systems with NVIDIA GPUs
   - Best for: Workstations and servers with NVIDIA graphics cards
   - Performance: Excellent GPU acceleration
3. CPU
   - Available on: All systems
   - Best for: Systems without GPU support
   - Performance: Slower than GPU options
GPU acceleration (MPS or CUDA) can provide 5-10x faster inference compared to CPU processing for deep learning models.
Logging Format
The DeviceManager uses Python’s standard logging module to report the selected device.
Source Reference
Implementation: trash_classificator/segmentation/device_manager.py:6-27