
Install from PyPI

The simplest way to install GPU Memory Profiler is from PyPI:
pip install gpu-memory-profiler
The base installation includes core profiling functionality for both PyTorch and TensorFlow. Framework-specific and visualization dependencies are optional.

Optional dependencies

GPU Memory Profiler supports several optional dependency groups that you can install based on your needs:

Visualization support

For generating plots, charts, and interactive dashboards:
pip install gpu-memory-profiler[viz]
  • matplotlib>=3.3.0
  • seaborn>=0.11.0
  • plotly>=5.0.0
  • dash>=2.0.0
  • dash-bootstrap-components>=1.6.0

Framework extras

Install framework-specific dependencies:
# PyTorch support
pip install gpu-memory-profiler[torch]
# TensorFlow support
pip install gpu-memory-profiler[tf]

Terminal UI

For the interactive Textual dashboard:
pip install gpu-memory-profiler[tui]
  • textual>=0.57.0
  • pyfiglet>=1.0.2

Development tools

For contributing to the project:
pip install gpu-memory-profiler[dev]
  • pytest>=8.0.0
  • pytest-cov>=2.10.0
  • black>=21.0.0
  • flake8>=3.8.0
  • mypy>=0.910
  • isort>=5.9.0
  • pre-commit>=2.15.0
  • And more development dependencies

Testing dependencies

For running the test suite:
pip install gpu-memory-profiler[test]
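If you are unsure which extras are already satisfied in your environment, a quick standard-library check can tell you. The module names below are the top-level imports of the packages listed in the dependency groups above; adjust the tuple to the extras you care about:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

# Top-level modules pulled in by the viz, framework, and tui extras.
for mod in ("matplotlib", "plotly", "dash", "torch", "tensorflow", "textual"):
    status = "installed" if has_module(mod) else "missing"
    print(f"{mod}: {status}")
```

`find_spec` only locates the module; it does not import it, so this check is cheap even for heavy packages like torch or tensorflow.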

Combined installation

You can combine multiple extras in a single command:
# Install with visualization and PyTorch support
pip install gpu-memory-profiler[viz,torch]

# Install with all frameworks, TUI, and visualization
pip install gpu-memory-profiler[all,tui,viz]

# Install everything including dev tools
pip install gpu-memory-profiler[all,viz,tui,dev]

Install from source

For the latest development version or to contribute to the project:
1. Clone the repository

git clone https://github.com/Silas-Asamoah/gpu-memory-profiler.git
cd gpu-memory-profiler
2. Install in development mode

# Basic installation
pip install -e .

# With optional dependencies
pip install -e .[viz,torch,tui]
The -e flag installs the package in editable mode, so changes to the source code are immediately reflected.
3. Verify the installation

python -c "import gpumemprof; print(gpumemprof.__version__)"
python -c "import tfmemprof; print(tfmemprof.__version__)"

Development setup

For active development with all tools:
1. Set up a virtual environment

python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
2. Install with dev dependencies

pip install -e .[dev,test,all]
3. Install pre-commit hooks

pre-commit install
This sets up automatic code formatting and linting on commit.
4. Run the tests

pytest tests/

System requirements

Required

  • Python: 3.10 or later
  • Operating System: Linux, macOS, or Windows
  • Core dependencies:
    • numpy>=1.19.0
    • pandas>=1.2.0
    • psutil>=5.8.0
    • scipy>=1.7.0

Optional

  • CUDA-capable GPU: For GPU memory profiling (automatic fallback to CPU mode)
  • PyTorch: 1.8+ for PyTorch profiling
  • TensorFlow: 2.4+ for TensorFlow profiling
  • CUDA Toolkit: Recommended for GPU support

Verify GPU support

Check if CUDA is available:
import torch
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"CUDA version: {torch.version.cuda}")

CPU compatibility mode

Don’t have a GPU? No problem! GPU Memory Profiler automatically falls back to CPU memory profiling using psutil when CUDA is not available.
The CPU compatibility mode provides:
  • RSS (Resident Set Size) memory tracking
  • Same API as GPU profiling
  • CSV/JSON export functionality
  • Full CLI and TUI support
# Works on any system, with or without GPU
gpumemprof monitor --interval 1.0
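In CPU compatibility mode the profiler reports RSS as measured by psutil. As a rough illustration of what that number means (a sketch only, not the profiler's internal code), you can sample your own process's RSS directly:

```python
import sys

def rss_bytes() -> int:
    """Resident set size (RSS) of the current process, in bytes."""
    try:
        import psutil  # a core dependency of gpu-memory-profiler
        return psutil.Process().memory_info().rss
    except ImportError:
        # Standard-library fallback (Unix only): peak RSS from getrusage.
        # Linux reports ru_maxrss in kilobytes; macOS reports bytes.
        import resource
        factor = 1 if sys.platform == "darwin" else 1024
        return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * factor

print(f"RSS: {rss_bytes() / 1e6:.1f} MB")
```

Note that the fallback reports peak rather than current RSS; psutil's `memory_info().rss` is the live value the profiler's CPU mode tracks.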

Command-line tools

After installation, three command-line tools are available:

gpumemprof

PyTorch profiling CLI
gpumemprof --help

tfmemprof

TensorFlow profiling CLI
tfmemprof --help

gpu-profiler

Interactive TUI dashboard
gpu-profiler

Troubleshooting

PyTorch not found

PyTorch is required for the PyTorch profiling features. Install it with:
pip install gpu-memory-profiler[torch]
Or follow the official PyTorch installation guide: https://pytorch.org/get-started/locally/

TensorFlow not found

TensorFlow is required for the TensorFlow profiling features. Install it with:
pip install gpu-memory-profiler[tf]
Or follow the official TensorFlow installation guide.

Visualization dependencies missing

Visualization features require optional dependencies. Install them with:
pip install gpu-memory-profiler[viz]

CUDA not detected

If you have a CUDA-capable GPU but torch.cuda.is_available() returns False:
  1. Verify that the NVIDIA drivers are installed: nvidia-smi
  2. Install a CUDA-enabled PyTorch build, for example:
    pip install torch --index-url https://download.pytorch.org/whl/cu118
  3. Check that your CUDA version is compatible with your GPU and driver
Alternatively, use CPU compatibility mode (automatic fallback).

Next steps

Quick start guide

Get your first memory profile running in 5 minutes

CLI reference

Learn about all available command-line options
