OVHcloud AI & Machine Learning provides three managed services for the full AI development lifecycle: AI Notebooks for interactive development, AI Training for submitting large-scale training jobs, and AI Deploy for serving trained models as API endpoints. All three run on OVHcloud GPU infrastructure and integrate with OVHcloud Object Storage for data.

AI Notebooks

AI Notebooks are managed JupyterLab and VSCode environments with pre-installed AI frameworks and dedicated CPU or GPU resources. You do not need to install or maintain any software — launch a notebook, code, and stop it when you are done. You are billed only while the notebook is running.

Supported frameworks

AI Notebooks ship with pre-configured images for the most common frameworks:
  • TensorFlow
  • PyTorch
  • Hugging Face Transformers
  • Scikit-learn
  • MXNet
  • Conda (custom environment)
  • FastAI
Each framework is available in multiple versions. You choose the version at notebook creation time.
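The framework and version selected in the Control Panel also map onto the ovhai CLI's notebook run command. As a hedged sketch, the --framework-version flag below is an assumption about the CLI interface; confirm it with `ovhai notebook run --help`:

```shell
# Launch JupyterLab with a specific framework version
# (--framework-version is an assumed flag name; verify with --help)
ovhai notebook run pytorch jupyterlab \
  --framework-version 2.1.0 \
  --name versioned-notebook \
  --gpu 1
```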

Launching an AI Notebook

  1. Navigate to AI Notebooks: In the OVHcloud Control Panel, go to your Public Cloud project. In the left menu under AI & Machine Learning, click AI Notebooks, then click Create a Notebook.
  2. Name your notebook: Enter a name that makes the notebook easy to identify when you have several running simultaneously.
  3. Select a location: Choose the OVHcloud region where the notebook will run. Different regions may offer different GPU models.
  4. Choose compute resources: Select CPU or GPU resources. Use the + and - buttons to set the number of CPUs or GPUs. GPU resources are billed per GPU per minute. Available GPU types include NVIDIA V100s and other models depending on the region. Check the Public Cloud pricing page for current rates.
  5. Select a framework: Choose the AI framework and version you want pre-installed. For example, select PyTorch 2.x for deep learning workloads.
  6. Choose a code editor: Select JupyterLab for a notebook-first experience or Visual Studio Code for a full IDE environment. Both editors run in your browser and use the same compute resources.
  7. Set access controls: Choose Restricted access to require credentials (username/password or an AI token) before anyone can open the notebook. Avoid Public access for notebooks containing sensitive data or proprietary code.
  8. Configure lifecycle settings: By default, a notebook automatically shuts down after 7 consecutive days in a RUNNING state. Enable Automatic Restart to restart it every 7 days automatically. Contact OVHcloud support to extend the period to 28 days.
  9. Attach data volumes (optional): Mount OVHcloud Object Storage containers or Git repositories into the notebook. This lets you access training datasets and save results persistently even after stopping the notebook.
  10. Order the notebook: Review your configuration and click Order. The notebook URL becomes available once the notebook reaches the RUNNING state. Open the URL and authenticate to begin coding.

Managing notebooks with the ovhai CLI

# Install the CLI (Linux/macOS)
curl https://cli.gra.ai.cloud.ovh.net/install.sh | bash

# Authenticate
ovhai login

# Launch a notebook (PyTorch on 1 GPU)
ovhai notebook run pytorch jupyterlab \
  --name my-pytorch-notebook \
  --gpu 1

# List your notebooks
ovhai notebook list

# Stop a notebook
ovhai notebook stop <NOTEBOOK_UUID>

# Start a stopped notebook
ovhai notebook start <NOTEBOOK_UUID>

# Delete a notebook
ovhai notebook delete <NOTEBOOK_UUID>

Billing and notebook lifecycle

Compute is billed from the moment the Docker image starts pulling (STARTING state) until the notebook reaches STOPPED. Storage charges for the /workspace folder begin after the first 10 GB free and continue for 30 days after the notebook is stopped, then standard Object Storage rates apply.
The notebook lifecycle moves through these states:
  State      Description                                             Billed?
  STARTING   Resources allocated, image pulling, data syncing        Yes
  RUNNING    Notebook accessible, compute in use                     Yes
  STOPPING   Compute released, workspace syncing to Object Storage   Yes
  STOPPED    No compute running, workspace preserved                 Storage only
  FAILED     Notebook ended with a non-zero exit code                No
  DELETING   Notebook being removed                                  No
Pricing example: a notebook with 2 NVIDIA V100s running for 10 hours costs 10 hours × 2 GPUs × €1.93/GPU/hour = €38.60. Rates are shown per hour but billed per minute.
After deletion, the notebook’s local storage is permanently removed. Data stored in your /workspace Object Storage container is not deleted automatically and will continue to be billed until you remove it.
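The per-minute arithmetic above can be checked with a one-line shell calculation (the rate is the example's €1.93/GPU/hour figure; substitute current pricing from the Public Cloud pricing page):

```shell
# Estimate GPU compute cost: hours × GPUs × hourly rate (billed per minute)
hours=10; gpus=2; rate_per_gpu_hour=1.93
awk -v h="$hours" -v g="$gpus" -v r="$rate_per_gpu_hour" \
  'BEGIN { printf "EUR %.2f\n", h * g * r }'
# → EUR 38.60
```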

AI Training

AI Training lets you submit Docker container-based training jobs on dedicated CPU or GPU resources. Jobs run to completion and terminate automatically, making them suited for longer training runs rather than interactive sessions.

Submitting a training job

  1. In the Control Panel, go to AI & Machine Learning > AI Training and click Launch a new Job.
  2. Enter a job name and select a region.
  3. Choose the number of GPUs or CPUs for the job.
  4. Provide a Docker image. You can use:
    • OVHcloud preset images (JupyterLab with TensorFlow or MXNet)
    • A custom image from Docker Hub, GitHub Container Registry, or OVHcloud Managed Private Registry
  5. Configure privacy settings, lifecycle options, and optional data volumes.
  6. Click Order to submit the job.
# Submit a training job with the ovhai CLI
ovhai job run \
  --name my-training-job \
  --gpu 2 \
  --volume my-data-container@GRA:/workspace/data:ro \
  --volume my-results@GRA:/workspace/results:rw \
  pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime \
  -- python train.py --epochs 50
Jobs are billed from start to finish. Like notebooks, compute billing is per GPU or CPU per minute.
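Since jobs run unattended, you typically monitor them from the CLI. A hedged sketch, assuming the job subcommands mirror the notebook ones shown earlier (check `ovhai job --help` for the exact interface):

```shell
# List jobs and their states to find the UUID
ovhai job list

# Stream a job's stdout/stderr (-f follows the log, like tail -f)
ovhai job logs -f <JOB_UUID>

# Stop a job early if needed; billing ends when compute is released
ovhai job stop <JOB_UUID>
```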

AI Deploy

AI Deploy lets you expose trained models or AI applications as HTTP API endpoints with autoscaling. Each deployment runs a Docker container on one or more replicas.

Deploying an application

  1. In the Control Panel, go to AI & Machine Learning > AI Deploy and click Deploy an app.
  2. Name your application and choose a region.
  3. Select the compute resources (1–4 GPUs or 1–12 CPUs per replica).
  4. Provide the Docker image to deploy (e.g. ovhcom/ai-deploy-hello-world).
  5. Set the number of replicas and choose a scaling strategy:
    • Static scaling — fixed replica count (1–10 replicas)
    • Autoscaling — scale based on CPU or RAM usage, down to 0 replicas when idle
  6. Set access to Restricted and create an AI token to authenticate API calls.
  7. Click Order now.
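The same deployment can be scripted. As a hedged sketch, assuming an `ovhai app run` subcommand analogous to `notebook run` and `job run` (flag names are assumptions; check `ovhai app run --help`):

```shell
# Deploy the hello-world image on one GPU replica
# (subcommand and flags are assumptions based on the notebook/job commands)
ovhai app run \
  --name my-model-api \
  --gpu 1 \
  ovhcom/ai-deploy-hello-world
```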

Calling a deployed endpoint

# Export your AI token
export AI_APP_TOKEN=your_token_here

# Call the deployed endpoint
curl --request POST \
  --url https://YOUR-APP-UUID.app.gra.ai.cloud.ovh.net \
  --header "Authorization: Bearer $AI_APP_TOKEN" \
  --header "Content-Type: application/json" \
  --data '"your_input"'

GPU instance types

AI services use dedicated GPU resources from OVHcloud’s infrastructure. Available GPU models vary by region and include:
  • NVIDIA V100s — general-purpose deep learning and inference
  • NVIDIA A100 — large model training and high-throughput inference (available in select regions)
Check capabilities per region in the AI Training capabilities guide before selecting a region.

Data volumes and Object Storage integration

All three AI services (Notebooks, Training, Deploy) can mount OVHcloud Object Storage containers as volumes. Data is synchronised between Object Storage and the running workload at start and stop time.
# Mount a container as read-only input and a second as read-write output
ovhai job run \
  --volume datasets@GRA:/workspace/input:ro \
  --volume model-outputs@GRA:/workspace/output:rw \
  my-training-image \
  -- python train.py
Mount modes:
  • ro — read-only: data is synced into the workload at start
  • rw — read-write: data is synced back to Object Storage when the workload stops
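Before a container can be mounted as a read-only input, the dataset has to exist in Object Storage. A hedged sketch using the CLI's data subcommand; the argument order shown is an assumption, so verify it with `ovhai data --help`:

```shell
# Upload a local dataset into the "datasets" container in the GRA region
# (argument order is an assumption; verify with `ovhai data upload --help`)
ovhai data upload GRA datasets ./train.csv

# List the container's objects before mounting it as a volume
ovhai data list GRA datasets
```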

Public Cloud compute

Launch GPU instances for AI workloads on OVHcloud Public Cloud infrastructure.

Object Storage

Store datasets and model outputs with S3-compatible object storage.

Containers & Kubernetes

Deploy containerized AI applications on Managed Kubernetes.

OVHcloud API

Automate AI workload management with the OVHcloud API and ovhai CLI.
