AI Notebooks
AI Notebooks are managed JupyterLab and VSCode environments with pre-installed AI frameworks and dedicated CPU or GPU resources. You do not need to install or maintain any software: launch a notebook, write your code, and stop the notebook when you are done. You are billed only while the notebook is running.

Supported frameworks

AI Notebooks ship with pre-configured images for the most common frameworks:

- TensorFlow
- PyTorch
- Hugging Face Transformers
- Scikit-learn
- MXNet
- Conda (custom environment)
- FastAI
Launching an AI Notebook
Navigate to AI Notebooks
In the OVHcloud Control Panel, go to your Public Cloud project. In the left menu under AI & Machine Learning, click AI Notebooks, then click Create a Notebook.
Name your notebook
Enter a name that makes the notebook easy to identify when you have several running simultaneously.
Select a location
Choose the OVHcloud region where the notebook will run. Different regions may offer different GPU models.
Choose compute resources
Select CPU or GPU resources. Use the + and - buttons to set the number of CPUs or GPUs. GPU resources are billed per GPU per minute. Available GPU types include NVIDIA V100s and other models depending on the region; check the Public Cloud pricing page for current rates.

Select a framework
Choose the AI framework and version you want pre-installed. For example, select PyTorch 2.x for deep learning workloads.
Choose a code editor
Select JupyterLab for a notebook-first experience or Visual Studio Code for a full IDE environment. Both editors run in your browser and use the same compute resources.
Set access controls
Choose Restricted access to require credentials (username/password or an AI token) before anyone can open the notebook. Avoid Public access for notebooks containing sensitive data or proprietary code.
Configure lifecycle settings
By default, a notebook shuts down automatically after 7 consecutive days in the RUNNING state. Enable Automatic Restart to have it restart every 7 days instead. Contact OVHcloud support to extend the period to 28 days.
Attach data volumes (optional)
Mount OVHcloud Object Storage containers or Git repositories into the notebook. This lets you access training datasets and save results persistently even after stopping the notebook.
Managing notebooks with the ovhai CLI
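The same lifecycle operations are available from the `ovhai` command-line tool. A minimal sketch is shown below, assuming the CLI is installed and authenticated against your Public Cloud project; the framework, editor, and flag values are illustrative:

```shell
# Authenticate against your Public Cloud project
ovhai login

# Launch a notebook: framework, editor, and resource flags (values illustrative)
ovhai notebook run pytorch jupyterlab --gpu 1 --name my-notebook

# List notebooks and their states (STARTING, RUNNING, STOPPED, ...)
ovhai notebook list

# Stop a notebook to halt compute billing; start it again later
ovhai notebook stop <notebook-id>
ovhai notebook start <notebook-id>
```

Stopping a notebook releases its compute resources while preserving the `/workspace` folder, so it is the usual way to pause work without losing state.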
Billing and notebook lifecycle
Compute is billed from the moment the Docker image starts pulling (STARTING state) until the notebook reaches the STOPPED state. Storage charges for the /workspace folder begin after the first free 10 GB and continue for 30 days after the notebook is stopped; after that, standard Object Storage rates apply.

| State | Description | Billed? |
|---|---|---|
| STARTING | Resources allocated, image pulling, data syncing | Yes |
| RUNNING | Notebook accessible, compute in use | Yes |
| STOPPING | Compute released, workspace syncing to Object Storage | Yes |
| STOPPED | No compute running, workspace preserved | Storage only |
| FAILED | Notebook ended with a non-zero exit code | No |
| DELETING | Notebook being removed | No |
For example, a 10-hour session on two GPUs:

10 hours × 2 GPUs × €1.93/GPU/hour = €38.60

Rates are shown per hour but billed per minute.
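The calculation above can be reproduced directly; the rate of €1.93/GPU/hour is the example figure used in this guide, not a current price:

```shell
# Compute cost = hours × GPU count × hourly rate (example rate, check pricing page)
awk 'BEGIN { printf "%.2f\n", 10 * 2 * 1.93 }'
# prints 38.60

# Because billing is per minute, partial hours are prorated, e.g. 90 minutes on 1 GPU:
awk 'BEGIN { printf "%.4f\n", (90 / 60) * 1 * 1.93 }'
```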
AI Training
AI Training lets you submit Docker container-based training jobs on dedicated CPU or GPU resources. Jobs run to completion and terminate automatically, making them better suited to long training runs than to interactive sessions.

Submitting a training job
- In the Control Panel, go to AI & Machine Learning > AI Training and click Launch a new Job.
- Enter a job name and select a region.
- Choose the number of GPUs or CPUs for the job.
- Provide a Docker image. You can use:
- OVHcloud preset images (JupyterLab with TensorFlow or MXNet)
- A custom image from Docker Hub, GitHub Container Registry, or OVHcloud Managed Private Registry
- Configure privacy settings, lifecycle options, and optional data volumes.
- Click Order to submit the job.
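The steps above can also be performed with the `ovhai` CLI. A minimal sketch, assuming an authenticated CLI; the image name and flags are illustrative:

```shell
# Submit a training job from a Docker image on 1 GPU (image name illustrative)
ovhai job run my-registry/my-training-image:latest \
  --gpu 1 \
  --name my-training-job

# Monitor the job: it runs to completion and terminates automatically
ovhai job list
ovhai job logs <job-id>
```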
AI Deploy
AI Deploy lets you expose trained models or AI applications as HTTP API endpoints with autoscaling. Each deployment runs a Docker container on one or more replicas.

Deploying an application
- In the Control Panel, go to AI & Machine Learning > AI Deploy and click Deploy an app.
- Name your application and choose a region.
- Select the compute resources (1–4 GPUs or 1–12 CPUs per replica).
- Provide the Docker image to deploy (e.g. ovhcom/ai-deploy-hello-world).
- Set the number of replicas and choose a scaling strategy:
- Static scaling — fixed replica count (1–10 replicas)
- Autoscaling — scale based on CPU or RAM usage, down to 0 replicas when idle
- Set access to Restricted and create an AI token to authenticate API calls.
- Click Order now.
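The same deployment can be sketched with the `ovhai` CLI, assuming an authenticated session; flag values are illustrative:

```shell
# Deploy an app from a Docker image on 1 CPU replica
ovhai app run ovhcom/ai-deploy-hello-world \
  --cpu 1 \
  --name hello-world

# Check the deployment state and endpoint URL
ovhai app list
ovhai app get <app-id>
```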
Calling a deployed endpoint
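Once the app is running with Restricted access, requests must carry an AI token. A sketch using `curl`; the endpoint URL pattern, path, and payload are placeholders, and `$AI_TOKEN` is assumed to hold a token created for this app:

```shell
# Call the deployed endpoint; URL, path, and JSON body are illustrative
curl -X POST "https://<app-id>.app.<region>.ai.cloud.ovh.net/api/predict" \
  -H "Authorization: Bearer $AI_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": [1.0, 2.0, 3.0]}'
```

Requests without a valid token are rejected when access is Restricted, which is why public access should be avoided for sensitive models.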
GPU instance types
AI services use dedicated GPU resources from OVHcloud’s infrastructure. Available GPU models vary by region and include:

- NVIDIA V100s — general-purpose deep learning and inference
- NVIDIA A100 — large model training and high-throughput inference (available in select regions)
Data volumes and Object Storage integration
All three AI services (Notebooks, Training, Deploy) can mount OVHcloud Object Storage containers as volumes. Data is synchronised between Object Storage and the running workload at start and stop time.

- ro — read-only: data is synced into the workload at start
- rw — read-write: data is synced back to Object Storage when the workload stops
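Volumes are attached at launch time with the `--volume` flag of the `ovhai` CLI. A sketch, assuming two existing Object Storage containers; container names, the region alias, and mount paths are illustrative:

```shell
# Mount a dataset container read-only and a results container read-write
# (syntax: container@region:mount_path:permission)
ovhai notebook run pytorch jupyterlab \
  --gpu 1 \
  --volume my-dataset@GRA:/workspace/data:ro \
  --volume my-results@GRA:/workspace/output:rw
```

With `rw`, anything written under `/workspace/output` is synced back to the `my-results` container when the notebook stops, so results survive the notebook itself.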
Related guides
Public Cloud compute
Launch GPU instances for AI workloads on OVHcloud Public Cloud infrastructure.
Object Storage
Store datasets and model outputs with S3-compatible object storage.
Containers & Kubernetes
Deploy containerized AI applications on Managed Kubernetes.
OVHcloud API
Automate AI workload management with the OVHcloud API and ovhai CLI.