OVHcloud Object Storage is a scalable, S3-compatible storage service that lets you store and retrieve any amount of data as objects organised into buckets. It is well suited to backups, static web assets, data lake ingestion, and media archives.

Getting started

Create a bucket and upload your first objects in a few steps.

Lifecycle policies

Automatically expire or transition objects to reduce storage costs.

Access control

Control who can read and write to your buckets.

Cold Archive

Long-term, tape-based archival for rarely accessed data.

Key concepts

Buckets, objects, and access policies

An object is a file along with its metadata. Objects are stored inside buckets, which are flat namespaces identified by a globally unique name within a region. By default, all buckets and objects are private — only the user account that creates a resource has access to it. You grant access through two mechanisms:
  • ACLs (Access Control Lists): attached directly to a bucket or individual object, granting basic read/write permissions to specific accounts or predefined groups.
  • User policies: attached to a specific OVHcloud Public Cloud user via IAM, controlling that user’s permissions across resources.
OVHcloud Object Storage supports the predefined groups AllUsers (anonymous/public), AuthenticatedUsers (all OVHcloud Public Cloud users), and LogDelivery (used for server access logging).
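For example, granting anonymous read access to a single object can be done by referencing the AllUsers group. This sketch assumes the standard S3 group URIs are honoured by the OVHcloud endpoint (worth verifying against the ACL documentation), and uses placeholder bucket and object names:

```shell
# Grant anonymous (AllUsers) read access to one object.
# The group URI below is the standard S3 identifier for AllUsers.
aws s3api put-object-acl \
  --bucket my-bucket-name \
  --key file.txt \
  --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
```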

S3-compatible API

OVHcloud Object Storage is compatible with the S3 API. You can use any S3-compatible client, including the AWS CLI, rclone, s3cmd, or any S3 SDK, by pointing the endpoint at your OVHcloud region. Endpoints follow this pattern:
https://s3.<region>.io.cloud.ovh.net/
For example, for the Roubaix (RBX) region:
https://s3.rbx.io.cloud.ovh.net/
OVHcloud offers two endpoint suffixes per region: .io.cloud.ovh.net (recommended, supports lifecycle rules) and .perf.cloud.ovh.net (legacy). Lifecycle policies are only available on the .io endpoint.
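Because endpoints follow a fixed pattern, they can be derived from the region code. A minimal sketch (the region codes themselves come from your OVHcloud project):

```python
def endpoint_url(region: str, suffix: str = "io") -> str:
    """Build the OVHcloud Object Storage endpoint URL for a region.

    suffix is "io" (recommended, supports lifecycle rules) or
    "perf" (legacy).
    """
    return f"https://s3.{region.lower()}.{suffix}.cloud.ovh.net/"

print(endpoint_url("rbx"))  # https://s3.rbx.io.cloud.ovh.net/
```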

Create your first bucket

Step 1: Create an Object Storage user

In the OVHcloud Control Panel, go to Public Cloud and select your project. Navigate to Object Storage in the left menu, then to the Object Storage users tab. Create a user and save the Access key and Secret key displayed — you will need these to configure the AWS CLI.
Step 2: Configure the AWS CLI

Install the AWS CLI, then run:
aws configure
Enter your credentials and region. Then edit ~/.aws/config to add the OVHcloud endpoint:
[default]
region = rbx
output = json
services = ovh-rbx

[services ovh-rbx]
s3 =
  endpoint_url = https://s3.rbx.io.cloud.ovh.net/
  signature_version = s3v4

s3api =
  endpoint_url = https://s3.rbx.io.cloud.ovh.net/
And ~/.aws/credentials:
[default]
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_secret_key>
Step 3: Create a bucket

aws s3 mb s3://my-bucket-name
Alternatively, from the Control Panel click Create Object Container, choose your offer (Standard, High Performance, etc.), select a deployment mode (1-AZ or 3-AZ), and pick a region.
Step 4: Upload an object

aws s3 cp /path/to/file.txt s3://my-bucket-name/
aws s3 cp uses the STANDARD storage class by default. To upload directly into another tier, use aws s3api put-object with the --storage-class option set to the target class (for High Performance, the class identifier is HIGH_PERF; check your region's storage class documentation).
Step 5: Verify the upload

aws s3 ls s3://my-bucket-name/

Common S3 CLI operations

Download an object from a bucket:
aws s3 cp s3://my-bucket-name/file.txt .
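A few more everyday commands, assuming the CLI is configured as in the steps above (bucket and file names are placeholders):

```shell
# List all buckets in the project
aws s3 ls

# Recursively synchronise a local directory to a bucket
aws s3 sync ./website s3://my-bucket-name/

# Delete a single object
aws s3 rm s3://my-bucket-name/file.txt

# Generate a time-limited download URL (here, valid for 3600 seconds)
aws s3 presign s3://my-bucket-name/file.txt --expires-in 3600
```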

Bucket ACLs and user policies

By default, all resources are private. To make a bucket publicly readable, apply the public-read predefined ACL:
aws s3api put-bucket-acl --bucket my-bucket-name --acl public-read
To verify the ACL currently applied to a bucket:
aws s3api get-bucket-acl --bucket my-bucket-name
To grant write permissions to a specific user in another OVHcloud Public Cloud account:
aws s3api put-bucket-acl \
  --bucket my-bucket-name \
  --grant-write id=<project_name>:<user_name>

Supported permissions

Permission   | Bucket level                         | Object level
READ         | List all objects in the bucket       | Read an object and its metadata
WRITE        | Create, delete, or overwrite objects | n/a
READ_ACP     | Read the bucket ACL                  | Read the object ACL
WRITE_ACP    | Modify the bucket ACL                | Modify the object ACL
FULL_CONTROL | All of the above on the bucket       | READ + READ_ACP + WRITE_ACP on the object

Predefined ACLs

ACL                | Who gets access
private            | Owner only (default)
public-read        | Owner has full control; everyone can read
public-read-write  | Owner has full control; everyone can read and write
authenticated-read | Owner has full control; all OVHcloud users can read
log-delivery-write | OVHcloud log delivery service can write
ACLs and policies can be combined. The principle of least privilege always applies: access is denied unless there is an explicit allow and no explicit deny.
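That evaluation order can be illustrated with a small sketch (hypothetical names and simplified logic; real policy evaluation also involves conditions, resources, and principals):

```python
def is_allowed(action: str, allows: set[str], denies: set[str]) -> bool:
    """Least-privilege evaluation: access requires an explicit allow,
    and any explicit deny vetoes it."""
    if action in denies:     # an explicit deny always wins
        return False
    return action in allows  # otherwise an explicit allow is required

# Example: READ is granted, WRITE was never granted, DELETE is denied.
grants = {"READ", "DELETE"}
refusals = {"DELETE"}
```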

Object lifecycle policies

Lifecycle rules let you automatically expire or transition objects to lower-cost storage tiers. Rules are applied asynchronously, typically within 24 hours.

Apply a lifecycle configuration

Create a JSON file with your rules:
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Expiration": {
        "Days": 30
      }
    }
  ]
}
Upload the configuration to your bucket:
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket-name \
  --lifecycle-configuration file://lifecycle.json
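Before uploading a configuration, it can be worth sanity-checking the JSON locally. This is an illustrative check, not the official schema validation performed by the API:

```python
import json

def check_lifecycle(doc: str) -> list[str]:
    """Return a list of basic problems found in a lifecycle JSON document."""
    problems = []
    rules = json.loads(doc).get("Rules", [])
    if not rules:
        problems.append("no Rules defined")
    for rule in rules:
        rule_id = rule.get("ID", "?")
        if rule.get("Status") not in ("Enabled", "Disabled"):
            problems.append(f"rule {rule_id}: Status must be Enabled or Disabled")
        days = rule.get("Expiration", {}).get("Days")
        if days is not None and days < 1:
            problems.append(f"rule {rule_id}: Expiration Days must be >= 1")
    return problems

doc = """{"Rules": [{"ID": "expire-old-logs", "Status": "Enabled",
          "Filter": {"Prefix": "logs/"}, "Expiration": {"Days": 30}}]}"""
```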

Transition objects between storage tiers

You can transition objects from a higher-cost tier to a lower-cost tier automatically. The minimum transition delay is 30 days.
{
  "Rules": [
    {
      "ID": "move-to-standard-after-30-days",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD"
        }
      ]
    }
  ]
}

Supported storage tier transitions

From              | To High Performance | To Standard | To Infrequent Access | To Cold Archive
High Performance  | n/a                 | Yes         | Yes                  | Yes
Standard          | No                  | n/a         | Yes                  | Yes
Infrequent Access | No                  | No          | n/a                  | Yes
Objects smaller than 128 KB are not automatically transitioned. Use an ObjectSizeGreaterThan filter to explicitly include or exclude small objects.
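For example, a rule that only transitions objects of at least 128 KB might look like this (a sketch based on the standard S3 lifecycle filter syntax; the size is in bytes):

```json
{
  "Rules": [
    {
      "ID": "transition-large-objects-only",
      "Status": "Enabled",
      "Filter": {
        "ObjectSizeGreaterThan": 131072
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD"
        }
      ]
    }
  ]
}
```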

Abort incomplete multipart uploads

Large file uploads that fail part-way through leave stored parts that accrue charges. Use a lifecycle rule to clean them up:
{
  "Rules": [
    {
      "ID": "abort-incomplete-mpus",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
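To see whether a bucket currently holds incomplete uploads, before or after applying the rule:

```shell
# List in-progress multipart uploads and their initiation dates
aws s3api list-multipart-uploads --bucket my-bucket-name
```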

Cold Archive

Cold Archive is a storage class designed for long-term retention of rarely accessed data. It uses magnetic tape storage, providing:
  • Durability of 99.999%
  • Immutability by design (WORM — Write Once, Read Many)
  • Data retrieval within 48 hours
  • Minimum archival duration of 180 days
Cold Archive is accessible via the standard S3 API and can be used as a lifecycle destination. You can transition objects to the Cold Archive class directly from an existing Object Storage bucket using lifecycle rules, or upload objects directly into the Cold Archive storage class.
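A lifecycle rule targeting Cold Archive follows the same shape as other transitions. The storage class identifier below is a placeholder, not a confirmed value; check the Cold Archive documentation for the exact class name in your region. The 180-day delay matches the minimum archival duration:

```json
{
  "Rules": [
    {
      "ID": "archive-after-180-days",
      "Status": "Enabled",
      "Filter": { "Prefix": "archives/" },
      "Transitions": [
        {
          "Days": 180,
          "StorageClass": "<cold_archive_storage_class>"
        }
      ]
    }
  ]
}
```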
Cold Archive is certified HDS and ISO 27001, making it suitable for healthcare, financial, and regulatory archiving workloads.

Use cases

  • Regulatory and compliance archiving
  • Media asset preservation
  • Scientific data storage
  • Healthcare and financial record archiving

Shared responsibility model

OVHcloud and you share responsibility for the Object Storage service.
Responsibility                                     | You | OVHcloud
Choosing storage class and region                  | Yes |
Managing access policies and ACLs                  | Yes |
Configuring lifecycle rules                        | Yes |
Encrypting data (SSE-C)                            | Yes |
Maintaining physical infrastructure and hardware   |     | Yes
Operating the S3-compatible control plane          |     | Yes
Ensuring durability and replication of stored data |     | Yes
Certifications (HDS, ISO 27001)                    |     | Yes
OVHcloud manages infrastructure durability and availability. You are responsible for access control, encryption choices, and data organisation within your buckets.
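For example, client-provided encryption keys (SSE-C) are supplied at request time. The key file below is a placeholder for a 256-bit key that you generate and store yourself:

```shell
# Generate a 256-bit key (keep it safe: the service never stores it,
# and the object cannot be read back without it)
openssl rand -out sse-c.key 32

# Upload with SSE-C; the same key must be supplied to download the object
aws s3 cp file.txt s3://my-bucket-name/ \
  --sse-c AES256 --sse-c-key fileb://sse-c.key
```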

Block Storage

Attach persistent volumes to Public Cloud instances for databases and file systems.

vRack — Private Network

Connect Object Storage to other OVHcloud services over a private Layer 2 network.
