OVHcloud provides a fully managed observability stack built around Logs Data Platform (LDP) — a log management service powered by OpenSearch and Graylog. You can ingest logs from any source, store them with configurable retention, build dashboards, and trigger alerts, all without managing the underlying infrastructure.

Logs Data Platform

Manage your LDP account, streams, and dashboards in the Control Panel.

Graylog interface

Query and visualise logs in real time using Graylog.

LDP guides

Full guide catalogue for Logs Data Platform.

What is Logs Data Platform?

Logs Data Platform (LDP) is OVHcloud’s fully managed log management solution. It ingests logs from your infrastructure and applications, indexes them for fast querying, and exposes them through multiple interfaces: a Graylog web UI, the OpenSearch API, OpenSearch Dashboards, and Grafana. LDP handles all scaling automatically. A stream has no fixed storage cap by default (you can optionally set one per stream), and indexed logs are immutable: once ingested, a log entry cannot be modified or individually deleted before the configured retention period expires.

Key concepts

| Concept | Description |
|---|---|
| LDP Service | Your top-level tenancy unit within LDP. Identified by a name like ldp-xy-98765. |
| Data stream | A logical partition of logs. Each stream has a unique write token. Configure retention, archival, and alerting per stream. |
| Index | An OpenSearch index. Use when you need direct OpenSearch API access for custom data or enrichment. |
| Alias | A virtual index mapping one or more streams or indices. Required by tools like Grafana or OpenSearch Dashboards. |
| Input | An ingestion endpoint. Mutualized inputs are shared; dedicated inputs (Logstash, Flowgger) are provisioned on demand. |

Supported log formats

LDP accepts logs in several formats over TCP, TCP+TLS, or UDP:
| Format | Port (TLS) | Port (TCP) |
|---|---|---|
| Syslog RFC 5424 | 6514 | 514 |
| GELF | 12202 | 2202 |
| LTSV (null delimiter) | 12200 | 2200 |
| LTSV (line delimiter) | 12201 | 2201 |
| Cap’n’Proto | 12204 | 2204 |
| Beats (Filebeat, Metricbeat) | 5044 | - |
The cluster address is shown on your LDP service home page.
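As an illustration of the GELF format, here is a minimal Python sketch using only the standard library. The cluster address and token are placeholders; the port follows the table above (GELF over TLS on 12202), and GELF records sent over TCP must be null-byte terminated:

```python
import json
import socket
import ssl
import time

def gelf_record(token, host, message, level=6):
    """Build a GELF 1.1 payload carrying the LDP stream token.

    GELF custom fields must be prefixed with an underscore, so the
    routing token is sent as _X-OVH-TOKEN.
    """
    return json.dumps({
        "version": "1.1",
        "_X-OVH-TOKEN": token,
        "host": host,
        "short_message": message,
        "timestamp": int(time.time()),
        "level": level,
    })

def send_gelf_tls(cluster, payload, port=12202):
    """Send one GELF record over TCP+TLS; records are null-terminated."""
    ctx = ssl.create_default_context()
    with socket.create_connection((cluster, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=cluster) as tls:
            tls.sendall(payload.encode() + b"\x00")

# Example (requires a real cluster address and token):
# send_gelf_tls("<your-cluster>.logs.ovh.com",
#               gelf_record("<your-token>", "my-server", "hello from python"))
```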

Setting up your first log stream

1. Create an LDP account

Open the Logs Data Platform page in the OVHcloud Control Panel. If you do not have an LDP account yet, order one; there is no charge to activate the service, and you pay only for usage (storage, retention, and optional dedicated inputs). When setting up your account, enable OVHcloud IAM as the authentication method. This is the recommended approach and allows you to control access using IAM policies.
2. Create a data stream

On the LDP control panel home page, click Add data stream in the Data streams panel. Configure the stream:
  • Name — a descriptive name for the stream (e.g. production-app-logs)
  • Description — optional context about what this stream contains
  • Retention — choose how long to keep indexed logs: 14 days, 1 month, 3 months, or 1 year. This cannot be changed after creation.
  • Limit — optionally set a maximum storage size to control costs
Click Save. The stream is created immediately.
3. Copy the stream write token

On the Data streams page, click the menu next to your stream and select Copy the write token. This X-OVH-TOKEN value authenticates log writes to this stream.
4. Send your first log

Test the stream by sending a GELF-formatted log using openssl:
echo -e '{"version":"1.1","_X-OVH-TOKEN":"<your-token>","host":"my-server","short_message":"Test log from setup","timestamp":'"$(date +%s)"',"level":6}\0' | \
  openssl s_client -quiet -no_ign_eof -connect <your-cluster>.logs.ovh.com:12202
Replace <your-token> with the stream token and <your-cluster> with the cluster address from your LDP home page.
5. View your logs in Graylog

On the Data streams page, click the menu next to your stream and select Graylog access. Log in using your OVHcloud credentials. Your test log should appear in the stream view within a few seconds. Use the search bar to filter logs. For example, to search for all logs from my-server:
host:my-server

Data input methods

Fluent Bit (Kubernetes)

Fluent Bit is a lightweight log forwarder well suited to Kubernetes environments. Deploy it as a DaemonSet to collect logs from all pods in your cluster.
1. Create the logging namespace and token secret

kubectl create namespace logging

kubectl --namespace logging create secret generic ldp-token \
  --from-literal=ldp-token=<your-stream-token>
2. Configure the Helm values file

Add the following to your values.yaml for the Fluent Bit Helm chart. The filters enrich records with Kubernetes metadata and the stream token; the outputs section ships records to LDP’s GELF TLS input:
env:
  - name: FLUENT_LDP_TOKEN
    valueFrom:
      secretKeyRef:
        name: ldp-token
        key: ldp-token

config:
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [FILTER]
        Name record_modifier
        Match *
        Record X-OVH-TOKEN ${FLUENT_LDP_TOKEN}

    [FILTER]
        Name nest
        Match *
        Wildcard pod_name
        Operation lift
        Nested_under kubernetes
        Add_prefix kubernetes_

    [FILTER]
        Name modify
        Match *
        Copy kubernetes_pod_name host

    [FILTER]
        Name modify
        Match *
        Add log "none"

  outputs: |
    [OUTPUT]
        Name gelf
        Match kube.*
        Host <your-cluster>.logs.ovh.com
        Port 12202
        Mode tls
        tls On
        tls.verify On
        Gelf_Short_Message_Key log
Replace <your-cluster> with the cluster address from your LDP home page.
3. Install with Helm

helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install --namespace logging -f values.yaml fluent-bit fluent/fluent-bit

# Verify pods are running
kubectl get pods --namespace logging

Logstash (dedicated input)

For more complex log transformation pipelines, you can provision a managed Logstash instance on LDP. This is useful when you need to parse, filter, or enrich logs before ingestion. A typical pipeline receives syslog on TCP, parses it, and forwards GELF to your stream:
input {
  tcp {
    port => 5000
    type => syslog
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGBASE}" }
  }
  date {
    match => ["timestamp", "MMM dd HH:mm:ss"]
    target => "timestamp"
    timezone => "Europe/Paris"
  }
}

output {
  gelf {
    host => "<your-cluster>.logs.ovh.com"
    protocol => "TCP"
    port => 2202
    custom_fields => ['X-OVH-TOKEN', '<your-stream-token>']
  }
}
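To exercise a pipeline like the one above, you can send a line that the %{SYSLOGBASE} grok pattern parses. A small standard-library Python sketch; the input hostname is a placeholder and the port matches the tcp input above:

```python
import socket
import time

def syslog_line(host, app, pid, message):
    """Build a classic BSD-style syslog line that the %{SYSLOGBASE}
    grok pattern (timestamp, host, program[pid]:) can parse."""
    ts = time.strftime("%b %d %H:%M:%S")  # e.g. "Oct 11 22:14:15"
    return f"{ts} {host} {app}[{pid}]: {message}"

def send(line, logstash_host, port=5000):
    """Ship one newline-terminated line to the Logstash tcp input."""
    with socket.create_connection((logstash_host, port)) as sock:
        sock.sendall(line.encode() + b"\n")

# Example (placeholder hostname):
# send(syslog_line("web-01", "nginx", 1234, "GET /health 200"),
#      "<your-logstash-input>.logs.ovh.com")
```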
To provision a dedicated Logstash input on LDP, go to Data-gathering tools in the LDP control panel and click Add input.

Filebeat

Filebeat ships logs from files to LDP using the Beats protocol (port 5044):
output.logstash:
  hosts: ["<your-cluster>.logs.ovh.com:5044"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/ssl/certs/ldp.pem"]
Download the LDP SSL certificate from the Home page of your LDP service under SSL Configuration.
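Before pointing Filebeat at the endpoint, you can sanity-check TLS connectivity with a short standard-library Python sketch. The cluster address and CA bundle path are placeholders; pass the downloaded LDP certificate if it is not in your system trust store:

```python
import socket
import ssl

def ldp_tls_context(ca_path=None):
    """TLS context for LDP endpoints. Pass the downloaded LDP CA bundle
    path (e.g. the ldp.pem file) if it is not in the system store."""
    return ssl.create_default_context(cafile=ca_path)

def check_beats_endpoint(cluster, port=5044, ca_path=None):
    """Open a TLS connection to the Beats input to confirm reachability
    and certificate validity before configuring Filebeat."""
    ctx = ldp_tls_context(ca_path)
    with socket.create_connection((cluster, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=cluster) as tls:
            return tls.version()

# Example (placeholders):
# print(check_beats_endpoint("<your-cluster>.logs.ovh.com",
#                            ca_path="/etc/ssl/certs/ldp.pem"))
```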

VPS and dedicated servers (syslog)

For Linux servers, configure syslog-ng or rsyslog to forward system logs to LDP over TCP+TLS using RFC 5424 format. Detailed configuration examples are available in the syslog-ng guide.

Log forwarding from OVHcloud services

Many OVHcloud services support native log forwarding directly to an LDP stream. This allows you to centralise infrastructure logs without deploying any additional agent.

Setting up log forwarding

Each service that supports log forwarding uses a subscription model. You create a subscription linking the service to one of your LDP streams. For example, to forward IAM audit logs:
POST /me/logs/audit/log/subscription
{
  "streamId": "ab51887e-0b98-4752-a514-f2513523a5cd",
  "kind": "default"
}
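The same subscription can be created programmatically. A sketch assuming the python-ovh client library (pip install ovh) with configured credentials, reusing the example stream ID above:

```python
def audit_log_subscription(stream_id, kind="default"):
    """Build the API path and payload for subscribing IAM audit logs
    to an LDP stream, mirroring the POST call shown above."""
    return "/me/logs/audit/log/subscription", {
        "streamId": stream_id,
        "kind": kind,
    }

path, payload = audit_log_subscription("ab51887e-0b98-4752-a514-f2513523a5cd")

# With python-ovh credentials configured, the call itself would be:
# import ovh
# client = ovh.Client()
# client.post(path, **payload)
```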
Available log forwarding APIs:
| Source | API |
|---|---|
| Audit logs (login, password changes) | POST /me/logs/audit/log/subscription |
| Activity logs (all API and Control Panel actions) | POST /me/api/log/subscription |
| IAM access policy evaluations | POST /iam/log/subscription |
Log forwarding activation is free. You are charged only for storage in your LDP stream at standard LDP pricing.

Metrics and dashboards

LDP exposes your indexed log data through multiple visualisation tools.

Graylog dashboards

In Graylog, you can build dashboards directly from search results. For example:
  1. In your stream, search for some_metric_num:>30.
  2. On the left panel, expand the user_id field and select Show top values.
  3. Click Copy to Dashboard to add the widget to an existing or new dashboard.
Graylog dashboards are interactive — they update in real time and support filtering using the top search bar.

OpenSearch Dashboards

For more advanced visualisations and index pattern management, you can provision a managed OpenSearch Dashboards instance on LDP. Go to the OpenSearch Dashboards tab in the LDP control panel and click Add. OpenSearch Dashboards connects to your LDP data via aliases. Create an alias that maps to your stream, then configure it as an index pattern in OpenSearch Dashboards.

Grafana

OVHcloud Public Cloud includes a managed Grafana service. You can connect Grafana to LDP’s OpenSearch API endpoint (port 9200) to query logs alongside other metrics. Configure the Grafana datasource with:
  • URL: https://<your-cluster>.logs.ovh.com:9200
  • Auth: Use your LDP credentials or an IAM-issued token
  • Index name: the alias name that maps to your streams
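The datasource settings above map to plain OpenSearch search calls. A standard-library sketch that builds such a request (cluster and alias names are placeholders), which you can then send with your LDP credentials:

```python
import json

def search_request(cluster, alias, query, size=10):
    """Build an OpenSearch _search request against an LDP alias.

    The endpoint shape follows the standard OpenSearch search API on
    the port 9200 endpoint mentioned above.
    """
    url = f"https://{cluster}.logs.ovh.com:9200/{alias}/_search"
    body = {"size": size, "query": {"query_string": {"query": query}}}
    return url, json.dumps(body)

url, body = search_request("<your-cluster>", "my-alias", "host:my-server")
# Send with your credentials, e.g.:
# curl -u user:password -H 'Content-Type: application/json' -d "$body" "$url"
```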

Alerting on log patterns

LDP supports three types of stream alerts, all configured from the stream’s Manage alerts menu in the control panel:
| Alert type | Use case |
|---|---|
| Message count | Alert when the number of logs drops below or exceeds a threshold (e.g. detect a stopped application) |
| Field aggregation | Alert on numeric field statistics: mean, min, max, sum, standard deviation (e.g. slow response times) |
| Field content | Alert when a specific field contains an exact value (e.g. HTTP 500 errors) |
All alert types support a grace period to prevent repeated notifications for the same condition.

Example: alert on HTTP 500 errors

In the stream’s alert management interface:
  1. Click Create an alert and select Field content.
  2. Set the field name to status_int and value to 500.
  3. Set a grace period (e.g. 5 minutes) to avoid alert spam.
  4. Click Save.
When the condition is triggered, LDP sends an email with the matching log messages included.

IAM logs forwarding (audit trail)

Forwarding IAM account logs to LDP creates a complete audit trail of all account activity. This is essential for security monitoring and compliance. Three types of account logs are available: audit logs, activity logs, and access policy logs.

Audit logs record security-relevant events:

| Field | Description |
|---|---|
| account | OVHcloud account affected |
| authDetails_userDetails_type | ACCOUNT (root), USER (local), or PROVIDER (federated) |
| loginSuccessDetails_mfaType | MFA method used: NONE, SMS, TOTP, U2F, etc. |
| type | Event type: LOGIN_SUCCESS, ACCOUNT_PASSWORD_CHANGED, etc. |
Access policy logs record IAM evaluation results:

| Field | Description |
|---|---|
| identities_array | URNs of the user and their groups |
| requested_actions_array | Actions the user attempted |
| authorized_actions_array | Actions IAM allowed |
| unauthorized_actions_array | Actions IAM denied |
To find all IAM denials for a user named ines in Graylog:
identities_array:*ines* AND unauthorized_actions_array:*
See IAM — Enabling IAM logs forwarding for how to set up subscriptions.

Shared responsibility model

OVHcloud and you share responsibility for the observability stack:
| Responsibility | Customer | OVHcloud |
|---|---|---|
| Install, configure, and maintain LDP platform components | - | RA |
| Order and configure streams, set retention policies | RA | I |
| Install and configure log forwarder agents (Fluent Bit, Logstash, Filebeat) | RA | - |
| Manage data confidentiality and integrity | RA | - |
| Monitor LDP service performance and infrastructure | - | RA |
| Handle LDP platform patches and upgrades | I | RA |
| Ensure external tools remain compatible with LDP updates | RA | - |
| Define and maintain business continuity plan for logs | RA | I |

(R = Responsible, A = Accountable, I = Informed)
Logs stored in streams are immutable. Individual log entries cannot be modified or deleted before the configured retention period expires. You can delete an entire stream, but not individual messages.

Next steps

OVHcloud API

Automate LDP stream creation and management using the Terraform OVH provider.

Identity & Access Management

Control access to your LDP streams using IAM policies.
