Service integrations connect different Aiven services together to enable advanced functionality like metrics visualization, log aggregation, data streaming, and cross-service data flows.
Integration overview
Aiven supports two types of integrations:
Service-to-service: direct integration between Aiven services
Metrics: PostgreSQL + Grafana
Logs: Any service + OpenSearch
Data streams: Kafka + other services
Cross-service queries
External endpoints: integration with external platforms
Datadog for metrics
Prometheus exporters
Rsyslog servers
Jolokia for JMX metrics
Common integration patterns
Metrics visualization
Send service metrics to PostgreSQL and visualize in Grafana:
Create PostgreSQL service
avn service create metrics-db \
--project my-project \
--service-type pg \
--plan business-4 \
--cloud aws-us-east-1
This service stores time-series metrics data
Create Grafana service
avn service create metrics-grafana \
--project my-project \
--service-type grafana \
--plan startup-4 \
--cloud aws-us-east-1
Grafana provides dashboards for visualization
Integrate Grafana with PostgreSQL
avn service integration-create \
--project my-project \
--source-service metrics-db \
--dest-service metrics-grafana \
--integration-type dashboard
This creates the data source connection in Grafana
Send service metrics to PostgreSQL
# Send PostgreSQL metrics
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--dest-service metrics-db \
--integration-type metrics
# Send Kafka metrics
avn service integration-create \
--project my-project \
--source-service kafka-prod \
--dest-service metrics-db \
--integration-type metrics
Access Grafana dashboards
Get the Grafana Service URI from the Console
Log in with the credentials from the connection information
View predefined Aiven dashboards
Create custom dashboards
Predefined dashboards are created and maintained by Aiven; their names start with “Aiven” and they are updated automatically. Don’t modify them; create copies or custom dashboards instead.
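The Service URI can also be fetched from the CLI instead of the Console; a sketch, using the service and project names from the examples above:

```shell
# Print only the Grafana Service URI (format string selects one field)
FORMAT='{service_uri}'
avn service get metrics-grafana \
  --project my-project \
  --format "$FORMAT"
```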
Log aggregation
Centralize logs from all services in OpenSearch:
Create OpenSearch service
avn service create logs-opensearch \
--project my-project \
--service-type opensearch \
--plan business-8 \
--cloud aws-us-east-1
Choose a plan with enough disk for your log retention needs
Enable log integrations
# Send PostgreSQL logs
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--dest-service logs-opensearch \
--integration-type logs
# Send Kafka logs
avn service integration-create \
--project my-project \
--source-service kafka-prod \
--dest-service logs-opensearch \
--integration-type logs
# Send Redis logs
avn service integration-create \
--project my-project \
--source-service redis-cache \
--dest-service logs-opensearch \
--integration-type logs
Configure log retention
In OpenSearch, set up an Index State Management (ISM) policy to delete old log indices automatically, for example:
{
  "policy": {
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [{ "delete": {} }],
        "transitions": []
      }
    ]
  }
}
Access OpenSearch Dashboards
Use the Service URI to access OpenSearch Dashboards and create visualizations
Log retention is limited by OpenSearch disk space. Monitor disk usage and adjust retention policies or service plan accordingly.
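A retention policy like the one above can be registered over OpenSearch’s Index State Management (ISM) REST API. A sketch, assuming OPENSEARCH_URI holds the Service URI with credentials; the policy id delete-after-30d and the logs-* index pattern are placeholders:

```shell
# Write the ISM policy: delete log indices 30 days after creation
cat > ism-policy.json <<'EOF'
{
  "policy": {
    "description": "Delete log indices after 30 days",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      { "name": "delete", "actions": [{ "delete": {} }], "transitions": [] }
    ],
    "ism_template": { "index_patterns": ["logs-*"], "priority": 1 }
  }
}
EOF

# Register the policy (OPENSEARCH_URI is a placeholder such as
# https://user:password@logs-opensearch-my-project.aivencloud.com:443)
curl -sS -X PUT "$OPENSEARCH_URI/_plugins/_ism/policies/delete-after-30d" \
  -H "Content-Type: application/json" \
  -d @ism-policy.json
```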
Integration types
Metrics integrations
PostgreSQL + Grafana
Service-specific detailed metrics
Predefined dashboards automatically created
Query performance insights
Resource utilization tracking
# Setup
avn service integration-create \
--project my-project \
--source-service <any-service> \
--dest-service metrics-db \
--integration-type metrics
Available metrics:
System: CPU, memory, disk, network
Database: Connections, queries, cache hits
Kafka: Messages/sec, lag, partition metrics
Redis: Commands/sec, key evictions
External metrics platform
Send metrics to Datadog
Use Datadog’s alerting and dashboards
Integrate with other Datadog services
# Create endpoint
avn service integration-endpoint-create \
--project my-project \
--endpoint-name datadog-us \
--endpoint-type datadog \
--user-config '{"datadog_api_key": "<API_KEY>", "site": "datadoghq.com"}'
# Enable integration
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--endpoint-id <ENDPOINT_ID> \
--integration-type datadog
Metrics appear in Datadog with the aiven. prefix
Prometheus-compatible metrics
Expose metrics in Prometheus format
Scrape with your Prometheus server
Use with any Prometheus-compatible tool
# Create Prometheus endpoint
avn service integration-endpoint-create \
--project my-project \
--endpoint-name prometheus \
--endpoint-type prometheus
# Enable on service
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--endpoint-id <ENDPOINT_ID> \
--integration-type prometheus
Configure Prometheus to scrape the endpoint
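On the scraping side, a minimal prometheus.yml job might look like the following sketch; the hostname, port, and credentials are placeholders, and the real values come from the endpoint configuration and the service’s connection information:

```shell
# Write a minimal Prometheus scrape configuration (all values are
# placeholders for the real endpoint details)
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: aiven-postgres-prod
    scheme: https
    basic_auth:
      username: prometheus-user      # set in the endpoint user config
      password: prometheus-password  # set in the endpoint user config
    static_configs:
      - targets: ["postgres-prod-my-project.aivencloud.com:9273"]
EOF
```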
Log integrations
Aiven OpenSearch
Centralized log storage and search
OpenSearch Dashboards for visualization
Long-term retention (disk-limited)
Full-text search capabilities
avn service integration-create \
--project my-project \
--source-service <any-service> \
--dest-service logs-opensearch \
--integration-type logs
External syslog server
Send to existing syslog infrastructure
RFC5424 or RFC3164 format
TCP or UDP transport
# Create rsyslog endpoint
avn service integration-endpoint-create \
--project my-project \
--endpoint-name syslog-server \
--endpoint-type rsyslog \
--user-config '{
"server": "syslog.company.com",
"port": 514,
"format": "rfc5424",
"tls": true
}'
# Enable integration
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--endpoint-id <ENDPOINT_ID> \
--integration-type rsyslog
Data streaming integrations
Data pipelines
Stream data between Kafka and other services
Source connectors (databases → Kafka)
Sink connectors (Kafka → databases/storage)
# Create Kafka Connect service
avn service create kafka-connect \
--project my-project \
--service-type kafka_connect \
--plan business-4 \
--cloud aws-us-east-1
# Integrate with Kafka
avn service integration-create \
--project my-project \
--source-service kafka-prod \
--dest-service kafka-connect \
--integration-type kafka_connect
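With the integration in place, connectors are defined as JSON and registered against the Kafka Connect service. A sketch of a hypothetical JDBC sink; the connector class is Aiven’s JDBC connector, while the connector name, topic, connection URL, and credentials are placeholders:

```shell
# Connector definition: stream the "orders" topic into PostgreSQL
# (all values below are illustrative placeholders)
cat > jdbc-sink.json <<'EOF'
{
  "name": "orders-pg-sink",
  "connector.class": "io.aiven.connect.jdbc.JdbcSinkConnector",
  "topics": "orders",
  "connection.url": "jdbc:postgresql://host:port/defaultdb?sslmode=require",
  "connection.user": "avnadmin",
  "connection.password": "password",
  "auto.create": "true"
}
EOF

# Register the connector on the Kafka Connect service
avn service connector create kafka-connect "$(cat jdbc-sink.json)" \
  --project my-project
```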
Cross-cluster replication
Replicate topics between Kafka clusters
Disaster recovery
Multi-region deployments
# Integrate source and target Kafka clusters
avn service integration-create \
--project my-project \
--source-service kafka-source \
--dest-service kafka-target \
--integration-type kafka_mirrormaker
Stream processing
Process Kafka streams with Flink
Real-time analytics
Complex event processing
# Create Flink service
avn service create flink \
--project my-project \
--service-type flink \
--plan business-4 \
--cloud aws-us-east-1
# Integrate with Kafka
avn service integration-create \
--project my-project \
--source-service kafka-prod \
--dest-service flink \
--integration-type flink
Database integrations
PostgreSQL/MySQL replicas
Create read-only replicas
Scale read workloads
Cross-region replicas
# Create read replica
avn service create postgres-replica \
--project my-project \
--service-type pg \
--plan business-4 \
--cloud aws-eu-west-1
# Set up replication
avn service integration-create \
--project my-project \
--source-service postgres-primary \
--dest-service postgres-replica \
--integration-type read_replica
Real-time analytics
Stream data from Kafka to ClickHouse
Real-time data warehouse
High-performance analytics
avn service integration-create \
--project my-project \
--source-service kafka-prod \
--dest-service clickhouse-analytics \
--integration-type clickhouse_kafka
Full-text search
Sync PostgreSQL data to OpenSearch
Add search capabilities to applications
Keep data in sync automatically
avn service integration-create \
--project my-project \
--source-service postgres-prod \
--dest-service opensearch \
--integration-type opensearch_logs
Managing integrations
List integrations
All integrations in project
Integrations for specific service
Integration endpoints
avn service integration-list \
--project my-project
Update integrations
# Update integration configuration
avn service integration-update \
--project my-project \
--integration-id <INTEGRATION_ID> \
--user-config '{
"index_prefix": "aiven-logs-",
"timeout": 120
}'
Delete integrations
# Remove integration
avn service integration-delete \
--project my-project \
--integration-id <INTEGRATION_ID>
# Delete integration endpoint
avn service integration-endpoint-delete \
--project my-project \
--endpoint-id <ENDPOINT_ID>
Integration use cases
Complete observability stack
Set up comprehensive monitoring and logging:
# 1. Create infrastructure services
avn service create metrics-db --service-type pg --plan business-4
avn service create metrics-grafana --service-type grafana --plan startup-4
avn service create logs-opensearch --service-type opensearch --plan business-8
# 2. Connect Grafana to PostgreSQL
avn service integration-create \
--source-service metrics-db \
--dest-service metrics-grafana \
--integration-type dashboard
# 3. Send all service metrics to PostgreSQL
for service in postgres-prod kafka-prod redis-cache; do
avn service integration-create \
--source-service $service \
--dest-service metrics-db \
--integration-type metrics
done
# 4. Send all service logs to OpenSearch
for service in postgres-prod kafka-prod redis-cache; do
avn service integration-create \
--source-service $service \
--dest-service logs-opensearch \
--integration-type logs
done
Kafka streaming architecture
Build a complete streaming data platform:
# 1. Create Kafka cluster
avn service create kafka-prod --service-type kafka --plan business-4
# 2. Create Kafka Connect for data ingestion
avn service create kafka-connect --service-type kafka_connect --plan business-4
avn service integration-create \
--source-service kafka-prod \
--dest-service kafka-connect \
--integration-type kafka_connect
# 3. Create Flink for stream processing
avn service create flink --service-type flink --plan business-4
avn service integration-create \
--source-service kafka-prod \
--dest-service flink \
--integration-type flink
# 4. Create ClickHouse for analytics
avn service create clickhouse --service-type clickhouse --plan business-8
avn service integration-create \
--source-service kafka-prod \
--dest-service clickhouse \
--integration-type clickhouse_kafka
# 5. Monitor the platform
avn service integration-create \
--source-service kafka-prod \
--dest-service metrics-db \
--integration-type metrics
Multi-region database setup
Set up cross-region read replicas:
# 1. Primary database in US
avn service create postgres-primary \
--service-type pg \
--plan business-8 \
--cloud aws-us-east-1
# 2. Read replica in EU
avn service create postgres-eu \
--service-type pg \
--plan business-8 \
--cloud aws-eu-west-1
avn service integration-create \
--source-service postgres-primary \
--dest-service postgres-eu \
--integration-type read_replica
# 3. Read replica in Asia
avn service create postgres-asia \
--service-type pg \
--plan business-8 \
--cloud aws-ap-southeast-1
avn service integration-create \
--source-service postgres-primary \
--dest-service postgres-asia \
--integration-type read_replica
Integration billing
Integrations themselves don’t cost extra, but integrated services incur their normal hourly charges.
Cost considerations
Metrics integration: Requires PostgreSQL + Grafana services
Log integration: Requires OpenSearch service (size depends on log volume)
Data streaming: Requires Kafka Connect, Flink, or other streaming services
Read replicas: Full cost of replica service
Optimizing costs
Share observability services
Use one PostgreSQL + Grafana for all services in a project
Right-size OpenSearch
Monitor disk usage and adjust plan based on actual log volume
Use log retention policies
Configure ISM to delete old logs automatically
Selective metrics
Only integrate metrics for critical production services
Best practices
Set up observability early
Deploy metrics and logging integrations from day one
Use one metrics stack per project
Share PostgreSQL + Grafana across all services in the project
Centralize logs
Send all service logs to the same OpenSearch instance
Configure retention policies
Set appropriate log retention in OpenSearch to manage disk usage
Monitor integration health
Check that integrations are active and data is flowing
Document integrations
Maintain documentation of which services integrate with what
Test before production
Verify integrations in development before enabling in production
Use cross-region carefully
Be aware of data transfer costs for cross-region integrations
Troubleshooting
Integration shows as failed
Cause: Destination service not ready or network issue
Solution:
Check both services are running
Verify services are in same project
For cross-region, check connectivity
Try recreating the integration
Metrics not appearing in Grafana
Cause: Integration needs time to populate, or it is not configured correctly
Solution:
Wait 1-2 minutes for initial data
Verify metrics integration is active
Check PostgreSQL has free disk space
Refresh Grafana dashboard
Logs not appearing in OpenSearch
Cause: Integration issue or OpenSearch disk full
Solution:
Verify log integration is active
Check OpenSearch disk space
Review ISM policies
Check for ingestion errors in OpenSearch
Grafana dashboards disappeared
Cause: Dashboard name conflict or modification
Solution:
Don’t modify dashboards starting with “Aiven”
Create copies or custom dashboards instead
Predefined dashboards are auto-maintained
Check for typos in dashboard names
High latency in read replica
Cause: Network latency or replication lag
Solution:
Check replication lag metrics
Verify network connectivity
Reduce write load on primary
Ensure replica has adequate resources
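The lag check above can be run directly on the replica with psql; a sketch, with a placeholder connection URI (use the replica’s connection information from the Console):

```shell
# Measure replication lag in seconds on the replica (URI is a placeholder)
REPLICA_URI="${REPLICA_URI:-postgres://avnadmin:password@replica-host:12345/defaultdb?sslmode=require}"
LAG_SQL="SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()) AS lag_seconds;"
psql "$REPLICA_URI" -Atc "$LAG_SQL"
```

A steadily growing value suggests the replica cannot keep up with the primary’s write load.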
API reference
List integrations
Create integration
Delete integration
Create endpoint
curl -H "Authorization: Bearer $TOKEN" \
https://api.aiven.io/v1/project/{project}/integration
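Creating an integration over the REST API is a POST to the same path. A sketch; the field names mirror the CLI flags above and should be checked against the Aiven API reference:

```shell
# Request body: same source/destination/type as the CLI examples
cat > integration.json <<'EOF'
{
  "source_service": "postgres-prod",
  "dest_service": "metrics-db",
  "integration_type": "metrics"
}
EOF

# Submit the request ($TOKEN is an API token, as in the example above)
curl -sS -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @integration.json \
  "https://api.aiven.io/v1/project/my-project/integration"
```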
Next steps
Monitoring & Logs: learn about monitoring and viewing logs
VPC & Networking: configure networking for integrations
Security: secure your service integrations
Billing: understand integration costs