Configuration
Core Parameters
A list of Elasticsearch endpoints to send logs to. Each endpoint must contain an HTTP scheme and may specify a hostname or IP address and port. Credentials can be embedded in the URL (e.g., https://user:[email protected]), but this cannot be combined with the auth configuration.
Elasticsearch indexing mode. Options:
- bulk: standard bulk indexing using the Bulk API
- data_stream: use Elasticsearch Data Streams (requires Elasticsearch 7.9+)
The API version of Elasticsearch. Set to auto for automatic detection, or explicitly specify v6, v7, or v8. Amazon OpenSearch Serverless requires auto.
Authentication
The Elasticsearch sink supports multiple authentication strategies.
Basic Authentication
Use HTTP Basic Authentication with username and password.
Basic authentication username.
Basic authentication password (supports environment variables).
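As a hedged sketch, basic authentication might look like the following TOML fragment. The table name sinks.es and the option names strategy, user, and password are illustrative assumptions inferred from the descriptions above, not confirmed names; the ${ES_PASSWORD} placeholder shows the environment-variable support mentioned for the password field.

```toml
# Hypothetical sink table and option names; adjust to your tool's schema.
[sinks.es.auth]
strategy = "basic"          # select HTTP Basic Authentication
user     = "ingest"         # basic authentication username
password = "${ES_PASSWORD}" # resolved from an environment variable
```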
AWS Authentication (OpenSearch)
For AWS OpenSearch Service, use AWS IAM authentication:
Use AWS SigV4 authentication for Amazon OpenSearch Service.
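A minimal sketch of SigV4 authentication, assuming a strategy option on the auth table and a separate region setting; both names are illustrative assumptions, and the region value is a placeholder.

```toml
# Hypothetical option names for AWS SigV4 auth; not confirmed by this page.
[sinks.es.auth]
strategy = "aws"      # sign requests with AWS SigV4

[sinks.es.aws]
region = "us-east-1"  # placeholder: region hosting the OpenSearch domain
```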
Bulk Mode Configuration
When using mode = "bulk", configure the bulk indexing behavior:
The name of the index to write events to. Supports template syntax and date formatting using strftime specifiers.
Bulk API action. Options: index, create, update.
Default index name if the template in bulk.index cannot be resolved.
Event field name to use for Elasticsearch’s _id field. If unspecified, Elasticsearch auto-generates IDs.
Data Stream Mode
Data streams provide a convenient way to index time-series data in Elasticsearch:
Data stream type (first component of the data stream name).
Data stream dataset (second component of the data stream name).
Data stream namespace (third component of the data stream name).
Automatically route events by deriving the data stream name from the event fields data_stream.type, data_stream.dataset, and data_stream.namespace.
Automatically add and sync the data_stream.* event fields to match the data stream name. For example, type logs, dataset nginx, and namespace production target the data stream logs-nginx-production.
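The three components above compose the data stream name. A hedged sketch, assuming the option names type, dataset, namespace, auto_routing, and sync_fields on a data_stream table (the first three appear in the text as event field names; the last two are assumed spellings):

```toml
# Illustrative data stream configuration; option names are assumptions.
[sinks.es]
mode = "data_stream"        # requires Elasticsearch 7.9+

[sinks.es.data_stream]
type      = "logs"          # first component
dataset   = "nginx"         # second component
namespace = "production"    # third component
# Together these target the data stream "logs-nginx-production".
# Alternatively, derive the name from each event's data_stream.* fields:
# auto_routing = true
# sync_fields  = true       # keep data_stream.* fields matching the name
```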
Batching
Configure batching to optimize throughput:
Maximum number of events to batch before flushing.
Maximum size of a batch in bytes.
Maximum time to wait before flushing a partial batch.
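The three limits combine so that a batch flushes when the first one is hit. A hedged sketch: max_events follows the batch.max_events name used later on this page, while max_bytes and timeout_secs are assumed names for the byte-size and time limits described above.

```toml
# Illustrative batch tuning; a batch flushes when any limit is reached.
[sinks.es.batch]
max_events   = 1000       # flush after this many events
max_bytes    = 10_000_000 # ...or once the batch reaches ~10 MB
timeout_secs = 1          # ...or after 1 second, whichever comes first
```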
Encoding
Configure how events are encoded before sending to Elasticsearch. Supports field transformations and filtering.
Compression
Compression algorithm. Options: none, gzip, zstd, snappy.
Advanced Options
Name of the Elasticsearch ingest pipeline to apply.
Custom query parameters to add to each HTTP request.
Whether to retry successful requests containing partial failures. Use with id_key to avoid duplicates.
Document type for Elasticsearch 6.x and below. Ignored in Elasticsearch 7.x+.
TLS Configuration
Enable TLS/SSL connections.
Path to CA certificate file for verifying the server.
Path to client certificate file for mutual TLS.
Path to client private key file for mutual TLS.
Verify the server’s TLS certificate.
Verify the server’s hostname matches the certificate.
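Putting the five TLS settings together, a mutual-TLS setup might look like this hedged sketch; the option names (enabled, ca_file, crt_file, key_file, verify_certificate, verify_hostname) and file paths are assumptions based on the descriptions above.

```toml
# Illustrative mutual TLS configuration; names and paths are placeholders.
[sinks.es.tls]
enabled            = true
ca_file            = "/etc/ssl/es-ca.pem"  # CA that verifies the server
crt_file           = "/etc/ssl/client.pem" # client certificate for mTLS
key_file           = "/etc/ssl/client.key" # client private key for mTLS
verify_certificate = true                  # check the server's certificate
verify_hostname    = true                  # check the hostname matches it
```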
Request Configuration
Request timeout in seconds.
Maximum number of requests per time window.
Number of retry attempts for failed requests.
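A hedged sketch of request tuning. Only request.tower.concurrency appears verbatim later on this page; timeout_secs, rate_limit_num, and retry_attempts are assumed names for the timeout, rate limit, and retry settings described above.

```toml
# Illustrative request tuning; most option names are assumptions.
[sinks.es.request]
timeout_secs   = 60   # per-request timeout in seconds
rate_limit_num = 100  # maximum requests per time window
retry_attempts = 5    # retries for failed requests

[sinks.es.request.tower]
concurrency = 8       # limit on in-flight requests
```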
Complete Examples
Basic Configuration
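A minimal end-to-end sketch combining the core parameters, basic auth, and bulk mode. All table and option names, the endpoint, credentials, and index template are illustrative assumptions inferred from the descriptions in this page.

```toml
# Hedged basic configuration; every concrete value is a placeholder.
[sinks.es]
endpoints   = ["https://es.example.com:9200"]
mode        = "bulk"
api_version = "auto"
compression = "gzip"

[sinks.es.auth]
strategy = "basic"
user     = "ingest"
password = "${ES_PASSWORD}"

[sinks.es.bulk]
index  = "app-logs-%Y.%m.%d"  # strftime specifiers give daily indices
action = "index"
```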
Data Streams with AWS OpenSearch
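A hedged sketch combining data stream mode with AWS SigV4 authentication for Amazon OpenSearch Service; the endpoint, region, and option names are illustrative assumptions.

```toml
# Illustrative data streams + AWS OpenSearch configuration.
[sinks.es]
endpoints   = ["https://search-mydomain.us-east-1.es.amazonaws.com"]
mode        = "data_stream"
api_version = "auto"           # required for OpenSearch Serverless

[sinks.es.auth]
strategy = "aws"               # SigV4 request signing

[sinks.es.data_stream]
type      = "logs"
dataset   = "app"
namespace = "production"       # targets "logs-app-production"
```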
High-Throughput Configuration
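A hedged sketch tuned for throughput, following the Performance Optimization advice below: larger batches, zstd compression, multiple endpoints, higher concurrency, and id_key with request_retry_partial to keep retries idempotent. Option names and values are illustrative assumptions.

```toml
# Illustrative high-throughput configuration; tune values for your cluster.
[sinks.es]
endpoints = [
  "https://es-1.example.com:9200",
  "https://es-2.example.com:9200",  # multiple nodes for load balancing
]
mode        = "bulk"
compression = "zstd"
id_key      = "event_id"            # documents dedupe on _id when retried
request_retry_partial = true        # retry partial bulk failures safely

[sinks.es.batch]
max_events = 5000                   # larger batches reduce per-request overhead

[sinks.es.request.tower]
concurrency = 16                    # more in-flight requests
```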
Troubleshooting
Connection Issues
If you can’t connect to Elasticsearch:
- Verify endpoints are correct and accessible
- Check authentication credentials
- Ensure TLS settings match your cluster configuration
- Review firewall and network policies
Indexing Errors
For document indexing failures:
- Check the index template exists and matches your data
- Verify field mappings in Elasticsearch
- Enable request_retry_partial with id_key for partial failures
- Review Elasticsearch cluster logs
Performance Optimization
- Increase batch size: Higher batch.max_events reduces per-request overhead
- Use compression: Enable compression = "gzip" or compression = "zstd"
- Multiple endpoints: Load balance across multiple nodes
- Adjust concurrency: Tune request.tower.concurrency
- Data streams: Use data streams for time-series data
Best Practices
- Use data streams for time-series data (logs, metrics)
- Set id_key to prevent duplicates when retrying
- Enable compression to reduce network usage
- Configure batching to balance latency and throughput
- Use AWS authentication for OpenSearch Service
- Monitor cluster health to prevent backpressure
- Use index lifecycle management (ILM) to manage data retention