Description
Component(s)
exporter/exporterhelper, exporter/exporterhelper/internal/queuebatch
Describe the issue you're reporting
After deploying some new services that send data through a collector, I am now seeing this log:
2025-12-08T20:38:34.054Z warn queuebatch/partition_batcher.go:75 Failed to split request. {"resource": {"service.instance.id": "", "service.name": "foo", "service.version": "1.1.0"}, "otelcol.component.id": "otlp/newrelic", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "error": "one log record size is greater than max size, dropping items: 4553"}
I am using this exporter configuration:
otlp/newrelic:
  endpoint: ${env:NEW_RELIC_OTLP_ENDPOINT:-https://otlp.nr-data.net:443}
  headers:
    api-key: ${NR_KEY}
  tls:
    insecure: false
  compression: zstd
  sending_queue:
    queue_size: ${env:EXPORTER_SENDING_QUEUE_SIZE:-1000} # Default
    num_consumers: ${env:EXPORTER_QUEUE_NUM_CONSUMERS:-6}
    batch:
      flush_timeout: 200
      min_size: ${env:EXPORTER_SENDING_QUEUE_SIZE:-1000}
      max_size: ${env:EXPORTER_SEND_BATCH_MAX_SIZE:-750000}
      sizer: ${env:EXPORTER_SEND_BATCH_SIZER:-bytes}
  timeout: ${env:EXPORTER_SEND_TIMEOUT:-5s}
I am guessing that one of these new services is sending a very large log record. I already have attribute length truncation set via the system property "otel.attribute.value.length.limit": 4095, which is New Relic's limit, but I am not truncating the log record body.
Is my guess correct? Should I implement a transform processor to truncate the body, too?
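If so, I assume something like the following would work (just a rough sketch: the 4095 limit mirrors the attribute limit, and the IsString/Len guard is there because the body is not necessarily a string):

transform/truncate_body:
  error_mode: ignore
  log_statements:
    - context: log
      statements:
        # Truncate string bodies longer than 4095 characters before exporting.
        - set(body, Substring(body, 0, 4095)) where IsString(body) and Len(body) > 4095

Please correct me if a different approach is preferred here.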
Also, why does this need to drop the entire batch? Is it too expensive to simply drop the offending log?
Thanks