Metrics do not have instance_id or container_id when deployed using Docker or Kubernetes #5073
Labels
feature-request
pkg:exporter-prometheus
spec-feature
This is a request to implement a new feature which is already specified by the OTel specification
up-for-grabs
Good for taking. Extra help will be provided by maintainers
What version of OpenTelemetry are you using?
"@opentelemetry/api": "1.9.0",
"@opentelemetry/auto-instrumentations-node": "0.49.1",
What version of Node are you using?
Node 14.17.6 and Node 16.19.0 (both versions tested)
What did you do?
I have multiple instances of a service running on 2 AWS EC2 instances. Because of this, the counter values maintained for http_server_duration_milliseconds_count differ between instances, which leads Grafana Alloy (based on the OpenTelemetry Collector) to show a zig-zag pattern (due to the difference in start times between the two instances). In Java and Python, metrics from multiple instances get an instance_id attribute associated with them, but in Node.js that is not the case.
I thought this might be specific to running in a Docker container or Kubernetes cluster, so I tried deploying the same application that way, but I still do not get any container_id or instance_id to distinguish metrics from different instances.
What did you expect to see?
I expected to see behaviour similar to the Java and Python OpenTelemetry SDKs, where an instance_id attribute is present on the metrics.
What did you see instead?
No instance_id attribute in the metrics for Node.js.
Additional context
This really affects the graphs in Grafana Alloy: rate graphs spike to much higher values instead of showing zero, which really put us off.