
Inconsistent container_cpu_usage_seconds_total Shows Extremely High Values (e.g., 1.73 Billion) #3611

Open
roshini-cp opened this issue Oct 22, 2024 · 1 comment

Comments

@roshini-cp

Hello team,

I'm encountering an issue with the container_cpu_usage_seconds_total metric, where it occasionally spikes to very high values, such as 1.73 billion, for some containers. These values seem unusually large and only occur sporadically. After a short time, the metric returns to its expected range.

Details:

Metric: container_cpu_usage_seconds_total
Example of high value: ~1.73 billion
Occurrence: sporadic; the value returns to its normal range after some time
Expected behavior: the metric is a cumulative counter of CPU seconds, so a value around 1.73 billion seconds (roughly 55 years of CPU time) looks abnormal.

[Screenshot: graph of container_cpu_usage_seconds_total spiking to ~1.73 billion]

Questions:

What might be causing these sporadic spikes in the metric?
Could this be an issue with the metric collection or reporting system?
Are there any recommended steps to debug or prevent this from happening?
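For reference, a minimal sketch of one way to narrow this down (assuming cAdvisor is reachable on localhost:8080 and using a placeholder container name): log the raw counter straight from cAdvisor's /metrics endpoint over time. If the value exported by cAdvisor itself stays sane, the spike is being introduced downstream in the scrape or storage path.

    # Poll cAdvisor's raw /metrics output for one container and log the counter.
    # Assumes cAdvisor listens on localhost:8080 and the container is named
    # "my-container" -- both are placeholders, adjust for your environment.
    while true; do
      ts=$(date -Is)
      val=$(curl -s http://localhost:8080/metrics \
            | grep 'container_cpu_usage_seconds_total' \
            | grep 'name="my-container"')
      echo "$ts $val"
      sleep 30
    done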
Any help or insights would be greatly appreciated!

Thank you in advance for your time and assistance.

Best regards,
Roshni

@rasoanaivo-r

rasoanaivo-r commented Mar 4, 2025

I'm encountering the same thing, but with container_cpu_user_seconds_total.

[Screenshot: graph of container_cpu_user_seconds_total showing the same abnormally high values]

cAdvisor version: cadvisor:v0.46.0
Docker version: 26.1.3, build b72abbb

cAdvisor runs inside a container, launched with these options:

    cadvisor:
      command:
        - "--allow_dynamic_housekeeping=true"
        - "--housekeeping_interval=30s"
        - "--global_housekeeping_interval=2m"
        - "--store_container_labels=false"
        - "--whitelisted_container_labels=com.docker.compose.service,com.docker.compose.version"
        - "--enable_metrics=cpu,diskIO,memory,network,oom_event"
        - "--enable_load_reader=false"
        - "--docker_only=true"
        - "--disable_root_cgroup_stats=true"

The wrong values begin when my entire docker-compose stack is updated and restarted.
The reported value keeps growing until I stop and relaunch either the supervised container or the cAdvisor container.
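Since the problem starts when the stack is recreated, one thing worth checking (a sketch only, with an illustrative service name and assuming cAdvisor is published on localhost:8080) is whether cAdvisor keeps exporting a series for the pre-restart container id alongside the new one; that would point at stale bookkeeping after the recreate rather than a bad cgroup read.

    # Current id of the recreated container (service name is illustrative).
    docker inspect --format '{{.Id}}' service_xxxx

    # Container ids cAdvisor currently exports for that name. More than one id,
    # or an id that no longer matches the live container, suggests stale state.
    curl -s http://localhost:8080/metrics \
      | grep 'container_cpu_user_seconds_total' \
      | grep 'name="service_xxxx"' \
      | grep -o 'id="[^"]*"' | sort -u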

The supervised container runs Node.js 16.

I also ran docker stats directly on the host, but the CPU column (third column) shows no spike at the same moments:

CONTAINER ID   NAME                          CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
a8af6030fe85   service_xxxx                  0.02%     173.6MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.02%     173.6MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.52%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.52%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.53%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.53%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.53%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.51%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.51%     173.7MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.63%     173.8MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.63%     173.8MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.34%     173.8MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.34%     173.8MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
a8af6030fe85   service_xxxx                  0.34%     173.8MiB / 28.48GiB   0.60%     428MB / 231MB     240MB / 0B        12
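As a further cross-check (again just a sketch: the cgroup paths below assume the systemd cgroup driver and will differ on other setups), the counter cAdvisor exports can be compared against the kernel's own cgroup CPU accounting for the same container. If the cgroup value looks plausible while cAdvisor's exported value is in the billions, the problem sits in cAdvisor's reading or bookkeeping rather than in the kernel counters.

    # Compare cAdvisor's exported counter with the cgroup's own CPU accounting.
    # Service name and cgroup paths are assumptions -- adjust for your setup.
    CID=$(docker inspect --format '{{.Id}}' service_xxxx)

    # cgroup v2: usage_usec is cumulative CPU time in microseconds.
    cat "/sys/fs/cgroup/system.slice/docker-${CID}.scope/cpu.stat" 2>/dev/null

    # cgroup v1: cpuacct.usage is cumulative CPU time in nanoseconds.
    cat "/sys/fs/cgroup/cpuacct/docker/${CID}/cpuacct.usage" 2>/dev/null

    # What cAdvisor exports for the same container (in seconds).
    curl -s http://localhost:8080/metrics \
      | grep 'container_cpu_user_seconds_total' \
      | grep 'name="service_xxxx"'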
