Data Corruption on dcgm_fi_dev_gpu_util Metric #199
Confirmed that this problem is with dcgmi directly, and not with the go-wrapper translation layer. To reproduce, you can run the following script as a background job for a few days:
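A minimal sketch of such a polling loop (the original script isn't reproduced here), assuming field ID 203 for DCGM_FI_DEV_GPU_UTIL from dcgm_fields.h, dcgmi dmon for sampling, and an illustrative log file name:

```sh
#!/usr/bin/env bash
# Sketch of a reproduction loop: poll DCGM_FI_DEV_GPU_UTIL (field ID 203)
# once per second and log any sample above 100, which should be impossible
# for a utilization percentage.
while true; do
    dcgmi dmon -e 203 -c 1 |
        awk -v ts="$(date '+%F %T')" '$NF ~ /^[0-9]+$/ && $NF+0 > 100 { print ts, $0 }' \
        >> bad_gpu_util_samples.log
    sleep 1
done
```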
Could you check whether the values you get are derived from DCGM_INT32_BLANK?
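For reference, the int32 blank sentinels are defined in dcgm_structs.h roughly as listed below (values copied here for illustration; verify against your DCGM version). Something like this could be run over the log from the sketch above to flag them:

```sh
# Int32 blank sentinels from DCGM's dcgm_structs.h:
#   DCGM_INT32_BLANK            = 2147483632  (0x7ffffff0)
#   DCGM_INT32_NOT_FOUND        = 2147483633  (BLANK + 1)
#   DCGM_INT32_NOT_SUPPORTED    = 2147483634  (BLANK + 2)
#   DCGM_INT32_NOT_PERMISSIONED = 2147483635  (BLANK + 3)
# Flag any logged sample at or above the blank threshold:
awk '$NF+0 >= 2147483632' bad_gpu_util_samples.log
```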
@nikkon-dev, interesting thought, but it seems like the values are more diverse than that. In the past 14 days I'm seeing:
The only one that would match something from the list would be
Hi all,
I'm currently using dcgm_fi_dev_gpu_util to monitor GPU utilization, but I'm running into an issue where it will occasionally spit out a data point that isn't between 0 and 100. The highest observed value was 4294967295 (the maximum value of a UINT32, which might be a hint), but most often it's in the range of 1k to 200k. This appears to happen both when there is load on the GPUs and when the GPUs are sitting at 0% before and after the erroneous data point. Has anyone else encountered problems with this metric?
I've seen it suggested elsewhere that there's a newer DCGM_FI_PROF_GR_ENGINE_ACTIVE which might replace it, but I don't know whether the root cause here is the metric itself or something in the collection code. Anyone know whether collecting the 'prof' metric would incur a greater performance penalty than the 'dev' metric?
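For comparison, here's a sketch of sampling both fields side by side (assuming field IDs 203 for DCGM_FI_DEV_GPU_UTIL and 1001 for DCGM_FI_PROF_GR_ENGINE_ACTIVE from dcgm_fields.h; the prof fields require a GPU and driver that support DCGM profiling metrics):

```sh
# Sample the legacy utilization field (203) and the newer profiling field
# (1001) together, once per second, to see whether only the legacy field
# produces out-of-range samples.
dcgmi dmon -e 203,1001 -d 1000
```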
Thanks!
(cross-post of NVIDIA/go-dcgm#75, since I'm not sure whether this is a problem with the metric itself or the Go wrapper being used to extract it)