WINA-1275: add agent_loaded_modules.json with module metadata and fla… #44786
base: main
Conversation
Static quality checks: ✅
Please find below the results from static quality gates.
17 successful checks with minimal change (< 2 KiB).
Regression Detector Results
Metrics dashboard
Baseline: 969d924
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +0.65 | [-2.28, +3.59] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | quality_gate_metrics_logs | memory utilization | +2.36 | [+2.14, +2.57] | 1 | Logs bounds checks dashboard |
| ➖ | quality_gate_logs | % cpu utilization | +0.93 | [-0.54, +2.40] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.73 | [+0.50, +0.96] | 1 | Logs |
| ➖ | docker_containers_cpu | % cpu utilization | +0.65 | [-2.28, +3.59] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.12 | [+0.07, +0.17] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.09 | [+0.02, +0.16] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.09 | [-0.08, +0.25] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.06 | [+0.01, +0.11] | 1 | Logs bounds checks dashboard |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.05 | [-0.36, +0.46] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.02 | [-0.51, +0.56] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.03, +0.05] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.01 | [-0.03, +0.04] | 1 | Logs bounds checks dashboard |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | +0.00 | [-0.13, +0.13] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.14, +0.14] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.09, +0.09] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.05 | [-0.44, +0.34] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.10 | [-0.25, +0.06] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | -0.27 | [-0.33, -0.21] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -0.30 | [-0.41, -0.19] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | -0.30 | [-0.35, -0.25] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.52 | [-0.61, -0.43] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.57 | [-0.78, -0.36] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | -0.62 | [-0.85, -0.38] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.
Note: In the tables below we bucket failures by experiment, variant, and failure type. For each bucket we list the replicate indexes that failed, with an annotation for how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed with OOM kills: replicate 0 failed 8 executions and replicate 1 failed 6 executions, all with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
❌ Retried Profiling Replicate Execution Failures (target internal profiling)
Note: Profiling replicas may still be executing. See the debug dashboard for up to date status.
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| quality_gate_idle_all_features | baseline | 11 (x4) | Oom killed | Debug Dashboard |
| quality_gate_idle_all_features | comparison | 11 (x4) | Oom killed | Debug Dashboard |
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
pkg/util/lsof/modules_windows.go
Outdated
ProcessName string `json:"process_name"`
ProcessPID  int    `json:"process_pid"`
We do not need the pid and process name for each process; they are identical, right? Also, are we collecting it for all Agent processes or only core?
You’re right, they’re the same. I moved the process metadata to the report header and removed it from each module entry.
Change (schema):
- The report header now has ProcessName/ProcessPID.
- ModuleEntry no longer includes process fields. And this is for the core process only!
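For reference, a minimal sketch of the revised shape (field names are only partially taken from the diff and otherwise illustrative; the final structs may differ):

```go
// LoadedModulesReport is the report header; process identity now lives here once
// instead of being repeated on every module entry.
type LoadedModulesReport struct {
	GeneratedAt string        `json:"generated_at"`
	ProcessName string        `json:"process_name"`
	ProcessPID  int           `json:"process_pid"`
	Modules     []ModuleEntry `json:"modules"`
}

// ModuleEntry describes a single loaded DLL; it no longer carries process fields.
type ModuleEntry struct {
	DLLPath        string `json:"dll_path"`
	SizeBytes      int64  `json:"size_bytes,omitempty"`
	FileVersion    string `json:"file_version,omitempty"`
	ProductVersion string `json:"product_version,omitempty"`
}
```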
pkg/util/lsof/modules_windows.go
Outdated
ProductVersion   string `json:"product_version,omitempty"`
OriginalFilename string `json:"original_filename,omitempty"`
InternalName     string `json:"internal_name,omitempty"`
Size             int64  `json:"size_bytes,omitempty"`
What size is it? In memory or on disk, I wonder.
On-disk file size. I renamed the field to SizeBytes and documented it in the struct tags.
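For illustration, the on-disk semantics can be spelled out where the field is populated; this is a sketch, not the PR's exact code (entry and modPath are assumed names):

```go
// Populate SizeBytes from the file on disk: this is the DLL's on-disk file size
// in bytes (from os.Stat), not its in-memory footprint.
if fi, err := os.Stat(modPath); err == nil {
	entry.SizeBytes = fi.Size()
}
```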
pkg/util/lsof/modules_windows.go
Outdated
// ListLoadedModulesReportJSON returns a JSON payload describing DLLs loaded by the current agent process.
func ListLoadedModulesReportJSON() ([]byte, error) {
	files, err := ListOpenFilesFromSelf()
This is crazy mislabeling. ListOpenFilesFromSelf does not return open files here but loaded module files. And the files var does not even reveal that (loadModFiles, e.g., probably would).
Ok. I renamed the vars and the helper to reflect “loaded modules,” and removed the “files” naming.
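A hypothetical post-rename shape (the helper and variable names below are illustrative, not necessarily the PR's exact identifiers):

```go
// ListLoadedModulesReportJSON returns a JSON payload describing DLLs loaded by
// the current agent process.
func ListLoadedModulesReportJSON() ([]byte, error) {
	loadedModules, err := listLoadedModulesFromSelf() // hypothetical rename of the old helper/var
	if err != nil {
		return nil, err
	}
	report := buildLoadedModulesReport(loadedModules) // hypothetical report builder
	return json.MarshalIndent(report, "", "  ")
}
```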
pkg/util/lsof/modules_windows.go
Outdated
DLLName string `json:"dll_name"`
DLLPath string `json:"dll_path"`
Interesting question. The DLL name is in the DLL path and seemingly redundant. But you're probably right that it is easier to read.
Ok! I removed DLLName from the JSON.
pkg/util/lsof/modules_windows.go
Outdated
pid := os.Getpid()

report := LoadedModulesReport{
	GeneratedAt: time.Now().UTC().Format(time.RFC3339),
How is the time reported for other Flare artifacts? Is it the same way or different? Here and lower in the file.
Using UTC RFC3339, which matches other flare sections. I added a short comment too.
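For reference, a tiny sketch of the format (the example value in the comment is made up, not from a real flare):

```go
// UTC timestamps in RFC3339, e.g. "2025-01-02T15:04:05Z", to match other flare sections.
generatedAt := time.Now().UTC().Format(time.RFC3339)
```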
pkg/util/lsof/modules_windows.go
Outdated
var perms string
if fi, err := os.Stat(modPath); err == nil {
	size = fi.Size()
	perms = fi.Mode().Perm().String()
I wonder whether we even need perms. @clarkb7?
pkg/util/winutil/winver.go
Outdated
langCodePage := fmt.Sprintf("%04x%04x", langCode, codePage)

// Helper to read a specific string value
readString := func(key string) string {
Why not a plain function?
Good point. I extracted the inner closure into a small helper so it’s reusable and clearer.
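A minimal sketch of what a plain (non-closure) helper could look like, assuming golang.org/x/sys/windows for the version-info call; the name and exact signature are illustrative, not the PR's actual code:

```go
// readVersionString extracts one \StringFileInfo value (e.g. "FileVersion") from a
// version-info block previously obtained via GetFileVersionInfo.
func readVersionString(block []byte, langCodePage, key string) string {
	var buf unsafe.Pointer
	var size uint32
	sub := fmt.Sprintf(`\StringFileInfo\%s\%s`, langCodePage, key)
	if err := windows.VerQueryValue(unsafe.Pointer(&block[0]), sub, unsafe.Pointer(&buf), &size); err != nil || size == 0 {
		return ""
	}
	// For string values, size is a count of UTF-16 code units.
	return windows.UTF16ToString(unsafe.Slice((*uint16)(buf), size))
}
```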
pkg/util/winutil/winver.go
Outdated
info.FileVersion = readString("FileVersion")
info.ProductVersion = readString("ProductVersion")
It also looks like some of this information is retrieved slightly differently in https://github.com/DataDog/datadog-agent/pull/44786/changes#diff-111b15151741e55b83953b0b56365d0ff53083208a1df35aca7108e82d8d4a26R113. Potentially, other Live Process code may benefit from the refactoring in GetFileDescription (DataDog\datadog-agent\pkg\util\winutil\process.go).
I added a small shared helper in winutil and switched both call sites.
What does this PR do?
Adds a new Windows-only flare artifact, agent_loaded_modules.json, which lists the Agent’s loaded DLLs along with relevant metadata. The existing _open_files.txt file still exists for backward compatibility. For each loaded module, the flare includes:
• Full path and module name
• File timestamp, size, and permissions
• Windows version info fields (CompanyName, ProductName, FileVersion, ProductVersion, OriginalFilename, InternalName)
Implementation details:
• Introduces a helper for reading Windows version info strings (pkg/util/winutil)
• Adds a JSON builder for loaded modules (pkg/util/lsof)
• Wires this into the Windows lsof flare provider (comp/core/lsof/impl)
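Hypothetically, the provider wiring could look roughly like the sketch below; the AddFile-style builder method and the exact package/interface names are assumptions based on how flare providers commonly register files, not quotes from this PR:

```go
// Sketch only: the Windows lsof flare provider adds the new artifact to the flare.
func addLoadedModules(fb flaretypes.FlareBuilder) error {
	payload, err := lsof.ListLoadedModulesReportJSON() // helper introduced by this PR
	if err != nil {
		return err
	}
	// Assumed AddFile-style API; the real builder interface may differ.
	return fb.AddFile("agent_loaded_modules.json", payload)
}
```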
Motivation
• Addresses WINA-1275: the current Windows flare output is misnamed and not machine-parsable, making it harder for support to look at loaded modules.
• Unblocks WINA-1274 by providing a structured view that can be matched against known antimalware/interference lists (e.g., Panda, SentinelOne, Carbon Black).
Describe how you validated your changes
With Roberto
Done:
- agent_loaded_modules.json is included in the flare output, and the existing _open_files.txt is unchanged for backward compatibility.
- Verify JSON schema.
- Check scope: agent_loaded_modules.json
Additional Notes
• This PR intentionally limits scope to Windows version info fields. Other details can be added incrementally in a follow-up PR.
• The logic only runs when generating a flare; normal Agent runtime is unaffected.
• The change is additive and backward compatible with existing flare consumers.