Because of sharding, it's really hard to find the performance of a single test target, especially across multiple builds. Moreover, because of bazel-diff, the target might not have been run at all in a given build.
We already analyze `test_bep.json` when `--print_shard_summary` is set, so it should be easy to also dump per-target metrics into some database (Cloud SQL or whatever).
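For illustration, here is a minimal sketch of what extracting per-target metrics from `test_bep.json` could look like, assuming the file contains one JSON-encoded build event per line (as written by `--build_event_json_file`); the exact field names depend on the Bazel version:

```python
import json
from collections import defaultdict

def collect_per_target_metrics(bep_path):
    """Aggregate per-target test metrics from a BEP JSON file.

    Assumes one JSON build event per line; field names follow the
    Build Event Protocol and may differ between Bazel versions.
    """
    metrics = defaultdict(lambda: {"attempts": 0, "total_ms": 0, "statuses": set()})
    with open(bep_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            # Test results are keyed by an id.testResult entry that
            # carries the target label plus run/shard/attempt numbers.
            test_id = event.get("id", {}).get("testResult")
            if not test_id:
                continue
            entry = metrics[test_id["label"]]
            result = event.get("testResult", {})
            entry["attempts"] += 1
            # Older Bazel versions emit testAttemptDurationMillis as a
            # string; newer ones emit testAttemptDuration as "<secs>s".
            entry["total_ms"] += int(result.get("testAttemptDurationMillis", 0))
            entry["statuses"].add(result.get("status", "UNKNOWN"))
    return metrics
```

Each aggregated row could then be written to the database keyed by build ID and target label, which would make per-target performance queryable across builds.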
Previously we hard-coded the names of non-test steps such as Buildifier, which could cause problems when adding new steps.
With this commit we detect test steps by looking at their command, which has to contain `bazelci.py runner`.
Progress towards bazelbuild#2044
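For reference, a minimal sketch of such a check, assuming each step is a dict with a `command`/`commands` field as in a Buildkite pipeline definition (the field names here are assumptions, not copied from the actual change):

```python
def is_test_step(step):
    """Return True if a pipeline step runs tests via `bazelci.py runner`.

    Replaces a hard-coded list of non-test step names (Buildifier etc.)
    with a check on the step's command. `step` is assumed to be a dict
    shaped like a Buildkite pipeline step.
    """
    commands = step.get("command") or step.get("commands") or []
    if isinstance(commands, str):
        commands = [commands]
    return any("bazelci.py runner" in command for command in commands)
```

Matching on the command rather than the step name means newly added non-test steps are excluded automatically, without updating a hard-coded list.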