Performance benchmarking can take multiple forms:
- relatively quick (< 1 hour) tests to run locally during development to understand the performance impact of changes (a minimal microbenchmark sketch follows this list)
- continuous dashboard of perf changes over time, covering a variety of realistic deployment scenarios with multiple machines and configurations
- continuous-integration tests to prevent checking in performance regressions -- similar to coverage tests
- cloudperf/ contains what appears to be an attempt at measuring performance in a realistic multi-machine scenario. However, the instructions don't work, and it hasn't been touched in a year (other than moving the files).
- siege/ contains an initial attempt at a simple test that can be run iteratively during development to get a view of the time/space impact of changes under a given configuration.
- salvo/ contains a framework that abstracts nighthawk benchmark execution. It is still under active development.
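
As an illustration of the first form above, a quick local microbenchmark can be written with the Google Benchmark library and run repeatedly during development. This is only a sketch: the `BM_BuildHeaders` function and its workload are hypothetical placeholders, not code from this repository, and a real benchmark would exercise actual library code under a representative configuration.

```c++
// Minimal sketch of a quick, locally runnable microbenchmark using the
// Google Benchmark library. BM_BuildHeaders and its workload are
// hypothetical placeholders, not code from this repository.
#include <string>
#include <vector>

#include "benchmark/benchmark.h"

// Hypothetical workload: build a small vector of header-name strings.
static void BM_BuildHeaders(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<std::string> headers;
    for (int i = 0; i < state.range(0); ++i) {
      headers.emplace_back("x-custom-header-" + std::to_string(i));
    }
    // Keep the compiler from optimizing the work away.
    benchmark::DoNotOptimize(headers);
  }
}
// Run the workload with 10 and 100 headers per iteration to see how cost scales.
BENCHMARK(BM_BuildHeaders)->Arg(10)->Arg(100);

BENCHMARK_MAIN();
```

Comparing the before/after output of such a benchmark on a development branch gives a rough but fast signal on the performance impact of a change, without the setup cost of the multi-machine scenarios described above.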