benchmark for multiple tests in core, possibly elements
Please describe the new features with any relevant use cases.
We would like the ability to measure the performance of SST over time, and possibly within individual PRs. This means collecting the time taken to run each test. We will also need to collect timing information from larger benchmarks that are external to the test suites in core and elements.
Describe the proposed solution you plan to implement
The time to run each test is already printed to stdout when running `sst-test-core` and `sst-test-elements`. The test suite infrastructure will be modified to save the time, plus other information (see below). The direct output will be JSON following a schema (TBD) that can be inserted into a SQLite database. (Work on creating an API around the database would follow afterward, since we eventually want a web-based visualization tool.)

One issue with this approach is executing external benchmarks. For example, the calling model in https://github.com/sstsimulator/sst-benchmark/ is to run SST directly:
```
sst src/sst/benchmark/benchmark.py -- 10 all-to-all $PWD/stats_output.csv
```
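For concreteness, here is a minimal sketch of the kind of per-test record the modified test framework could emit and how it might be loaded into SQLite. The field names and table layout are placeholders for illustration only; the actual schema is TBD.

```python
import json
import sqlite3

# Hypothetical per-test record; field names are placeholders, not a proposed schema.
record = {
    "suite": "sst-test-core",
    "test": "some_test_name",
    "wall_time_sec": 12.34,
    "sst_version": "dev",
    "timestamp": "2024-01-01T00:00:00Z",
}

# One JSON line per test could be appended to the run's output...
print(json.dumps(record))

# ...and later inserted into a SQLite database for tracking over time.
conn = sqlite3.connect("sst_perf.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS test_timings (
           suite TEXT,
           test TEXT,
           wall_time_sec REAL,
           sst_version TEXT,
           timestamp TEXT
       )"""
)
conn.execute(
    "INSERT INTO test_timings VALUES "
    "(:suite, :test, :wall_time_sec, :sst_version, :timestamp)",
    record,
)
conn.commit()
conn.close()
```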
Since no test infrastructure is present, collecting info (not just the result of `time`) would require either a wrapper or putting some functionality inside of `sst` itself. I need feedback on this. This also raises the question of whether or not statistics output would ever be collected. If so, some deeper integration with compiled SST may be beneficial rather than parsing CSV outputs.

In the meantime, starting with only modifying the test framework isn't wasted work.
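The wrapper option could be as simple as timing the existing `sst` invocation and emitting a JSON record alongside the benchmark's own CSV output. A rough sketch under that assumption (the script name and record fields are hypothetical):

```python
#!/usr/bin/env python3
"""Hypothetical wrapper that times an external sst benchmark run."""
import json
import subprocess
import sys
import time

def main() -> int:
    # Everything passed to the wrapper is forwarded to sst unchanged.
    sst_cmd = ["sst"] + sys.argv[1:]

    start = time.monotonic()
    result = subprocess.run(sst_cmd)
    elapsed = time.monotonic() - start

    # Emit a timing record; fields are placeholders, not a settled schema.
    record = {
        "command": " ".join(sst_cmd),
        "wall_time_sec": elapsed,
        "return_code": result.returncode,
    }
    print(json.dumps(record))
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

It would be invoked the same way as the benchmark above, e.g. `./time_sst.py src/sst/benchmark/benchmark.py -- 10 all-to-all $PWD/stats_output.csv` (wrapper name hypothetical).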
Testing plan
For the test framework-based modifications, there should be a test that the output from an executed run matches the schema. The code will also be executed for each test session anyway.
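A sketch of what that check could look like, assuming the schema ends up expressed as JSON Schema and the `jsonschema` package is available (the schema, file path, and field names below are placeholders):

```python
import json
import jsonschema

# Placeholder schema; the real one is TBD.
TIMING_RECORD_SCHEMA = {
    "type": "object",
    "required": ["suite", "test", "wall_time_sec"],
    "properties": {
        "suite": {"type": "string"},
        "test": {"type": "string"},
        "wall_time_sec": {"type": "number"},
    },
}

def test_timing_output_matches_schema():
    # Hypothetical path to the JSON emitted by a test session.
    with open("test_timings.json") as f:
        records = json.load(f)
    for record in records:
        jsonschema.validate(instance=record, schema=TIMING_RECORD_SCHEMA)
```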
Additional context
Rough outline of what is to be saved: