Add infrastructure needed for performance testing #1067

Open
berquist opened this issue Apr 5, 2024 · 0 comments
berquist commented Apr 5, 2024

Please describe the new features with any relevant use cases.

We would like the ability to measure the performance of SST over time and possibly within individual PRs. This means collecting the time taken to run each test. We will also need to collect timing information from larger benchmarks that are external to the test suites in core and elements.

Describe the proposed solution you plan to implement

The time to run each test is already printed to stdout when running sst-test-core and sst-test-elements. The test suite infrastructure will be modified to save the time, plus other information (see below). The direct output will be JSON following a schema (TBD) that can be inserted into a SQLite database. (Work on creating an API around the database would follow afterward, since we eventually want a web-based visualization tool.)
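As a rough illustration only (the schema is TBD, so the field names and table layout below are placeholders, not the final design), a per-test record and its SQLite insertion might look like:

```python
# Sketch of a per-test timing record and its SQLite insertion.
# All field names and the table layout are assumptions pending the schema.
import json
import sqlite3

record = {
    "suite": "sst-test-core",                # hypothetical field names
    "test": "test_simple_clockerComponent",
    "wall_time_s": 1.42,
    "max_rss_kb": 51200,
    "timestamp": "2024-04-05T12:00:00Z",
}

conn = sqlite3.connect("sst-perf.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS test_timings (
           suite TEXT, test TEXT, wall_time_s REAL,
           max_rss_kb INTEGER, timestamp TEXT)"""
)
conn.execute(
    "INSERT INTO test_timings VALUES "
    "(:suite, :test, :wall_time_s, :max_rss_kb, :timestamp)",
    record,
)
conn.commit()

# The same record serialized as the JSON the test framework would emit:
print(json.dumps(record, indent=2))
```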

One issue with this approach is executing external benchmarks. For example, the calling model in https://github.com/sstsimulator/sst-benchmark/ is to run SST directly:

```shell
sst src/sst/benchmark/benchmark.py -- 10 all-to-all $PWD/stats_output.csv
```

Since no test infrastructure is present, collecting this information (beyond what `time` reports) would require either a wrapper script or putting some functionality inside `sst` itself. I need feedback on this. It also raises the question of whether statistics output would ever be collected; if so, deeper integration with compiled SST may be preferable to parsing CSV outputs.
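For reference, here is a minimal sketch of the wrapper-script option (POSIX only; the JSON field names are placeholders rather than a settled schema):

```python
# Sketch of a wrapper that times an external benchmark run and records the
# peak memory of the child process. Field names are placeholders.
import json
import resource
import subprocess
import sys
import time

cmd = ["sst", "src/sst/benchmark/benchmark.py", "--",
       "10", "all-to-all", "stats_output.csv"]

start = time.monotonic()
result = subprocess.run(cmd)
elapsed = time.monotonic() - start

# Peak RSS of all waited-for children; in kilobytes on Linux.
usage = resource.getrusage(resource.RUSAGE_CHILDREN)

print(json.dumps({
    "command": " ".join(cmd),
    "returncode": result.returncode,
    "wall_time_s": elapsed,
    "max_rss_kb": usage.ru_maxrss,
}))
sys.exit(result.returncode)
```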

In the meantime, starting by modifying only the test framework isn't wasted work.

Testing plan

For the test framework-based modifications, there should be a test verifying that the output from an executed run matches the schema. The code will also be exercised during every test session anyway.
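A sketch of what that conformance test could look like, assuming the third-party jsonschema package, a hypothetical output file name, and the placeholder field names from the record sketch above:

```python
# Sketch of a schema-conformance test for the emitted timing records.
# The schema, field names, and output path are assumptions.
import json
import jsonschema

RECORD_SCHEMA = {
    "type": "object",
    "required": ["suite", "test", "wall_time_s"],
    "properties": {
        "suite": {"type": "string"},
        "test": {"type": "string"},
        "wall_time_s": {"type": "number", "minimum": 0},
        "max_rss_kb": {"type": "integer", "minimum": 0},
    },
}

def test_timing_output_matches_schema():
    with open("timing_output.json") as handle:  # hypothetical output path
        for record in json.load(handle):
            jsonschema.validate(record, RECORD_SCHEMA)
```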

Additional context

Rough outline of what is to be saved:

  • benchmark timings for multiple tests in core, possibly elements
  • starting granularity: total time and memory footprint
  • long term: where should dedicated non-test benchmarks live?
@berquist berquist added this to the Future milestone Apr 5, 2024
@berquist berquist self-assigned this Apr 5, 2024