Create benchmarks directory and move babelstream into it #2237
Conversation
Force-pushed from d1769ae to ac3b661
Maybe we should add a CMake target. The benchmarks need to be built in the CI, and I see no reason why we should not enable the benchmarks for all builds. Therefore we can add the CMake argument here: Lines 80 to 94 in dcc87b3
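As a sketch, enabling this in such a CI configure call could look like the lines below; only -Dalpaka_BUILD_BENCHMARKS=ON is the suggested addition, the other arguments are placeholders and not the contents of the lines referenced above:

```sh
# Illustrative configure step only: the surrounding flags stand in for the
# existing CI arguments, the benchmark flag is the proposed addition.
cmake -S . -B build \
      -Dalpaka_BUILD_EXAMPLES=ON \
      -DBUILD_TESTING=ON \
      -Dalpaka_BUILD_BENCHMARKS=ON
```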
Force-pushed from 8d30818 to 1e4d0ae
I generally like the idea of separating benchmarks and examples, but could you please elaborate a bit on your motivation for doing this? Specifically, are you going to add more benchmarks? Are you planning to build the benchmarks differently from the examples? Thx!
In my opinion, examples can be written for any reason: pedagogical purposes, showing the implementation of a new feature, etc. Benchmarks will mainly focus on performance, and visualising how it changes over time with CI would show the general performance effect of each merged PR.
Alright, so you are preparing for some kind of performance CI? Here is a ticket for that: #1264
We discussed it last week. At the moment, such a CI is not possible due to a lack of resources, but we want to have benchmarks so that regression benchmarks can be run locally on laptops, workstations, or servers. For example, we thought about using
Force-pushed from bb04e0d to 614fb7f
Force-pushed from 614fb7f to 8cc55d6
There's also #1723, which I've just rebased on top of the current develop. It uses Catch2 as the benchmarking infrastructure (and is thus integrated with e.g. ctest). I tried to implement a generic fixture for benchmarking kernels that would allow us to write simple benchmarks for basic features, but I didn't implement any use case other than the random generator, so I didn't know what the actual, sensible requirements for such a fixture would be. Using Catch2 to handle the benchmarks is IMHO still a good idea, since we're already using it to handle the tests.
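For illustration, a Catch2-based micro-benchmark could look like the sketch below. This is not the fixture from #1723, just a generic example assuming Catch2 v3 with its benchmarking headers; the test name and the measured loop are made up:

```cpp
#include <catch2/benchmark/catch_benchmark.hpp>
#include <catch2/catch_test_macros.hpp>

#include <cstddef>
#include <vector>

// A benchmark lives inside an ordinary test case, so it is discovered by
// ctest together with the existing unit tests.
TEST_CASE("vectorAdd host benchmark", "[benchmark]")
{
    std::vector<float> a(1024, 1.0f);
    std::vector<float> b(1024, 2.0f);
    std::vector<float> c(1024, 0.0f);

    BENCHMARK("add 1024 floats")
    {
        for(std::size_t i = 0; i < a.size(); ++i)
            c[i] = a[i] + b[i];
        // Returning a value keeps the compiler from optimising the loop away.
        return c.back();
    };
}
```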
Force-pushed from a4317ef to 6ecabf2
Force-pushed from 6ecabf2 to 91f04ab
A simple PR. A directory called "benchmarks" is created and the babelstream example is copied into it. There is a new CMake flag, alpaka_BUILD_BENCHMARKS. If this flag is ON, then alpaka_ACC_CPU_B_SEQ_T_SEQ_ENABLE is turned ON as well (like with the alpaka_BUILD_EXAMPLES flag).
The code under the benchmarks directory is compiled, but the babelstream example is not run in the CI, as it was when it was in the examples directory before.
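For illustration, the described flag handling could look roughly like the following top-level CMake excerpt. This is a sketch of the behaviour described above, not a verbatim copy of the PR; the option description, the default value, and the subdirectory call are assumptions:

```cmake
# Sketch: user-facing switch for building the benchmarks.
option(alpaka_BUILD_BENCHMARKS "Build the alpaka benchmarks" OFF)

if(alpaka_BUILD_BENCHMARKS)
    # As with alpaka_BUILD_EXAMPLES, at least one back-end must be enabled,
    # so the sequential CPU back-end is switched on.
    set(alpaka_ACC_CPU_B_SEQ_T_SEQ_ENABLE ON CACHE BOOL "" FORCE)
    add_subdirectory("benchmarks/")
endif()
```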