Establishing and documenting a standard benchmark suite for Parcels would bring an active focus to performance within the project.
This benchmarking suite could include whole-simulation tests as well as tests that target specific parts of the codebase (e.g., particle file writing, which would be important for #1661). Note that tests relating to MPI should be realistic whole-simulation tests, since how fieldsets are loaded and where particles are located both have a significant impact on performance.
Ideally:
Have a suite of tests that can run on CI (when requested via a PR label) exercising various core parts of the codebase, saving and uploading a waterfall report of the execution time and memory use for each function (such as those generated by sciagraph), as well as I/O.
Known tools:
- asv (on Conda; used in numpy and scipy, and by the xarray team)
- pytest-benchmark (on Conda); see the sketch below
The benchmarks should be run on a machine with consistent resources (large simulations can be run on Lorenz at IMAU, which has significant resources and access to hydrodynamic forcing for realistic simulations).
The more I look at projects in this domain, the more I see asv being used to create benchmarks and compare them across the Git history for a project. I propose that we use asv for creating benchmarks of notable functions and classes so that we can track their performance. Whole simulation benchmarks will likely need to be done in another way (perhaps using an offline approach).
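To make that concrete, here is a rough sketch of what an asv benchmark file for Parcels could look like; asv collects `time_*` methods as wall-clock benchmarks and `peakmem_*` methods as peak-memory benchmarks, with `setup` running before each measurement. The file name, field sizes, particle counts, and the choice of `AdvectionRK4` are illustrative placeholders, not a proposed final suite:

```python
# Hypothetical benchmarks/benchmark_advection.py (names and sizes are placeholders).
from datetime import timedelta

import numpy as np
from parcels import AdvectionRK4, FieldSet, JITParticle, ParticleSet


class AdvectionSuite:
    def setup(self):
        # A small idealised fieldset, so the measurement is dominated by the
        # advection loop rather than by loading forcing data from disk.
        lon = np.linspace(0.0, 1.0, 64)
        lat = np.linspace(0.0, 1.0, 64)
        data = {
            "U": np.ones((lat.size, lon.size), dtype=np.float32),
            "V": np.zeros((lat.size, lon.size), dtype=np.float32),
        }
        self.fieldset = FieldSet.from_data(data, {"lon": lon, "lat": lat})

    def _run(self):
        # Leading underscore keeps this helper out of asv's benchmark discovery.
        pset = ParticleSet(
            fieldset=self.fieldset,
            pclass=JITParticle,
            lon=np.full(100, 0.5),
            lat=np.full(100, 0.5),
        )
        pset.execute(AdvectionRK4, runtime=timedelta(hours=6), dt=timedelta(minutes=5))

    def time_advection_rk4(self):
        self._run()

    def peakmem_advection_rk4(self):
        self._run()
```

asv would then track these numbers per commit, so a regression in, e.g., kernel execution would show up as a step change in the benchmark timeline.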
I'll be opting for an approach similar to the one used in scikit-image, xarray, etc., where benchmarks are run on demand via a PR label. I'll work on the infrastructure, and we can decide on the exact benchmarks to include later.
Looking into asv, it seems very versatile, with the ability to measure timing and memory consumption and to give insight through profiling.
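For reference, those three capabilities map onto asv's command line roughly as follows; a label-triggered CI job would likely wrap the `asv continuous` step (the revisions and the benchmark identifier, which refers to the sketch above, are placeholders):

```sh
asv run                   # run all time_* / peakmem_* benchmarks at the current commit
asv continuous main HEAD  # benchmark two revisions and report any regressions
asv profile benchmark_advection.AdvectionSuite.time_advection_rk4  # profile one benchmark
```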