
Add infrastructure for performance/benchmark tests #585

Open
vnmabus opened this issue Oct 13, 2023 · 0 comments
vnmabus commented Oct 13, 2023

Is your feature request related to a problem? Please describe.
We are currently checking the performance of our methods in a very informal, case-by-case way. This means that we do not have accurate measurements of performance gains/losses when we change things, nor can we easily detect if a change impacts performance negatively.

Describe the solution you'd like
A common way to measure and compare performance in the Python world is to use asv (airspeed velocity) benchmarks. These are used all around the Scientific Python ecosystem, for example in NumPy, SciPy, scikit-learn, or Pandas.

We should try to integrate this kind of test so that we can work on performance topics more effectively. As opposed to unit tests, these benchmarks are not usually run in CI, but only on demand. A rough sketch of what such a benchmark could look like is shown below.
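For illustration only, here is a minimal sketch of an asv benchmark module. The file name, class name, and the routine being timed are hypothetical placeholders; real benchmarks would target our own methods and live in whatever directory the project's asv.conf.json points to.

```python
# Hypothetical benchmarks/bench_example.py.
# asv discovers classes in the benchmark directory and times every method
# whose name starts with "time_"; "setup" runs before each measurement and
# is excluded from the timing.
import numpy as np


class InterpolationSuite:
    """Example benchmarks for a stand-in interpolation routine."""

    # asv re-runs each benchmark for every value of these parameters.
    params = [100, 1000, 10000]
    param_names = ["n_points"]

    def setup(self, n_points):
        # Data creation happens here so that it is not timed.
        rng = np.random.default_rng(seed=0)
        self.x = np.linspace(0, 1, n_points)
        self.y = rng.standard_normal(n_points)
        self.x_new = np.linspace(0, 1, 10 * n_points)

    def time_interp(self, n_points):
        # The body of a time_* method is what gets measured.
        np.interp(self.x_new, self.x, self.y)

    def peakmem_interp(self, n_points):
        # peakmem_* methods measure peak memory instead of wall time.
        np.interp(self.x_new, self.x, self.y)
```

With an asv.conf.json in place, asv run executes the benchmarks for a given commit, and asv continuous / asv compare can compare two revisions and flag regressions, which is exactly the kind of check we would run on demand when working on performance.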
