Use pyperformance to run the benchmarks. #2
Comments
This sounds awesome! We're not tied to our current implementation and would love to be on something more standard, especially if it lets us upload to a codespeed instance easily. Our current benchmark runner collects a couple of things that I'm not sure are tracked by pyperformance: p99 latency, warmup time, and memory usage. It'd be great if pyperformance supported these, but assuming that it doesn't currently, it'd be nice to keep our existing runner available even if we primarily use pyperformance to run the benchmarks.
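For reference, here is a minimal sketch of collecting that kind of data (per-iteration latencies for a p99 figure, plus peak memory); the workload, iteration counts, and function names are placeholders, not the project's actual runner:

```python
# Illustrative only: gather per-iteration latencies so a p99 value and peak
# memory usage can be reported alongside the mean.
import statistics
import time
import tracemalloc

def run_once():
    # Placeholder workload; a real runner would invoke the benchmark here.
    sum(i * i for i in range(10_000))

def collect_metrics(warmup=5, iterations=200):
    tracemalloc.start()
    for _ in range(warmup):            # warmup phase, could be timed separately
        run_once()
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_once()
        latencies.append(time.perf_counter() - start)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    p99 = statistics.quantiles(latencies, n=100)[98]   # 99th percentile
    return {"p99_latency_s": p99, "peak_mem_bytes": peak}

print(collect_metrics())
```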
Thanks for the info! I'll take care of that.
Hi @ericsnowcurrently, unfortunately I had to disable the non-legacy code path because it is currently generating bad results -- it ends up measuring mostly just process creation time. I think the issue is that pyperformance currently doesn't support the benchmarked function having any overhead of its own.
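To illustrate the kind of structure that sidesteps this (a sketch, not the actual fix for these benchmarks): pyperf's `bench_time_func` lets the benchmark time only the work it cares about, so one-time setup can stay outside the measured region. The benchmark name and workload below are made up.

```python
import time
import pyperf

def setup():
    # Expensive one-time setup (e.g. building inputs); kept out of the timing.
    return list(range(100_000))

def bench(loops, data):
    # Only the body of this loop contributes to the reported result.
    t0 = time.perf_counter()
    for _ in range(loops):
        sorted(data)
    return time.perf_counter() - t0

if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_time_func("example_sort", bench, setup())
```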
The pyperformance project is useful for running benchmarks in a consistent way and for analyzing the results. The CPython project uses it to generate the results you can find on https://speed.python.org. The "faster cpython" project, on which I work with Guido and others, is also using it regularly.
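As a rough sketch of the analysis side (file name and output hypothetical), the JSON written by something like `pyperformance run -o results.json` can be inspected with the pyperf API that pyperformance is built on:

```python
import pyperf

# Load a results file produced by a pyperformance/pyperf run and summarize it.
suite = pyperf.BenchmarkSuite.load("results.json")
for bench in suite.get_benchmarks():
    print(f"{bench.get_name()}: {bench.mean() * 1e3:.2f} ms "
          f"+- {bench.stdev() * 1e3:.2f} ms")
```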
We'd like to incorporate the benchmarks here into the suite we run. That involves getting them to run under pyperformance. (Note that pyperformance hasn't supported running external benchmarks, but I recently changed/am changing that.) I'm happy to do the work to update the benchmarks here. (Actually, I already did it, in part to verify the changes I made to pyperformance.)
So there are a few questions to answer:
Aside from that, I'll need help to verify that my changes preserve the intent of each benchmark.
Keep in mind that this change will allow you (and us) to take advantage of pyperformance for results stability and analysis, as well as posting results to speed.python.org. (You'd have to talk to @pablogsal about the possibility of posting results there.)
So, what do you think? I'd be glad to jump into a call to discuss, if that would help.