
Use pyperformance to run the benchmarks. #2

Open
ericsnowcurrently opened this issue Nov 17, 2021 · 3 comments

@ericsnowcurrently
Contributor

The pyperformance project is useful for running benchmarks in a consistent way and for analyzing the results. The CPython project uses it to generate the results you can find on https://speed.python.org. The "faster cpython" project, on which I work with Guido and others, is also using it regularly.

We'd like to incorporate the benchmarks here into the suite we run, which means getting them to run under pyperformance. (Note that pyperformance hasn't supported running external benchmarks, but I have recently been changing that.) I'm happy to do the work to update the benchmarks here. (Actually, I already did it, in part to verify the changes I made to pyperformance.)

So there are a few questions to answer:

  • are there any objections to updating these benchmarks to work with pyperformance? (I'll do the work.)
  • would it be okay if the output format from the benchmarks changes?
  • would it be okay to change the command for invoking these benchmarks? (use pyperformance directly instead of the existing "run_all.sh" script)
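For reference, here is a minimal sketch of what a benchmark ported to pyperformance could look like. pyperformance is built on the pyperf module; the workload below is a placeholder for illustration, not one of this repo's actual benchmarks.

```python
# Sketch only: pyperformance benchmarks are driven by pyperf.
# The import is guarded so the file stays importable without pyperf.
try:
    import pyperf
except ImportError:
    pyperf = None


def bench_workload() -> int:
    """Placeholder workload: sum of squares."""
    total = 0
    for i in range(10_000):
        total += i * i
    return total


def main():
    # pyperf handles warmup, sampling, and spawning worker processes.
    runner = pyperf.Runner()
    runner.bench_func("example_workload", bench_workload)
```

Running such a script directly (calling `main()`) hands control to pyperf, which manages calibration, worker processes, and result reporting, rather than the benchmark timing itself.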

Aside from that, I'll need help to verify that my changes preserve the intent of each benchmark.

Keep in mind that this change will allow you (and us) to take advantage of pyperformance for results stability and analysis, as well as posting results to speed.python.org. (You'd have to talk to @pablogsal about the possibility of posting results there.)

So, what do you think? I'd be glad to jump into a call to discuss, if that would help.

@kmod
Contributor

kmod commented Nov 18, 2021

This sounds awesome! We're not tied to our current implementation and would love to be on something more standard, especially if it lets us upload to a codespeed instance easily.

Our current benchmark runner collects a couple of things that I'm not sure are tracked by pyperformance: p99 latency, warmup time, and memory usage. It'd be awesome if pyperformance supported these things, but assuming it doesn't currently, it'd be nice to keep our existing runner available even if we primarily use pyperformance to run the benchmarks.
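For what it's worth, metrics like p99 latency and warmup can be derived from raw per-iteration timings. A stdlib-only sketch (function names are illustrative, not pyperformance's API, and the warmup heuristic is an assumption):

```python
import statistics
import time


def collect_samples(func, iterations=1000):
    """Time each call to func individually; returns seconds per call."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return samples


def p99_latency(samples):
    """99th-percentile latency of a list of per-call timings."""
    # quantiles(n=100) yields 99 cut points; index 98 is the p99 boundary
    return statistics.quantiles(samples, n=100)[98]


def warmup_count(samples, tolerance=1.10):
    """Rough warmup estimate: count of leading samples slower than
    1.10x the median of the second half of the run."""
    steady = statistics.median(samples[len(samples) // 2:])
    for i, s in enumerate(samples):
        if s <= steady * tolerance:
            return i
    return len(samples)
```

Memory usage is a different kind of measurement (peak RSS rather than per-call timing), so it would likely need separate support either way.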

@ericsnowcurrently
Contributor Author

Thanks for the info! I'll take care of that.

@kmod kmod closed this as completed Jan 20, 2022
@kmod
Contributor

kmod commented Mar 22, 2022

Hi @ericsnowcurrently, unfortunately I had to disable the non-legacy code path because it is currently generating bad results -- it ends up measuring mostly just process creation time. I think the issue is that pyperformance currently doesn't support the benchmarking function having any overhead of its own.
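One common pattern for keeping per-process overhead out of the measurement is to have the benchmark function time only its inner loop and return the elapsed time (the shape that pyperf's `bench_time_func` expects, if I recall its API correctly). A stdlib-only sketch with hypothetical setup/workload helpers:

```python
import time


def expensive_setup():
    """Hypothetical stand-in for per-process setup (imports, data loading)."""
    return list(range(1000))


def workload(data):
    """Hypothetical stand-in for the code actually being benchmarked."""
    return sum(data)


def bench(loops):
    data = expensive_setup()           # overhead: excluded from timing
    t0 = time.perf_counter()
    for _ in range(loops):
        workload(data)
    return time.perf_counter() - t0    # only the workload loop is measured
```

Whether this addresses the specific failure mode here depends on where the overhead actually lives; this is just a sketch of the general technique.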

8bc1312
