Tracking test coverage and performance #90
I know there are services that do the coverage stuff. IDK if any exist for performance.
@astronouth7303 I only know codecov and coveralls. I don't know of any (free-for-open-source) performance-tracking software available as a service. There are self-hostable options, such as Google's Dana (which seems to have been unmaintained for almost a year) or Codespeed, but I assumed it would be preferable not to host services for this. That's why I was suggesting making something we can run directly in CI, which would upload a report (a human-readable one and a machine-readable one) directly.
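For what it's worth, the CI step itself could be fairly small — a rough sketch, assuming coverage.py ≥ 5 (for `coverage json`) and pytest-benchmark; the file names are placeholders rather than a proposed format:

```python
"""Rough sketch of a CI step producing both kinds of report.

Assumes coverage.py >= 5.0 (for ``coverage json``) and pytest-benchmark;
all file names below are placeholders, not a proposed format.
"""
import json
import subprocess

# Run the test suite under coverage, dumping benchmark timings as JSON.
subprocess.run(
    ["coverage", "run", "-m", "pytest", "--benchmark-json=benchmarks.json"],
    check=True,
)

# Machine-readable coverage data.
subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)

# Human-readable report (browsable HTML).
subprocess.run(["coverage", "html", "-d", "htmlcov"], check=True)

# Bundle both machine-readable artifacts into one report for CI to upload.
with open("coverage.json") as cov, open("benchmarks.json") as bench:
    report = {"coverage": json.load(cov), "benchmarks": json.load(bench)}

with open("ci-report.json", "w") as out:
    json.dump(report, out, indent=2)
```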
Problem is, Travis requires a place to upload to. 😛 But yeah, automated CI stuff would be good.
@astronouth7303 We can store the benchmark and coverage results in git notes, since they are small, which would be an extremely convenient way to track which commit they relate to. We would just need a way to push those notes from CI. For human-readable reports, we could just push those to GitHub Pages, or S3, or whatever else is convenient. :3
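Something along these lines, perhaps — a minimal sketch, where `refs/notes/ci` and `ci-report.json` are arbitrary placeholder names, and the CI runner would need credentials to push:

```python
"""Minimal sketch: attach a CI report to the tested commit as a git note.

``refs/notes/ci`` is an arbitrary namespace and ``ci-report.json`` is the
hypothetical machine-readable artifact produced earlier in the CI run.
"""
import subprocess

def attach_note(report_path: str, ref: str = "refs/notes/ci") -> None:
    # Read the machine-readable report produced during the CI run.
    with open(report_path) as f:
        report = f.read()
    # Attach it as a note on HEAD; -f overwrites the note on a CI re-run.
    subprocess.run(
        ["git", "notes", "--ref", ref, "add", "-f", "-m", report, "HEAD"],
        check=True,
    )
    # Notes live under their own ref, so they have to be pushed explicitly.
    subprocess.run(["git", "push", "origin", ref], check=True)

if __name__ == "__main__":
    attach_note("ci-report.json")
```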
@astronouth7303 As I said when I opened the issue, if that seems like a reasonable solution, I can implement that.
Ping?
I suppose the next step is to discuss details? Or just do a quick prototype and be ready to iterate.
Yeah, I was waiting for confirmation that it was a reasonable enough plan before going and prototyping it.
I know we kinda-discussed that in #59, but I thought it would be useful to resume that discussion & track it in a separate issue. (And if you feel it's inappropriate for me to bring it up again, please let me know and close the issue :3)
I think it would be pretty nice to have coverage and performance tracking, if only because we could answer questions like “how bad is the slowdown of #89” or “is this adequately tested” without having to invent a new way to get that data each time.
I totally agree with @pathunstrom that we should minimise the amount of tooling a user has to interact with, so it should happen automatically for them. I'd like to suggest doing it during CI, and automatically posting a message to the PR (if appropriate) with the coverage and performance results; a sketch of that posting step follows.
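Something like this could do the posting, via GitHub's issue-comment endpoint (pull requests accept issue comments); the repository slug, environment variable names, and summary text below are all placeholders, not settled choices:

```python
"""Hedged sketch of the "post a message to the PR" step.

Uses GitHub's issue-comment endpoint (PRs accept issue comments); the
repo slug, environment variables, and summary text are placeholders.
"""
import os
import requests

def post_pr_comment(repo: str, pr_number: int, body: str) -> None:
    # A CI-provided token with permission to comment on the repository.
    token = os.environ["GITHUB_TOKEN"]
    response = requests.post(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        headers={"Authorization": f"token {token}"},
        json={"body": body},
    )
    response.raise_for_status()

if __name__ == "__main__":
    # The real message would be rendered from the machine-readable report.
    summary = "Coverage and benchmark results for this PR would go here."
    post_pr_comment("OWNER/REPO", int(os.environ["PR_NUMBER"]), summary)
```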
I would happily do the tooling & integration work, if there's consensus on it being desirable (and how it should behave). :)