Provide helpers for performance testing #1165
Additionally, it appears the
This is absolutely by design. It allows users to debug through the test code.
That's ~fine, just unexpected from my end. I'm mostly documenting that if we address this performance testing issue, we'd need to either revisit that decision, not leverage the existing testing packages for perf testing, or do something else.
It wouldn't significantly impact benchmarking if it were used. It would really just change the results for sections of code that aren't part of the optimization process.
I'm not sure there are any guidelines or suggestions related to analyzer performance testing like this. I question the usefulness in practice of the test suite you mentioned, as well as its ability to guide analyzer authors in the direction of better-performing analyzers. The reasons for this are super complex, so it's hard to describe in short form.
To my knowledge:
The current state of the art that we use daily is this: when a scenario involving one or more analyzers is observably slow, use PerfView to collect performance information about that scenario, and fix the low-hanging fruit.
One example of a difficult scenario: sometimes analyzers are slow in a specific large project. In practice, we frequently observe that the analyzer spends a significant (often the majority) portion of its time on a very small section of code, e.g. one file. It's also often the case that the slow file isn't the largest file in the project, or really notable in any obvious way. BenchmarkDotNet is not very well suited to locating the slow file among all the rest such that it could be evaluated and/or extracted specifically.
Those are all good points, and in an ideal world we'd cover them all. However, today we have approximately nothing to go on, so I'd assert that even a small doc with "considerations" and some best practices is an improvement worth considering. The point of a benchmarking tool isn't to be a comprehensive regression suite; currently we're asserting that the way to correctly design an analyzer is "just already be an expert". As an author, I currently have no way to know which design is better for any scenario, let alone all scenarios. I'll assert that the existence of https://github.com/dotnet/roslyn-analyzers/tree/main/src/PerformanceTests is an existence proof that BenchmarkDotNet is a useful tool.
Here's a small but concrete example: https://github.com/rjmurillo/moq.analyzers/pull/109/files#diff-5b0caf31d695c71b66dc3a6978f20d0da9738667e2627824b198eefdf86b0ed4 Writing a benchmark seems to require copy/pasting:
in order to write a benchmark. I also tried using the full testing harness but substituting my own

For instance, one step forward (though I don't know if it's the right one or not) would be to expose the CompilationHelpers as public classes, and to build the Verifier classes on top. I don't know if that's feasible or not given the current design.
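To make the copy/paste point concrete, here is a minimal sketch of what a standalone benchmark looks like when the compilation setup has to be inlined by hand. This is an illustration, not the code from the linked PR: `MyAnalyzer` is a placeholder for whatever `DiagnosticAnalyzer` is under test, and the setup body is an assumption about what the duplicated helpers roughly do.

```csharp
using System.Collections.Immutable;
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[MemoryDiagnoser]
public class MyAnalyzerBenchmark
{
    private CompilationWithAnalyzers? _subject;

    [GlobalSetup]
    public void Setup()
    {
        // The kind of setup the testing packages already perform internally,
        // but which a benchmark currently has to duplicate: parse sources,
        // wire up references, and build a Compilation by hand.
        var tree = CSharpSyntaxTree.ParseText("class C { void M() { } }");
        var references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
        };
        var compilation = CSharpCompilation.Create(
            assemblyName: "PerfTestAssembly",
            syntaxTrees: new[] { tree },
            references: references,
            options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // MyAnalyzer is a hypothetical stand-in for the analyzer under test.
        _subject = compilation.WithAnalyzers(
            ImmutableArray.Create<DiagnosticAnalyzer>(new MyAnalyzer()));
    }

    [Benchmark]
    public async Task<ImmutableArray<Diagnostic>> RunAnalyzer()
        => await _subject!.GetAnalyzerDiagnosticsAsync(CancellationToken.None);
}
```

If the CompilationHelpers were exposed publicly, most of the `[GlobalSetup]` body could presumably collapse into a single helper call.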
Add the project `Moq.Analyzers.Benchmarks` with a sample benchmark for diagnostic Moq1300. This pattern is copied from https://github.com/dotnet/roslyn-analyzers/blob/f1115edce8633ebe03a86191bc05c6969ed9a821/src/PerformanceTests/Tests/Enabled/CSharp_CA1416.cs, but where possible I used the types exposed by the Microsoft.CodeAnalysis.*.Testing libraries instead. As part of the exercise, I filed dotnet/roslyn-sdk#1165 to discuss whether some of these helpers should move into the testing packages, and/or to document any existing profiling best practices.
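For context, a benchmark that leans on the public testing-package types (rather than copied internals) might look roughly like the sketch below. `MyAnalyzer` again stands in for the analyzer behind Moq1300, and the harness configuration is an assumption rather than a copy of the PR's code.

```csharp
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;

[MemoryDiagnoser]
public class Moq1300Benchmark
{
    [Benchmark]
    public async Task AnalyzeWithTestingHarness()
    {
        // CSharpAnalyzerTest drives the analyzer end to end, but it also
        // performs verification and setup work that isn't part of the analyzer
        // itself, so the measured time includes more than pure analysis.
        var test = new CSharpAnalyzerTest<MyAnalyzer, DefaultVerifier> // MyAnalyzer is hypothetical
        {
            TestCode = "class C { void M() { } }",
        };

        await test.RunAsync(CancellationToken.None);
    }
}
```

That extra harness work relates to the discussion above about how much of the testing infrastructure actually belongs inside a benchmark.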
The roslyn-analyzers repo has a set of helpers to make performance testing of analyzers using BenchmarkDotNet easier. See https://github.com/dotnet/roslyn-analyzers/tree/main/src/PerformanceTests/Utilities/Common
It would be nice if we provided: