
Provide helpers for performance testing #1165

Open
MattKotsenas opened this issue Jun 20, 2024 · 9 comments
Labels: Area-MS.CA.Testing (Microsoft.CodeAnalysis.Testing)

Comments

@MattKotsenas (Member)

The roslyn-analyzers repo has a set of helpers to make performance testing of analyzers using BenchmarkDotNet easier. See https://github.com/dotnet/roslyn-analyzers/tree/main/src/PerformanceTests/Utilities/Common

It would be nice if we provided:

  1. A set of guidance on how to write benchmarks for analyzers that don't run afoul of compilation caching and don't measure project setup itself (a rough sketch of such a benchmark follows this list)
  2. Where appropriate, the necessary helpers in the Microsoft.CodeAnalysis.Testing.* packages, to avoid copy/pasting code.
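
To make item 1 concrete, here's a minimal sketch of what such a benchmark might look like (`MyAnalyzer` and `MyAnalyzerBenchmarks` are hypothetical placeholders, not types from the testing packages): the `Compilation` is built once in `[GlobalSetup]` so project setup isn't measured, and a fresh `CompilationWithAnalyzers` is created per invocation so its internal result cache isn't what gets measured.

```csharp
using System.Collections.Immutable;
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Diagnostics;

[MemoryDiagnoser]
public class MyAnalyzerBenchmarks
{
    private CSharpCompilation _compilation = null!;

    [GlobalSetup]
    public void Setup()
    {
        // Pay the compilation/project setup cost once, outside the measured region.
        _compilation = CSharpCompilation.Create(
            "Benchmark",
            new[] { CSharpSyntaxTree.ParseText("class C { void M() { } }") },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));
    }

    [Benchmark]
    public async Task<ImmutableArray<Diagnostic>> RunAnalyzer()
    {
        // Create a fresh CompilationWithAnalyzers per invocation so cached
        // analysis results from a previous iteration aren't what gets measured.
        // MyAnalyzer is a placeholder for the analyzer under test.
        var withAnalyzers = _compilation.WithAnalyzers(
            ImmutableArray.Create<DiagnosticAnalyzer>(new MyAnalyzer()));
        return await withAnalyzers.GetAnalyzerDiagnosticsAsync(CancellationToken.None);
    }
}
```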
@MattKotsenas (Member, Author)

Additionally, it appears the Microsoft.CodeAnalysis.Analyzer.Testing libraries are released as DEBUG builds instead of RELEASE, which trips up BenchmarkDotNet:

// Validating benchmarks:
//    * Assembly MyAssembly which defines benchmarks references non-optimized Microsoft.CodeAnalysis.Analyzer.Testing
        If you own this dependency, please, build it in RELEASE.
        If you don't, you can disable this policy by using 'config.WithOptions(ConfigOptions.DisableOptimizationsValidator)'.
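
If you're willing to benchmark against the non-optimized testing assemblies, the validator can be disabled exactly as the message suggests; a minimal sketch (reusing the hypothetical benchmark class from the sketch above):

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Running;

// Accept the non-optimized dependency by disabling
// BenchmarkDotNet's optimizations validator.
var config = DefaultConfig.Instance
    .WithOptions(ConfigOptions.DisableOptimizationsValidator);

BenchmarkRunner.Run<MyAnalyzerBenchmarks>(config);
```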

@sharwell (Member)

> it appears the Microsoft.CodeAnalysis.Analyzer.Testing libraries are released as DEBUG builds instead of RELEASE

This is absolutely by design. It allows users to debug through the test code.

@MattKotsenas (Member, Author) commented Jun 21, 2024

That's ~fine, just unexpected from my end.

I'm mostly documenting that if we address this performance testing issue, we'd need to revisit that decision, avoid leveraging the existing testing packages for perf testing, or find some other approach.

@sharwell (Member) commented Jun 21, 2024

It wouldn't significantly impact benchmarking if it were used. It would really just change the results for sections of code that aren't part of the optimization process.

@sharwell (Member) commented Jun 21, 2024

I'm not sure there are any guidelines or suggestions related to analyzer performance testing like this. I question the usefulness in practice of the test suite you mentioned, as well as its ability to guide analyzer authors in the direction of better-performing analyzers. The reasons for this are super complex, so it's hard to describe them in short form.

@sharwell (Member)

To my knowledge:

  1. There are no documented procedures for testing the impact of changes on the scoped code analysis performed in the IDE
  2. There are no documented procedures for testing the impact of incremental code analysis for the purpose of executing code fixes in the IDE
  3. There are no documented procedures for testing the impact of analyzers on dotnet-format, etc.
  4. There are no documented procedures for testing the impact of analyzers on the build.
  5. There are no documented procedures for analyzing performance results in the context of costs that are shared across multiple analyzers.

The current state of the art that we use daily is: when a scenario involving one or more analyzers is observably slow, use PerfView to collect performance information about that scenario, and fix the low-hanging fruit.

@sharwell (Member)

One example of a difficult scenario: sometimes analyzers are slow in a specific large project. In practice, we frequently observe that the analyzer spends a significant portion of its time (often the majority) on a very small section of code, e.g. one file. It's also often the case that the slow file isn't the largest file in the project, or really notable in any obvious way. BenchmarkDotNet is not well suited to locating the slow file among all the rest so that it could be evaluated and/or extracted specifically.

@MattKotsenas (Member, Author)

Those are all good points, and in an ideal world we'd cover them all. However, today we have approximately nothing to go on, so I'd assert that even a small doc with "considerations" and some best practices is an improvement worth considering.

The point of a benchmarking tool isn't to be a comprehensive regression suite; currently we're asserting that the way to correctly design an analyzer is "just already be an expert". As an author, I currently have no way to know which design is better for any scenario, let alone all scenarios. I'll assert that the existence of https://github.com/dotnet/roslyn-analyzers/tree/main/src/PerformanceTests is an existence proof that BenchmarkDotNet is a useful tool.

@MattKotsenas (Member, Author)

Here's a small but concrete example: https://github.com/rjmurillo/moq.analyzers/pull/109/files#diff-5b0caf31d695c71b66dc3a6978f20d0da9738667e2627824b198eefdf86b0ed4

Writing a benchmark seems to require copy/pasting:

  • CSharpCompilationHelper
  • CompilationHelper
  • ExportProviderExtensions

I also tried using the full testing harness but substituting my own EmptyVerifier for the DefaultVerifier (see rjmurillo/moq.analyzers@55bb099); however, that was ~2x slower than the copy/paste approach (for both baseline and test case), which then makes my analyzer look better than it actually is (as a percentage of the baseline).

For instance, one step forward (though I don't know whether it's the right one) would be to expose the CompilationHelpers as public classes and build the Verifier classes on top. I don't know if that's feasible given the current design.
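
For illustration only, here's a rough sketch of what such a public helper might look like, loosely modeled on the internal CompilationHelper in roslyn-analyzers; none of these names or signatures exist in the shipped Microsoft.CodeAnalysis.Testing packages:

```csharp
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

// Hypothetical public surface; not an existing Microsoft.CodeAnalysis.Testing API.
public static class CSharpCompilationHelper
{
    // Builds a Compilation from in-memory sources once, so a benchmark can
    // measure analyzer execution in isolation from project setup.
    public static async Task<Compilation?> CreateAsync(
        params (string FileName, string Content)[] sources)
    {
        using var workspace = new AdhocWorkspace();
        var projectId = ProjectId.CreateNewId();
        var solution = workspace.CurrentSolution
            .AddProject(projectId, "TestProject", "TestProject", LanguageNames.CSharp)
            .AddMetadataReference(
                projectId,
                MetadataReference.CreateFromFile(typeof(object).Assembly.Location));

        foreach (var (fileName, content) in sources)
        {
            solution = solution.AddDocument(
                DocumentId.CreateNewId(projectId), fileName, SourceText.From(content));
        }

        return await solution.GetProject(projectId)!.GetCompilationAsync();
    }
}
```

A benchmark could then build the compilation once in [GlobalSetup] via a helper like this, while the existing Verifier-based APIs stay focused on correctness testing.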

MattKotsenas added a commit to rjmurillo/moq.analyzers that referenced this issue Jun 21, 2024
Add the project `Moq.Analyzers.Benchmarks` with a sample benchmark for
diagnostic Moq1300. This pattern is copied from
https://github.com/dotnet/roslyn-analyzers/blob/f1115edce8633ebe03a86191bc05c6969ed9a821/src/PerformanceTests/Tests/Enabled/CSharp_CA1416.cs,
but where possible I used the types exposed by the
Microsoft.CodeAnalysis.*.Testing libraries instead.

As part of the exercise, I filed
dotnet/roslyn-sdk#1165 to discuss whether some of
these helpers should move into the testing packages, and/or to
document any existing profiling best practices.
@sharwell added the Area-MS.CA.Testing (Microsoft.CodeAnalysis.Testing) label Jul 22, 2024