fix: Do not compute `prettify_macro_expansion()` unless the "Inline macro" assist has actually been invoked
#18900
Conversation
And not just called to be listed. This was a major performance hang when repeatedly switching back and forth between a large `include!`d file (but there are others).
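The idea behind the fix can be sketched generically: listing assists should only do cheap availability checks, while the expensive work (here, prettifying the macro expansion) is deferred into a closure that runs only when the assist is actually invoked. A minimal self-contained sketch, where `Assist` and `expensive_prettify` are hypothetical stand-ins and not rust-analyzer's actual API:

```rust
use std::cell::Cell;

// Hypothetical stand-in for an expensive computation such as
// prettify_macro_expansion(); counts how often it actually runs.
fn expensive_prettify(input: &str, calls: &Cell<u32>) -> String {
    calls.set(calls.get() + 1);
    input.to_uppercase() // placeholder for real work
}

// A listed assist: a cheap label plus a deferred action.
struct Assist<'a> {
    label: &'static str,
    apply: Box<dyn Fn() -> String + 'a>,
}

fn list_assists<'a>(source: &'a str, calls: &'a Cell<u32>) -> Vec<Assist<'a>> {
    // Listing does only cheap checks; the expensive work lives in the closure.
    vec![Assist {
        label: "Inline macro",
        apply: Box::new(move || expensive_prettify(source, calls)),
    }]
}

fn main() {
    let calls = Cell::new(0);
    let assists = list_assists("foo!()", &calls);
    // Listing alone must not trigger the expensive computation.
    assert_eq!(calls.get(), 0);
    // Only invoking the assist runs it.
    let edited = (assists[0].apply)();
    assert_eq!(edited, "FOO!()");
    assert_eq!(calls.get(), 1);
    println!("{}: expensive computation ran {} time(s)", assists[0].label, calls.get());
}
```

The key point is that the cost of building the edit is paid once per invocation, not once per listing, which matters when listing happens on every cursor move.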
👍 we should introduce a bit of caching for diagnostics somewhere at some point.
Also 👍 a lot of assists do the same upfront work that could likely be shared in some capacity.
```rust
let expanded = ctx.sema.parse_or_expand(macro_call.as_file());
let span_map = ctx.sema.db.expansion_span_map(macro_call.as_macro_file());
```
We can also pull these two in
Let's close that and open an issue with the points you raised here as a replacement (we could also link back to that issue there).
Merging this and kicking off another nightly, as running into this is quite bad.
I'll include that in my next PR.
And not just called to be listed.

This was a major performance hang when repeatedly switching back and forth between a large `include!`d file, a.k.a. #18879 (but there are others). This does not fully fix that issue: it was a major hurdle hiding everything else in the profiler, but now that it's fixed, other things show up (and it's still slow). To categorize the main reasons that remain, as analyzed by my profiler:
We have a lot of diagnostics, influenced mainly by two salsa queries, `body_with_source_map()` and `borrowck()`. These two are LRU'd, so of course they get evicted with such large files containing lots of small bodies. If I didn't let the file analyze fully before starting my back-and-forth, we also have a lot of calls to `infer()`, half of them blocking on the others. This is also expected, since we cancel them when we switch out of the file, so they never complete and never get cached; they only consume CPU.

There is an important observation here, though: diagnostics are computed twice, once for assists and once for, well, diagnostics. Since some diagnostics are not fully cached in the db, this may cause an actual slowdown for real users. We probably want to fix this, perhaps by caching the diagnostics of the last file.
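One way to "cache the diagnostics of the last file" is a single-entry cache keyed by file and revision, so the assists pass and the diagnostics pass share one computation. A minimal sketch under assumed names (`FileId`, `compute_diagnostics`, and the revision key are illustrative, not rust-analyzer's real types):

```rust
use std::cell::RefCell;

type FileId = u32;

// Hypothetical expensive diagnostics computation.
fn compute_diagnostics(file: FileId, text: &str) -> Vec<String> {
    if text.contains("todo!") {
        vec![format!("file {file}: `todo!` left in code")]
    } else {
        vec![]
    }
}

// Single-entry cache: remembers only the last (file, revision) it computed.
struct LastFileDiagnostics {
    cache: RefCell<Option<((FileId, u64), Vec<String>)>>,
}

impl LastFileDiagnostics {
    fn new() -> Self {
        Self { cache: RefCell::new(None) }
    }

    fn get(&self, file: FileId, revision: u64, text: &str) -> Vec<String> {
        let key = (file, revision);
        if let Some((cached_key, diags)) = &*self.cache.borrow() {
            if *cached_key == key {
                // Second caller (assists or diagnostics) reuses the result.
                return diags.clone();
            }
        }
        let diags = compute_diagnostics(file, text);
        *self.cache.borrow_mut() = Some((key, diags.clone()));
        diags
    }
}

fn main() {
    let cache = LastFileDiagnostics::new();
    // First pass (e.g. diagnostics) computes; second pass (assists) reuses it.
    let a = cache.get(1, 7, "fn f() { todo!() }");
    let b = cache.get(1, 7, "fn f() { todo!() }");
    assert_eq!(a, b);
    assert_eq!(a.len(), 1);
    // Editing the file bumps the revision and invalidates the entry.
    let c = cache.get(1, 8, "fn f() {}");
    assert!(c.is_empty());
}
```

Keeping only the most recent entry sidesteps the eviction problem described above for a memory cost of one file's diagnostics.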
There is also a fair share of semantic highlighting, which is also fair, given that a lot of its work isn't (and couldn't be) cached in the db. Another large part is assists, which tend to execute a lot of code when being listed, to decide whether they are available or not. Perhaps we could improve on this metric too.
Overall, I'm tending to close #18879 as "won't fix" (oh no GitHub, don't close it!), given this is a very atypical scenario. But perhaps there are still a few points we can improve on, guided by the information from that issue.