
Collaborator

@bylaws bylaws commented Dec 29, 2025

A significant proportion of time with the prior approach was spent in the unbuffered write calls issued once per code page/entrypoint, which took upwards of 15 seconds to serialize a 700MB P3R cache. Switching to an mmap approach reduces the CPU time there to under a second, leaving I/O to dominate.

A two-pass approach is used:

  1. Calculate the required size for the cache file, issue a callback to frontend to map a cache file of that size.
  2. Write cache data into the buffer provided by that callback.
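The two passes above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `Entry`, `RequiredSize`, and `WriteCache` are made-up names, and the length-prefix layout is an assumption.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for one cache entry (code page/entrypoint payload).
struct Entry { std::vector<unsigned char> code; };

// Pass 1: walk the cache once to compute the total byte count the
// frontend should map (here, a size_t length prefix plus payload per entry).
size_t RequiredSize(const std::vector<Entry>& entries) {
    size_t total = 0;
    for (const auto& e : entries)
        total += sizeof(size_t) + e.code.size();
    return total;
}

// Pass 2: write directly into the buffer the frontend mapped,
// avoiding one write() syscall per entry.
void WriteCache(const std::vector<Entry>& entries, unsigned char* buf) {
    for (const auto& e : entries) {
        const size_t n = e.code.size();
        std::memcpy(buf, &n, sizeof(n));
        buf += sizeof(n);
        std::memcpy(buf, e.code.data(), n);
        buf += n;
    }
}
```

The key property is that pass 1 and pass 2 must agree exactly on the byte count, since the mapped file cannot grow afterwards.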

CC: @neobrain, this needs Linux testing

@bylaws bylaws marked this pull request as draft December 29, 2025 18:12
@neobrain
Member

Sorry, but `Serialize.template operator()<false>()` is an instant red flag for me :)

But yeah, good find - the core issue here is the number of small-size writes that became necessary after the recent changes to the `LookupCache` data structures. I would much prefer simple buffering (`std::vector`) here, since it doesn't involve nested template lambda expressions or a funky mmapping callback. Using mmap over write plays a negligible role, so I expect equivalent performance here.
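The buffering alternative suggested here could look something like the sketch below: accumulate the many small serialized pieces in a `std::vector` and flush them with a single write call. `BufferedWriter` is an illustrative name, not anything from the PR.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of std::vector-based write buffering.
struct BufferedWriter {
    std::vector<unsigned char> buf;

    // Replaces each small unbuffered write with an in-memory append.
    void Write(const void* data, size_t n) {
        const auto* p = static_cast<const unsigned char*>(data);
        buf.insert(buf.end(), p, p + n);
    }

    // A real Flush() would issue one write(fd, buf.data(), buf.size()).
    size_t PendingBytes() const { return buf.size(); }
};
```

The trade-off under discussion: this keeps a full second copy of the cache in memory until the flush, whereas the mmap approach writes into the file's pages directly.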

Collaborator Author

bylaws commented Jan 12, 2026

> Sorry, but `Serialize.template operator()<false>()` is an instant red flag for me :)
>
> But yeah, good find - the core issue here is the number of small-size writes that became necessary after the recent changes to the `LookupCache` data structures. I would much prefer simple buffering (`std::vector`) here, since it doesn't involve nested template lambda expressions or a funky mmapping callback. Using mmap over write plays a negligible role, so I expect equivalent performance here.

Hmm, I was wanting to avoid doubling the memory usage for no reason, especially when mmap is basically free. We could buffer some of the writes but not the code buffer write, but mmap felt like a nicer catch-all, future-proof solution. I could split the lambda out into a separate function? The nested lambda isn't inherent to the approach; the main reason for the templates is that I thought it was kinda cool to reuse the same expression to calculate the size as to write the data out. A templated static function could do exactly the same.
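The "same expression for size and write" idea could be expressed as a templated static function along these lines; a compile-time bool selects between the measure pass and the write pass, so the two can never drift apart. This is a hypothetical illustration with made-up names (`Entry`, `SerializeEntry`), not the PR's actual code.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for one cache entry.
struct Entry { std::vector<unsigned char> code; };

// Measure == true: only compute the size. Measure == false: also emit the
// bytes into `out`. Both passes share the single size expression below.
template <bool Measure>
static size_t SerializeEntry(const Entry& e, unsigned char* out) {
    const size_t n = e.code.size();
    if constexpr (!Measure) {
        std::memcpy(out, &n, sizeof(n));
        std::memcpy(out + sizeof(n), e.code.data(), n);
    }
    return sizeof(n) + n;  // same size expression in both passes
}
```

Pass 1 would call `SerializeEntry<true>` to size the mapping; pass 2 calls `SerializeEntry<false>` with the mapped buffer.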

With that, I suppose the only remaining concern would be the mapping callback, which I feel isn't that obtuse? Up to you, but when it allows a technically better solution that's easier to extend for future cache changes, I lean towards keeping it. (It also means running out of disk space is handled safely, which is kinda nice.)

