Cache all cids in memory in pruner #175
base: master
Conversation
For performance, keep a list of CIDs in memory.
Codecov Report
Base: 5.43% // Head: 5.31% // Decreases project coverage by -0.12%
@@            Coverage Diff             @@
##           master     #175      +/-   ##
==========================================
- Coverage    5.43%    5.31%    -0.12%
==========================================
  Files          14       14
  Lines        1639     1674       +35
==========================================
  Hits           89       89
- Misses       1545     1580       +35
  Partials        5        5
I still hate this file-writing business. I reckon now it'd be more efficient, and safe, to just iterate over the CID map and roll the dice on each one, with some probability of 1% or something small. Delete when they lose the dice roll and aren't pinned, stop iterating if we reach our threshold, or iterate again if we haven't (maybe with some safety around it like don't loop more than 20 times). Go's unstable map iteration ordering even helps a bit here.
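A rough sketch of what that sampling loop could look like; every identifier here (the cids set, isPinned, deleteBlock, bytesToFree) is a hypothetical stand-in, not code from this PR:

```go
// Hypothetical sketch of the "dice roll" pruning approach described above.
// Assumes an in-memory CID set plus isPinned/deleteBlock helpers and a
// bytesToFree target; none of these are the PR's real identifiers.
package pruner

import (
	"math/rand"

	"github.com/ipfs/go-cid"
)

const (
	sampleProbability = 0.01 // roll the dice on roughly 1% of keys per pass
	maxPasses         = 20   // safety bound so we never loop forever
)

func pruneRandomSample(
	cids map[cid.Cid]struct{},
	isPinned func(cid.Cid) bool,
	deleteBlock func(cid.Cid) (uint64, error),
	bytesToFree uint64,
) error {
	var freed uint64
	for pass := 0; pass < maxPasses && freed < bytesToFree; pass++ {
		// Go's map iteration order is unspecified, so each pass effectively
		// starts from a different place in the key set.
		for c := range cids {
			if freed >= bytesToFree {
				break
			}
			if rand.Float64() >= sampleProbability {
				continue // this key won its dice roll, keep it
			}
			if isPinned(c) {
				continue // never delete pinned blocks
			}
			n, err := deleteBlock(c)
			if err != nil {
				return err
			}
			delete(cids, c) // deleting during range iteration is safe in Go
			freed += n
		}
	}
	return nil
}
```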
I didn't take the effort to estimate how much memory the CIDs would take up when I wrote this; in my head it sounded big. But you're right, with a blockstore target size of around 100GiB it seems fairly negligible, and a server that wants a bigger cache should have plenty of memory available too. If we do keep all the CIDs in memory, it opens the door for a much less dumb pruner solution, e.g. FIFO (a rough idea is sketched after these comments). It doesn't need to be implemented now, but probably should be at some point.
And I think @rvagg is right, the file I/O should definitely go away now that the keys are already in memory.
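For the FIFO idea floated above, a minimal sketch of what such a pruner could look like once all CIDs live in memory; fifoPruner, Track, isPinned and deleteBlock are illustrative names only, not anything implemented in this PR:

```go
// Hypothetical FIFO-style pruner sketch: track insertion order alongside the
// in-memory set and evict the oldest unpinned blocks first.
package pruner

import "github.com/ipfs/go-cid"

type fifoPruner struct {
	order   []cid.Cid            // CIDs in insertion order, oldest first
	present map[cid.Cid]struct{} // membership set, mirrors the blockstore
}

func newFifoPruner() *fifoPruner {
	return &fifoPruner{present: make(map[cid.Cid]struct{})}
}

// Track records a newly stored block so it can be considered for pruning later.
func (p *fifoPruner) Track(c cid.Cid) {
	if _, ok := p.present[c]; !ok {
		p.present[c] = struct{}{}
		p.order = append(p.order, c)
	}
}

// Prune walks oldest-first, deleting unpinned blocks until roughly
// bytesToFree bytes have been reclaimed.
func (p *fifoPruner) Prune(
	bytesToFree uint64,
	isPinned func(cid.Cid) bool,
	deleteBlock func(cid.Cid) (uint64, error),
) error {
	kept := make([]cid.Cid, 0, len(p.order))
	var freed uint64
	for i, c := range p.order {
		if freed >= bytesToFree {
			kept = append(kept, p.order[i:]...)
			break
		}
		if isPinned(c) {
			kept = append(kept, c)
			continue
		}
		n, err := deleteBlock(c)
		if err != nil {
			p.order = append(kept, p.order[i:]...)
			return err
		}
		delete(p.present, c)
		freed += n
	}
	p.order = kept
	return nil
}
```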
Goals
Performance testing indicates the pruner is still a huge performance bottleneck, and the hot spot is reading the blockstore's all-keys channel.
By our calculations, we can keep a list of all keys in memory without paying much of a memory penalty, and thus avoid this bottleneck.
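As a rough illustration of the idea (not the code in this PR), the key set could be read once at startup instead of re-streaming every key on each prune pass, assuming the blockstore exposes the standard go-ipfs-blockstore interface:

```go
// Minimal sketch, assuming Blockstore.AllKeysChan from
// github.com/ipfs/go-ipfs-blockstore: seed an in-memory CID set once at
// startup so later prune passes never have to stream all keys again.
package pruner

import (
	"context"

	"github.com/ipfs/go-cid"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
)

func loadAllKeys(ctx context.Context, bs blockstore.Blockstore) (map[cid.Cid]struct{}, error) {
	ch, err := bs.AllKeysChan(ctx)
	if err != nil {
		return nil, err
	}
	keys := make(map[cid.Cid]struct{})
	for c := range ch {
		keys[c] = struct{}{}
	}
	return keys, nil
}
```

After seeding, the set would also need to be kept in sync with subsequent puts and deletes on the blockstore.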
Implementation
For discussion
This is complicated enough that I want to write tests before anyone merges it, but I want folks to review the approach now.