Support for user-defined cost in cache classes #982
Conversation
At the cache level, "size" is renamed to "cost", representing how much a cache can store (and how much an item costs). "size" is preserved for the count of items. At higher levels, we keep the "size" semantics, as we are speaking about the size of the cache, whatever that is. This is the first step toward a cache limited by memory usage.
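The split between the item count ("size") and the user-defined capacity measure ("cost") can be sketched as follows. This is a minimal illustrative cache, not libzim's actual lru_cache; the class and member names are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Minimal sketch of a cost-limited LRU cache (illustrative only; not
// libzim's actual implementation). "cost" is how much an item counts
// against the cache's capacity; "size" remains the number of items.
template <typename Key, typename Value>
class CostLruCache {
public:
  explicit CostLruCache(size_t maxCost) : maxCost_(maxCost), currentCost_(0) {}

  // Insert (or refresh) an item with an explicit, user-defined cost.
  void put(const Key& key, const Value& value, size_t cost) {
    auto it = index_.find(key);
    if (it != index_.end()) {
      currentCost_ -= it->second->cost;
      items_.erase(it->second);
      index_.erase(it);
    }
    items_.push_front(Entry{key, value, cost});
    index_[key] = items_.begin();
    currentCost_ += cost;
    evictIfNeeded();
  }

  const Value* get(const Key& key) {
    auto it = index_.find(key);
    if (it == index_.end()) return nullptr;
    items_.splice(items_.begin(), items_, it->second); // move to MRU position
    return &it->second->value;
  }

  size_t size() const { return items_.size(); }  // count of items
  size_t cost() const { return currentCost_; }   // total stored cost

private:
  struct Entry { Key key; Value value; size_t cost; };

  // Evict least-recently-used items until the total cost fits.
  void evictIfNeeded() {
    while (currentCost_ > maxCost_ && !items_.empty()) {
      const Entry& victim = items_.back();
      currentCost_ -= victim.cost;
      index_.erase(victim.key);
      items_.pop_back();
    }
  }

  size_t maxCost_;
  size_t currentCost_;
  std::list<Entry> items_;
  std::unordered_map<Key, typename std::list<Entry>::iterator> index_;
};
```

With a byte-count cost function this becomes a memory-bounded cache, which is the direction this PR prepares for.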
... and enhanced two of the lru_cache unittests with log_debug-assisted testing.
The changes in the unit tests are better understood if whitespace changes are ignored (git diff -w).
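The "log_debug-assisted testing" idea can be sketched as follows (a self-contained illustration, not libzim's actual log_debug/zim::Logging machinery; all names here are hypothetical): code under test writes trace lines into an in-memory buffer, and the test asserts the exact transcript, which makes operation ordering checkable.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// In-memory log buffer standing in for the real logging facility.
std::ostringstream g_log;

// A traced operation: each step appends a line to the in-memory log,
// mimicking the "lru_cache::getOrPut(5) { ... }" transcripts seen in
// the failing test output above.
void traceGetOrPut(int key) {
  g_log << "lru_cache::getOrPut(" << key << ") {\n";
  g_log << "  already in cache, moved to the beginning of the LRU list.\n";
  g_log << "}\n";
}
```

A test then compares the captured transcript against an expected one, exactly as the assertion on zim::Logging::getInMemLogContent() does in the real test.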
This test, too, fails intermittently (likely due to the suspected but not yet tracked-down race condition in the logging and/or named-thread code). This time, however, it fails either with a deadlock or with obvious memory corruption like below:
[ RUN ] ConcurrentCacheMultithreadedTest.accessSameExistingItem
../../../../home/leon/freelancing/kiwix/builddir/SOURCE/libzim/test/concurrentcache.cpp:429: Failure
Expected equality of these values:
zim::Logging::getInMemLogContent()
Which is: "thread#0: Output of interest starts from the next line\na : ConcurrentCache::getOrPut(5) {\n b: ConcurrentCache::getOrPut(5) {\na : ConcurrentCache::getCacheSlot(5) {\na : entered synchronized section\n b: ConcurrentCache::getCacheSlot(5) {\na : lru_cache::getOrPut(5) {\na : already in cache, moved to the beginning of the LRU list.\na : }\na : exiting synchronized section\na : }\n b: entered synchronized section\na : Obtained the cache slot\n b: lru_cache::getOrPut(5) {\na : } (return value: 50)\n\0 b: already in cache, moved to the beginning of the LRU list.\n b: }\n b: exiting synchronized section\n b: }\n b: Obtained the cache slot\n b: } (return value: 50)\n\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 "
targetOutput
Which is: "thread#0: Output of interest starts from the next line\na : ConcurrentCache::getOrPut(5) {\n b: ConcurrentCache::getOrPut(5) {\na : ConcurrentCache::getCacheSlot(5) {\na : entered synchronized section\n b: ConcurrentCache::getCacheSlot(5) {\na : lru_cache::getOrPut(5) {\na : already in cache, moved to the beginning of the LRU list.\na : }\na : exiting synchronized section\na : }\n b: entered synchronized section\na : Obtained the cache slot\n b: lru_cache::getOrPut(5) {\na : } (return value: 50)\n b: already in cache, moved to the beginning of the LRU list.\n b: }\n b: exiting synchronized section\n b: }\n b: Obtained the cache slot\n b: } (return value: 50)\n"
With diff:
@@ -14,5 +14,5 @@
b: lru_cache::getOrPut(5) {
a : } (return value: 50)
-\0 b: already in cache, moved to the beginning of the LRU list.
+ b: already in cache, moved to the beginning of the LRU list.
b: }
b: exiting synchronized section
@@ -19,4 +19,3 @@
b: }
b: Obtained the cache slot
- b: } (return value: 50)
-\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
+ b: } (return value: 50)\n
[ FAILED ] ConcurrentCacheMultithreadedTest.accessSameExistingItem (2 ms)
If an item that is in the process of being added to the cache is requested again within the time window when it has been temporarily evicted (as a result of concurrent turmoil), a concurrent evaluation of the same item is started while the previous one is still in progress.
Now an item being added to the cache is never evicted due to concurrent turmoil: the LRU eviction heuristic was changed to evict only items with non-zero cost.
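The changed eviction heuristic can be sketched as follows (illustrative only, not libzim's actual code): an in-progress item is held in the LRU list with a cost of zero until its real cost is known, and the eviction loop skips zero-cost slots, so an item still being materialized can never be evicted by concurrent turmoil.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Hypothetical cache slot: cost == 0 means the item is still being
// computed and must not be evicted.
struct Slot { int key; size_t cost; };

// Evict from the LRU end (back of the list) until the total cost fits,
// skipping zero-cost (in-progress) slots. Returns the eviction count.
size_t evictUntilFits(std::list<Slot>& lru, size_t& total, size_t maxCost) {
  size_t evicted = 0;
  auto it = lru.end();
  while (it != lru.begin() && total > maxCost) {
    --it;                        // step toward the front (MRU end)
    if (it->cost == 0) continue; // never evict an in-progress item
    total -= it->cost;
    it = lru.erase(it);          // erase() returns the next forward element
    ++evicted;
  }
  return evicted;
}
```

The key property is that a zero-cost slot is simply stepped over, so the in-progress item keeps its place in the list while other items are reclaimed around it.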
Codecov Report: ❌ Patch coverage is …
@@ Coverage Diff @@
## main #982 +/- ##
==========================================
- Coverage 57.97% 57.65% -0.32%
==========================================
Files 103 103
Lines 4850 4922 +72
Branches 2018 2066 +48
==========================================
+ Hits 2812 2838 +26
- Misses 704 706 +2
- Partials 1334 1378 +44
There seems to be a bug in the concurrency orchestration of …
@veloman-yunkan OK, so I guess this is something to fix before merging?
@kelson42 Yes, I found the bug and filed it as #983. Will fix it tomorrow.
@veloman-yunkan Considering you will treat the issue separately, does that mean this PR is ready?
@kelson42 Yes, it is.
Bug-fix is ready (#984).
This PR introduces support for user-defined cost in the lru_cache and ConcurrentCache classes, as an enhancement enabling #947 to be fixed. It was extracted from #960, which in turn was based on #956.