Linear search when releasing a sample scales very poorly #2221
Took another look. There doesn't seem to be any issue with removing a chunk when we know its index in the `UsedChunkList`.
Yes, so after my first quick pass I'm thinking we can update the APIs in the following way to achieve the goal of eliminating this linear search.
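Something along these lines (a sketch of the interface change only; illustrative names, not the actual iceoryx signatures):

```cpp
#include <cstdint>
#include <optional>

// Sketch of the interface change only (illustrative names, not the actual
// iceoryx signatures). The internal list layout stays as-is; the point is
// that insert() hands the chunk's slot index back to the caller, and
// remove() takes that index instead of searching for the chunk header.
template <typename SharedChunk, uint32_t Capacity>
class UsedChunkListSketch
{
  public:
    // Returns the index of the slot the chunk was stored in, or nullopt if
    // the list is full.
    std::optional<uint32_t> insert(const SharedChunk& chunk);

    // Releases the chunk stored at 'index' -- no linear search over the
    // used list to find the matching chunk header.
    bool remove(uint32_t index, SharedChunk& chunkOut);
};
```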
@gpalmer-latai This approach could work for the good case, but the bad case still has some problems. When an application crashes and someone has to iterate once over 100,000 chunks to clean them up manually, you will see a hiccup in the system where suddenly, due to a crash, the runtime increases. This may be unacceptable for a real-time system and for a safety-critical one (freedom from interference). The point I am trying to make is that we can for sure apply your solution, but in my opinion it may solve a problem that should not exist in this current form. I do not doubt the use case, but rather the messaging pattern used (publish-subscribe). Could you tell us a bit more about the setup, especially:
In iceoryx2 we have also planned messaging patterns like pipeline & blackboard, and those could easily be implemented manually in iceoryx since all the right building blocks are already here.
This isn't relevant as the data structure is not meant to be used concurrently. It is even explicitly NOT thread safe. It exists in shared memory only to allow RouDi to clean up when the subscriber process crashes.
Not so. While RouDi will not have the index, it doesn't really matter. RouDi doesn't need to release the samples in any particular order, so it can simply iterate forward through the list, releasing everything with O(n) time complexity. For 100,000 samples this is probably just a ~10ms operation. The problem described in this issue has to do with random-access time complexity. But the solution is the same as it is for a …
As for our setup.
As a mitigation, we are already exploring batching to reduce publisher sample count, but that can only help to an extent and is not always possible. It is also worth noting that we can't rely on our pattern of access here to solve the problem. For example, when we drain samples oldest-first, we hit the worst-case scenario because the oldest samples are at the back of the used chunk list. You might solve this by somehow allowing reverse iteration of the list, but we will still have other release patterns with relatively new and in-the-middle samples.
But it is possible that I got this wrong, since I am not so familiar with the source code anymore. @elBoberido I think you are the one who implemented it; maybe you can shine some light on it. Addendum: I was wrong, the list must just be synced with RouDi.
No worries 🙂 I just took a closer look and there is a slight complication to the approach I outlined. In true forward-list fashion, removal does unfortunately require knowing the index of the previous element.
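A self-contained sketch of the problem (index-based layout, as for a list living in shared memory; illustrative names):

```cpp
#include <cstdint>

// Why singly linked removal needs the previous element: unlinking 'current'
// means rewriting the 'next' index of the node before it. Illustrative
// index-based layout, as used for lists living in shared memory.
struct ForwardIndexList
{
    static constexpr uint32_t CAPACITY{8};
    static constexpr uint32_t INVALID{CAPACITY};

    uint32_t head{INVALID};
    uint32_t next[CAPACITY]{}; // next[i] is the index of the node after i

    // O(1) unlink, but only if the caller already knows 'previous'.
    void remove(const uint32_t previous, const uint32_t current)
    {
        if (previous == INVALID) // 'current' is the head of the list
        {
            head = next[current];
        }
        else
        {
            next[previous] = next[current]; // relink around 'current'
        }
    }
};
```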
Will have to do some more brainstorming to figure out what additional info needs to be stored where to make this work. The first obvious solution that pops out to me, though, is that we simply return the needed indices from insert and take them as arguments to removal.
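Roughly, the returned data could look like this (an illustrative sketch, not actual iceoryx code):

```cpp
#include <cstdint>

// Illustrative sketch of the data returned by insert and handed back to
// remove: enough information to unlink the element without searching.
struct UsedChunkHandle
{
    uint32_t current;  // index of the element itself
    uint32_t previous; // index of the element before it in the forward list
};
```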
Note that this information basically gets immediately plumbed into the deleter of a `Sample`.
Ah, but the untyped subscriber doesn't so neatly encode a destructor: https://github.com/gpalmer-latai/iceoryx/blob/e97d209e13c36d71237c99e0246310b8029f8f26/iceoryx_posh/include/iceoryx_posh/internal/popo/untyped_subscriber_impl.inl#L54, so we'd have to store this information elsewhere...
@gpalmer-latai Instead of using some kind of list, couldn't you fill the …
The performance hit isn't in the removal of the elements; it is in locating where they are. Setting aside the quirks of the implementation (every operation has to be on less than 64 bits because of torn writes), the …
So, following along my line of thought here - using such a handle in the `UsedChunkList` - the problem then becomes plumbing this through the rest of the stack. For the typed APIs it is relatively simple: you just bubble this data structure up to where the sample is created and plumb it into the deleter of the `Sample`. The catch is the untyped APIs. They currently only return raw pointers to the payload.
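The typed-API idea in miniature (hypothetical names throughout; not the actual iceoryx types or plumbing):

```cpp
#include <cstdint>
#include <memory>

// Hypothetical sketch: the slot index returned by the used-chunk-list insert
// is captured in the sample's deleter, so destroying the sample releases the
// chunk in O(1). None of these names are the actual iceoryx types.
struct ChunkSlotHandle
{
    uint32_t index{0U}; // slot the chunk occupies in the used chunk list
};

struct Payload
{
    // user data lives here
};

void releaseChunkAt(const ChunkSlotHandle handle, Payload* payload)
{
    // ... would forward handle.index to a UsedChunkList::remove(index) ...
    static_cast<void>(handle);
    delete payload;
}

int main()
{
    const ChunkSlotHandle handle{42U}; // would come back from insert()
    auto deleter = [handle](Payload* p) { releaseChunkAt(handle, p); };
    std::unique_ptr<Payload, decltype(deleter)> sample(new Payload{}, deleter);
    // When 'sample' goes out of scope, the deleter runs and the chunk slot
    // is released directly by index -- no linear search.
    return 0;
}
```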
If it is preferable to you, I'm happy to speak about this problem synchronously via some video chat. It is a bit of a high-priority issue for us and I intend to implement some solution immediately.
There is also a compromise solution. We could add the plumbing to allow typed endpoints to have fast removal by referencing the insertion index in those custom deleters, but leave the untyped endpoints alone and fall back to linear search. I'm not a fan of this solution because of the general complexity of supporting two different paths, and also because it would require us to replace our usage of the untyped subscriber with typed ones, which won't be trivial because we actually rely on the type-erased nature of samples received this way, extracting the size from the header. Using typed APIs instead would require some ugly hackery. I think it is doable, but still...
Ah shoot, a realization just dawned on me about a flaw in my proposed solution here. We cannot simply store the previous element index in the returned data structure, because the previous element could be removed, invalidating this index. Instead, what we probably need to do is make this a "doubly linked list" by adding a previous-element index to each node.
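What I mean, as an illustrative sketch (same index-based layout as before):

```cpp
#include <cstdint>

// With both 'prev' and 'next' indices stored per node, an element can unlink
// itself knowing only its own index, and removals of other elements keep the
// neighboring links up to date -- no externally cached 'previous' index that
// could go stale. Illustrative names only.
struct DoublyIndexList
{
    static constexpr uint32_t CAPACITY{8};
    static constexpr uint32_t INVALID{CAPACITY};

    uint32_t head{INVALID};
    uint32_t next[CAPACITY]{};
    uint32_t prev[CAPACITY]{};

    // O(1) removal by the element's own index.
    void remove(const uint32_t current)
    {
        if (prev[current] == INVALID) // 'current' is the head
        {
            head = next[current];
        }
        else
        {
            next[prev[current]] = next[current];
        }
        if (next[current] != INVALID)
        {
            prev[next[current]] = prev[current];
        }
    }
};
```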
Hah, cool.
FYI I'm working through an implementation right now. I've gone a different route in refactoring the UsedChunkList and am leveraging the `mpmc_loffli`.
Can confirm that using the `mpmc_loffli` works. Working through the other API layers now.
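Roughly the shape of it (a simplified, single-threaded stand-in; in my branch the lock-free `mpmc_loffli` plays the free-list role):

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for the refactored UsedChunkList: a stack of free slot
// indices makes both insert and remove O(1). A lock-free free list (like
// mpmc_loffli) or a plain vector of indices can fill the same role.
template <typename Chunk, uint32_t Capacity>
class SlotMapSketch
{
  public:
    SlotMapSketch()
    {
        for (uint32_t i = 0; i < Capacity; ++i)
        {
            m_freeIndices.push_back(Capacity - 1U - i);
        }
    }

    // Pop a free slot index and store the chunk there -- O(1).
    bool insert(const Chunk& chunk, uint32_t& indexOut)
    {
        if (m_freeIndices.empty())
        {
            return false;
        }
        indexOut = m_freeIndices.back();
        m_freeIndices.pop_back();
        m_slots[indexOut] = chunk;
        return true;
    }

    // Hand the slot back to the free list -- O(1), no search.
    void remove(const uint32_t index)
    {
        // ... the real code would also release the shared chunk here ...
        m_freeIndices.push_back(index);
    }

  private:
    Chunk m_slots[Capacity]{};
    std::vector<uint32_t> m_freeIndices; // stand-in for the free list
};
```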
Sorry for the late response. The mpmc_loffli approach should work but might add a higher cost than necessary due to memory synchronization. Have you thought about using the `FixedPositionContainer`? As you noted, the more critical part is the API breakage for the untyped API, which is also used by the C binding. We have to be careful here since we might break quite a lot of code out there.
My intention is to try to run Iceperf to see the impact of the synchronization. If it is too problematic, we can always fall back to a simpler slot map without the mpmc part.
I think I might also be able to revert my changes to the untyped APIs to use the …
There is another option: leave the current API as a legacy API and create a new one in the experimental folder. It is a bit more work, but it also gives us the option to experiment with the best approach, especially if you encounter more issues with your specific setup of 100k samples. We could use this chance to rethink the untyped API a bit. Instead of using raw pointers, it could hand out a sample-like wrapper. If you can read some Rust, it could look similar to this: https://github.com/eclipse-iceoryx/iceoryx-rs/blob/master/examples/publisher_untyped.rs#L20-L30
With the experimental API, I assume we still have to maintain the changes to the middleware layers. But then we can maintain a …
By the way, I've got Iceperf building and running in my branch. It looks like average RTT latency went from 4.2 µs to 5.9 µs. I'll try now with the FixedPositionContainer.
With the FixedPositionContainer I get back down to 5.5 µs. I suspect most of the added latency is therefore from the friction I've added through the upper API layers passing around a struct instead of a pointer. Also, if Iceperf uses the untyped API, I suspect the logic of creating a smart chunk might contribute.
Question about this actually - does the container still satisfy this constraint?
I think that since there is no way to just "iterate over the data array", RouDi couldn't safely release all the shared chunks. So I AM forced to use a separate free list - though instead of the mpmc_loffli I could just use an iox::vector of indices. |
It's too bad, because the iteration properties of the FixedPositionContainer would make it a suitable choice for backwards compatibility - allowing the existing untyped API to continue releasing with the raw pointer by just iterating over the container to match against that pointer.
I need to look closer at the FixedPositionContainer, but why do you think it is not suitable? Btw, those benchmark numbers are quite high. Did you test in release mode?
Because of this invariant the UsedChunkList needs to uphold: in order for RouDi to clean up the shared chunks, it needs to iterate over the actual data array. The FixedPositionContainer does not expose this directly (though I suppose you could always add some accessor for that).
I don't know. I'm not sure it matters too much for comparison's sake, though I could go back and fiddle around with it some more. The point is that my changes altered the Iceperf average under the default build configuration from 4.2 microseconds RTT to 5.9 for the mpmc_loffli implementation with altered APIs, 5.6 for the FixedPositionContainer implementation with altered APIs, and 4.6 when ONLY swapping the forward list out for the FixedPositionContainer but leaving the APIs unchanged.
Right now I'm doing another pass using a simple slot map.
From my gut feeling, I think it will be worse for large amounts of samples, especially when the ones at the beginning of the container are removed.
It will be worse if you have to perform a linear search, yes. That would be the case if, for example, we wished to maintain the legacy API, since over time you may end up iterating over large swaths of tombstone values (unless we can adapt the `FixedPositionContainer`). However, for the typed API and the experimental new untyped subscriber, removal will always be O(1), because you directly pass the slot handle with the index allocated from the free list as an argument to remove.
When you call …
Not sure what you mean by iterating over tombstone values? The FixedPositionContainer skips removed elements during iteration. The FixedPositionContainer would need some adaptations, but I think they would not be too intrusive. There are basically two options: either adding a custom callback to the …
I meant for implementations using a slot map. If we adapt the `FixedPositionContainer`, …
So just as a quick update: switching from … I have incorporated a much more polished version of master...gpalmer-latai:iceoryx:iox-2221-constant-time-chunk-release into our fork of iceoryx - one which backtracks the changes to the untyped APIs to use smart wrappers, but instead returns and takes the slot map handle (renamed from …). Unfortunately I will have to context switch to another task for the time being and don't have an upstream PR / design proposal to share as of yet. Once I am able to free up some time, though, I would propose something along these lines:
@gpalmer-latai that sounds reasonable. Go ahead.
Required information
Operating system:
Ubuntu 20.04 LTS
Compiler version:
GCC 9.4.0
Eclipse iceoryx version:
`master` branch

Observed result or behaviour:
The time it takes to drain a buffer of subscriber samples increases quadratically. At only 100,000 samples a process running on my desktop will hang for minutes.
See this minimal reproducing example: #2219
The quadratic time complexity of draining a subscriber buffer comes from this linear search in the `UsedChunkList`:

iceoryx_posh/include/iceoryx_posh/internal/popo/used_chunk_list.inl, line 70 (at commit b2e4137)
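For reference, the shape of that search (paraphrased and heavily simplified; member names are approximate, not the exact source):

```cpp
#include <cstdint>

// Paraphrased, simplified shape of the current removal: each release walks
// the used list comparing chunk headers until it finds a match, which is
// O(n) per release and therefore O(n^2) to drain n held samples.
struct ChunkHeader
{
};

struct UsedChunkListShape
{
    static constexpr uint32_t CAPACITY{256};
    static constexpr uint32_t INVALID{CAPACITY};

    uint32_t m_usedListHead{INVALID};
    uint32_t m_next[CAPACITY]{};
    const ChunkHeader* m_chunkHeader[CAPACITY]{};

    bool remove(const ChunkHeader* chunkHeader)
    {
        for (uint32_t current = m_usedListHead; current != INVALID; current = m_next[current])
        {
            if (m_chunkHeader[current] == chunkHeader) // the linear search
            {
                // ... unlink 'current' and release the shared chunk ...
                return true;
            }
        }
        return false;
    }
};
```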
Expected result or behaviour:
Releasing a chunk should ideally be an O(1) operation.

I have not yet dived too deeply into the data structure, but I imagine it would be somehow possible for a sample to keep track of its location in the `UsedChunkList` and be able to directly remove itself. Assuming, however, that the current `UsedChunkList` is just a forward list, we would need to change it into a doubly-linked list to facilitate O(1) removal.

Increasing the subscriber queue depth beyond the default 256, as well as doing custom buffering to prolong the lifetime of interesting messages in a zero-copy fashion, is a useful and common use case, so accumulating these samples should not have such a drastic effect on performance.
Conditions where it occurred / Performed steps:
A minimal, runnable example is provided here: #2219
Basically all one has to do is increase `MAX_CHUNKS_HELD_PER_SUBSCRIBER_SIMULTANEOUSLY` to something large, buffer ~10,000 - 100,000 samples, and then try releasing them all at once. The amount of time this takes grows quadratically. The linked example collects some statistics on overall time as well as the worst-case time to release a single message.

At ~10,000 samples on my desktop I came up with:
As you see here, the average time it takes to release a sample from a 10,000-sample buffer is ~0.42 ms. Multiply this by 10,000 and it takes about 4.2 seconds to drain the buffer.
If you increase the sample size (no pun intended) to 100,000, you should see an average latency of 4.2 ms which, multiplied by 100,000, results in a total time to drain the buffer of 420 seconds, or 7 minutes.