Garbage collecting processed data. #21
Comments

Hello,

Currently I'm using your software to push roughly 400,000 integers through 9 different Excerpt gateways, which works out to about 45,000 entries per excerpt. The system works as a crossfire between read and write for two different applications: one pushes the data, while the other waits for data to appear and extracts it from the excerpt. The problem is that with so much data being pushed and potentially queued, the excerpt heap gets pretty large and needs to be cleared of what the reading side has already consumed (data that will never be read or used again). So once data has been read from the excerpt gateway, how does it get deleted, or basically garbage collected, without clearing potentially queued data? I can't seem to find such a method.

Thank you for your time.
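For orientation, the write/read pattern described above looks roughly like the following in the newer OpenHFT/Chronicle-Queue API that a later comment in this thread points to. This is a minimal sketch, not the 2013-era Java-Chronicle API the reporter was using; the directory name and single-integer payload are illustrative assumptions.

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.wire.DocumentContext;

public class GatewaySketch {
    public static void main(String[] args) {
        // Writer and reader would normally live in separate processes; both open
        // the same directory and Chronicle shares it via memory-mapped files.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("frame-gateway-0").build()) {

            // Writer side: append one "packet" as a single excerpt.
            ExcerptAppender appender = queue.acquireAppender();
            try (DocumentContext dc = appender.writingDocument()) {
                dc.wire().bytes().writeInt(0xFFAABBCC); // one ARGB pixel, for illustration
            }

            // Reader side: poll for the next unread excerpt.
            ExcerptTailer tailer = queue.createTailer();
            try (DocumentContext dc = tailer.readingDocument()) {
                if (dc.isPresent()) {
                    int argb = dc.wire().bytes().readInt();
                    System.out.printf("read pixel: %08X%n", argb);
                }
            }
        }
    }
}
```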
You clear data en masse by closing the Chronicle and deleting the files.

On 12 March 2013 06:21, M2- [email protected] wrote:
> Thanks for answering. How about something on a more precise scale, though: deleting indexes that have already been read, rather than resetting the entire excerpt and possibly losing queued data?
> Also, it pushes 450,000 integers every 20 ms (30 FPS). Overall, its function is to break an image's ARGB array out of its int buffer and push it to each 'frame packet' gateway. So it breaks the image down into 9 different packets of ARGB array data and sends them to be read and reassembled by the client side of the system. Basically this allows inter-process communication for real-time applet graphics streaming.
> The intent is to be able to connect to the heap with multiple connections, and to serialize the gateway's location so the entire read portion of the system can be transferred to another client ('tab transferring' from client to client), which then continues the read and image reassembly, ultimately displaying the given frame 30 times a second for a real-time effect, with compliments to your low latency.
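For scale, the numbers in that message work out to 450,000 ints × 4 bytes × 30 frames/s ≈ 54 MB/s of pixel data. Below is a rough sketch of the packetising step as described, splitting one frame's ARGB int buffer into 9 slices with a small header so the reading side can reassemble them; the two-word header layout is an assumption for illustration, not the poster's actual format.

```java
public class Packetiser {
    /** Split one frame's ARGB pixels into roughly equal slices, each prefixed
     *  with its index and pixel offset so the reading side can reassemble. */
    static int[][] packetise(int[] argb, int packets) {
        int[][] out = new int[packets][];
        int base = argb.length / packets, rem = argb.length % packets, pos = 0;
        for (int i = 0; i < packets; i++) {
            int len = base + (i < rem ? 1 : 0); // spread any remainder over the first slices
            int[] packet = new int[len + 2];
            packet[0] = i;                      // header word 1: slice index (assumed layout)
            packet[1] = pos;                    // header word 2: pixel offset within the frame
            System.arraycopy(argb, pos, packet, 2, len);
            pos += len;
            out[i] = packet;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] frame = new int[450_000];         // one frame's worth of ARGB pixels
        int[][] packets = packetise(frame, 9);  // 9 slices of 50,000 pixels each
        System.out.println(packets[0].length);  // 50,002 = 50,000 pixels + 2 header words
    }
}
```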
Assuming you mean every 33.3 ms (30 FPS) rather than every 20 ms (50 FPS).

On 12 March 2013 15:16, M2- [email protected] wrote:
> This issue is still in open status. Does this mean that there is still no way to truncate a chronicle in a running application?
Not yet. Chronicle 2.0 will be released soon with improved functionality.

On 5 September 2013 16:44, Maxim [email protected] wrote:
> I am glad to hear it. Actually, I have just developed the file rolling system myself. Had some trouble with asynchronous chronicle closing though, so it is not perfectly stable at the moment. All my hopes are on 2.1 then. :)
File rolling is not easy but doable for local chronicles. Btw, working on the javadocs for Chronicle 2.0, something that has been missing until now.
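In the current Chronicle Queue (linked in the next comment), rolling is built in, and reclaiming processed data generally means deleting rolled files once the queue releases them. A hedged sketch of that approach; the minutely roll cycle and the delete-on-release policy are illustrative choices, and deleting on release is only safe once every tailer has finished with the rolled file.

```java
import java.io.File;
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class RollingSketch {
    public static void main(String[] args) {
        // Roll to a new queue file every minute, and delete each file as soon as
        // the queue releases it. NOTE: this assumes all tailers have finished
        // with the rolled file; a slow reader would lose that data.
        try (ChronicleQueue queue = SingleChronicleQueueBuilder
                .binary("frame-gateway-0")
                .rollCycle(RollCycles.MINUTELY)
                .storeFileListener((int cycle, File file) -> file.delete())
                .build()) {
            // ... append and read as usual; consumed files disappear in the background.
        }
    }
}
```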
@johnkajava Your best option is to switch to the newer version under https://github.com/OpenHFT/Chronicle-Queue