Memory Leak Causing Pod Shutdowns #115
Comments
Thanks for reporting! We are investigating this. Can you provide an estimate of the session size when serialization happens, or a project replicating the issue?
Hi, I work at the same client as @MichaelPluessErni - let me append some details here. The first image is a time vector of
@heruan Based on my rough estimates, a single VaadinSession seems to be about 7 kB.
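(For reference, one rough way to arrive at such a number is to serialize the session's attributes into an in-memory buffer and read off its size; the sketch below is a generic illustration, not necessarily how the figure above was measured.)

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative sketch: estimate the serialized size of any Serializable
// object (e.g. a session attribute graph) by writing it to a byte buffer.
public final class SerializedSizeEstimator {

    public static int estimateSizeInBytes(Serializable object) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(object);
        }
        return buffer.size();
    }
}
```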
@MichaelPluessErni @anderslauri would you be able to take a couple of heap dumps and compare them, to check which objects are actually making the memory usage grow? This would help a lot in the investigation.
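(A dump can be taken from outside the JVM with `jcmd <pid> GC.heap_dump <file>`, or programmatically; the sketch below is a generic illustration, not project code.)

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

// Illustrative sketch: trigger an hprof heap dump from inside the JVM,
// e.g. once on a freshly started pod and again after memory has grown,
// so the two dumps can be compared in a heap analyzer.
public final class HeapDumper {

    public static void dumpHeap(String outputFile, boolean liveObjectsOnly) throws IOException {
        HotSpotDiagnosticMXBean diagnostics = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly = true runs a full GC first and dumps only reachable objects
        diagnostics.dumpHeap(outputFile, liveObjectsOnly);
    }
}
```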
Yes, this is possible. Let us do this.
@MichaelPluessErni @anderslauri an additional question: which driver are you using to connect to Redis, Lettuce or Jedis? Did you perhaps try to change the driver to verify that the leak is independent of it? EDIT: looking at the other issues, it looks like Lettuce is in use.
@mcollovati We're using: It is not easy to test other versions, as the bug only appears on the production system.
@mcollovati I'm now able to produce heap dumps and analyze them. A heap dump from one of our production pods: 477 MB (hprof file size). I hope this helps already. Otherwise I'm available for more specific analyses of heap dumps.
@MichaelPluessErni thank you very much! Is this dump taken when memory is already leaked? If so, I would also take a dump before the memory grows, to compare them. Memory Analyzer (MAT) is a great tool to inspect heap dumps. It also provides a Leak Suspects report. Otherwise, if you can privately share the dump with me, I can do further analysis.
@mcollovati this dump is pre-leak, meaning from a "healthy" pod.
@mcollovati Using MAT proves difficult, as I'm not able to download it on the company laptop. We're investigating whether it is possible to send you the dump. Meanwhile, we've found a GC configuration that helps ameliorate the memory leak:
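(Independent of the exact flags chosen, one simple way to verify that a GC change actually releases memory between load peaks is to log the active collectors and heap usage over time; this is a generic sketch, not project code.)

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Illustrative sketch: log which collectors are active and how much heap is
// used vs. committed, to see whether a GC configuration change actually
// returns memory between load peaks.
public final class GcAndHeapLogger {

    public static void log() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB, committed: %d MB, max: %d MB%n",
                heap.getUsed() / (1024 * 1024),
                heap.getCommitted() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}
```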
Error description:
Since introducing session serialization, we have been experiencing multiple pod kills in our environment.
The Vaadin application is running in production, and the pods on which it runs regularly reach the memory limit of 3 GB, causing the pods to be shut down. Sometimes they reach the limit in as little as 4 hours.
This has only happened since we started using session serialization with the Kubernetes Kit. Before that, memory usage would go down during the afternoon and evening, when we have less traffic.
It looks as though there is a memory leak somewhere, as the memory is not cleared properly.
We tried using a different garbage collector. This helped a bit, but did not solve the problem.
Expected behaviour:
The memory usage should not be permanently affected by the session serialization.
While it is clear that there will be more memory usage during the serialization itself, it should not cause lasting memory leaks.
Errors during serialization or deserialization should not cause memory leaks.
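(As an illustration of this expectation only, and not of how the Kubernetes Kit is implemented: serialization code should keep its buffers local and release them even when a session fails to serialize, e.g.:)

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Illustrative sketch: a failed serialization attempt should leave no
// long-lived references behind; the buffer here is local, so it becomes
// garbage-collectable whether or not writeObject() succeeds.
public final class SafeSessionSerializer {

    public static byte[] serializeOrNull(Object sessionAttributes) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(sessionAttributes);
            return buffer.toByteArray();
        } catch (IOException e) {
            // e.g. NotSerializableException: skip this session instead of retaining
            // a half-written buffer or caching the failed attempt somewhere.
            return null;
        }
    }
}
```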
Details:
A comparison of the memory usage of our pods before and after the introduction of the session serialization (session serialization was introduced on the 3rd of March):
It is visible from the logs that the frequency with which pods are killed has increased drastically:
The memory leaks seem to happen in "jumps".
This pod is soon going to be killed after one more memory leak event. Logically, there are more memory leak events occurring during times of higher usage (in our case, during working hours).
The new garbage collector does not solve the problem: