Random exception by reading wrong position #44
It seems that the last few bytes of the null bitmap are "reused". According to tcpdump, I received the following bytes for the failed row.
But my print debug in
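For context, here is a minimal sketch of how the MySQL binary-protocol NULL bitmap for a row is sized and indexed, per the wire format (the formula is from the protocol spec; the helper names are illustrative, not crystal-mysql's actual code). If a short read delivers fewer bitmap bytes than this size, the tail bytes of the buffer keep stale data, which would look exactly like a "reused" bitmap:

```ruby
# MySQL binary resultset row: NULL bitmap length is
# (column_count + 7 + 2) / 8 bytes, with the first 2 bits reserved.
def null_bitmap_size(column_count)
  (column_count + 7 + 2) / 8
end

def null_field?(bitmap, field_index)
  bit = field_index + 2                     # skip the 2 reserved bits
  (bitmap[bit / 8] >> (bit % 8)) & 1 == 1
end

bitmap = [0b0000_0100, 0b0000_0000, 0b0000_0000]  # field 0 is NULL
puts null_bitmap_size(20)       # 20 fields -> 3 bitmap bytes
puts null_field?(bitmap, 0)     # true
puts null_field?(bitmap, 1)     # false
```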
Seems like a bug in Crystal itself.
I found the root cause. It only happens when a read request to the MySQL TCP socket is issued across a buffer boundary. One solution is to check the number of bytes actually read and call the read again until everything arrives. But I feel the behavior of Crystal's IO here is questionable.
crystal-lang/crystal#4796
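The retry idea from the comment above can be sketched as a small "read fully" loop: a read may legitimately return fewer bytes than requested when it crosses an internal buffer boundary, so the caller loops until the full count is available. This is a generic illustration (names and the `StringIO` stand-in are assumptions, not crystal-mysql's API):

```ruby
require "stringio"

# Read exactly `count` bytes from `io`, retrying on short reads.
def read_fully(io, count)
  buf = +""
  while buf.bytesize < count
    chunk = io.read(count - buf.bytesize)
    raise EOFError, "unexpected end of stream" if chunk.nil? || chunk.empty?
    buf << chunk
  end
  buf
end

io = StringIO.new("\x01\x02\x03\x04")
p read_fully(io, 4).bytes   # [1, 2, 3, 4]
```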
When I read many records (10,000 rows with 20 fields), crystal-mysql randomly raises exceptions when I call `rs.read`. The stacktrace also changes randomly between the following 2 patterns in my case. I tried to find the reason, and it seems that sometimes crystal-mysql forgets to read some bytes from the IO and then reads bytes from the wrong place.
For example, after fetching the record with id = 221, it tries to fetch the content of the row with id = 222. But then `rs.read(Int32)` returns 56832, which equals 222 * 256 and indicates a one-byte shift of the reading position. In another case there was a two-byte shift (1975 expected, but got 129433683 = 1975 * 65536 + 83).
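The arithmetic above is exactly what a stale, unconsumed byte does to a little-endian Int32: each leftover byte before the real value multiplies the expected value by 256 and mixes the stale byte into the low bits. A self-contained demonstration, independent of crystal-mysql:

```ruby
# Decode a little-endian signed Int32 from a byte array at `offset`.
def read_le_int32(bytes, offset)
  bytes[offset, 4].pack("C*").unpack1("l<")
end

# A stale 0 byte left in the stream, followed by Int32 222 (little-endian).
stream = [0, 222, 0, 0, 0]
p read_le_int32(stream, 1)   # correct position: 222
p read_le_int32(stream, 0)   # one byte behind: 0 + 222 * 256 = 56832
```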
If this happens, it soon ends up with the exceptions above.
It starts to happen at a somewhat random place in the response stream:
sometimes at the 222nd row, sometimes at the 1975th row.
But it's not totally random. If I run my code 10 times, it happens
5 times at the 222nd row, 2 times at the 1975th row, ...
I suspected the GC, but even with `GC.disable` the problem still occurs. Any help? I will keep looking for the cause, but if anybody can suggest anything, I'm willing to try it (printf debugging in io.cr, for example).
Maybe it is related to #39 .
If I limit the number of rows to 10, it does not happen in my environment.
So my workaround is to divide the 10,000 records into 1,000 SQL queries.
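The batching workaround can be sketched as a loop that fetches small chunks until all rows are collected. Everything here is illustrative: `fetch_rows` is a hypothetical stand-in for a real `LIMIT ... OFFSET ...` query, not crystal-mysql's API:

```ruby
# Stub standing in for a real query such as
# "SELECT ... LIMIT #{limit} OFFSET #{offset}".
def fetch_rows(limit, offset)
  (offset...(offset + limit)).to_a
end

# Fetch `total` rows in chunks of `batch_size`, one query per chunk.
def fetch_in_batches(total, batch_size)
  rows = []
  (0...total).step(batch_size) do |offset|
    rows.concat(fetch_rows(batch_size, offset))
  end
  rows
end

p fetch_in_batches(100, 10).size   # 100 rows via 10 queries
```

Note that with 10,000 rows and a batch size of 10 this issues 1,000 queries, matching the workaround described above.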