Data integrity during compact #163
Thanks for the report. That's bad indeed, and so far it has not been reported. Two things you can try:
1. Change `Countries.delete(txn);` to `await Countries.delete(txn);`. It's weird that this didn't show a lint warning in pedantic mode (or a runtime crash in debug mode). It also seems you are catching the exception inside the transaction (any logs there, to see if an error occurs?). Be aware that this means the transaction can succeed partially, i.e. whatever has run inside the transaction until the exception occurred gets committed (see the sketch below).
2. sqflite can be used as a journal database instead, with fewer compacting issues to deal with.

Such issues are hard to unit test. I will try to reproduce it, though.
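For illustration, a minimal sketch of the first suggestion, assuming sembast's `StoreRef` API; only the `Countries` name comes from the snippet above, the rest is hypothetical:

```dart
import 'package:sembast/sembast.dart';

// Hypothetical setup: a store reference named after the snippet above.
// ignore: non_constant_identifier_names
final Countries = stringMapStoreFactory.store('Countries');

Future<void> clearCountries(Database db) async {
  await db.transaction((txn) async {
    // Without `await`, this future escapes the transaction scope and may
    // still be pending when the transaction commits.
    await Countries.delete(txn);
  });
}
```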
Oh, dear me, I should've spotted those missing awaits. Yes, very, very likely to cause issues, and exactly these kinds of non-reproducible ones. Although I've already moved away from that code a bit to combat the performance issues, I'll restore it temporarily to see if it solves the problem.

The catches are there from the sqflite era, where they simply warned me with a "Hot reload?" debug print (apart from logging the error, of course): I seem to recall having some issues with hot reload, and I simply wanted to warn myself in debug mode that I needed to restart the app instead. But they didn't get triggered at all. If the transaction can't throw an exception normally, I don't want to stick with them at all.

What I did to reorganize was to separate my data into two databases; this means that instead of deleting, I can simply drop one and reopen it in empty mode. However, with …
The awaits were very much required, for sure, but I don't think that solved the issue completely: the following runs already produce files of the same size. However, if I compare them (after sorting in a text editor to account for the different order, of course), they are only equal in size, not in contents. The database is not full in either case; just different items are missing in the different runs. I'll add a counter to my code to see how many records I am actually supposed to write.
That is why you should not catch exceptions inside a transaction unless needed. Or rethrow them and catch the exception around the transaction, so you know it has failed.
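A minimal sketch of that pattern, reusing the hypothetical `Countries` store from above:

```dart
import 'package:sembast/sembast.dart';

// ignore: non_constant_identifier_names
final Countries = stringMapStoreFactory.store('Countries');

Future<void> refresh(Database db, List<Map<String, Object?>> items) async {
  try {
    await db.transaction((txn) async {
      // Do not catch exceptions in here: let them propagate, so the
      // whole transaction is rolled back instead of committing partially.
      await Countries.delete(txn);
      for (final item in items) {
        await Countries.add(txn, item);
      }
    });
  } catch (e) {
    // Caught around the transaction: at this point the transaction has
    // failed as a whole and nothing was committed.
    print('refresh failed: $e');
    rethrow;
  }
}
```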
Do you close the previous instance properly before dropping the database and opening it in empty mode? There is always a risk of having pending actions on the previous instance that could mess up the database.
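A hedged sketch of that close-before-drop sequence, assuming sembast's io factory and `DatabaseMode.empty` (path handling is illustrative):

```dart
import 'package:sembast/sembast.dart';
import 'package:sembast/sembast_io.dart';

Future<Database> reopenEmpty(Database old, String path) async {
  // Close the previous instance first, so no pending writes can hit the
  // file while it is being reset.
  await old.close();
  // Reopen, discarding any existing content.
  return databaseFactoryIo.openDatabase(path, mode: DatabaseMode.empty);
}
```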
No. Only delete and update should trigger compacting; just adding should be fine. It would be good if you managed to reproduce your issue in a unit test, though that is not always easy.
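A possible starting point for such a test, assuming the io factory and a temporary file (counts and store names are arbitrary):

```dart
import 'dart:io';

import 'package:sembast/sembast.dart';
import 'package:sembast/sembast_io.dart';
import 'package:test/test.dart';

void main() {
  test('records survive delete and re-add across reopen', () async {
    final dir = await Directory.systemTemp.createTemp();
    final path = '${dir.path}/test.db';
    final store = StoreRef<int, String>('items');

    var db = await databaseFactoryIo.openDatabase(path);
    await db.transaction((txn) async {
      for (var i = 0; i < 3000; i++) {
        await store.record(i).put(txn, 'value $i');
      }
      // Deleting and re-adding is what should trigger compacting.
      await store.delete(txn);
      for (var i = 0; i < 3000; i++) {
        await store.record(i).put(txn, 'value $i');
      }
    });
    await db.close();

    // Reopen and verify that nothing was lost.
    db = await databaseFactoryIo.openDatabase(path);
    expect(await store.count(db), 3000);
    await db.close();
  });
}
```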
Probably not; this occurred to me in the meantime as a possible error, too, but now I reverted to the previous code (one database, no dropping and reopening).

You were right about pedantic, of course, although I had my own lint configuration, which apparently missed this.

So, for now, I'll go on adding a counter and seeing if I actually get as many records as I'm supposed to put in.
Retracting what I said earlier; stupid of me. The differences are due to the few items that were kept. So, in the end, it was basically my mistake of not using `await`.
@alextekartik Alex, one more issue, but if it's known, I don't want to report it separately. Is it a known limitation that …
I have a not too large database: about one megabyte, around 2700 records in 12 stores. When I need to refresh the contents from the net, I use a transaction, delete most of the stores (only a few items remain), and then fill them up again.
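Roughly, the deletion part looks like this (illustrative sketch; only `Countries` appears verbatim in the thread, the other store names are placeholders):

```dart
import 'package:sembast/sembast.dart';

// Placeholder store references; only `Countries` appears verbatim above.
// ignore_for_file: non_constant_identifier_names
final Countries = stringMapStoreFactory.store('Countries');
final Cities = stringMapStoreFactory.store('Cities');

Future<void> clearStores(Database db) => db.transaction((txn) async {
      // These calls are not awaited, exactly as in the original code;
      // as the discussion above shows, that turned out to be the bug.
      Countries.delete(txn);
      Cities.delete(txn);
    });
```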
plus a few more, of the same pattern.
So much deletion and insertion inevitably leads to a compact operation as the transaction finishes. Although this introduces some performance issues, the more pressing one seems to be that I lose data during the compacting. I use the same data source every time, and I make sure, using `print()` counts, that the data I push into the database is the same for sure. But the resulting database will not be the same: it varies in byte size and row count quite considerably across repeated tests. It mostly looks like a flushing problem at the end of writing the new file: records that happen to be written at the end of the database are either there or not, in a completely random way. But I spotted a few missing records in other places as well.

A sample of four consecutive runs:
![Annotation 2020-05-14 141931](https://user-images.githubusercontent.com/2234271/81933616-fc3f0e00-95ed-11ea-9880-358118bea03e.png)
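For what it's worth, the counting could look like this (hypothetical helper, assuming sembast's `StoreRef.count`):

```dart
import 'package:sembast/sembast.dart';

// Hypothetical helper: print per-store record counts after a refresh run,
// to compare against the number of records pushed in.
Future<void> dumpCounts(Database db, List<String> storeNames) async {
  for (final name in storeNames) {
    final store = stringMapStoreFactory.store(name);
    print('$name: ${await store.count(db)} records');
  }
}
```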