
Data integrity during compact #163

Closed
deakjahn opened this issue May 14, 2020 · 8 comments
@deakjahn

deakjahn commented May 14, 2020

I have a not too large database: about one megabyte, around 2700 records in 12 stores. When I need to refresh the contents from the net, I use a transaction, delete most of the stores (only a few items remain), and then fill them up again.

bool success = await db.transaction<bool>((txn) async {
  try {
    progress(1);
    if (data.countries != null) {
      Countries.delete(txn);
      for (final country in data.countries) {
        await Countries.record(country.code).put(txn, country.toMap());
      }
    }
    if (data.currencies != null) {
      Currencies.delete(txn);
      for (final currency in data.currencies) {
        await Currencies.record(currency.id).put(txn, currency.toMap());
      }
      globals.currency = null;
    }

    progress(2);
    if (data.categories != null) {
      Categories.delete(txn);
      for (final category in data.categories) {
        await Categories.record(category.categoryId).put(txn, category.toMap());
      }
    }

plus a few more, of the same pattern.

So much deletion and insertion inevitably leads to a compact operation as the transaction finishes. Although this introduces some performance issues, the more pressing one seems to be that I lose data during the compacting. I use the same data source every time, and I verify with print() counts that the data I push into the database is identical. But the resulting database is not the same: it varies considerably in byte size and record count across repeated tests. It mostly looks like a flushing problem at the end of writing the new file: records that happen to be written at the end of the database are either there or not, completely at random. But I spotted a few missing records in other places as well.

A sample of four consecutive runs:
[Screenshot: Annotation 2020-05-14 141931]

@alextekartik
Collaborator

Thanks for the report. That's bad indeed, and has not been reported so far.

2 things you can try:

  • Can you await the delete operation:
Countries.delete(txn);

change to

await Countries.delete(txn);

Strange, as this should have shown a lint warning in pedantic mode (or caused a runtime crash in debug mode). It seems you are catching the exception inside the transaction (any logs here to see if an error occurs?). Be aware that this means the transaction can succeed partially, i.e. whatever has run inside the transaction until an exception occurs gets committed.

  • Try sembast_sqflite.

sqflite is used as a journal database, with fewer compacting issues to deal with.

Such issues are hard to unit test, but I will try to reproduce it.
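For reference, a minimal sketch of the refresh transaction with the delete awaited. The `Country` class and `countriesStore` below are hypothetical stand-ins for the report's `Countries` wrapper, assuming it is backed by a sembast `StoreRef`:

```dart
import 'package:sembast/sembast.dart';

// Hypothetical model, mirroring the country objects in the report.
class Country {
  final String code;
  final String name;
  Country(this.code, this.name);
  Map<String, Object?> toMap() => {'code': code, 'name': name};
}

// A sembast store standing in for the report's Countries wrapper.
final countriesStore = stringMapStoreFactory.store('countries');

Future<bool> refreshCountries(Database db, List<Country> newCountries) {
  return db.transaction((txn) async {
    // Awaiting the delete guarantees it completes before the puts below;
    // an un-awaited delete can interleave with them inside the transaction.
    await countriesStore.delete(txn);
    for (final country in newCountries) {
      await countriesStore.record(country.code).put(txn, country.toMap());
    }
    return true;
  });
}
```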

@deakjahn
Author

deakjahn commented May 14, 2020

Oh, dear me, I should've spotted those missing awaits. Yes, very likely to cause issues, and exactly this kind of non-reproducible one. Although I've already moved a bit away from that code to combat the performance issues, I'll restore it temporarily to see if it solves the problem.

The catches are there from the sqflite era, where they simply warned me with a "Hot reload?" debug print (apart from logging the error, of course): I seem to recall I had some issues with hot reload and simply wanted to warn myself in debug mode to restart the app instead. But they didn't get triggered at all. If the transaction doesn't normally throw an exception, I don't want to keep them at all.

What I did to reorganize was to separate my data into two databases; this means that instead of deleting, I can simply drop one with DatabaseMode.empty when refreshing. And, of course, this also helps startup time, as I made the database needed for the actual startup as small as possible. This wasn't a pressing issue with sqflite, of course.

However, with DatabaseMode.empty I still see an increasing database size and very long compacting transactions. Is this normal? I kind of expected it to present me with an empty database that I could fill without fearing compact retributions. :-)

@deakjahn
Author

deakjahn commented May 14, 2020

The awaits were very much required for sure but I don't think that solved the issue completely:

[Screenshot: Annotation 2020-05-14 155603]

The following ones are already the same size. However, if I compare them (after sorting in a text editor to account for the different order, of course), they are only equal in size, not in content. The database is not full in either case; just different items are missing in the different runs.

I'll add a counter to my code to see how many records I'm actually supposed to write.

@alextekartik
Collaborator

If the transaction can't throw an exception normally, I don't want to stick to them at all.

That is why you should not catch exceptions inside a transaction unless needed. Or rethrow them and catch the exception around the transaction to know that it has failed.
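A minimal sketch of that pattern (the function name is hypothetical, not from the original code):

```dart
import 'package:sembast/sembast.dart';

Future<void> refreshAll(Database db) async {
  try {
    await db.transaction((txn) async {
      // All deletes/puts go here. Do NOT catch exceptions inside:
      // letting them propagate aborts the whole transaction.
    });
  } catch (e) {
    // Catch around the transaction instead: at this point the whole
    // transaction has been rolled back, so nothing was partially applied.
    print('refresh failed: $e');
  }
}
```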

instead of deleting, I can simply drop it with DatabaseMode.empty when refreshing

Do you close the previous instance properly before dropping the database and opening in empty mode? There is always a risk of pending actions on the previous instance messing up the database.
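A sketch of the close-before-reopen sequence being suggested here (the helper name and path handling are assumptions, using the IO factory):

```dart
import 'package:sembast/sembast.dart';
import 'package:sembast/sembast_io.dart';

// Hypothetical helper: close the old instance before reopening empty.
Future<Database> reopenEmpty(Database db, String path) async {
  // Make sure no pending operations race with the reopen.
  await db.close();
  // DatabaseMode.empty discards any existing content on open.
  return databaseFactoryIo.openDatabase(path, mode: DatabaseMode.empty);
}
```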

I still see increasing database size and very long compacting transactions. Is this normal?

No. Only delete and update should trigger compacting. Just adding should be fine.

It would be good if you could reproduce your issues in a unit test, though that's not always easy.

@deakjahn
Author

Do you close the previous instance properly before dropping the database

Probably not; this is what I came to suspect in the meantime as well. But now I've reverted to the previous code (one database, no empty mode) to see if I can pin down or eliminate the problem altogether, not worrying about the transaction performance for now.

You were right about pedantic, of course: although I had my own analysis_options.yaml, it wasn't using pedantic.

So, for now, I'll go on adding a counter and seeing if I actually get as many records as I'm supposed to put in.
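For anyone hitting the same class of missing-await bug: the relevant lint can be enabled on its own, without pulling in the whole pedantic set (a minimal analysis_options.yaml sketch):

```yaml
# analysis_options.yaml
linter:
  rules:
    - unawaited_futures
```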

@deakjahn
Author

deakjahn commented May 14, 2020

Retracting what I said earlier; stupid of me. The differences are due to the few items that were added rather than put. Keys aren't stable then. Sorry. I'll go back to empty mode and test it, too.

So, in the end, it was basically my mistake of not using await, but I don't know how you could make this foolproof. Pedantic is hit or miss, after all.
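The add-versus-put distinction behind the retraction above, sketched (store name and values are hypothetical):

```dart
import 'package:sembast/sembast.dart';

final itemsStore = intMapStoreFactory.store('items');

Future<void> demo(Database db) async {
  // add() generates a fresh auto-increment key on every call, so the
  // same logical record gets a different key on every refresh run.
  final generatedKey = await itemsStore.add(db, {'name': 'apple'});
  print('generated key: $generatedKey');

  // put() on an explicit record keeps the key stable across runs.
  await itemsStore.record(42).put(db, {'name': 'apple'});
}
```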

@deakjahn
Author

deakjahn commented May 16, 2020

@alextekartik Alex, one more issue, but if it's known, I don't want to report it separately. Is it a known limitation that await _dbFactory.openDatabase() (web factory) never returns after a hot reload? I don't have the slightest problem with it on the first run, but after the first reload, the next line is never reached. No log, nothing; it just stops in its tracks.

@alextekartik
Collaborator

I don't want to report it separately

@deakjahn I've tracked the issue here: #165
