The problem
I've created a PGlite instance, created a few tables with a text field + tsvector + GIN index, and then inserted 1k rows with text downloaded from https://en.wikipedia.org/wiki/American_Civil_War?action=raw
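For reference, a minimal sketch of the setup (the table name, the stored generated tsvector column, and the fetch step are my assumptions, not the exact code I ran):

```ts
import { PGlite } from '@electric-sql/pglite'

// In-memory instance, no dataDir
const db = new PGlite()

await db.exec(`
  CREATE TABLE articles (
    id   serial PRIMARY KEY,
    body text,
    -- assumed: tsvector kept as a stored generated column
    body_tsv tsvector GENERATED ALWAYS AS (to_tsvector('english', body)) STORED
  );
  CREATE INDEX articles_body_tsv_idx ON articles USING GIN (body_tsv);
`)

// Download the article wikitext once and reuse it for every row
const res = await fetch('https://en.wikipedia.org/wiki/American_Civil_War?action=raw')
const wikiText = await res.text()

for (let i = 0; i < 1000; i++) {
  await db.query('INSERT INTO articles (body) VALUES ($1)', [wikiText])
}
```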
The command SELECT pg_size_pretty(sum(pg_database_size(datname))) AS sum_all_db_sizes FROM pg_database says my database size is just 328 MB, which is fine and well expected for these large texts.
However, when I dump the database with no compression, the archive takes 1.1 GiB (1,130,903,040 bytes).
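This is how I compare the two numbers (a sketch; I'm assuming dumpDataDir accepts a compression argument, with 'none' producing the uncompressed archive):

```ts
// Logical size as reported by Postgres itself
const { rows } = await db.query<{ size: string }>(
  `SELECT pg_size_pretty(sum(pg_database_size(datname))) AS size FROM pg_database`
)
console.log('pg_database_size:', rows[0].size) // ~328 MB

// Physical size of the uncompressed data-dir dump
const dump = await db.dumpDataDir('none')
console.log('dump size:', dump.size) // ~1.1 GiB here
```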
Here is the archive content:
It looks like the WAL files add this overhead. It is also noticeable that each WAL file is 1 MB.
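The WAL footprint can also be confirmed from inside the database with pg_ls_waldir(), a stock Postgres function that I'm assuming is available in PGlite:

```ts
// List WAL segment files with their sizes; each shows up as ~1 MB here
const wal = await db.query(`SELECT name, size FROM pg_ls_waldir() ORDER BY name`)
console.table(wal.rows)

// Total WAL footprint
const total = await db.query(
  `SELECT pg_size_pretty(sum(size)) AS wal_total FROM pg_ls_waldir()`
)
console.log(total.rows[0])
```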
I've tried running VACUUM FULL as well as CHECKPOINT; SELECT pg_switch_wal(); — it does not fix the problem.
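For completeness, the exact cleanup sequence I tried, run through the PGlite API:

```ts
// None of these reduced the dump size
await db.exec('VACUUM FULL')
await db.exec('CHECKPOINT')
await db.query('SELECT pg_switch_wal()') // forces a switch to a new WAL segment
```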
The consequences
Such disk usage makes PGlite unusable in real-world applications with in-memory storage.
This problem looks even more fatal because of issue #810, which blocks us from implementing our own storage that syncs only the files that have changed. Currently we have to rewrite 1.1 GB on the FS every time the DB runs any operation. Even with the DB running in a worker and the sync calls debounced, it freezes the application for seconds.
Do you have any ideas why the DB takes so much disk space, and how to fix it?