Replies: 1 comment
-
Afaik, using sqlite on zfs is not recommended (or even supported?). It can cause significant write amplification: SQLite tends to do a lot of small writes, and ZFS, with its Copy-on-Write (CoW) nature, writes a new copy of an entire block every time, even if only a small part of the data was modified. On top of that, ZFS adds overhead from checksumming, compression and what not.

The issue with any CoW filesystem is that it divides the disk into blocks, which are typically small, and an SQLite database is going to span many of these blocks. When the SQLite driver aggregates writes (as it does), many of these blocks can be touched during a single set of CRUD operations, and every single block that has been touched, even if it only contains a fraction of a relational transaction, must then be copied. This excessive data copying is a major source of performance degradation.

Regarding the frequent writes to sqlite's WAL (write-ahead log): every transaction, even the small ones generated by the underlying ORM (TypeORM), things such as the API cache (e.g. TMDB), session management, and other small background operations, is written to the WAL before being committed to the database. This results in frequent write operations, which a CoW filesystem like ZFS amplifies further.

TLDR; you should move your sqlite db off of zfs, as this will cause a lot of issues. The driver will effectively cause your database blocks to be copied many, many times per second. You could move the database to a different filesystem such as ext4, which can handle small writes more efficiently.
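If moving the database off zfs isn't practical, a commonly suggested partial mitigation is shrinking the dataset's recordsize so each CoW rewrite copies less data. A sketch only; the dataset name and db path below are placeholders, and you should verify these properties against your own pool:

```
# Placeholder dataset name and db path; adjust for your system.
zfs get recordsize tank/appdata       # ZFS default is 128K
zfs set recordsize=16K tank/appdata   # smaller records = less data copied per commit
# Note: recordsize only applies to newly written data, so copy the
# database file afterwards to rewrite it with the new record size.
sqlite3 /path/to/db.sqlite3 'PRAGMA page_size;'   # SQLite defaults to 4096 bytes
```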
-
My somewhat convoluted setup has rather high synchronous disk write latency, and pulling up the front page of Jellyseerr has been a 10-20s process.
I noticed the array hosting the docker host's root filesystem image was pegged at 100% synchronous write load whenever I refreshed Jellyseerr.
By setting `sync=disabled` on the zfs volume the VM hosting the Jellyseerr container runs from, the page loads instantly. Set it back to `standard` and we're back to slow. OK, so it's definitely something furiously writing to disk! Running
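For anyone wanting to reproduce the toggle, the commands are along these lines (the dataset name is a placeholder; be aware that `sync=disabled` means acknowledged sync writes can be lost on a crash or power failure):

```
zfs set sync=disabled tank/vmstore   # fast, but unsafe: sync writes only live in RAM
zfs set sync=standard tank/vmstore   # restore the default behaviour
zfs get sync tank/vmstore            # check the current value
```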
`strace -ffp <pid> -T -e trace=%file,fsync` on the docker host shows everything grinding to a halt over `fsync(84)` calls taking ~200ms each, which turn out (`ls -l /proc/<pid>/fd`) to be against the sqlite3 WAL file. The number of calls is WAY short of what I'm seeing on the host, so obviously I've got a write amplification issue going on between the layers of virtualisation/abstraction that I need to fix before my SSDs explode...

However, my googling shows high disk latency is a problem for a number of users, so here's a potential workaround, as long as you don't mind losing any data in Jellyseerr:
I have modified the Jellyseerr environment to run under eatmydata (https://github.com/stewartsmith/libeatmydata), which effectively disables `fsync()` calls. The app runs quickly for me now. For anyone looking to replicate, replace `image: fallenbagel/jellyseerr` in your `docker-compose.yml` file with a locally built wrapper image, or if running natively, install eatmydata with your package manager and edit the init file to wrap the launch command.
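As an illustration of that approach (not the original snippet; the directory name, package manager and start command below are assumptions you would need to verify against the upstream image), the compose service could point at a locally built wrapper image:

```
# docker-compose.yml: replace the image: line with a local build, e.g.
#   build: ./jellyseerr-eatmydata     (hypothetical directory name)
#
# jellyseerr-eatmydata/Dockerfile (hypothetical; assumes a
# Debian-based image -- adjust the package install if not):
FROM fallenbagel/jellyseerr
RUN apt-get update && apt-get install -y eatmydata
# Restate the upstream start command wrapped in eatmydata; check the
# image's real ENTRYPOINT/CMD with `docker inspect` before relying on this.
ENTRYPOINT ["eatmydata", "node", "dist/index.js"]
```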
Again THIS WORKAROUND CAN CAUSE THE LOSS OF ANY PERSISTENT DATA WRITTEN BY JELLYSEERR. Nothing else, just Jellyseerr. Still, be aware!
Does anyone know why Jellyseerr makes so many separate write transactions to the sqlite db whenever you load the front page? I'm unfamiliar with the source and couldn't find any obvious signs of commit/insert/update/etc, so I'm guessing it's abstracted away somewhere and used statelessly? Perhaps updating the last-seen attribute on a cache index?
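Not an answer for Jellyseerr specifically, but per-statement autocommit is the usual culprit with ORMs: each tiny statement becomes its own transaction, and in WAL mode every commit appends frames to the WAL (and syncs it, depending on `PRAGMA synchronous`). A minimal sketch using Python's stdlib sqlite3 module (the table and row contents are made up, standing in for cache/session updates) shows how 500 autocommitted inserts write far more WAL data than the same inserts batched into one transaction:

```python
import os
import sqlite3
import tempfile

def wal_bytes_after(batch):
    """Insert 500 tiny rows and return the size of the -wal file."""
    path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")
    con = sqlite3.connect(path, isolation_level=None)  # true autocommit mode
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
    if batch:
        con.execute("BEGIN")
    for i in range(500):
        # each INSERT is its own commit unless wrapped in BEGIN/COMMIT
        con.execute("INSERT INTO kv VALUES (?, ?)", (i, "x"))
    if batch:
        con.execute("COMMIT")
    size = os.path.getsize(path + "-wal")
    con.close()
    return size

print("autocommit WAL bytes:", wal_bytes_after(batch=False))
print("batched WAL bytes:   ", wal_bytes_after(batch=True))
```

Every autocommit re-appends the modified page(s) to the WAL, so the unbatched run produces a WAL orders of magnitude larger, and on a CoW filesystem each of those appends plus its fsync is what gets amplified.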
Also, I see #628 which'll make the problem go away nicely, maybe. I suspect there may still be numerous transactions involved in viewing the homepage...