TrueNAS Scale - Jellyfin - library.db

Hi,

I have a question that has probably been asked several times already, but I can't find an answer that satisfies my needs.
I am running the Jellyfin app on TrueNAS Scale. I persisted the config location in an extra dataset, because I had already had some trouble with it. Everything worked like a charm for about 2 months, maybe 3. Now I have the problem that Jellyfin's library.db gets corrupted nearly weekly, sometimes more often. I have tried to recover the DB with different approaches (dumps, recover, manual fixes), but all of them are quite painful, and nearly nothing really works: the database is malformed, end of story. My only "real" solution for now is to delete or rename the whole file and let Jellyfin create a new one. That means all library information, every watched status, etc., is gone.

Has anyone come up with a solution for this problem?

The problem is that my current machine is the only one with enough power to run Jellyfin smoothly. I am on the brink of building a new server that is capable of hosting Jellyfin too, and removing it entirely from TrueNAS Scale.

I hope someone has experienced the same and found a solution.

Thank you!

Edit 1: I forgot to mention that not even rolling back seems to help (snapshots every 1 h). I am really out of ideas right now.

It might help to post logs showing the problem you are seeing. The SQLite database should not just "go corrupt", and if restoring from a snapshot doesn't fix it, then I wonder if there are maybe other issues.

This wouldn't be related to the corruption, but I also followed the guidance here (Workload Tuning — OpenZFS documentation) to set the config dataset recordsize and the database page size. You can set the page size when you next rebuild the database. I won't swear it does anything, but on your server it might help with database I/O.
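Roughly what I mean, with your dataset name as an example and purely illustrative values (check the linked docs for what suits your pool):

    # match the dataset recordsize to the SQLite page size (names and values illustrative)
    zfs set recordsize=64K SSD/AppConfigs/jellyfinConfig

    # on the next rebuild of library.db, set a matching page size and rewrite the file
    # (the new page size only takes effect after a VACUUM, and not while the DB is in WAL mode)
    sqlite3 library.db "PRAGMA page_size=65536; VACUUM;"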

Have you already tried the steps described there?
IMHO, copy the file somewhere else and don't run tests directly on it, especially while Jellyfin has access to it.

Sure thing:
A simple integrity check:
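(For reference, the check itself is just this, run with Jellyfin stopped; the file name is an example:)

    # basic consistency check of the library database
    sqlite3 library.db "PRAGMA integrity_check;"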


probably some index which is corrupt.

Here is a simple recovery attempt:
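(Roughly this, for reference; the output file name is just an example:)

    # salvage whatever can still be read into a fresh database
    # (.recover needs a reasonably recent sqlite3 CLI)
    sqlite3 library.db ".recover" | sqlite3 library-recovered.db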

In a manual dump file I find entries like this:


Sure, I can fix those lines and create a new DB with the "rest" of the data, but having to do that so often sucks a bit.
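(For reference, the manual route is roughly: dump whatever is readable, hand-edit the broken entries out, rebuild; file names are examples:)

    # dump whatever is still readable to SQL
    sqlite3 library.db ".dump" > dump.sql
    # hand-edit dump.sql to remove or fix the broken entries
    # (a dump of a corrupt DB may also end in ROLLBACK instead of COMMIT, which needs fixing too)
    sqlite3 library-rebuilt.db < dump.sql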

At container start I get the default "db is malformed" error:


Nothing more happens and the container does not come up again. I have had situations where it came up, but no content was displayed anymore.

With the rollbacks, everything looks fine. No rollback issues, nothing, just a DB which is somehow malformed, even if I roll back to a point where everything worked like a charm. This part bugs me the most, because what's the point of snapshotting if it doesn't help when you need it?
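(For completeness, the rollback itself is nothing special; I stop Jellyfin first and then roll the config dataset back to a known-good hourly snapshot, names illustrative:)

    # roll the config dataset back to a known-good snapshot (names illustrative)
    # -r also removes any snapshots newer than the one specified
    zfs rollback -r SSD/AppConfigs/jellyfinConfig@known-good-hourly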

I don't really know what logs you want to see. Hope this helps a bit?

Yes, I have.
True, I don't copy it somewhere else, but I always shut down Jellyfin while I try to fix things on the database.

Edit: sorry, this reply was meant for @oxyde


I don't have much direct experience with SQLite; it seems to me that you have tried all the things I would have done in your place.
I think the better thing to do for the moment, instead of trying to recover the old data, is to understand why you get so much corruption.

Where does this DB "sit", specifically? On a network share?
Do you have some script/cron that stops containers? Or something else that accesses this DB for another purpose?

This really makes no sense; I can understand why you are puzzled. But what if you change the approach? I mean: instead of rolling back the same dataset with the file in it, restore into a new one and see if the corruption is still there.
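Something like this, for example (dataset and snapshot names are just placeholders):

    # replicate a known-good snapshot into a brand-new dataset instead of rolling back in place
    zfs send SSD/AppConfigs/jellyfinConfig@known-good | zfs receive SSD/AppConfigs/jellyfinConfig-test

Then check the file in the new dataset.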

Me neither, but I am a database guy professionally, so I pick up most of it quickly (and I don't like it, just personal preference).
I have two "main datasets", one called Main on big chunky HDDs and one called SSD on, so to speak, SSDs. The dataset with the DB in it is a sub-sub-dataset that only hosts the Jellyfin configs, which include the needed databases.
ix-applications (where the containers run) is on the same main dataset, SSD, so it basically sits next to it, but one tier higher (SSD/ix-applications vs SSD/AppConfigs/jellyfinConfig).
No scripts are running that touch the containers or their files. I also have Sonarr running on the same machine (SSD/AppConfigs/SonaarConfig), and its DB has only gotten corrupted once in several months.

Moving them back to ix-applications does not seem like a good idea, because the container does not have sqlite3 or anything else that would let me debug this. Furthermore, it is not independent: if the container dies, everything dies with it. Right now I "just" have a big chunk dying on a regular basis, but not all of it.

Haven’t tried that one, maybe worth a shot.

I have tried it, and now I am even more puzzled than before. With a restore to a new dataset, the database is just fine; the integrity check returns ok.

So the question is: why doesn't the rollback restore it properly?

I speak MSSQL quite well :joy: but for test purposes or really little projects, I have to admit SQLite is pretty useful (totally serverless), and I have used it with pleasure.
Anyway :smile:

I suspect that something else is corrupting your file → I never used Sonarr, but AFAIK it can interact with Jellyfin; maybe the culprit is a bad interaction?

For the moment, move everything into another new dataset and test whether the corruption comes back; if yes, repeat those steps, but with Sonarr disabled.
Now that you have a working DB again, at least you don't lose anything; back it up frequently.
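For example (file names illustrative, run with Jellyfin stopped):

    # take a consistent copy of the working library.db
    sqlite3 library.db ".backup library-backup-$(date +%F).db"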

Edit: also disable any share on the dataset, if one is active.

For little projects it is fine, but I like bigger systems better, even with the overhead of managing a server (for example PostgreSQL; I manage several of those and they are mostly fire-and-forget setups :smiley: and I have run into zero issues so far).

Yeah, for the moment I restored it from the second dataset, so my kids are satisfied again and I can have a moment of peace until the next round starts. Thank you very much, guys, for standing by me in a deep moment of WTF.

I may update this thread again in the future if I manage to find out what exactly the culprit is.
