Exactly this config then!
Are you fed up with all the preachy naysayers in forums?
Are you a rebel that was born free and is not afraid to take risks?
Do you find it sometimes annoying that you always know better than everybody else?
Then you are exactly in the right place!
Let's go through some rules these nerds mumble in their stupid forums:
Follow hardware requirements
Pff, follow hardware requirements, don't use RAID controllers, don't do that, I hear them whining. To follow hardware requirements, I would have to read. Andrew Tate said that reading makes you stupid and is for losers.
Put VMs on mirrors, not RAIDZ
Bro, you know I am the top G with flashy cars and all, but I don't have that kind of money! Using mirrors? Only 50%? No way! I have a 40TB Jellyfin VM!
Don't use block storage for files. Use datasets for your Jellyfin files
Nah, mounting a share in Linux is too much of a hassle. I would rather have my files stored in the VM.
By using block storage on RAIDZ instead of datasets, you run into many issues. Read up on how volblocksize is a static value while recordsize is a maximum. Read up on pool geometry.
Bro, I already told you that reading is for losers and now you expect me to do pool geometry math? Hell no!
If you are too lazy to read up on it, just follow these simple rules:
- Don't use block storage for files! Instead, use datasets for files.
- Separate your files from VM data. Your VM data is stored in zvols, i.e. block storage.
- For block storage, use NVMe mirrors (or at least SSDs). Don't use RAIDZ1, and never use RAIDZ2 for block storage.
- Use RAIDZ2 only for large files that are sequentially read and written.
I am a rebel, rules donāt apply to me!
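For anyone who does want to follow the rules above, here is a minimal sketch. The pool and dataset names (`tank`, `tank/media`, `tank/vm-101-disk-0`) are made up for illustration:

```shell
# Files (e.g. Jellyfin media) go in a dataset, shared via SMB/NFS;
# recordsize is a *maximum*, so a large value suits big sequential files.
zfs create -o recordsize=1M tank/media

# VM disks go in zvols (block storage) on a mirror pool, ideally NVMe;
# volblocksize is *fixed* at creation, so leave it at the default unless
# you have actually done the pool-geometry math.
zfs create -V 32G tank/vm-101-disk-0
```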
You wonāt get the storage efficiency you think you will get with RAIDZ and block storage
Pfff idiot. I simply set the volblocksize to 64k. Problem solved.
64k volblocksize will lead to read and write amplification and fragmentation
That is a problem for future me. Right now, it works. If it becomes dogslow a few years from now or if my resilver takes forever, I will come back to the forum to whine about it. But mostly to shit on how bad ZFS is.
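The amplification is easy to put numbers on. A rough sketch, assuming 4k random writes from the guest (typical for databases and VM filesystems):

```shell
# A 4k guest write into a zvol with volblocksize=64k forces ZFS to read,
# modify, and rewrite the whole 64k block (parity and checksums come on
# top of this), so the data moved is amplified roughly:
echo "$(( (64 * 1024) / (4 * 1024) ))x write amplification"
```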
KISS! Keep it simple, stupid. For example, don't use QCOW2 on ZFS. Just follow the defaults unless you really know what you're doing.
I thought you nerds are so clever. Don't you know about nested virtualization? I run a 40TB SMB share from Windows Server. That server is run by TrueNAS. TrueNAS itself is a VM of Proxmox. Don't worry bro, I used passthrough for the disks.
If something goes wrong, I expect you nerds to go out of your way to troubleshoot my setup!
Pff, KISS! You can kiss my ass!
"ZFS rules don't apply to me!"
I sometimes really get that feeling when discussing ZFS. But to be fair, it is pretty good in this forum here.
Your Top G post is giving these same vibes.

I really like your last sentence!
Don't let grumpy TrueNAS veterans hold you back. You are an incredible human being! You can do anything! Follow your heart.
Lolz, why stop at 770? Real men like me only do chmod 777.
Today, I have the feeling that I can't continue "dressing" my avatar in such a casual way. Thanks Prakash and Dario for bringing this to my attention.
Come on guys, it's important to maintain a certain decorum!
TBH, I thought you would suggest a fingerloan. Or some flight stuff.
Now I have a serious question (that perhaps doesn't deserve a separate topic). I'm planning to use 9-wide RAIDZ1 in my offsite backup setup. Is this somehow a bad idea?
I will take it as "No, it's not a bad idea".
Well, it's arguably a better plan than not having an offsite backup.
But 9-wide raidz1 is not very safe (if one drive goes bad, there's eight drives' worth of data left at risk without redundancy…), and since this is an offsite backup you may not be able to react quickly when something goes wrong.
So the serious answer is that 9-wide or 10-wide raidz2 would be even better.
Paranoids have their remote backup on raidz3 + spare(s).
Perhaps I should have asked it differently. My main pool consists of mirrored vdevs. It is also encrypted, and I'm planning to use raw replication (to an offsite RAIDZ). I've heard of some fancy issues with raidz and wasted space. Can raw replication from mirrors cause those?
Regarding the redundancy: I think I'm OK with raidz1 for offsite, as it is kinda unlikely I would ever need it.
This is kinda what I do also. Striped mirrors main and a RAIDZ1 backup, though my vdev is only 5-wide instead of 9-wide haha.
Due to padding, raidz takes slightly more space than mirrors to host the very same data, but the discrepancy is nothing to worry about. (Unless your data consists of billions of tiny files a few kB each.)
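The padding overhead is easy to estimate. A rough sketch, assuming 4k sectors and the usual raidz rounding rule (allocations are padded up to a multiple of parity + 1 sectors):

```shell
# Sectors allocated for one block of D data sectors on an N-wide raidz
# with P parity disks:
#   rows  = ceil(D / (N - P))      stripe rows needed
#   total = D + rows * P           data plus parity sectors
#   alloc = total, padded up to a multiple of (P + 1)
raidz_alloc() {  # usage: raidz_alloc DATA_SECTORS NDISKS PARITY
  d=$1; n=$2; p=$3
  rows=$(( (d + n - p - 1) / (n - p) ))
  total=$(( d + rows * p ))
  echo $(( (total + p) / (p + 1) * (p + 1) ))
}

raidz_alloc 32 9 1   # 128k record on 9-wide raidz1: 36 sectors, ~89% efficient
raidz_alloc 1 9 1    # tiny 4k file on 9-wide raidz1: 2 sectors, 50% like a mirror
```

So for normal-sized records the overhead is a rounding error, while tiny files collapse to mirror-like efficiency, which is the parenthetical caveat above.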











