Memes! TrueNAS, ZFS, and related | (Share your own!)


Exactly this config then!

Are you fed up with all the preachy naysayers in forums?
Are you a rebel that was born free and is not afraid to take risks?
Do you find it sometimes annoying that you always know better than everybody else?

Then you are exactly in the right place!

Let’s go through some rules these nerds mumble in their stupid forums:

Follow hardware requirements

Pff, follow hardware requirements, don’t use RAID controllers, don’t do that, I hear them whining. To follow hardware requirements, I would have to read. Andrew Tate said that reading makes you stupid and is for losers.

Put VMs on mirrors not RAIDZ

Bro, you know I am the top G with flashy cars and all, but I don’t have that kind of money! Using mirrors? Only 50%? No way! I have a 40TB Jellyfin VM!

Don’t use block storage for files. Use datasets for your Jellyfin files

Nah, mounting a share in Linux is too much of a hassle. I would rather have my files stored in the VM.

By using block storage on RAIDZ, and block storage instead of datasets, you run into many issues. Read up on how volblocksize is a fixed value while recordsize is a maximum. Read up on pool geometry.

Bro, I already told you that reading is for losers and now you expect me to do pool geometry math? Hell no!

If you are too lazy to read up on it, just follow these simple rules:

- Don’t use block storage for files! Instead, use datasets for files.
- Separate your files from VM data. Your VM data is stored in zvols and block storage.
- For block storage, use NVMe mirrors (or at least SSDs).
- Don’t use RAIDZ1. Never use RAIDZ2 for block storage.
- Use RAIDZ2 only for large files that are sequentially read and written.
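The recordsize/volblocksize distinction behind these rules can be sketched with two `zfs create` invocations (pool and dataset names like `tank` are assumptions, and the sizes are illustrative, not recommendations):

```shell
# Dataset for media files: recordsize is a *maximum* -- a 5k file is
# stored as one small record, a large file as a series of 1M records.
zfs create -o recordsize=1M tank/media

# Sparse zvol for VM block storage: volblocksize is *fixed* at creation --
# every block of the zvol is exactly 16k, whatever the guest writes.
zfs create -s -V 100G -o volblocksize=16k tank/vm-disk
```

Because volblocksize cannot adapt per write the way recordsize does per file, it has to be matched to the pool geometry and guest workload up front.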

I am a rebel, rules don’t apply to me!

You won’t get the storage efficiency you think you will get with RAIDZ and block storage

Pfff idiot. I simply set the volblocksize to 64k. Problem solved.

64k volblocksize will lead to read and write amplification and fragmentation

That is a problem for future me. Right now, it works. If it becomes dogslow a few years from now or if my resilver takes forever, I will come back to the forum to whine about it. But mostly to shit on how bad ZFS is.
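The efficiency point above can be illustrated with back-of-the-envelope arithmetic. This is a sketch of the raidz allocation rule as commonly described for OpenZFS (parity added per stripe row, then the total padded to a multiple of nparity + 1); the widths and block sizes are illustrative assumptions:

```python
import math

def raidz_alloc(data_sectors: int, width: int, nparity: int) -> int:
    """Sectors allocated for one block on a raidz vdev: parity is added
    per stripe row, then the total is padded up to a multiple of
    (nparity + 1) so freed segments stay allocatable."""
    rows = math.ceil(data_sectors / (width - nparity))
    total = data_sectors + rows * nparity
    unit = nparity + 1
    return math.ceil(total / unit) * unit

# 64k volblocksize on ashift=12 (4k sectors) -> 16 data sectors.
# On an 8-wide raidz1 you might expect 7/8 = 87.5% efficiency...
alloc = raidz_alloc(16, width=8, nparity=1)
print(alloc, 16 / alloc)  # 20 sectors allocated, i.e. 80% -- and it
                          # gets worse with a smaller volblocksize
```

With a 16k volblocksize on the same vdev the math gives 4 data + 2 allocated overhead sectors, i.e. only about 67% usable, which is why "I simply set the volblocksize" doesn't buy the efficiency the naive width calculation promises.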

KISS! Keep it simple, stupid. For example, don’t use QCOW2 on ZFS. Just follow the defaults unless you really know what you’re doing.

I thought you nerds were so clever. Don’t you know about nested virtualization? I run a 40TB SMB share from Windows Server. That Windows Server is a VM on TrueNAS. TrueNAS itself is a VM on Proxmox. Don’t worry bro, I used passthrough for the disks.
If something goes wrong, I expect you nerds to go out of your way to troubleshoot my setup!

Pff, KISS! You can kiss my ass!

“ZFS rules don’t apply to me!”

I sometimes really get that feeling when discussing ZFS. But to be fair, it is pretty good in this forum here :kissing_heart:

@winnielinnie

Your Top G post is giving these same vibes.

make-your-truenas-dreams-come-true

I really like your last sentence :heart:

Don’t let grumpy TrueNAS veterans hold you back. You are an incredible human being! You can do anything! Follow your heart.


Lolz, why stop at 770? Real men like me only do chmod 777.

Today I have the feeling that I can’t keep “dressing” my avatar in such a casual way. Thanks Prakash and Dario for bringing this to my attention.


Come on guys, it’s important to maintain a certain decorum!

TBH, I thought you would suggest a fingerloan. Or some flight stuff.

Now I have a serious question (that perhaps doesn’t deserve a separate topic). I’m planning to use 9-wide RAIDZ1 in my offsite backup setup. Is this somehow a bad idea?

Close enough.

I will take it as ā€œNo. It’s not a bad ideaā€.

Well, it’s arguably a better plan than not having an offsite backup.
But 9-wide raidz1 is not very safe (if one drive goes bad, there’s eight drives worth of data left at risk without redundancy…), and since this is an offsite backup you may not be able to react quickly when something goes wrong.
So the serious answer is that 9-wide or 10-wide raidz2 would be even better.

Paranoids have their remote backup on raidz3 + spare(s).

Perhaps I should have asked it differently. My main consists of mirrored VDEVs. Also, my main is encrypted, and I’m planning to use raw replication (to an offsite RAIDZ). I’ve heard of some fancy issues with raidz and wasted space. Can raw replication from mirrors cause those?

Regarding the redundancy – I think I’m ok with raidz1 for offsite, as it is kinda unlikely I would ever need it.

This is kinda what I do also. Striped mirrors for the main pool and a RAIDZ1 backup, though my vdev is only 5-wide instead of 9-wide haha.

Due to padding, raidz takes slightly more space than mirrors to host the very same data, but the discrepancy is nothing to worry about. (Unless your data consists of billions of tiny files a few kB each.)
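That "nothing to worry about, except billions of tiny files" claim can be put in numbers. This reuses the commonly described raidz allocation rule (parity per stripe row, padded to a multiple of nparity + 1); the 10-wide raidz2 geometry and sizes are illustrative assumptions:

```python
import math

def raidz_alloc(data_sectors, width, nparity):
    # parity per stripe row, then pad to a multiple of (nparity + 1)
    rows = math.ceil(data_sectors / (width - nparity))
    total = data_sectors + rows * nparity
    unit = nparity + 1
    return math.ceil(total / unit) * unit

# 10-wide raidz2, ashift=12 (4k sectors); naive expectation: 80% usable
big = 1024 * 1024 // 4096                # one 1M record: 256 data sectors
print(big / raidz_alloc(big, 10, 2))     # ~0.798 -- barely below 80%
print(1 / raidz_alloc(1, 10, 2))         # ~0.333 -- a 1-sector file pays
                                         # full parity plus padding
```

A large record loses a fraction of a percent to padding, while a tiny file can burn two allocated sectors for every data sector, which is exactly why only the billions-of-tiny-files case is worth worrying about.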

D**n this forum and its toxic community!!1!1!!!1!11
