Can someone explain to me why an iSCSI volume allocates around 30% more space in the ZFS pool than it has storage assigned?
We have a 74 TiB ZVOL available. What I expect is that we can create an iSCSI volume of nearly the same size (when not paying attention to the 80% rule).
We actually want to keep performance, so we obey the 80% rule. We should be able to create an iSCSI volume of around 58 TiB, but TrueNAS says no: “Not enough space available.”
So we reduce the iSCSI size to around 52 TiB. The volume can now be created successfully, but it actually consumes 72 TiB on the ZVOL.
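To put rough numbers on it (just the figures from above):

```python
# Back-of-the-envelope: how much more space the volume consumes
# than we asked for. Values are the rough TiB figures from this post.
volsize = 52   # TiB requested for the iSCSI volume
used = 72      # TiB actually consumed in the pool

print(f"overhead: {used / volsize - 1:.0%}")  # ~38% more than requested
```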
I can't really comprehend why there is such a difference in size. Can someone please explain (for dummies)?
20 TiB of snapshots for a newly created iSCSI volume? That doesn't seem plausible to me. On the other hand, I don't know how snapshots work in TrueNAS. Will storage be pre-allocated for snapshots?
So if it is in fact the snapshot feature that is consuming so much storage, how can we completely disable it? We don't use this feature at the TrueNAS level - iSCSI is shared storage for vSphere/ESXi, and they manage snapshots on their own.
No, that’s not a zvol, that sounds a lot more like a vdev (or whole pool). But “12-wide instead of 24-wide” does not sound good. I’m not sure it would explain what you’re seeing, but there’s a chance it would - RAIDZ is completely inadequate for workloads with small blocks (e.g. iSCSI), partly because the space efficiency drops precipitously. Now, I’m not sure to what extent ZFS would account for that in your scenario…
In any case, let's make sure it's not a problem before it turns into one: what's the output of zpool status?
Yeah, I've also heard of that. Dunno, we've been using this scenario for quite some years now (since it was still called FreeNAS): iSCSI serving as a shared medium for multiple ESXi hosts. It's kind of working well, but I'm always open to “improvement suggestions”. I guess dRAID would not be an improvement, right? Just asking, because this is a new storage box that is not in production yet - ideal for some FAFOing around with the settings.
It's even worse in that regard. Instead of having the same parity for fewer data chunks, you get a ton of empty data chunks - in dRAID, there are no partial stripes like in RAIDZ.
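A rough sketch of that padding effect, assuming an illustrative draid2 geometry with 22 data sectors per stripe and 4 KiB sectors (your actual layout would differ, but the principle holds):

```python
import math

def draid_asize(data_sectors, d, p):
    """Sectors allocated for one block on a dRAID vdev.

    dRAID has no partial stripes: every allocation is rounded up to
    whole stripes of d data sectors plus p parity sectors, so a small
    block drags along a lot of empty data sectors.
    """
    stripes = math.ceil(data_sectors / d)
    return stripes * (d + p)

# A 16 KiB block on 4 KiB sectors is only 4 data sectors, but it
# still occupies a full stripe:
alloc = draid_asize(4, d=22, p=2)
print(alloc)               # 24 sectors allocated for 4 sectors of data
print(f"{4 / alloc:.1%}")  # ~16.7% space efficiency
```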
As for your pool, you have a single RAIDZ2 vdev that’s 24-wide. That’s twice what is typically recommended as the maximum, which only really works if you’re doing large files anyway. So any sort of block storage would be a painful experience.
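To illustrate why small blocks hurt so much (a sketch, assuming ashift=12, i.e. 4 KiB sectors, and a 16 KiB volblocksize - check your actual values), RAIDZ allocation works roughly like this:

```python
import math

def raidz_asize(data_sectors, width, nparity):
    """Sectors actually allocated for one block on a RAIDZ vdev.

    Roughly mirrors ZFS's RAIDZ size accounting: data sectors, plus
    nparity parity sectors per row of (width - nparity) data sectors,
    padded up to a multiple of (nparity + 1) so freed space remains
    allocatable.
    """
    parity = math.ceil(data_sectors / (width - nparity)) * nparity
    total = data_sectors + parity
    remainder = total % (nparity + 1)
    if remainder:
        total += (nparity + 1) - remainder
    return total

# 16 KiB block on 4 KiB sectors -> 4 data sectors, on 24-wide RAIDZ2:
small = raidz_asize(4, width=24, nparity=2)
print(small, f"{4 / small:.1%}")    # 6 sectors, ~66.7% space efficiency

# Compare a 128 KiB block (32 data sectors), as with large files:
big = raidz_asize(32, width=24, nparity=2)
print(big, f"{32 / big:.1%}")       # 36 sectors, ~88.9% space efficiency
```

That gap between ~67% and the ~92% you'd naively expect from 22 data disks out of 24 is roughly the kind of extra consumption being described above.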
The only realistic solution is to use mirrors, unfortunately.