iSCSI Volume consuming more space than volume size assigned

Hello Folks,

Can someone explain to me why an iSCSI volume allocates around 30% more space in the ZFS pool than the storage it has assigned?

We have a 74 TiB ZVOL available. What I expect is that we can create an iSCSI volume of nearly the same size (when not paying attention to the 80% rule).

We actually want to keep performance, so we obey the 80% rule. We should be able to create an iSCSI volume of around 59 TiB. But TrueNAS says no: "Not enough space available."

So we reduce the iSCSI size to around 52 TiB. The volume can now be created successfully, but it actually consumes 72 TiB on the ZVOL.

I can't really comprehend why there is such a difference in size. Can someone please explain (for dummies)?


20 TiB of snapshots for a newly created iSCSI volume? That doesn't seem plausible to me. On the other hand, I don't know how snapshots work in TrueNAS. Is storage pre-allocated for snapshots?

So if it is in fact the snapshot feature that is consuming so much storage, how can we disable it completely? We don't use this feature at the TrueNAS level - the iSCSI share is storage for vSphere/ESXi, and they manage snapshots on their own.

Ok, at this point I'm confused by your units - you're mixing a lot of TiB and GiB. Can you double-check your post and make sure the units are correct?

We're in a hurry. The numbers are correct; the units were mixed up. I can't edit the start post - the units should all be TiB.

Ok, that makes at least some sense. I’ve also fixed the post for you.

To figure out exactly what’s going on, we’ll need the output of zfs list -r -o space, at least for the dataset holding the zvols.

Just redoing these steps again to validate:

  • ZVOL has 74.37 TiB in total.
  • I want to create an iSCSI block device consuming the maximum advisable size on the ZVOL, so I calculate: 74.37 × 0.8 = 59.496.
  • I create the iSCSI block device with the iSCSI wizard and assign 59 TiB as the size.
  • Upon finishing the wizard, an error is thrown: “[EFAULT] Failed to create dataset: cannot create ‘z2-24/storage3’: out of space”
  • I reduce the iSCSI size to 52 TiB to circumvent the error message; the iSCSI volume is now created successfully.
  • The Storage Dashboard now shows me a red, 94.3% full warning for the ZVOL.
  • Used: 70.15 TiB (despite the iSCSI block device being only 52 TiB)
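Put into code, the arithmetic from the steps above works out like this (a quick sketch; sizes in TiB, taken from the numbers listed):

```python
# Sketch of the space math from the steps above (sizes in TiB).
pool_size = 74.37               # total capacity reported for z2-24
advisable = pool_size * 0.8     # 80% rule -> ~59.5 TiB

assigned = 52.0                 # size finally given to the iSCSI zvol
used = 70.15                    # what the pool reports as used

print(f"advisable maximum: {advisable:.1f} TiB")     # 59.5
print(f"overhead factor:   {used / assigned:.2f}x")  # 1.35x, i.e. ~35% on top
```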

Now for your requested output:

root@truenas[~]# zfs list -r -o space
NAME                                                    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
boot-pool                                                412G  2.36G        0B     96K             0B      2.36G
boot-pool/ROOT                                           412G  2.35G        0B     96K             0B      2.35G
boot-pool/ROOT/23.10.2                                   412G  2.35G     8.34M   2.34G             0B         0B
boot-pool/ROOT/Initial-Install                           412G     8K        0B      8K             0B         0B
boot-pool/grub                                           412G  8.20M        0B   8.20M             0B         0B
z2-24                                                   4.22T  70.1T        0B    256K             0B      70.1T
z2-24/.system                                           4.22T  94.2M        0B    320K             0B      93.9M
z2-24/.system/configs-ae32c386e13840b2bf9c0083275e7941  4.22T   352K        0B    352K             0B         0B
z2-24/.system/cores                                     1024M   256K        0B    256K             0B         0B
z2-24/.system/ctdb_shared_vol                           4.22T   256K        0B    256K             0B         0B
z2-24/.system/glusterd                                  4.22T   277K        0B    277K             0B         0B
z2-24/.system/netdata-ae32c386e13840b2bf9c0083275e7941  4.22T  91.4M        0B   91.4M             0B         0B
z2-24/.system/rrd-ae32c386e13840b2bf9c0083275e7941      4.22T   256K        0B    256K             0B         0B
z2-24/.system/samba4                                    4.22T   651K        0B    651K             0B         0B
z2-24/.system/services                                  4.22T   256K        0B    256K             0B         0B
z2-24/.system/webui                                     4.22T   256K        0B    256K             0B         0B
z2-24/storage3                                          74.4T  70.1T        0B    149K          70.1T         0B

What is causing this difference of 18.1 TiB?

OK, that's bizarre. The reservation makes sense, except for the part where it's larger than it should be. Is this Core or SCALE?

OS Version: TrueNAS-SCALE-23.10.2

Product: Super Server

Model: AMD EPYC 9124 16-Core Processor

RAM: 755 GiB

So, what if you create smaller zvols? Are they proportionately larger than you asked for?

Smaller zvols? Like only doing a 12-wide instead of 24?
Or a smaller iSCSI volume?

No, that's not a zvol - that sounds a lot more like a vdev (or the whole pool). But "12-wide instead of 24-wide" does not sound good. I'm not sure it would explain what you're seeing, but there's a chance it would: RAIDZ is completely inadequate for workloads with small blocks (e.g. iSCSI), partly because the space efficiency drops precipitously. Now, I'm not sure to what extent ZFS would account for that in your scenario…
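For context on that efficiency cliff: RAIDZ allocates each block as data sectors plus per-stripe parity, then pads the total up to a multiple of (parity + 1). A rough sketch of that rule (my own helper, assuming ashift=12, i.e. 4 KiB sectors; an approximation, not the exact ZFS code):

```python
import math

def raidz_alloc_sectors(block_bytes, width, parity, ashift=12):
    """Approximate sectors RAIDZ allocates for one logical block.

    Each stripe of up to (width - parity) data sectors carries `parity`
    parity sectors, and the total is padded up to a multiple of
    (parity + 1) so freed segments stay allocatable.
    """
    sector = 1 << ashift
    data = math.ceil(block_bytes / sector)
    stripes = math.ceil(data / (width - parity))
    total = data + stripes * parity
    return math.ceil(total / (parity + 1)) * (parity + 1)

# A 16 KiB volblock on 24-wide RAIDZ2: 4 data sectors end up costing 6.
alloc = raidz_alloc_sectors(16 * 1024, width=24, parity=2)
print(alloc)      # 6
print(4 / alloc)  # ~0.67 -> only two thirds of the allocation is data
```

The thick zvol's refreservation is computed against this kind of worst-case inflation, which is why it can come out much larger than the volsize itself.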

In any case, let's make sure it's not a problem before it turns into one. What's the output of zpool status?

Yeah, I've also heard of that. Dunno, we've been using this scenario for quite a few years now (back when it was still called FreeNAS), with iSCSI serving as shared storage for multiple ESXi hosts. It's kind of working well, but I'm always open to "improvement suggestions". I guess dRAID would not be an improvement, right? Just asking, because this is a new storage system that is not in production yet - ideal for some FAFOing around with the settings.

root@truenas[~]# zpool status
  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:04 with 0 errors on Wed Apr 24 03:45:06 2024

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors

  pool: z2-24
 state: ONLINE

        NAME                                      STATE     READ WRITE CKSUM
        z2-24                                     ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            417c1a2a-3b1d-42e5-86c0-7350e2a08dd6  ONLINE       0     0     0
            d428da5f-ca84-440e-a05a-71210ccae649  ONLINE       0     0     0
            1211930a-0344-4ad2-88d5-c6eccf8d8e93  ONLINE       0     0     0
            1d4d9cca-429e-4600-a57e-baa4315a7ad7  ONLINE       0     0     0
            7da14fee-e699-4e7b-869c-bd0bf72c12ff  ONLINE       0     0     0
            bd505b64-f450-4e2e-b2e9-7bd7b4ac27af  ONLINE       0     0     0
            d1575f20-5a77-4da7-936e-e071b2c908be  ONLINE       0     0     0
            2d09fd1a-cb6f-4c19-ae25-e95847018c38  ONLINE       0     0     0
            5ed8153f-b785-4db7-87cb-9a22151dc402  ONLINE       0     0     0
            c6cb0b0b-67d3-4eef-a8e5-63d93045f908  ONLINE       0     0     0
            db6775cc-9b0f-43d9-aa46-be1ea04fcfbe  ONLINE       0     0     0
            82d46a92-9cf2-4f0a-b3df-bae866e87473  ONLINE       0     0     0
            cd16a9ff-efa5-4f80-9278-2299d84ce56d  ONLINE       0     0     0
            fb291b01-0412-463a-bfe2-ec90f70ad8a2  ONLINE       0     0     0
            5325657a-00af-42ac-9aff-3e10617849a0  ONLINE       0     0     0
            65a0091a-0028-4f49-a93c-7a82a341960f  ONLINE       0     0     0
            a93d8d89-a408-4e56-8a50-20dff49422af  ONLINE       0     0     0
            86a6b062-8530-4960-bf95-19d7a3ee71ac  ONLINE       0     0     0
            b6420ae9-f440-43cb-b0f8-f91180846513  ONLINE       0     0     0
            43436072-782b-4ab1-8b50-ffe7edfed2c3  ONLINE       0     0     0
            7a21d09b-4fdf-4d2d-94f2-b5d5bdbfba85  ONLINE       0     0     0
            927aab4f-7039-466b-8a14-40d395b95252  ONLINE       0     0     0
            8dd6bd9a-552d-444b-8045-7ed2a824140d  ONLINE       0     0     0
            f3c9a1d2-11af-479f-92da-d572669638ad  ONLINE       0     0     0

errors: No known data errors

It's even worse in that regard. Instead of having the same parity for fewer data chunks, you get a ton of empty data chunks - in dRAID, there are no partial stripes like in RAIDZ.

As for your pool, you have a single RAIDZ2 vdev that's 24-wide. That's twice what is typically recommended as the maximum, and it only really works if you're storing large files anyway, so any sort of block storage would be a painful experience.
The only realistic solution, unfortunately, is to use mirrors.
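To put rough numbers on that (same padding rule RAIDZ applies, 4 KiB sectors assumed; a sketch, not the exact ZFS accounting): small blocks get poor efficiency at any RAIDZ width, wide vdevs only pay off for large blocks, and a mirror is a flat but predictable 50%.

```python
import math

def raidz_efficiency(block_kib, width, parity, sector_kib=4):
    """Fraction of allocated space that is user data for one block."""
    data = math.ceil(block_kib / sector_kib)
    stripes = math.ceil(data / (width - parity))
    total = data + stripes * parity
    total = math.ceil(total / (parity + 1)) * (parity + 1)  # padding
    return data / total

# Small (16 KiB) blocks: RAIDZ2 gives ~67% no matter how wide the vdev,
# nowhere near the nominal 22/24 = ~92% you might expect from 24-wide.
print(round(raidz_efficiency(16, width=24, parity=2), 2))    # 0.67
print(round(raidz_efficiency(16, width=6,  parity=2), 2))    # 0.67

# Large (1 MiB) blocks are where a wide vdev actually earns its keep.
print(round(raidz_efficiency(1024, width=24, parity=2), 2))  # 0.91

# A 2-way mirror is a flat 0.5 - worse on paper, but predictable and
# far better suited to small-block iSCSI workloads.
```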