Where's my space?

Hey everyone,

I’m building a crazy media server for home use. I splurged on 22 TB WD drives. I have a single vdev, 24 wide, raidz3 (I don’t need crazy speed, space is more important, hence the single vdev).

Anyways, 22 TB drives ≈ 20 TiB each. 24 drives - 3 for parity = 21 data drives. 21 * 20 TiB = 420 TiB.
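Here’s that same back-of-the-envelope math in Python, in case I’m fooling myself somewhere (just the decimal-to-binary conversion and the parity subtraction, no ZFS-specific overhead):

```python
# Naive capacity estimate: TB -> TiB conversion plus parity subtraction.
# Ignores all ZFS overhead (padding, metadata, slop space).
DRIVE_TB = 22        # drives are sold in decimal terabytes
DRIVES = 24
PARITY = 3           # raidz3

drive_tib = DRIVE_TB * 10**12 / 2**40    # ~20.0 TiB per drive
data_drives = DRIVES - PARITY            # 21 data drives
print(f"{drive_tib:.1f} TiB/drive * {data_drives} data drives "
      f"= {drive_tib * data_drives:.1f} TiB expected")
# prints roughly 420 TiB
```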

However, when I create the pool I only end up with 383.45 TiB.

What am I overlooking? Thanks!

Holy crap, don’t do that. Ever. You won’t get “bad” performance, you’ll get “unusable” performance.

420 TB * 0.909 TiB/TB = 380ish TiB

Everything checks out. Don’t forget that disks are sold in different units than those used by all relevant OSes to report disk sizes.

Edit: see @dan’s post below. I missed one of the steps while reading your post, and since my math made sense, it slipped through.

4 Likes

You likely want two 12-wide[1] RAIDZ3 VDEVs: that will bring down your total usable space[2] to around 315 TB[3].

Suggested readings are Introduction to ZFS | TrueNAS Community, iX's ZFS Pool Layout White Paper and Assessing the Potential for Data Loss | TrueNAS Community.


  1. The recommended maximum width for a single VDEV.

  2. usable space = total space minus the fraction you keep free (the usual guidance is 20% free; I personally plan for 15%); see the rough sketch below.

  3. I used ZFS Capacity Calculator - WintelGuy.com for the calculations.
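For reference, the rough arithmetic behind footnote 2 in Python (a naive sketch: data drives times drive size, minus the fraction you keep free; it ignores RAIDZ padding, slop space and metadata, so a proper calculator like WintelGuy’s lands a bit lower):

```python
def rough_usable_tb(drive_tb, vdevs, width, parity, free_frac=0.15):
    """Naive usable-space estimate: raw data capacity minus the fraction
    kept free. Ignores RAIDZ padding, slop space and metadata."""
    raw_tb = vdevs * (width - parity) * drive_tb
    return raw_tb * (1 - free_frac)

# Two 12-wide RAIDZ3 vdevs of 22 TB drives
print(rough_usable_tb(22, vdevs=2, width=12, parity=3, free_frac=0.15))  # ~337 TB
print(rough_usable_tb(22, vdevs=2, width=12, parity=3, free_frac=0.20))  # ~317 TB
```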

2 Likes

OP had done the (rough) TB/TiB conversion already; the expected 420 TiB figure already more or less accounts for that. What other effect the pathologically bad pool design might have, I don’t know.

2 Likes

OTOH, it would be an educational / illustrative data point to see how long a pool resilver that wide and large could take once the pool was reasonably filled (50-80%). Talk about edge case.

I also wonder just how bad system performance in general would be, given the number of drives in the VDEV. It might be OK initially when the pool is completely empty, but I imagine things will start to bind up quickly as the pool fills.

Two 12-wide Z3 VDEVs is as wide as one would want to go with a WORM array.

2 Likes

Would that not be a great use case for dRAID?

Anyway, besides the TB/TiB conversion, there is also a padding and pool-geometry penalty that depends on the data (rough sketch below).

@NASfan are you using block storage? How big are your files?
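To put a rough number on the padding/geometry penalty, here’s a sketch in Python of the commonly cited RAIDZ allocation rule (parity sectors added per stripe row, then the allocation padded to a multiple of parity+1 sectors). It assumes ashift=12 (4 KiB sectors) and ignores compression and metadata, so treat it as an estimate only:

```python
import math

def raidz_alloc_sectors(record_bytes, width, parity, ashift=12):
    """Rough RAIDZ allocation estimate for a single block (sketch only).
    Parity sectors are added per stripe row, then the whole allocation
    is padded up to a multiple of (parity + 1) sectors."""
    sector = 1 << ashift                        # 4 KiB sectors at ashift=12
    data = math.ceil(record_bytes / sector)     # data sectors in this block
    rows = math.ceil(data / (width - parity))   # stripe rows across the vdev
    total = data + rows * parity                # add parity sectors
    pad_to = parity + 1
    return math.ceil(total / pad_to) * pad_to   # pad to a multiple of parity+1

# 24-wide RAIDZ3, default 128 KiB records vs. 1 MiB records
for record in (128 * 1024, 1024 * 1024):
    data = record // 4096
    alloc = raidz_alloc_sectors(record, width=24, parity=3)
    print(f"{record // 1024:>4} KiB record: {alloc} sectors allocated for "
          f"{data} data sectors ({alloc / data:.2f}x)")
# 128 KiB records come out around 1.25x, 1 MiB records around 1.16x,
# against an "ideal" 24/21 = 1.14x, which is why the data itself matters.
```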

That’s what I’d expect: when there’s lots of free space, fragmentation isn’t a problem, but as the pool fills it would become a very big problem indeed.

1 Like

Given the OEM, I’d double- and triple-check that you didn’t get a set of SMR drives. As in: look up the exact model number of your drive, then look up the spec sheet, then verify it isn’t an SMR drive.

WD has lost all my trust after they tried quietly passing off SMR drives in their WD Red ‘NAS’ line despite knowing that SMR drives are unsuitable for NAS use.

Always worth checking, but I’d understood SMR wasn’t a factor for drives larger than 8-10 TB. But regardless, that wouldn’t result in a 10% loss of capacity.

1 Like

“Great” is a strong word. I hesitate to use it with dRAID, since it’s more complicated than “RAIDZ, but faster resilvering”.

What version of TrueNAS are you using? I seem to recall recent work on the estimation of available space; it’s not impossible that it’s accounting for partial stripes due to the width of the vdev. The 128k default max block size at 4k shift would imply 32 to be the maximum non-parity width of the vdev before wasted space is guaranteed… but if something is accounting for even 64k blocks, that drops down to 16-wide, meaning that you get 19-wide stripes (16+3 parity) instead of 24-wide (21+3 parity). In turn, that means ~16% parity instead of ~12.5% parity.
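Back-of-the-envelope version of that, in Python (just the stripe-width arithmetic, assuming 4 KiB sectors, i.e. ashift=12):

```python
VDEV_WIDTH, PARITY = 24, 3                # the 24-wide RAIDZ3 in question

for block_kib in (128, 64):
    data_sectors = block_kib // 4                         # 4 KiB sectors per block
    data_width = min(data_sectors, VDEV_WIDTH - PARITY)   # data drives a block can span
    stripe = data_width + PARITY
    print(f"{block_kib} KiB blocks: {data_width} data + {PARITY} parity "
          f"= {stripe}-wide stripes, {PARITY / stripe:.1%} parity")
# 128 KiB blocks -> 21+3 = 24-wide stripes, 12.5% parity
# 64 KiB blocks  -> 16+3 = 19-wide stripes, 15.8% parity
```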

That’s not the 10ish percent you’re seeing, but I think this illustrates what might be going on.

Sidenote: space accounting for an empty 12-wide RAIDZ2 pool lines up with the fairly naïve estimate, IIRC. At least it never surprised me to the point of looking into it…

1 Like

From a user perspective or technological?

The fact that it is fairly new would worry me, but from a user perspective it seems pretty great in my opinion. In a way I think it is even simpler than RAIDZ, because the storage-efficiency trade-off is much more visible up front.

I agree re: loss of capacity; I disagree re: not being extremely paranoid about WD’s truthfulness. I buy He10s used because I know what I will get. With WD’s more recent offerings that is less likely to be the case, especially if a helium drive is desired.

It’s like the old joke about whomever and a sheep: it only takes one “gotcha!” to be forever associated with a particular turn of events. Especially when it was as intentional and unapologetic as WD has been.

Well, both.

Honestly, it’s best to just grab the datasheets and buy a specific model. No surprises expected there.

Depending on the reseller, it can be very difficult to know what you’re getting. Many resellers will not list anything beyond the first couple of letters of a SKU. That allows them to keep the same listing while WD and the other two OEMs cycle through generations of drives.

Goharddrive is one of the few resellers that consistently lists the full SKU numbers (and prices them accordingly).

Thanks for all the GREAT replies!!

Just to provide some clarity: I’m using CMR WD Red Pro drives.

Y’all gave me a lot to think about. I rebuilt the array with dRAID2: 10 DATA disks, 12 CHILDREN, 2 VDEVs, 0 hot spares.

I’m getting a crash course in dRAID since I didn’t know it existed until a few minutes ago. If I’m correct, the VDEVs have reserved enough space to hold 2 parity copies across the VDEV? So I can lose 2 drives in each VDEV without losing data? 4 drives total if I luck out and it’s spread across the VDEVs? Is that correct?

If I buy an extra drive can I later add it as a distributed hot spare?

Back to my pool: I’m again shocked by how much space I’ve lost in overhead. A dRAID2 with 2 VDEVs gave me 320 TiB usable space.
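For reference, here’s how I’m reading the layout and what I’d naively expect, in Python (the draid<parity>:<data>d:<children>c:<spares>s notation is from the OpenZFS docs; this is just my own bookkeeping, not anything ZFS actually reports):

```python
def draid_data_fraction(parity, data):
    """Raw data fraction of a dRAID redundancy group: data / (data + parity).
    Ignores dRAID's internal padding, distributed-spare capacity and metadata."""
    return data / (data + parity)

# my layout, per vdev: draid2 with 10 data disks, 12 children, 0 spares
frac = draid_data_fraction(parity=2, data=10)
raw_tib = 12 * 22e12 / 2**40                 # 12 x 22 TB children, in TiB
print(f"{frac:.1%} of {raw_tib:.0f} TiB raw = {frac * raw_tib:.0f} TiB per vdev")
# ~83% of ~240 TiB, i.e. ~200 TiB per vdev (~400 TiB total) before whatever
# overhead brings the reported number down to the 320 TiB I'm seeing
```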

If I moved to a single VDEV with dRAID2 (or perhaps even dRAID3), would this be a better configuration than a single wide RAIDZ2/3? The main complaint is rebuild speed, which is addressed by dRAID, right? Or am I still missing something…

Also, someone asked me about my average file size: these are MKVs, so multiple gigs in size. There are subtitles too, which are a few hundred KB each, and there are usually 8 of those files per MKV.

(Sorry for the wall of text, I haven’t figured out how to quote individual messages in this forum system!)

Thanks again everyone. Open to more feedback!

A general question… is there any point in using dRAID with no spares?

My understanding is that the redundancy is built into the VDEVs. I don’t usually use hot spares in my VDEVs anyways, so it’s no different from before. The major advantage of dRAID from my 20 minutes of research is that the entire VDEV can be used to rebuild vs just the few parity drives. (Obviously please correct me if I’m wrong)

Edit: Updated language to be more consistent with ZFS

Not if you don’t have hot spares… At that point it’s just a worse RAIDZ.

3 Likes

I’ve been under the impression that dRAID is just not something you should consider until you hit a very large number of disks; the TrueNAS dRAID primer mentions (>100) attached disks.

I have not used it myself so I’m unsure. Just going by the documentation.

1 Like

I picked dRAID2, so it should be the same as RAIDZ2. RAIDZ doesn’t have hot spares by default either, right? Am I missing something? The default number of distributed spares is 0 when building a dRAID pool.