Why is my capacity so far off (much higher than expected)?

[Screenshots: reported TrueNAS version and pool space/capacity]

1 Like

I take it you're on SCALE 24.10 RC1 (or RC2)?

Is there any chance you can safely boot back into a previous Boot Environment, which has an older version of SCALE and uses ZFS 2.2.x, such as SCALE 24.04?[1]

I realize this is inconvenient, but it might rule out "new math" being used by OpenZFS 2.3.[2]

You don't have to do this if you don't want to.
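If you do roll back, a quick way to compare what ZFS itself reports against the GUI dashboard is to run something like this from the shell (a sketch; the pool name tank is a placeholder for yours):

    zfs version                                   # confirm which OpenZFS release is actually running
    zpool list -o name,size,alloc,free,cap tank   # pool-level view; SIZE here includes parity (the "raw" figure)
    zfs list -o name,used,avail tank              # dataset-level view; AVAIL is the usable space after parity

If those numbers look right while the dashboard does not, that would point at the GUI rather than ZFS.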


  1. :warning: This assumes you never "upgraded" your pool! ↩︎

  2. OpenZFS 2.2.99 is actually 2.3-RC ↩︎

I'm thinking there is something wrong in TrueNAS. It looks like TrueNAS is reporting the Raw storage capacity instead of the Usable storage capacity.

Please post the output of lsblk in code ("</>") tags.

This data was already requested but not yet provided.
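For reference, something along these lines from the TrueNAS shell should do it (the extra columns are optional):

    lsblk -o NAME,SIZE,TYPE,MODEL,MOUNTPOINT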

Version:

ElectricEel-24.10-RC.2

I can install whatever we want. I have no problem wiping and re-installing; that's what testing is all about. I won't have any time this weekend, but I'll get to it eventually (maybe tonight, actually).

Do we want 24.04? If so, that's what I'll put on. Just let me know.

This is a Dell R510 with 12 × 2 TB disks and an LSI controller flashed to IT mode, if I remember correctly.

SCALE 24.04 + OpenZFS 2.2.x, which can rule out OpenZFS 2.3 and possibly even something (new) about the GUI dashboard with SCALE 24.10.

If the numbers on the command line show the correct capacity under SCALE 24.04, then it could in fact be how OpenZFS 2.3 (incorrectly) "calculates" the pool's capacity. (It'd be interesting to see the GUI/dashboard reflect this as well.)

I'm also assuming you constructed your pool with all four RAIDZ1 vdevs at the time of creation.
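For reference, a rough back-of-the-envelope for 12 × 2 TB disks laid out as four 3-wide RAIDZ1 vdevs (assuming that layout):

    # Raw:    12 disks x 2 TB = 24 TB  ≈ 21.8 TiB
    # Layout: each 3-wide RAIDZ1 vdev = 2 data disks + 1 parity disk
    # Usable: 4 vdevs x 2 data disks x 2 TB = 16 TB ≈ 14.5 TiB (before metadata and slop space)

So a dashboard figure in the ~21-22 TiB range would be the raw capacity, while roughly 14.5 TiB is what you would expect to see as usable.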

1 Like

Is 24.04.2.3 acceptable?

Yes, all 4 vdevs were RAIDZ1.

Yes.

I love apple-flavored glue sticks. (Minimum character limit needed to reply.)

Yeah, I've been waiting since post 12!

2 Likes

Disks and partitions appear fine.

1 Like

Thanks for posting the results of that command. I'm in agreement with the fine folks here, whatever they seem to think; they know what they are talking about. I think it is TrueNAS barfing up the wrong data.

He does; he orders them from Amazon on a subscription basis so he never goes without.

It will be interesting to see if the ZFS version is causing the issue.

@Roveer, we all learn from others' misfortunes, unfortunately, and if I see this problem come up again in the near future, we have your experiences to help the next person out.

1 Like

So I did some reinstalling: first 24.04, then 24.10-RC2. As you will see from the screens below, both are showing 14.4 TiB available, so something must have gone horribly wrong in my previous install. As a matter of fact, I was having trouble installing (right after you specify the password and it starts to build the OS, it would say the installation failed and return to the install menu). I replaced my 32 GB SSD with a different drive and everything worked after that. Maybe that had something to do with it. Either way, I'm back to where I expected to be.

You should take a look at this: [NAS-131728] - iXsystems TrueNAS Jira

And this topic: 24.10 RC2 Raidz expansion caused miscalculated available storage

While I'm glad it's now displaying the correct capacity, I feel that we lost the chance to test this issue more objectively.

From what I can tell, you re-did your entire pool from scratch (evident from the 0% used space).

What would have been interesting is reverting to 24.04, without changing anything else, and seeing how it displays your pool's total capacity. (The same pool, not a new one or one from which you deleted all the existing data.)

1 Like

I'm going to reload RC2 when I have time. I think I might be able to reproduce it. Would setting up the pool initially with 3 vdevs and then adding the 4th be considered an expansion? I think I may have done that on the initial setup that produced the bad result.

That's not "RAIDZ expansion", which was introduced with OpenZFS 2.3 (SCALE 24.10). That's just adding a new vdev to your pool to increase its total capacity.

However, it might also have been why your pool reported the wrong capacity. (Due to a different underlying reason/bug.)


Your issue was different, in that it added (or "presented") the "raw" capacity as the "usable" capacity, making you believe you have more pool storage than you actually do.

The other problem (regarding RAIDZ expansion) is different. It under-represents the pool's capacity (and available space).
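To make the distinction concrete, the two operations look quite different at the command line (a sketch; the pool name tank and the device names are placeholders):

    # Adding another 3-disk RAIDZ1 vdev to the pool (what was described here; supported long before 2.3):
    zpool add tank raidz1 sdj sdk sdl

    # RAIDZ expansion, i.e. widening an existing RAIDZ vdev by one disk (new in OpenZFS 2.3 / SCALE 24.10):
    zpool attach tank raidz1-0 sdm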

If the pool was created in 24.10, I don't think that would've been possible.

Seems like it works if made in 24.04. Does it also work if made in 24.10?

Does OP recall the exact order of operations, etc., when creating the original pool?

Ideally a repro would be great :slight_smile:

Mmmh, here is another thread of someone reporting unexpected file sizes on 24.10. I wonder if this is related?

A bit premature to say.

1 Like

**** MAKE SURE YOU READ BELOW. THOUGHT THIS WAS A CLOSED ISSUE, BUT NOT SO FAST. ****

So I tried reinstalling RC2 a few times, trying to create the pool in any possible way I could remember doing the first time. I even dropped back to RC1, which I think might have been what I had initially installed. Actually, I bounced between CORE and SCALE a few times, but I thought I did a Windows diskpart "clean" on each of the disks in the array each time I reinstalled. I didn't know how to export/disconnect until recently, so that was my preferred method of killing the ZFS pool.

I also seem to have some sort of problem with the 32 GB SSD I was using as the boot drive. I remember seeing some GRUB encryption errors on boot, and when I most recently started reloading RC2 it wouldn't allow me to install the software. I swapped it with another drive and was able to do my installs. I guess it's possible that some sort of failure on the 32 GB SSD could have caused my reporting issue. I haven't even looked at the drive to see what's wrong.
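In hindsight, a cleaner way to retire the old pool between reinstalls (instead of diskpart from Windows) would have been to export it and clear the ZFS labels from the shell, and I could still check the suspect SSD with SMART. A sketch, with the pool name tank and the device names as placeholders:

    smartctl -a /dev/sdX              # health and error log of the suspect 32 GB boot SSD
    zpool export tank                 # cleanly detach the pool before reinstalling
    zpool labelclear -f /dev/sdY1     # wipe the ZFS label from each former member partition (repeat per disk)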

So every subsequent reinstall, pool add, etc. has been reporting the correct disk space. I guess we can close this discussion. Thanks for all your help and insights. It's really great that there is a thriving community standing behind such great software.

**** HOLD ALL OF MY LAST COMMENTS!!! I TYPICALLY "LOAD UP" THE NAS WITH DATA AND ALL OF A SUDDEN I'M GETTING THIS: ****

I'm going to do some more investigating. Looking back at my previous screenshots, that was a fresh build, before putting any data onto the NAS. What I've been doing to load up the NAS is to use Windows File Explorer to copy about 670 GB from my PC (a bunch of RAR files). Then I use File Explorer to copy that from the NAS back to the NAS into separate directories, each time selecting all the previous directories and placing them into a new directory. Is there something in ZFS that doesn't actually use disk space while doing this sort of copy?

I also notice that the initial 670 GB copy from PC to NAS transfers at 600-800 MB/s (4 × 2.5 GbE on the PC, 10 GbE on the NAS, Multichannel turned on). But when I copy from NAS to NAS I go from 3 GB/s to 5 GB/s. The 10 GbE NIC is a Mellanox ConnectX-2 (I believe). I was assuming it was possibly using RDMA and that was why I was seeing such quick transfer speeds, with the data never going over the wire; correct me here.

But I was just loading 3 TB of data to basically "fill" the NAS when I noticed the crazy numbers again. Even if it's noticing that it's duplicate data, that shouldn't be messing with the capacity numbers. Let me know if you want any commands run or anything inspected.
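Could it be block cloning (server-side copy), which I understand OpenZFS 2.2 and later support, where duplicate copies made within the pool share the same blocks instead of consuming new space? If it helps, I can run something like this (assuming the pool is named tank):

    zpool get bcloneused,bclonesaved,bcloneratio tank     # pool-wide block-cloning stats (OpenZFS 2.2+)
    zfs list -o name,used,avail,refer,logicalused tank    # logical vs. physical space accounting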

2 Likes