I take it you're on SCALE 24.10 RC1 (or RC2)?
Is there any chance you can safely boot back into a previous Boot Environment, which has an older version of SCALE and uses ZFS 2.2.x, such as SCALE 24.04?[1]
I realize this is inconvenient, but it might rule out "new math" being used by OpenZFS 2.3.[2]
You don't have to do this if you don't want to.
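If you want to see which Boot Environments are available before rebooting, something like this from the shell should list them (a sketch, assuming SCALE's default boot pool name of boot-pool):

    zfs list -r boot-pool/ROOT

Each child dataset under boot-pool/ROOT is a Boot Environment you can activate from System Settings > Boot in the GUI.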
I'm thinking there is something wrong in TrueNAS. It looks like TrueNAS is reporting the raw storage capacity rather than the usable storage capacity.
Please post the output of lsblk in code ("</>") tags.
This data was already requested but not yet provided.
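For comparison, it may also help to capture the raw and usable numbers side by side; something like this, where yourpool is a placeholder for the actual pool name:

    lsblk
    zpool list yourpool    # SIZE here is raw capacity (parity included for RAIDZ)
    zfs list yourpool      # USED + AVAIL here is usable capacity (after parity)

If the dashboard number matches zpool list rather than zfs list, that would support the raw-vs-usable theory.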
Version:
ElectricEel-24.10-RC.2
I can install whatever we want. I have no problem wiping and reinstalling; that's what testing is all about. I won't have any time this weekend, but I'll get to it eventually (maybe tonight, actually).
Do we want 24.04? If so, that's what I'll put on. Just let me know.
This is a Dell R510 with 12 2 TB disks, with an LSI controller flashed to IT mode if I remember correctly.
SCALE 24.04 + OpenZFS 2.2.x, which can rule out OpenZFS 2.3 and possibly even something (new) about the GUI dashboard in SCALE 24.10.
If the numbers in the command line accurately display the correct capacity under SCALE 24.04, then it could in fact be how OpenZFS 2.3 (incorrectly) "calculates" the pool's capacity. (It'd be interesting to see the GUI/dashboard reflect this as well.)
I'm also assuming you constructed your pool with all four RAIDZ1 vdevs at the time of creation.
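If so, a quick back-of-the-envelope sanity check (assuming 4 x 3-wide RAIDZ1 vdevs of 2 TB disks):

    raw:    12 disks x 2 TB          = 24 TB (~21.8 TiB)
    usable: 4 vdevs x (3 - 1) x 2 TB = 16 TB (~14.5 TiB, slightly less after overhead)

So the dashboard should be reporting something near 14.4 TiB; anything around 21.8 TiB would be the raw number.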
Is 24.04.2.3 acceptable?
Yes, all 4 vdevs were RAIDZ1.
Yes.
I love apple-flavored glue sticks. (Minimum character limit needed to reply.)
Yeah, I've been waiting since post 12!
Disks and partitions appear fine.
Thanks for posting the results of that command, and I'm in agreement with the fine folks here, whatever they seem to think. They know what they are talking about. I think it is TrueNAS barfing up the wrong data.
He does; he orders them from Amazon on a subscription basis so he never goes without.
It will be interesting to see if the ZFS version is causing the issue.
@Roveer, we all learn from others' misfortunes, unfortunately, and if I see this problem come up again in the near future, we'll have your experience to help the next person out.
So I did some reinstalling: first 24.04, then I reinstalled 24.10 RC2. As you will see from the screens below, both are showing 14.4 TiB available. So something must have gone horribly wrong in my previous install. As a matter of fact, I was having trouble installing (right after you specify the password and it starts to build the OS, it would say the installation failed and return to the install menu). I replaced my 32 GB SSD with a different drive and everything worked after that. Maybe that had something to do with it. Either way, I'm back to where I expected to be.
You should take a look at this: [NAS-131728] - iXsystems TrueNAS Jira
And this topic: 24.10 RC2 Raidz expansion caused miscalculated available storage
While I'm glad it's now displaying the correct capacity, I feel that we lost the chance to test this issue more objectively.
From what I can tell, you re-did your entire pool from scratch (evidenced by the 0% used space).
What would have been interesting is reverting to 24.04, without changing anything else, and seeing how it displays your pool's total capacity. (The same pool; not a new one, or one from which you deleted all the existing data.)
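For an objective comparison, the parseable command-line outputs would have been ideal to capture under both versions (yourpool is a placeholder):

    zpool list -p yourpool
    zfs list -p yourpool

The -p flag prints exact byte counts, so any difference between 24.04 and 24.10 would show up unambiguously instead of being rounded by the GUI.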
I'm going to reload RC2 when I have time. I think I might be able to reproduce it. Would setting up the pool initially with 3 vdevs and then adding the 4th be considered an expansion? I think I may have done that on the initial setup that produced the bad result.
That's not "RAIDZ expansion", which was introduced with OpenZFS 2.3 (SCALE 24.10). That's just adding a new vdev to your pool to increase its total capacity.
However, it might also have been why your pool reported the wrong capacity. (Due to a different underlying reason/bug.)
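For reference, the two operations look quite different at the command line; a sketch, with pool and device names as placeholders:

    # Adding a whole new RAIDZ1 vdev (what you did; works on any OpenZFS version)
    zpool add tank raidz1 sdj sdk sdl

    # RAIDZ expansion: attaching one more disk to an existing RAIDZ vdev (new in OpenZFS 2.3)
    zpool attach tank raidz1-3 sdm

The first stripes a new vdev alongside the existing ones; the second widens a single vdev in place.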
Your issue was different, in that it added (or "presented") the "raw" capacity as the "usable" capacity, making you believe you have more pool storage than you actually do.
The other problem (regarding RAIDZ expansion) is different. It under-represents the poolās capacity (and available space).
If the pool was created in 24.10, I don't think that would've been possible.
Seems like it works if made in 24.04. Does it also work if made in 24.10?
Does OP recall the exact order of operations, etc., when creating the original pool?
Ideally, a repro would be great.
Mmmh, here is another thread from someone reporting unexpected file sizes on 24.10. I wonder if this is related?
A bit premature to say.
**** MAKE SURE YOU READ BELOW. THOUGHT THIS WAS A CLOSED ISSUE, BUT NOT SO FAST. ****
So I tried reinstalling RC2 a few times, trying to create the pool in every possible way I could remember doing the first time. I even dropped back to RC1, which I think might have been what I had initially installed. Actually, I bounced between CORE and SCALE a few times, but I thought I did a Windows diskpart "clean" on each of the disks in the array each time I reinstalled. I didn't know how to export/disconnect until recently, so that was my preferred method of killing the ZFS pool.

I also seem to have some sort of problem with the 32 GB SSD I was using as the boot drive. I remember seeing some GRUB encryption errors on boot, and when I most recently started reloading RC2 it wouldn't let me install the software. I swapped it with another drive and was able to do my installs. I guess it's possible that some sort of failure on the 32 GB SSD could have caused my reporting issue. I haven't even looked at the drive to see what's wrong.
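(Now that I know about export: apparently the shell equivalent of my diskpart routine would be something like the following, with placeholder names, and it destroys the pool's data:

    zpool export tank
    zpool labelclear -f /dev/sdb1    # repeat for each member disk's ZFS partition

That removes the ZFS labels so a reinstall sees clean disks.)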
So every subsequent reinstall, pool add, etc. has been reporting the correct disk space. I guess we can close this discussion. Thanks for all your help and insights. It's really great that there is a thriving community standing behind such great software.
**** HOLD ALL OF MY LAST COMMENTS!!! I TYPICALLY "LOAD UP" THE NAS WITH DATA AND ALL OF A SUDDEN I'M GETTING THIS: ****
I'm going to do some more investigating. Looking back at my previous screenshots, that was a fresh build, before putting any data onto the NAS. What I've been doing to load up the NAS is to copy about 670 GB from my PC (a bunch of RAR files) using the Windows file manager. Then I use the file manager to copy that from the NAS to the NAS into separate directories, each time selecting all the previous directories and placing them into a new directory. Is there something in ZFS that doesn't actually use disk space while doing this sort of copy?

I also notice that the initial 670 GB copy from PC to NAS transfers at 600-800 MB/s (4 x 2.5 GbE on the PC, 10 GbE on the NAS, multichannel turned on), but when I copy from NAS to NAS I get 3 to 5 GB/s. The 10 GbE NIC is a Mellanox ConnectX-2 (I believe). I was assuming it was possibly using RDMA, and that was why I was seeing such quick transfer speeds: the data was never going over the wire. Correct me here.

But I was just loading 3 TB of data to basically "fill" the NAS when I noticed the crazy numbers again. Even if it's noticing that it's duplicate data, that shouldn't be messing with the capacity numbers. Let me know if you want any commands run or to inspect anything.
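Is there something in ZFS that skips using disk space on a copy like that? Quite possibly, yes. NAS-to-NAS copies over SMB can be handled as server-side copies, and on OpenZFS 2.2+ those can be satisfied by block cloning: the new "copies" share blocks with the originals, consume almost no extra space, and never cross the wire (which would also explain the 3 to 5 GB/s). Something like this should show whether that is what's happening (yourpool is a placeholder):

    zpool get bcloneused,bclonesaved,bcloneratio yourpool
    zfs list -o space yourpool

If bclonesaved is large, the duplicate directories are mostly cloned blocks, so free space won't drop the way the file sizes suggest. That is separate from the capacity-reporting bug, though; cloning affects used/free space, not the pool's total size.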