My boot disk failed; it wasn’t redundant (cost choices were made that weren’t the best).
A new, more reliable disk is installed, the system comes up and lets me import my pools: the smaller NVMe SSD pool imports perfectly, and my bigger data HDD pool imports without error. But it doesn’t work right.
It’s two 8 TB (or I guess TiB nowadays) HDDs in mirror mode. All of the previously set up datasets show up in the Datasets part of the TrueNAS web UI. But in the Usage section of TrueNAS’s Storage Dashboard it shows 100% used - 7.14 TiB used, 0 B available.
If I go to the Shell (under System) and use the Linux ‘df’ command I get:
Filesystem 1K-blocks Used Available Use% Mounted on
EpsilonMachine 128 128 0 100% /mnt/EpsilonMachine
but /mnt/EpsilonMachine itself is empty when I browse it in the shell.
I’m new to TrueNAS - using it after a prior SSD failure - any ideas/assistance would be greatly appreciated
Keep a current download of your system configuration file. You can do a fresh install of TrueNAS on a new boot device, upload your configuration, and be back to normal. Mirrored boot isn’t really required.
The version of TrueNAS and a good description of your hardware and pool layouts would help. Guessing a boot disk, an NVMe for apps, and a mirror VDEV of two 8 TB hard disks?
zfs list -r -t filesystem,volume -o space <poolname>
That should show you what is using space. Replace ‘poolname’ with the name of your mirror pool.
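A sketch of that command with the pool name from the df output above filled in (on TrueNAS SCALE, `zfs` is often not in a non-root user’s PATH, so `sudo` or the full path is usually needed):

```shell
# "EpsilonMachine" is assumed from the df output above; substitute your pool name.
# sudo is needed on SCALE because /usr/sbin is not in a regular user's PATH.
sudo zfs list -r -t filesystem,volume -o space EpsilonMachine

# The "space" column set breaks USED down per dataset:
#   AVAIL          - space still free for this dataset
#   USEDSNAP       - space held by snapshots
#   USEDDS         - space used by the dataset's own data
#   USEDREFRESERV  - space held by a refreservation
#   USEDCHILD      - space used by child datasets
```

The USEDSNAP column is the one to watch if backups with frequent snapshots are in play.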
TrueNAS 25.04.2.6 community
It’s a Terramaster F2-423
Intel N5095 16 GB RAM
the two Main bays are the data pool with 2x8TB Seagate Ironwolfs
one NVME is 1 TB apps
The OS was on the internal USB port (which no longer booted - a poor choice, I admit); it’s now on a 128 GB NVMe.
In the shell, the suggested command gives: zsh: command not found: zfs
Are the SMB shared folders owned by another user? Try ‘su USERNAME’ and then run ‘df’ again.
Try posting the command results using preformatted text mode. Hit Ctrl+e or the (</>) button on the toolbar and then paste. It should make it a lot more readable and keep the formatting you see in the Shell window.
It just looks like you filled up your server. urbackup is using 4.68 T and the Shared section is big too.
I figured out the preformatted code on the second block; I tried screenshots but it wouldn’t let me embed them here.
Yes, the drive is very full - urbackup will suck up any available space with incremental backups if given the chance. I was still trying to tune that down.
‘sudo df’ (i.e. as root) gives:
Filesystem 1K-blocks Used Available Use% Mounted on
EpsilonMachine 128 128 0 100% /mnt/EpsilonMachine
(with the other lines trimmed out) - everything else shows way more than 128 1K-blocks, which tells me something isn’t quite right.
You should have set a quota to prevent that. Due to Copy-on-Write, ZFS cannot work without free space.
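Setting a quota might look like this - a sketch only; the dataset name `EpsilonMachine/urbackup` is an assumption based on the usage described above, so check your actual layout first:

```shell
# Dataset name is hypothetical - confirm yours first:
sudo zfs list -r EpsilonMachine

# Cap the backup dataset (here at 6 TiB) so it can never fill the pool:
sudo zfs set quota=6T EpsilonMachine/urbackup

# Verify the quota took effect:
sudo zfs get quota EpsilonMachine/urbackup
```

The same property can also be set from the web UI when editing the dataset, which avoids the shell entirely.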
Try deleting snapshots to get back some space.
If that fails (ZFS needs free space even to delete and free space!), you’ll need to stripe an extra drive with your mirror to gain free space - a USB drive if need be. Then delete something to recover some space and remove the extra drive (from the GUI!).
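The snapshot cleanup above might look like this - a sketch; `EpsilonMachine` is assumed from the earlier df output and the snapshot name is hypothetical:

```shell
# List snapshots sorted by space used (largest last), to find the big ones:
sudo zfs list -t snapshot -o name,used -s used -r EpsilonMachine

# Dry run first: -n -v shows what would be destroyed and how much would free up
# (snapshot name below is a made-up example):
sudo zfs destroy -n -v EpsilonMachine/urbackup@auto-2025-01-01_00-00

# Then destroy it for real once you're sure it's expendable:
sudo zfs destroy EpsilonMachine/urbackup@auto-2025-01-01_00-00
```

Snapshots can also be browsed and deleted from Datasets → Snapshots in the web UI if the shell feels risky.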