I’ve tried searching the forums to no avail. I have one dataset with 2 child datasets, and they have 10 TB of space available in total, but after I hit 3.17 TB it no longer lets me add files. I tried changing quotas and that hasn’t made any difference, and I can’t find anything I’ve configured that would prevent the dataset from using all available storage. There aren’t any snapshots either. I assume this is an easy fix, but I’m having no luck.
ZFS doesn’t use fixed inode tables like EXT4/XFS, but metadata overhead can still become significant if there are:
- Millions of small files
- Very high directory nesting
- Lots of extended attributes or ACLs
You can check whether an enormous number of files is the issue with:
df -i /mnt/pool/dataset
This shows the number of filesystem objects in use. On ZFS, “inodes” are allocated dynamically out of pool space rather than from a fixed table, so there’s no hard cap in practice, but a count in the millions means metadata is eating into your space.
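Another rough way to see whether the “millions of small files” case applies is to simply count entries under the dataset’s mount point. This is only a sketch — count_entries is a made-up helper name, and the path you pass is whatever your dataset is mounted at:

```shell
# Sketch: rough object count for a dataset. count_entries is a
# hypothetical helper; pass the dataset's mount point, e.g.
# /mnt/pool/dataset. -xdev stops find from descending into child
# datasets, which ZFS mounts as separate filesystems.
count_entries() {
  find "$1" -xdev 2>/dev/null | wc -l
}
```

A result in the millions would point at the small-file scenario above; a few thousand rules it out.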
Note that recordsize (even a large one like 1M) is only an upper bound — small files are stored in correspondingly small blocks. The catch is that on RAIDZ, small blocks carry proportionally more parity and padding overhead. This doesn’t block writes directly, but it can make usable space fill faster than the raw numbers suggest.
Check it with:
zfs get recordsize pool/dataset
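To make that overhead concrete, here’s a back-of-the-envelope sketch — not your exact pool layout; it assumes 4-disk RAIDZ1 with 4 KiB sectors (ashift=12), and raidz1_alloc_kib is a made-up helper name:

```shell
# Sketch: raw space consumed by one logical block on 4-disk RAIDZ1
# with 4 KiB sectors (ashift=12). Assumptions, not a general formula.
raidz1_alloc_kib() {
  # $1 = logical block size in KiB
  awk -v lb="$1" 'BEGIN {
    sectors = int((lb + 3) / 4)        # 4 KiB data sectors needed
    parity  = int((sectors + 2) / 3)   # 1 parity sector per 3 data sectors
    total   = sectors + parity
    if (total % 2) total++             # RAIDZ pads to a multiple of (parity+1)
    print total * 4                    # raw KiB allocated
  }'
}
```

For example, raidz1_alloc_kib 4 prints 8 — a 4 KiB file occupies 8 KiB of raw space (2x) — while raidz1_alloc_kib 1024 prints 1368 (about 1.34x). That’s why a dataset full of tiny blocks fills raw capacity much faster than one full of large ones.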
ZFS reserves a “slop” of roughly 3% of total pool capacity for internal metadata and to keep copy-on-write operations (including deletes) working. If the pool is ~96-97% full, new writes will fail even though “some” space appears free.
Use:
zpool list
zpool status
If zpool list shows CAP approaching 100% (ALLOC close to SIZE), this is likely the cause.
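A quick way to turn those two numbers into a percentage — pool_full_pct is a made-up helper; feed it the exact byte counts that zpool list prints in parsable mode:

```shell
# Sketch: percent of raw pool capacity allocated. pool_full_pct is a
# hypothetical helper; pass ALLOC and SIZE in bytes, e.g.
#   pool_full_pct $(zpool list -Hp -o alloc,size tank)
pool_full_pct() {
  echo $(( $1 * 100 / $2 ))
}
```

Anything above ~96 means you’re into the reserved slop and writes will start failing.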
So I gave those commands a shot. When I run zfs get all pool/dataset it returns “dataset does not exist”, but zpool list shows my pool with a 21.8T size and 4.37T allocated. I also just noticed that my dataset is sitting at exactly 25% of the storage, and the RAIDZ1 is across 4 drives. I’d think I must have something misconfigured, but at this point I really don’t know what.
Try putting ‘sudo’ before the zfs command. Also, ‘pool/dataset’ is a placeholder — replace it with the actual names of your pool and dataset, if that was what was wrong.
Maybe one of these commands will help you sort out your space. Replace [POOL_NAME] with the name of your pool. You can post the results back here using Preformatted Text (</> on the toolbar, or Ctrl+e).
zpool list -o name,size,cap,alloc,free,bcloneratio,bcloneused,bclonesaved [POOL_NAME]
zfs list -o space [POOL_NAME]
Can’t copy it from the terminal, but the output for the first was SIZE: 21.8T ALLOC: 4.37T FREE: 17.5T BCLONE_RATIO: 1.00x BCLONE_USED: 0 BCLONE_SAVED: 0
Second was AVAIL: 9.54T USED: 3.17T USEDSNAP: 0B USEDDS: 151K USEDREFRESERV: 0B USEDCHILD: 3.17T
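One thing worth noting about those numbers: pool-level ALLOC counts parity, while dataset-level USED doesn’t, and on 4-disk RAIDZ1 the two should differ by roughly a factor of 4/3. A quick sanity check, using the reported USED figure (ignoring padding and metadata):

```shell
# Sketch: expected raw allocation for 3.17T of data on 4-disk RAIDZ1.
# Parity multiplies usable data by disks/(disks-1) = 4/3; padding and
# metadata are ignored, so this is approximate.
awk 'BEGIN { used = 3.17; disks = 4; printf "%.2f\n", used * disks / (disks - 1) }'
```

That prints 4.23, which lines up with the reported 4.37T ALLOC — so the pool-level accounting looks consistent, and the 3.17T ceiling is more likely a quota than missing space.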
Did you have quotas set up on any of these datasets? User or group quotas?