I am currently running 2x 4TB drives in mirror mode.
I am looking to expand this to 4x 4TB drives.
I’ve heard about RAID-Z1, which would let me squeeze an extra ~2TB out of the 8TB I would get with simple mirroring (according to the TrueNAS ZFS capacity calculator).
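For reference, the raw numbers as I understand them (the calculator knocks these down further for ZFS overhead and recommended free space):

2x 2-way mirror vdevs of 4TB drives:  2 x 4TB       =  8TB usable (raw)
1x RAIDZ1 vdev of 4x 4TB drives:      (4 - 1) x 4TB = 12TB usable (raw)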
I’ve read there are drawbacks to RAID-Z1 on larger drives (e.g. 16TB), but mine would be rather small in that comparison.
I am also currently using 90% of my current mirrored setup. What would my options be to expand this setup? How could I expand it while not losing my current data? What important info might I have missed?
Changing to RAID-Z1 would require backing up the data elsewhere, destroying the current mirror pool, and creating a new vdev and pool with all the drives.
Essentially what I expected.
Is there any way I could connect a drive directly to the USB 3.0 port on my server to share files much more quickly than wireless ever could?
When you upgrade to 24.10 (and don’t do it until you are happy to do so) then you get RAIDZ expansion.
At that point you can install the 2 new disks, remove a disk from the mirror, and make a new pool which is a 3x RAIDZ1. Replicate the data from the remaining drive, then destroy the original pool and use the last drive to expand the RAIDZ pool to 4x RAIDZ1. You may want to run a rebalancing script too once the expansion has finished.
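A rough CLI sketch of that sequence (the pool and disk names are placeholders, and on TrueNAS you would normally do the pool creation and replication through the GUI rather than raw zpool commands):

# Assumed names: existing mirror pool "tank" on sdb+sdc, new disks sdd and sde, new pool "bigtank".
zpool detach tank sdc                                 # free one disk from the mirror (tank is now non-redundant)
zpool create bigtank raidz1 sdc sdd sde               # 3-wide RAIDZ1 from the freed disk plus the two new ones
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F bigtank/data   # replicate everything across
zpool destroy tank                                    # only once the copy has been verified
zpool attach bigtank raidz1-0 sdb                     # RAIDZ expansion (24.10 / OpenZFS 2.3): grow to 4-wide
zpool status bigtank                                  # watch the expansion progress, then run a rebalancing script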
Before you do this, make sure that you do the following (example commands sketched below):
Existing drives: SMART long tests, a scrub, and a check of the drive SMART attributes;
New drives: SMART conveyance test, SMART short test, SMART long test, burn-in script.
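For example (replace /dev/sdX, /dev/sdY and the pool name with your own; the burn-in step here uses badblocks as one option, which is destructive and for new, empty drives only):

# Existing drives (non-destructive):
smartctl -t long /dev/sdX        # extended self-test; review the result later with smartctl -a
smartctl -a /dev/sdX             # check SMART attributes (reallocated / pending sectors etc.)
zpool scrub tank                 # scrub the existing pool; monitor with zpool status

# New drives (before they hold any data):
smartctl -t conveyance /dev/sdY  # conveyance self-test (not supported by every drive)
smartctl -t short /dev/sdY
smartctl -t long /dev/sdY
badblocks -wsv /dev/sdY          # destructive write/read burn-in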
A 4x 4TB RAIDZ1 is just fine - the resilver of a 4x 4TB is relatively short, so the risk of losing a second drive during it is correspondingly lower, and most people feel that, although RAIDZ2 is marginally safer, RAIDZ1 is OK for this width and size.
(The approach outlined above is the simplest, but it does leave your data on a non-redundant drive for a while. The next, slightly more complicated, approach is to use the 2 new drives to create a 2x RAIDZ1 - which requires using the CLI but is still straightforward. But then you have two pools of the same size and redundancy to copy from / to - and then you need to do 2x RAIDZ expansions and a rebalance. The Marc Khouri link above is a red herring - firstly it assumes no new drives, and secondly the degraded-pool trick would again leave you with a non-redundant pool during the migration.)
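If you did go that route, the 2-wide RAIDZ1 would be created at the CLI with something like this (placeholder names again; export the pool afterwards and import it through the GUI so that TrueNAS manages it):

zpool create smallz raidz1 /dev/sdd /dev/sde   # 2-wide RAIDZ1 from the two new drives
zpool export smallz                            # then import it via the GUI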
Using a USB disk is a possibility, but again it would mean putting your data into a non-redundant state during the migration, and also copying the data twice.
EE has been a full release for some weeks now; there have been several very minor urgent bug fixes (24.10.0.1 and 24.10.0.2), and this week a first minor release, 24.10.1, which usually means that the most obvious bugs have been fixed and there will be fewer migration issues.
If you are very very cautious (like me) you may want to wait until 24.10.2, but I would think that 24.10.1 is a reasonably safe bet so long as you read the 24.10 release notes and plan ahead.
I went through the expansion process myself in RC. However, ZFS 2.3, which is the official release that includes expansion, is still itself in RC. I believe this function is stable (IMO), but risk is risk…
Edit: Just FYI, looking over the release notes for zfs-2.3.0-rc4, there have been some improvements to expansion.
@HuggieBo I meant to ask you to give us your exact drive model numbers. If you don’t have them to hand, please run the following command and copy and paste the output here inside a </> box:
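Something like this should do it (the exact flags just need to produce the columns shown in the output below):

lsblk -bo NAME,MODEL,ROTA,PTTYPE,TYPE,START,SIZE,PARTTYPENAME,PARTUUID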
sdb WDC WD40EFAX-68JH4N1 1 gpt disk 4000787030016
└─sdb1 1 gpt part 4096 4000784056832 Solaris /usr & Apple ZFS 559eecfa-7ea0-48b7-b878-c87091db84cd
sdc WDC WD40EFAX-68JH4N1 1 gpt disk 4000787030016
└─sdc1 1 gpt part 4096 4000784056832 Solaris /usr & Apple ZFS e7e731bc-fe21-42f2-9e03-36e81ddd2d8d
sda currently holds my TrueNAS instance and uses one of the 4 SATA ports my mobo has. It will be replaced by an NVMe drive.
Well, it appears that I was right to ask you about your drives.
Unfortunately WD40EFAX drives are Western Digital Red drives (and not Red Plus or Red Pro) and they are SMR drives and completely and utterly and absolutely unsuitable for ZFS redundant vDevs (as stated explicitly by Western Digital themselves).
The issue with all SMR drives is that they have a relatively small CMR cache that writes go to first; when the drive is idle it destages the data from the CMR cache and writes it to the much, much, much slower SMR area. This is fine when the drive is idle for most of the time, and they will thus seem completely fine, but during bulk writes the cache fills up and writes to the drive slow to the SMR speed, i.e. a crawl. And that happens in spades during resilvering and RAIDZ expansion - which then take days or weeks to finish, if they don’t error out first - and it is this that makes SMR drives unsuitable for ZFS redundant vDevs.
As I said, your current drives may look fine during normal usage, but you do not want to find out that they error out when you get some sort of a drive failure and you try to resilver.
These drives are no more suitable for mirroring than for RAIDZ1. As Western Digital themselves state, they are completely unsuitable for ZFS usage.
But it is your data, and if you are prepared to live with the risk of losing your data due to an extended resilvering time (and thus an extended period of stress on the remaining drive) causing the remaining drive to fail, then that is of course your prerogative.
Oh okay, I understand now. This is really frustrating, as the ZFS unsuitability is not stated on the German page for the WD Red.
The advertising around this topic is very confusing.
I was getting 2 Seagate IronWolf Pro drives anyway and was thinking of doubling that to replace the WD drives. Are those suitable for this purpose?
Unfortunately I did not save the previous PDF spec sheet for WD Red drives which stated unambiguously that they were unsuitable for ZFS.
Somewhat shamefully, WDC are now branding Red for SoHo NAS use in some kind of assumption that SoHo users do not use ZFS, with a suggestion that Red Plus are better for ZFS usage.
IMO this is quite disgraceful behaviour as, despite reasonably widespread knowledge, users like yourself are still not hearing about it and are installing WDC SMR drives in their ZFS systems.
The reality is that SMR drives are unsuitable for any RAID system which might need resilvering at some point, not just ZFS systems, and IMO WDC should not be marketing or selling ANY SMR drives for use in ANY NAS system, period - and since all the various Red drives are specifically marketed for NAS (the WDC NAS brand) that means that they should withdraw the Red SMR range for good and sell it under a consumer desktop / workstation branding.