Migrating RAID1 to RAID5

Hello.

When I built my home server a few months ago, I decided to go with TrueNAS inside a Proxmox VM to manage my two 8 TB hard disks. They were initially configured as RAID1, so until now I have had 8 TB of usable space, of which I've used half.

A few days ago I ordered an HBA card along with two more 8 TB hard disks. I plan to migrate the disk pool to RAIDZ1. I do want to do this without any data loss, though. Is this possible?

The new pool will then consist of four hard disks of 8 TB each. According to Seagate's RAID Calculator, I will have 24 TB of usable space and be able to lose one disk.
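For anyone double-checking that number: with one parity disk, usable space is just (disks - 1) × disk size. A quick sketch of the arithmetic, ignoring ZFS overhead and TB-vs-TiB reporting:

```python
# Rough usable-capacity figures for a pool of identical disks.
# Raw numbers only: ZFS metadata overhead and TB-vs-TiB reporting
# will make the real usable figure noticeably lower.

def usable_tb(disks: int, disk_tb: float, layout: str) -> float:
    if layout == "mirror":   # 2-way mirror: half the raw capacity
        return disks * disk_tb / 2
    if layout == "raidz1":   # one disk's worth of parity
        return (disks - 1) * disk_tb
    if layout == "raidz2":   # two disks' worth of parity
        return (disks - 2) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(2, 8, "mirror"))   # current pool:  8.0 TB, 1-disk tolerance
print(usable_tb(4, 8, "raidz1"))   # planned pool: 24.0 TB, 1-disk tolerance
print(usable_tb(4, 8, "raidz2"))   # alternative:  16.0 TB, 2-disk tolerance
```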

I’m just looking for some pointers on how to do this without data loss.

This is a “back up, rebuild the pool, restore” scenario.

Alright, shouldn’t be such a big deal then.

The other day I installed a 5 TB hard disk and configured a replication job from the Data pool (the main pool) to pool_backup (which is just a temporary disk).

So in an ideal migration process I would:

  1. Install the new disks
  2. Remove the old disk pool
  3. Create the new disk pool
  4. Create a new replication job to replicate the data from the backup disk to the new pool I just created.

This doesn’t sound so bad.
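In script form, steps 1–4 would look roughly like the sketch below. The snapshot name and the target dataset (Data/restored) are my own guesses; the pool names Data and pool_backup are from this thread, and in practice you would do this through the TrueNAS replication UI rather than by hand:

```python
# Rough sketch of the backup -> rebuild -> restore flow.
# Pool/dataset names are assumptions; normally you'd use the
# TrueNAS replication UI instead of a hand-rolled script.
import subprocess

SNAP = "pool_backup/Data@migrate"   # hypothetical snapshot of the backup copy

# Freeze the backed-up data at a point in time (-r includes child datasets).
subprocess.run(["zfs", "snapshot", "-r", SNAP], check=True)

# (Old pool destroyed and new raidz1 pool "Data" created in the UI.)

# Replicate the snapshot back onto the new pool.
# `zfs send -R` preserves child datasets, snapshots, and properties.
send = subprocess.Popen(["zfs", "send", "-R", SNAP], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", "Data/restored"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```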

🚨 Potential SMR Drive Alert 🚨

Especially if this is a 2.5" drive from Seagate: please be advised that SMR (Shingled Magnetic Recording) drives are not recommended for use with ZFS.

Thanks for the heads up, but all the hard disks are 3.5".

EDIT: The OS itself runs on an M.2.

Good. Still check that these aren't SMR. And please learn the correct ZFS terminology: not RAID1 but mirror; not RAID5 but raidz1.

BASICS

iX Systems pool layout whitepaper

@etorix - Unfortunately, the drives I have chosen are all SMR, according to this. From what I have read about it these past minutes: due to how SMR hard disks work behind the scenes, they allow for higher storage density but may lack read/write speed. That isn't going to be much of a problem for me, since these hard disks are primarily used to store home theatre movies and TV shows and a minuscule amount of LibreOffice files. Unless it is going to be a problem in terms of running raidz1? Or even other RAID types?

Also, thanks for the update in terminology!

@SmallBarky - I will have to do a proper read-through of those. Ty! I've only been using TrueNAS for a few months now, so I'm very new to all this. All I ever did in the past was set up a hardware RAID at work, that's about it lol.

I'm afraid it is a major problem. While what you read is correct as a general observation, in the context of ZFS the repercussions are different.

The bottom line is that you simply cannot use SMR drives for ZFS, unless you are prepared to risk loss of data. In that case, the use of ZFS in itself wouldn’t make sense.

They allowED. By now, the extra SMR capacity is less than 10%.

Read is fine. The issue is with sustained writes. Adding one movie at a time is fine; loading a movie collection of several TBs will be slow. But the worst would be a resilver after replacing a drive: if a drive fails, you're in for about one week of sustained activity on all drives, which may cause further drive failures.

Running is not an issue… until something goes wrong. Then it goes VERY wrong. Raidz of any level is the worst case; mirrors might fare better during a resilver if the new drive is CMR.
If possible, get CMR drives instead. (Cyber Monday deal?)
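To put a rough number on that week, here is a back-of-the-envelope estimate. The sustained speeds below are assumptions for illustration: SMR drives can drop to a few tens of MB/s once their CMR cache region is exhausted, while a healthy CMR drive sustains well over 100 MB/s.

```python
# Rough resilver-time estimate: rewriting one full 8 TB drive.
# The sustained write speeds are illustrative assumptions, not
# measured figures for any particular model.
TB = 1e12  # bytes

def resilver_days(capacity_tb: float, mb_per_s: float) -> float:
    seconds = capacity_tb * TB / (mb_per_s * 1e6)
    return seconds / 86400

print(f"CMR @ 150 MB/s: {resilver_days(8, 150):.1f} days")  # ~0.6 days
print(f"SMR @  15 MB/s: {resilver_days(8, 15):.1f} days")   # ~6.2 days
```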

The main reason I went for this particular hard disk is the pricing in Norway. There's a $100 gap between the one I went for and the Seagate IronWolf 8 TB. So in this case, it would either be the ones I currently have or no hard disks at all.

The thing is, I have been running a hard disk mirror ever since I got it, and that has been working fine. Doesn't this also utilize ZFS?

I think it is way too late to back down from these hard disks I have already invested money in. What are my options here, without buying new hard disks? This really sucks. I could potentially go for another RAID type; all I really need is to store massive amounts of data in a single disk pool and tolerate at least one disk failure. Yuck.

You might get away with SMR disks for a while. Just make sure you back up important / critical files.

Part of the problem with ZFS is that writing uses copy-on-write (COW). This means that new space is allocated for all writes, even simple updates, even updates to directory entries. This can fragment free space on disks, which can become quite bad the longer an SMR HDD is used.

And it is not just small updates. EVERY single update causes a copy-on-write of the critical metadata. One of these structures is known as the uberblock in ZFS, which has multiple copies in different places on a disk.
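If it helps, here is a toy model of the COW allocation pattern (an illustration only, not ZFS internals): every update lands in whatever free slot is available and punches a hole where the old copy was, so writes hop around the disk and live data ends up interleaved with holes:

```python
# Toy model of why copy-on-write scatters writes: each update goes
# to a free slot and frees the old copy. Not ZFS code, just the
# allocation pattern.
disk = [None] * 10          # 10 block slots, all free

def cow_update(block_id: int, data: str) -> int:
    """Write data to the first free slot, then free the old slot."""
    new_slot = disk.index(None)
    disk[new_slot] = (block_id, data)
    for i, entry in enumerate(disk):
        if entry and entry[0] == block_id and i != new_slot:
            disk[i] = None   # old copy becomes a free hole
    return new_slot

cow_update(1, "v1")          # block 1 lands in slot 0
cow_update(2, "v1")          # block 2 lands in slot 1
cow_update(1, "v2")          # block 1 rewritten -> slot 2, slot 0 freed
cow_update(2, "v2")          # block 2 rewritten -> slot 0, slot 1 freed
print(disk)                  # [(2, 'v2'), None, (1, 'v2'), None, ...]
                             # updates hop around; holes interleave
```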

All that ends up making ZFS less suited to SMR HDDs than other file systems. (TrueNAS does not support any other file system for data…)

Now, I understand you've been bitten and can't change things soon, or at all. We just want you to know that things could get bad. Or maybe not; that is the problem with SMR HDDs: no guarantee.

A different filesystem is probably going to be required if you are beyond the point of no return with your SMR drives.

Unfortunately, ZFS is the only filesystem you can use with TrueNAS.

If you’re determined to use those SMR drives, you’ll probably need to look at solutions other than TrueNAS.

  • Mirror is possibly less bad than raidz, at the cost of capacity. (No guarantee, and I did NOT write “better”.)
  • Classical RAID5 (or RAID6), as would be provided by OpenMediaVault, is also impacted when rebuilding an array, but not as badly as ZFS (ZFS writes to ALL drives during raidz resilver).
  • Or you could go for raidz anyway (raidz2 for an extra degree of resiliency…) and pray that you can afford to replace the SMR drives before they begin to fail…

Choose your evil. There are reasons why these drives were cheaper, and why SMR has essentially disappeared from the latest generations of consumer HDDs.

These are all very good answers, and I now have a slightly better understanding of all this. It really is a shame I wasn't aware of SMR before I built this server, but it is what it is.

Though, I did look into Unraid, as it does not use ZFS. For anyone else having this same problem down the line:

Unraid: uses a parity-protected array with XFS or BTRFS file systems, which is less demanding on drives than ZFS.

Disk Independence: Data is not striped across disks. Each disk has its own file system (XFS, BTRFS, or EXT4), so even if multiple disks fail, you can still recover data from the remaining disks.
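As I understand it, that single parity disk works like RAID5 parity: it stores the XOR of the data disks, which is exactly why one failed disk (and only one) can be rebuilt from the survivors. A toy illustration:

```python
# Toy illustration of single-parity protection: the parity block is
# the XOR of the data blocks, so any one missing block can be rebuilt
# by XOR-ing everything that survives.
from functools import reduce

data_disks = [0b10110010, 0b01101100, 0b11100001]   # one byte per disk
parity = reduce(lambda a, b: a ^ b, data_disks)     # parity disk

lost = data_disks[1]                                 # disk 1 dies
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = reduce(lambda a, b: a ^ b, survivors)
assert rebuilt == lost                               # data recovered
print(f"rebuilt disk 1: {rebuilt:08b}")
```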

There's also mdadm and other software RAID alternatives, but I landed on Unraid. It is paid software, but it suits my needs perfectly. For now I will wait for my two 8 TB hard disks (with SMR) and then invest in a Starter license for Unraid.

This is either a success and I invest in more hard disks, or it goes to shit and I become a tennis player instead.

Jokes aside though, thank you all for taking the time to add your messages to this thread. You've all been very helpful! Such a great community for newcomers.

Going forward I would also check out the Seagate Exos prices. Those drives are made/marketed for the data center, so they are “better” than the IronWolf. And at least here in Germany they have always been considerably cheaper.

Unfortunately, the cheapest Exos model I was able to find was 20 TB, and that was $80 more than the IronWolf, so that's a shame…