A while back I migrated from an old TrueNAS setup to TrueNAS SCALE on new (2nd-hand) hardware (a Dell 720 with 8 drive slots).
Being in a hurry (the old system was not-so-slowly dying), I bought eight 4TB drives, built TrueNAS, and migrated my data over.
Then I discovered that the drives I had bought were SMR drives, which regularly caused slow I/O on the storage pool (and made trying to run VMs an exercise in futility - just one VM made the slow I/O appear every time).
So, I want to replace the SMR drives with proper CMR ones, but I can’t afford to do all 8 at once. I’m also planning to replace the 4TB drives with 6TB ones (and I know this won’t expand the pool until all the drives are replaced).
The question is - can I replace the drives one at a time? Is it a problem mixing SMR & CMR drives in the short term?
Once they are all done I can (hopefully) migrate the VMs from one of my Proxmox servers (the one that keeps eating its RAID card) and reduce my electricity bill…
@launcap - You don’t list your pool layout. As @garyez_28558 said, depending on the pool layout, an SSD pool might be best.
Meaning: if your current 8 x 4TB disks are in a RAID-Z2, that is not ideal for VM storage. Mirror vDevs are best for VMs, with SSD & NVMe even better.
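If you’re not sure how to describe the layout, you can check it from the TrueNAS shell. A minimal sketch, assuming your pool is named `tank` (substitute your actual pool name):

```
# Show the pool's vDev layout (mirrors, RAID-Z level, member disks)
zpool status tank

# Show per-vDev capacity and usage
zpool list -v tank
```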
Last, you probably know this, but replacing an SMR drive with a CMR drive will still take a long time. The bottleneck isn’t the random-write performance of the new CMR disk; it’s the reading of the other disk(s) in the vDev that are used to re-create the redundancy. If those are slow, then it does not matter how fast the new replacement disk is.
Yeah, I replaced a couple of SMR ones recently. You go to Storage > Manage Devices > Data VDEV name.
When expanded, you will see the vDev disks and you can click on the one that is SMR. That will show you the Details for that disk and an option to take the disk Offline.
Once offline, you can physically replace the disk (I’d do it one at a time), checking the serial number to be sure you pull the right drive.
Then, once the new disk is in place, you click Replace at the bottom of the same disk Details page.
Choose the new disk and wait for the pool to resilver.
Once completed, repeat for any further disks as necessary.
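For reference, the shell equivalent of that WebUI flow is roughly the following. This is a sketch only - the pool name `tank` and device name `sdc` are placeholders (TrueNAS usually identifies pool members by GPT partition ID, so check `zpool status` for the real names), and on TrueNAS the WebUI is the supported way to do this:

```
# Take the old SMR disk offline; the pool runs degraded from here
zpool offline tank sdc

# ...physically swap the drive in the same slot, then resilver onto it.
# With no new device named, zpool replace rebuilds onto the disk now
# occupying the old one's place:
zpool replace tank sdc

# Watch the resilver progress
zpool status -v tank
```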
Can you just clarify this procedure for me (newbie alert)? I thought what @Okedokey described was the right way - or are you saying this is the right way to do it?
If you follow his procedure, your pool enters a degraded state before the resilver even starts: that is something you only do when there is no free port to connect your new drive while keeping all the old drives in the pool connected.
1. Connect the new disk.
2. Software-replace the old disk with the new disk from the WebUI.
3. Once the replacement process has finished, power off the system and physically detach the old drive.
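At the zpool level, that flow corresponds to a single replace with both disks attached - a sketch, with `tank`, `sdc` (old), and `sdd` (new) as placeholder names:

```
# Old disk stays online and keeps contributing redundancy while
# ZFS resilvers onto the new disk
zpool replace tank sdc sdd

# The old disk is detached automatically once the resilver completes;
# check progress with:
zpool status -v tank
```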
@Davvo thanks for that - basically this is the difference between doing it at home and doing it professionally. I only have 4 slots and they are all filled (FYI: RAIDZ2, 4 wide). Will add this to my notes.
Sometimes the disk you’re replacing is faulty in a way that causes the replacement to proceed very slowly.
If that is the case, you can offline it after the replace has begun, but in general it’s best to retain as much redundancy as possible during the replacement/resilvering operation.
Even a disk with faulty sectors still contributes redundancy to the pool: another disk may also have faulty sectors, but the bad sectors would have to be co-located to cause an issue.
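In zpool terms, that looks something like this (again a sketch with placeholder names; the offline step is only for when the old disk is dragging the resilver down):

```
# Start the replacement with the old disk still attached
zpool replace tank sdc sdd

# If the faulty old disk slows the resilver to a crawl, drop it out;
# the resilver continues using the remaining disks' redundancy
zpool offline tank sdc
```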
Also, thinking about it: when you have a 2-way striped mirror, offlining and then replacing a disk is not viable, because offlining one side of a mirror leaves that vDev with no redundancy at all during the resilver (I would not use this arrangement anyway - it is a thought experiment only).
Either way is viable, but maintaining redundancy is generally better if it is possible at all.
If you have a spare SATA port but no official slot for the drive, you may lay a drive (preferably the one that is to be replaced) temporarily outside of the case while replacing.
If you lack a free SATA port, you may consider attaching the drive to be replaced to a USB adapter, just to maintain redundancy.
Thinking about it - I still have a spare dual NVMe card - I could populate that with decent-sized NVMe drives and set it up as a dedicated mirrored vDev just for VMs.
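As a sketch, creating such a pool from the shell would look like the line below (`vmpool` and the device names are placeholders; in practice you’d create it from the TrueNAS WebUI so it handles partitioning for you):

```
# Two-way NVMe mirror as a separate pool dedicated to VM storage
zpool create vmpool mirror nvme0n1 nvme1n1
```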