Expansion of pool with external HDD?

I’ve had my TrueNAS SCALE server running for a couple of years now; it pretty much runs without intervention on my home network. I have astrophotography as a hobby, and the project files / datasets get quite large. My core setup is an AMD B550-I motherboard with 7x 4 TB drives in a RAIDZ2 configuration. Approximately 60% of available storage is currently in use. I’m not in a bind (yet), but I’m wondering what the best approach is to expanding the storage capacity: swapping out drives (individually or in pairs?) for higher capacity, an external drive enclosure via a USB 3.x connection, or some other option entirely?

Yes, I can probably reclaim significant space by going into the individual project folders and deleting the intermediate ‘saves’/working files from the image-processing steps. Regardless of that effort, though, I still need to understand the best/easiest options for expansion.

Thanks in advance,

Clayton

Swap out the individual drives in the RAID-Z2 one at a time, allowing a resilver after each, seven times over. Then expand the pool using the GUI if it doesn’t autoexpand.

USB enclosures are not recommended. Full hardware details, including how all the drives are attached, would let us give more specific advice.


It depends on how much $$$ you have to spend, and your timeframe.

First - an external multi-drive USB enclosure is seriously bad karma. Among other things, TrueNAS will refuse to recognize it, and even if you get past that you’ll quickly discover that USB is horribly unreliable under any sort of load (never mind slow, since all the traffic to your multiple drives goes over the same wire). Just don’t.

I started with FreeNAS years ago, finally moved to CE over Christmas, and over those years gradually expanded my only RAIDZ2 pool (now six ST10000NE0004 Seagates, 9.1 TiB usable space each) as funds allowed. I periodically buy a new, bigger drive, swap it with the smallest one in my pool, and resilver. Eventually all of the drives are large enough that it’s worth my time to expand the pool. I enabled my CE upgrade by adding a pair of external SATA drives (Seagate ST26000NM000C, 23.65 TiB each) in a mirror so I at least had a spinning backup through the upgrade.

The big-bang approach would cost more but move faster - do what I did and add two large drives in a mirror configuration, snapshot/replicate your pool over to them, swap out the drives in your original pool for bigger drives, and replicate from your mirror back to your now-bigger pool. How hard this is depends on whether you have any spare SATA ports and can either fit more drives in your case or put in eSATA cable adapters and a pair of eSATA cases to hold the mirror drives.
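For reference, the snapshot-and-copy step can be sketched with the ZFS CLI. This is a hedged illustration only - the pool names "tank" and "backup" are placeholders, and on TrueNAS you would normally set this up as a Replication task in the GUI rather than at the shell:

```shell
#!/bin/sh
# Sketch of migrating a pool to a temporary mirror via a recursive
# snapshot plus send/receive. Pool/dataset names are placeholders.
# Guarded so it is a harmless no-op on systems without ZFS installed.
if command -v zfs >/dev/null 2>&1; then
    zfs snapshot -r tank@migrate                    # point-in-time copy, all datasets
    zfs send -R tank@migrate | zfs receive -F backup/tank
    RESULT="replicated"
else
    echo "zfs not found; commands shown for illustration only"
    RESULT="skipped"
fi
```

`send -R` carries the whole dataset tree with its snapshots and properties; the same pair of commands run in the other direction brings everything back once the original pool has been rebuilt with bigger drives.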

The even bigger bang, if you have the $$$ and a PCIe slot available, would be to add an HBA in the slot and an external disk chassis with bigger drives. Snapshot over to the new drives, and use the drives you have as backup, or as a second vdev to make your pool bigger. Beware, though, you need to make sure you have working backups because bigger pools by definition fail harder and take longer to recover when they do fail. The more spinny things, the more they get dizzy and fall over :slight_smile:.

The usual tradeoff between $$$ and time :-). TrueNAS does make it easier if you can figure out a hardware solution that fits your budget and time constraints.


The existing server is home-built: an 8-drive mini-ITX case with an ASUS ROG Strix B550-I motherboard, an AMD Ryzen CPU with 32 GB RAM, a 4 TB NVMe application drive, and 8 Seagate 4 TB SATA drives. There was a problem with the drive cage power header and only 7 of the drives powered up, but I decided to leave it as-is rather than chase down whether the problem was the cabling or the internal drive cage distribution (the cage is hot-swappable if the OS supports it). It is running the native TrueNAS OS.
I don’t really want an external drive enclosure, preferring to stick with my compact setup. Is there a video/tutorial on what you described (swapping out individual drives and resilvering)?
I don’t know much about TrueNAS beyond what I had to do to follow the videos on creating the original installation. It just runs (which is great), and all I use it for is archival storage of my astrophotography projects and cell phone photos from both of our phones.
My oversight is limited to checking for errors occasionally and ensuring the TrueNAS software is kept updated/current.

See the documentation section Replacing Disks. The only extra step: after all 7 drives have been replaced, autoexpand doesn’t always give you the new space automatically, and you may have to hit the Expand button (far right in the screenshot) to trigger it.
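Under the hood, that Expand button corresponds roughly to the ZFS autoexpand/online steps below. This is a sketch only - the pool name "tank" is a placeholder, and on TrueNAS you should use the GUI rather than the shell:

```shell
#!/bin/sh
# Sketch: after the last of the 7 replacements, make the pool use the
# extra space. Guarded so it does nothing on systems without ZFS.
POOL="tank"   # placeholder pool name
if command -v zpool >/dev/null 2>&1; then
    # Let the pool grow automatically once every disk in the vdev is bigger
    zpool set autoexpand=on "$POOL"
    # Or expand one already-replaced disk by hand:
    #   zpool online -e "$POOL" <device>
    zpool list "$POOL"       # SIZE should now reflect the larger disks
    RESULT="expanded"
else
    echo "zpool not found; shown for reference only"
    RESULT="skipped"
fi
```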


I would have to open the case back up to confirm, but as I remember, with the drive cage fully populated the PCIe slots were inaccessible/unusable - everything was a VERY tight fit; even the CPU cooler had to be low-profile. It operates headless; I log in via the web interface on the local network unless something requires me to access it at the console directly, which is very rare - maybe once or twice a year. Usually it boots straight to the console and brings up the web interface even after complete power outages, but on those rare occasions when it doesn’t progress to the TrueNAS console, I attach a small HDMI screen and wireless keyboard and (sometimes) reboot. I’ve had to do that maybe 3-4 times, cause unknown; afterward everything is back to normal and it boots to the console on its own again.

Thanks, I’ll go through this tutorial. I’m considering replacing two of the seven 4 TB drives with two 12 TB Seagate IronWolf CMR drives.

If you are a bit unsure of the process, I would test it out in a VM if possible, with virtual disks of two different sizes. You can create a small RAID-Z1 or RAID-Z2 VDEV with three or four small disks and then work through the replacement process with larger virtual disks. I have only tested a mirror setup in a VM, but it goes fast if your pools are empty. It at least gives you a walkthrough of the whole process and familiarity with the GUI.


Physically, all you have to do is power off, open the case, remove the drive you’re targeting, put the bigger drive in its place, and power back on. With a 7x RAIDZ2 your pool will be intact but degraded, and then you can use the instructions that @SmallBarky provided to navigate the GUI and kick off the resilver. It’s easy, if a bit scary the first time you do it. Be patient while it resilvers; it can take a while…


Thanks for everyone’s help. I’ve decided to bite the bullet on the non-operative drive slot and ordered a new 8-drive NAS case. I can physically transfer the entire setup to the new case and see if the drive fires up. I know it was operative before, because I swapped drives around at the time to confirm the issue was the slot and not the drive. If/when the previously inoperative drive powers up, I’ll have a bit of ‘practice’ adding it to the current pool, which will also bring in another 4 TB of storage (well, 3.63 TiB after overhead). One advantage of the new case (same size) is individual activity lights for each drive bay, so I won’t have to guess which drive is out if one goes down (or doesn’t power up). Another is that the drive bays are horizontal rather than vertical, so I don’t have to worry about the drives at the top collecting heat from below. Not a huge issue, but every little bit helps. Then I will start looking at individual drive upgrades.

My impression from all the input is that it is preferred to swap one drive at a time rather than in pairs when changing out existing drives - is that correct?

On my second question - do I need to take the drive I wish to change out offline in the TrueNAS web GUI first, before powering down to physically swap it?

We suggested one at a time so you keep the most redundancy while resilvering. RAID-Z2 gives you two drives of redundancy to start; offline one and you’re down to one drive of redundancy while the resilver happens. If you were to attempt two drives at once and another failed, you could lose the pool and its data. If you had all eight slots usable with your 7-wide RAID-Z2, I would instead have suggested an in-place replacement: add the new drive to the spare slot and choose to replace a single drive, keeping the full RAID-Z2 level of redundancy throughout.

If you power off, I would go through the process of labeling the slots with the drive serial numbers. Drive names can change between reboots, but the serial is unique - e.g., sda could become sdb on the next boot with the same physical drive.

Note the serial number, offline the drive, and keep track of serials to make sure you are replacing the correct one.
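One way to build that serial-to-bay map before powering down is from the shell (a sketch; `lsblk` ships with standard Linux and therefore with SCALE, though the available columns can vary by version):

```shell
#!/bin/sh
# List whole disks with serial, size, and model so each bay can be
# labeled before the machine is opened up. sda/sdb names can shuffle
# between boots; serial numbers stay with the physical drive.
if command -v lsblk >/dev/null 2>&1; then
    lsblk -d -o NAME,SERIAL,SIZE,MODEL    # -d: whole disks, no partitions
    RESULT="listed"
else
    echo "lsblk not available here"
    RESULT="skipped"
fi

# Stable device names also exist; these symlinks embed model and serial:
ls /dev/disk/by-id/ 2>/dev/null || true
```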


The drive cooker SilverStone DS380, by any (mis)chance?

Getting only 7 of 8 positions working looks like a hardware issue. And a B550 motherboard can only provide up to 6 SATA ports, while your board appears to provide 4 - so something is missing from your description.

If you have a spare position, the preferred procedure is actually to leave the old drive in place, plug in the new drive, use “Replace” in the GUI, let it proceed, and then remove the old drive. Rinse and repeat for all drives in the vdev.
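For what it’s worth, the CLI equivalent of that GUI “Replace” flow looks roughly like this (pool and device names are placeholders; on TrueNAS, stick to the GUI):

```shell
#!/bin/sh
# Sketch of an in-place replacement: the old disk stays attached while
# the new one resilvers, so full RAID-Z2 redundancy is kept throughout.
# Placeholder names; guarded to be a no-op without ZFS installed.
POOL="tank"; OLD_DISK="sdc"; NEW_DISK="sdh"
if command -v zpool >/dev/null 2>&1; then
    zpool replace "$POOL" "$OLD_DISK" "$NEW_DISK"
    zpool status "$POOL"     # watch the resilver progress here
    RESULT="replacing"
else
    echo "zpool not found; commands are illustrative only"
    RESULT="skipped"
fi
```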

I’m going to wait until the replacement NAS case (another mini-ITX 8-drive hot-swap case from NewEgg) gets here before I start physically dissecting. Thanks to the reliability of TrueNAS, the existing setup has been hidden away in a corner for over three years without me touching it, other than an occasional nudge to complete a reboot and keeping TrueNAS current.

So my memory of all the interior details is a little short. I literally copied the setup from a YouTube video made by a TrueNAS expert, created to demonstrate how inexpensively an 8-drive TrueNAS server could be built. If I remember correctly, the TrueNAS version at the time was 22-something - I just updated from 24 to 25 (the current distribution version) yesterday.

Yes, the case is a SilverStone, but it’s no longer made; I think the current rendition is the 382, while the 383 is more of a normal-size mid-tower. I am not going back to a SilverStone case.

I used the identical components that the developer listed in his video (he also sourced from NewEgg) to avoid configuration mismatches. There is, on reflection, a half-height SATA expansion card installed that all the drives connect to. I believe the low-profile CPU cooler blocked the other expansion slot.

The ‘dead’ HDD is actually a dead slot in the SilverStone hot-swap drive cage - slot 3 if I remember, but I’d have to open the case and check my mark to confirm. The drive itself is fine (I swapped drives around at the time to confirm). I wanted/needed to get the server online to start configuring TrueNAS and moving astroimaging project files off my desktop PC, as it was ‘drowning’. So rather than wait for a replacement case or drive cage, I just rolled with the 7 drives that were operational. I assumed (rightly) that a few years would pass before I used up enough storage for the missing 8th drive to be an issue.

I say I exactly copied that YouTube video, but that is only true for the physical components (the drive capacities differ). The developer configured two pools in his example and mounted some multimedia server apps on one of them. My needs are simpler - I just needed storage, so I have only the one pool. I think the video used two 8 or 10 TB drives in one pool and six 1 or 2 TB drives in the other.

So in the new case, after transferring the components, would it be better (or simpler?) to restart the server with the new, larger drive I’m considering in place of the formerly ‘dead’ drive, versus restarting with the existing drive in that slot and joining it to the pool for the first time?

The math of drive capacity vs. usable RAID storage is opaque to me. Starting out with approximately 28 TB raw (7x 4 TB) resulted in approximately 16-ish TB available for storage, of which I still have 7-ish TB free, last I checked. It sounds like, with my TrueNAS RAID setup, maybe around 60-ish percent of a given drive becomes available when added to the pool and the rest goes to RAID overhead (quite certain I’m making a hash of this estimation) :wink: Or is the RAID overhead ‘fixed’, so that larger drives give a greater percentage of their capacity as usable space?
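To put rough numbers on this (a back-of-the-envelope sketch that ignores ZFS’s small slop/metadata reserve): RAIDZ2 always spends two drives’ worth of space on parity, so with 7 drives the parity fraction is 2/7 (~29%) no matter how big the drives are - bigger drives don’t change the percentage. Most of the rest of the apparent ‘loss’ is the TB-vs-TiB conversion, since a marketing 4 TB is about 3.64 TiB:

```shell
#!/bin/sh
# Rough RAIDZ2 capacity estimate for 7x 4 TB. Integer math scaled by
# 10000 to avoid floating point in plain sh. Real usable space will be
# a bit lower (ZFS slop space, metadata, padding).
N=7          # drives in the vdev
TB=4         # per-drive marketing size in TB (10^12 bytes)

# 1 TB = 10^12 bytes = ~0.9095 TiB (2^40 bytes); scale 0.9095 -> 9095
TIB_PER_DRIVE=$(( TB * 9095 ))           # per drive, in 1/10000 TiB
RAW=$(( N * TIB_PER_DRIVE ))             # whole pool, raw
USABLE=$(( (N - 2) * TIB_PER_DRIVE ))    # minus two drives of parity

echo "raw:    $(( RAW / 10000 )).$(( RAW % 10000 / 100 )) TiB"
echo "usable: $(( USABLE / 10000 )).$(( USABLE % 10000 / 100 )) TiB"
```

That lands around 18 TiB before ZFS’s own reserves, which is in the ballpark of the ~16 you see. And since parity is a fixed fraction of the vdev, replacing all seven drives with 12 TB ones keeps the same ~71% usable ratio, just on a bigger base.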