Verifying hardware and setup choices before deployment

Hi all. Just want to run my planned hardware setup by you to make sure I’m not making any obvious mistakes. I’m a complete rookie at TrueNAS and Linux in general (though I’ve been reading the forum archives for the past month), so I’d appreciate any feedback.

Disclaimer: a lot of the hardware choices were dictated by what I already have, and they may not be perfect/have overkill functionality.

Use case: general media server with Jellyfin + seedbox.

Hardware:
CPU: AMD Ryzen 5 5600G (with be quiet! Pure Rock 2 cooler)
MB: Gigabyte B550M-K (2x M.2, 4x SATA)
RAM: GeiL 16GB DDR4-3200 x2 (32GB total)
GPU: AsRock Arc A310 LP
HBA: LSI 9211-8i
PSU: Corsair RM750x
Drives:
Boot: ARDOR Gaming Ally 256GB (NVMe) connected to M2A
Applications pool: Samsung SSD 980 1TB (NVMe) connected to M2B
L2ARC: Samsung 860 Evo 512GB (M.2-SATA through adapter box) connected to SATA0
HDD: 10x WD HC520 12TB (SATA): 8 of them connected to the HBA card through two SFF-8087 to 4x SATA breakout cables, the other 2 to SATA1 and SATA2 on the MB.
Expansion slots:
PCIe x16: LSI 9211-8i HBA
PCIe x1: Intel Arc A310 through x1-x16 adapter

Some notes:

  • I initially planned to just use the 5600G’s integrated graphics as the Jellyfin transcoding device, but I’ve read really bad reviews on AMD graphics for that purpose in general, so instead I’m going with an Arc card (it’s more modern and future-proof anyway) and will probably just disable the iGPU so it doesn’t cause conflicts. Since the motherboard only has one x16 slot and it’s taken up by the HBA, I bought a simple passive adapter on Aliexpress and plan on plugging the GPU into it. Should be fine…?

  • The L2ARC drive was chosen because it’s an old, worn-down SATA M.2 SSD that I otherwise have no use for and don’t want to trust with anything more important than a cache, which won’t disrupt anything if it dies. Similarly, the app pool drive is so large solely because I’m repurposing existing hardware. I’d go without an L2ARC at all, but since this is to be a seedbox, I’d love for it to soak up some of the random reads torrent seeding generates and reduce HDD head thrashing.

  • The HDDs unfortunately have the Power Disable pin active, so I’ll have to power them from the Molex leads on the power supply through Molex to 2x SATA adapters. Hopefully that won’t be an issue. I know “Molex to SATA, lose all your data”, but I’ve been running two of those drives in a PC for many years and have not had issues.

  • I’ve read somewhere that LSI cards don’t behave particularly well on Gigabyte motherboards. I already have this board so I didn’t explicitly choose it, but if anyone wants to substantiate this claim I’d love to know.

  • Case compatibility isn’t an issue, since I’m DIYing a piece of plywood and some metal brackets to mount everything on, and it’ll sit in a ventilated closet away from the living spaces.

Software: I’m planning to run SCALE 24.04 (since this appears to be the more actively developed version, and the one the developers prefer) with the Jellyfin and qBittorrent apps from the official catalog running 24/7. The idea is 2x 5-wide RAIDZ1 vdevs in one pool, serving one huge SMB share that all the apps (and Windows clients) interact with.
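If I’ve understood the zpool syntax right, the equivalent command-line layout would be something like this (I’d build it in the GUI, of course; “tank” and the sdX names are placeholders):

```
# One pool, two 5-wide RAIDZ1 vdevs striped together.
zpool create tank \
  raidz1 sda sdb sdc sdd sde \
  raidz1 sdf sdg sdh sdi sdj
```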

I also need to migrate 40TB of existing data from my current 5-drive DAS, and ideally I want to do the following: create a pool with 1 5-drive RAIDZ1, transfer all the data to it, delete the DAS array thus freeing up its 5 drives, then add those 5 to the existing pool as a second 5-drive vdev, thus doubling the size of the pool and the big SMB share. Will this work as I imagine it to? Will TrueNAS then balance the load to account for the newly-added drives?

Again, if I’m making any incorrect assumptions or mistakes here, I’d love if people here correct me! Thanks in advance!

Skip the L2ARC for now. You have 32GB RAM. Just watch your ARC hit ratio and decide whether to add one later. An L2ARC would use up some of your RAM if added, since its index lives in ARC.

A 5-drive Z1 pool will be over 80% capacity once you copy the data from your old NAS, won’t it? And once you add the second 5-drive Z1 vdev, you will have to use a rebalancing script to distribute the existing data across both vdevs. Search the forum for it.
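The gist of those scripts, as a very rough sketch (not the actual forum script, which is more careful and verifies checksums before deleting anything):

```
# Rewriting each file forces ZFS to reallocate its blocks across all
# vdevs, including the newly added one. Illustration only; assumes
# block cloning isn't silently reusing the old blocks.
find /mnt/tank/media -type f -print0 | while IFS= read -r -d '' f; do
  cp -a "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
done
```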

I am not sure of the performance implications of having some drives on the HBA and some directly connected to motherboard, especially within the same VDEV.

Can you use the Intel Arc for video output and transcoding at the same time, or does it need to be passed through completely? I don’t know the answer to this one.


Just checked. That’s percentage of usable capacity, right? Then I’ll be at 75% or so. What are the exact consequences of exceeding 80%? I’ll read up on the rebalancing script, thanks.

Performance doesn’t really matter too much here; I’ll be limited by the gigabit network connection on the motherboard anyway. But as long as the HBA just passes through the drives to the OS and doesn’t do any processing of its own, I think it shouldn’t be much of a problem? Some motherboards have multiple SATA controllers and the user mostly doesn’t have to care about it.
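I’ll also double-check beforehand that the card is flashed to IT (passthrough) firmware; from what I’ve read, something like this should show it (sas2flash being LSI’s flash utility):

```
# Firmware Product ID should report an "IT" firmware, not "IR":
sas2flash -listall
sas2flash -list -c 0
```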

I don’t think I need video on the host system at all? TrueNAS itself can run headless.

The next version of TrueNAS will allow you to add drives to a vdev while keeping the “raid level”.
So you could start with a 5 wide raidz2, and then add 5 more drives.

2 vdevs of 5-wide raidz1 is risky. Lose 2 drives from the same vdev and your data is gone. And during a resilver, the added stress on the remaining drives can cause another one to go bust.

I am not a fan of repurposing gaming hardware; it’s not meant for 24/7 operation, especially if older. But if this is what you have, I suggest you build some resiliency into your system with a good pool/vdev layout, and don’t forget a proper backup of your most valued data.


Yeah, I know. But that’s what I happen to have and AM4 isn’t that old (the specific mobo I have was very late in the generation, too).

The next version of TrueNAS will allow you to add drives to a vdev while keeping the “raid level”.
So you could start with a 5 wide raidz2, and then add 5 more drives.

The version question is another thing I wanted to ask. I know that 24.10 plans to move to Docker as the application backend. Is it worth it to wait a bit so I can set everything up fresh on Docker, in addition to what you said? I’m just a bit afraid that the first release of 24.10 won’t be very stable anyway and it’ll take a few months of patches for the new systems to work properly.

But if this is what you have, I suggest you build some resiliency into your system with a good pool/vdev layout, and don’t forget a proper backup of your most valued data.

Most of the data on this pool can be re-acquired, it’ll just be a pain in the ass to catalog everything and do it manually. The actual important data is properly backed up already.

Supposedly the TrueNAS apps will be migrated to Docker during the update. How that will go, no one knows. The vdev expansion is not new to ZFS, but it’s probably safe to say it would be good to have backups.

It’s still best to pre-plan your pool layout, or use mirrors for easy expansion.

Raidz1 5 wide would just be too risky for me.

BUT, what you are planning will theoretically work fine. Also with your GPU in the x1 slot.

  1. Given a choice I would put the apps pool on the SATA SSD and L2ARC on the NVMe.

  2. I agree with the suggestion that you start without an L2ARC and see what the ARC hit rate is, and then try L2ARC in metadata only mode and L2ARC in full data mode. But for your use case my guess is as follows:

    • Jellyfin streaming - you are unlikely to read the same data twice with sufficient gap that it would no longer be in ARC - so I suspect that L2ARC will add nothing here. ZFS is also good at read-ahead, so you probably won’t get a lot of head thrashing from this.

    • qBittorrent - I think that L2ARC could add a LOT here, as you are likely to get leechers requesting the same data hours apart, which would be a good use case for full L2ARC

    • My own NAS only has 10GB of RAM and I still get a 99.5% ARC cache hit rate.

    So my guess (and it is no more than that) is that for your use case an L2ARC would be beneficial and I would use the 1TB NVMe for that.

  3. Use the apps pool not only for the ix-applications dataset but also for e.g. the Jellyfin and qBT metadata. Don’t forget to replicate your apps pool to HDD as a backup (see the sketch after this list).

  4. For your use case, I am not sure that you will get sufficient performance benefit from rebalancing the two vdevs to make it worthwhile. You will initially have more data on the new HDDs, and so put less stress on the old HDDs, which may be a good thing.

  5. Don’t forget to look at @joeschmuck 's Multi-Report script.

  6. I would also recommend that you add backup of your PCs’ data to your list of use cases.
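For point 3, a minimal send/receive sketch, with dataset names as placeholders (TrueNAS can also schedule this as a Replication Task in the GUI):

```
# Snapshot the apps pool recursively and replicate to the HDD pool.
# "apps" and "tank/backups/apps" are placeholder dataset names.
zfs snapshot -r apps@backup1
zfs send -R apps@backup1 | zfs receive -F tank/backups/apps
```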


Mixing SAS and chipset SATA ports, or Molex-to-SATA power should be no issue.

As said, you do not have enough RAM for an L2ARC. Drop it.

Correct. Depending on your transcoding needs, maybe you don’t even need the Arc dGPU and may leverage the CPU instead.

The ZFS storage part should be stable. The question mark is over the auto-conversion from Helm charts to docker-compose. If you know the latter, you may install it in a sandbox already and set up there: it will move straight to Electric Eel.
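For instance, something like this, where the image tags and host paths are placeholders rather than what the official apps will use:

```
# Throwaway docker-compose.yml for a sandbox test; adjust paths to taste.
cat > docker-compose.yml <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - /mnt/tank/media:/media
      - /mnt/apps/jellyfin:/config
  qbittorrent:
    image: linuxserver/qbittorrent:latest
    ports:
      - "8080:8080"
    volumes:
      - /mnt/tank/media:/downloads
      - /mnt/apps/qbittorrent:/config
EOF
docker compose up -d
```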

As also said, the main issue is pool layout: 10-wide raidz2 would be safer than striped 5-wide raidz1.
But then you need ten drives now (or at least nine or eight), OR you have to wait for Electric Eel for raidz expansion, which is actually the new (and potentially risky) ZFS feature.
Without the full set of available drives, I do not see a trouble-free migration path here. If you can wait for Electric Eel, make a 6-wide raidz2 (possibly degraded if you only have 5 drives at hand—here be dragons!), move your data to it, expand to 10-wide, and then run a pass of the rebalancing script to claim the whole space.
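The usual (unsupported, dragons included) way to start degraded uses a sparse file as the missing sixth member; roughly, with placeholder device names:

```
# Sparse placeholder at least as large as the real 12TB members:
truncate -s 12T /root/fake-member.img
# -f because mixing disks and a file vdev triggers a warning:
zpool create -f tank raidz2 sda sdb sdc sdd sde /root/fake-member.img
# Offline the file immediately so no data ever lands on it:
zpool offline tank /root/fake-member.img
rm /root/fake-member.img
# Later, swap in a real drive: zpool replace tank /root/fake-member.img sdf
```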


Thanks a lot. Overall sounds like it’s best to wait for 24.10. I have absolutely no experience in Docker (or K3S for that matter), that’s actually a lot of the reason I went with TrueNAS - for the one-click deployment of apps. I’m an advanced, but still ultimately a home user.

If you can wait for Electric Eel, make a 6-wide raidz2 (possibly degraded if you only have 5 drives at hand—here be dragons!), move your data to it, expand to 10-wide, and then run a pass of the rebalancing script to claim the whole space.

Forgot to mention that I also have an 11th drive, also 12TB, that I’d like to keep as a cold spare. So once 24.10 drops, the migration path seems to be: create a 5-drive RAIDZ2, fill it up, move the leftover data to the extra drive temporarily, then add the 5 old drives to the TrueNAS system and expand it to a 10-wide RAIDZ2, and since I now have less than 50% occupied on the newly-expanded vdev, the rebalance/rewrite should work with no issues. Then, at the end, move the data from the extra drive onto the array.

Hopefully the vdev expansion process is fully officially documented by the time 24.10 drops. At least to confirm that the existing rebalance script also works for this purpose.
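If I’m reading the raidz expansion feature right, each old drive would then be attached to the vdev one at a time with something like this (vdev name per zpool status, device name a placeholder):

```
# OpenZFS raidz expansion: attach widens the raidz vdev by one drive;
# repeat per drive, waiting for each expansion to finish.
zpool attach tank raidz2-0 sdk
```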

RAIDZ2 with 5x 12TB drives gives you a capacity of 3x 12TB (36TB total), since two drives go to parity. You wouldn’t have enough space for your old data to migrate.

Protopia, please see the ZFS Primer section on L2ARC and sizing. Your suggestion of the 1TB NVMe would have been over the recommendation. Somewhere there is a resource on ZFS and memory sizing.

Then make a 6-wide raidz2 (you need 4 drives worth of space for 40 TB), move all your data, expand with 4 drives and keep the last one as your new cold spare. Then rebalance.
This means you can actually migrate to TrueNAS now if you want and wait for Electric Eel to expand.
Documentation is linked here:

It seems that our Resident Grinch never wrote a formal Resource on this one.
But the often repeated advice about L2ARC is:

• 64 GB RAM minimum
• L2ARC sized at 5 to 10x the size of ARC (RAM)

Managing L2ARC uses RAM, so RAM has to be large enough and L2ARC cannot be excessively large.
L2ARC can be added and removed at any time, so test first whether L2ARC would be useful (use system to fill ARC and then run arc_summary).
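For example (“tank” and the device name are placeholders):

```
# Check the hit ratio after the box has seen typical load for a while:
arc_summary | grep -i "hit ratio"
# Cache vdevs can be added and removed without touching pool data:
zpool add tank cache nvme1n1
zpool remove tank nvme1n1
```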

I found the memory sizing info in the Documentation, Hardware section

Add approximately 1 GB of RAM (conservative estimate) for every 50 GB of L2ARC in your pool. Attaching an L2ARC drive to a pool uses some RAM, too. ZFS needs metadata in ARC to know what data is in L2ARC.
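For the 512GB 860 Evo in the OP, that works out to roughly 512 / 50 ≈ 10GB of RAM for L2ARC headers alone, which is about a third of the planned 32GB.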

Which adapter? If I’m not mistaken the Arc A310 draws power from the PCIe slot only. And (I could be wrong) the power available from a PCIe x1 slot is generally 25W. Card max draw is 30W, so you could be fine, but maybe not. Some adapters have options for supplementary power for this reason. None come to mind since I haven’t used x1-to-x16 risers since the mining days, & I’m not sure I’d recommend a USB riser (though honestly it might work fine for transcoding; I’ve just never even thought of testing it, so I couldn’t recommend it).

Per Gigabyte, you could actually use ECC RAM on that motherboard if you’re willing to spend a bit on DDR4 ECC UDIMMs… Whether or not it’ll actually work as ECC or just ‘work’ is a different story & depends on how far you trust them for quality of implementation.

Everyone already suggested dropping L2ARC.

Given how many revisions Gigabyte consumer motherboards generally go through, I’d argue there isn’t much that runs well on them… But this is just me feeding my salty personal experience with them & it’s anecdotal.

Any chance at all the DAS can instead be re-purposed to a backup role & you can skip this? Some have mentioned having to wait for 24.10, but I’m going to guess that there’ll be some risks involved in expanding pools, likely slightly more intensive than resilvering after replacing a drive.

If possible, why not use cheap SATA SSDs connected directly to the motherboard (if I’m counting right you’d still have 2 ports available) & instead use the M.2s as a mirrored pool for both your apps & possibly VMs? Take the 256GB NVMe if you already have it & toss it into an M.2-to-USB enclosure; BAM, fast USB storage!

I think folks already covered that this may not be the best layout.

Heavily degraded performance; past 90% it gets even worse. I’ve seen some systems completely lock up & require vendor support of all things, because even deleting files to free up space wasn’t possible. YMMV & it might not get that bad, but it is generally not recommended to exceed 80% utilization.

You could just tape those pins off (or break them off) if you’re worried about Molex. You could also disconnect the 3.3V wire from the connector on the PSU side: squeeze the wire’s crimp flaps together so you can pull the pin out of the connector.

If your phone came with a SIM card slot ejector tool thingy, you could use that to push the pins in. But most folks don’t feel comfortable performing any kind of modification on their PSUs at all, so…

Make sure you’ve got some fans pointed at the HDDs. Some foam under the case would be clutch too if this isn’t in your basement: in the past I could hear my NAS from the basement while it was doing a scrub; those HDD vibrations magically resonated through my floors until I found a way to isolate the case from the floor.

At 90% ZFS switches to a different strategy for finding free space, which results in a performance cliff.

I like to suggest that at 80% you begin thinking about increasing your pool’s capacity… and that you do it BEFORE you reach 90%. Of course, how long you have is dependent on your typical fill rate.

IIRC at 95% ZFS will throttle your write speed… and at 100% the pool will lock up.

If you are using your pool for block storage, and it’s mechanical HDDs (rust), then it’s recommended to keep the pool at 50% usage, which minimizes fragmentation-related performance degradation.
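Either way, it’s easy to keep an eye on (“tank” being a placeholder):

```
# Shows fill percentage and fragmentation per pool:
zpool list -o name,size,allocated,free,capacity,fragmentation tank
```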


Which adapter? If I’m not mistaken the Arc A310 draws power from the PCIe slot only. And (I could be wrong) the power available from a PCIe x1 slot is generally 25W. Card max draw is 30W, so you could be fine, but maybe not.

Specifically this one. Theoretically I shouldn’t be using anything but the card’s media engine, which barely uses any power at all. But then I also wonder if it’s safer to go with an A380 that has an external power plug (like this one), assuming that the external connection will “take over” for an underpowered slot and keep the card fully powered.

Any chance at all the DAS can instead be re-purposed to a backup role & you can skip this?

Most of the point of this exercise in the first place is storage expansion (as the DAS only holds 5 disks) and moving off of the DAS’s built-in RAID controller, so unfortunately no. Plus, if I migrate the data without adding the old drives back to the NAS, I’ll already be at near 80% capacity and would have to consider expansion anyway.