System "refresh" questions

My build/config is in my signature. Currently running TrueNAS SCALE Cobia [release].

This was a series of updates going all the way back to FreeNAS.

I am going to re-install TrueNAS from scratch, and plan to use TrueNAS CORE.

I am also going to use 2x 80 GB Intel SSDs as mirrored boot drives instead of the current SATA DOM.

Questions:

  1. Will my current pool be detected and work in CORE after I plug the drives in? I have never done this before; I need the data on 2 separate drives (each just a single-drive stripe) to be available after the reinstall.

  2. Is it better to mount those boot SSDs inside the case and connect them directly to the motherboard SATA ports, or is it OK to have them in caddies up front? I just replaced the backplane (it went bad), so I am not even sure that connecting boot drives to the backplane is a good idea. Convenient, but…

  3. Are 6 WD RED disks in RAIDZ2 optimal for general file storage and backups? I already have 4; I could technically return them, but I'm not sure.

Q&A time!

  1. Not likely, unless you’ve never done a “pool version upgrade” since 22.12 (Bluefin). TrueNAS CORE 13.0 currently runs on OpenZFS 2.1, whereas 23.10 (Cobia) uses OpenZFS 2.2. The upcoming CORE 13.3 will support this, but is still in BETA2 status.

If you post the output of zpool get all YOURPOOLNAME | grep feature, it will tell us which feature flags (and therefore what minimum OpenZFS version) are needed to import the pool.
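
For illustration, here's roughly what that check looks like against a hypothetical pool named "tank" - flags reported as "active" (rather than "enabled" or "disabled") must be understood by the importing system:

# List the feature flag properties and their state on the pool
zpool get all tank | grep feature
# A line like the following means an older OpenZFS cannot import the pool read-write:
# tank  feature@head_errlog  active  local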

  2. Either works - having them in caddies up front means you can hot-swap in case of failure, but it also means you need the BIOS/EFI boot ROM left on your HBA. Some users opt to remove that (or do so when following a guide).

  3. As long as they aren’t SMR (Shingled Magnetic Recording) drives, WD REDs are fine, and a 6-wide RAIDZ2 (6wZ2) is a pretty “standard” arrangement that balances usable capacity, redundancy, and performance for general file storage and backups.
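
For reference, a minimal sketch of what that layout looks like at the command line, using a placeholder pool name and disk names (in practice you'd build it through the TrueNAS UI):

# One 6-wide RAIDZ2 vdev: any two of the six disks can fail without losing the pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5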

3 Likes

I would be curious if there was some stability issue on Cobia or similar that makes you want to run CORE again. It would suck for you to get all set up there and have to migrate again in the future for whatever reason.

3 Likes

Long story, but in short - this is my “home” server for home stuff.
At the same time I have someone more knowledgeable building TrueNAS for me for the office on Gen10 hardware with NVMe drives. CORE gives better SCSI performance (it’s a known issue with SCALE, I believe), so it will be CORE.

At home I want to set up replication so the work server will replicate data to the home server (an additional backup). I know it’s all interchangeable(?), but it feels like the correct way is to keep the systems the same.
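
For what it's worth, replication between CORE and SCALE boils down to ZFS send/receive, so mixed versions generally work as long as the receiving side's OpenZFS understands the features used in the stream. A rough, hedged sketch of what a replication task does under the hood, with placeholder dataset, host, and pool names:

# Snapshot the office dataset and push it to the home box over SSH
zfs snapshot tank/office@backup-2024-06-01
zfs send tank/office@backup-2024-06-01 | ssh home-nas zfs recv -F backuppool/office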

#1. I created those pools in SCALE, so I assume they ARE the newer version. Does that mean they won’t “read” in CORE?

#2. I think it’s a little too much to deal with the boot ROM on the HBA. It's not mission critical; I can bring the system down if a drive needs to be replaced. So, SSDs connected to the motherboard it is! Just need to find a good way to mount them. Probably velcro/zip ties.

#3. 6x RAIDZ2 it is.

24.04 made some pretty big changes. :wink:


From: https://www.truenas.com/blog/truenas-dragonfish-performance-breathes-fire/

Correct, you’ve got a couple of features set to active that are 2.2.x only (e.g. head_errlog), so CORE/OpenZFS 2.1.x will say “I don’t understand / can’t mount this pool”.
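
If you want to double-check a specific flag yourself, something like this (pool name is a placeholder) shows whether it is merely enabled or already active - it's the "active" state that stops an older OpenZFS from importing:

# Query the state of a single feature flag
zpool get feature@head_errlog YOURPOOLNAME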

2 Likes

Ahh fair enough. Coming from Cobia that was true. But now in Dragonfish the SCSI performance has been greatly improved, to the point of exceeding CORE, but again your mileage may vary :slight_smile:

2 Likes

I will ask him where we're at and whether it was this version, but he said that above 10 Gbps, performance is worse than CORE. That was last week during tests.

The problem with CORE - it crashes (also a known issue) when an NVMe disk is detached. That is fixed in SCALE. But SCALE was slower in our use case.

To serve iSCSI, you should use mirrors, not raidz2.
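
For contrast with the RAIDZ2 sketch above, a mirror-based layout for block storage would look roughly like this (placeholder pool and disk names) - more vdevs means more IOPS, which is what iSCSI workloads tend to need:

# Three 2-way mirrors striped together: triple the vdev count of a single RAIDZ2
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5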

I noticed the above iX performance analysis is using “a TrueNAS M50 with 20 SSDs in 4x 5wZ1”.

Not sure what that means, but it's not using mirrors; perhaps the advice holds for HDDs.

Thing is, a 5-wide RAIDZ1 is going to have a 4-disk-wide data stripe… which means if your block storage is using blocks smaller than that, you still only get the space efficiency of mirrors… I think. For example, with 4 KiB sectors, an 8 KiB block on RAIDZ1 takes 2 data sectors plus 1 parity sector, padded up to 4 sectors, so only 50% of the allocation is data - the same as a mirror.

It means that SSDs possibly have enough IOPS to deal with many tiny blocks simultaneously when serving block storage.
OP has WD Red HDDs, and will presumably NOT like the outcome of feeding a 10 GbE link with block storage from a single RAIDZ2 vdev of spinning rust.

1 Like

I am sorry everyone got confused here. I am NOT doing iSCSI or that setup in this topic. I responded on why I wanted to stay with CORE: the separate system will be NVMe-based, and it is fastest with CORE. We’ve done tests on the same array in SCALE/CORE and CORE wins. Nothing to do with the RAIDZ2 in question here.

Acknowledged, but which version of SCALE?

SCALE 24.04 has shown verified, much faster performance than previous versions, at least in some scenarios, so it’s worth knowing whether your benchmarking is potentially out of date or not.

Yes, the latest version was checked: 24.04.
There were many variables - we tried 100 Gb cards too - but the best results were CORE with 40 Gb cards.

1 Like