Setting up a new TrueNAS SCALE box: is there a RAID 50 equivalent, or should I run RAIDZ2?

I have 8x 2TB SATA drives and am moving from a Dell VRTX box with 22 SAS drives running a RAID 50. I noticed a performance increase with the RAID 50 and was wondering if there is a striped option in SCALE? I was thinking of having two 4x 2TB vdevs and then striping those vdevs, but I'm not sure how to set that up in SCALE.
I am running an older FreeNAS box that I recently updated to TrueNAS CORE.

Cheers

Vdevs are always striped, so the simple answer is “Yes”.


Of course there is. Any time you have more than one vdev in a pool, all vdevs will be striped together–so two (or more) RAIDZ1 vdevs would be striped together in something like RAID 50.
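In SCALE you would normally build this from the GUI (Storage → Create Pool), but the underlying zpool command illustrates the layout. This is a sketch for illustration only — the pool name `tank` and the `/dev/sd?` device names are placeholders, and on TrueNAS you should let the middleware create pools rather than running zpool by hand:

```shell
# RAID 50 analogue: two 4-wide RAIDZ1 vdevs striped into one pool.
# "tank" and the sd* device names are placeholders - substitute your own.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Verify the layout: both raidz1-N groups appear under the same pool,
# which means writes are striped across them.
zpool status tank
```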

Hi, can you provide some setup steps for that, or should I just use an 8-wide RAIDZ2?

Not sure what you mean by that. If you want a pool with two striped vdevs, create it that way in the GUI. This has been possible in every version of Free/TrueNAS that’s supported ZFS (so, everything in the last 15 years or so), but the mechanics in the GUI have changed.

Since I am new to SCALE, maybe that is what I am looking for. I was not able to create two vdevs and then stripe them together?

Starting over again… with screenshots… stay tuned. Is that better than RAIDZ2?


So is that the ZFS equivalent to RAID 50?

Thanks Dan,

So I guess striping two RAIDZ2 vdevs would be a little overkill? Stay with two RAIDZ1 vdevs in a pool?

Can you check my screenshots? Does that look correct?

You have the same storage efficiency as mirrors…

So, is this really what you want? That’s more equivalent to RAID 60.

I would personally use an 8-wide RAIDZ2… the IOPS would not match 2x 4-wide RAIDZ2, but you would have 50% more capacity and still be able to survive any two-disk failure.
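To put numbers on that, a quick sketch assuming 2TB per disk and ignoring ZFS metadata, padding, and fill-level overhead:

```shell
# Rough usable-capacity math for 8x 2TB drives (parity only, no ZFS overhead).
DISK_TB=2

# Single 8-wide RAIDZ2: 2 parity disks out of 8 -> 6 data disks
Z2_8W=$(( (8 - 2) * DISK_TB ))        # 12 TB usable

# Two striped 4-wide RAIDZ2 vdevs: 2 parity per vdev -> 4 data disks total
Z2_2X4W=$(( 2 * (4 - 2) * DISK_TB ))  # 8 TB usable (same as mirrors)

echo "$Z2_8W $Z2_2X4W"
```

12 TB vs 8 TB is where the "50% more capacity" figure comes from.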


I have remade my pool with striped RAIDZ1 vdevs.

Would an 8-wide RAIDZ2 give better performance, or just better use of storage?

An 8wZ2 may have lower performance, but better redundancy. Generalizations follow.

In your current 2x 4wZ1 (two vdevs, 4-wide, Z1) setup, you can lose one disk from each group of four. If you lose two from the same vdev (group of four), then your pool is unavailable.

In a single 8wZ2 topology, you can lose any two disks and the pool remains available. All eight disks have to “work together” so to speak when servicing I/O, rather than being able to potentially split them up among the “two groups of four” - but depending on your workload, you may not notice. If you’re doing something with a low parallelization count of sequential, predictable I/O - like streaming backups or large files from/to a single client - then Z2 is likely to provide the same performance with better redundancy.
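The redundancy gap can be counted directly. A quick sketch that enumerates all 28 two-disk failure combinations across eight disks (the disk indices are illustrative, with disks 0–3 as one Z1 vdev and 4–7 as the other):

```shell
# Count which two-disk failures each layout survives.
# 2x 4-wide Z1: disks 0-3 are vdev A, disks 4-7 are vdev B.
total=0
z1_survive=0
for i in $(seq 0 7); do
  for j in $(seq $((i + 1)) 7); do
    total=$((total + 1))
    # 2x 4wZ1 survives only if the two failed disks are in different vdevs
    if [ $((i / 4)) -ne $((j / 4)) ]; then
      z1_survive=$((z1_survive + 1))
    fi
  done
done
# An 8-wide Z2 survives all $total combinations; 2x 4wZ1 only some of them.
echo "2x4wZ1 survives $z1_survive of $total two-disk failures"
```

So 2x 4wZ1 survives 16 of the 28 possible two-disk failures, while a single 8wZ2 survives all 28.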


Thanks Honey…

The only issue I am noticing is that streaming my media from a Plex VM, whose datastore is on the current Z2 pool, keeps freezing. I didn’t have any issues when the Plex VM datastore was on the RAID 50 pool.
Does that make any sense?

It’s been a while personally, but I believe the Dell VRTX might be using a hardware RAID card - which isn’t recommended for TrueNAS. That might be contributing to some of your issues here.

Can you post a full list of details?

Wait, how did you know I was using a VRTX? lol

The current datastore is running on my old dedicated FreeNAS box, recently updated to TrueNAS-13.0-U6.2. It is running WD Red 4TB drives in a Z2 pool.

The new-to-me server is running Dragonfish-24.04.2.3
on a Supermicro X11 mobo running ESXi 6.5.
I have an LSI 9211-8i (FW P20) running in IT mode, passed through ESXi to a TrueNAS SCALE VM.
At this point I was going to test an equivalent striped RAID (RAID 50/60) vs the standard 8-wide Z2.

Hope this helps?

… you mentioned it in your initial post? :stuck_out_tongue:

Can you check the model numbers? WD “Red” specifically (not “Red Plus”) were SMR (Shingled) drives for some time, and those are Bad News specifically because they’ll return IDNF (Sector ID Not Found) under certain circumstances.

Correct HBA - being a VM does add some potential complexity, esp. under 6.5 as I don’t know if IOMMU was quite as robust there as in later revisions.

Further questions:

1. How much CPU and RAM is assigned to the TrueNAS VM?
2. For Plex, are you checking to see if your media is being transcoded? If yes, do you have hardware acceleration for this? Is your Apps (or Plex transcode) directory residing on the same RAIDZ2 as the source media?

  1. Doh… I was tired lol

  2. Yes, I have seen these errors and recently read about the WD Red issues with SMR. The only way to check is to pull each drive out. Another reason to switch NASes.
    I'm seeing lots of these messages on the old NAS box:
    [screenshot of error messages]

  3. What version of ESXi do you recommend?

TrueNAS VM:
2 vCPUs
16GB RAM
16GB drive

Still in a “test” environment.

If you did a passthrough of the HBA you should be able to see the device details (model number) through the webUI (Storage → Disks, then expand the drop-down for a disk) - if it’s a WD40EFAX then it may be SMR.
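From a shell on the TrueNAS VM you can also pull the model numbers directly with smartctl (which ships with TrueNAS). A sketch — the `/dev/sd?` glob is a placeholder and may need adjusting for your device names:

```shell
# Print the model number of each disk; a WD40EFAX is the SMR variant to
# avoid, while WD40EFRX is CMR. Adjust the device glob as needed.
for dev in /dev/sd?; do
  echo "== $dev =="
  smartctl -i "$dev" | grep -i 'device model'
done
```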

Are you transcoding with Plex?


There are a variety:
EFRX
EFPZ
EFAX

This is my “old” box so I am not too concerned with it right now; it’s been stable for years. I am just getting rid of the VRTX, as it’s too much server for what I am using it for, and simplifying to a 2U Supermicro box.
I got 20 free 2TB drives, so I was using them until they need replacing.

Good, good, bad. You’ll want to replace the EFAX drives with non-SMR ones.


So it all looks good at this point. So you are saying that an 8-wide RAIDZ2 has better redundancy than two striped 4-wide RAIDZ1 vdevs.

Would the performance difference be negligible between the two options?

Is there a test I can run to check read/write performance? Would a simple file transfer be good enough?

iperf is the recommended speed test - see @HoneyBadger’s post in this thread: Performance test | TrueNAS Community
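iperf measures the raw network path between client and NAS, which lets you rule networking in or out before blaming the pool; a simple file transfer can be skewed by caching on either end. A minimal iperf3 sketch — the IP address is a placeholder for your NAS:

```shell
# On the TrueNAS box: start an iperf3 server (listens on port 5201)
iperf3 -s

# On the client (replace 192.168.1.50 with your NAS IP):
# -t 30 runs the test for 30 seconds
iperf3 -c 192.168.1.50 -t 30

# -R reverses direction, testing NAS -> client throughput
iperf3 -c 192.168.1.50 -t 30 -R
```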