I have 8x 2TB SATA drives and am moving from a Dell VRTX box with 22 SAS drives running a RAID 50. I noticed a performance increase with the RAID 50 and was wondering if there is a striped option in SCALE. I was thinking of having two vdevs of 4x 2TB each and then striping those vdevs, but I'm not sure how to set that up in SCALE.
I am running an older FreeNAS box that I recently updated to TrueNAS CORE.
Of course there is. Any time you have more than one vdev in a pool, all vdevs are striped together, so two (or more) RAIDZ1 vdevs would be striped together in something like RAID 50.
Not sure what you mean by that. If you want a pool with two striped vdevs, create it that way in the GUI. This has been possible in every version of Free/TrueNAS that’s supported ZFS (so, everything in the last 15 years or so), but the mechanics in the GUI have changed.
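For reference, the equivalent CLI operation is a single `zpool create` with two `raidz1` groups; a sketch only, where the pool name `tank` and the `/dev/sdX` paths are placeholders (on TrueNAS you would normally build the pool through the GUI instead, so the middleware stays aware of it):

```shell
# Sketch: one pool, two 4-wide RAIDZ1 vdevs striped together (RAID 50-style).
# "tank" and the /dev/sdX names are placeholders - substitute your own devices.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  raidz1 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Verify the layout: both raidz1 groups should appear under the same pool.
zpool status tank
```

There is no separate "stripe" option to pick: listing two vdev groups in one pool is what stripes them.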
With two 4-wide RAIDZ2 vdevs, you'd have the same storage efficiency as mirrors (50%)…
So, is this really what you want? That's more equivalent to RAID 60.
I would personally use an 8-wide RAIDZ2… the IOPS would not be the same as 2x 4-wide RAIDZ2, but you would have 50% more capacity and still be able to survive any two-disk failure.
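The capacity difference is simple arithmetic: usable drives = total drives minus parity drives per vdev. A quick sketch with this thread's 8x 2TB drives:

```shell
# 2x 4-wide RAIDZ2: each vdev gives 4 - 2 = 2 data drives -> 4 drives usable.
raid60_usable=$(( 2 * (4 - 2) * 2 ))   # vdevs * data drives per vdev * 2TB
# Single 8-wide RAIDZ2: 8 - 2 = 6 data drives usable.
z2_usable=$(( (8 - 2) * 2 ))           # data drives * 2TB
echo "2x 4wZ2: ${raid60_usable}TB usable"   # 8TB
echo "8wZ2:    ${z2_usable}TB usable"       # 12TB, i.e. 50% more
```

(Raw capacity before ZFS overhead, so treat the numbers as approximate.)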
An 8wZ2 may have lower performance, but better redundancy. Generalizations follow.
In your current 2x 4wZ1 (two vdevs, 4-wide, Z1) setup, you can lose one disk from each group of four; if you lose two from the same vdev (group of four), then your pool is unavailable.
In a single 8wZ2 topology, you can lose any two disks and the pool remains available. All eight disks have to "work together," so to speak, when servicing I/O, rather than potentially being split up among the two groups of four, but depending on your workload you may not notice. If you're doing something with low parallelism and sequential, predictable I/O, like streaming backups or large files from/to a single client, then Z2 is likely to provide the same performance with better redundancy.
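To put a rough number on that redundancy gap (my own back-of-envelope math, not from the vendor docs): once one disk has failed, what fraction of possible second failures loses the pool?

```shell
# 2x 4-wide RAIDZ1: after one failure, 3 of the remaining 7 disks sit in the
# same (now-degraded) vdev; a second failure among those loses the pool.
echo "2x4 Z1: $(( 3 * 100 / 7 ))% of second failures are fatal"
# 8-wide RAIDZ2: any single second failure is survivable.
echo "8wZ2:   0% of second failures are fatal"
```

So roughly two in five second failures kill the striped-Z1 pool, versus zero for the 8-wide Z2.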
The only issue I am noticing is that streaming media from my Plex VM, whose datastore is on the current Z2 pool, keeps freezing. I didn't have any issues when the Plex VM datastore was on the RAID 50 pool.
Does that make any sense?
It's been a while since I've worked with one, but I believe the Dell VRTX might be using a hardware RAID card, which isn't recommended for TrueNAS. That might be contributing to some of your issues here.
The new-to-me server is running Dragonfish-24.04.2.3
on a Supermicro X11 motherboard under ESXi 6.5.
I have an LSI 9211-8i (FW: P20) running in IT mode, passed through ESXi to a TrueNAS SCALE VM.
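If you want to double-check the card from inside the VM, LSI's `sas2flash` utility (or plain `lspci`) will report the firmware version and whether it's the IT image; a sketch, assuming `sas2flash` is available on the box:

```shell
# Confirm the HBA is visible to the guest at all.
lspci | grep -i lsi            # should show the SAS2008-based controller

# Report firmware details; look for an IT-mode Firmware Product ID
# and a 20.00.xx.xx version to confirm the P20 IT image.
sas2flash -list
```

Seeing the card here confirms passthrough is working; the firmware line confirms IT mode rather than IR.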
At this point I was going to test an equivalent striped layout (RAID 50/60-style) vs. a standard 8-wide Z2.
Can you check the model numbers? WD “Red” specifically (not “Red Plus”) were SMR (Shingled) drives for some time, and those are Bad News specifically because they’ll return IDNF (Sector ID Not Found) under certain circumstances.
Correct HBA. Being a VM does add some potential complexity, especially under 6.5, as I don't know if IOMMU passthrough was quite as robust there as in later revisions.
Further questions:
How much CPU and RAM is assigned to the TrueNAS VM?
For Plex, are you checking to see if your media is being transcoded? If yes, do you have hardware acceleration for this? Is your Apps (or Plex transcode) directory residing on the same RAIDZ2 as the source media?
Yes, I have seen these errors and recently read about the WD Red issues with SMR. I thought the only way to check was to pull each drive out. Another reason to switch NASes.
Seeing lots of these messages on the old NAS box.
If you did a passthrough of the HBA, you should be able to see the device details (model number) through the web UI (Storage → Disks, then expand the drop-down for a disk). If it's a WD40EFAX then it may be SMR.
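You can also pull all the model numbers at once from a shell on the SCALE box, rather than clicking through each disk; a sketch using standard tools (in the WD Red line, "EFAX" model suffixes are the SMR drives and "EFRX" are CMR):

```shell
# List every disk's model string in one shot.
lsblk -d -o NAME,MODEL,SIZE

# Or query each disk via SMART; look for EFAX (SMR) vs EFRX (CMR) suffixes.
for d in /dev/sd?; do
  smartctl -i "$d" | grep -E 'Device Model|Model Number'
done
```

No drive pulling required: the model string SMART reports is the same one printed on the label.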
This is my "old" box so I am not too concerned with it right now; it's been stable for years. I am just getting rid of the VRTX as it's too much server for what I'm using it for, and simplifying to a 2U Supermicro box.
I got 20 free 2TB drives, so I'm using them until they need replacing.