Which is better, Hardware (without TrueNAS) or Software (with TrueNAS) RAID?

I’ve seen some general pros and cons on Google for using hardware RAID versus software RAID, but I’m curious what people who have used both have to say on the matter.

I have a video editing suite with an Avid server (3rd-party server) and am trying to settle on which I should be using.

Comes down to whether you care about the data or not.

RAID works great for scratch disks. I have had good luck with hardware RAID in my Oyen Digital external boxes, for example. However, I would never expect hardware RAID in general to be as performant re: recovery as software RAID can be, nor to detect / repair issues as reliably as TrueNAS can.

I have lost data on hardware RAID, and I have also had hardware RAID perform as expected when drives went bad. It depends. However, given how limited hardware RAID systems are with their embedded MCUs, ASICs, and so on when something does go wrong, and given how little information they can give you… I prefer software.

Most software solutions are pricier, though this also depends; some RAID cards are incredibly performant and cost accordingly. I’ll let other folks chime in on what would work best for you, but I would expect a 3-VDEV-wide pool consisting of mirrored SSD VDEVs hosted on a 10GbE TrueNAS CORE server to be super fast and reliable for your use case.
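To make that concrete, here’s a rough sketch of what such a pool looks like at the command line; the pool name and device names are made up, and on TrueNAS you would normally build this through the web UI instead:

```
# Hypothetical layout: three mirrored SSD VDEVs striped into one pool
zpool create tank \
  mirror /dev/da0 /dev/da1 \
  mirror /dev/da2 /dev/da3 \
  mirror /dev/da4 /dev/da5

# Check the resulting layout
zpool status tank
```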

1 Like

Thanks Constantin, that’s a helpful answer.

One technician recommended I set it up as RAID-Z2 for redundancy. Is there a reason you are recommending mirrors over that? To my knowledge, rebuilds when replacing a drive are much faster on a mirrored setup, but is there any other reason you say that? Obviously it gives me less overall storage space…

Also, possibly a silly question, but I’m assuming a software RAID still has hot-swappable drives the same way a hardware RAID does? (I am an end user, i.e. a video editor, so I’m just trying to get to grips with this.) Or is all this dependent on how you set up the RAID?

Oh, I also just noticed you said SSD VDEVs… Aren’t SSDs a big no-no when it comes to RAIDs?

That’s really a matter of the hardware you’re using. TrueNAS in all its incarnations supports hot-swap just fine, but not all hardware does.

Not at all, at least not when it comes to TrueNAS.

Good drives will reward you, bad ones will punish you. Some SSDs are set up with a fast flash cache up front and slower flash behind it; once that front cache is full and cannot be flushed to the slower flash quickly enough, you have a similar mess as with SMR HDDs. So some research is required. I see no reason not to use SSDs for your use case. However, I would choose units that can handle a sustained workload.

Mirrored VDEVs are very fast and good for IOPS, IIRC. Striping them (i.e. three VDEVs, each a mirrored set of SSDs) gives you even more IOPS and storage capacity.
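And if you ever need more space or IOPS later, a pool of mirrors grows easily; something like this (device names made up, and again the TrueNAS UI is the normal way to do it) just stripes another mirrored VDEV into the pool:

```
# Hypothetical: add one more mirrored VDEV to an existing pool of mirrors
zpool add tank mirror /dev/da6 /dev/da7
```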

ZFS was designed and implemented with a million dollar budget and countless person years by Sun Microsystems to replace hardware RAID.

In my opinion - shared by many forum regulars and probably iXsystems - it succeeded on all counts.

That’s the main reason why ZFS/TrueNAS on top of a RAID controller is a good recipe for data loss: ZFS wants to manage the raw disks itself, so it does not work well with a RAID controller sitting in between - see above.

In my career I have seen many completely painless swaps of failed disks and automatic rebuilds with RAID controllers - of course. These products are not intentionally crappy, and if your OS of choice is Windows or VMware, there are few other options for getting any redundancy.

The main advantage of ZFS over any hardware RAID solution is the on-disk format: with hardware RAID it is completely proprietary.

What are you going to do if the controller or the mainboard or both fail and you cannot get an identical replacement because the system in question is 6 years old and the manufacturer of that controller has gone out of business?

Your data is perfectly fine on those disks - there just isn’t a system in the world that can read it. Good luck finding a used XY controller on eBay.

With ZFS there is no such problem. Whether you use SATA disks connected via an LSI SAS HBA, use your mainboard’s SATA ports, or use an external storage system (JBOD) connected via SAS - it’s irrelevant.

If the system “explodes” and the disks are fine, all you need is:

  • any hardware that can connect the disks
  • if TrueNAS for some reason cannot be installed: a recent version of Linux or FreeBSD

And all your data (zpool) will be there. Good as new.
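As a sketch of what that recovery looks like on any Linux or FreeBSD box with OpenZFS (the pool name here is made up):

```
# List the pools ZFS can find on the attached disks
zpool import

# Import the pool that was found
zpool import tank

# If the old system never exported the pool cleanly, force the import
zpool import -f tank
```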

That’s why ZFS is the last filesystem you will ever need.

6 Likes

Okay, that is a pretty good pitch, pmh. I think I’m sold!

The other great benefit with TrueNAS is this very handy forum that gives you answers to any questions you may have in minutes. The best “service contract” you can get. 🙂

Fair enough. My server currently has HGST Ultrastar He10 10TB 7200 RPM SAS 12Gb/s 256MB Hard Drives. Seems to work pretty fast. I don’t think I’m likely to change that to SSDs at this point, but thanks for the data!

I use the same drives in a single Z3 VDEV with an sVDEV (to help the NAS with small files and metadata). Pretty sure it’s not the right solution as a scratch drive for video editing work, but it’s awesome for long-term storage.
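For anyone wondering what the sVDEV part looks like, here is a rough sketch; the names are made up, and the special VDEV needs to be redundant (e.g. a mirror) because losing it loses the whole pool:

```
# Hypothetical: add a mirrored special VDEV for metadata and small blocks
zpool add tank special mirror /dev/da8 /dev/da9

# Send blocks up to 64K on this dataset to the special VDEV
zfs set special_small_blocks=64K tank/archive
```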

I don’t recall who pointed this out on the old forum, but I think it is valid: all RAID is software RAID. Whether that software takes the form of firmware in a dedicated piece of hardware, or instead runs on a general-purpose computer, it’s still software–and the general-purpose computer is going to have far more memory and CPU cycles available to do the job.

4 Likes

Agreed.

The problem with hardware RAID is that the firmware is both proprietary and generally fixed, meaning updates are rare and bugs can stay hidden without the user knowing they exist.

On the other hand, two of the times I lost data on hardware RAID (and just had to restore…), it was because we had inadequate monitoring. In one case it was a dual-disk failure, where the first disk’s failure was not noticed until the second disk failed and took out the file system.

The other time I lost data on a hardware RAID, it was a bad block on one disk that was not detected. Then another disk failed, and when it was replaced, the RAID set failed to rebuild because of the bad block on the first disk. This can be overcome on hardware RAID with “patrol reads” (what the RAID card vendor calls them), which are similar to a ZFS scrub.

The weird thing about that failure: the bad block was on the OS disk and I could not re-mirror with hardware RAID because of it, yet the bad block was not actually in use. So the OS was 100% recoverable, but a ROYAL PAIN in the rear to fix.
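Since “patrol reads” and scrubs came up: the ZFS side is a one-liner, and TrueNAS sets up a periodic scrub task for each pool out of the box. A sketch with a made-up pool name:

```
# Read and verify every block in the pool, repairing from redundancy where possible
zpool scrub tank

# Check scrub progress and any repaired or unrecoverable errors
zpool status tank
```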

It should also be mentioned that under some conditions, a hardware RAID set can be recovered with software RAID. Basically you need to figure out lots of details:

  1. Offset from beginning of disk for data stripes
  2. Width of RAID-3/4/5/6
  3. Block size
  4. Parity algorithm

All in all, if someone paid me to try (and it can be done virtually with a copy of the disks), I would charge thousands of US dollars. It is much easier to send the disks to a recovery service.

1 Like

If you use LSI or Adaptec hardware RAID, you can plug the drives into an HBA and Linux mdadm will do all the heavy lifting of decoding the metadata. All you need to provide is a list of physical drives and the RAID level that was used.
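A rough sketch of what that can look like (device names made up); mdadm understands a few foreign on-disk formats such as DDF and Intel IMSM, so how well this works depends on what your particular controller wrote to the disks:

```
# Show whatever RAID metadata mdadm recognizes on each drive
mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Try to assemble an array from the metadata it found
mdadm --assemble --scan
```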

2 Likes

Agreed. But it also means using a high-quality RAID system from a vendor with wide market acceptance, a history of reliable recovery tools, etc.

Years ago I used an “XFX RAID” hardware solution for the Mac G5 Pro tower and it seemed to work well enough… until it didn’t, and then the lack of available software tools, etc. led to a complete pool loss. That experience likely planted the seeds of my journey to TrueNAS, even if I used a couple of hardware RAID solutions in the meantime without major issues.

The Oyen Digital Mobius 5 does all the things that @jgreco abhors, such as port multiplication, hardware RAID using a JMicron chip, etc., but with the exception of one motherboard, that series has treated me well. The XFX experience taught me the value of backup strategies, so the impact of a failed array has been NIL.

Where I grow extra sore is with packages like SoftRAID, which consistently had issues in my experience and which also have the temerity to charge $80-140 annually for the privilege. When it came out, SoftRAID made all sorts of claims re: performance, ease of use, etc. Maybe the product is there now, with things like allowing the use of FileVault or better failed-drive detection. But back then it was a disaster and not worthy of being sold at any price. I’d happily deal with installing OpenZFS on the Mac well before entrusting any data to SoftRAID again.