Just RAIDZ1 with 11 NVMes?

I’m using an ASUSTOR Flashstore 12 Pro FS6712X, which has 12 internal NVMe slots and 4 external USB ports.

I really need at least 10 NVMe slots for data. I initially set up my pool across the 12 NVMes using dRAID2 by accident instead of RAIDZ2.

Now I’m about to wipe the pool and set it up correctly, but I have an issue with the boot drive on the USB ports which is making me consider using one of the NVMe slots for the boot drive instead.

But that would mean RAIDZ1 across 11 NVMes in order to keep 10 drives for data.

What’s my most likely failure mode with a bunch of NVMes? If one drive fails, I can just power off the whole system until I get a replacement drive and install it. So I just need another drive to not fail while the recovery process takes place.

Which do you think is less risky?

  1. RAIDZ2 across 12 NVMes, in a dodgy system that requires hardware button power cycles every few days
  2. RAIDZ1 across 11 NVMes, in a stable system with high uptime

And BTW, if a drive fails, what information is TrueNAS going to give me, and how will I figure out which drive failed?

RAIDZ2 across 11 NVMes?

Wouldn’t leave me with enough space for data.

Before you do that, did you check if the ASUSTOR supports booting from one of the 12 NVMes?

As for RAIDZ1 vs RAIDZ2: it is not unheard of for 2 devices to fail at the same time.

Don’t do RAIDZ1 across 11 (or 12) drives. Do RAIDZ2.

2/11ths or 18% redundancy overhead is pretty darned good.

Personally, if I needed 40TB of usable space I would start by buying 7x 8TB NVMes to run in a RAIDZ2, plus 1x 128GB NVMe as a boot drive, leaving 4x NVMe slots for future expansion. (As discussed in another thread, the ASUSTOR does not have enough PCIe lanes to get full performance from 12x NVMe slots - and if the performance is limited by lanes, you should get the same or better performance from 7x NVMes as from 12x.)
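To sanity-check the capacity arithmetic in suggestions like these, here is a minimal sketch (raw data capacity only; actual usable space comes out a few percent lower once ZFS metadata, padding, and TB-vs-TiB conversion are accounted for):

```python
def raidz_data_tb(n_drives: int, parity: int, drive_tb: float) -> float:
    """Raw data capacity of a single RAIDZ vdev: (drives - parity) * size.
    Ignores ZFS metadata/padding overhead and TB-vs-TiB conversion."""
    return (n_drives - parity) * drive_tb

# 7x 8TB in RAIDZ2 -> 40 TB of raw data capacity
print(raidz_data_tb(7, 2, 8.0))
# 12x 4TB in RAIDZ2 -> 40 TB; 11x 4TB in RAIDZ2 -> 36 TB
print(raidz_data_tb(12, 2, 4.0), raidz_data_tb(11, 2, 4.0))
```

The 38TB and 35TB figures quoted elsewhere in this thread are these raw numbers after the real-world overheads.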

But if I was forced to use a USB SSD as a boot drive to allow all slots to be used for data storage (which indeed did happen), then after trying different USB slots, I have a stable system that stays up for weeks between hangs (which is acceptable for my home media system - waiting 5 minutes for a reboot before I can watch a TV programme is not a big deal for me).


I think you have been given some good advice. My two cents:

What capacity will the NAS need over the next 3 to 5 years? (I prefer to use 5 years, as HDDs typically last that long; NVMe drives may last considerably longer, depending on the usage.) Right now you are looking at about 38TB using the hardware you have (twelve 4TB NVMe drives in RAIDZ2). Will that capacity be good for that long? I suspect not.

If you can get away with using eleven of the 4TB drives in a RAIDZ2 configuration, then later double that using 8TB drives over the years, that brings you up to 70TB using eleven NVMe drives.

I would purchase one 512GB (or similar) NVMe drive as your boot drive. Buy Gen 3 for all future NVMe purchases for this system; do not waste your money on Gen 4 or Gen 5, as you will never see the speed benefits and you will really heat up the system for no reason.

For each of your NVMe drives, take a photo or draw yourself a little map, and write down the serial number of each drive and its physical location. If the M.2 slots are numbered and you can see those numbers without removing the NVMe, that works best. When you boot up TrueNAS, in the Disk section you can add a “Description” for each drive; enter the physical location. See my screenshot for an example. It sure makes locating a module fast - for me, the serial numbers are on the side against the board, so I needed this feature.
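On the earlier question of how you’d spot which drive failed: the device state shows up in zpool status style output, and pairing the flagged device name with its serial and your slot map locates the hardware. As a toy illustration of picking the faulted device out of that state column - the sample text below is made up for the example, not real TrueNAS output:

```python
import re

# Illustrative (not real) zpool status output for a degraded RAIDZ2 pool
sample = """\
  pool: flash
 state: DEGRADED
config:
        NAME         STATE     READ WRITE CKSUM
        flash        DEGRADED     0     0     0
          raidz2-0   DEGRADED     0     0     0
            nvme0n1  ONLINE       0     0     0
            nvme1n1  FAULTED      3   112     0  too many errors
            nvme2n1  ONLINE       0     0     0
"""

def failed_devices(status_text: str) -> list[str]:
    """Pull out device names whose state is FAULTED/UNAVAIL/OFFLINE."""
    pattern = re.compile(r"^\s+(\S+)\s+(FAULTED|UNAVAIL|OFFLINE)\b")
    return [m.group(1) for line in status_text.splitlines()
            if (m := pattern.match(line))]

print(failed_devices(sample))  # ['nvme1n1']
```

The vdev summary lines (the pool and raidz2-0 rows) show DEGRADED rather than FAULTED, so they don’t match - only the actual failed device does.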

Replace one of the NVMe drives with the smaller boot drive (I’d use slot 1 if you can, or slot 12). This size difference will be a key feature for you: it will be by far the smallest drive, so when you need to install TrueNAS, you will know which drive to install it to.

Keep your 4TB drive as a spare, you never know when you will need it.

Now build your RAIDZ2 using eleven 4TB NVMe drives.
You will have about 35TB of storage at this point. When you can afford it, replace each drive one at a time, resilver, and then repeat until all eleven have been replaced. At that point the pool will pop to about 70TB, but only after all eleven drives have been replaced.

If your data is not critical, then RAIDZ1 is fine, however most of us value our data, or just would not like to rebuild the pool and copy all that data back as that is time consuming.

I would not use a USB boot device, as you have already seen the problems. However, an experiment… Create a cron job to reboot the system every night. That is possibly a temporary solution. There are a few commands and each is simple: you have reboot or shutdown -r now, or if you wanted a 1 minute delay you could use shutdown -r +1, for example. Either command will do the job.

Hope some of this helped.


How compressible is the data you’re storing?

Bear in mind that the default lz4 compression is a balance between throughput and effectiveness; and depending on the speed/capacity tradeoff you might look towards zstd-1 or similar.

As others have said, perhaps investigate which USB port(s) are more reliable than the others, then use those for the boot drive. Some external USB ports might be shared with internal devices through an internal USB hub chip, or several external USB ports might share one internal hub chip.

I personally experienced USB power saving problems with my miniature desktop computer (which runs Linux). I had to write a boot-time script to disable USB power saving, then re-write it, and later update it. Now, for the last few years, it seems to work reliably.

Next, using a USB-to-SATA or USB-to-NVMe adapter with a SATA or NVMe SSD has been found to be far more reliable than cheap USB flash drives. There ARE good USB flash drives, but as a standard consumer it can be hard to find them.

Last, using a mirrored USB boot pool can help keep a TrueNAS server up and available if / when USB problems happen. We can hope that the USB problems do not happen to both halves of the boot pool mirror at the same time.



It is a shame that some of these new, dedicated NAS computers don't come with proper boot devices. Or just a powered SATA port for SATA DOM. Using such might be more expensive but would be tons more reliable than USB.

TrueNAS may not work well with the eMMC storage device included in the ASUSTOR Flashstore 12 Pro FS6712X. And having only 8GB is a bit limiting (though it may work okay…).

Going to go against the grain here and say that for NVMe, with nearly instant resilvering (OK, minutes), I wouldn’t feel bad about a Z1 myself. If you do proper backups, I wouldn’t worry at all. You did say you could even shut the machine down until you get a replacement, so clearly it doesn’t have to be up 24/7. For spinning rust, no way would I do a Z1!

However, you said a Z2 doesn’t give you enough storage; that is concerning, as ZFS doesn’t really handle pools more than 80% full very well. So if you are already (almost) out of space, that is not a good sign for the plan.


Having dealt with a fair number of crappy consumer M.2 disks that have failed or come close to failing, a system like this genuinely does make me wonder.

Things to consider:
Rebuilds might be “lightning fast” up until the NVMe runs out of SLC cache, and the rest of the rebuild will continue at less than SATA II speeds. Scrubs will be nice and fast, though… A 2TB disk may only have ‘up to’ 226GB of SLC, if you TRIM the drive properly and the firmware is in a good mood that day, and it only gets worse the higher the capacity you reach (numbers from the Samsung 990 Pro specs).
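To put rough numbers on the SLC-cache point, here is a back-of-envelope resilver-time estimate. Every speed and cache size below is an illustrative assumption, not a measured figure from any particular drive:

```python
def resilver_hours(data_tb: float, slc_gb: float,
                   slc_write_gbps: float, sustained_mbps: float) -> float:
    """Time to write one replacement drive full of data: fast while the
    pseudo-SLC cache lasts, then at the sustained TLC/QLC write speed."""
    data_gb = data_tb * 1000
    in_cache = min(slc_gb, data_gb)
    remainder_gb = data_gb - in_cache
    seconds = in_cache / slc_write_gbps + (remainder_gb * 1000) / sustained_mbps
    return seconds / 3600

# ~4TB to a drive with 226GB of SLC at 5GB/s, then ~300MB/s sustained
print(round(resilver_hours(4, 226, 5, 300), 1))  # ~3.5 hours, not minutes
```

Still far quicker than resilvering spinning rust, but not “instant” once the cache is exhausted.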

There’s also the question of how a CPU with 8 PCIe lanes can make any use of the speed that 8 NVMe disks can generate. I don’t think you’ll gain much from striping on this class of hardware.

It is not uncommon at all for NVMe disks to fail without warning: all of a sudden they will begin to cough up a bunch of uncorrectable errors and die. Based on my experience, I would argue it’s more common the less the drive is actually used.

And of course, the choice of an 11-wide RAIDZ1 is incredibly daring no matter the storage medium. You really should consider RAIDZ2.

Regarding your USB boot issues… I would really just try some other USB sticks. Using a pair of USB sticks in a boot mirror is a good compromise in a small/medium server, IMO - at least if you don’t have the space for a dedicated boot device.


Update: I looked up the specs of this device…

It claims to have 8GB of eMMC storage; does this not show up in TrueNAS?

It does, but AFAIK, TrueNAS needs at least 16GB for an install?

ZFS of any level is not a backup anyway, and my resilvers have not been slower than SATA (though if he’s going to run at 90% full, that does change things, as I also mentioned). The general posters here seem to think that if a pool is lost it’s the end of the world; that’s what backups are for. As I said in my post, IF he has good backups. And he said he could take the downtime, so uptime is not an issue either. This is clearly not a mission-critical, must-always-be-up work setup, based on what he said. I guess I would be much more flexible here: I would not hesitate in his scenario, assuming I understand what he wrote the way it was intended. If he has no problem restoring, should it ever come to that, it is not an issue to use a Z1.

That is the official requirement, yes.

For reference, each upgrade makes a boot environment; these eat up about 2.2-3GB each (and they don’t really grow unless you go and enable apt + developer mode).
As long as you cull these out when you upgrade, and you keep the “system” dataset on the NVMe pool, honestly the 8GB eMMC will be perfectly fine as a boot device, IMO.

Keep in mind you might want to turn off automatic upgrade downloads, and always upgrade manually with an .update file, using the NVMe pool as the temporary location for the update.

And if you want to “test the waters”, you can still make only an 11-wide pool; that way, if the eMMC isn’t enough, you can just “replace” your boot-pool onto an NVMe with a click.

I’ve installed a VM with an 8GB disk to make sure: it installed fine, and a zpool list shows 4.73GB available on boot-pool. It’s not a ton of space, but it should be enough for normal (albeit conservative) use. And since you say this is a non-critical device, it may be an acceptable idea.
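For what it’s worth, a trivial back-of-envelope on that 4.73GB figure against the ~2.2-3GB-per-boot-environment numbers mentioned above:

```python
def extra_boot_envs(free_gb: float, be_gb: float = 3.0) -> int:
    """How many additional boot environments fit in the free space,
    assuming each costs about be_gb (3GB is the pessimistic figure)."""
    return int(free_gb // be_gb)

# 4.73GB free: one extra BE at 3GB each, two at the optimistic 2.2GB
print(extra_boot_envs(4.73), extra_boot_envs(4.73, 2.2))  # 1 2
```

So there is room for roughly one or two environments beyond the active one before old ones must be pruned - tight but workable if you cull on every upgrade.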

Unfortunately, while it is presented as a 7.28 GiB SSD in the pool creation wizard, the OS installer doesn’t offer it as an option to install to.

Writes will be mirrored and written to both flash drives, so they will likely fail at about the same time.

My advice: use a single or mirrored USB SSD(s) instead (SSK USB SSDs have the same form factor as a USB flash stick).

I’m curious if that still holds true for SSD/NVMe drives. With HDDs there is a seek/rewrite speed consideration that would not apply to an SSD/NVMe drive. This isn’t the thread to figure that out, however I am curious now - just like how iSCSI likes pools 50% or less full.

I’ve definitely broken this rule far enough to find out there is a rounding bug in the middleware: when you get close to 100%, it will round to 100%… And let’s be perfectly clear: performance doesn’t magically degrade or fall off a cliff because of a number (and for me, if there was a degradation, I didn’t notice it…).

My honest take on the 80% guidance/copypasta…

If your data is WORM (write once, read many), it doesn’t matter; the 80% rule is, AFAIK, for ensuring that a pool doesn’t turn into a fragmented mess with database/VM workloads specifically, which mainly affects spinning media.

No one these days is running giant databases on spinning pools anymore, so I really don’t think it applies.