TrueNAS Intel Optane Advice

I’m new to TrueNAS Scale and I’m setting up a new home lab server to replace my aging Synology NAS.

My current build is:

  • AMD R5 7600
  • Gigabyte B650
  • 64 GB of DDR5 RAM (non-ECC)
  • NVMe OS drive
  • 3 x 12 TB Seagate IronWolf Pro data drives (RAIDZ1)
  • 4 x Intel Optane P1600X 118GB drives
  • TrueNAS Scale 23.10

Things I store on this:

  • Server Backups
  • Cloud backups (Drive, Google Photos, OneDrive, etc.)
  • Documents
  • Applications
  • SVN & Git server repos
  • Plex Video Files (actual hosting is done using a Linux VM)

Yes, I know server-grade hardware is better, but finding used hardware on eBay is not my thing, so I took a gamble on consumer hardware. I spent a lot of time comparing options before purchasing and decided to roll the dice with this.

I was doing research and came across some information on the Level1Techs site about using Intel Optane drives as a metadata cache to improve spindle drive performance. I found the Intel Optane P1600X 118GB drives on sale for $60 a drive (new) and thought that would be perfect. In all the documentation it looked like I could make a RAIDZ1 of these drives and pair them with my data drives. What I have found is that TrueNAS Scale only supports stripe or mirror, which severely limits the size of the metadata vdev. So I’m hoping someone can help with the following questions:

  • Is there a way to use RAIDZ for a metadata vdev?
  • If I fill the space on my metadata vdev, will it store the remaining metadata on the data vdev?
  • If I can’t do a RAIDZ on the metadata, can I stripe two mirrored vdevs?
  • Should I use the Optane drives for a ZFS log (SLOG) or ZFS cache (L2ARC) instead, or is there something else I can use them for that will speed up the spindle drives?

Thanks in advance!

First, please add the SCALE tag to your post in order to receive targeted help.

You don’t appear to have a correct understanding of ZFS, how it works, and what you should do. Let’s address this first; please read the following resources:

I will not overly chastise you about your hardware choice, but do note that you can use ECC RAM with AMD’s R5 7600 if the motherboard supports it[1] and that using RAIDZ1 with 12TB drives is a bit risky[2].

So, the first thing is that in ZFS you only have one cache drive, and that’s the L2ARC drive: basically, you use a drive as a RAM (ARC) extension. Since it does not hold critical data, it and its contents can be lost without experiencing data loss; however, in order to make this work the system needs to hold a table in the ARC referencing the data inside the L2ARC, meaning that having too much L2ARC could actually hurt your performance[3]. As such, the ideal ARC:L2ARC ratio is between 1:4 and 1:8, which means that with your 64GB of RAM you should aim for a single 256GB drive (hard to find these days, but you can always overprovision[4] it) or a 500GB drive; going lower won’t hurt your performance, you just won’t get all the benefit you could.
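For reference, adding an L2ARC device is a single command on a plain OpenZFS system (the TrueNAS UI does the same thing from the Pool Manager); the pool name and device path below are placeholders:

```
# Add a single Optane as L2ARC to a hypothetical pool named "tank"
zpool add tank cache /dev/disk/by-id/nvme-INTEL_SSDPEK1A118GA_EXAMPLE

# Inspect ARC and L2ARC sizes and hit rates afterwards
arc_summary
```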

Now, what your research likely found is the SLOG drive (mistakenly known as a write cache drive outside this forum): I have already provided a resource about it, but basically it’s a drive you want to use when you enforce sync writes, and it needs to have great endurance and performance in mixed workloads (simultaneous random reads and writes); from what you wrote, I don’t think you need it.

This does not mean you cannot use your four Optanes: one as an L2ARC drive would be a fine choice, and two of the other three could be used in a mirror as a separate pool for Apps and VMs to run on, with the final drive being left as a ready spare in case one of the other three turns out to be faulty in the future… or as a SLOG if you ever have such a necessity.

As I wrote, you should have a separate non-HDD pool for your Apps and VMs for performance reasons, which could also come in handy if you ever decide to spin down your HDDs.
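A rough sketch of that layout in plain zpool syntax (device paths are placeholders; on TrueNAS you would build the equivalent from the UI):

```
# Two Optanes as a mirrored pool for Apps and VMs
zpool create apps mirror /dev/disk/by-id/nvme-OPTANE_A /dev/disk/by-id/nvme-OPTANE_B

# A third Optane as L2ARC for the HDD pool ("tank" is a placeholder name)
zpool add tank cache /dev/disk/by-id/nvme-OPTANE_C
```

The fourth drive simply stays on the shelf as the ready spare (or future SLOG) mentioned above.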

Now, about the metadata VDEVs: those are a kind of special vdev and as such contain critical data, meaning that losing one of these VDEVs would mean losing the data of the affected pool; metadata VDEVs need to have the same parity level as your data VDEVs. Metadata VDEVs are used to create fusion pools (see the documentation linked up-topic) and are a way to increase the performance of a pool composed of HDDs; in my opinion, you will not need them.[5]
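Purely for illustration, a fusion pool with a mirrored metadata VDEV looks like this in plain zpool syntax (disk names are placeholders):

```
# RAIDZ1 data vdev plus a mirrored special (metadata) vdev in the same pool
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc \
  special mirror /dev/nvme1n1 /dev/nvme2n1
```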

Feel free to ask if you have any more questions!


  1. I am a firm believer in ECC RAM, and although DDR5 already has some error correction built in, that’s just because otherwise the manufacturers’ yield would be greatly reduced; not using ECC with ZFS is willingly creating a weak point, period. ↩︎

  2. Assessing the Potential for Data Loss | TrueNAS Community ↩︎

  3. for this reason, L2ARC is suggested only after having maxed out your motherboard’s RAM, and definitely not under 64GB. ↩︎

  4. SSD Over-provisioning (OP) - Kingston Technology ↩︎

  5. and if you ever need them, you can always add them later; just note that, since they contain critical data, once added they cannot be removed without destroying the whole pool. ↩︎

1 Like

There are two great ways to deal with metadata quickly. One is an L2ARC, for which you should have at least 64GB of RAM; the other is a sVDEV.

There are multiple benefits to using L2ARC, including that L2ARC can be allowed to fail without any negative impact on pool data (everything it caches also exists in the pool). Pretty much any SSD will do, as long as it can be read quickly. I use a 1TB SSD for L2ARC. With the advent of persistent, metadata-only options for L2ARC (which have to be set manually), L2ARC is a great way to dip your toes into the SSD / HDD hybrid universe. Once the metadata L2ARC “heats up”, it really starts to speed up Finder operations, see here.
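For the record, the manual settings referred to are a dataset/pool property plus a module parameter; a minimal sketch, assuming a pool called "tank":

```
# Cache only metadata (not file data) in L2ARC for this pool
zfs set secondarycache=metadata tank

# Persistent L2ARC rebuild is enabled by default in OpenZFS 2.x; this confirms/forces it
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
```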

A sVDEV requires a lot more planning, but it can speed up operations for small files and metadata consistently, see here. For sVDEV use, your drive will be split by TrueNAS into two halves: one for small files, the other for metadata. Unlike L2ARC, if your sVDEV goes, so does your pool, so I use a three-way mirror of enterprise-grade SSDs. You will have to look into how much room your small files take up in order to plan the proper SSD capacity to use as a sVDEV.

To plan for sVDEV, you need to review the size distribution of small files in your pool - current and expected. ATM, I cannot recall the CLI commands that compile that information, apologies. If your sVDEV small file catalog overflows, the small files will be written into the slower main pool and the benefit of a sVDEV re: small files will be diminished.
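In the meantime, a crude first approximation with GNU find (the mount path and the 32K cutoff are just examples):

```
# Count files at or below a candidate small-block cutoff
find /mnt/tank -type f -size -32k | wc -l

# Total space those files occupy, in MiB
find /mnt/tank -type f -size -32k -printf '%s\n' | awk '{s+=$1} END {printf "%.1f MiB\n", s/1048576}'
```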

For my use case, a 1.6TB sVDEV seems to be sufficient (50TB pool). However, I made a point of nuking every small file I could by consolidating small files into sparsebundles and similar archives on the server. My file size limit for small files is 32kB; however, your limit will vary based on your use case and should be investigated carefully.

Similarly, you should determine how much metadata your pool needs. This helpful post has the CLI command for that. Remember, only half of your sVDEV capacity will go towards metadata, so plan accordingly (i.e. allow sufficient room based on how big you expect your pool to be). The rule of thumb (and it will vary as the use case dictates) is that 0.3% of your pool size will be metadata.
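If that link ever goes stale, the command usually quoted for this is zdb’s block-statistics run; it is read-only but can take a long time on a large pool (pool name is a placeholder):

```
# Block statistics for the pool, including metadata totals
zdb -Lbbbs tank
```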

Lastly, the GUI still gives the admin zero insight into how full a sVDEV is - either on the metadata or small files side. So you will have to brush up on your CLI skills to check occasionally, especially if your pool is undergoing major changes like the addition of a busy database, for example.
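For the occasional check, something like the following shows per-vdev usage, including the special vdev (pool name is a placeholder):

```
# SIZE / ALLOC / FREE per vdev
zpool list -v tank
```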

4 Likes

This is really good advice. Your SSDs are very likely too small for sVDEV use but would be super for hosting VMs and an L2ARC that is set to metadata-only and persistent.

2 Likes

Thank you for all the information. I was struggling to find more information on the special VDEV types and this helped a lot. I apologize for the incorrect terms; I was trying to use the terminology of the UI, which doesn’t seem to match fully with what most people use.

I agree on the ECC memory, but unfortunately getting a motherboard that supported ECC RAM plus 64GB of ECC RAM completely trashed my budget. Unfortunately my motherboard lost ECC support in a later BIOS upgrade that I needed to enable bifurcation on my PCIe x16 slot. Everything is a trade-off and I took the non-ECC gamble.

With all that being said, the original plan was to run a RAIDZ2 with 4 x 12TB drives, but I ended up having to put my 4th drive in my Synology to survive until I could get all the files moved to this new server. Once I get everything moved, I’d like to add the 4th drive as a hot spare. Do you know if you are able to add a hot spare to an existing VDEV?

Lastly, I was reading the ZFS guide you sent and I noticed it said that a stripe can be converted to a RAIDZ. Would it be a better idea to start with one drive as a stripe, copy all my files over to it, and then upgrade that stripe to a Z2 once I can add my 4th drive?

2 Likes

A stripe can be converted to a mirror, which some call RAID1. Is that what you meant? RAIDZ1 is different.
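For reference, that conversion is a single attach on the command line (pool and device names are placeholders):

```
# Turn a single-disk vdev into a two-way mirror by attaching a second disk
zpool attach tank /dev/sda /dev/sdd
```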

Looking into your available hardware, I propose the following:

4x 12TB as striped mirrors or RAIDZ2 (same storage efficiency, but striped mirrors have some pros if you want to expand later, plus better IOPS and faster resilvering)

2x P1600X mirrored as a special vdev for the metadata

Keep in mind that you have to set the small block size before you fill the pool. A 16KB cutoff comes to around 80GB on my 50% filled 4x 16TB striped mirror pool.
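Setting it is a dataset property; as a sketch, with the pool name and the 16KB value being examples:

```
# Blocks at or below this size land on the special vdev; set it before copying data in
zfs set special_small_blocks=16K tank
```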

You could use one P1600X as SLOG, but if your main use case is SMB, it’s 99% useless. L2ARC is also an option since you “only” have 64GB RAM.

Keeping VMs on a different storage pool than the HDDs allows you to spin down the HDDs to save energy.

I would have gone the AM4 route if costs were relevant…
ECC is basically supported on most ASUS and ASRock motherboards.

You must have misread. The only way to convert a stripe/mirror to RAIDZ#, or a RAIDZ1 to RAIDZ2, is to back up the data, destroy the pool, recreate it, and restore. So get the fourth drive now!

If you go with mirrors, you may experiment with a special vdev, but if you go for RAIDZ2 (less flexible, more resilient), there’s no way out, save for backup-destroy-restore, and a three-way mirror as special vdev would be strongly recommended.
Persistent L2ARC is a safer way to go as a first step. It is also a reversible choice, and needs no redundancy.

None of that advice is specific to SCALE, so I took the liberty to edit the title.

1 Like

I would start with a single two-way mirror VDEV, then add the other two drives as a second two-way mirror VDEV after you have completed the copy.
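In plain zpool terms that would look roughly like this (TrueNAS does the same from the UI; device names are placeholders):

```
# Start with one two-way mirror, copy the data over...
zpool create tank mirror /dev/sda /dev/sdb

# ...then, once the other two drives are free, extend the pool with a second mirror
zpool add tank mirror /dev/sdc /dev/sdd
```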

To clarify here: striped mirror means 2x two-way mirror VDEVs and would give you better performance and flexibility.

If he goes with a RAIDZ2 VDEV, he wants a three-way mirror… but again, I do not recommend a metadata VDEV over a persistent L2ARC drive.

That 1% being a macOS user, since apparently it enforces sync writes.

That’s not the main point of having Apps, Jails, and VMs on a separate pool: the main reason is performance.

1 Like

macOS does request sync writes for Time Machine, but since this is a background process, no performance improvement is noticeable from using a SLOG.

Let’s keep it simple: SMB = No SLOG

SLOG is for iSCSI, databases, mission-critical VMs.

NFS is a bit of a mixed case: it defaults to sync writes, due to its historical use for hosting Unix home directories. But if NFS is used as an alternative to SMB for sharing general data, I suppose it is best to set sync=disabled on the share and not bother with a SLOG either.
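That is a per-dataset property; a minimal example, with the dataset name being a placeholder:

```
# Let NFS writes complete asynchronously, like a typical SMB share
zfs set sync=disabled tank/share
```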

1 Like

SLOG is needed if you do sync writes to HD and want performance.

That can be iSCSI, NFS or SMB.

It’s really very simple.

Yes. Windows does not do sync SMB writes, but all the world is not a Windows box.

I believed any SMB working with macOS would enforce sync writes. I must be remembering wrong.

2 Likes

This could be another Dashboard pane suggestion - what has my SLOG done for me today, how is it performing, etc. similar to networking panes. Perhaps add cumulative writes, etc. to assess how toast the thing is.

3 Likes

I always assumed it was included in the reporting tab.

It is under Reporting, as long as you know which drive is your SLOG and then select the right reporting data for it. But there is no data on what percentage of writes go through the SLOG, etc.
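Until the GUI grows such a pane, zpool iostat is the usual workaround (pool name is a placeholder):

```
# Per-vdev ops and bandwidth every 5 seconds; watch the log vdev rows
zpool iostat -v tank 5
```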

Yeah. Nah.

Not in Scale (Dragonfish RC1).

In SCALE you get Disk I/O or Disk Temperature.

That’s it.

I miss the dataset and disk capacity views

1 Like

Thank you @Davvo for all the information. After reading through everything you suggested, in the limited free time I had this weekend, I plan to follow your advice.

My current setup plans (with a slight modification) are:

  • 2 x 12TB drives in a mirror for Storage 1
  • 1 Intel Optane drive as an L2ARC for Storage 1
  • 2 x 12TB drives in a mirror for Storage 2
  • 1 Intel Optane drive as an L2ARC for Storage 2
  • 2 Intel Optane drives for temporary app storage
    • Optane drives have great endurance, but they are not the best performers here and don’t seem like the right fit for this.
    • I would like to replace these in the future with a more standard NVMe drive (maybe Gen 4) with a bit more storage space (probably 500GB). Please correct me if you feel this is a bad choice.

I misread it the first time and took it to mean you can convert to any RAIDZ. After you suggested this, I went back and re-read it and realized a stripe can only be converted to a mirror. Thanks for pointing me in the correct direction here.

This is what I wanted to do, but unfortunately my 4th drive is held up in my other NAS until I can copy the data off. I don’t trust my old NAS to survive the file move without it. Also, as @Davvo stated, the two mirrors give me more performance and flexibility for now, at least until TrueNAS adds the ability to expand VDEV sizes.

I did look at AM4, but since I did not want a GPU, I was stuck with a G-series CPU. At the time of purchase, the 5600G was out of stock and the sale prices on the AM5 hardware were actually cheaper. AM5 also gives me an upgrade path in the future. Honestly, we could argue the hardware choices all day and I already spent too much time weighing the options. At the end of the day, this is what I have and what I’m stuck with for the time being.

Thanks all!

Optanes are never a bad choice (yes, they are that good), but yeah, you might want more space. It depends on your needs; my jails’ pool is a pair of mirrored 250GB SSDs.

ZFS is a greatly entertaining time sink.

1 Like