Unraid to TrueNAS Help!

Ok so I need help. I'm coming from Unraid and want to move across to TrueNAS. I'd have done this sooner, but my disks have always been many different sizes and I've never had the cash to buy a full set of identical disks. Over time I have been upgrading, though, and with the upcoming OpenZFS 2.3 RAIDZ expansion I think I'm finally going to pull the trigger.

At the moment I don't know how best to lay out my storage. My current drives are:

  • 6x 18TB HDD
  • 3x 8TB HDD
  • 2x 4TB HDD
  • 2x 1TB SSD
  • 2x 512GB NVMe

In Unraid, all my HDDs except the 2x 4TB are in my array, the 2x 1TB SSDs are used as a write cache, and the 2x 512GB NVMe drives hold appdata, Docker images and VM data. I currently have 57.7TB of data used on the array.

I'm looking to move to TrueNAS mainly due to speed and performance issues. I have a 10Gb network at home and I'd like to saturate it as much as possible.

The server is used primarily as a media/file server and will run the typical *arr apps, Plex and SABnzbd.

My thoughts at the moment are:

  • Vdev1 - 3x 18TB RAIDZ1
  • Vdev2 - 3x 18TB RAIDZ1
  • Vdev3 - 3x 8TB RAIDZ1
    Over time I will upgrade the 8TB drives to 18TB (rough sketch of that upgrade below).
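If I've understood the ZFS docs correctly, that upgrade would be a replace-and-resilver per disk, with the vdev growing once the last disk is swapped — something like this, with 'tank' and the disk names as placeholders:

    zpool set autoexpand=on tank
    zpool replace tank <old-8TB-disk> <new-18TB-disk>   # repeat for each of the three disks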

How do I best use my SSDs and NVMe drives?

Is a cheap old 128GB SSD suitable to install TrueNAS on?

Thanks

Assuming your NVMe is very fast, set up an NVMe pool for apps etc.

Maybe set up an SSD pool too.
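E.g., something like this (untested sketch, pool and device names made up — on TrueNAS you'd do this from the GUI, which creates the equivalent):

    zpool create apps mirror nvme0n1 nvme1n1   # 512GB NVMe mirror for apps/VMs
    zpool create fast mirror sda sdb           # 1TB SSD mirror for anything that wants low latency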

Your plan for your 18TB and 8TB vdevs is fine.

Personally, I would actually use a 6-wide RAIDZ2, but I prefer a bit more redundancy.
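For illustration, the CLI equivalents (disk names are placeholders; the GUI does the same thing underneath):

    # Your plan: one pool, two 3-wide RAIDZ1 vdevs
    zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf
    # My preference: one 6-wide RAIDZ2 — same usable space, but any two disks can fail
    zpool create tank raidz2 sda sdb sdc sdd sde sdf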

4TB drives are no longer economical to buy when you can get 8TB drives for almost the same price.

The 128GB boot disk is fine; even 32GB would be fine. The important characteristic is longevity… SCALE writes a fair bit of logs, and this tends to burn out low-quality USB flash drives and SSDs.

Thanks for the quick reply,

The 4TB disks are sitting in the server doing nothing at the moment, so I can easily just bin those…

The reason I was looking at two vdevs for the 18TB drives was speed: as I understand it, my write speeds will be much better with two vdevs rather than one larger RAIDZ2. Is that correct?

Could I use the SSDs as a write cache, 'similar' to Unraid, where data moves from the cache to the array?

The NVMe drives are capable of 3,000MB/s read and write, so they will be fine for apps/Docker/VMs, I imagine.

Thanks

No, there is no such concept in TrueNAS.

I know it wouldn't be exactly the same as Unraid; that wasn't really the question. I'm unsure of the purpose of SLOG, but it seems to be what I need.

If I set the two SSDs up as a mirror for SLOG, then any time I write to the server the writes would first go to the SSDs. Am I correct?

No, it isn't. Again, TrueNAS doesn't have anything like a write cache. SLOG isn't a write cache, and it is beneficial only in a very narrow use case where you need synchronous writes. And in those cases, with modern SSD capacities and prices, you'd be much better off with an SSD pool rather than trying to use a SLOG with a pool of spinners.
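For what it's worth, whether writes are treated as sync at all is a per-dataset property you can inspect and override — a quick sketch, with the dataset name just an example:

    zfs get sync tank/media            # standard | always | disabled
    zfs set sync=disabled tank/media   # treat all writes as async: faster, but recent writes can be lost on power cut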

Ok so it seems I’ve totally missed the point on SLOG.

If I went with the suggestion that @Stux mentions above, is it possible to roughly estimate what the read/write performance of the server might be? I don't want to go to the hassle of changing environments if there is no clear and obvious speed increase, as speed is my primary goal.

Thanks

Just to drop some half-baked 2am knowledge in here regarding write caching.

TrueNAS, or rather ZFS, has a form of write caching in ARC (its memory cache). The way this works is by grouping writes into something called a transaction group (TXG). Typically, I believe, the timeout for this operation is five seconds. Writes are processed in memory and then synced to the disks.

From my basic understanding, there can only be one transaction group in an 'open' or 'processing' state and one in a 'syncing' state at any given time. (There is a third state between them, 'quiescing', but it's mostly transitional and I don't think it's majorly important.) While one TXG is syncing, another is able to enter the 'open' state and begin accepting writes, to be synced after the previous transaction group has finished.

If your disks are slow (slower than your memory, so almost always), you'll hit a limit where the open TXGs fill with data quickly but are stuck waiting on the previous one to finish syncing to disk. So what you'll typically see when writing a massive amount of data is a very nice fast initial burst of writes, which then slows down to the speed of your disks after about 10-15 seconds (at least in my experience).
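(If you want to check that timeout yourself on SCALE, I believe it's the OpenZFS module parameter zfs_txg_timeout, in seconds:)

    cat /sys/module/zfs/parameters/zfs_txg_timeout   # defaults to 5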

A SLOG is a device that can be used for data that (as perceived by the OS) must be written to some form of non-volatile storage as soon as possible. These are sync writes. In most cases your writes will be async, so you won't run into this. As dan mentioned, there are not many use cases where a large number of writes will be synced (SMB, at least on Windows, for example, is always async unless manually specified).

As I mentioned, my knowledge of this particular area is not brilliant, but hopefully this gives you a basic overview of how these pieces work.


So, some of the use cases where sync writes are made:

  1. ESXi and other VM hypervisors loading block storage over NFS or iSCSI
  2. VMs hosted on TrueNAS using zvols (not actually sure about this)
  3. SMB from macOS.

Now, there are always some sync writes involved in actually maintaining a ZFS pool, and a SLOG can benefit those.

The issue is that a SLOG drive has to have some important characteristics (power-loss protection and very low write latency) to make a worthwhile SLOG. A few years ago the answer was simple: "use Optane". Now it's a bit trickier: "use something as good as Optane, but not Optane, since you can't get Optane anymore… new".
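(For completeness, and purely as a sketch with made-up device names: if you did end up needing one, a mirrored SLOG is added like so, and log vdevs can be removed again later with zpool remove.)

    zpool add tank log mirror nvme0n1 nvme1n1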

ANYWAY, you probably don’t need a SLOG.

And L2ARC… well, wait till you see your ARC hit ratio. If it's 99.5%, then you're only missing 0.5%… which leaves only 0.5% to be fielded by the L2ARC.
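You can check that ratio from the shell with arc_summary. And if the miss rate ever did turn out to matter, adding a cache vdev is a one-liner (device name made up):

    arc_summary                # look for the ARC hit ratio in the summary
    zpool add tank cache sdg   # optional L2ARC on a spare SSD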

Re estimating perf…

A mirror vdev will write at the speed of the slowest drive in the mirror, and will read at up to the sum of the drives in the mirror.

A RAIDZ vdev will write/read at up to the sum of its data drives, but will have the IOPS of a single drive… i.e. poor random access.

And HDDs tend to perform at between 25-200MB/s, depending on where on the disk you're writing and how fragmented it is… so call it 50-100MB/s per drive.
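Back-of-envelope for your proposed layout, then (assumptions, not measurements): each 3-wide RAIDZ1 has two data drives, so two 18TB vdevs plus the 8TB vdev is six data drives in total — sequential throughput on the order of 6 x 50-100MB/s ≈ 300-600MB/s. That's a decent chunk of, but probably not saturating, a 10Gb link (~1.2GB/s). Random I/O is another story: three RAIDZ vdevs gives you roughly three drives' worth of IOPS.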

Metadata vdevs could be beneficial… but it's a fairly advanced concept, and I'd suggest skipping it for now.
