New Member - Micro-SD? HDD vs SSD? Wasted space for vdevs? Need pointer to next steps

Hello, I’m Steve, a new user to the forum and to TrueNAS CORE. I work at a school district and am investigating retasking former VMware hosts as TrueNAS CORE 13 hosts.

I’ve gone through the setup, read through some of the intro documents, and watched a couple of videos, and I’m at the point where it seems to make sense; in other words, I feel I am at the “a ‘little’ knowledge is a dangerous thing” stage. Please give me some direction for moving forward with this scenario:

Goal:
Resiliency first, capacity second, performance third; VM storage and network storage

Current Environment:
VM farm currently served by 2 hosts over iSCSI from an EMC SAN, with 8 TB used out of 30 TB
Synology NAS with SATA disks, 8 TB used out of 30 TB

TrueNas Platform Options:
Platform: 2 x Dell R740 servers, each with 16 x 2.5" drive bays, 2 x 24-core CPUs, and 384 GB RAM
1.92 TB SAS 10K 12 Gbps drives x 12
1.2 TB SAS 10K 12 Gbps drives x 9
450 GB SATA SSD 12 Gbps x 4
400 GB SATA SSD 12 Gbps x 2
300 GB SAS 10K 12 Gbps x 4
400 GB SAS SSD 6 Gbps x 12
1.2 TB SAS 10K 6 Gbps x 20

Questions:

  1. SLOG - Based on the videos and reading, the ZIL lives on the same drives as the data unless you create a separate SLOG vdev. However, the size/use of the SLOG is tiny. Do we still need to allocate a pair of 400 GB SSD drives for the SLOG?
    1.5) Micro SD for SLOG? The servers have internal dual ports for Micro SD cards. I was wondering if it would be better to use a vFlash or SD card for SLOG, if a very resilient card exists, etc. I did see another post where someone indicated they don’t hold up to the r/w cycles. I wanted to see if this is still true.
  2. L2ARC and ARC - My servers have 384 GB RAM. Do I need to create a separate L2ARC vdev? Or is it like the SLOG, where if I don’t create a separate vdev it uses the data vdev?
  3. Metadata VDEV - I gather this is to make small reads and writes faster. Does this need to match the layout of the data vdev? (If so, how would that make it faster?)
  4. Use with VMware - NFS or iSCSI? At a previous company we used NetApp appliances for VMware, but with NFS over jumbo frames at 10 Gbps instead of iSCSI, and we got great performance. What are your thoughts on that with TrueNAS?
  5. Dedup vs. compression - Again using NetApp, we used dedup instead of compression and got both excellent performance and capacity. Is TrueNAS dedup as good, and is it best suited to specific uses?
  6. Block vs. share - Can a pool have more than one data vdev? Is there a benefit to that? And can you share both iSCSI and CIFS/NFS out of the same pool?

Thanks for your patience. I’m trying to find the best combination of my SAS and SSD drives and server hardware for each use. As I said, we primarily want to serve VMs and network shares. We’ll also need to do backups, etc. I have other servers I can allocate as well. I just don’t want to make a newbie mistake, get too far down the line, and have to redo it.

Absolutely not. I can’t think of anything on a TrueNAS server that MicroSD should be used for, and SLOG is probably at the bottom of the list.

2 Likes

Ok then, can you use a pair of SSD drives for SLOG, L2ARC, metadata, and dedup all at the same time? I have 16 slots. If I have to allocate a pair of disks to each use, plus the OS, that’s more than 50% of my slots. Are all of the vdev types necessary? Or, if you are using the pool for a file share, is metadata more or less useful than L2ARC, etc.?

You are better off looking at TrueNAS SCALE (Linux based) over TrueNAS CORE (FreeBSD based) unless you have a specific requirement for CORE. Almost all development is going into SCALE.

As a broad, general rule, stay away from any special devices or dedup unless you have a proven need for them. SLOG devices may be helpful if you have synchronous writes, but they should be specific hardware, as explained in the links below.
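
If you want to sanity-check the defaults before changing anything, something like this may help (just a sketch; the pool name "tank" and dataset "tank/shares" are placeholders):

    zfs get compression,dedup tank        # lz4 compression is on by default in TrueNAS
    zfs set compression=zstd tank/shares  # example only: better ratios, a bit more CPU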

BASICS

iX Systems pool layout whitepaper

Special VDEV (sVDEV) Planning, Sizing, and Considerations

Linking some other posts and a whitepaper. They explain a lot of the items you are asking about.

You can indeed!
It’s incredibly unsupported, and anyone who genuinely recommends it is someone whose advice you should treat with great scepticism; they won’t be there to clean up the mess you create if you follow their guidance.

SLOG and L2ARC are not pool-critical, but metadata and dedup vdevs are. Lose either of the latter two and the whole pool goes.

Many users are perfectly fine without ever having any non-data VDEV.
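
For context, this is roughly what adding a special (metadata) vdev looks like; note it becomes pool-critical the moment it exists, which is why it must be mirrored. A sketch only, with placeholder pool, dataset, and device names:

    # A mirrored special vdev for metadata (and optionally small blocks):
    zpool add tank special mirror da5 da6
    # Optional, per dataset: also send small records to the special vdev
    zfs set special_small_blocks=16K tank/shares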

1 Like

Thanks. I’ll reread those. The problem I’m finding, though, is a lot of assumed knowledge. I know the people writing those tried to make them understandable, but there seems to be some key missing info, like your assertion that special vdevs should be avoided and that SLOG is helpful for synchronous writes.

I also just read a piece about tunables instead of sVDEVs, so now I need to go down that rabbit hole.

That should be added to the documentation.

Tried it, went back to CORE.

No. At least, not in a way that’s supported by any flavor of TrueNAS.

Also no.

CORE is dead.

2 Likes

Mostly dead!

Have fun storming the castle!

5 Likes

I have one of these extremely expensive HPE microSD RAID 1 drives:

https://support.hpe.com/hpesc/public/docDisplay?docId=a00093294en_us

Officially “hallowed” boot drive to run e.g. ESXi on a Microserver using the internal USB connector to boot.

I am about to repurpose a MicroServer Gen10 (no plus) to TrueNAS and am pondering whether I should just keep it and use it for booting TN.

Yeah, we used SD cards to boot our whole VMware farm; it seems very common. The question becomes: if you use it for the TrueNAS OS (like VMware), does the TrueNAS OS read and write from its disk significantly more than VMware does? If you don’t have swap on it, I would think that after boot it wouldn’t do much except system logs. However, I’ve seen several posts, like Dan’s, that say “What is wrong with you people?!?!?” or words to that effect.

Let’s say it will shred any standard USB thumb drive you can get, even “quality” ones. I tried a few and they were always toast within weeks.

What might add to the wear is that most of them seem not to be designed for 24x7 operation. If you plug a drive with USB 3 specs into a USB 3 socket, it’s going to get hot. I don’t know why, but that has been the case for every drive I tried.

You can get around the problem by using a USB to SATA or USB to PCIe adapter and a small “real” SSD.

Ok, adjusting the questions a little, after reading some of those posts and documents.

Looks like for the spinning-disk data vdevs, I’d be using sets of 2-way mirrors or sets of 3-disk RAIDZ1.

Still would like some info on ARC and L2ARC as used for metadata.

  1. Does TrueNAS automatically use all of my extra RAM for ARC, or do I have to go somewhere and configure it? As I said, I have 384 GB of RAM, so ideally I would think it would get used all the time for caching, negating the need for L2ARC altogether. Yes/no?
  2. I see the tunables can be set to make L2ARC metadata-only, getting the benefit of metadata caching without the risk of data loss if the vdev dies. Yes/no?
  3. I saw that L2ARC can be made persistent, which implies it’s not persistent across reboots by default? Does this mean it really doesn’t matter if it’s striped instead of mirrored or RAIDZ?

I think our workload is pretty much all async, with random reads and writes. I want the storage of HDDs and the speed of SSDs. Some basic testing I did showed poor random reads from the SSDs, though, so I may have done it wrong.

Another post I read (that I can’t find now) talked about using an ARC tunable to allocate ARC to metadata, eliminating the need even for L2ARC. Does anyone know about that?

Thanks for all the quick replies. I’d say I doubled my knowledge, but twice nothing is still nothing.

I’ve used a micro SD card as a single-drive vdev for test purposes with no valuable data, in a PC I was too lazy to open up. That seems reasonable.

I’d be interested in seeing how long the micro SD cards used for stuff like Raspberry Pis last.

Yes. For the past ten years or so, TrueNAS, and FreeNAS before it, boots and runs from a live ZFS pool (before that, it loaded itself into a RAM disk). ZFS caching means most of the OS will live in RAM most of the time, but there is constant I/O (though mostly reads, relatively few writes) to the boot pool.

I would use 4-disk RAIDZ2. I don’t trust RAIDZ1 with today’s drive sizes.
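
Purely as a sketch of that layout (the TrueNAS GUI does this for you; "tank" and da0..da7 are placeholder pool and FreeBSD device names):

    # One pool from two 4-disk RAIDZ2 vdevs, striped together:
    zpool create tank \
        raidz2 da0 da1 da2 da3 \
        raidz2 da4 da5 da6 da7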

With that amount of RAM I do not see a need for L2ARC. CORE does use all of the available RAM for ARC. I don’t know the current state of SCALE; because of some limits of Linux memory management there used to be a hard cap at half of RAM, probably increased now, if not removed altogether, for Electric Eel.

Anyway, it’s a tunable, so you can definitely make good use of all your RAM.
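
If you want to see the current ceiling yourself, a quick sketch (these are the stock OpenZFS knobs):

    # CORE (FreeBSD): current ARC maximum in bytes
    sysctl vfs.zfs.arc_max

    # SCALE (Linux): equivalent module parameter (0 = use the built-in default)
    cat /sys/module/zfs/parameters/zfs_arc_max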

Losing an L2ARC vdev does not impact your data in any way. It’s strictly a cache.

The other way round: the loss of metadata would definitely imply the loss of data. Metadata is the information used to manage your data.

Without metadata your data will be somewhere in some blocks on those disk drives, but there will be no block pointers, no datasets, no directories, no file abstraction …

That only eliminates the need to “warm” (i.e. fill) the cache with data after a reboot, when you would otherwise start with an empty cache.

Since with that amount of RAM you most probably will not need an L2ARC, that question is also not really relevant.

Not possible unless the working set fits into the cache.

Why are you making such an effort to micro tune things that most probably will work fine with just the defaults?

Decide on your data vdev layout depending on whether the workload is “read mostly” or “transaction heavy”. That leads to either RAIDZn or mirrors.

Then measure the performance you get. You need the storage space, i.e. the spinning drives, anyway. If you have >99% ARC hits, an L2ARC won’t help you, and 99-100% is frequently the case. So start with the spinners and give it a go.
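
Measuring that hit rate is straightforward; for example (standard OpenZFS counters, shown here as a sketch):

    # SCALE (Linux): overall ARC hit rate from the kernel counters
    awk '/^hits/ {h=$3} /^misses/ {m=$3} END {printf "ARC hit rate: %.1f%%\n", 100*h/(h+m)}' \
        /proc/spl/kstat/zfs/arcstats

    # CORE (FreeBSD): the same counters via sysctl
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses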

If your ARC hit rate is, despite the huuuge RAM, rather low, you might consider L2ARC. Similarly, if you have directories with thousands of subdirectories and tens or hundreds of thousands of files, a special vdev might help.

But unless that is tried and well defined, please assume that the defaults will result in the best possible performance the hardware will be able to give you, because that is actually the case for most setups. ZFS is decades old and (almost) all tuning you can think of has already gone into the TrueNAS product.

tl;dr: pool layout (e.g. RAIDZn vs. mirrors) and memory size have a much greater influence than anything else.

HTH,
Patrick

4 Likes

Good luck deploying a brand-new NAS on a totally unsupported system whose support will completely go away over time. It seems silly to “not like it” and go back to a dead product without one heck of a good reason. I’m just saying. I just came back to TrueNAS and never considered CORE, even though I prefer BSD for NAS platforms. SCALE has been great (a bit bumpy, but well supported).

Good luck.

2 Likes

I think everyone has hit all the points. You have a ton of RAM, which means you should not need anything other than the drives for your vdev(s). My advice: do not over-complicate the design with all the extra possible options. Get your pool started and working, then evaluate whether you have any bottlenecks with TrueNAS or the server. If you do, address it then.

If you are using this only for storage, is it accessed frequently, by many people at the same time? Or is it just a casual server with no need for high throughput? Also, what is your network connectivity?

I would start using SCALE vice CORE. I like CORE; however, SCALE 25.04 is the first iteration of SCALE fully replacing CORE, if I understood it correctly. CORE will be around for a while, but updates are going to be few, if any. It is also just easier to learn one version of the GUI.

And if you are using this to support iSCSI, then you probably already know to keep pool usage at no more than 50% of capacity.
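
For example, a sparse zvol sized well under half the pool (pool name, dataset name, and size are placeholders):

    # -s = sparse (thin provisioned), -V = zvol of fixed logical size
    zfs create -s -V 4T tank/vmstore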

And I also agree: RAIDZ2 vice RAIDZ1. You have a lot of smaller drives; you could of course use those, but also realize that once you have a ZFS pool configured, say a RAIDZ2 of eight 1.2 TB drives, you cannot change it to 7 drives (you could go to 9 drives, but traditionally that meant adding another vdev/stripe). RAIDZ expansion is new to me; it may be a feature you should read about as well in case you need it later.
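
For reference, OpenZFS 2.3 (shipped with recent SCALE releases) lets you widen an existing RAIDZ vdev by attaching a disk, roughly like this (vdev and disk names are placeholders; check zpool status for yours):

    # Expand an existing RAIDZ2 vdev from 8 to 9 disks, one disk at a time:
    zpool attach tank raidz2-0 da8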

1 Like
  1. Yes - TrueNAS automatically uses as much RAM as it can for ARC. With 384 GB RAM your need for L2ARC will be low. Check after a week or two of running to see what the hit rate is.
  2. Yes
  3. With SCALE it is persistent by default.
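
The knobs behind answers 2 and 3, as a sketch (pool name is a placeholder):

    # L2ARC caching metadata only, settable per pool or dataset:
    zfs set secondarycache=metadata tank

    # Persistent L2ARC is controlled by a module tunable (enabled by default):
    #   SCALE: /sys/module/zfs/parameters/l2arc_rebuild_enabled
    #   CORE:  sysctl vfs.zfs.l2arc.rebuild_enabled
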
You mentioned “Resiliency first, capacity second, performance third; VM storage and network storage”.

VM storage is ideally on mirrors (for IOPS), so a pool of SSD mirrors would be a good idea. This pool should preferably remain at <50% used, so for each TB of stored data you need 4 TB of raw storage (mirroring halves capacity, and staying under 50% use halves it again). The “keep it under 50%” figure is a guide; it won’t suddenly stop working if you go to 51% use.

The rest of your requirements can be handled by HDDs in RAIDZ3 (maximum resilience).

Do not bother with L2ARC to start with.

A SLOG vdev is used to accelerate sync writes. In speed terms it goes: async > sync with separate SLOG > sync alone. Sync is just slow, but it is often used for VMs and databases, as it ensures that data reaches disk correctly in the event of sudden power loss. There are specific hardware requirements for SLOG devices: high endurance, high speed, and power-loss protection (PLP).
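
For illustration only, here is roughly what that looks like underneath the GUI (pool name "tank", dataset "tank/vmstore", and device names are all placeholders):

    # Add a mirrored SLOG out of two PLP SSDs:
    zpool add tank log mirror da10 da11

    # VM-backing datasets/zvols usually want sync writes honored or forced:
    zfs set sync=always tank/vmstore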

2 Likes