HDD spin-down and ZIL

Hi everyone,

I am considering migrating from Unraid to TrueNAS SCALE, and I have read a lot about the ZIL, SLOG and so on these days, but still… I can’t find an answer to this.
Basically, in Unraid you can have a write “cache” for each share (or folder, in Unraid’s case), and it works like this:
As files are written, they first land on these cache disks (usually SSDs), and once per day (or more often, if you configure it) those files are moved to the HDDs.
This means the HDDs never need to spin up (unless you read from them, of course). Is there a way to achieve something similar in TrueNAS?
To my understanding, the ZIL lives on the same pool the data will be written to… but this means the data is written to the same disks anyway, so they will never spin down.
Can a SLOG vdev help in this scenario, or is a SLOG only used for sync writes?
I know many people are not OK with HDDs spinning down, but… energy costs a lot in Europe, and for us home users of TrueNAS, saving a good amount of energy is not a bad thing.
Can you help me with this doubt?

Thanks to anyone

Forget everything about Unraid, it’s not applicable to ZFS.

ZFS’s write cache is in RAM; the ZIL (including SLOG devices) is there just to protect sync writes in case the system goes down and the RAM contents are lost.

ZFS does not have any native capabilities to do the sort of tiered storage you’re proposing.

And when does that happen? That’s a very contrived scenario.

Ah OK, so basically the write cache always lives in RAM, and the ZIL/SLOG just copies the write cache to an extra vdev, right? So if an async write occurs and the power goes out, the data in RAM is lost? Do you maybe know how often those async writes occur? I know that sync writes occur every 5 seconds.
What I achieve with Unraid is that, until you read some content from the disks, the HDDs stay spun down the whole time, while writes happen on a pool of SSDs that consume less power.
A very useful scenario for this is torrents: while the files download (which can take days) I don’t spin up the HDDs at all, and as long as seeding is ongoing those files stay on the SSDs, so the small random reads are also not performed on the HDDs, saving energy again.

The main ZIL is on the pool’s disks, rather than on a separate LOG device, but yes.

Yes, or if the system crashes.

You mean how often they’re flushed? Same as sync writes, really.

OK, so based on your answers I think I have to restructure how I wanted to set up my vdevs. I will think about it and then post it here, if you can help me :slight_smile:

Yes, there is no tiered storage built into ZFS (and it’s a pity, really). But you can sort of do it yourself with a bit of scripting ability… i.e., running a cron job every day that copies everything you have written to an SSD pool over to your HDD pool, and then erases the files on the SSD pool.
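A minimal sketch of such a nightly “mover” job, assuming a copy-then-delete policy; the function name and the mountpoints are illustrative, not TrueNAS features:

```shell
#!/bin/sh
# tier_flush: copy everything from an SSD "cache" dataset to an HDD
# dataset, and clear the SSD side only if the copy fully succeeded.
tier_flush() {
    src="$1"; dst="$2"
    mkdir -p "$dst" || return 1
    # -a preserves permissions, ownership and timestamps
    cp -a "$src"/. "$dst"/ || return 1
    # delete the cached copies only after a clean transfer
    find "$src" -mindepth 1 -delete
}

# Example cron entry (every night at 03:00; paths are examples):
# 0 3 * * * /root/tier_flush.sh /mnt/ssd-cache/media /mnt/tank/media
```

This is the naive version: rsync with `--remove-source-files` would be more robust against partial copies, and name collisions between the two pools still need to be handled carefully.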

Or, just let the drives spin up and down a few times per day.


I’d script that VERY carefully, in case the user accidentally starts using folder names that are identical to content already on the HDD pool. So much can potentially go wrong there.


Yeah, some tiered caching wouldn’t be bad… but I understand that the pro users don’t really let their HDDs spin down at any time (maybe in 10 years it won’t really make sense anymore, if SSDs keep growing in size, and then energy consumption won’t really be an issue at all :smiley:). Anyway… I am more of a Windows guy, so scripting is not my thing, and as Constantin said, maybe you forget a folder and something goes wrong… so I guess more pain than gain.

I’ll propose my HDD/SSD structure, and then you can tell me your thoughts about it:

  1. TrueNAS OS:
    a. dual 120GB SATA SSDs in a mirror
  2. “Apps&VMs” pool:
    a. 2x2TB NVMe SSDs in a mirror
    b. maybe a 1TB NVMe SSD to pass through (if possible)
  3. Media and Mix pool:
    a. 3x6TB + 1x20TB in RAIDZ1 (still to decide whether it makes sense to keep the 20TB disk in the pool or leave it aside, for the moment)
  4. Backup pool (to keep snapshots) and maybe backups of the PCs in the house:
    a. 2x4TB and/or cloud backup on pCloud
  5. ImportantData (photos, etc.):
    a. 2x4TB in a mirror

For the caching “issue” I’d propose letting the HDDs spin down automatically when not in use, as far as possible. OR: I found a “good” deal on 10x120GB SATA SSDs; maybe put one SSD as a SLOG on each pool? But yeah, if the writes happen every 5 seconds, I don’t know if it really makes sense (maybe you can give me some insight).
For torrents I will keep the seeding files in the Apps&VMs pool, and as soon as seeding is finished I’ll let Sonarr and Radarr move the files to the media pool.

What do you think so far? :slight_smile:

I’d monitor that very carefully to see if the HDDs actually get to rest for long periods of time. If they don’t, then there is no point in spinning them down in the first place. Spin-up strains the motors, so it should be limited if possible. There is a spin-down resource over in the old forum that I’d take a look at for guidance.

As an intermediary to SSDs, consider helium-filled HDDs, which tend to consume less power than air-filled HDDs.

Most of the time my HDDs are spun down, because I either watch movies/TV series at night, or the server is mostly idling and downloading new movies and TV series for the rest of the day. I do access files during the day, but there are not so many spin-ups and spin-downs, I would say; it’s more of a casual thing. During the night I will instead develop on VMs and things like that. Are there any helium-filled HDDs in 6TB or 4TB? Those are the HDD sizes I can build a RAIDZ with at the moment, instead of buying all new HDDs.

It won’t take ten years. Find me a 60 TB HDD today:


Hahaha, those SSDs are very cool, but the price… almost 4000 dollars for the 44TB version (which is my entire current server storage size in one single disk!!). Hopefully we humans who don’t work in a server environment will also find SSDs at cheaper prices and in bigger sizes in a few years. I think that reaching 10TB for 500 euros per SSD would already be a good sweet spot for many (if only they don’t bit-rot in the long run). What do you think about the storage configuration I made a few posts above? Does it look correct?


Either use one or read Highly Available Boot Pool Strategy | TrueNAS Community.

Better to use it for block storage with iSCSI (the only reason I can think of for passing through an SSD).

You can have these in the same pool.

Likely it’s not going to work. I would set at the very least a 60-minute wait before spinning down the drives… better if 120.
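On SCALE this is normally set per disk in the UI (the “HDD Standby” setting), but for reference, the equivalent ATA standby timer can be set with hdparm; a sketch only, with the device name as a placeholder:

```shell
# hdparm -S sets the drive's standby (spin-down) timer.
# Values 241-251 mean (value - 240) * 30 minutes of idle time.
hdparm -S 242 /dev/sdX   # spin down after 60 minutes idle
hdparm -S 244 /dev/sdX   # spin down after 120 minutes idle
```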

A SLOG is not a write cache. You likely do not require a SLOG.

That’s what I do as well; it prevents fragmentation in the HDD pool.

  1. Thanks for the post on boot drives. I still have to buy an HBA, probably in the near future. Since the SSDs are mirrored, I could basically go into the BIOS and set the second one as the boot device if the first fails, right? I have an IPMI card to access the BIOS remotely; that’s why I am asking.
  2. Regarding the 1TB NVMe: yes, I would basically pass it through to a Windows VM, or split it in 2 (if possible), one for my Windows VM and one for my girlfriend’s Windows VM.
  3. Regarding merging the pools for important data, you are right; copying the photos etc. to pCloud as well is highly recommended.
  4. OK, I’ll put the HDDs to sleep after 60 or 120 minutes; it will spare some energy and also spare the motors inside the HDDs.
  5. ok, no SLOG :smiley:
  6. Awesome :slight_smile:

If it fails, yes.

Block storage with iSCSI would be ideal.
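A sketch of what that could look like on the command line, assuming a hypothetical pool named nvme (in practice the zvols and iSCSI extents would be created from the SCALE UI):

```shell
# Two thin-provisioned (-s) 500 GB zvols, one per VM; each can then
# be exported as its own iSCSI extent.
zfs create -s -V 500G nvme/vm-win-1
zfs create -s -V 500G nvme/vm-win-2
```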

Finally, I recommend running @joeschmuck’s Multi-Report.

Is the iSCSI scenario supported by TrueNAS SCALE? And can I “split” the device in 2? I mean, assign 500GB to one VM and the remaining 500GB to another VM?

I believe Unraid supports ZFS now, but I’m not super familiar with how they implement it. Just an FYI.

I learnt a lot with Unraid and I still think it is a valid product, and the community is also great, but it is a very community-dependent product… Unraid is “supporting” ZFS, but replication, snapshotting etc. are still not implemented, and most of the functionality/plugins are running thanks to independent developers, not because of Unraid, and this is a little concerning… That’s why I am thinking of moving to a more serious project like TrueNAS, where at least the core of the project is entirely developed by company developers.

Right, the CORE of it… pun (and jab) totally intended.

Just another question regarding the migration. Since I have 2 Windows VMs, I would like to pass the same GPU to both… I have the iGPU of my Intel CPU; is it possible to pass it to the VMs in SR-IOV mode?