I currently store my data on a Linux Mint VM, which serves as my NAS. Now I would like to switch to TrueNAS, because some of its functions are easier to use.
I have a question about this:
I will probably use 3x 8 TB Seagate IronWolf HDDs in RAID 5, since it is currently very difficult to get 2x 16 TB at a reasonable price.
Since the TrueNAS system will be idle about 80% of the time, I would like the HDDs to spin down after 15 minutes to save power. The HDDs only need to spin up at night, when I create backups or snapshots.
How do I have to set up my RAID 5 so that this is possible? I would add a 1 TB SSD to the pool as a cache. I have also read that I have to move the logs to another volume so that the HDDs can spin down. I would run the SMART check once a week; the main thing is that the HDDs can go to sleep.
Can you tell me the best way to do this and what I need to set up?
Why do you believe you need this? What is your idea of a “cache”?
Do you mean “logs”, as in system logs? You capitalized the letters, which suggests you might have been referring to the SLOG?
You have to set them to an aggressive power-management level, but from reading around it seems that other users have trouble keeping their HDDs “asleep” even when nothing should be “using” them.
You won’t need a “cache” - but you will need at least 8GB and preferably 16GB of memory.
You won’t need a separate “LOG” device either unless you have virtual disks or zVolumes or iSCSI. The system logs will be written to the boot device.
If you want to run apps or VMs then you will need memory for them above the 8GB minimum, and you really will want another SSD to hold the apps and their active data (which should also prevent the HDDs from spinning up).
If you are buying a motherboard and processor, choose a low-power-consumption motherboard and processor; otherwise these could end up using more electricity than the spinning disks. And ideally build a system with a low cooling requirement, because fans also use electricity.
You will need to set up SMART tests and Scrubs for each pool on a schedule.
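For reference, the operations the UI schedules boil down to commands like these; on TrueNAS you configure the schedules in the web UI rather than in cron, and the device and pool names here are only examples:

```shell
# Weekly long SMART self-test, run per data disk (device name is an example)
smartctl -t long /dev/sda
# Check the result later
smartctl -a /dev/sda
# Periodic scrub of the pool (pool name "tank" is an example)
zpool scrub tank
```

Keep in mind that a scrub reads the whole pool, so the disks will be spinning for hours while it runs.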
If you have a non-redundant apps SSD pool, you should replicate it to HDD.
Implement @joeschmuck’s Multi-Report script so you get notified of disk issues and get an email copy of your system configuration at the same time.
You set the HDD spindown time in the UI - but if you find that doesn’t work there is a script file that you can implement that is a little smarter about spinning the disks down.
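As a sketch of what such a script does under the hood, assuming SATA disks that honour hdparm’s standby timer (the device name is an example):

```shell
# hdparm -S takes values 1-240, each meaning N * 5 seconds of idle time,
# so a 15-minute timeout is 900 s / 5 s = 180.
MINUTES=15
VALUE=$(( MINUTES * 60 / 5 ))
echo "$VALUE"
# hdparm -S "$VALUE" /dev/sda    # uncomment and repeat per data disk
```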
(Terminology: TrueNAS uses ZFS and ZFS does software “raid” and has different terminology. The ZFS equivalent of RAID5 is RAIDZ1, and should be good for what you need.)
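For illustration, creating such a pool from the shell would look roughly like this; on TrueNAS you would do it in the web UI, and the pool and device names here are only examples:

```shell
# RAIDZ1 vdev over three 8 TB disks: one disk's worth of parity,
# so roughly (3 - 1) * 8 TB = 16 TB usable.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
zpool status tank
```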
I want to run TrueNAS as a VM. My host runs Proxmox, with a SAS controller passed through directly to TrueNAS. The VM will have 4 cores and 16 GB RAM.
Thanks for the correction regarding RAID 5; what I want, then, is RAIDZ1.
My idea was to add an SSD cache to speed up writes.
No applications will run on TrueNAS; it will be used purely as data storage.
I have heard that ZFS is not designed to let the HDDs spin down, but it should be possible to reduce access to them by offloading data (SLOG, L2ARC) to a separate SSD.
How likely is it that I can keep the HDDs in sleep mode most of the time?
Please correct me if I am wrong.
With my current Linux Mint VM and Btrfs, the HDDs reliably go into standby after 20 minutes.
Passing through the SAS controller and the attached disks is great. I believe that the boot drive for TrueNAS can be a Proxmox zVolume. But you also need to blacklist the SAS controller in Proxmox to ensure that Proxmox doesn’t try to import the zpool itself at the same time, which would lead to data corruption.
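A rough sketch of the blacklisting on the Proxmox host, assuming an LSI HBA; the PCI vendor:device ID and the driver name are examples, so check yours with `lspci -nn` first:

```shell
# Bind the SAS HBA to vfio-pci so the Proxmox host never attaches it
# (1000:0072 is an example ID for an LSI SAS2008 controller).
echo "options vfio-pci ids=1000:0072" > /etc/modprobe.d/vfio.conf
# Keep the host driver (mpt3sas or mpt2sas, depending on kernel) away:
echo "blacklist mpt3sas" >> /etc/modprobe.d/pve-blacklist.conf
update-initramfs -u    # then reboot the host
```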
ZFS automatically does caching in main memory, and there is really no such thing as a dedicated read- or write-cache SSD: L2ARC is NOT a read cache, and SLOG is NOT a write cache. Just give TrueNAS memory and let it do its thing.
Here is an explanation about SLOGs…
In ZFS you can have asynchronous writes, which are fast, or synchronous writes, which are VERY slow. So you only want to do synchronous writes when you really need to, i.e. for virtual disks/zVolumes/iSCSI or for database files where random blocks are read and written. For normal files, which are read or written sequentially, asynchronous writes are normally fine.
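This behaviour is a per-dataset ZFS property, so you can choose it per use case; the dataset names below are examples:

```shell
zfs set sync=standard tank/files     # default: honour sync requests from clients
zfs set sync=always tank/vmstore     # force every write through the ZIL
zfs set sync=disabled tank/scratch   # fastest, but risks data loss on power cut
zfs get sync tank/files              # inspect the current setting
```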
It is a separate but similar decision what type of disk layout to use: mirrors or RAIDZ. The same type of I/O that needs synchronous writes is also typically frequent, parallel, small random reads and writes which create a lot of IOPS - and mirrors can provide those IOPS, whilst RAIDZ doesn’t and instead creates read and write amplification (where you have to read and/or write much more data to achieve the same small random I/Os). Sequential access OTOH needs throughput rather than IOPS, and RAIDZ is cheaper per TB and can still provide that throughput.
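The “cheaper per TB” point can be made concrete with some rough arithmetic (ignoring ZFS overhead; 8 TB disks as in this thread):

```shell
DISK_TB=8
# 2-way mirror: half the raw space is usable
MIRROR_USABLE=$(( 2 * DISK_TB / 2 ))
# 3-disk RAIDZ1: (n - 1) disks' worth is usable
RAIDZ1_USABLE=$(( (3 - 1) * DISK_TB ))
echo "mirror: ${MIRROR_USABLE} TB usable of 16 TB raw"
echo "raidz1: ${RAIDZ1_USABLE} TB usable of 24 TB raw"
```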
Asynchronous writes are fast because the data for multiple application writes is queued in memory and written out to disk in bulk every ~5 seconds. Synchronous writes work similarly, BUT for each application/network write an extra write is made to the ZFS Intent Log (ZIL), and this is what makes synchronous writes so slow. An SLOG is simply a way of moving these ZIL writes to a faster drive than the one holding the data.
So, if you have RAIDZ storage for all your sequential files, then you probably need a separate mirror pool for your virtual disks/zVolumes/iSCSI etc., and if the amount of data for this is small enough, you should probably do this mirror on SSDs anyway, and perhaps provide an even faster NVMe or Optane SLOG.
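Adding such an SLOG to an existing pool is a single command; the pool and device names are examples, and mirroring the log device is optional but safer:

```shell
# Attach a mirrored SLOG (two small, fast NVMe devices) to the pool
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
zpool status tank    # the devices now appear under a "logs" section
```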
L2ARC is only useful in specific cases, and needs significantly more memory than 16GB.
I am not sure what impact Proxmox will have (if any) on spinning down your HDDs. You may have to experiment when you have it running. But in principle it should certainly be possible.