I’m looking for information about the possibility of installing TrueNAS with a custom partition layout for the boot-pool.
I’ve read many discussions on the topic and I’m aware it’s not a well-received subject. However, nearly all the discussions I found were about using part of the drives for data storage — whereas my use case is different.
I have two servers with 8x 18TB SAS drives for storage, 3x PCIe PM1725B NVMe drives (to use as metadata vdevs), and 2x 480GB SAS enterprise mixed-load SSDs.
I’d like to reserve part of these 480GB SSDs for an SLOG vdev.
My goal is to create a smaller boot pool and use the remaining space on the two SSDs to create a mirrored log vdev (with 2 partitions) for accelerating synchronous writes.
I would like to know if this setup is possible, and if so, how it can be done.
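For reference, the manual version of what I’m after would look roughly like this. This is an unsupported sketch, not a procedure the installer offers: the device names (`da0`/`da1`), partition index, and pool name `tank` are placeholders, and the `gpart` syntax shown is the FreeBSD/CORE style (SCALE would use `sgdisk`/`parted` instead).

```shell
# UNSUPPORTED sketch: carve a partition out of the free space on each
# boot SSD and add the pair as a mirrored log vdev.
# da0/da1, partition numbers, and "tank" are hypothetical.

# Create a partition in the space left after the boot-pool on each SSD
gpart add -t freebsd-zfs -a 1m da0   # e.g. creates da0p4
gpart add -t freebsd-zfs -a 1m da1   # e.g. creates da1p4

# Attach the two partitions as a mirrored SLOG to the data pool
zpool add tank log mirror da0p4 da1p4

# Verify the log vdev is present
zpool status tank
```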
No, it really isn’t. The issue is not, as such, “using part of the drives for data storage” (and what do you think SLOG does if not store data?); it’s that you’re messing with the design of the system at all. The boot pool is designed to be disposable. Anything you do to it, well, makes it less so.
@NugentS
I’ve been working in the IT field for 23 years and got into computers back in the days of the 386 and DOS 5.0, so I’m certainly not a novice.
With my question, I didn’t mean to ask which commands to type blindly, but simply whether there’s a “semi-supported” way to achieve this configuration, even if it’s not officially provided by the installer.
My intention is not to create a system that could be unstable or problematic during updates.
@dan
I understand what you’re saying, but frankly, this is a bit of a borderline case…
The SLOG doesn’t contain static data, so it’s a bit different (in fact, it’s possible to import a pool that originally had an SLOG, even if it’s missing on the target system).
Anyway, I get it — I’m not trying to stir things up. I only asked because, in all the other threads I found on this topic, people were talking about creating an actual data vdev on the boot-pool, which is not my case.
As someone who has partitioned my boot drive due to port constraints on my hardware, my advice is to do it only when you absolutely have to.
In your case, my advice is just to buy a small SATA SSD and use that for a boot drive.
@Protopia
Thanks.
Not a problem for a couple of drives, of course… I just have to verify whether the server has an available SATA port and mounting points in the chassis.
(The question also arose because I’d like to avoid having to go to the office to work on the servers.)
What makes you think you need a (mirrored) SLOG and sync writes for video in the first place?
A good SLOG needs PLP and the lowest possible write latency; for maximal performance, that would be a (DC) NVMe drive. SLOG on SAS devices feels like last century technology.
These are two SAS SSDs for mixed workloads, with PLP and onboard cache, of course.
So nothing ultra-fast like an NVMe, but definitely better than the spinning drives used for main storage (except metadata and small files, which will go on the NVMe special vdevs).
Are you using SMB, NFS, or iSCSI to access the data?
A SLOG will only benefit SYNC writes, which typically means NFS or iSCSI; if you are using SMB, the SLOG will sit almost entirely unused.
If you really want a SLOG, then use one SSD for the OS and one for the SLOG, and you have an expected configuration.
While TrueNAS lets you do whatever you want, you will have better results treating it like an appliance. I say this as someone who has been managing ZFS servers since just about the first day ZFS appeared in Solaris 10.
I do not question that the enterprise SAS SSDs have PLP and are formally suitable for SLOG. But if you have a genuine use case for a SLOG (and a mirrored SLOG at that, implying a mission-critical and/or hard-to-service system) on a pool which already has an NVMe special vdev component, you want the meanest, fastest SLOG possible, which would be Optane (DC P5800X) or some extreme form of (MLC-used-as-)SLC write-intensive drive with heavy overprovisioning.
@PK1048
The plan is to use NFS over a 10Gb link, so all writes are treated as synchronous and will use an SLOG device.
These servers are part of a new storage solution for a recording studio, with multiple Pro Tools sessions running simultaneously.
I’ve run some tests on a lab machine and got good results, but I noticed that performance is lower over NFS. So I want to add an SLOG vdev to ensure the system won’t slow down during concurrent bursts of sync writes.
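One way to confirm whether the sync writes would actually land on a log device during a recording session is to watch per-vdev I/O while the load runs. The pool and dataset names below are placeholders:

```shell
# Show per-vdev write activity once per second; during sync-heavy NFS
# traffic, the "logs" section should carry most of the write bandwidth
zpool iostat -v tank 1

# Check which sync policy the session dataset currently uses
# (standard / always / disabled)
zfs get sync tank/sessions
```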
I also used ZFS for the first time with Solaris 10.
@etorix
Unfortunately, I have to build this solution for my customer on a small budget, so the best option I found was a TrueNAS setup using refurbished servers and the best deals I could find on new drives.
So this is about video after all…
Editing on the NAS should not require sync writes. I’d suggest disabling sync writes on the NFS shares for performance and doing without a SLOG.
Audio files are written only during track recording and commits, but I have to evaluate with some tests whether it is risky to work with sync writes disabled for session data, because Pro Tools is a bit sensitive.
I suspect that Pro Tools pretty much only writes sequential files, and an fsync at the end of each file is probably sufficient.
fsync calls are synchronous, i.e. the data of any unclosed/unwritten TXGs for the file are written to the ZIL. But that is completely different from synchronous writes for each of the file’s I/Os, which have a major impact on write speeds; an SLOG is a patch that reduces (but does not eliminate) the performance impact of synchronous writes by moving the ZIL to a faster device than the data vdev(s).
If Pro Tools has some data that consists of random 4KB reads and writes, then it should be held in a different dataset from the multimedia files, and there is a range of design actions you generally need to take for such data that is not simply limited to synchronous writes and an SLOG.
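The cost of forcing every write to stable storage is easy to demonstrate even outside ZFS. A quick comparison with GNU dd on Linux (the `/tmp` paths are just scratch files for the example):

```shell
# Async: writes land in the page cache and are flushed to disk later
dd if=/dev/zero of=/tmp/async_test bs=1M count=64

# Per-write sync: every 1 MiB write must reach stable storage before
# the next one starts (oflag=dsync); typically far slower, especially
# on spinning disks
dd if=/dev/zero of=/tmp/sync_test bs=1M count=64 oflag=dsync

# Clean up the scratch files
rm -f /tmp/async_test /tmp/sync_test
```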
Any other datasets should have sync=standard.
If you are using NFS (which is synchronous by default), then you should mount the share with the async option set.
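Concretely, that combination would look something like the following. The pool, dataset, server, and mount-point names are placeholders, and NFSv3 is assumed:

```shell
# Keep normal POSIX sync semantics for the other datasets
zfs set sync=standard tank/other

# On the client, mount the share with the async option so individual
# writes are not forced to stable storage before being acknowledged
# ("truenas" and the paths are hypothetical)
mount -t nfs -o async,vers=3 truenas:/mnt/tank/sessions /mnt/sessions
```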
Unfortunately these are things that are not explicitly documented in such detail by Avid.
Doing a local analysis with fs_usage, I see that during recording the file is written every 2-3 seconds with asynchronous writes (so the live recording is buffered in RAM), and at the end of the recording a final synchronous write is executed.
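For anyone repeating this kind of trace on macOS, the filesystem-only view of a single process looks like this (root is required, and the exact process name is an assumption):

```shell
# Trace only filesystem activity (-f filesys) from the Pro Tools
# process; -w widens the output so full file paths stay visible
sudo fs_usage -w -f filesys "Pro Tools"
```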
The .tmp files used by some older versions no longer exist; the WAV/AIFF file is written directly.
The session file, on the other hand, seems to be written only on save (or auto-save), as are the session file backups. How it is written is not clear, because it is not captured by fs_usage.
Unfortunately, it is not possible to separate the files, since sessions reference the files in their own subfolders, and it would obviously be impossible to make the technicians in the studio completely change the way they work.
I still have to do further verification with the PT Ultimate and HDX system used in the studio, to make sure there are no behavioral differences compared to my basic version.