Custom Boot-Pool Partitioning

Good morning,

I’m looking for information about the possibility of installing TrueNAS with a custom partition layout for the boot-pool.

I’ve read many discussions on the topic and I’m aware it’s not a well-received subject. However, nearly all the discussions I found were about using part of the drives for data storage — whereas my use case is different.

I have two servers with 8x 18TB SAS drives for storage, 3x PCIe PM1725B NVMe drives (to use as metadata vdevs), and 2x 480GB SAS enterprise mixed-load SSDs.
I’d like to reserve part of these 480GB SSDs for an SLOG video.

My goal is to create a smaller boot pool and use the remaining space on the two SSDs to create a mirrored log vdev (with 2 partitions) for accelerating synchronous writes.

I would like to know if this setup is possible, and if so, how it can be done.

Thanks!

Best regards,
Edoardo

Yes, it's possible - but, as you implied, it is very much not recommended.

I know how to do this - but won’t be telling you. If you don’t know enough to be able to do this yourself - then you really shouldn’t be doing this.

I hope also that no-one else will tell you how to do this.

4 Likes

No, it really isn’t. The issue is not, as such, “using part of the drives for data storage” (and what do you think SLOG does if not store data?); it’s that you’re messing with the design of the system at all. The boot pool is designed to be disposable. Anything you do to it, well, makes it less so.

2 Likes

@NugentS
I’ve been working in the IT field for 23 years and got into computers back in the days of the 386 and DOS 5.0, so I’m certainly not a novice.

With my question, I didn’t mean to ask which commands to type blindly, but simply whether there’s a “semi-supported” way to achieve this configuration, even if it’s not officially provided by the installer.

My intention is not to create a system that could be unstable or problematic during updates.

@dan
I understand what you’re saying, but frankly, this is a bit of a borderline case…

The SLOG doesn’t hold long-term pool data, so it’s a bit different (in fact, it’s possible to import a pool that originally had an SLOG even if the device is missing on the target system).
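(For reference, the flag for that situation is just zpool import -m, which lets the pool import with the log device missing. A tiny sketch, with a placeholder pool name, wrapped in Python only to show the exact call:)

```python
import subprocess

# "-m" allows a pool to be imported even though its log (SLOG) device is missing.
# "tank" is a placeholder pool name.
subprocess.run(["zpool", "import", "-m", "tank"], check=True)
```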

Anyway, I get it — I’m not trying to stir things up. I only asked because, in all the other threads I found on this topic, people were talking about creating an actual data vdev on the boot-pool, which is not my case.

Thanks

As someone who has partitioned my boot drive due to port constraints on my hardware, my advice is to do it only when you absolutely have to.

In your case, my advice is just to buy a small SATA SSD and use that for a boot drive.

2 Likes

@Protopia
Thanks.
Not a problem for a couple of drives, of course… I just have to verify whether the server has an available SATA port and mounting points in the chassis.
(part of the reason I asked was to avoid having to go to the office to work on the servers :grin:)

What makes you think you need a (mirrored) SLOG and sync writes for video in the first place?
A good SLOG needs PLP and the lowest possible write latency; for maximal performance, that would be a (DC) NVMe drive. SLOG on SAS devices feels like last century technology.

I assumed that was a typo/autocorrect for “vdev.”

Yes, typo, sorry :grin:
“vdev” not “video”

These are two mixed-load SAS SSDs with PLP and onboard cache, of course.
So nothing ultra-fast like an NVMe, but definitely better than the spinning drives used for main storage (except metadata and small files, which will go on the NVMe special vdevs).

Are you using SMB or NFS to access the data (or iSCSI)?

A SLOG will only be of any benefit for SYNC writes, typically NFS or iSCSI; if you are using SMB, the SLOG will sit almost entirely unused.

If you really want a SLOG, then use one SSD for the OS and one for the SLOG, and you have an expected configuration.
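For reference, at the plain-ZFS level that is a single supported operation (TrueNAS would normally do it through the UI when adding a vdev). A minimal sketch, with the pool name and device path as placeholders, wrapped in Python only to keep the exact commands visible:

```python
import subprocess

# Placeholders: "tank" and the by-id path are hypothetical; substitute your pool
# and the whole 480GB SSD you are NOT booting from.
pool = "tank"
slog = "/dev/disk/by-id/scsi-SSD480_SERIAL"

# "zpool add <pool> log <device>" attaches the whole disk as a single log vdev.
subprocess.run(["zpool", "add", pool, "log", slog], check=True)

# Confirm the new "logs" section appears in the pool layout.
subprocess.run(["zpool", "status", pool], check=True)
```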

While TrueNAS lets you do whatever you want, you will have better results treating it like an appliance. Speaking as someone who has been managing ZFS servers since just about the first day ZFS appeared in Solaris 10.

I maintain my question as to why you need a SLOG at all.

I do not question that the enterprise SAS SSDs have PLP and are formally suitable for SLOG. But if you have a genuine use case for a SLOG (and a mirrored SLOG at that, implying a mission-critical and/or hard-to-service system) on a pool which already has an NVMe special vdev component, you want the fastest possible SLOG, which would be Optane (DC P5800X) or some extreme form of (MLC-used-as-)SLC write-intensive drive with heavy overprovisioning.

@PK1048
The plan is to use NFS over a 10Gb link, so all writes are treated as synchronous and will use an SLOG device.

These servers are part of a new storage solution for a recording studio, with multiple Pro Tools sessions running simultaneously.

I’ve run some tests on a lab machine and got good results, but I noticed that using NFS results in lower write performance. So I want to add an SLOG vdev to make sure the system won’t slow down during concurrent bursts of sync writes.

I also used ZFS for the first time with Solaris 10 :slightly_smiling_face:

@etorix
Unfortunately, I have to build this solution for my customer on a small budget, so the best option I found was a TrueNAS setup using refurbished servers and the best deals I could find on new drives.

No room for a proper full-SSD solution :slightly_frowning_face:

Thanks!!

So this is about video after all…
Editing on the NAS should not require sync writes. I’d suggest disabling sync writes on the NFS shares for performance and doing without a SLOG.

2 Likes

Not video, but mainly audio.

Audio files are written only during track recording and commits, but I have to evaluate with some tests whether it could be risky to work with sync writes disabled for session data, because Pro Tools is a bit sensitive.

Thanks

I suspect that Pro Tools pretty much only writes sequential files, and an fsync at the end of each file is probably sufficient.

fsync writes are synchronous, i.e. the data of any unclosed/unwritten TXGs for the file are written to the ZIL. That is completely different from synchronous writes for each of the file I/Os, which have a major impact on write speeds; an SLOG is a patch to reduce (but not eliminate) the performance impact of synchronous writes by moving the ZIL to a faster device than the data vdev(s).
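To make the distinction concrete, here is a rough Python sketch (file names purely illustrative): buffered writes with a single fsync at the end, versus a file opened with O_SYNC so that every write is synchronous. Only the second pattern hits the ZIL (and hence the SLOG) for each I/O.

```python
import os

chunk = b"\x00" * (1 << 20)  # 1 MiB, stand-in for a block of recorded audio

# Pattern 1: what a sequential recorder typically does - buffered (async) writes,
# then one fsync before close. Only the final fsync forces a commit to the ZIL.
with open("take_01.wav", "wb") as f:
    for _ in range(64):
        f.write(chunk)         # async: buffered in memory until the next TXG/flush
    f.flush()
    os.fsync(f.fileno())       # single synchronous flush at the end

# Pattern 2: every write is synchronous - each chunk must reach stable storage
# (the ZIL, hence the SLOG) before write() returns. This is what hurts throughput.
fd = os.open("take_02.wav", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    for _ in range(64):
        os.write(fd, chunk)    # sync: blocks until committed
finally:
    os.close(fd)
```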

If Pro Tools has some data that is accessed as random 4KB reads and writes, that data should be held in a different dataset from the multimedia files, and there is a range of design actions you generally need to take for it, not simply limited to synchronous writes and SLOG.

Any other datasets should have sync=standard.

If you are using NFS (which is synchronous by default) then you should mount the share with async set.
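Roughly, those are two separate knobs: the sync property on the server-side datasets and the mount options on the client. A sketch with hypothetical pool/dataset/host names (Python is only a wrapper around the commands; exact mount option names differ between Linux and macOS clients):

```python
import subprocess

# Server side (TrueNAS shell): general datasets stay at the default.
subprocess.run(["zfs", "set", "sync=standard", "tank/projects"], check=True)
# If you decide the session/recording dataset can tolerate it (as suggested
# earlier in the thread), you could relax it there instead of adding a SLOG:
# subprocess.run(["zfs", "set", "sync=disabled", "tank/sessions"], check=True)

# Client side (Linux-style example): mount the share with "async" so the client
# does not force every write to be committed before returning.
subprocess.run(["mount", "-t", "nfs", "-o", "rw,async,vers=4",
                "truenas.local:/mnt/tank/sessions", "/mnt/sessions"], check=True)
```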

1 Like

Unfortunately these are things that are not explicitly documented in such detail by Avid.

Analyzing with fs_usage locally, I see that during recording the file is written every 2-3 seconds with asynchronous writes (so the live recording is buffered in RAM), and at the end of the recording a final synchronous write is executed.
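For anyone who wants to repeat this kind of check, piping fs_usage into a small tally script is enough. The process name and path substring below are placeholders, and the fs_usage column layout varies between macOS versions, so this only substring-matches the call names:

```python
import sys
from collections import Counter

# Rough tally of I/O events seen for the session folder. Example usage:
#   sudo fs_usage -w -f filesys "Pro Tools" | python3 tally.py Sessions
# "Sessions" is whatever substring identifies your session path.
needle = sys.argv[1] if len(sys.argv) > 1 else ""
calls = Counter()

for line in sys.stdin:
    if needle and needle not in line:
        continue
    for call in ("WrData", "write", "fsync", "F_FULLFSYNC"):
        if call in line:
            calls[call] += 1

for call, count in calls.most_common():
    print(f"{call:12s} {count}")
```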

The .tmp files used by some old versions no longer exist; the WAV/AIFF file is written directly.

The session file, on the other hand, seems to be written only on save (or auto-save), as are the session file backups. It is not very clear how it is written, because it is not captured by fs_usage.

Unfortunately, it is not possible to separate the files, as sessions reference their files in their own subfolders, and it would obviously be impossible to make the technicians in the studio completely change the way they work.

I have to do some further verification with the PT Ultimate and HDX system used in the studio, to be sure there are no behavioral differences compared to my basic version.

Thanks!

Because I am also a Pro Tools user/supporter, this topic is really interesting to me. From what I know, there should be no difference in how files are written between the software-only version and PTU + HDN/HDX. Did you have a chance to verify it?
It would be really great if you could share your thoughts, and maybe your configuration, with us - it is quite an uncommon application for TrueNAS. I just started building my own solution for about 16 such systems, and I see that every answer I find brings new question(s) on this subject :slight_smile:

It’s definitely not supported. ZFS likes whole drives, which is why people bother with Proxmox. However… this was written by me.

Creating a boot app and data partition within the TrueNAS Community boot drive.pdf

1 Like

Hello, certainly!

I don’t believe there’s any difference in file writing between the “software” version and the HDX interface version we use in the studio.
(it wouldn’t make any sense for them to develop two separate I/O subsystems for two products without any functional difference between the versions).

I did a test recording at 24/192, simultaneously capturing 48 tracks from one studio and 32 tracks from another with an analog signal (random noise) as input.
The server had absolutely no problem handling the load.

Unfortunately, I don’t remember the write patterns well during the test, but as soon as possible I’ll try to run another test.

The only situation where I experienced brief slowdowns was random seeking during playback of a very long Atmos project, with many tracks and also a video track (deliberately chosen because it didn’t all fit in the RAM cache set in Pro Tools).
We’re still talking about 1 or 2 seconds of repositioning time (3-4 in the worst case).
And anyway, the same operation done on local SSD storage also causes brief delays, so no problem in usage.

The technicians in the studio are a bit doubtful about switching from working on local storage (and backing up to the server) to working entirely on the server over the network, but I’m slowly convincing them, because no problems are occurring and this approach simplifies project management and moving projects between studios.

If you want any more specific information, just ask (I might take a few days to respond, as in this case).

Anyway, for the boot device, I ended up doing a mirror on two inexpensive NVMe drives.
Since I unfortunately discovered later that those servers don’t support direct NVMe boot, I added two USB flash drives on which I installed a rEFInd bootloader configured to “hand off” to the TrueNAS bootloader on the two NVMe drives.
(Everything works perfectly, even after 2 TrueNAS updates)

Bye!
Edo

2 Likes

Hi, many thanks for your answer! It is really good news to me, as I am just about to deploy a TrueNAS server into production with a network of about 19 Pro Tools systems.

The studio already works on networked storage (Synology) but decided to upgrade the old infrastructure - and the speed as well. In my case, the server is an AIC SB201-HK platform.

You mentioned: “We’re still talking about 1 or 2 seconds of repositioning time (3-4 in the worst case).
And anyway, the same operation done on local SSD storage also causes brief delays, so no problem in usage.” - does that mean that Pro Tools shows some AAE error at that moment (e.g. drives too slow, or other)? How frequent is such an error in your case? What type of drives do you use as storage?

I had the chance to choose NVMe drives as the main storage, connected directly to the CPU in my case, and I hope it will work without problems :slight_smile: - at the moment I see no reason why it wouldn’t. If Pro Tools is set to “Normal” cache mode, it requests relatively small blocks of data, which should not cause real delays with such blazing fast/responsive storage, even with that number of workstations connected to TrueNAS.

In the past, I did such setups using Fibre Channel or iSCSI, but as that was volume-level sharing (not file sharing), special management software was needed to access the storage, and it is not as “natural” for users as normal file-level sharing. Which protocol do you use for sharing - SMB or NFS?