Not supported, but on the old forum there was definitely a thread about doing exactly that: partitioning a fast SLOG device to do multiple jobs. You’d likely have to do it from the command line, since the GUI only sets up a SLOG with whole drives.
@chuck32 nobody said that SLOG was a write cache. I said that I wanted to use one SSD both as a SLOG and as a write cache for an application (two different partitions for two different things).
In my environment, the SLOG is used by the ZFS filesystem because I forced my datasets to “Sync always” on purpose, in an effort to better protect my data (it may be foolish, but it helps me sleep at night). With the SLOG I can saturate my 1 Gbps connection without a problem, so it is working great for me.
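For reference, the “Sync always” setting in the UI corresponds to a per-dataset ZFS property; a minimal command-line sketch (the pool/dataset name “tank/data” is a placeholder):

```shell
# Force every write to this dataset to be synchronous, so it hits the
# SLOG before being acknowledged ("tank/data" is a placeholder name).
zfs set sync=always tank/data

# Verify the current setting and where it was inherited from.
zfs get sync tank/data
```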
Now I’m deploying UrBackup in a jail, which saves its backups on a child dataset with deduplication enabled. As you can imagine, the deduplication slows writes to that child dataset quite a bit.
As an option, UrBackup allows saving the backup files to a temporary location before moving them to their final location. If the temp location is on an SSD, the backup operations will be much faster, which is why I’m calling this “partition” a write cache.
Anyway, it sounds like I will have to use a different SSD for these purposes.
32 GB must be an M10 Optane. It’s not that much oversized for 10 GbE, and M10 drives, being x2 devices with limited capacity, do not have much endurance or throughput. A 900p/P4800X can deal with being partitioned for double duty, but an M10 is probably best used for a single duty.
What’s the purpose of the “M.2 to 5 sata adapter (Silverstone ECS07)” in the “Home” system? We do not like these devices, and the motherboard has enough SATA ports. (Edit: no port multiplier here, but a JMB585. Hmm…)
@ragametal I misunderstood you then; upon rereading, that’s clear.
I don’t really dabble in sync writes (I use VMs, but not NFS shares or the like).
Just food for thought:
Will this 1 Gbps also be the limit for UrBackup? I don’t know how busy your pool is most of the time, but even a single HDD should sustain that load, so I don’t really see a need for saving the temp files on the SSD.
Deduplication is expensive, resource-wise. With only 32 GB of memory and your Atom CPU, I assume this puts a strain on your system.
Long story short, I wanted to use my onboard SATA ports to serve the hot-swappable bays and nothing else. That didn’t leave me with enough SATA ports for the internal boot SSD and the two SSDs I use for apps, because the board has six SATA ports but one is shared with the M.2 slot (you can use one or the other, not both). So I used this adapter to work around that limitation.
This particular adapter is not a port multiplier and acts like a normal HBA. I bought it based on the advice in the thread M.2 to Sata Expansion Card?
Normally I would agree with you, but you need to remember that the datasets on the HDD will be performing deduplication, which I anticipate will slow writes (I have not tried this yet). I’m trying to do everything I can to increase the performance of UrBackup, since my last experience with it was terrible. Granted, that was on another system without ZFS and with far less memory, but still.
I admit I have no experience with deduplication. My hope was that UrBackup would save the temp files on an SSD, then transfer them to the HDD, at which point the move could take as long as it needs without affecting the normal operation of the server much.
However, if you guys think that my best course of action is to increase the memory, then that is what I will do.
Based on some quick searching, UrBackup appears to support file-level deduplication - if this is the case, then adding ZFS-level deduplication will probably result in minimal additional gains for potentially significant resource costs. If you’re able to engage the dedup “higher up the chain” so to speak, I would opt for that personally.
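Building on that: ZFS dedup is a per-dataset property, so if the file-level dedup in UrBackup turns out to be enough, it is easy to check and turn off ZFS-level dedup on just the backup child dataset. A sketch (pool/dataset names “tank” and “tank/backups” are placeholders):

```shell
# See where dedup is currently enabled across the pool
# ("tank" is a placeholder pool name).
zfs get -r dedup tank

# Disable it on just the backup child dataset. Blocks already written
# stay deduplicated, but new writes skip the dedup table entirely.
zfs set dedup=off tank/backups
```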
I swear I’m not angry but this will sound like I am. I mean, can we stop focusing on things that are not problems?
Whether or not I need a SLOG is not the question nor a problem. Move on.
The M.2-SATA adapter is also not the question nor a problem (at least not at the moment). Move on.
The question was whether I could partition an SSD to use it as a SLOG and something else. @Constantin and others pointed out that this may be possible via the command line, but it is not recommended since it is not supported by the TrueNAS web UI. That is very understandable, and I will not pursue that approach.
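For anyone finding this thread later, the unsupported command-line route would look roughly like this on CORE/FreeBSD. Device, label, pool names, and sizes are all placeholders, and doing this bypasses the TrueNAS middleware, so the UI may not track it correctly:

```shell
# Partition the SSD: a small SLOG partition plus the rest for other use.
# "nvd0" and "tank" are placeholder names; this is NOT supported by the UI.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0
gpart add -t freebsd-zfs -l scratch0 nvd0

# Attach the first partition to the pool as a dedicated log vdev.
zpool add tank log gpt/slog0
```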
Now, can we focus on problems related to the main topic from this point on? Some topics that came up are:
whether or not it would be a good idea to use deduplication on the dataset used by UrBackup and, if I do, should I increase my memory? The general consensus seems to be that if I want deduplication, I should increase my RAM first. Thanks, and I welcome your input on this as I’m not familiar with deduplication.
Would it be beneficial to set UrBackup to write temp files on an SSD before committing the data to its final dataset, which may be slower due to deduplication duties? This one is tricky. It seems this is only applicable if deduplication is enabled; if it’s disabled, there is very little benefit in writing to a temp folder first. Feel free to comment on this if you happen to have experience with this kind of setup.
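On the memory question above: ZFS can simulate deduplication against data already on a pool and report the projected dedup-table size before you commit to it, which helps with sizing RAM. A sketch (“tank” is a placeholder pool name; the 320-bytes-per-entry figure is a commonly cited rule of thumb, not an exact value):

```shell
# Simulate dedup on the existing data and print a DDT histogram; the
# summary line shows the dedup ratio you would have gotten. Budget
# roughly 320 bytes of RAM per unique block as a rule of thumb.
zdb -S tank

# On a pool that already has dedup enabled, show actual DDT statistics.
zpool status -D tank
```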
Thanks for the very sensible suggestion. The UrBackup manual doesn’t seem to address how the software does deduplication.
I posted some questions on the UrBackup forum asking for clarification. However, I’m starting to get cold feet about all this because nobody has answered anything in the last 2-3 days. It makes me feel like the project is dead.