Can I use a partition for a SLOG?

I have a 32 GB Intel Optane drive that I use as a SLOG. It is definitely too big for the task, but that has never bothered me before.

Now I’m deploying an application in a jail that could benefit from using an SSD as a cache, writing temp files there before moving them to the HDD.

Could I just create two partitions on my Optane drive and use one for the SLOG and the other for my application?

Or does the SLOG need access to the entire drive?

Not supported, but on the old forum there was definitely a thread about doing exactly that: partitioning a fast SLOG device to do multiple jobs. You’d likely have to do it from the command line, since the GUI only sets up a SLOG on whole drives.
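For the record, a rough sketch of what that would look like from the shell, assuming CORE (FreeBSD) here. The device name `nvd0`, the GPT labels, and the pool name `tank` are placeholders for your own setup, and none of this is supported by the GUI:

```
# Partition the Optane drive ("nvd0" is a placeholder device name)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog nvd0   # first partition, for the SLOG
gpart add -t freebsd-zfs -l fast nvd0          # second partition, rest of the drive

# Attach the first partition to the existing pool as a log device
zpool add tank log gpt/slog
```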

A SLOG is not a write cache; memory is used for that.

Does the application actually use sync writes?

@Constantin thank you for your straight answer.

@chuck32 nobody said that the SLOG was a write cache. I said that I wanted to use an SSD both as a SLOG and as a write cache for an application (two different partitions for two different things).

In my environment, the SLOG is being used because I deliberately set my datasets to “Sync always” in an effort to better protect my data (it may be foolish, but it helps me sleep at night). With the SLOG I can saturate my 1 Gbps connection without a problem, so it is working great for me.
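For reference, the shell equivalent of that GUI setting (the dataset name is a placeholder):

```
# Force synchronous semantics on all writes to this dataset
# ("tank/data" is a placeholder for the actual pool/dataset)
zfs set sync=always tank/data

# Confirm the property took effect
zfs get sync tank/data
```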

Now I’m deploying UrBackup in a jail, which saves its backups to a child dataset with deduplication enabled. As you can imagine, deduplication slows writes to that child dataset quite a bit.

As an option, UrBackup can save the backup files to a temporary location before moving them to their final location. If the temp location is on an SSD, backup operations will be much faster, which is why I’m calling this partition a write cache.

Anyway, it sounds like I will have to use a different SSD for these purposes.

Thanks to you both for the responses.

I have a split SLOG/L2ARC in use on my 304 build.

The second partition could also have been its own pool, but it would need to be created in the shell.
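Something like this, assuming the drive was already partitioned as sketched earlier (the GPT labels and pool names are placeholders):

```
# Second partition as an L2ARC device for the existing pool...
zpool add tank cache gpt/fast

# ...or as its own small single-disk pool instead
zpool create fastpool gpt/fast
```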

32 GB must be an M10 Optane. It’s not that oversized for 10 GbE, and M10 drives, being x2 devices with limited capacity, do not have that much endurance or throughput. A 900p/P4800X can deal with being partitioned for double duty, but an M10 is probably best used for a single job.

What’s the purpose of the “M.2 to 5 SATA adapter (Silverstone ECS07)” in the “Home” system? We do not like these devices, and the motherboard has enough SATA ports. (Edit: no port multiplier here, but a JMB585. Hmm…)

@ragametal I misunderstood you there; upon rereading, that’s clear.

I don’t really dabble in sync writes (I use VMs, but not NFS shares or the like).

Just food for thought:

Will this also be the limit for UrBackup? I don’t know how busy your pool is most of the time, but even a single HDD should sustain that load, so I don’t really see a need to stage temp files on the SSD.

Deduplication is expensive, resource-wise. With only 32 GB of memory and your Atom CPU, I assume this puts a strain on your system.

The Atom is no issue, but 32 GB of RAM is not enough for dedup.
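As a rough back-of-the-envelope, using the usual rule-of-thumb figure of ~320 bytes of RAM per unique block in the dedup table (an estimate, not a measurement of your system):

```
# Rough DDT sizing: ~320 bytes of RAM per unique block (rule of thumb)
data_bytes=$(( 1 << 40 ))      # 1 TiB of unique data
block_bytes=$(( 128 << 10 ))   # 128 KiB recordsize
blocks=$(( data_bytes / block_bytes ))   # 8,388,608 blocks
ddt_mib=$(( blocks * 320 >> 20 ))        # ~2560 MiB of dedup table
echo "$blocks blocks -> ~$ddt_mib MiB of DDT in RAM"
```

Halving the block size doubles the table; that’s where the common guideline of roughly 5 GB of RAM per TB of deduped data comes from, and a few TB of backups will eat into 32 GB quickly.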

Long story short, I wanted my onboard SATA ports to serve the hot-swappable bays and nothing else. That didn’t leave me with enough SATA ports for the internal boot SSD and the two SSDs I use for apps, because the board has six SATA ports but one is shared with the M.2 slot (you can use one or the other, not both). So I used this adapter to work around that limitation.

This particular adapter is not a port multiplier and acts like a normal HBA. I bought it based on the advice in M.2 to Sata Expansion Card?

Normally I would agree with you, but remember that the datasets on the HDD will be performing deduplication, which I anticipate will slow writes (I have not tried this yet). I’m doing everything I can to increase the performance of UrBackup, since my last experience with it was terrible. Granted, that was on another system without ZFS and with far less memory, but still.

I admit I have no experience with deduplication. My hope was that UrBackup would save the temp files on an SSD and then transfer them to the HDD, at which point it could take as long as it needed without much affecting the normal operation of the server.

However, if you think that my best course of action is to increase the memory, then that is what I will do.

It depends.

ZFS dedup is expensive. Other backup software does dedup differently, and UrBackup might be one of them.

Only the SATA lane is shared; you can actually use the SATA port and have an NVMe drive in the M.2 slot.

I would advise against it because:

  1. Such configurations are not natively supported by TrueNAS;
  2. Assuming you need a SLOG, it’s a write-intensive workload that gnaws at a drive’s endurance;
  3. Assuming you need a SLOG, you need said drive to perform at its best so as not to further slow down your pool.

As others have written, it’s possible. Would I ever do that? Nope.

Or, in this case, a dodgy AHCI controller using the PCIe lanes.

Based on some quick searching, UrBackup appears to support file-level deduplication. If that’s the case, then adding ZFS-level deduplication will probably yield minimal additional gains for potentially significant resource costs. If you’re able to engage the dedup “higher up the chain,” so to speak, I would personally opt for that.
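If you want to test that before committing, ZFS can simulate dedup on existing data. A quick sketch (the pool name `tank` is a placeholder):

```
# Simulate deduplication on an existing pool WITHOUT enabling it;
# prints a block histogram and the ratio you would get.
# Note: this walks the whole pool and can take a while.
zdb -S tank
# (on TrueNAS, zdb may need the pool cachefile:
#   zdb -U /data/zfs/zpool.cache -S tank)

# If dedup is already enabled somewhere, the achieved ratio shows here
zpool get dedupratio tank
```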

I swear I’m not angry, but this will sound like I am. I mean, can we stop focusing on things that are not problems?

Whether or not I need a SLOG is not the question, nor a problem. Move on.

The M.2-SATA adapter is also not the question nor a problem (at least not at the moment). Move on.

The question was whether I could partition an SSD to use it as a SLOG and something else. @Constantin and others pointed out that this may be possible via the command line, but it is not recommended since it is not supported by the TrueNAS web UI. That is very understandable, and I will not pursue that approach.

Now, can we focus on problems related to the main topic from this point on? Some topics that came up are:

  • Whether or not it would be a good idea to use “deduplication” on the dataset used by UrBackup and, if I do, whether I should increase my memory. It seems the general consensus is that if I want “deduplication” I should increase my RAM first. Thanks, and I welcome your input on this, as I’m not familiar with deduplication.

  • Would it be beneficial to set UrBackup to write temp files to an SSD before committing the data to its final dataset, which may be slower due to deduplication duties? This one is tricky: it seems this is only applicable if “deduplication” is enabled; if it’s disabled, there is very little benefit in writing to a temp folder first. Feel free to comment if you happen to have experience with this kind of setup (see the sketch below).
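If the temp-file route does turn out to help, a minimal sketch of the ZFS side (the pool and dataset names are assumptions; the UrBackup side is set in its own settings, which I won’t guess at here):

```
# Scratch dataset on the SSD pool for UrBackup temp files
# ("apps" and "urbackup-tmp" are placeholder names).
# Temp files are disposable, so sync can safely be relaxed here.
zfs create -o sync=disabled -o compression=lz4 apps/urbackup-tmp
```

Losing in-flight temp files in a crash would only mean re-running that backup, which is why relaxing sync on the scratch dataset seems acceptable.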

Thanks for the very sensible suggestion. The UrBackup manual doesn’t seem to address how the software does deduplication.

I posted some questions on the UrBackup forum asking for clarification. However, I’m starting to get cold feet about all this because nobody has answered anything in the last 2-3 days. It makes me feel like the project is dead.

It appears to me that all the questions in the first post, including the one in the title, have been answered. What are you referring to?

@davvo kudos, what a way to defuse me.

Yes, most (if not all) of the questions seem to be answered.

The comment was more to invite others with experience in “Deduplication” and software similar to “UrBackup” to share what has worked for them.

I’m still feeling a bit lost, and I’m studying all the information I can to make an educated decision on the best way forward.

Don’t use it; wait for fast deduplication to land.

That is very good advice. Thanks.
I have to admit I had not heard about “fast deduplication” until now.

Very interesting. Hopefully they will also implement it in TrueNAS CORE and not just in SCALE.

Time will tell.

IMHO, it’s unlikely it will have WebUI support.

More about Fast Dedup at Blog - Fast Dedup is a Valentines Gift to the OpenZFS and TrueNAS Communities | TrueNAS Community and OpenZFS "Fast Dedup" Project now in Public Review | TrueNAS Community.
