Slow transfer speeds

I tried copying the files again from my Windows PC to the TrueNAS server and had no issues: a constant 600 MB/s. I then tried a second time to see if I got the same result, and as before it dropped to 6-7 MB/s after about 10-15 seconds of transferring images, so it does seem to be a cache issue with the drives, as people have mentioned.

Thank you everyone for the help

1 Like

The way ZFS works is that it buffers incoming data for 5 seconds before blasting it out to the disks.

Then while that is being blasted it buffers incoming data.

If it gets to the point that it needs to write its buffer and it’s still writing the previous buffer, it pauses everything.

This causes massive issues with protocols like iSCSI (disconnections) and would probably show up as slowdowns like the ones you see with SMB.

Thus you need drives which can write, in aggregate, as fast as the network can ingest.

At potentially 1GB/s, you’d need closer to 12 of these drives.
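The drive-count estimate above can be sketched as a quick calculation. The ~85 MB/s per-drive sustained write figure is an assumption chosen to match the "closer to 12" estimate; substitute your drives' measured post-cache write speed.

```python
# Hypothetical sketch: estimate how many drives a pool needs so its
# aggregate sustained write speed keeps up with network ingest.
# 85 MB/s per drive is an assumed post-cache sustained write speed,
# not a measured figure for any particular model.

import math

def drives_needed(network_mb_s: float, sustained_write_mb_s: float) -> int:
    """Drives required (ignoring parity/mirror overhead) to absorb ingest."""
    return math.ceil(network_mb_s / sustained_write_mb_s)

print(drives_needed(1000, 85))  # 12 drives for 1 GB/s of ingest
```

Note this ignores RAIDZ parity and mirroring overhead, which push the real count higher.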

2 Likes

Ok thank you.

So if I put in an NVMe cache drive, would that resolve this issue, or would I still face the same speed slowdown after a big data transfer? At the moment I only experience the issue when transferring two batches of files, e.g. transfer 1 is 7 GB, and once that transfer is done, transfer 2 is another 7 GB; it's only on this second transfer, after about 10 seconds, that I see the speed decrease.

1 Like

Increasing the ARC is likely the cheaper way.

3 Likes

Does an NVMe drive as a cache drive work slightly differently to using memory as a cache?

1 Like

It does. Basically, ARC is RAM, which is much faster; L2ARC is PCIe. Additionally, L2ARC is subordinate to ARC and works in a different way.
Then there is SLOG, not a write cache.

2 Likes

OK. So if it's writing to the RAM cache before it writes to the SSD, why would I get the slowdown on the SSD on the second transfer of files if I have 41 GB free in RAM? Surely the second transfer should be writing to RAM.

I may be getting confused here. Is the L2ARC (NVMe SSD) used in file transfers, i.e. when transferring files to the system they are written to the L2ARC, whereas the ARC (RAM) only stores files that are accessed regularly and won't hold the files from a file transfer? Or is that only the case when an L2ARC is used in conjunction with the ARC, and when no L2ARC is available, files transferred to the system are written into RAM, which then writes them to storage?

1 Like

This video says that if you write a lot of files then L2ARC won't bring any benefit and that you're better off adding more memory to the system. Does this mean that the speed slowdown when transferring a second batch of files is due to there not being enough RAM, given that someone else also mentioned TrueNAS only uses 50% of RAM?

1 Like

You are a bit confused, but the TLDR here is that there is no write cache in ZFS.

When you transfer data TO your NAS, it is first handled by RAM for a few seconds (5, IIRC) and then flushed to the drives. Each drive also has an internal cache, which can be relatively big (some SSDs) or small (up to 500 MB for large disks); it exists to keep the transfer from swamping the drive as it writes the data out (write speeds are universally lower than read speeds).
The larger your RAM, the larger these transaction groups (TXGs) are; however, even with TBs of RAM, if your drives cannot keep up with the data flux, your transfer will hit a wall.
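The buffering behavior above also explains why a transfer starts fast and then collapses. A toy model, with all figures assumed for illustration: writes run at network speed until the in-RAM buffer backs up, and the time that takes depends on how much faster ingest is than the drives' sustained speed.

```python
# Toy model (assumed figures): incoming data is buffered in RAM, so a
# transfer runs at full network speed until the buffer fills faster
# than the drives can drain it; after that, throughput is drive-bound.

def seconds_until_stall(buffer_gb: float, ingest_mb_s: float,
                        drive_mb_s: float) -> float:
    """Seconds of full-speed transfer before buffered writes back up."""
    surplus = ingest_mb_s - drive_mb_s  # net fill rate of the buffer
    if surplus <= 0:
        return float("inf")  # drives keep up indefinitely
    return buffer_gb * 1024 / surplus

# ~7 GB of effective buffer, 600 MB/s ingest, drives sustaining 7 MB/s
# once their internal cache is exhausted:
print(round(seconds_until_stall(7, 600, 7)))  # about 12 seconds
```

That is consistent with the 10-15 seconds of fast transfer reported above before the drop to 6-7 MB/s.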

An L2ARC is a READ CACHE that helps with, as the name suggests, reads; moreover, it's referenced in the ARC, so if you don't have enough RAM (at least 64GB) or have too much L2ARC (the suggested ratio is 1:4 to 1:6 ARC:L2ARC), your system's performance will actually decrease.
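The ratio rule of thumb above can be turned into a quick sizing check. This is just the 1:4-to-1:6 heuristic expressed as code; the function name and cutoff are illustrative, not an official formula.

```python
# Rule-of-thumb check based on the suggested 1:4 to 1:6 ARC:L2ARC
# ratio: an oversized L2ARC hurts, because its headers live in ARC
# and eat the RAM the ARC needs. Cutoff of 6x is an assumption.

def l2arc_ok(arc_gb: float, l2arc_gb: float) -> bool:
    """True if the L2ARC is within ~6x the ARC size."""
    return l2arc_gb <= 6 * arc_gb

print(l2arc_ok(32, 128))  # 1:4 ratio -> within the guideline
print(l2arc_ok(16, 500))  # far oversized -> likely hurts performance
```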

A SLOG is a write LOG, not a write CACHE, and is useful only if you enforce synchronous writes; it concerns almost exclusively block storage.

Also, TrueNAS SCALE, being based on Linux, used to be able to use only up to 50% of its memory for ARC, but the recent dragonfish/dragonfin (fact-check me here, totally going by memory) update should have introduced a fix that finally delivered a usable ARC experience. The Linux kernel writers had to wait for FreeBSD to come teach them… but let them continue believing their BeTteRFS is actually better :nerd_face:

2 Likes

“slow” doesn’t even begin to describe such a result, if true. Spinning rust will do 100+ IOPS!

2 Likes

There is a write cache; it's just in RAM. So there's nothing to gain from adding a "write cache" SSD, which is why the option does not exist.

1 Like

Someone needs to write up a resource that demystifies the concepts of “cache”, “write cache”, ARC, L2ARC, ZIL and SLOG in the context of ZFS.

Ideally, the resource should be brief, so that new users don’t find it intimidating or bewildering.

3 Likes

You mean like jgreco’s? We can probably port it over.

1 Like

Will do.

1 Like

Thank you all for the help. Does anyone have any SSD recommendations that perform better than my current drives and could handle transferring batches of files around 10 GB in size without slowing down? Ideally drives with a good endurance (TBW) spec.

1 Like

We have some of that available on the docs hub:

2 Likes

Thank you, I'll take a look at that.

1 Like

I dragged over an old post of mine that explains how the “write throttle” works, but it doesn’t pull apart the issue with different NAND types and SLC caching tricks.

5 Likes

I rechecked. The review (for the 0.5TB model) was testing 16 MB IOs when it found 30 IOPS. That is expected at 600 MB/s over SATA.


The drives might not be that crap.
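The arithmetic behind that conclusion is worth spelling out: at 16 MB per IO, the SATA link itself caps the achievable IOPS.

```python
# Sanity check on the review figures: with 16 MB IOs, a SATA link
# saturated at ~600 MB/s can complete at most ~37 IOPS, so the
# "30 IOPS" result reflects the IO size, not a defective drive.

io_size_mb = 16
sata_mb_s = 600
print(sata_mb_s / io_size_mb)  # 37.5 IOPS ceiling at this IO size
```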

2 Likes