QNAP TS-877 TrueNAS Journal

hm i see in HBS i am maxing at 95 MB/s. In the server connection test earlier it reported somewhere around 120-250 MB/s, i forget the exact number. Probably the latter, because the TS-253D is 2.5GbE, so that is the bottleneck.

As for why it's transferring at half of that, not sure, but it's fast enough for me.

The real test will come later, between the TS-877 and the desktop PC, both connected to the same switch over SFP+ 10G fiber optic, so i expect that to have the best result.

Back when i was using QNAP QTS on the TS-877, i noticed an issue with SMB transfers in File Explorer: transferring from desktop to NAS was very slow, throttling down to 50-80 MB/s instead of maxing out at 250 MB/s. But transferring from NAS to desktop hit 250 MB/s.

So i switched to 10G fiber optic, and noticed the same issue.

Now i'm testing whether moving from QTS to TrueNAS makes a difference, because i had my suspicions QTS was the culprit but no proof. So i'm going to find out soon.

Or the culprit could be Windows 11 itself

they claim it's been fixed, but some users reported it hasn't, which i too suspect might be the case.
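One way to separate raw network throughput from SMB/Explorer behavior is a quick iperf3 check between the desktop and the NAS before blaming QTS, TrueNAS, or Windows. This is just a sketch: the IP address is a placeholder, and it assumes iperf3 is available on both ends (TrueNAS ships it, and Windows builds exist).

```shell
# Placeholder address -- replace with the actual NAS IP.
NAS_IP=192.168.1.50

# On the NAS shell, start a listener:   iperf3 -s
# Then from the desktop, test both directions:
#   iperf3 -c $NAS_IP        # desktop -> NAS
#   iperf3 -c $NAS_IP -R     # NAS -> desktop (reverse mode)
#
# If both directions report near line rate (~9.4 Gbit/s on 10G),
# the asymmetry lives in SMB/filesystem, not the network path.
echo "iperf3 target: $NAS_IP"
```

If iperf3 shows the same one-way slowdown, the NIC, cabling, or driver is suspect; if it's symmetric and fast, the SMB stack is the place to dig.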

The first bug i noticed for R1 Dragonfish

doesn’t seem too critical will just ignore it.

hm, 1GbE for HBS :thinking:

maybe after it completes this share recovery, i will shut down and move the SFP+ PCIe card to the proper slot and see if anything changes?

But i want to test the Windows 11 SMB File Explorer transfer speed first before i consider doing that.

Another possible bug i’m experiencing.

TrueNAS UI lag/slowdown to the point that it's very noticeable

It was fine before, i didn't notice this, and i don't know why it's happening now. I can't even restart because i have an active backup job running.

Just how slow is the UI?

Well, when trying to log in (after being logged out for inactivity), it took 5-10 minutes for the login page to even load properly before i could log in.

Then when i wanted to create a test dataset, it got stuck and wouldn't let me save; i had to wait another 5 minutes or so before i could.

That kind of slowdown, yikes.

Seems like a temporary fix of sorts

i found this


This is the M.2 NVMe USB enclosure i am using

and the M.2 NVMe SATA i used with it

Well, the 10Gtek DAC arrived. But… the SFP+ OM3 LC-to-LC fiber optic works… so i don't really need it at this point. I'll still keep it for future projects (a DAC is still handy to have around).

Maybe if we hit 100 likes for the thread i will test it out for you guys if i see there is demand for this :sweat_smile:

For some reason, I haven’t experienced this issue for the last couple days. I’m not sure what triggered the issue to start, or what caused it to go away.

At the time I was experiencing the problems, I was moving 170TB of small files between pools with rsync. Millions of files and directories. Around 2-3 days ago, I started another rsync job of the same amount of data, but it’s very large files so significantly fewer files and folders.

I noticed you said that you were doing a backup – does it also have a lot of small files? Some of the errors I got were related to iwatch/inotify, and I wonder if it’s just getting exhausted?

most of it is video files ranging from 200-600MB each, with the occasional 1-3GB file.

now that you mention that… :thinking:

Also noticed the slowdown went away. what changed? i didn't even reboot yet, and the backup is still ongoing at 74%

speeds 92 MB/s

The share i am currently restoring via rsync is mostly videos.

The other share i am going to restore later is the one with many smaller files like pictures, music, documents, etc., as well as videos.

Hm, when you’re back to the smaller files, maybe it’ll return to slow-mode?

The past couple days I've been moving videos averaging 25-75GB each. The UI has been snappy. Prior to that I was moving millions of files, all under 1GB or so, across millions of folders. Maybe that exhausted inotify watches or whatever else middlewared needs to function.
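If watch exhaustion is the suspect, it's easy to check from the NAS shell. These are the standard Linux inotify knobs, nothing TrueNAS-specific, and the raised value below is just an example:

```shell
# Show the current inotify limits (watches and instances per user).
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# To raise the watch limit until the next reboot (example value):
#   sysctl -w fs.inotify.max_user_watches=1048576
```

Comparing these numbers against how many directories the rsync job touches would at least confirm or rule out the exhaustion theory.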

Would be curious to hear if your smaller file restore later causes the issue again. Good luck!

noticed my backup got interrupted midway

heard a beep, then later noticed this in the log. I see it's transferring again. Not sure what happened there.

There is a possibility that the PCIe SFF card is coming loose. But i checked in the QTS UI and it still detects the TL-D400S, so i think that's fine; will keep an eye on it for now.

saw an explanation here

When connecting to a remote system via SSH, you might encounter the error Client_loop: send disconnect: Broken pipe.

In this tutorial, we will see why this happens and address the error.

Client_loop: send disconnect: Broken pipe Error

The error is simply a disconnection message that notifies you that your SSH connection timeout has been exceeded.

This is a period of inactivity during which no Linux command is executed or issued from the client side. When this happens, the SSH session is terminated, effectively disconnecting you from the remote server.

Most users will press ‘ENTER’ or another key to avoid having an idle SSH session, which would otherwise cause the disconnection from the host. However, this can be tedious and time-wasting.

Thankfully, SSH default configuration settings provide a few parameters that you can configure to keep your SSH connections active for longer periods of time.

Fix Client_loop: send disconnect: Broken pipe Error

To resolve this issue, you can increase the SSH keepalive settings on the server. To do so, modify the SSH daemon configuration file, which is usually at /etc/ssh/sshd_config.

$ sudo vi /etc/ssh/sshd_config

Be sure to locate these two parameters: ClientAliveInterval and ClientAliveCountMax. Let’s check out what they do.

  • ClientAliveInterval – The period of inactivity after which the SSH server sends a keepalive message to the connected client.
  • ClientAliveCountMax – The number of keepalive messages the server will send without receiving a response from the client before it terminates the session.

We will set the two values as follows:

ClientAliveInterval 300
ClientAliveCountMax 3
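With those values, an unresponsive session survives 300 × 3 = 900 seconds before the server drops it (and sshd needs a restart or reload after editing sshd_config). The same keepalive effect can also be configured purely client-side with OpenSSH's ServerAliveInterval / ServerAliveCountMax options, which is handy when you can't edit the server's config — a minimal fragment, mirroring the values above:

```
# ~/.ssh/config on the client machine
Host *
    ServerAliveInterval 300
    ServerAliveCountMax 3
```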

qnap website

yeah i will be sure to let you know.

when doing this i will be moving the SFP+ card to the other slot to see if there is any difference in performance or not

already answered how the M.2 SATAs were installed: via the QWA wireless add-on card (maybe this is what is causing the issue with booting it)

This TS-877 is limited to 2.5'' SATA, for which it has 2 slots. This is most likely to work (since it's a direct connection to the board without any sort of intermediaries), and i may pursue it in the future. Is there perhaps a way to clone TrueNAS off the USB M.2 NVMe onto a pair of 2.5'' SATA SSDs? :thinking: or do i have to do everything again from scratch…

Not sure what you mean by SATA DOM. i didn't see where the DOM is located because i didn't plan to physically remove it to begin with. Unlike the TS-253D, it would be a bit much to dismantle this one :sweat_smile:

File Explorer test on Windows 11, from desktop to NAS

from NAS to desktop

not sure why one direction is significantly slower, but it's still roughly 2.5GbE speeds :thinking:

in the other direction, sometimes it's a blistering 700 MB/s (like you said, i think there is a bottleneck from the PCIe slot so it can't reach full 10G speeds, but this seems good enough for me to not worry too much about it), but at times it will also dip down to a steady 300-400 MB/s.
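For context, the raw line-rate ceilings work out as follows — a quick back-of-the-envelope conversion, ignoring TCP/SMB protocol overhead, so real transfers land somewhat lower:

```shell
# Link speed in Gbit/s -> theoretical max MB/s (decimal megabytes).
for gbps in 1 2.5 10; do
    awk -v g="$gbps" 'BEGIN { printf "%s GbE -> %d MB/s max\n", g, g * 1000 / 8 }'
done
```

So ~250 MB/s against the 2.5GbE TS-253D is close to that link's ceiling, while 700-800 MB/s bursts on the 10G link sit well under the theoretical 1250 MB/s, which points at a bottleneck somewhere other than the wire.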

The NAS is set up with 4x 4TB Seagate IronWolf HDDs (128MB cache, i think?)

And the desktop has a Crucial P5 Plus 2TB PCIe M.2 2280SS Gaming SSD

Even before, when i was using QTS with the 2.5GbE NIC installed in the QNAP, it showed this same slowdown in one direction (PC to NAS was drastically slower than the other direction).

But the difference now is that speeds are at least a stable 2.5GbE, rather than an erratic 2.5GbE that may also dip and hover at 1GbE at times. So there is that improvement.

Still don’t understand why such a disparity.

The only thing i can think of as sus is Windows 11 SMB? :thinking:

when you think 10g is a lot, raidowl goes further :exploding_head:

doubt most people would need 10g, but it’s nice to have.

ok, moving the graphics card to the other slot 100% won't work. it's too big for that :sweat_smile:

so moving forward, either the graphics card gets kicked out (and only reinstalled on the rare occasion i need BIOS access), or i use it in the previous setup.

that said, i moved the SFP+ PCIe card to the proper slot to avoid any possible performance bottleneck, and also to test whether it gets me to 10G or not, or whether it just gets stuck at a 700 MB/s max like i observed in the previous configuration.

I will also be doing my last share restore over rsync in this new config to see whether performance changes at all. I doubt it, because the 253D is capped at 2.5GbE anyway, so i doubt changing the PCIe slot on the TS-877 would do anything for that (also, i did hit 700 MB/s in my test, so it's not like it was lacking in the first place).

after switching the PCIe card to the new slot, i ran the Windows 11 File Explorer test again

desktop to TrueNAS

TrueNAS to desktop

at most i saw TrueNAS-to-desktop peak around 800 MB/s-ish, then it came down most of the time :thinking:

that's my result. But basically, if the PCIe card were in the other slot, i would probably not notice any big difference (at least to me :sweat_smile: )

Now running HBS rsync one way to restore from backup.

96 MB/s, about the same as the previous config. Like i suspected, it didn't make a difference, since the bottleneck in this case is the 253D with its 2.5GbE NICs.

In summary, i am better off putting the PCIe card back in the other slot, so i can then keep the graphics card installed without having to remove it at all, seeing as there is no noticeable downside to that configuration :sweat_smile:

if i had to guess why performance falls short of 10G, maybe i need to replace the 4x 4TB drives with 6-8TB drives, and run either 4 or 6 of them, to get 10G speeds :thinking:
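That guess lines up with some quick disk math. The per-disk rate here is an assumption (roughly 200 MB/s sequential for a modern IronWolf), not a measured number:

```shell
# Back-of-the-envelope pool ceiling: data disks x per-disk rate.
# A 4-wide RAID-Z1 has ~3 data disks; a stripe or mirror layout
# can read from all 4 at once.
per_disk=200
for data_disks in 3 4; do
    echo "${data_disks} data disks x ${per_disk} MB/s = $((data_disks * per_disk)) MB/s"
done
```

600-800 MB/s brackets the observed 700-800 MB/s peaks nicely, so the spinning disks, rather than the NIC or PCIe slot, are the likely ceiling; more or faster drives would raise it.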

ok something odd happened earlier.

This is before i started HBS to run a new rsync.

since i had some videos already restored, i tried playing some, but the video seemed a bit laggy, which didn't look normal.

so i tried a couple more, with a similar issue.

in the desktop's network settings, i disabled and re-enabled the adapter to see if that did anything.

sometime after that, i can't remember exactly when, and even while HBS is running at about 96 MB/s, video playback is working fine without issue, including the same videos that had problems before (where it was laggy and stuff).

Not sure what that was about, only that the issue resolved on its own :thinking:

Could this be anything related to your odd slow-mode issue, Jorsher?

I’ll keep monitoring for now if it happens again.