QNAP TS-877 TrueNAS Journal

Well, the 10Gtek DAC arrived. But… the SFP+ OM3 LC-to-LC fiber optic cable works… so I don't really need it at this point. I'll still keep it for future projects (a DAC is still handy to have around).

Maybe if we hit 100 likes for the thread I will test it out for you guys, if I see there is demand for it :sweat_smile:


For some reason, I haven’t experienced this issue for the last couple days. I’m not sure what triggered the issue to start, or what caused it to go away.

At the time I was experiencing the problems, I was moving 170TB of small files between pools with rsync. Millions of files and directories. Around 2-3 days ago, I started another rsync job of the same amount of data, but it’s very large files so significantly fewer files and folders.

I noticed you said that you were doing a backup – does it also have a lot of small files? Some of the errors I got were related to iwatch/inotify, and I wonder if it’s just getting exhausted?
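If inotify exhaustion is the suspect, the kernel limits and how much of them is in use can be checked directly. A quick sketch (these are the stock Linux sysctls, nothing TrueNAS-specific):

$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances

# count the inotify instances currently open across all processes,
# to compare against max_user_instances
$ find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l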


Most of it is video files ranging from 200-600 MB, some occasionally 1-3 GB each.

Now that you mention it… :thinking:

Also noticed the slowdown went away. What changed? I didn't even reboot yet, and the backup is still ongoing at 74%.

Speeds are around 92 MB/s.

The share I am currently restoring via rsync is mostly videos.

The other one I am going to restore later has many smaller files like pictures, music, documents, etc., as well as videos.

Hm, when you’re back to the smaller files, maybe it’ll return to slow-mode?

The past couple days I've been moving videos ranging from 25-75GB each on average. The UI has been snappy. Prior to that I was moving millions of files, all under 1GB or so, with millions of folders. Maybe that exhausted iwatch/inotify and whatever else middlewared needs to function.

Would be curious to hear if your smaller file restore later causes the issue again. Good luck!


Noticed my backup got interrupted midway.


Heard a beep, then later noticed this in the log. I see it's transferring again. Not sure what happened there.

There is a possibility that the PCIe card for the SFF connection is coming loose. But I checked in the QTS UI and it still detects the TL-D400S, so I think that's fine; I will keep an eye on it for now.

Saw an explanation here:

When connecting to a remote system via SSH, you might encounter the error Client_loop: send disconnect: Broken pipe.

In this tutorial, we will see why this happens and address the error.

Client_loop: send disconnect: Broken pipe Error

The error is simply a disconnection message that notifies you that your SSH connection timeout has been exceeded.

This is a period of inactivity during which no Linux command is executed or issued from the client side. When this happens, the SSH session is terminated, effectively disconnecting you from the remote server.

Most users will press 'ENTER' or another key on the keyboard to avoid having an idle SSH session, which would cause the disconnection from the host. However, this can be tedious and time-wasting.

Thankfully, SSH default configuration settings provide a few parameters that you can configure to keep your SSH connections active for longer periods of time.

Fix Client_loop: send disconnect: Broken pipe Error

To resolve this issue, you need to increase the SSH connection timeout. To do so, modify the SSH daemon configuration file on the server, which is usually at /etc/ssh/sshd_config.

$ sudo vi /etc/ssh/sshd_config

Be sure to locate these two parameters: ClientAliveInterval and ClientAliveCountMax. Let’s check out what they do.

  • ClientAliveInterval – This is the period of inactivity after which the SSH server sends an alive message to the remote client that is connected to it.
  • ClientAliveCountMax – This is the number of client-alive messages the server will send without receiving a response from the client before it terminates the connection.

We will set the two values as follows:

ClientAliveInterval 300
ClientAliveCountMax 3
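If the disconnect involves a session initiated from the client side (as with an rsync-over-SSH job), keepalives can also be configured there. A minimal sketch, assuming OpenSSH on the machine that opens the connection (the host alias below is just an example):

# ~/.ssh/config on the client
Host truenas
    ServerAliveInterval 60
    ServerAliveCountMax 3

After changing sshd_config on the server, the SSH service needs a restart to pick up the new values (e.g. sudo systemctl restart sshd, or restarting the SSH service from the NAS UI).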

QNAP website:
/en-in/how-to/faq/article/hbs3-backup-job-to-hbs2-rtrr-server-fails-with-error-broken-pipe

Yeah, I will be sure to let you know.

When doing this I will be moving the SFP+ card to the other slot to see if there is any difference in performance or not.

Already answered how the M.2 SATAs were installed: via the QWA wireless add-on card (maybe this is what is causing the issue with booting it).

This TS-877 is limited to 2.5" SATA, of which it has 2 slots. This is most likely to work (since it's a direct connection to the board without any intermediaries), and I may pursue it in the future. Is there perhaps a way to clone TrueNAS off the USB M.2 NVMe onto a pair of 2.5" SATA SSDs? :thinking: Or do I have to do everything again from scratch…
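From what I have gathered so far, the usual route is not a block-level clone: export the system configuration from the web UI, do a fresh install onto the SATA SSDs (the installer lets you select both so the boot pool ends up mirrored), then upload the saved config; the data pools get reconnected afterwards. As a rough sanity check of the current boot device before trying that (the pool name may be boot-pool or freenas-boot depending on the version):

$ sudo zpool status boot-pool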

Not sure what you mean by SATA DOM. I didn't see where the DOM is located because I didn't plan to remove it physically to begin with. Unlike the TS-253D, it would be a bit much to dismantle this one :sweat_smile:

File Explorer test on Windows 11, from desktop to NAS:

From NAS to desktop:

Not sure why one direction is significantly slower, but it's still roughly 2.5GbE speeds :thinking:

In the other direction, sometimes it's a blistering 700 MB/s (like you said, I think there is a bottleneck from the PCIe slot so it can't reach 10G speeds, but this seems good enough for me to not worry too much about it), but it will also at times dip down to a steady 300-400 MB/s.

The NAS is set up with 4x 4TB Seagate IronWolf HDDs (128MB cache?).

And the desktop has a Crucial P5 Plus 2TB PCIe M.2 2280SS Gaming SSD.

Even before, when I was using QTS and the 2.5GbE NIC installed on the QNAP, it showed this same slowdown in one direction (PC to NAS was drastically slower than the other direction).

But the difference now is that speeds are at least a stable 2.5GbE, rather than an erratic 2.5GbE that may also dip and hover at 1GbE at times. So there is that improvement.

Still don't understand why there is such a disparity.

The only thing I can think of that seems sus is Windows 11 SMB? :thinking:
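One way to narrow it down would be to take SMB and the disks out of the picture and test the raw link with iperf3 first (assuming it is installed on both ends; the IP below is just a placeholder for the NAS):

# on the NAS
$ iperf3 -s

# on the Windows desktop: forward direction, then reverse (NAS -> desktop)
> iperf3.exe -c 192.168.1.50
> iperf3.exe -c 192.168.1.50 -R

If both directions sit near line rate, the asymmetry is coming from SMB or the disks rather than the network or drivers.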

When you think 10G is a lot, raidowl goes further :exploding_head:

Doubt most people would need 10G, but it's nice to have.

OK, moving the graphics card to the other slot 100% won't work. It's too big for that :sweat_smile:

So moving forward, either the graphics card gets kicked out (and only reinstalled on the rare occasions I need to get into the BIOS), or I use the previous setup.

That said, I moved the SFP+ PCIe card to the proper slot to avoid any possible performance bottleneck, and also to test whether it gets me to 10G or not, or whether it just gets stuck at 700 MB/s max like I observed in the previous configuration.

I will also be doing my last share restore over rsync in this new config to see if performance changes at all. I doubt it, because the TS-253D is capped at 2.5GbE anyway, so changing the PCIe slot on the TS-877 probably won't do anything for that (also, I did hit 700 MB/s in my test, so it's not like it was lacking in the first place).

After switching the PCIe card to the other slot, I ran the Windows 11 File Explorer test again.

Desktop to TrueNAS:


TrueNAS to desktop:

At most I saw TrueNAS to desktop peak around 800 MB/s-ish, then it came down most of the time :thinking:

That's my result. Basically, if the PCIe card were in the other slot I would probably not notice any big difference (at least to me :sweat_smile:).

Now running HBS rsync one way to restore from backup.

96 MB/s, about the same as the previous config. Like I suspected, it didn't make a difference, since the bottleneck in this case is the TS-253D with its 2.5GbE NICs.

In summary, I am better off putting the PCIe card back in the other slot, so… I can keep the graphics card in its slot without having to remove it at all, seeing as there is no noticeable downside to that configuration :sweat_smile:

If I had to guess why performance tops out where it does, maybe I need to replace the 4x 4TB drives with 6-8TB drives and go with 4-6 of them to get 10G speeds :thinking:
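Before swapping drives, it might be worth measuring what the pool can do locally, so the network is out of the equation entirely. A rough sketch with fio (the dataset path is just an example, and fio may need to be installed first):

# sequential write, 1 MiB blocks, 10 GiB test file
$ fio --name=seqwrite --filename=/mnt/tank/fio-test/fiofile --rw=write --bs=1M --size=10G --group_reporting

# sequential read of the same file
$ fio --name=seqread --filename=/mnt/tank/fio-test/fiofile --rw=read --bs=1M --size=10G --group_reporting

# note: reads can be served from ARC (RAM) if the file fits in memory,
# so use a size larger than RAM for a more honest read number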

OK, something odd happened earlier.

This was before I started HBS to run a new rsync.

Since I had some videos already restored, I tried playing some, but the video seemed a bit laggy; that didn't look normal.

So I tried a couple more; similar issue as well.

In the networking settings on the desktop I disabled and re-enabled the adapter to see if that did anything.

Sometime after that, can't remember when, and even with HBS running at about 96 MB/s, video playback is working fine without issue, including the same videos that had problems before (where it was laggy and stuff).

Not sure what that was about, only that the issue resolved on its own :thinking:

Could this be related to your odd slow-mode issue, Jorsher?

I'll keep monitoring for now to see if it happens again.

More performance tuning videos :thinking:

I was snooping around the Reporting section when I spotted Netdata; clicked that, and then this popped up:

Wow… why don't QNAP and Synology have something like this? When did TrueNAS integrate this into the OS? Great decision.

Note: to be fair, I think both can install it via their native apps, but… TrueNAS went further by placing it in an obvious spot and making it easy to deploy and set up. That made a difference.

Before this, I had set up Prometheus, cAdvisor and Grafana, which is way more of a resource hog than Netdata.

This assessment is clearly biased, so take it with a grain of salt, but from my own usage of the old setup I did notice storage space consumption and CPU utilization spiking beyond reason. With some tuning it became more reasonable for 24/7 usage.

Well, Tim suggested using a pair of SSDs as a SLOG to boost performance and hit those 10G speeds.

What capacity? Any good budget recommendations? And what is the estimated lifespan for drives used in this setup as a mirrored SLOG :thinking:?

:thinking:

https://www.reddit.com/r/zfs/comments/1bsx6pc/zfs_zilslog_ssd_question/

https://www.reddit.com/r/zfs/comments/1bqs0q4/ssd_recommendations_for_slog/

https://www.reddit.com/r/zfs/comments/10wyt6e/zilslogl2arc_do_i_need_it_pcie_drives_on_pcie/

https://www.reddit.com/r/zfs/comments/zyclx2/slog_usecase_should_i_have_one/

gea


For an SLOG you need ultra-low latency / very high steady 4K write values, and you need power-loss protection. Size must be at least 10 GB. The best affordable SLOG is an Optane 1600 with 500,000 write IOPS. The larger 118GB model has higher endurance.

When an SLOG fails, ZFS reverts write logging from the SLOG to the on-pool ZIL, so nothing happens. Only when the system crashes together with an SLOG failure is the content of the RAM-based write cache lost. This is a very rare condition, so usually an SLOG mirror is not needed, as you can import/use a pool with a missing SLOG. An SLOG mirror protects against this and against the performance degradation when the SLOG fails.

Well, I don't have a UPS (though I probably should :sob:), so I can't do something as risky as this, unfortunately.

So the only thing I can think of is replacing the hard drives with higher-capacity ones, 6-8TB or better with more cache, and possibly going with somewhere between 4-6 HDDs in RAIDZ1 minimum.

If it were 4 SSDs in RAIDZ1, no problem, but I'm not going that route due to the price per gig :smiling_face_with_tear:

Doubt the issue is networking, since I'm on SFP+ 10G fiber.
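One thing worth checking before spending anything: a SLOG only ever helps synchronous writes, and plain SMB copies are normally asynchronous, so it might not move the File Explorer numbers at all. The dataset's sync setting shows what is actually in play (the pool/dataset name below is just a placeholder):

$ zfs get sync tank/share

# sync=standard (the default) only pushes writes that the application explicitly
# flushes through the ZIL/SLOG; sync=always forces everything through it;
# sync=disabled bypasses it entirely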

help understanding alert: “‘freenas-boot’ is consuming USB devices ‘sdg’ which is not recommended.”

boxsterguy


It’s a silly warning that can’t have enough information to make a valid statement.

The goal is to get people off of using lower-quality USB flash, especially as TrueNAS gets ready to remove the option to put syslog on your data pool (which will result in more writes to boot). The problem is that the system doesn't know what kind of storage is actually attached via USB, and there's absolutely nothing wrong with using SSDs in USB enclosures for boot. Thus, even though I'm using decent M.2 SSDs in a USB enclosure, TrueNAS complains because it can only see that I'm using USB as the interface and assumes I'm using USB flash even though I'm not.

The reports of USB flash death have been greatly exaggerated.

https://www.reddit.com/r/truenas/comments/18uiin7/help_understanding_alert_freenasboot_is_consuming/

Like was said earlier in the thread, it's a non-issue, but I just added more context and depth for others who are curious.

But the dilemma I am facing is: how do I stop the alerts for this? I keep getting spammed for something I'm already aware of and okay with, but TrueNAS keeps hammering me to death with unwanted alerts x-x;


Kinda reminds me of Taylor Swift's new song Daddy :rofl:

Note: in short, the song flames all her hardcore fans who criticize her life choices (especially who she dates) and tells them it's her choice; even if she makes mistakes, it's her own life to bear, and not for others to dictate :rofl:

But in my case I'm using an M.2 NVMe SSD in a USB enclosure, not those *** flash sticks. So can you plz stop harassing me as collateral damage :sob: This is hardly helpful.

Note: before any misunderstanding, unlike Swift, I'm not trying to cuss out anyone, lel. I just found it funny how it may apply to this situation, is all :rofl:

Now doing some research on Docker setup on TrueNAS.