High IO waits and SMB lag

Hey, I’ve noticed my IO waits are extremely high. I’m on a 1 Gb link but I’m not saturating it completely, only up to about 200 Mbps, and when I look at Netdata my whole graph is purple.


I’m not fully using my CPU either. The NAS has an 8-core/16-thread CPU with 80 GiB of memory, so plenty; I have maybe about 60 GiB free. I only bring this up because I’m noticing lots of annoying lag, and yes, I am on Ethernet. While I’m downloading big files to my NAS I lose the connection, which kills the transfer, along with big stutters in my video editing, and I can hear my NAS crying like it’s out of breath. I also have one SSD for caching, a WD 500 GB, nothing fancy. Am I missing something?

Yes, you are missing something. Could you provide full hardware specifications?


Full spec: CPU details and lspci output (posted as screenshots).

You need to provide better details on your entire system and the problem. We can only go off what you post.

This may help


My guess is that the Realtek NIC is the problem. Switch it out for an Intel NIC.
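
If you want to confirm which NIC and driver you are actually running before swapping hardware, something along these lines should show it (the interface name is just an example, replace it with yours):

    lspci | grep -i ethernet                    # which NIC chip is in the box
    ethtool -i enp3s0                           # driver and firmware in use
    ethtool -S enp3s0 | grep -i -E 'err|drop'   # error/drop counters, if the driver exposes them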

What is this? Is it used for L2ARC or SLOG, sometimes called write cache?

It’s set for L2ARC.

Yikes, I currently have all my PCIe slots filled or blocked. Hopefully I could pop out the 1060 Ti, replace it with a single-slot GPU, and free up one more slot. I’ll keep that in mind.

As for my ZFS config, this is what I have:


I mostly hit the RAID-Z1. The other one is a full SSD vdev, and yes, I know it is a stripe; all the data in there is not important, just data like games and easily replaceable stuff.

It would help if you posted detailed hardware, OS version, pool layout, etc.
You can expand my Details for an example, or read the Joe’s Rules for Asking for Help link in an above post.

We usually start with that and a network test like iPerf3 to rule that out, and then start looking at what you are doing. Please be as detailed as possible about your problem and how you are testing. With the details on your hardware and pool setup we can get an idea of what to expect for performance.
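
If you haven’t used iPerf3 before, the basic test is roughly this (the address is a placeholder for your NAS):

    # on the NAS
    iperf3 -s

    # on the client
    iperf3 -c 192.168.1.50 -t 30       # client -> NAS
    iperf3 -c 192.168.1.50 -t 30 -R    # NAS -> client (reverse)

If that shows close to line rate in both directions, the network is probably fine and the pool or controller becomes the main suspect.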

Adding a link. I don’t know about the controller you listed, but it might be covered by this.

Also, for the config, I have 4 drives plugged into the motherboard and some plugged into a SATA controller; it was a cheap one.

I don’t think you need the L2ARC on your RAID-Z1 pool. Documentation suggests considering adding it only after you have 64 GB of RAM, and even then you have to go by the ARC statistics. If you have low RAM, your current 2 x 465 GiB of L2ARC eats into that RAM and what is available to the ARC. Is that set up as almost 1 TB of L2ARC? Have you checked the statistics to see if the L2ARC is even helpful for you? Those drives could be repurposed.
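
If you want to check before pulling them, something like this should show whether the L2ARC is actually getting hits (the pool name is just an example; on some systems the tools are named arc_summary.py / arcstat.py):

    arc_summary | less          # overall ARC size and L2ARC hit/miss ratios
    arcstat 5                   # live ARC stats every 5 seconds
    zpool iostat -v tank 5      # per-vdev activity, including the cache devices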

Fair. I do have over 64 GB of RAM. I removed those drives; maybe I’ll do something with them. I have been watching htop and noticed my disk IO is hitting over 100 percent, sometimes even in the 200s, even with very low CPU usage. Could this just be an HDD bottleneck?

This is taken from calomel.org
It at least shows a comparison of some testing and gives you an idea of single-drive performance versus a RAID-Z1 with three drives. Your issue may be the controller or a combination of items.

ZFS Raid Speed Capacity and Performance Benchmarks
                   (speeds in megabytes per second)

 1x 4TB, single drive,          3.7 TB,  w=108MB/s , rw=50MB/s  , r=204MB/s 
 2x 4TB, mirror (raid1),        3.7 TB,  w=106MB/s , rw=50MB/s  , r=488MB/s 
 2x 4TB, stripe (raid0),        7.5 TB,  w=237MB/s , rw=73MB/s  , r=434MB/s 
 3x 4TB, mirror (raid1),        3.7 TB,  w=106MB/s , rw=49MB/s  , r=589MB/s 
 3x 4TB, stripe (raid0),       11.3 TB,  w=392MB/s , rw=86MB/s  , r=474MB/s 
 3x 4TB, raidz1 (raid5),        7.5 TB,  w=225MB/s , rw=56MB/s  , r=619MB/s 
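
To narrow down whether it is the disks or the controller, you could watch per-disk activity while you reproduce the lag (the pool name is a placeholder):

    zpool iostat -v tank 5     # throughput and ops per vdev and per disk
    iostat -x 5                # %util and await per device (from the sysstat package)

If one disk on the cheap SATA controller sits near 100% utilization with high await while the others are mostly idle, that points at that disk or the controller rather than the pool layout.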

Thank you very much. I will run some benchmarking and testing and report back if I pinpoint my issue, in case someone else runs into this.
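
Probably starting with something like this fio run straight on the pool (the path and sizes are just placeholders; --end_fsync makes fio flush to disk before reporting, so the number is closer to real throughput):

    fio --name=seqwrite --directory=/mnt/tank/bench --rw=write \
        --bs=1M --size=10G --numjobs=1 --ioengine=posixaio --end_fsync=1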

I forgot to mention that I always have two VMs running in the background. I added two Intel Optane M10s in a mirror configuration and that seems to have made a drastic improvement. It’s not the perfect fix-all, but it seems to have helped a lot.
