Hey, I've noticed my I/O waits are extremely high. I'm on a 1 Gb link but I'm not saturating it, only hitting around 200 Mbps, and when I look at Netdata my whole graph is purple.
I'm not maxing out my CPU either. The NAS has an 8-core/16-thread CPU with 80 GiB of memory, so plenty of headroom; I usually have about 60 GiB free. I only bring this up because I'm noticing a lot of annoying lag, and yes, I am on Ethernet. While I'm downloading big files to my NAS I lose my connection, which kills the download, along with big stutters in my video editing, and I can hear my NAS thrashing like it's out of breath. I also have one SSD for caching, a 500 GB WD, nothing fancy. Am I missing something?
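For reference, here's a quick way to watch the same I/O wait Netdata is graphing, straight from the shell (a minimal sketch, assuming a Linux-based NAS such as TrueNAS SCALE; the 5-second interval is arbitrary):

```bash
# the 'wa' column is the percentage of CPU time stalled waiting on I/O,
# sampled every 5 seconds -- high 'wa' with an idle CPU matches my symptoms
vmstat 5
```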
Yikes, I currently have all my PCIe slots filled or blocked. Hopefully I could pop out the 1060 Ti, replace it with a single-slot GPU, and free up one more slot. I'll keep that in mind.
I mostly hit the RAID-Z1 pool. The other one is an all-SSD vdev, and yes, I know it's a stripe; none of the data on it is important, just a data lake of games and easily replaceable stuff.
It would help if you posted detailed hardware, OS version, pool layout, etc.
You can expand my Details for an example, or read the Joe's Rules for Asking for Help link in the post above.
We usually start with that, plus a network test like iPerf3 to rule the network out, and then start looking at what you are doing. Please be as detailed as possible about your problem and how you are testing. With the details of your hardware and pool setup we can get an idea of what performance to expect.
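For example, a basic iPerf3 run looks like this (the IP address is a placeholder; substitute your NAS's):

```bash
# on the NAS: start iperf3 in server mode
iperf3 -s

# on your desktop: measure throughput to the NAS for 30 seconds
iperf3 -c 192.168.1.50 -t 30

# then the reverse direction (NAS -> client)
iperf3 -c 192.168.1.50 -t 30 -R
```

If both directions come in near 940 Mbps, the network is fine and we can focus on the pool.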
Adding a link. I don't know about the controller you listed, but it might be covered by this.
I don't think you need the L2ARC on your RAID-Z1 pool. Documentation suggests only considering one after you have 64 GiB of RAM, and even then you have to go by the ARC statistics. With low RAM, your current 2 x 465 GiB of L2ARC actually eats into it, since the L2ARC's header table lives in RAM and reduces what is available to the ARC itself. Is that set up as almost 1 TiB of L2ARC? Have you checked the statistics to see whether the L2ARC is even helping you? Those drives could be repurposed.
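A quick way to check from the TrueNAS shell (section names in the output vary a bit by version):

```bash
# full ARC/L2ARC statistics, including hit ratios
arc_summary

# just the L2ARC portion -- a hit ratio near zero means it isn't earning its keep
arc_summary | grep -A 20 "L2ARC"
```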
Fair, I do have over 64 GB of RAM. I removed those drives; maybe I'll do something with them. I have been watching htop and noticed my disk I/O is hitting over 100 percent, sometimes even 200, with very low CPU usage. Could this just be an HDD bottleneck?
This is taken from calomel.org
It shows a comparison of some testing, and at least gives you an idea of single-drive performance versus a three-drive RAID-Z1. Your issue may be the controller, or a combination of items.
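To see whether the spinning disks themselves are pegged, watch per-vdev activity while you reproduce the load (the pool name tank is a placeholder):

```bash
# per-vdev and per-disk bandwidth/IOPS, refreshed every 5 seconds
zpool iostat -v tank 5

# add -l to include latency columns; high disk wait times with low CPU
# usage point at the drives or the controller rather than the system
zpool iostat -vl tank 5
```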
I forgot to mention that I always have two VMs running in the background. I added two Intel Optane M10s in a mirror configuration, and that seems to have made a drastic improvement. It's not a perfect fix-all, but it seems to have helped a lot.
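In case it helps anyone later, adding a mirrored pair looks roughly like this. The pool and device names are placeholders, and I'm showing a log vdev; whether log or special (metadata) is the right role depends on the workload, so treat this as a sketch rather than the exact fix:

```bash
# add two NVMe devices as a mirrored log vdev (SLOG) to the pool "tank";
# swap "log" for "special" to use them as a mirrored metadata vdev instead
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# confirm the new mirror shows up in the pool layout
zpool status tank
```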