What Is Realistic NAS Performance? Are my Expectations Too High?

I’ve been using TrueNAS for a couple of years now. I have a dedicated PC with TrueNAS Core installed directly on the system; it’s NOT running through any VM. The idea is that this gives TrueNAS direct access to all of the computer’s resources without any kind of mediator slowing it down or interfering. The system has a Ryzen 2700X CPU, 64GB of RAM (maxed out), and a boot SSD, and is linked to a 120TB drive box via an enterprise 10Gb fiber connection.

Until recently, my network was a solid 1Gb network, with all switches and routers able to handle 1Gb. All the NICs are at least 1Gb with the best drivers I can find. Performance tests indicate the network is a reliable 1Gb network…

I have an SMB share that I use to back up my computer systems. My main system is a custom-built monster with four drives: a 1TB M.2 SSD (boot), a 4TB 2.5" SSD (gaming), a 6TB Western Digital high-speed hard drive, and a 10TB hard drive. The hard drives are used for long-term storage, not daily use.

My question comes with the obvious - backups, and copying the two hard drives’ 16TB of data over to a dedicated share on the NAS. I’ve tried this several times, but it takes huge amounts of time - like WEEKS to copy just the 6TB hard drive!

Backing up the 1TB boot drive to the SMB share takes about 2.5 days. Extrapolating that out, my entire system is roughly 21TB, so backing up the entire system would likely take around 53 DAYS! That’s a lot of days where, if one thing goes wrong, or the system hangs or needs to be rebooted, or Windows forces an update/reboot (despite updates being turned off), the entire process would abort and I’d have to start over.

Are these numbers realistic?
1TB boot drive - 2.5 days
4TB gaming drive - 10 days
6TB storage hard drive 1 - 15 days
10TB storage hard drive 2 - 25 days
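As a quick sanity check on those figures (using the sizes and durations listed above, and assuming a saturated 1GbE link tops out around 112 MB/s), the implied throughput is only a few MB/s. A minimal sketch:

```python
# Back-of-the-envelope check: implied transfer rate vs. what 1GbE should sustain.
# Sizes/durations come from the list above; ~112 MB/s is a rough real-world
# ceiling for a saturated 1GbE link.

TB = 1_000_000  # MB per TB (decimal, as drive vendors count)

transfers = {
    "1TB boot drive":   (1 * TB, 2.5),
    "4TB gaming drive": (4 * TB, 10),
    "6TB storage HDD":  (6 * TB, 15),
    "10TB storage HDD": (10 * TB, 25),
}

GIGABIT_MBPS = 112  # MB/s, approximate ceiling for 1GbE

for name, (size_mb, days) in transfers.items():
    achieved = size_mb / (days * 86_400)             # MB/s actually seen
    expected_days = size_mb / GIGABIT_MBPS / 86_400  # days if the link were saturated
    print(f"{name}: ~{achieved:.1f} MB/s achieved; "
          f"~{expected_days:.2f} days at a saturated 1GbE link")
```

Run that way, 1TB in 2.5 days works out to roughly 4-5 MB/s, while a healthy 1GbE link would move the same 1TB in a few hours - so something other than raw link speed is the bottleneck.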

I’m currently in the process of upgrading my entire network to 10Gb. All the NICs are upgraded, as are the 2 switches. The only piece I’m waiting on is the 10Gb router. Internet comes via a 2.5Gb cable modem, which connects to a wireless router, which connects to 2 switches: one that the devices in my office/workshop are connected to, and the other at the other end of the house where my server rack is. All of these are using certified and verified Cat 8 cables. Again, at present, the weak link is that my wireless router only supports 1Gb.

So, the connectivity path is Windows 10 (10Gb NIC) - 10Gb switch - 1Gb router - 10Gb switch - TrueNAS (10Gb NIC, with a dedicated 10Gb fiber link to the storage array).

Are these times, in days, realistic expectations?

I’m starting to think the best way to deal with this is to get two 10TB hard drives and make drive-to-drive copies of the entire system, then seal those drives away in a safe… Drive-to-drive copying should take less than 2 days for the entire computer…

This is likely the culprit here. Can you describe this connection in more detail - the storage adapter in your TrueNAS machine, the media and network path it’s going across, as well as the “drive box” you’re connecting to?

For scale, a single SATA link is 6Gbps - while 10Gbps may be a fast front-end connection, a back-end storage bus is something else entirely.

The TrueNAS server box has the 1Gb NIC integrated into the board (UNUSED), a 10Gb NIC, and an enterprise PCIe adapter that connects to my 12-drive Dell HB-1235 enclosure and its 2 SAS controller modules (“E15M E15M003”). This connection is made via a fiber patch cable.

The TrueNAS server itself is connected to a 10Gb switch via a certified CAT8 patch cable. The 10Gb switch connects to the router (1Gb) via a 50-foot certified CAT8 cable. The router connects to another 10Gb switch via another certified CAT8 cable, which then connects to my main computer via another certified CAT8 cable. The NIC in my main computer is a 10Gb NIC. The usual issue with these kinds of systems is cable quality: anything above Cat 5 is questionable and really needs to be tested to prove its throughput and how well its internal shielding works…

The weak link in this system is the wireless router, for which a replacement has been ordered and is still a week away…

If it’s actually the SAS controllers with the PN listed, then it’s a SAS3 connection at 12Gbps per lane. Is it an SFF-8644 cable as pictured below?

The cable I have is an SFP-H10GB-CU1M

First validate the network bandwidth between the client and server using iperf3.
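For example (the hostname/IP below is a placeholder, and this assumes iperf3 is installed on both ends), run iperf3 in server mode on the TrueNAS box and point the client at it. A small wrapper like this sketch can also parse the JSON output to surface throughput and TCP retransmits:

```python
# Minimal sketch: run iperf3 against the NAS and report throughput + retransmits.
# Assumes iperf3 is already running in server mode on the NAS ("iperf3 -s")
# and is installed on the client. 192.168.1.50 is a placeholder address.
import json
import subprocess

NAS_IP = "192.168.1.50"  # placeholder; substitute the TrueNAS box's address

# -J asks iperf3 for JSON output; -t 10 runs a 10-second TCP test.
result = subprocess.run(
    ["iperf3", "-c", NAS_IP, "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

sent = report["end"]["sum_sent"]
recv = report["end"]["sum_received"]
print(f"sent:     {sent['bits_per_second'] / 1e9:.2f} Gbit/s "
      f"({sent['retransmits']} retransmits)")
print(f"received: {recv['bits_per_second'] / 1e9:.2f} Gbit/s")
```

On a healthy 1GbE path you would expect roughly 0.94 Gbit/s with few retransmits, and somewhere near 9.4 Gbit/s on 10GbE. A large retransmit count usually points at a bad cable, port, or duplex problem somewhere along the path.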


Agreed. Those numbers do not make sense at 1GbE, never mind 10GbE. A solid 1GbE connection should be saturating out at 120MB/s or so. Transfers will take time, but not weeks.

This took me a bit to set up, but I think I got it running correctly. That said, something definitely isn’t working correctly. I seem to be getting a ton of retries somewhere…

[screenshot: iperf3 results]

So it looks like a network issue then? 120MB/s is about 1000Mbit/s.

Took less than 4 days to copy 20TB+ from my old i5 2500K NAS to a temp box (2700X/32GB) over 1Gbit Ethernet.

From your post, you’re going 10 gig from your machine to a switch, THEN to a wireless router, then back out to another 10GbE switch to the new NAS?

Might be some issue with storage IO. Or a bad network cable.

I have a quick solution, move the NAS to the same 10Gbe switch you have your windows machine connected to :wink:


Agreed, the speeds indicate something closer to Fast Ethernet, not gigabit.

Fast Ethernet would be circa 100Mbps, of course.

This is 25Mbps.

But it could be the retries…

Anywho,

[image attachment]


hah

Another idea: if you have an HBA and a 10Gbit NIC, which PCIe slots are you using? Server cards are very much designed around x8 lanes. Most modern consumer boards give you a single slot (your x16 for the GPU) that’s going to play nice with x8. The PCIe spec is backwards compatible, but lanes are not. I usually re-use my old gaming mobo/CPU/RAM for my NAS, but I can’t add a GPU (the 5900X doesn’t have an iGPU), an HBA, and a 10Gbit NIC to my current gaming board. It just does not have enough x8 slots for everything to be happy.
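As a rough illustration (the per-lane figures below assume PCIe 2.0/3.0 with their usual encoding overhead, and the slot widths are hypothetical examples), you can sanity-check whether a given slot has enough bandwidth for a 10GbE NIC or a SAS3 HBA:

```python
# Rough PCIe bandwidth check: does a slot's usable lane count cover the card?
# Approximate one-direction throughput per lane after encoding overhead:
#   PCIe 2.0: ~0.5 GB/s per lane, PCIe 3.0: ~0.985 GB/s per lane.
PER_LANE_GBS = {"gen2": 0.5, "gen3": 0.985}

def slot_bandwidth(gen: str, lanes: int) -> float:
    """Approximate one-direction slot bandwidth in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# A 10GbE NIC needs ~1.25 GB/s; a SAS3 HBA is typically an x8 card.
print(slot_bandwidth("gen3", 4))   # ~3.9 GB/s: plenty for a 10GbE NIC
print(slot_bandwidth("gen2", 1))   # ~0.5 GB/s: would throttle a 10GbE NIC
print(slot_bandwidth("gen3", 8))   # ~7.9 GB/s: what a SAS3 HBA wants
```

The point is that an x8 card dropped into a slot that is only wired x1 or x4 electrically can quietly become the bottleneck even though it fits physically.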

Stux put me on the most logical path. Using other computers on my network and a laptop, I systematically tested all the cables and devices with iperf3. The result of that ended up being a disappointment. I spent a lot of money on a 100 ft CAT 8 S/FTP certified cable, which I ran to my rack at the other end of my house (dealing with a rack in the same room is “annoying”, to me…). I have a second one, which I ran inside the house directly to the server and plugged it in. It runs at up to 2.5Gb speeds with no problem. When switching back to the original cable, it gives a max of about 33Mb/s of throughput. Obviously there is something wrong with it. For now the performance has significantly improved (for as long as my wife lets me leave the cable running from my office to the rack)… Ironically, of all the CAT 8 cables I have and tested today, only that one out of 14 failed to meet performance expectations.

Humorously, this goes back to one of the very first computer rules I learned back in the early 1980s, “Never trust cables! They all eventually fail…”

Network speeds to the server are back up to the numbers that were expected. For others with slow performance, start by checking and verifying every cable and device along the path…

Thank you again everyone!


I suggest going optical if possible.


Some forum links on networking

Thing is, optical is the best choice as long as you don’t have to terminate the ends yourself. That gets real finicky and expensive real quick: beyond the cost of the tools, at $15 per plug and 4 plugs per cable (TX and RX on each end), even the variable cost is pretty high. Meanwhile, pre-terminated OM3 multi-mode optical is dirt cheap and incredibly performant.

That’s the main delta versus copper: for copper, the CAT6+ wire itself is expensive but the ends and tools are cheap; for optical, it’s the inverse.

Similarly, copper transceivers are expensive, run hot, and can be finicky, while optical transceivers run cool, are inexpensive (but may be vendor-locked!).

Optical has most driver bugs ironed out as long as you select from the right vendors; we still hear about occasional copper driver issues here.

However, while copper is physically plug and play, optical requires some care in transceiver and cable selection - i.e. multi-mode vs. single-mode, OM1-OM3 vs. the higher ones, etc. So read the primers.

Overall, I would ALWAYS go optical if given a choice as optical breaks the electrical connection between machines, offers more range, runs cooler, and is immune to most forms of interference.


And please do not treat your fiber connections the way my electrician treated mine. Respect bend radii and do not break the fiber by stuffing it into a small wall box like it’s copper…

Ultimately, I was able to save every connection except one. But it took some work to research how to terminate fiber optic cables from scratch, and the tools were pretty expensive. Thankfully I was able to get it all done and resell the lot on eBay.

If you choose to go the DIY termination route, do not skimp on getting a proper optical power meter to measure attenuation.


If you’re going to go with copper, go with actual CAT 8 cable and be done with it. CAT 8 is the upper limit for copper and is significantly better than CAT 6a or 7; there will never be anything faster made of copper wire unless cable construction is radically altered… That said, fiber will always be better at any distance, but it also needs to be protected because of the fragile nature of the fibers.

The HB-1235 came with a SAS1 or SAS2 controller, not an FC controller. The E15M E15M003 is a SAS3 controller. Is your cable connector square like the picture above, or is it rectangular? If rectangular, it’s an SFF-8088 cable. You have a SAS controller in your server, not an “enterprise 10gb fiber connection”. You need to find out what model it is and make sure it’s not SAS1 (3Gb).

Change your network cabling so the cable modem goes to the router, and the router goes to whichever switch is closest, with a 10Gb connection between the switches. Do not put your router between the switches. Don’t worry about all your cables being Cat 8; Cat 6 is fine for 10Gb up to about 50m. You can upgrade when you need more than 10Gb.

Also, how many disks are in the HB-1235, and how are they configured? What type of raidz, and how many vdevs? There are a lot of things that can hurt your NAS performance, and we need more info to help figure it out.
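To illustrate why the vdev layout matters (a rule-of-thumb sketch only, not a benchmark - the per-disk figure and the example layouts below are assumptions): streaming throughput of a raidz vdev scales roughly with its data disks, while random IOPS scale with the number of vdevs.

```python
# Rule-of-thumb estimate only: real pools vary with recordsize, fragmentation,
# CPU, and controller limits. Assumes ~180 MB/s sequential per spinning disk.
DISK_MBS = 180

def raidz_streaming_estimate(disks_per_vdev: int, parity: int, vdevs: int) -> int:
    """Very rough sequential throughput: data disks per vdev times vdev count."""
    data_disks = disks_per_vdev - parity
    return data_disks * DISK_MBS * vdevs

# Hypothetical layouts for a 12-bay shelf like the HB-1235:
print(raidz_streaming_estimate(12, 2, 1))  # 1 x 12-wide RAIDZ2 -> ~1800 MB/s
print(raidz_streaming_estimate(6, 2, 2))   # 2 x 6-wide RAIDZ2  -> ~1440 MB/s
```

Either layout can in principle outrun 10GbE for sequential transfers, but random and small-file workloads behave very differently - which is why the vdev details matter when diagnosing slow SMB backups.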