Your NAS experience

I’m curious about your NAS experience: what you have learned over the years deploying NAS, what to do and what not to do…

In my case,

  1. I have found hardware requirements (CPUs in particular) to be of little importance; the space and speed of the storage is what matters. RAM size isn’t critical either - I have my NAS CORE deployment on a dual-core, 4GB Dell SC440.
  2. I underestimated how important it is to have an energy-efficient system. I have numerous dual-Xeon 5400- and 5500-series builds that serve only as playground or backup servers because they consume too much electricity.
  3. Raw transfer speed seems to vary very little from one deployment to another. I have run multiple NAS systems (TrueNAS, fnOS, my own Linux + Samba), bare metal or in virtual machines, fast or slow CPUs, very little memory (4GB) to lots of memory (>64GB), with link aggregation or without… I consistently get 100MB/s over a 1Gbps network.

If I were to do it again today,

  1. I would pick a desktop with lots of drive bays for a pure NAS deployment; or
  2. I would pick a power efficient workstation for a NAS + VM/Container deployment.

Interesting to see your experience.

Do your job and find out the requirements for the NAS.
Engineer the build. If you don’t know basic thermodynamics, you probably should not be assembling parts into a case that will get hot.
Buy the parts which will fulfil the requirements.
Don’t buy cheap parts.


I read a blog where someone built their NAS for $500 (not counting hard drives, obvi). I’ve never built a PC before, so I thought that was a pretty minimal monetary investment for a first-time build.

I quickly discovered that 16GB of RAM isn’t enough if you’re planning to run lots of apps (32GB seems to be fine). I also recently learned about the importance of ECC RAM. If I had to do it all over again (which I probably will at some point) I would:

  • Buy a bigger case
  • Choose components based on their utility (ECC RAM), not price

Those are the two biggest lessons so far.

Never place a cup of coffee near the NAS. Never.


The CPU does matter.
A good example of this is the N100 line, which has seen a surge of popularity in small form-factor setups. That CPU and its cousins have so few PCIe lanes that connectivity is severely limited. It also doesn’t support ECC memory. You can do without that, obviously, but make sure it’s an informed choice.

Any modern HDD can ingest or output more than 100MB/s; SSDs obviously do even better. If you get 100MB/s on a 1Gbps network, you sound bottlenecked by your network and unable to gauge what the upper limits of your server really are.
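For context, a rough back-of-the-envelope calculation (the overhead figure below is an assumption, not a measurement) shows why a single SMB transfer over gigabit Ethernet tops out around 110-117MB/s:

```python
# Rough estimate of the practical ceiling for a single SMB transfer
# over gigabit Ethernet. The overhead figure is an approximation.

LINK_RATE_BPS = 1_000_000_000        # 1 Gbps line rate
BYTES_PER_MB = 1_000_000             # decimal MB, as drive vendors count them

raw_mb_per_s = LINK_RATE_BPS / 8 / BYTES_PER_MB   # 125 MB/s on the wire

# Ethernet framing + TCP/IP + SMB overhead eats very roughly 5-10%.
overhead = 0.08                      # assumed, not measured

print(f"Theoretical line rate: {raw_mb_per_s:.0f} MB/s")
print(f"Practical ceiling:     {raw_mb_per_s * (1 - overhead):.0f} MB/s")
# -> roughly 115 MB/s, which a single modern HDD can already saturate
```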

Know what is important? The system’s stability, including incoming power stability and the stability of the parts. Know your parts. If the add-on HBA overheats, the M.2 cards/drives overheat, or even the embedded or add-on NIC overheats during sustained work, your data may be toast. If the entire system loses power for any reason, your data may be toast. One needs to look no further than these forums to see the number of “help, my system is acting weird / is rebooting randomly / lost power while I was gone…, and now I have no pool or no data” threads.

In a system design, this question needs to be answered first: what is the plan to back up or replicate the data you plan on storing/serving? Build or set up that system first. Otherwise you might never get around to it.

HBAs in many cases will need extra direct cooling, meaning a fan blowing directly at the heatsink, so figure that into the design.

If your data is important (and whose isn’t?), then at minimum add a UPS capable of sending a shutdown signal to the server and supporting it until it fully shuts down. And don’t wait: give the power 30 seconds to return, and if it does not, shut the server down.
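As a minimal sketch of that “give it 30 seconds, then shut down” logic, assuming Network UPS Tools (NUT) is installed and the UPS is registered as “ups” on localhost; the name, grace period, and polling interval are illustrative, and TrueNAS’s built-in UPS service offers similar behavior without any scripting:

```python
#!/usr/bin/env python3
# Sketch: poll NUT for UPS status and shut down if wall power stays
# out for more than GRACE_SECONDS. Assumes `upsc` (Network UPS Tools)
# is installed and the UPS is registered as "ups" (adjust to yours).

import subprocess
import time

UPS_NAME = "ups@localhost"   # hypothetical UPS name
GRACE_SECONDS = 30           # how long to wait for power to return
POLL_SECONDS = 5

def on_battery() -> bool:
    """Return True if the UPS reports it is running on battery ("OB")."""
    status = subprocess.run(
        ["upsc", UPS_NAME, "ups.status"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return "OB" in status.split()

outage_started = None
while True:
    if on_battery():
        if outage_started is None:
            outage_started = time.time()          # outage just began
        elif time.time() - outage_started >= GRACE_SECONDS:
            # Power has not returned in time: shut down cleanly.
            subprocess.run(["shutdown", "-h", "now"], check=False)
            break
    else:
        outage_started = None                     # power is back, reset
    time.sleep(POLL_SECONDS)
```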


Choose a major brand for the boot NVMe drive. I learned that the hard way this week.

Also: plan from the start to place it in your basement or anywhere else where you can’t hear it. All those disks (unless you’re full SSD, which probably isn’t the majority of setups) and the fan power needed to cool them and the motherboard are only tolerable far from your ears.

Good data points.

someone built their NAS for $500

I am syncing files to a Dell T7400 that I got for free (practically speaking).

unable to gauge what the upper limits of your server really are.

Maybe another way to say the same thing is that unless you go crazy on your network build, even low-spec computers/parts are unlikely to be your bottleneck.

Given that, I also think stability (reliability) is much more important than performance.

What have I learned? It’s a constant learning experience.

It’s best to start out with a small “proof-of-concept” system before building a full-blown one. My first system in 2015 was a recycled office system running a Core 2 Quad, a USB boot drive, and a single terabyte hard drive running (at the time) FreeNAS. It was more than enough to stand up a jail and some SMB shares for backups.

The core NAS system itself doesn’t use many resources, but the apps do. When I built my actual production server in 2016, I primarily had system backups in mind, but it was nice to set up a Plex server as well. When Docker containers came along, I added more apps too.

Backups, backups, backups. I have backups of my critical data going to Backblaze “just in case”. The $5 bill I get from them each month isn’t much compared to the time it would take to recreate and recover the data… especially data that can’t be recovered.
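For anyone wanting to script an off-site copy like that, here is a rough sketch assuming an rclone remote (hypothetically named “b2-backup”) has already been configured for Backblaze B2; the paths and bucket name are placeholders, and TrueNAS’s built-in cloud sync tasks cover the same ground without a script:

```python
#!/usr/bin/env python3
# Sketch: push critical datasets to a Backblaze B2 bucket via rclone.
# Assumes an rclone remote named "b2-backup" is already configured;
# the source paths and bucket name below are placeholders.

import subprocess

SOURCES = ["/mnt/tank/documents", "/mnt/tank/photos"]   # illustrative paths
DEST = "b2-backup:my-nas-backup"                         # illustrative bucket

for src in SOURCES:
    # "sync" makes the bucket mirror the source, so deletions propagate;
    # keep local snapshots as a second line of defence.
    subprocess.run(
        ["rclone", "sync", src, f"{DEST}{src}", "--fast-list"],
        check=True,
    )
```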

Core 2 Quad,

I have two Dell SC440s, one running a dual-core Xeon and the other a quad-core Xeon :). Loved them.

a USB boot drive

I am running fnOS on a 16GB USB drive. Other than slow bootup, I have not noticed any issues.

I assume endurance is likely an issue long-term, but what does the community think of running a NAS off a USB drive?

TrueNAS on USB? Strongly discouraged: the boot device is a live ZFS pool, and USB sticks have shown very poor longevity. I don’t think we have a prevailing opinion with respect to any other NAS OS.


For a proof-of-concept system, maybe, since you are expecting data loss anyway. For a production server, nope. My boot drive is a 128GB NVMe drive.

Things I’ve learned: small form factor, quiet, cheap, & power efficient are a dream.

Buy something that has more PCIe and RAM slots than you’d ever know what to do with & be happy. I wish I had more PCIe slots…
