After lurking on these forums for over 10 years, I’ve finally taken the step to build a TrueNAS box myself.
I’ve gotten my hands on a Dell Precision 5820 for free, equipped with a Xeon P2000W CPU. Got some cheap RAM, so it runs with 128 GB of DDR4-2666 ECC.
The server will be used in my home network: storing some nightly backup jobs, maybe some video editing off it, archival files, just casual stuff, and I’m the only user.
The problem/choice:
Due to severe space limitations in the case, I am limited to running 2.5" disks/SSDs, or NVMe.
The prices of SSD and NVMe are more or less the same.
Options:
- 8x 2 TB (or 4 TB) SSDs in raidz2 (running on an HBA controller)
- 8x 2 TB (or 4 TB) NVMe M.2 drives in raidz2 (running off two ASUS Hyper M.2 cards). I think the Dell 5820 can maybe handle two of those cards.
I will probably never saturate a silly 1 GbE link anyway. But maybe I will upgrade in the future.
I really wonder if there are any technical reasons or other good arguments to go with one option over the other? And, generally, SSD vs. NVMe as storage in TrueNAS.
Never heard of that… P2000 could be the GPU, which you do not need.
NVMe drives are SSDs. You mean “SATA vs. NVMe”.
SATA SSDs have no future. You may find up to about 8 TB (7.68 TB DC drives, refurbished), and the line stops there.
NVMe is the future… but M.2 is an issue, due to its limited size and cooling challenges. Capacity is in U.2/E1.S/E1.L/E3.S form factors.
Either way, one cannot argue against a free horse, but this is massive overkill for one user, a 1 GbE link (!), and apparently limited storage needs. 8x 2 TB in Z2 is 12 TB usable; one can do that with a few HDDs, and you won’t see the difference through the “silly link”.
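For reference, the back-of-the-envelope math is just (drives − parity) × drive size. A rough Python sketch of the figures discussed here, ignoring ZFS metadata/padding overhead and TB-vs-TiB rounding:

```python
# Rough usable-capacity check for a single raidz vdev.
# Ignores ZFS metadata/padding overhead and TB-vs-TiB rounding.

def raidz_usable_tb(drives: int, size_tb: float, parity: int) -> float:
    """Usable capacity of one raidz vdev: (drives - parity) * drive size."""
    return (drives - parity) * size_tb

print(raidz_usable_tb(8, 2, 2))  # 8x 2 TB in raidz2 -> 12.0 TB
print(raidz_usable_tb(8, 4, 2))  # 8x 4 TB in raidz2 -> 24.0 TB
print(raidz_usable_tb(4, 2, 1))  # 4x 2 TB in raidz1 -> 6.0 TB
```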
Going for M.2 with the Hyper cards should be a bit easier than going the HBA + SATA SSD route, which on the other hand could be a bit cheaper. That’s it.
Question is, do you really require that many SSDs?
If I am using SSDs (NVMe or SATA), I want enterprise drives (with high enough TBW/DWPD) and power-loss protection in my NAS units. Most of the M.2 NVMe drives you will find on the market are consumer drives that lack both of these attributes. On the other hand, if you are on a budget, enterprise SATA SSDs with significant remaining life are easily obtainable on the second-hand market, sometimes as low as $40/TB. YMMV and you have to be a careful shopper. If you go with consumer drives, you run the risk of them wearing out quickly because of ZFS write amplification. Just FYI…
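To put TBW into perspective, here is a rough back-of-the-envelope estimate in Python; the TBW rating, daily write volume, and write-amplification factor below are hypothetical example numbers, not measurements:

```python
# Rough SSD endurance estimate. All inputs are hypothetical examples, not measured values.

def years_to_wear_out(tbw_rating_tb: float, daily_writes_tb: float, write_amp: float) -> float:
    """Years until the rated TBW is consumed at a given daily write volume."""
    return tbw_rating_tb / (daily_writes_tb * write_amp * 365)

# Example: a 2 TB consumer drive rated around 1200 TBW, 100 GB/day of backup writes,
# and an assumed 3x write amplification from ZFS/raidz padding and metadata.
print(round(years_to_wear_out(1200, 0.1, 3), 1))  # ~11.0 years
print(round(years_to_wear_out(1200, 0.5, 3), 1))  # ~2.2 years at 500 GB/day
```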
Yeah, sorry, it’s a Xeon W-2133, a little typo there!
Yes, you are totally right!
And yes, it is probably massive overkill. I have a few ESXi boxes running at home, so maybe I’ll migrate the VMs to the TrueNAS to put it to better use. I also have a 250 GB NVMe drive attached in the FlexBay, not sure what I’m going to do with it.
Well, from your replies, it seems NVMe is the way to go over SATA SSD.
Actually not right now. I could start with one card (4x NVMe) in raidz1, and expand/recreate to an 8x NVMe raidz2 later down the line…
If you think long term, possibly. But the future is far away and notoriously hard to predict. It could be that ten years from now, SFF-8639 is really out of fashion and requires cumbersome adapters.
Mind you, my own SSD storage array began as 3*(4-wide raidz1) using 3.84 TB SATA drives (second-hand Micron 5100 and Samsung PM893), attached to an A2SDi-H-TF. With a few extra drives, it is now 2*(8-wide raidz2) attached to an LSI 9305-16i.
The drives take two 5.25" bays in an IcyDock enclosure.
It works now, using far fewer PCIe lanes than would be required for an NVMe array, and I might have one upgrade cycle left by hoarding 7.68 TB drives before they wear out and disappear.
One can still have a good ride on a dead-end road.
AIUI, NVMe is an interconnect that runs over PCIe lanes, and SFF-8639 is just one of the possible connectors for it. It’s very hard (at least for me) to come up with a more advanced interconnect, even over the next 10-year period. Drives already have a “direct” connection to the CPU, and all modern CPUs already support it.
So, bookmark my words (with a 10-year reminder): NVMe will be the “primary” interconnect for SSDs (or other fast block storage) 10 years from now, just like it is right now.
NVMe is a protocol.
SFF-8639 is a physical connector, which is used by SAS drives as well as the U.2 and U.3 forms of NVMe drives. And ten years from now, this connector may well have been sent into oblivion by EDSFF (SFF-TA-1006/1007/1009/1042).
See my build, also using a Dell T5820, but I had another one that let me swap out the NVMe controller for another SATA controller, so I have 4x 3.5" drives in the trays and then use an ASUS card for 4x 2 TB NVMe drives that serve as my NFS VM target.
With the cost of NVMe drives right now, I would only get the single card and the drives you need with some headroom; then, when you need more, buy the second card and add more NVMe drives, as they will likely be cheaper by then.
I do run consumer drives in mine, two Samsung 980 PROs and two XPG models, and have had no issues so far (knock on wood), but I also back up my VM datastore, so if one does die, I am not losing anything anyway.