I have a little N100-based NAS with 6 spinning SATA drives and 2 eMMC M.2 drives for the OS. I’m thinking about changing to dual NVMe drives in a mirror and a pair of SATA SSDs for the OS to reduce the power draw.
I need to check out how many lanes those M.2 slots really have and decide if it is worth doing vs just buying some decent SATA SSDs for the storage. (edit: probably not, since the N100 has limited lanes available)
This leads to the question: SLC cache or DRAM cache on the NVMe drives? Does it really matter once either the SLC cache or the DRAM is full and the drive is down to the base speed of the TLC NAND?
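For a specific drive, I suppose the honest way to answer that is to write well past the cache with direct I/O and watch where the speed settles. Something like this fio run (the mount point is just a placeholder, and it leaves a large scratch file to delete afterwards):

```
# Sequential write big enough to blow through the SLC cache on most
# large TLC drives; bump --size up if the drive has a huge dynamic cache.
# Watch the live bandwidth figure drop once the cache is exhausted.
fio --name=slc-test --filename=/mnt/testdrive/fio.tmp --size=300G \
    --rw=write --bs=1M --ioengine=libaio --iodepth=8 --direct=1
```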
Just thinking out loud and wanting some opinions before I spend a bunch of money on drives. I need about 4TB for my lab, and the only real way I can see doing this is a mirror, so around $400 to $500 at current prices (not for enterprise drives).
I’m kind of thinking that used enterprise SATA SSDs might still be the better way to go if I can score a set to replace the spinning drives. 960GB to 1.xTB is what I’m looking for.
For the SATA SSDs, you can get a pair of cheapos for that task. The price is so low on lower-capacity models that I’ve gotten them sold as a pair or a 3-pack, and 128GB is plenty for that purpose. Lots of spinning rust = heat and power consumption, so good call there. A pair of 4TB NVMe drives as a single pool would work, but it’s not cheap at all. I’d rather spend an extra fiver on the power bill and have the capacity, but that’s not your use case. Maybe 4x 2TB drives in mirrors: you get your 4TB of capacity and nice speed without much power draw, and two mirror vdevs would blaze on big files (see the sketch below). Hope you have a 4-way card in a 4x4x4x4 bifurcated slot for that.
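For reference, that striped-mirror layout is just two mirror vdevs in one pool; from the shell it would look something like this (pool name and drive IDs are placeholders, and whatever GUI your NAS OS has will get you the same result):

```
# Two mirror vdevs striped together: roughly 4TB usable from 4x 2TB,
# and each mirror can lose one drive. Device names are examples only.
zpool create tank \
  mirror /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2 \
  mirror /dev/disk/by-id/nvme-DRIVE3 /dev/disk/by-id/nvme-DRIVE4
```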
You won’t generally be getting 4TB SLC NVMe drives, especially in an M.2 form factor - and if such things do exist, they sure won’t be cheap.
The question really comes down to “what do you want to do with the storage?” You mentioned “lab,” so I assume some manner of virtualization back-end, or another use case more I/O-intensive than the spinning disks can serve.
Modern DRAM-buffered TLC can be pretty darned fast for most use cases, even at the consumer NAND level. I’d wager your N100-based unit has at best a 2x 2.5Gbps network connection, so you don’t need to worry much about going beyond what a theoretical 5Gbps (roughly 600 MB/s) of throughput can do.
I bought one of the n18 board varieties, which has a 10GBase-T port and dual 2.5G connections. It does not have a PCIe slot, so no chance of bifurcation there. I have a “nice” low-profile 4x M.2 card that I picked up cheap; I was hoping one of my servers might do 4x4 bifurcation, but I could only get 8x2, and one was only 4x2, so that was out. I need at least 4 data drives to get “decent” failure protection.
With the 6 spinning drives and testing at more than the RAM buffer size, I’m getting 6 to 7 Gbps during disk benchmarks from a Windows VM running on XCP-ng (NFS share), so it’s chugging along. 4TB drives are about $200 to $250 new, but those are all consumer models.
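If I want to sanity-check those numbers outside the Windows VM, something like a fio run against the share with a test size well past the NAS’s RAM should keep the ARC from flattering the result (the mount point and sizes here are just examples):

```
# Sequential read over the NFS mount with direct I/O; --size is set
# larger than the NAS's RAM so the ZFS ARC can't serve most of it
# from memory. Swap --rw=read for --rw=write to test writes.
fio --name=nfs-seq --directory=/mnt/nfs-share --size=64G \
    --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1
```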
And the more I think about it, staying with SATA is probably better.
The N100 only has 9 PCIe lanes. In an ideal world, each NVMe drive would take 4 of those, which doesn’t leave much room for the 10G NIC or any of the other things that need PCIe. I also think I read that the lanes are shared between the NVMe slots and the 10G NIC, but I could be wrong.
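That part at least is easy to confirm from the NAS itself; lspci shows what each slot actually negotiated (output and device names will differ on this board, and the grep patterns are just a convenience):

```
# LnkCap is the link width/speed the device supports;
# LnkSta is what it actually negotiated with the N100.
sudo lspci -vv | grep -E 'Ethernet controller|Non-Volatile|LnkCap:|LnkSta:'
```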
I may need to see how cheap I can get M.2 SATA drives and put them in enclosures. Some of the enclosures do RAID 0/1, so I could maybe buy a bunch of cheap 512GB drives and RAID 0 each enclosure. They would need to be really cheap, though; used enterprise 2.5" SATA SSDs in the 960GB range average about $50, and that would be enough for my lab.
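If I do go the used enterprise route, I’ll want to check wear before trusting a drive; smartctl shows the relevant counters (the device path is just an example, and the attribute names vary by vendor):

```
# Look at power-on hours, total LBAs written, and whatever
# wear/life-remaining attribute the vendor exposes.
sudo smartctl -a /dev/sda
```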
I am planning a RAIDZ1 pool with 4TB NVMe drives.
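Something like this is what I have in mind, assuming four drives just for illustration (device names are placeholders):

```
# Single raidz1 vdev: with four 4TB drives that's roughly 12TB usable
# before overhead, and it survives one drive failure.
zpool create apps raidz1 \
  /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2 \
  /dev/disk/by-id/nvme-DRIVE3 /dev/disk/by-id/nvme-DRIVE4
```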
I want to run apps like Frigate, Tailscale, Jelu, Immich, and a Jellyfin movie server, along with the movie backup library.
How crucial is it that I get NVMe drives with DRAM, or will something else suffice?
Suggestions?
thanks