Hello All
I have an aging TrueNAS CORE box with 7-year-old 8 TB WD Reds.
I want to go all-flash (yes).
I need to build a 4×8 TB storage array.
NVMe? SSD?
What kind of motherboard?
SCALE instead of CORE?
Thanks for any advice.
Your OS of choice.
Think about present and future needs.
You can find (refurbished) 7.68 TB enterprise SATA SSDs, and possibly consumer 8 TB drives. But that's about as far as capacity goes on the SATA side; growing further would mean going wider. The hardware for SATA SSDs is the same as for HDDs: a motherboard with enough SATA ports, or just about anything with a PCIe slot for a SAS HBA. You could convert your current NAS to SSDs using adapters for 2.5" drives in 3.5" bays.
NVMe drives are already over 100 TB in size… at least in U.2/U.3 form factor (and with deep pockets). A clear way forward, which can grow by going for bigger drives as well as a wider array.
M.2 NVMe offers far fewer options, not least because the physical form factor is too small to pile up capacity, and double-sided drives are hard to cool. So growth would likely come from going wider rather than going bigger.
Here comes the motherboard issue: NVMe drives need PCIe lanes. Consumer-grade CPUs top out at 20-24 lanes, so a maximum of 5-6 directly attached drives (AMD Ryzen only for x4x4x4x4 bifurcation of a x16 PCIe slot). Beyond that, you either add a PCIe switch or go for EPYC or Xeon Scalable (now Xeon 6).
(Terminology snark: NVMe drives are SSDs just as much as SATA SSDs are.)
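If it helps to sanity-check the lane math, here is a rough sketch. It is pure arithmetic; the lane counts and the "reserved for NIC/HBA" figures are illustrative assumptions, so check your actual CPU and board specs:

```python
# Rough PCIe lane budget check. The numbers below are illustrative
# assumptions, not specs for any particular CPU or board.

def max_x4_drives(cpu_lanes: int, reserved_lanes: int) -> int:
    """How many x4 NVMe drives fit on the lanes left over after other devices."""
    return (cpu_lanes - reserved_lanes) // 4

# Consumer platform: ~24 usable lanes, keep 8 for a 10 GbE NIC and an HBA slot.
print(max_x4_drives(cpu_lanes=24, reserved_lanes=8))    # -> 4 drives
# Server platform (e.g. EPYC with ~128 lanes), keep 16 for NICs/HBAs.
print(max_x4_drives(cpu_lanes=128, reserved_lanes=16))  # -> 28 drives
```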
Do you have links or components I could look at?
Thanks
Links or components for what?
Motherboard and NVMe.
Or references I could check out.
Thanks
Hope this gives some idea to start with.
I’m still not sure what you’re looking for information about…
So I’ll be very basic.
For 4-5 NVMe drives only, any AM4 or AM5 board with a Ryzen CPU (x16 to x4x4x4x4 bifurcation) would do.
Thank you
Will look at ASRock (I like them already, as ASRock is reliable).
When you say 4×8 TB, what sort of array are you thinking about? RAIDZ1? RAIDZ2? Two mirrored VDEVs? Something else? How much total storage capacity do you actually need? My TrueNAS machine has 10 enterprise SSDs in it, so fitting that many drives is pretty easy (although mine are all much smaller than 8 TB).
I am using this board:
https://www.newegg.com/gigabyte-b550i-aorus-pro-ax-mini-itx-amd-b550-am4/p/N82E16813145222?Item=N82E16813145222
I used this M.2 to SATA adapter to add 6 more SATA ports
https://www.newegg.com/p/17Z-0061-000D5?Item=9SIARE9K9G2395
And I used one of these to take advantage of PCIe bifurcation to add two more NVMe drives plus a 10 GbE network card:
Amazon.com: PCIE X16 Expansion Card, M.2 NVMe Controller Expansion Card, PCIe X16 to X8 X4 X4 Split Expansion Card, Support PCIE4.0 Downward Compatible with 3.0
BTW, going all flash might or might not make a noticeable difference in performance, depending on how you connect your machine to the network and what kind of network capacity you have. Enterprise SATA SSDs will be the most cost effective, but you will need three mirrored VDEVs to saturate a 10 GbE network connection. NVMe is obviously much faster, but if you use consumer NVMe drives it's kind of a step backwards in reliability (no power loss protection and questionable endurance for ZFS). Enterprise NVMe and U.2 drives are good, but again, will your network keep up? Pool design and networking decisions are just as important as the type of drive you choose.
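To put rough numbers on the "three mirrored VDEVs" point, here is a back-of-envelope sketch. The 400 MB/s per-drive figure and the usable 10 GbE rate are assumptions, not benchmarks:

```python
# Back-of-envelope: streaming write throughput of a pool of mirrored vdevs
# versus a 10 GbE link. All figures are rough assumptions.

SSD_STREAM_MBPS = 400        # assumed streaming rate of one enterprise SATA SSD
TEN_GBE_MBPS = 10_000 / 8    # ~1250 MB/s on the wire; usable is a bit less

def mirror_pool_stream_write(n_vdevs: int, drive_mbps: float = SSD_STREAM_MBPS) -> float:
    # Each mirrored vdev writes at roughly single-drive speed,
    # and writes are striped across vdevs.
    return n_vdevs * drive_mbps

for vdevs in (1, 2, 3):
    mbps = mirror_pool_stream_write(vdevs)
    status = "saturates" if mbps >= TEN_GBE_MBPS * 0.9 else "below"
    print(f"{vdevs} mirrored vdev(s): ~{mbps:.0f} MB/s ({status} 10 GbE)")
```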
RaidZ1
Total capacity 24 TB (4×8 TB, minus 8 TB for parity)
Huge thanks for the links <3
Everything is 10 GbE on the core network (Proxmox and NAS).
The rest of the network is 2.5 GbE.
The rule for a single RAIDZ1 VDEV pool is that you will see the IOPS performance of a single drive, and under streaming reads/writes you will see (N-p) * the streaming read/write speed of a single drive. So if the average enterprise SATA SSD has a max read/write of about 400 MB/s, you will likely not saturate a 10 GbE NIC under random reads and writes, but it should do it for streaming reads and writes {(4-1)*400 MB/s}, or 1200 MB/s, give or take.
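Here is the same rule of thumb as a quick calculation, using your 4×8 TB RAIDZ1 layout. The 400 MB/s per-drive figure is an assumption, and ZFS metadata/padding overhead is ignored:

```python
# Quick check of the RAIDZ rule of thumb above: streaming throughput scales
# with the data drives (N - p); IOPS stay near single-drive level per vdev.

DRIVE_MBPS = 400   # assumed streaming rate of one enterprise SATA SSD
DRIVE_TB = 8       # drive size in the proposed array

def raidz_streaming_mbps(n_drives: int, parity: int, drive_mbps: float = DRIVE_MBPS) -> float:
    return (n_drives - parity) * drive_mbps

def raidz_usable_tb(n_drives: int, parity: int, drive_tb: float = DRIVE_TB) -> float:
    # Raw usable space before ZFS overhead.
    return (n_drives - parity) * drive_tb

print(raidz_streaming_mbps(4, parity=1))  # 1200 MB/s, roughly a 10 GbE link
print(raidz_usable_tb(4, parity=1))       # 24 TB for a 4 x 8 TB RAIDZ1
```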
As indicated, this Siena motherboard is overkill if you want just four drives.
A passive adapter in the x16 slot of a B650D4U, EPYC4000D4U or similar would do.
Just some background information:
What are you trying to accomplish? What's the workload? How are you serving the data from the NAS? SMB/iSCSI/NFS/???
In the thread in my signature I talked a little bit about NVMe performance and some build considerations, if it helps.