Hello, I'm building a NAS from my previous gaming PC. The use case is not typical: my goal is to serve 6-7 devices over 2.5Gb/s CAT6 (sometimes a few over remote access as well). Every day 100-300GB will be written/overwritten, so it's not really a use case for ARC, and I need constant access at this speed on every machine. That's why I'm buying a dual 10Gb SFP+ card with aggregation, a “TP-Link TL-SG3428X-M2” switch, and RAIDZ with 3x 4TB M.2 disks (no HDDs, 8TB of real space is enough).
PC specs:
CPU: Intel i7-6850K
Motherboard: Asus ROG Strix X99 Gaming
32GB RAM
NIC: Intel X520-DA2 (or similar)
3x 4TB M.2 disks in RAIDZ
What do you think of this setup? Are this CPU and motherboard enough? It looks stronger than those 10Gb QNAPs etc. I can't find info about the bus layout of the motherboard, but I'm not very worried because there won't be a GPU. I want to plug one M.2 directly into the motherboard and use PCIe adapters for the other two M.2 disks. Has somebody used this kind of adapter, for example the “Axagon PCIE NVME+SATA M.2 ADAPTER”? Do you see some red flags, bottlenecks, etc.?
Summarizing the PCIe layout: I want to plug the NIC into the first PCIe slot (instead of the GPU) and the two adapters into the two bottom ones. Can you guys confirm that this is optimal for lanes? This CPU has 40 lanes, but the motherboard manual is not clear to me when it comes to lanes.
On the adapter you want to use: the PCIe portion is fine, however the SATA portion will give you SATA speeds for one of the M.2 cards, and that M.2 card must be a SATA version, not a PCIe version. Carefully read the specs on anything you are purchasing.
As for the motherboard, it does not appear to support ECC RAM. I'm not certain how important the data is, but if it is important, then you should build a system that uses proper server-type components. The motherboard does appear to have a lot of PCIe slots, however looks can be deceiving. If you only want three NVMe drives installed and one will be on the motherboard, then I would suggest you find out if the BIOS supports bifurcation; if it does, you are looking for 4x4x4x4. Then you can purchase a four-slot M.2 PCIe adapter for very little money. If the motherboard does not support bifurcation and you only need two more M.2 drives, you can purchase two similar to this linked item and pop them into the PCIe slots. That will give you three M.2 cards. That is not the way I would do it myself; however, given the motherboard you want to use, the only other way is to use a very expensive non-bifurcated PCIe adapter like this one, for example. The good part here is that this adapter can hold 4 M.2 cards and can be installed in practically any motherboard and work.
I didn’t see a boot device listed; a simple 2.5" SSD would work fine, nothing fancy needed here.
With three 4TB drives in RAIDZ1, you should get about 7.7TB of total storage space and 7TB of usable space (90% of total). If you want more, add another 4TB drive; that would give you 10.64TB in RAIDZ1, and of course subtract 10% for the usable total.
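If you want to sanity-check the math yourself, here is a rough back-of-the-envelope sketch (approximate on purpose; real pools report slightly different numbers because of TiB vs TB units, allocation padding and metadata/slop space):

```python
# Rough RAIDZ1 capacity estimate -- back-of-the-envelope only.

def raidz1_capacity_tb(drives, drive_tb, fill_factor=0.9):
    """One drive's worth of space goes to parity; keep ~10% free for ZFS."""
    raw_tb = drives * drive_tb
    after_parity_tb = (drives - 1) * drive_tb
    usable_tb = after_parity_tb * fill_factor
    return raw_tb, after_parity_tb, usable_tb

for n in (3, 4):
    raw, total, usable = raidz1_capacity_tb(n, 4.0)
    print(f"{n} x 4TB RAIDZ1: raw {raw:.0f} TB, after parity ~{total:.0f} TB, "
          f"usable at 90% fill ~{usable:.1f} TB")
```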
One last thing: if your data is important, I recommend RAIDZ2 for this. Just because it is M.2 NVMe doesn't mean it is less susceptible to failure.
Since you already have this motherboard, check for bifurcation as I indicated above.
Thank you for the answers. I will back up this data every night, so 3-4 disks with RAIDZ1 and without ECC should be fine. Great tip about bifurcation; unfortunately it doesn't support it (or at least I couldn't find any info about it). I've got an X570 Gaming Plus as well, and it supports bifurcation, but I think it's overkill (I would need to use a 16-core Ryzen). I've planned to use one disk per PCIe slot, so 3 free slots (because one is for the NIC) plus one on the motherboard is the maximum I've planned. Also, good that you noticed that about the adapter. I wanted to use only one M.2 per adapter, but I hadn't noticed at first glance that there are single versions as well. It's great info about this 4x adapter you can use on every motherboard; it is expensive, but it seems like a great alternative if I ever need more than 10.64TB, good call. My biggest concern is whether this motherboard and processor are good enough for those NVMe disks. The motherboard manual and my knowledge about lanes, PCIe etc. are mediocre.
Good enough to saturate the network connection you will have. While I'd say yes, there are also other factors that play into this. For example, the size of the files: lots of small files take longer than a few large files.
You can sit down and write out the theoretical maximum speeds of each interface to see if they will exceed the NIC interface speed; that is what I would do. Each M.2 drive uses 4 PCIe lanes. PCIe lanes have speeds based on their PCIe version and on how they are connected to the system (CPU or PCH). Some of these things you may not know; some you may find in the user manual.
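Something like this is what I mean, rounded numbers only and protocol overhead ignored:

```python
# Quick comparison of theoretical interface throughput -- a sketch with
# rounded per-lane figures, not exact numbers.

PCIE_GB_PER_LANE = {3: 0.985, 4: 1.969}   # approx. usable GB/s per lane

def pcie_gb_per_s(gen, lanes):
    return PCIE_GB_PER_LANE[gen] * lanes

nvme_gen3_x4 = pcie_gb_per_s(3, 4)   # one NVMe drive on Gen3 x4
nic_10gbe    = 10 / 8                # 10 Gb/s link in GB/s, before overhead
nic_2x10gbe  = 2 * nic_10gbe         # aggregated dual 10Gb

print(f"NVMe on Gen3 x4 : ~{nvme_gen3_x4:.1f} GB/s")
print(f"10GbE link      : ~{nic_10gbe:.2f} GB/s")
print(f"2x10GbE LAG     : ~{nic_2x10gbe:.2f} GB/s")
# For sequential transfers, even a single Gen3 x4 NVMe drive can outrun the
# 2x10Gb aggregate, so the network is the limit before the drives are.
```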
If maximum speed is the goal, I would not purchase the PCIe card with the PLX controller I mentioned above; I would find an x16 PCIe PLX card (or at least look closely at the specs). That would give each of the four M.2 drives a full 4 lanes.
I'm not sure I understand your use case. 100-300GB is not a terribly high amount of data, but how fast does the data need to be transferred? Why do I ask? A 10Gb interface can transfer 300GB in under 5 minutes. There is a huge disconnect between the amount of data being transferred and the time it needs to be transferred in. Do you need that transfer speed? If it were 2.5Gb, it would take about 18 minutes to transfer 300GB of data.
These are just basic numbers, nothing exact, but I'm trying to make a point: I need to understand whether it really is critical to pass data that fast, even if it is only for a 3-second burst several times a day.
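For reference, the rough arithmetic behind those 5 and 18 minute figures (overhead ignored, so real transfers take a bit longer):

```python
# Transfer-time ballpark -- ignores protocol overhead.

def transfer_minutes(data_gb, link_gbps):
    return data_gb / (link_gbps / 8) / 60   # Gb/s -> GB/s, then seconds -> minutes

for link_gbps in (10, 2.5):
    print(f"300 GB over {link_gbps} Gb/s: ~{transfer_minutes(300, link_gbps):.0f} min")
# ~4 minutes at 10Gb/s and ~16 minutes at 2.5Gb/s before overhead.
```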
By 100-300GB I meant unique data, so it won't be quickly available from ARC. The overall data transfer will be much higher. It's for a small 3D studio environment, so there is no rule about file sizes. Sometimes you want to open/upload one 5GB file, another time a sequence of 500 files of 50MB each, and everything in between. My absolute must-have is providing a constant 2.5Gb/s to each workstation (as the switch ports and the NIC inside every PC are 2.5Gb/s), up to a maximum of 8 of them (because the aggregated dual 10Gb/s NIC in the NAS is the bottleneck) before it starts to slow down.
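Roughly the bandwidth budget I have in mind, assuming the 2x10Gb aggregate behaves ideally (real LACP balances per flow, so this is optimistic):

```python
# Bandwidth budget sketch for the "constant 2.5Gb/s per workstation" goal.

clients         = 8
per_client_gbps = 2.5
nas_uplink_gbps = 2 * 10   # dual 10Gb SFP+ with aggregation

demand_gbps = clients * per_client_gbps
print(f"Worst-case demand : {demand_gbps} Gb/s")
print(f"NAS uplink        : {nas_uplink_gbps} Gb/s")
print("fits" if demand_gbps <= nas_uplink_gbps else "uplink is the bottleneck")
# 8 x 2.5Gb/s = 20Gb/s exactly fills the aggregate, so there is no headroom
# left for protocol overhead or remote-access traffic on top.
```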
That is good information to know as it changes the use case in my mind.
If you are looking to edit video while it resides on the NAS, you are looking at a very fast and expensive system. If you are looking to just transfer content, have the remote computer perform all the work, and then move the resulting file(s) back to the server, that should be easy to do. Working on video content live off the NAS can be done, but you need a pretty well designed system, and it will not be cheap.
I don’t think I can be of any real help, I just saw a few things with the components that I wanted to point out.
Good luck.
You don’t specify the model of SSD.
Not all M.2 NVMe SSDs are equally performant.
In fact, some cheap 4TB SSDs are terrible, and will not sustain fast speeds once their onboard cache is depleted.
I’d suggest looking at detailed storage reviews covering sustained speeds before committing to a specific device.
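A crude way to see why sustained speed matters; the cache size and post-cache speed below are made-up placeholder values, not figures for any real drive:

```python
# Illustrative only: how long a drive's SLC cache can absorb a 10Gb/s ingest.
# Both drive figures below are hypothetical -- check reviews for your model.

ingest_gb_per_s     = 10 / 8   # ~1.25 GB/s arriving off the network
slc_cache_gb        = 100      # hypothetical dynamic SLC cache size
post_cache_gb_per_s = 0.4      # hypothetical sustained write once the cache is full

seconds_buffered = slc_cache_gb / ingest_gb_per_s   # ignores background folding to TLC
print(f"A {slc_cache_gb} GB SLC cache absorbs the ingest for ~{seconds_buffered:.0f} s,")
print(f"then writes fall to ~{post_cache_gb_per_s * 8:.1f} Gb/s on this hypothetical drive.")
```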
Yeah, even 10Gb will not be the same experience as editing locally; I'll probably copy files locally anyway. But I've done a bit more research and found a company in my city that refurbishes enterprise switches and offers service for a few years. They have a Dell N4032 (24x 10Gb RJ45 plus a 2x 40Gb QSFP+ extension) for $1000. That made me realize that 10Gb might actually be within my budget. Now the bottleneck will be the disks and the CPU. They also have a dual QSFP+ 40Gb NIC on PCIe Gen 3 x8. I know that PCIe Gen 3 x8 offers 8GB/s, which is less than 2x 40Gb/s (RAIDZ with three M.2 drives is even a little less than 7GB/s in Gen3), but it still seems great (rough numbers in the sketch at the end of this post). I'm planning to buy ICY BOX PCIe 4.0 x4 adapters to help with heat, and 3x Crucial P3 M.2 drives in RAIDZ (or maybe Samsung 990 Pro). I hope the CPU can handle it, but I'm worried about one of the M.2 drives sharing bandwidth with the PCH (alongside the system SATA SSD in this scenario, if I understand correctly). Can this slow down my overall experience significantly, or cause some issues with RAIDZ? What do you think about this setup? In the worst-case scenario, I could use a second PC with a 16-core Ryzen (I don't remember the exact model), whose motherboard even supports bifurcation, or buy an x16 PLX adapter. Thanks for all the information, it helped me dig deeper into PCIe/lanes/M.2 drives etc.!
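Roughly how I'm sizing the bottlenecks mentioned above (rounded, very approximate numbers):

```python
# Sanity check of the bottlenecks listed above -- rounded figures, real-world
# throughput will be lower after protocol overhead.

pcie_gen3_x8_gb_per_s = 8 * 0.985     # slot feeding the dual QSFP+ NIC
dual_40gbe_gb_per_s   = 2 * 40 / 8    # 2 x 40 Gb/s in GB/s
raidz1_3wide_gb_per_s = 2 * 3.5       # very rough: 2 data drives x ~3.5 GB/s (Gen3 x4)

print(f"PCIe Gen3 x8 NIC slot : ~{pcie_gen3_x8_gb_per_s:.1f} GB/s")
print(f"2 x 40GbE             : ~{dual_40gbe_gb_per_s:.1f} GB/s")
print(f"3-wide RAIDZ1 (Gen3)  : ~{raidz1_3wide_gb_per_s:.1f} GB/s (very rough)")
# The x8 slot caps the 2x40Gb ports at ~8 GB/s, and the 3-wide RAIDZ1 tops out
# in the same ballpark, so neither side is wildly oversized relative to the other.
```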
I’ve had good luck with this adapter card
Please see: TrueNAS Scale NVME Performance Scaling | TrueNAS Community