Please check my build? I am insane, but HOW insane?

Hello, it has been so long since I posted that the forums moved and FreeNAS is now called TrueNAS. I forgot my username on the old forum, but I didn’t post enough to be remembered. I did post a couple of hopefully helpful How To posts back in 2014 or so; hopefully they helped some folks. lol

I am still running FreeNAS from 2 or 3 years ago and wanted to do a much-needed upgrade. I am about to travel around a lot and I am taking my NAS with me, in my truck / camper. Living my dream! Currently I have low-capacity spinning rust in a full-tower case I was able to cram drive caddies into, a Raven V2 case I think. I wanted to upgrade to a smaller case and hopefully lower power consumption by running more SATA SSDs than spinning rust. I also will need to move this thing around frequently, so SSDs are much lighter. I will be sacrificing storage space; however, I am not using as much as I thought I would when I made my current NAS build in 2015 or so.

Purpose of the server is to serve up media with TrueNAS SCALE stable. Other things: home automation, an NVR as a VM, iSCSI block storage for Steam, home-lab stuff with VMs, running Linux systems to test them out, running Docker tools, etc. My new case will have 4 drive bays that can take full 3.5" spinning rust or the smaller SSDs (with adapters). The mobo I chose has a spot for an NVMe drive to use for the TrueNAS SCALE boot. No redundancy on the boot drive. I don’t need 99% uptime; in fact, I will probably power the NAS off frequently when it is not in use. This NAS will be on the move.

After reading this forum for several days and looking up many options, here is my build. Comments and advice are helpful!! Thank you in advance! I will try my best not to ramble and just give relevant facts. This will be my 3rd Free/TrueNAS build.

Case:
45Drives HL4 with power supply and backplane. The data cables I chose are SATA: SFF-8643 to 4x 7-pin SATA. (100% full price)
Motherboard
AsRock Rack X570D4I-2T (Open box special!)
CPU
Ryzen 5 PRO 5650GE (I WILL report whether it does ECC or not, as the forums were not terribly clear. Found one used!)
CPU cooler
Noctua NH-L9i low-profile 37mm CPU cooler with heatsink, PWM, Intel LGA 1151/1150
(Someone was selling a used one!! Nice!)
RAM
NEMIX RAM 32GB (1x 32GB) DDR4 2666MHz PC4-21300 2Rx8 1.2V CL19 260-pin ECC SODIMM, compatible with Samsung M474A4G43MB1-CTD ECC Unbuffered (so it’s not Samsung, hopefully a decent clone… good price)
Network
The mobo comes with two 10 Gig copper NICs (RJ-45), but I have short, very short cable runs. I am moving into a tiny trailer (I want to be Chris Farley and live in a VAN down by the RIVER!)
MikroTik 10 Gig 4-port SFP+ switch, plus 3x SFP+ to RJ-45 adapters
Main PC (gaming and work… high amount of data transfers) has a 2-port 10 Gig Chelsio SFP+ card
I have some UniFi switches that are just 1 Gig, but they are not doing the bulk of the data transfers. I will be getting rid of those soon.
Lastly, I have a Blackhawk router that is a bit outdated? It has an SFP or SFP+ port, only 2.5 Gig, but it might connect to the NAS quickly? Another thing to test. For the road I will probably find a much more energy-efficient method for Wi-Fi.
Storage (on current NAS, approximately)
5x 3TB WD Red drives, 5600 RPM
3x 4TB WD Red drives, 5600 RPM
2x 128GB SATA SSDs for cache. I hope they are the right kind; I may not put these into the new build? I have not noticed a difference using them. I have watched 4-hour lectures on ZFS and still fail to completely understand when those caches become helpful? Perhaps the way I use my NAS just doesn’t benefit?? (Quick check I plan to run is sketched just below this list.)
3x 4TB HGST Enterprise storage drives that spin faster, 7200 RPM maybe?
1x SATA DOM for boot.
Please note this is all being downsized and migrated. I also have a ton of 1TB SATA SSDs; maybe I start with those? Or I might buy 4x 2TB SATA SSDs. I would love help to decide? Price per TB I have not quite looked into yet because I can migrate my pools in that direction. I have some time before I hit the road, nomad-like.
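On the cache question, my plan is to look at the ARC stats on the current box before deciding whether to carry the cache SSDs over. Something like this from the TrueNAS shell should tell me (the pool name is just a placeholder, and the output layout varies a bit between versions):

```
# Summarise ARC statistics; a consistently high ARC hit ratio (~95%+)
# suggests an L2ARC cache device adds little for this workload.
arc_summary | less

# Watch per-vdev activity, including the cache devices, during normal use.
zpool iostat -v tank 5    # "tank" is a placeholder for the pool name
```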

I am considering taking up that PCIe Gen 4 x16 slot with a 4x NVMe card. I am fairly certain the ASRock supports bifurcation of the PCIe slot; I hope I don’t have to buy a card with a PLX chip for that? I read the mainboard manual and it looks to support bifurcation, 4x4x4x4. Perhaps one of you knows? I know this board wasn’t entirely recommended, but for a home lab? I think it might be the solution.

I will start with some old spinning rust and SATA SSDs because that is what I have lying about (enough of the same sizes). I will try out some NVMe drives I have lying around too, plus a dummy card, to see if bifurcation is supported without the UEFI/BIOS doing some odd RAID config thing (rough check sketched below). That pool will mainly be iSCSI block storage for Steam. The manual is a little confusing re NVMe.
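My rough plan for that check, from a TrueNAS SCALE shell with the card and drives installed (assuming the slot is set to 4x4x4x4 in the UEFI; the grep and device counts are just examples):

```
# Every drive on the bifurcated riser should enumerate as its own controller.
nvme list

# Cross-check at the PCI level; count the non-volatile memory controllers.
lspci | grep -i "non-volatile"
```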

I am currently running an old Supermicro board, X10 series I think, with an E3 Xeon. It is running fine; I just have to get the weight, size, and power consumption down. Easy to load on a dolly, if it even ends up weighing that much after getting rid of most of the WD Reds. It has 32 GB of ECC DDR3, while my new mainboard needs DDR4, so I had to buy a single stick. Again, I hope to keep power consumption down.

I have liked what 45Drives is doing for a long time, but their stuff was way too large for my needs. Soo I splurged.
The smaller form factor really appealed to me, and some bling too (custom front panel logo)! I am still using PC cases from the 90s, folks! I rebuild and rebuild, even in cases that cut my hands up, so a case like this? I think it will be well worth it. It will be a welcome change! My servers have always gone in better cases than my PC builds. They are all “sleeper” PCs. lol Thank goodness ATX has stayed ATX 4ever!

Well, be kind. I would love suggestions? Cost is not a huge concern for the system, but 1200-1400 USD for an SM board was a bit much, or I could find what I already have for a decent price, but it is micro-ATX.

I have always wanted to do a Mini-ITX build, so this is a chance to give that a go. Dare I say it? The case will be sexy AND functional. It does look to have great airflow if I manage my cables well. I hope the cooler has enough room? It is not very tall…

Thank you for coming to my Ted Talk!

Now… How do you set up TrueNAS permissions again??? lol I am already starting to lose sleep over them. :slight_smile:

I guess from my perspective, how large of a camper will it be, and do you really NEED all those drives? It seems like a lot of weight to roll around, even if it is one of those large $500K+ USD rolling homes (what my wife wants but would be out of my reach).

Have you considered NVMe or SSD for storage? The hard drives should be able to handle driving on the road, but it may be best to power them down while driving. This would also greatly help with power consumption. If you are going up north and need to heat up the place, all those drives would likely do it. If you are headed to the 90°F+ areas, they would be a burden.

As for the NVMe M.2 format, try to stick with Gen 3 drives; they generally stay much cooler. There is a Gen 5 coming out that claims it does not get hot, but I have no idea when us consumers could afford it.

Your specs on some of this stuff seem a bit extreme, 10GbE as well. If it is a no-cost upgrade, I can understand it. But it looks like you are building some CIA/FBI spy truck with all the bells and whistles. Mum’s the word.

There is a PLX card that holds 10 NVMe M.2 drives. It’s a beast if you can afford to populate the entire thing; it just needs an x16 slot.

While I’m sure you have put a lot of thought into this, is this what you really want on the road with you?

1 Like

I’ll just make one suggestion: NVMe RaidZ1 with 4TB drives should be fast enough and have enough capacity with just 4 or 5 drives, then use a single SSD for boot. Forget all HDD
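Roughly something like this (hypothetical device names; you would normally build it from the TrueNAS web UI, the CLI is just to show the shape of the pool):

```
# One RAIDZ1 vdev of four NVMe drives: one drive of parity, the rest usable capacity.
zpool create tank raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1
```

With 4x 4TB drives that is roughly 12TB of raw space before ZFS overhead.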

2 Likes

PRO (xx50) = ECC

RJ-45 transceivers run hot. MikroTik specifies a maximum of two in a CRS-304, not in contiguous slots. Hopefully you only need one for the NAS and can put the rest on SFP(+). Or consider a CRS-305 instead.

APUs can only bifurcate x8x4x4. That’s still three NVMe without a PLX switch.

How many drives will actually go into the case? What are your storage needs?

Fair point, but with a 5650GE this will be a PCIe 3.0 system anyway.

VMs and iSCSI really want mirrors, for the sake of efficiency.
Mind the ports! This board supports up to 9 SATA drives… but has no SATA connector.
1 M.2 NVMe or SATA, on CPU lanes
2 SFF-8611, each for PCIe x4 or 4 SATA with a breakout cable.
For the sake of capacity and making the best use of the case, one of these should be set to SATA to serve the four bays (HDDs or SATA SSDs). That leaves three NVMe on CPU lanes in the PCIe slot, possibly a fourth NVMe on CPU lanes in the M.2 slot (and that would be the preferred slot to have a strip of mirrors) and one NVMe on chipset lanes from the second OCuLink.
All chipset I/O, OCuLink ports and network, go through the upstream x4 link.

I would use a SSD on USB adapter to boot, or the second OCuLink.
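For the VM/iSCSI side, a minimal sketch of what a strip of mirrors plus a zvol for the Steam extent could look like (example device names and a 16K volblocksize, purely illustrative; the TrueNAS UI is the usual way to set this up):

```
# Two mirrored pairs striped together: much better IOPS for VMs and iSCSI than RAIDZ.
zpool create fastpool mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1

# Sparse zvol to back the iSCSI extent; a smallish volblocksize suits random game I/O.
zfs create -s -V 1T -o volblocksize=16K fastpool/steam
```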

3 Likes

Yeah…but looking at the rest of the system, is that really going to be the bottleneck?

And driving a van with HDDs kept in a PC case is just asking for trouble. I missed the SFF-8611 ports though, so yeah, cramming in as many SATA SSDs as those breakout cables allow as RAIDZ for storage, and then as many NVMe drives as will fit as mirrors, does make more sense. I’m just mindful that 8TB drives are pricey.

Looking at the case, though… I don’t see how many SSDs it can actually hold?

RAIDZ sucks at small transactions, which is exactly what block storage is about. And 32 GB RAM may not cut it for iSCSI (seriously, put that game library on local storage in the gaming PC…).

The case has four bays, so it could hold four SATA SSDs.

I’ve never understood wanting to put that on a NAS served via iSCSI. So much extra cost and complication, for what purpose?

Yes. I am the black unlabelled Sprinter van parked out front! lol

That will be my goal. I have about 1.5 years before I go nomad, so I will be spinning down my rust and switching over my pools. Luckily I waaaaay overestimated my personal file storage needs, so the cost per TB for NVMe vs HDD is not going to kill me, plus I have over a year to absorb the costs.
Thanks for the reply! Also, I hope this will all work that way. The HW I chose might not have been quite what I hoped. We will see.

It is a highly modifiable case, so perhaps 8 SATA SSDs, but that is only one option. I might not go that route.

By the time I am travelling it is unlikely I will have any spinning rust any more.

Well, darn. I will have to test this out. If it doesn’t work, I will spring for a PLX card.

Thanks for reminding me. I have been using two in slots 1 and 4. I use a 3rd, but not in my permanent setup; mostly for testing something and then unplugging the module.

For what I am doing I don’t think the drop to 3 will be a huge deal.

I have bookmarked this and will for sure be doing a lot of testing. I didn’t read that diagram properly in the manual, and I also failed to notice the port in the photos of the mobo or in the manual. I forgot that SATA is slowly dying.
What type of breakout cable will work with the OCuLink ports? A SATA breakout? I emailed 45Drives to ask what their backplane will connect to; I may need an adapter. I am being supplied a breakout cable. I think I still have a few that connect to 3rd-gen PCIe HBA cards, and I have several of those cards as well.
I don’t entirely understand what you mean, which is probably why I missed it looking at the diagram, but I will study this when my brain is a little fresher. The parts are coming from all over the USA and Canada, so I have a while to wait. I won’t wait till the HL4 gets here; I will start bench testing once I have the hardware. Doing that in the case would be annoying anyhow. I just need the APU cooler to be low-profile enough and the PSU to have the right power. Test airflow and temps. Good to go. The bench testing will be in a couple of weeks; that is when I will have the parts and a day I can pull an all-nighter testing configs. Maybe a couple of all-nighters.
Thanks so much for pointing out these failures in my design!! Especially the bus conflicts and the odd connector. idk how I missed that reading the manual.

Also, on the SFP+ slots: depending on how far you are from the switch, you could use DAC cables and avoid the modules. I am currently using DAC cables on several SFP+ 10GbE and two QSFP+ 40GbE systems; they work great. For longer runs I use BiDi modules with short fiber patch cables. They run cooler than MMF or copper converters.

3 Likes

What would you think of this board instead of the ASRock Rack and Ryzen 5? It comes with an Atom processor.
Gigabyte B01CVKKM1I, Revision 1.3, since I clearly can’t read a manual lol. It looks OK? Or did I miss something once again? If you can find the time, thanks!

That would be a lot of money and power to get 16 lanes out of a x16 slot instead of 12.

Yes, SFF-8611 to 4*SATA. I don’t know if the AsRock Rack board comes with it.

A search actually points to the MB10-DS1, with a Xeon D-1521 CPU, not an Atom.
The Xeon D-1500 line has been a trusted solution for low-power home NAS for more than ten years, which is both good (trusted) and bad (their age). We have been discussing these Gigabyte boards, and @Davvo confirmed that they use Java-based IPMI, which would be a good reason to favour Supermicro X10SDV boards instead (HTML5 IPMI).

Mini-ITX X10SDV boards come with 6 SATA ports, for your backplane, one M.2 and a x16 slot which can bifurcate x4x4x4x4. Good fit for your plan, if the Broadwell-D CPU has enough processing power for your VMs.

2 Likes

Referencing

The Xeon D-1521 has about 1/3 the performance of the Ryzen 5 Pro 5650GE. The D-1521 is circa 2016 and the 5650GE is circa 2021, so about five years’ difference in technology. The 5650GE also has a lower TDP, at only 35W vs 45W for the D-1521.

I don’t think the MB10-DS1 board has any sort of bifurcation support on the PCIe slot (I don’t think bifurcation was a thing until 2019 or so).

How committed are you to the 45Drives case? If the desired end state is all-SSD, have you looked at any of the all-flash NAS boxes from Asustor, TerraMaster, QNAP, etc.? Aside from Synology, many of the other brands can usually be reloaded with a different OS like TrueNAS.

1 Like

TDP is mostly irrelevant. What matters for a low-power NAS is idle power draw. Xeon D-1500 is very good with that—and so are Ryzen APUs.

Xeon D-1500 can bifurcate x4x4x4x4, except for the first generation D-1520 and D-1540. Supermicro v.2 boards (all but those with D-15x0) certainly bifurcate all the way.

Good point for prebuilts.

1 Like

Good point. I will try what I purchased already and see how that goes. I have my Kill A Watt all set for when the stuff arrives.

It in fact does! Two of them, in fact. So in a way? It does come with SATA. lol

Oh, correct. I am running a Xeon now. My SM board is, I think, the generation before X10; it has the bug where I had to load IE with very old Java to connect to IPMI.

I am not sure that it would. I have to check. Thanks!

1 Like

Yeah those reasons are why I went with the Asrock Rack instead.

I am waiting for them to custom build it, so I’m pretty committed at this point. :grinning: I did look into several of the boxes you linked. I was a bit concerned about cooling. I might still downsize further to one of these, but for now, something was just pulling at me to build in a 45Drives chassis without it being a huge monster. Great ideas, thanks!

OK. I did skim the manual for the mobo and didn’t see any mention of bifurcation. Of course, some manuals don’t go into that level of detail about the BIOS, so you need hands-on time to see what can be done, or I could have missed it.

iSCSI is good because it has native Windows support in all Windows versions, AND it shows up as, and behaves like, an internal drive, not like a Samba share. (NTFS is not supported natively at all.)