Budget Build - Couple of Questions - Will Share Lessons Learned!

Hi everyone,

There’s already a ton of information in this forum. It can be a bit overwhelming, but I’m getting there. Please be patient with me! :blush:

I’m planning a budget ITX build for around $500. I will use it as a NAS, along with Home Assistant, NextCloud, Immich, and RClone to mirror to my QNAP, as well as for 4-5 small Docker containers.

Components:

  • Motherboard: JGINYUE B650I
  • Processor: AMD 8500G
  • RAM: KINGSTON FURY Beast, DDR5-6000, 32GB, CL36
  • Power Supply: Fortron Flexguru 250 Watt
  • Case: Jonsbo N10

Storage:

  • 2x 1TB M.2 (for essential data and application pool) in RAID 1
  • Other slots used for storage (RAID 5 or 6). I don’t think I will ever need more than 4TB of usable storage.

I’m torn between the Jonsbo N2 and the Jonsbo N10. My space at home is very limited, and the N10 is half the size of the N2. However, the N10 can fit enough drives for my storage needs: 2x M.2 SSDs, 4x 2.5" drives, plus a boot drive on a USB 3.2 interface. I’m a bit worried about thermals, though, and the N2 can accommodate 5x 3.5" drives. What would you do if you were in my situation?

Is there a significant advantage to going for 64GB of RAM over 32GB for my use case?

Are there any other non-obvious things I should consider when building a system for TrueNAS?

Thanks in advance, I appreciate every input!

As someone whose NAS does use a USB drive as a boot drive, I would definitely advise against this for a new build. It has been the cause of the only issues I have had with my NAS (and there have been several), and way more heartache than it was worth. Buy a board with an additional SATA port and a case that will take an additional 2.5" SATA SSD.

But if you still decide to go down this route, make sure you use a USB SSD and not a USB flash stick.

As for storage, you have a QNAP, so you have some experience with NASes, and I guess that should allow you to make a reasonable assessment of your storage needs. Don’t forget that ZFS allows you to take snapshots (which use extra space) and that, for performance reasons, it is recommended to keep pool usage below 85% or 90%. Uncle Fester’s Basic TrueNAS Configuration Guide has some reasonably extensive pages about calculating your storage needs if you need any help.
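As a rough back-of-the-envelope aid for the sizing advice above, here is a quick sketch. The 80% fill factor, the layout names, and the simple "one/two drives of parity" arithmetic are illustrative assumptions, not an exact ZFS space calculation (which also depends on recordsize, padding, and reservations):

```python
# Rough usable-capacity estimate for a single ZFS vdev, applying the
# ~80-90% maximum-fill guideline mentioned above. Illustrative only;
# real ZFS overhead varies with layout and recordsize.

def usable_tb(tb_per_drive, n_drives, layout="mirror", fill=0.80):
    """Estimate comfortably usable TB for one simple vdev."""
    if layout == "mirror":
        raw = tb_per_drive                      # every drive holds a full copy
    elif layout == "raidz1":
        raw = tb_per_drive * (n_drives - 1)     # roughly one drive of parity
    elif layout == "raidz2":
        raw = tb_per_drive * (n_drives - 2)     # roughly two drives of parity
    else:
        raise ValueError(f"unknown layout: {layout}")
    return raw * fill

print(round(usable_tb(4, 2, "mirror"), 2))   # 2x 4TB mirror -> 3.2
print(round(usable_tb(2, 4, "raidz1"), 2))   # 4x 2TB RAIDZ1 -> 4.8
```

So a simple 2x 4TB mirror already covers the ~4TB target once the fill guideline is taken into account.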

But if you genuinely only need 4TB of storage, my advice would be to buy 2x 4TB drives and mirror them, because a RAIDZ1/2 of smaller drives will likely cost far more in both drives and electricity. (This will also free up a SATA port for your boot drive.)

As for your Docker apps: if they are only used for external access to at-rest data, having them load from HDD is probably not a performance issue. If they have active data, however, they might be better on SSD, and then you should perhaps plan a separate SSD pool for these apps and their active data (which can be unmirrored if you back this pool up to HDD using ZFS replication).

Regarding HDD physical drive sizes, I would recommend you go with 3.5" drives because 2.5" drives tend to be consumer drives rather than NAS drives and many are SMR drives which are completely unsuitable for ZFS.

So this means using the N2, and since it has room for 5x 3.5" and 1x 2.5" drives, I would suggest you find a motherboard with 6x SATA ports to match. (In all other respects the JGINYUE B650I looks OK.)

P.S. Terminology with ZFS is slightly different: RAID1 = mirrors, RAID5 = RAIDZ1, RAID6 = RAIDZ2.


Thanks a lot for the detailed response; it was actually super helpful. Uncle Fester’s Basic TrueNAS Configuration Guide is nice too! It’s quite a bit to process.

It’s surprisingly hard to find a motherboard here with 6 SATA ports. There are none available for AM5 in that form factor. Even the ones with the X670, X670E, and X870E chipsets usually only have 2.

I could use a SATA expansion card that uses the M.2 or PCIe interface. I’m still trying to figure out the disadvantages of that approach.

I take a slightly different view. You’re not building a mission-critical 24x7x365 box here. As a home NAS, it needs to ensure the safety of your data, not its uptime. I use an internal USB flash drive, and yes, as others have pointed out, it’s not as reliable as a mirrored pair, but who has the internal space for that in a small case where drive bays are already at a premium? So I would suggest you’ll be OK with a USB flash drive IF you plan for its failure.

The key is to save the config file to some other storage medium (your cloud drive, your local desktop) every time you make a change. Your recovery process is then: cut a new USB key from the image, boot off it, and load the config. Job done (yes, I’ve had to do it). Be sure to put your system dataset on the data drives and then set the “syslog” option to true (so that logs get written to the system dataset on the data drives) to reduce USB flash wear. In this way, you can use a nice small NAS case and devote all the drive bays to storage.
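The "save the config after every change" habit above could be scripted as a minimal sketch like this. The source path is where the config database has lived on my FreeNAS/TrueNAS systems (`/data/freenas-v1.db`); the destination directory is a placeholder for whatever synced or replicated location you trust — both are assumptions you should adapt:

```python
# Hedged sketch: copy the TrueNAS config database somewhere safe with a
# timestamped name. Paths are placeholders; verify them on your system.
import shutil
import time
from pathlib import Path

def backup_config(src="/data/freenas-v1.db",
                  dest_dir="/mnt/tank/configs"):
    """Copy the config database to dest_dir with a timestamped filename."""
    dest_path = Path(dest_dir)
    dest_path.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_path / f"truenas-config-{stamp}.db"
    shutil.copy2(src, dest)          # copy2 preserves timestamps
    return dest
```

Run periodically (e.g. from cron) or by hand after each change; the timestamped copies double as a history of your settings.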

SATA is falling out of fashion. B650 only provides 4 ports, so you need two chipsets (X670) for 8. But a dual-chipset board makes no sense in mini-ITX, which is way too small to expose all that I/O.
Maybe you’d have more luck with AM4: B550 has 6 SATA ports.

In any case, no 2.5" drives… unless you want to go for SATA SSDs.

Excellent - alternative viewpoints are always good, provided they are logical.

Agreed. You don’t need ECC memory. You don’t need a mirrored boot drive. You can afford to do something unsupported if you understand the issues and will live with the consequences.

It is not just the flash medium that is an issue - the USB connection can also be a problem.

If this is true, it makes a huge difference to the lifetime of a flash drive. But this is the first I have heard of it.

Well, the help for the syslog checkbox says this is what it does. However, with this config, I still see one small write per hour to the USB drive according to the reports. Still, I can live with replacing the USB drive every couple of years if it means I can use a nice HP MicroServer with no external disks dangling off it!


I had this config (USB boot, system dataset on data drives) back in the FreeNAS days with almost zero issues. I say almost because UPGRADES (not updates, but version upgrades) take much, much longer than you think they will due to the speed (not) of the USB drive. Just be patient, I was not and learned how to recover from a failed upgrade.

I guess moving Syslog is an EE thing because I don’t have that option on DF.

However, a 128GB USB SSD isn’t that expensive - and if it were me I would spend the few $/€ more and use an SSD rather than a flash drive.

But the issue is the increased likelihood of disconnects, and when that happens, your system hangs.

You can set a remote syslog server in community edition.

My understanding was that when you move the system dataset that included the local syslog. By default it is NOT on the boot-pool (at least it was not on my recent installation of 24.10.1).

I think, as a beginner, I will stick with getting a cheap SSD as a boot drive and hopefully just forget about it.

I’m considering going SSD-only. @Protopia is it true that an HDD like the WD Red Plus, even when most of the time idle, still averages about 5 watts more than an SSD? I read very different opinions online.

Considering that I want this server to run for a minimum of 5 years, that would already amount to a difference of about $70 in power costs over those five years.

A WD Red Plus 4TB 5400RPM is $120 here.
A 4TB SSD like the Kingston NV3 or Crucial P3 would cost me $220.
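For what it's worth, the running-cost difference can be sanity-checked with a few lines. The electricity price ($0.32/kWh) is an assumption I picked for illustration; with it, a ~5 W always-on difference works out to roughly $70 over five years (about $14/year):

```python
# Extra running cost of a device drawing `watts_delta` more, 24/7.
# The $0.32/kWh tariff is an assumption -- plug in your own.

def extra_power_cost(watts_delta, price_per_kwh=0.32, years=5):
    """Return extra cost over `years` of drawing watts_delta more, 24/7."""
    kwh_per_year = watts_delta * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh * years

# One HDD drawing ~5 W more than an SSD, over five years:
print(round(extra_power_cost(5), 2))  # -> 70.08
```

That roughly covers the $100 price premium of the 4TB SSD only if the drive truly idles 5 W higher around the clock, so the HDD spin-down behaviour matters for the comparison.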

Even experienced users go with a cheap SSD as boot drive. :wink:

An HDD uses more power than a SATA SSD, yes. Even when the drive is idle, the platters have to keep spinning. You can try to spin the drive down, but then it takes even more power to spin back up when a request comes in.

If you go for SATA SSDs, then the N10 looks like an interesting pick for a very small server. Though I don’t like that it doesn’t even expose the PCIe slot as half height, and I’ve had bad experiences with FlexATX PSUs and the whine from their tiny 40 mm fan.

If your apps do not need much computing power, an Atom C3558 board (A2SDi-4C-HLN4F) could be a nice low-power fit in the N10: 4x SATA, an M.2 slot (x2) to boot from, and the x4 slot could still take an M.2 drive on a cheap adapter; it also uses cheap DDR4 RDIMMs.

I think the C3558 would be perfect for NAS use only, but for my use case, it seems a bit too weak. I’m currently running two Raspberry Pi 5 devices, and they are both struggling.

I will make a decision this weekend on whether to go with only SSDs or buy HDDs. I have ruled out the Jonsbo N2 because five 3.5" bays are too much for me.

There’s a guy who managed to fit two 3.5" HDDs in the Fractal Terra case, which likely has better thermals. Pretty cool! Link to PCPartPicker. I could also use the unused PCIe slot for one M.2 drive. It might be a bit wasteful, but it’s better than not using it at all.

An Atom C3000 should provide significantly more compute than a Pi.
Then it’s a matter of finding your balance of power, thermals… and price.

I don’t quite understand how the HDDs are mounted. That’s maybe a bit too much of a hassle compared with finding a case that does have bays for two 3.5" HDDs.

I agree, but the C3558 (Geekbench Search - Geekbench) scores significantly lower than the processor used in the Raspberry Pi 5 (Geekbench Search - Geekbench). Am I overlooking something?

Well, if I look on Passmark the C3558 has double the compute of a Raspberry Pi…
Trust whatever benchmark you want—or none at all.

Good reminder to also check other benchmarks. I do like to see some numbers before I buy.

You compared with the BCM2835. That was the processor for the Pi 1. The Pi 5 uses the BCM2712.

Not sure how fair the comparison is, though. It wouldn’t even be possible to run TrueNAS on the Pi.

Although SSDs should be more reliable (no moving parts), if you ever have to replace an SSD and resilver onto it, you will find the rebuild roughly 10x slower than you would expect. This is because an SSD can only write to an erased block: if no pre-erased block is available, a used block must first be erased (which takes roughly 10x as long) before the write can take place. This is called the “write cliff”. So your window of vulnerability to a second disk failure will be much longer.

And this is why, before a resilver, the code should issue a full-drive trim, telling the SSD that it is now empty so it can start erasing all the cells and have them ready to be written.

Of course, it might not take that long for all the pre-erased cells to be used up, and then things will slow to a crawl.
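The write-cliff effect described above can be put into a toy model. Everything here is an illustrative assumption (the 500 MB/s sequential write speed, the 10x erase penalty from the post, and treating 1 GB as 1000 MB); it only shows how the trimmed vs. untrimmed cases diverge:

```python
# Toy model of the SSD "write cliff": writes into pre-erased blocks run
# at full speed, while writes that must wait for an erase are assumed
# ~10x slower (the estimate given above). Numbers are illustrative.

def resilver_hours(data_gb, write_mb_s=500.0, erase_penalty=10,
                   pre_erased_fraction=1.0):
    """Estimate hours to resilver data_gb onto a replacement SSD."""
    fast_gb = data_gb * pre_erased_fraction          # hit erased blocks
    slow_gb = data_gb * (1 - pre_erased_fraction)    # need erase first
    seconds = (fast_gb + slow_gb * erase_penalty) * 1000 / write_mb_s
    return seconds / 3600

# 4 TB of data, drive fully trimmed beforehand vs. not at all:
print(round(resilver_hours(4000, pre_erased_fraction=1.0), 1))  # -> 2.2
print(round(resilver_hours(4000, pre_erased_fraction=0.0), 1))  # -> 22.2
```

Under these assumptions, a pre-resilver trim is the difference between a couple of hours of vulnerability and the better part of a day.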

OP here :slight_smile:

Small update: I got an HP Elite Mini G9 with 64GB for cheap. It’s definitely lacking options to extend storage further, but it’s super compact at 1 liter! Cute little guy.

Storage is planned to be:
1x M.2 2230: boot drive (old SK hynix 256GB)
2x M.2 2280: SN850X 4TB drives (for a 4TB mirror)
1x 2.5" SATA: unused

Any ideas what I could do with the leftover SATA port that would benefit my system?