My 1st build, 21 SSDs and a Deep Mini ITX board in a Jonsbo N3

This is my first attempt at building a home server. I did a lot of research to try and get it right, but still managed to make a couple of mistakes. My main issue is airflow, especially for the M.2 drives. My aim was a small and quiet build with 40TB of usable space and plenty of room to expand. There is a separate thread where I’m discussing the airflow issues here. I know I need to rotate the CPU cooler 90 degrees; I’ve got the adaptor waiting to do that.

Full spec;

Case    - Jonsbo N3
PSU     - HDPlex 500W GaN ATX
Board   - ASRockRack AM5D4ID-2T/BCM (Deep Mini-ITX)
Memory  - 2 x Kingston 48GB 5600MT/s DDR5 ECC (96GB total)
CPU     - AMD Ryzen 5 7600 (6 cores)
HBA     - Broadcom 9500-16i
TPU     - Dual CoralAI
Data    - 8 x Kingston DC600M 7.68TB in RAIDZ2
Apps    - 3 x Micron 7450 800GB NVMe in a mirror
Boot    - 2 x Kingston DC600M 480GB in a mirror via USB adaptors
Cooling - 5 x Noctua case fans & Noctua NH-U9S CPU cooler
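
For reference, the layout above maps onto ZFS roughly like this. It’s only a sketch: the pool and device names are placeholders (use /dev/disk/by-id paths in practice), and on TrueNAS you’d normally build these through the web UI rather than the shell.

```sh
# Data pool: 8-wide RAIDZ2 of the 7.68TB DC600Ms (survives any two disk failures)
zpool create data-pool raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Apps pool: three-way mirror of the Micron 7450 NVMe drives
zpool create app-pool mirror nvme0n1 nvme1n1 nvme2n1

# Boot pool: two-way mirror of the 480GB SATA SSDs (the installer normally creates this)
zpool create boot-pool mirror sdi sdj
```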

Some pictures.

It’s not in ‘prod’ yet; I’m still burning it in and using my old Synology NAS at the same time. When I first got the case I ripped out all the front panel connectors and the backplane for the 8 x 3.5-inch drives. I drilled a small hole in the back for an aftermarket power switch. At some point I’ll remove the middle front panel and fill all the holes so it looks neater, but I’m focused on getting it working for now. The only other mod was to create a bracket to mount two 2.5-inch drives in the top compartment; I used a PCIe bracket for this.

I used a bunch of IcyDock bay adaptors to make space for 16 SATA drives in the bottom, which are connected to the Broadcom HBA; only 8 are used at the moment. I’ve bifurcated the PCIe slot to x8x4x4, with 8 lanes for the HBA and two lots of 4 lanes for a pair of M.2 drives; there is another M.2 slot on the motherboard. Finally, there are two 480GB SATA SSDs mounted in the main bay for the boot pool. I’ve got space for 8 more SATA SSDs, some of which will come from my old NAS when I’m done with it.

USB adaptors
Had a bit of fun with these. It turned out that some of the adaptors I tried didn’t report serial numbers correctly, which caused issues for TrueNAS, and I also had leftover metadata on the drives they were connected to. That caused issues too, but it took me a while to realise the metadata was the problem and not the USB adaptor. I settled on the UGreen 3.0 adaptors in the end. Full sordid details on this thread.
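
For anyone hitting the same thing, here’s a minimal sketch of the kind of checks involved, assuming a TrueNAS SCALE shell; /dev/sdX is a placeholder, and the last two commands destroy data, so point them at the right drive:

```sh
# Does each USB-SATA bridge pass through a unique serial number?
lsblk -o NAME,MODEL,SERIAL

# SMART identity through the bridge (many adaptors need SAT pass-through)
smartctl -i -d sat /dev/sdX

# Clear leftover ZFS labels from a previously used drive (destructive!)
zpool labelclear -f /dev/sdX

# ...and any stray partition table or filesystem signatures (also destructive!)
wipefs -a /dev/sdX
```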

Power usage
Measured at the wall with no UPS connected; my old Synology keeps the UPS until I switch over to this one. I’ve been really impressed with the HDPlex PSU: it runs cool and quiet and was really easy to mount on the front panel. Based on the usage below I’m tempted to try the smaller 250W PSU from HDPlex.

  • Off with just IPMI active - 11W
  • Peak draw while booting - 120W (mostly 80W to 100W)
  • Idle power with default settings - 81W
  • Scrub of data-pool, file transfer and Plex indexing - 132W

So yeah, it’s been fun building and setting up. My background is more software than hardware so it’s been an education.

5 Likes

I can’t really tell the airflow from your pictures, but in the middle pic with the three fans showing, you might want to flip the two fans that are together around to blow into the case, and the lonely one to blow out. It looks like you have one facing in and two blowing out. Two in and one out will pressurize the case a little bit. It looks like the MB section is stacked on a drive section? Guessing the bottom has two more fans and the drives?

I’m saying try a different cooling pattern to see if it helps any.

I’ve got 4 fans blowing out of the case at the moment. The lonely one is mounted on a PCIe bracket; it’s not actually on the side of the case. I’ll try different fan configurations as well as improving the heatsinks. I did intend for the CPU cooler to blow towards the back as well, but the motherboard has the mounting sideways, which caught me out. I have an adaptor to rotate it so that everything is blowing front to back.

  1. This is a beast of a build - and especially with the 2x TPU I am assuming that you are going to run some serious VMs or Docker apps on it.

  2. I have to advise you STRONGLY against using USB ports for your boot drive(s). I use a USB drive for boot and depending on which USB port I plug it into it is either less reliable than a SATA port OR is completely unreliable. Of course, your hardware USB ports, USB port drivers and USB → SATA bridges might be better than mine, but for what is such a high-spec quality build in all other respects, using USB for boot drives is IMO a major mistake.

A few more minor points:

  1. RAIDZ2 looks like a good choice for your data pool.

  2. The 9500 is a PCIe 4.0 card, and the MB is PCIe 5.0, so some wasted performance here. But PCIe 4.0 is still pretty fast.

  3. Zero point in using 480GB drives for boot when even 32GB is twice as much as needed. And since the boot drive plus configuration file is supposed to make TrueNAS an appliance, keeping a spare boot drive in a drawer with a copy of TrueNAS pre-installed may be good enough for your needs, i.e. boot drive fails, fit the replacement, upgrade TrueNAS, import your backed-up configuration file and you should be up and running again (a manual config-backup sketch follows this list).

  4. I would suggest that you put the 2x480GB USB drives and the motherboard M.2 800GB drive in a drawer (or return them for a refund) and instead buy 2x the smallest quality NVMe drive you can to use as a single boot drive on the MB plus a backup in a drawer, and stick with a 2x mirror for the apps pool.

  5. With all SSD disks and no GPU I would have thought that a 500W PSU was a lot bigger than you need - but it should also be very reliable and run cool (and so quiet) since it will not be stressed. Also, not sure what heat is put out by the SATA SSDs but probably a lot less than HDDs so hopefully cooling won’t be an issue.

  6. Unclear where the TPUs plug in - but I assume that is just me being dense.
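
On the configuration file: the supported route is downloading it from the web UI, but for a scripted copy something like the sketch below works, assuming the config database lives at the usual /data/freenas-v1.db and with a hypothetical destination path:

```sh
# Snapshot the TrueNAS config database to the data pool.
# sqlite3's .backup takes a consistent copy even while the system is running.
sqlite3 /data/freenas-v1.db ".backup '/mnt/data-pool/backups/truenas-config.db'"
```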

Now for some implementation hints:

  1. Make sure that you flash the HBA to IT Mode (if you haven’t already done so); a quick way to verify what it’s running is sketched after this list.

  2. Implement @joeschmuck’s Multi-Report script so that you get storage errors and your TrueNAS configuration file emailed to you.
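
The quick check promised above, assuming TrueNAS SCALE, where the 9500 runs under the Linux mpt3sas driver:

```sh
# mpt3sas exposes the controller details through sysfs
cat /sys/class/scsi_host/host*/board_name
cat /sys/class/scsi_host/host*/version_fw

# The driver also logs the firmware version at boot
dmesg | grep -i mpt3sas
```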

Good luck and let us know how it goes by updating this thread.

1 Like

Thanks for the comments.

  1. Yes, at the very least I’m planning on Plex, Roon and Gitea. I want to have a play with Frigate and ZoneMinder and see which one I like the best. I can imagine Grafana, Cloudflared, NextCloud and others before long. I’ve given myself plenty of RAM and space on the app pool to experiment and grow.
  2. I’ve read all the warnings about using USB and researched the various bridge/controller chips. Hopefully the ASMedia bridges (ASM225CM) in the UGreen adaptors I’m using will be stable. They are running the latest firmware which is well regarded in various articles I read. I’m keeping an eye on it and have accepted the risk.
  3. Thanks
  4. I know, but I only want SATA SSDs for the data pool. Even 16 of them going flat out (roughly 550MB/s each, so about 8.8GB/s total) sit comfortably within PCIe 4.0 x8 (around 16GB/s). I’d rather waste a bit of performance than a bit of electricity.
  5. I searched and I searched, I couldn’t find any drives that were smaller and actually cost less. I paid £74 (I’m in the UK) each for the boot drives. Anything smaller cost more for some reason. In the end I just got fed up looking and went with those at the same time as I purchased the main data pool drives.
  6. I wanted 2-drive redundancy on the app pool; I really care about uptime. This comment really is making me reconsider the approach I’ve taken for the boot and app drives though, especially considering the cooling issue I have with the M.2 drives. Hmmm…
  7. The SATA SSDs run nice and cool, well under 40 degrees. I wasn’t sure about the PSU so I went bigger to play it safe. I’ll try swapping it for a 250W supply from HDPlex and see how that does.
  8. There is a 2-lane OCuLink port on the motherboard, so I’m going OCuLink → U.2 adaptor → M.2 adaptor → PCIe x1 bifurcation adaptor → 2x TPUs. A bit convoluted but it’s working great. :smiley: Adaptors all the way down!

For the implementation notes: as far as I can tell the HBA came in IT mode, so I didn’t bother. I checked that it has the latest firmware, which it does. It’s not a model that supports any RAID features. I’ve added a job to my todo list to check out the Multi-Report script. Thanks for the tip.

That is not cheap, probably because they are DC600M Data Center mixed use drives? (never heard of those before)

But just for booting, something like a Lexar NQ100 2.5” SATA III (6Gb/s) 240GB SSD on amazon.co.uk for £15+shipping should be more than good enough.

As mentioned above, adding a new drive and importing the config is easily done, when you have easy access to the hardware and easy access to the drive.

1 Like

Interesting idea to fit 16 SATA SSDs[1] in an 8-bay case… Probably quite expensive though.
It’s interesting that a Deep Mini-ITX board fits in.

But the cooling is all over the place, with many fans blowing in different directions. That may actually reduce performance compared to having fewer fans.
The top mesh of a N2 or N4 could have helped (with a NH-L12x77 maybe?); with the closed top of the N3 it’s more difficult.
I also have some trouble reading your picture. The two fans on the right must be the exhaust at the back, so left is the front, which should be an intake but appears to be blocked. What is that?
At the top of the picture, you’ve placed a fan on the side to cool the HBA. It seems that the fan is blowing TOWARDS the HBA. But the CPU cooler is also blowing towards the HBA from the other side, so there’s a clash of flows there, and a 2.5" SSD is blocking the other part of the side panel. I would turn the side fan into an exhaust.


  1. My own 16-SSD pool (2 x 8-wide RAIDZ2) is in a single IcyDock, an opportunistic second-hand purchase from STH. And then I went looking for a case with the required two 5.25" bays for the IcyDock. ↩︎

Sweet board

Are you using the board’s OCuLink connector yet?

I liked the case, that is why I looked it up. It’s the power supply you are looking at, at the front.

There usually is no fan at the side; that one is mounted to a bracket holder, I guess mainly in order to cool the HBA.

Not sure if reversing the flow and sucking air out of the case would generate enough airflow over the heatsink of the HBA.

But hey, surely worth a try, air moves in mysterious ways, especially in PC cases :slight_smile:

1 Like

That was the one thing I noticed. Triple mirror NVMe…

It’s also why I mention the OCuLink. I think the board is supposed to come with an OCuLink to SATA adapter.

Or you can probably get an OCuLink to M.2/U.2 adapter too.

Ah, I thought the front was an intake.

Possibly less than blowing. But right now, how does air get out once it’s warmed by the HBA? Not by the top. Not by the front: There’s the PSU. Not by the side panel: It’s closed by an SSD. Not by the CPU area: There’s a flow coming from there. Not by the slots at the back: There’s the flow from the side fan.
And there’s air coming from the CPU.

Already in use, I guess…

1 Like

:scream:

I figured USB coral :wink:

Photos of that?

Or can you add captions to the pictures in the first post? It can be a bit hard to tell what we’re looking at!

1 Like

I’ve annotated the picture, hope that helps. Arrows show which way the fans are blowing. When the cover is on, every opening has a fine mesh to stop dust, apart from the ones at the back where air blows out. I would normally prefer a positive-pressure case but this one doesn’t seem to be designed that way.

@prez02 I think I’ll have to learn a lesson here: don’t overdo the boot pool. I guess going for enterprise drives here was too much? Given that I’ve already got them installed, is there any benefit in changing them to something smaller/cheaper now?

@etorix Yeah, all those IcyDock bay adaptors soon added up. I purchased parts from all over the place to keep costs down. The IcyDock bays came from amazon.de and the motherboard was shipped from the USA (to the UK) thanks to amazon.com. The 8 SATA SSDs for the main data pool made up about 60% of the total cost :astonished:

I really intended the CPU cooler to be blowing to the back, but I didn’t notice that the motherboard mounts the cooler sideways. I’ve got the adaptor from Noctua to rotate it but haven’t fitted it yet. So at the moment it’s blowing warm air onto the HBA, which isn’t great. I’ve got a temperature sensor from the motherboard attached to the HBA heatsink and it’s actually staying nice and cool, under 30 degrees C. The only things in the entire build reporting over 40C are the three M.2 drives, which are all 55C to 60C.
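
For anyone wanting to watch the same numbers, a minimal sketch of reading drive temperatures from the shell, assuming smartmontools and nvme-cli are available (as on TrueNAS SCALE) and with placeholder device names:

```sh
# SATA SSD temperature via SMART
smartctl -A /dev/sda | grep -i temp

# NVMe drive temperatures (composite plus individual sensors)
nvme smart-log /dev/nvme0 | grep -i temp
```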

The Deep Mini-ITX motherboard only fits in this case if you unscrew and remove the mounting for the standard SFX PSU. The HDPlex PSU fits really nicely and completely clears all the other components. Both PSUs block about half the opening in the front panel, so a lot of air can still get in that way.

@Stux Yeah, the OCuLink port is connected to the TPU. Sadly, there is no SATA controller behind it, just 2 lanes straight to the CPU. So an OCuLink to SATA cable won’t work, and I couldn’t find any SATA controllers that connect via OCuLink. Otherwise I would have connected the boot drives to the OCuLink and the TPU via USB instead. (There is a different version of the TPU available that connects directly to USB, no adaptors required.) I’ll get a picture of the many adaptors for the TPU tomorrow; I’m not in my office any more today.

3 Likes

This is what it looks like with just the motherboard mounted. The SFX PSU that should go in this case would go where the bottom of the RAM slots are, up to the yellow sticker. You can see the holes in the front panel where air can come in as well.

1 Like

Well, they are not much in the way, but still generate heat and block the air intake on that side a little bit.

Following Protopia’s suggestion below would reduce the heat and free up space, maybe allowing you to move the TPUs or the NVMe drives for the apps to that side?

It really depends how the temperatures develop when it is up and running and how reliable the USB connection turns out to be.

Apart from that, if you cannot return them and do not need them in a different project in the foreseeable future, I would leave them where they are.

1 Like

Hi @Stux, these are the TPUs with the many adaptors to get them to OCuLink. It’s working well, but as you say, the USB version would be much simpler if I had a free internal USB port.

The two chips on the green board are the CoralAI TPUs themselves; the chip on the black board splits a single PCIe lane in two, one for each TPU. The sideways board at the top is the OCuLink to U.2 adaptor. Works perfectly.

1 Like

Looked up the Kingston DC600M 7.68TB single-unit price, and I almost fell off my chair… And you have 8 of them :grimacing:

And the motherboard is expensive too!

Yikes, this is definitely not a budget build.

I started buying parts for this build a good few months ago, and even in that time the cost of storage has gone up a crazy amount. So yeah, I spent a lot on this, but if I did it again today it would have to be with cheaper parts. I’m planning to use it partly for work stuff (I’m a freelance software engineer) so I’m really focused on making it reliable.

1 Like