Advice on Server Build

Hello

Here’s my planned build:

  • CPU: Intel i5-10600
  • MB: MSI B460M-A PRO (Intel B460 chipset)
  • RAM: 2x Kingston HyperX Fury DDR4 2666 MHz
  • SSD: Samsung 970 EVO NVMe, 500 GB
  • PSU: Corsair CV550 550 W or Corsair CS550M 80 PLUS Gold
  • Case: Nanoxia Deep Silence 4 Anthracite mini-tower

or this:

  • MB/CPU: ASRock N100M
  • RAM: 32 GB
  • SSD: Samsung 970 EVO NVMe, 500 GB
  • SATA: ASM1166 6-port controller
  • PSU: Corsair CV550 550 W or Corsair CS550M 80 PLUS Gold
  • Case: Nanoxia Deep Silence 4 Anthracite mini-tower

I will use TrueNAS. My PC, the server, and my router all support gigabit speeds.

I want to use RAIDZ1 and start with three 16 TB HDDs (Toshiba MG08ACA16T): one for parity and the other two for storage. I plan to add more in the future. I have read that this configuration is recommended up to 5 drives, and that beyond that RAIDZ2 is recommended. Is that true?

There will only be video files on the server; I want to store my entire Blu-ray and 4K HDR collection, remuxed.

I will watch TV shows every day, so does that mean the drives will be constantly stressed?

I will play files locally on my gaming PC, but maybe in the future I will set up a Plex server (can I watch them through that without re-encoding?). I have heard about transcoding but I’m not really sure what it does for 4K HDR.

Also, if I set up my Plex server later, does that mean I have to put the files on the server again?

On a final note, you guys seem to really stress power consumption, so how would I go about minimizing it, considering that I will watch content from it every day?

Is this good for what I want to achieve?

I can say that since both boards have a Realtek NIC, either one will cause stability issues unless you add an Intel PCIe NIC card. I learned this the hard way after my recent build. Since installing the Intel card, though, no issues.

Also note that if you use this disk for TrueNAS, you can’t use it for anything else, which is a total waste. Grab a smaller, cheap SSD (32/64/120 GB), even used, and use it for boot.

RAIDZ1 gives you more space at the cost of redundancy. Normally RAIDZ2 is recommended, but you would need at least one more disk to get the same capacity as the 3x RAIDZ1.
If the data really is only a movie collection, which I assume is not critical, you probably won’t need the RAIDZ2 redundancy… But think about adding more mirrored vdevs to the same pool instead.
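
To put rough numbers on the capacity tradeoff, here is a back-of-the-envelope sketch (my own illustration; it ignores ZFS metadata/padding overhead, recommended free-space headroom, and the TB-vs-TiB distinction):

```python
# Rough usable-capacity comparison for 16 TB drives.
# Ignores ZFS metadata, padding, and recommended free-space headroom.

DRIVE_TB = 16

def raidz_usable(drives: int, parity: int) -> int:
    """Usable capacity of one RAIDZ vdev: data drives times drive size."""
    return (drives - parity) * DRIVE_TB

def mirror_pool_usable(vdevs: int) -> int:
    """Usable capacity of a pool of two-way mirror vdevs."""
    return vdevs * DRIVE_TB

print("3-wide RAIDZ1:  ", raidz_usable(3, 1), "TB, survives any 1 failure")
print("4-wide RAIDZ2:  ", raidz_usable(4, 2), "TB, survives any 2 failures")
print("2x 2-way mirror:", mirror_pool_usable(2), "TB, survives 1 failure per vdev")
```

All three land at 32 TB usable, which is exactly the point: matching the 3-wide RAIDZ1’s capacity with RAIDZ2 costs a fourth drive.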

The recommendation for RAIDZ2 has more to do with the drive size than the drive count, and 16 TB is well past the size where we typically recommend RAIDZ2. How important is your data to you?
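
To illustrate why size matters: during a RAIDZ1 resilver you have no redundancy left, and rebuilding 16 TB is slow even in the best case. A hypothetical lower-bound estimate (the ~250 MB/s sustained rate is an assumption in the ballpark of the MG08’s spec sheet; a resilver on a pool that is also serving data runs slower):

```python
# Optimistic lower bound on resilver time for a full 16 TB drive.
# Assumes ~250 MB/s sustained sequential writes (ballpark spec-sheet figure);
# a resilver competing with normal pool I/O will take considerably longer.
drive_bytes = 16e12
write_rate = 250e6  # bytes per second
hours = drive_bytes / write_rate / 3600
print(f"Best case: ~{hours:.0f} hours exposed with zero redundancy on RAIDZ1")
```

That is roughly 18 hours at an absolute minimum; in practice a day or more is common, and that is the window in which a second failure loses a RAIDZ1 pool.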

No, you’d set Plex to read the files that are already there.

I wouldn’t say that we as a group stress power consumption, but certain of us do. I’m not among them. A high-efficiency PSU, a low-power CPU, SSDs rather than spinners if possible, and avoiding 10 GbE or SAS HBAs are the general tools for achieving this.
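
If you want to put a number on the stakes, annual cost is just watts × hours × rate. A toy example (the 30 W average draw and 0.30 €/kWh rate are assumptions for illustration, not measurements of any build here):

```python
# Annual electricity cost of a 24/7 server.
# The 30 W average draw and 0.30 EUR/kWh rate are illustrative assumptions.
avg_watts = 30
eur_per_kwh = 0.30
kwh_per_year = avg_watts / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/year -> about {kwh_per_year * eur_per_kwh:.0f} EUR/year")
# At this rate, every extra 10 W of idle draw costs roughly 26 EUR/year.
```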


I am planning to use it for cache, and install TrueNAS on a USB drive or an external SSD.

Hello, thanks for responding. I don’t really know what you mean by that. What kind of instability, do you mean the network connection?

Good case, average PSU.
As already pointed out by @dan, RAIDZ2, implying four drives to begin with, would be safer.

This I don’t like at all…

And, if you’re buying parts to build the NAS, it is a pity you’re not going for ECC.

By default drives would be kept spinning. You may try to have them spin down when not in use, but plan carefully, as repeatedly spinning up and down is what most stresses drives; a few cycles a day is fine.
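
As a sanity check on “a few cycles a day is fine”, compare the cycle rate against the drive’s rated budget (the 600,000 figure below is a typical load/unload rating for enterprise drives; treat it as an assumption and check your datasheet):

```python
# How long a drive's load/unload budget lasts at a given spin-down rate.
# 600,000 cycles is a typical enterprise-drive rating; verify on the datasheet.
rated_cycles = 600_000
for cycles_per_day in (4, 24, 96):
    years = rated_cycles / (cycles_per_day * 365)
    print(f"{cycles_per_day:3d} cycles/day -> budget lasts ~{years:.0f} years")
# A few cycles a day never comes close; an aggressive 15-minute idle timeout might.
```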

Re-encoding at a lower resolution when the client cannot handle the original by itself. You may well not need it.

An SSD + USB adapter is viable (I’m doing the same).
But don’t expect to benefit from an L2ARC.


While this is a nice project, you should ask yourself: is it worth the money just to have an “on-demand” video server? Think about it in our current world of internet-based content. A good system is going to cost some good money, and hard drives will need to be replaced, which costs money. You could spend that money on more Blu-ray discs instead. I only say this because for some of us this was part of the reason for building a NAS. It turns out to be a very expensive endeavor if this is the only reason.

You could build a small server on your gaming computer using something like VirtualBox: just make a small virtual server to play with. You should be able to support all the items you desire with a small-capacity virtual drive, and if this turns out to be what you really want, then purchase the hardware you need.

With all that said, you do not need a very powerful system to achieve what you want.

Both motherboards have a Realtek NIC (as previously pointed out), so you will need to use a PCIe slot for an Intel NIC.

I found it is best to store the video in its original format and let the device displaying the video content handle any transcoding required. The only possible issue doing things this way would be if you are streaming to a smartphone, where you really do not need high-quality video on such a small screen. But hopefully your goal isn’t to stream to your smartphone but rather to a family-room TV. You should look into Plex transcoding more if this is a serious plan. And again, that is something a virtual TrueNAS system could help you decide.

I hope my question isn’t stupid, but why would I use a PCIe network card? The motherboard already has one.

The Intel i5-10600 configuration costs me 200€ (all from the same seller) + 70€ for the case.

Realtek drivers are known to collapse under the load that ZFS can throw at them.

You may be happy with it, or not.

Not bad. So you’re in the EU. What about this 59€/69€ board (with or without cooler)?

Add a Ryzen CPU and DDR4 UDIMM, preferably ECC (or, at least, you’d have the option to upgrade to ECC later).

Aren’t AMD setups not ideal for server builds? From what I saw, they are very bad when it comes to transcoding and also very bad on idle power consumption compared to Intel. Also, I see that there is no HDMI on it, just VGA…

Intel iGPUs are better for transcoding, but it’s not clear you need transcoding at all—are you serving 4K HDR videos to stream to an old 5" phone?

TrueNAS has no use for any kind of video output. This is a genuine server board: It boots headless, the VGA port is powered by the BMC if you want to plug a monitor to set up the BIOS rather than doing it through IPMI. (With Gigabyte’s IPMI you don’t even need to flash TrueNAS to a USB stick to boot from, you can load the image remotely to install.)

Builds with this board have been measured at 20W idle, 33W with two HDDs. Much worse than Intel? I doubt that.

You don’t need a ‘cache’. Use it for installing your apps & VMs instead. Since you’ve not built the server & monitored your ARC hit rate, I’m confident that you don’t need an L2ARC. I’m fairly certain it would be more likely to reduce performance than to increase it, especially with that amount of RAM.

You also don’t want a special metadata vdev, because it would be only one consumer-grade SSD (if your special vdev dies, you lose EVERYTHING in your pool).

In short, don’t get a cache. These should be used only when you have a very firm & specific use case & an understanding of the problems & pitfalls they can introduce. This isn’t a matter of “add an NVMe cache for super blazing speeds”.
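
If you ever want to revisit this, measure your ARC hit rate first. A minimal sketch, assuming TrueNAS SCALE (Linux), where OpenZFS exposes its counters at /proc/spl/kstat/zfs/arcstats:

```python
# Compute the ARC hit ratio from OpenZFS kstat counters (Linux / SCALE).
# A consistently high ratio (say >90%) means an L2ARC has little to add.

def arc_hit_ratio(path: str = "/proc/spl/kstat/zfs/arcstats") -> float:
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:   # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    total = stats["hits"] + stats["misses"]
    return stats["hits"] / total if total else 0.0

print(f"ARC hit ratio: {arc_hit_ratio():.1%}")
```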

As far as transcoding goes: if you’re going for 4K HDR, remember that not all devices support HDR. If you’re watching HDR content from Plex on something without HDR, you’d need something to do tone mapping, otherwise the colors would look awful. HDR aside, you can think of transcoding as compression that makes the stream look as good as possible while using less bandwidth to deliver the content to your device(s).
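
For a sense of whether bandwidth alone would ever force transcoding on a gigabit LAN, compare typical stream bitrates to link capacity (the bitrates and the 90% usable-throughput figure below are ballpark assumptions, not measurements of your files):

```python
# Will a direct-play stream fit on a gigabit link?
# Bitrates and usable-throughput fraction are ballpark assumptions.
usable_mbps = 1000 * 0.9  # gigabit Ethernet, ~90% usable in practice
streams = {"1080p Blu-ray remux": 35, "4K HDR remux": 80}
for name, mbps in streams.items():
    print(f"{name}: {mbps} Mb/s -> ~{int(usable_mbps // mbps)} simultaneous streams")
# Direct play fits with room to spare; here transcoding is about client
# support (HDR tone mapping, codecs), not bandwidth.
```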

That is because it is a server board - with IPMI, you’d never need to connect a monitor or keyboard to it unless something seriously went wrong. Thanks to IPMI you’d be able to use another computer on the network to control the entire system even during boot-up for BIOS configuration.

Overblown. The main advantage for Intel, I’d argue, is that the iGPU on Intel CPUs (non-‘F’ SKUs) is decent at transcoding. Depending on how many streams, the quality of the transcode, etc., you might be stuck installing a GPU to take over anyway.


So this motherboard will be 65€ + 20€ shipping.
Would a Ryzen 7 3700X be a good combination with it?
Also, will the 2x Kingston HyperX Fury DDR4 2666 MHz (2x8 GB) be compatible with it? I will also be using the same PSU. The thing is, will this be better than the Intel i5-10600 build?
I’m just thinking that if down the line I want to use it for more than just storing video files, like a VM, I would not be bottlenecked.
I have checked the manual for the board but couldn’t find any info on whether using the M.2 slots would disable some SATA ports.

The 3700X will give you two additional cores to play with if/when you want to get into VMs. I see no reason why the DDR4 kit you quoted wouldn’t work. You might want to spend a few extra $ on a 2x16 GB kit if you’re going to be playing with VMs and apps.

No clue on the specifics of that motherboard; all I know is that it is REALLY cost-effective for a small home server. It gets recommended a lot as the best bang for the buck. Of course, if you spend several times more, you could get more.

I think it’ll be a great starter motherboard due to the low cost of entry, & if you ever find yourself wanting more features, PCIe lanes, etc. down the road, it’ll make an excellent motherboard for a back-up storage server.

If you’re going to go the route of that motherboard & the 3700X, you’re going to need a GPU if you want to do transcoding, as the IPMI will only be useful for remote management.


For NAS use, I would even drop the ‘X’.
If lower power is your focus, go for an APU: their monolithic design uses less power. (And then you may take the bundled cooler for spare change. It used to be offered for 10€ extra, but has now dropped to 4.01€… because the base price for the board has risen by 5.99€ since the last time I checked. Dynamic pricing, and Piospartslap “accepts offers”.)

Probably. Besides the price and IPMI, the strong point of this board is that it can use ECC RAM, and I suggest seeing whether you can fit that within your budget to do “ZFS by the textbook”.

That’s an easy one from the specifications (or the block diagram in the manual): the M.2 slot is PCIe 3.0 x1 only and does not share anything. It is strictly intended for booting from an NVMe drive.
Note that the PCIe x4 slot is open-ended and comes from the CPU. It can take a x8 NIC or an Arc dGPU for transcoding.
For further NVMe drives, you’re looking at the x16 slot, which can bifurcate to x4x4x4x4 with most Ryzen CPUs (exactly those desktop CPUs which are ECC-capable). Intel boards can’t do that… :stuck_out_tongue_winking_eye:

Exactly. It is a great entry point into server-grade hardware—and evidence that “server” is not necessarily more expensive than “consumer”.


To add a bit: just make sure you’re looking at UDIMMs, not RDIMMs, for that motherboard if you’re going the ECC route. I’m fairly confident that consumer Ryzen CPUs only accept UDIMMs, regardless of ECC.


This one I have as a spare, so that’s why I was asking about it.

I can get one used for 70€. I also have the Wraith cooler from a 5600X; I think that would do the trick for it (the server will be in my room, so I hope it’s not loud; the case I chose does have noise reduction built in).

So I should install TrueNAS on that (Patriot P300 128 GB NVMe M.2, which I could get new for 15€) and forget about using a cache?

I could get an AMD Ryzen 5 3400G for 30€ (I’m not sure if it’s supported by the motherboard, since these are Zen+?), but I read that since this motherboard has a GPU on board you can’t use the iGPU, and also something about only supporting x8x4x4 instead of x4x4x4x4; I’m not sure what that means. And I think that ECC RAM will not work with these.

I don’t understand what is so special about this memory. I have watched videos on YouTube, and most of them concluded that this type of memory is overrated and doesn’t make a difference.

Ehh - peace of mind, really. I’ve had maybe 2-5 ECC errors caught & corrected in the last 3-4 years. I care too much about my data, & while it is a big long shot that those errors would have caused corruption, it does help me sleep better at night.

If you’re going to treat it like a server instead of a desktop, you’re going to plan for 24/7 operation. A bit flips in the wrong place & causes the OS to crash or a file to corrupt? Maybe ECC would have saved you. Is the 0.02% chance per year of this happening (made up, not an actual probability) worth the cost of not grabbing ECC RAM? In your use case, yeah, very likely ECC doesn’t matter.

YouTubers are generally where I’d go to watch fun, goofy things that vaguely involve technology, or for a fun project idea. When I want to do something properly, I reach out to the vendor or engineer & read boring tomes of white papers: people who have the boring, well-paid, salaried job of designing things that stay online >99.999% of the time. I trust those folks over people who are trying to appeal to a mass public audience in exchange for ad revenue.

But yeah - ECC is a soft suggestion for your use case. If this were a server that was bringing you money & was vital to the operation of your business, it’d be a no-brainer to toss in an extra ~$40 per DIMM to reduce the failure chance by 0.0whatever%.

While we’re at it, maybe a UPS is worth it for you, maybe not; ZFS is apparently very resilient about avoiding data loss on power loss. But maybe you don’t even want to risk that chance, or deal with figuring out why your server went offline after a short power outage. It depends on what you consider actually worth your money.

Everything listed here is a suggestion; I’d argue very good suggestions. There are plenty of folks running TrueNAS on… interesting choices… that “worked just fine” until they ran into serious issues in the long run, hence why a lot of long-time posters on the forums really go out of their way to recommend best practices. Sometimes fanatically.

This is a very good idea, especially the part about not using a cache (whether it be L2ARC, a write “cache”, or a special vdev). I cannot stress enough how much your use case does not require it; it would either add critical points of failure or likely lose performance.

I would, however, look into using at least a SATA SSD for your apps (Plex) and any possible future VMs. It really sucks to run those things on HDDs.