Mooglestiltzkin's Build Log: Truenas build recommendation am5 2024?

where i am from they have rip off prices.

i had to order a tplink eap-773 abroad from the us, that was FREE shipping, and even after tax, it came to HALF the price of what they were selling over here. It’s terrible.

Thats the situation i have to deal with over here :sweat_smile:

tldr: no we dont get good deals over here :cry:

there r some things i wont order overseas. example: i once ordered a bunch of 4tb hgst nas drives from ebay (italy). a few years later one drive died, but no warranty. rip.

well there is a fan slot in the center of the case that is unpopulated. not sure if i should install a noctua fan or just leave it be. will see how it goes later when i check temps :thinking:

well more or less. ty for asking :blush:

plan a: if i go with my plan for the amd 7600 with a MAG B650 TOMAHAWK WIFI, i know that will work for my setup, no problem. its still within my budget.

plan b: go with the am4 leo gigabyte motherboard and reuse my 5600x. this is where most of my confusion still lies: whether my sfp+ card will work, and how to add m.2 nvme to it (which pcie card to get? does that setup have any problems? no idea). too much uncertainty for me tbh :sweat_smile: also i have to deal with overseas ordering, warranty, etc. i rarely order overseas unless i have to. too much trouble to deal with :cry:

The only thing ive ordered so far is the case.

rest of the parts ive just added to list for now and checking on prices.

i was going through the chat logs; this is what the seller said last time:

Because TS-653B only has a PCIe x2 interface and can only run 5G

I saw that others have also managed to install an X520 network card in a QNAP TS-653B, but they used a DAC and only got 5G speed

:thinking:

so u said that even though the 10gtek listing says the card only supports x8 and x16 slots, u showed the card will fit in that x4 slot.


contrary to what the listing said, urs fit. why is the pcie slot like that? (open-ended, to allow bigger pcie cards to fit?) i’m not familiar with that.

from the leo specs it only says this

Expansion Slots
Slot_6: PCIe x16 (Gen4 x16) slot, from CPU
Slot_4: PCIe x4 (Gen4 x4) slot, from CPU

it did not mention anything about this.

also, another reason: i was not able to zoom in on the picture in my browser. so how was i supposed to know :sweat: not familiar with this board.

it fits, but will it work? cauz the 10gtek picture is telling me something else. also, if it does work, will it be able to do 10g speeds in that slot?

because when i tried something similar, the seller told me the speeds get crippled :grimacing:

Generally speaking PCIe cards work fine in slots with fewer lanes, as long as you can make it physically fit. Some slots are open-ended to take bigger cards, some slots might be physically x16 but have fewer lanes available (or share lanes with other slots), and some people just modify their slots to be open-ended.

https://www.reddit.com/r/unRAID/comments/gyivxz/pcie_x8_sas_card_in_x4_slot_any_experience/
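side note: once a card is actually installed, `lspci -vv` on Linux prints the negotiated link state on a `LnkSta:` line, so you can see what width and speed the slot really gave you. a minimal sketch of parsing such a line (the sample values are made up for illustration, not from any board in this thread):

```python
import re

# Illustrative "LnkSta" line in the style of `lspci -vv` output; on a real
# system you'd capture it with e.g. `lspci -vv -s 03:00.0` and look for it.
lnksta = "LnkSta: Speed 5GT/s (ok), Width x4 (downgraded)"

# Pull out the negotiated per-lane speed and the negotiated width.
speed = re.search(r"Speed (\S+)", lnksta).group(1)
width = re.search(r"Width (x\d+)", lnksta).group(1)
print(f"negotiated link: {speed} {width}")  # → negotiated link: 5GT/s x4
```

"(downgraded)" in the real output just means the link trained at fewer lanes or a lower speed than the card's maximum, which is exactly what happens when an x8 card sits in an x4 slot.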

:exploding_head:

Ok so that answers that. But is there any con to doing this with the sfp+ 10gtek card? will it impact 10g speeds? that is what i’m not sure about

anyway, reading through the thread to check whether i missed anything

B550 only has enough PCIe 4.0 lanes for one PCIe x16 slot (most likely for the GPU) and one PCIe 4.0 x4 NVMe for an SSD. All of the other lanes are PCIe 3.0. With X570, everything is 4.0.

https://www.reddit.com/r/Amd/comments/hq8l64/can_someone_explain_pcie_40_lanes_to_me_b550_vs/

:thinking:

The platform also comes with four SATA. We are seeing fewer SATA ports in this generation as a clear trend.

newer leo got downgraded :grimacing:

No. PCIe lanes are PCIe lanes; the cards we have discussed are passive risers which rely on bifurcation by the motherboard and convert one kind of slot into another: This is totally transparent to ZFS. No extra controller is involved, contrary to “PCIe SATA cards”.

This. It’s not a bad card actually, it just solves a different problem (fitting M.2 drives to an old motherboard which does not have such slots), and solves it in a different way (PCIe x4 slot to M.2 x4 for one NVMe, plus holding one SATA M.2 drive, cabled to a motherboard port; and power for all from the PCIe slot).

Cooling is about moving air where it matters. For a NAS that’s on the drives, so I’m not sure that forcefully pushing air to the CPU area while bypassing the drive cage will help. You’ll have to test.

Yes, just that. Consumer boards do that with “x4 electrical in x16 mechanical slot”; servers tend to do that just by opening the back of the slot.

4 lanes of PCIe 3.0 is 32 GT/s; with PCIe 2.0, like your card, 20 GT/s. Still enough for a 10 Gb/s link.
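the arithmetic behind that figure, as a sketch. raw GT/s isn't all usable: PCIe 2.0 uses 8b/10b encoding (80% efficient) while PCIe 3.0 uses 128b/130b (~98.5%), so an x4 slot still clears 10 Gb/s comfortably on either generation:

```python
# Usable bandwidth of an x4 slot vs. a 10 Gb/s link, per the figures above.
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding. PCIe 3.0: 8 GT/s, 128b/130b.
def usable_gbps(gt_per_lane: float, encode_eff: float, lanes: int = 4) -> float:
    return gt_per_lane * encode_eff * lanes

gen2 = usable_gbps(5.0, 8 / 10)     # 16.0 Gb/s usable
gen3 = usable_gbps(8.0, 128 / 130)  # ~31.5 Gb/s usable

for name, bw in [("PCIe 2.0 x4", gen2), ("PCIe 3.0 x4", gen3)]:
    print(f"{name}: {bw:.1f} Gb/s usable -> 10 GbE ok: {bw > 10}")
```

so even a PCIe 2.0 card in a gen3/gen4 x4 slot has headroom for a single 10 Gb/s port; it only gets tight with dual-port cards pushed simultaneously.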


yeah, the youtube video i linked was explaining bifurcation in detail. still digesting it. so basically the chipset/cpu has so many pcie lanes, and those have to be shared out among the pcie slots and other stuff.

for the m.2 addon card: in that youtube example, 1 m.2 ssd was x4, and it had 2 of them, so x8 lanes. then u need bifurcation for that to work (so id have to check if the mobo supports that or not. but seeing as u recommended it, this shouldnt be an issue for this board).

ok ty thats what i wanted to know.

with that settled, now i got to figure out which m.2 ssd x4 addon card to go with this leo (doubt they make one with just 3 slots, since thats all i need).

So now the current plan

reuse 5600x cpu.

get leo board (am4) *still checking price and shipping with seller.

ecc ram not sure. still figuring out where to buy :cry:

:eyes:

leo specs

Memory

4 x DIMM slots
DDR4 memory supported only
Dual channel memory architecture
*ECC UDIMM 1Rx8/2Rx8 supported
Non-ECC UDIMM 1Rx8/2Rx8/1Rx16 supported
Total capacity up to 128GB
Memory speed: Up to 3200/2933/2667/2400/2133 MT/s

*ECC support is only available on AMD Ryzen™ Desktop Processors with PRO technologies.

i checked but cant find ddr4 ecc udimm. most of whats available is registered ecc, and used at that :smiling_face_with_tear:

https://www.reddit.com/r/HomeServer/comments/12leufy/is_it_worth_it_to_get_rdimm_instead_of_udimm/

ic that makes sense. thats why u had extra lanes to spare. i was worried there wouldn’t be enough.

:+1: good video covering the mobo.

but few questions arise from it.

he said if the mobo ships with an older bios, it might not boot up with ur cpu. so how do u update it then?

@fredwupkensoppel8949

Right I’ma save everyone here a day of troubleshooting: If your processor isn’t supported out of the box (but is compatible with the latest firmware), you need to update the IPMI first using the “rom.ima_enc” file you get from the .zip file on the support page. Yes, I know the manual says something different. It performs a check first, which fails when the file ain’t proper. You can try that using the .bin file the manual wants you to use. Anyway, only after upgrading the IPMI can you update the UEFI to support your processor. Before that, the IPMI will misidentify basically every component. Don’t worry about that, just update the IPMI and it’ll be fine.

If you only use two sticks of DRAM, from the perspective of the CPU socket: Leave one slot free, populate the next, leave the next one free, populate the outer one. Single stick of RAM: Leave a slot free, populate the next (still counting from the perspective of the CPU socket).

Also, there’s one Ethernet jack for the IPMI and two separate ones for the actual system, so check which one you’re plugging into. See 5:12.

Oh and I think the front panel IO might be mislabeled, but that could also be my cheap-ass case. You possibly need to split one of your connectors (think it was the power LED connector) into two single Duponts. See 3:14 in the video.

Regarding 10:50: You do not need a compatible CPU to flash the BIOS if you’re using the up-to-date IPMI. It’s a great board imo, especially considering the price, but the documentation isn’t “consumer-ready”.

next: the bios has no profiles, so i have to set things manually. but then he said you cant set the ram voltage. is that going to be an issue?

:face_with_monocle:

there was a pkfail bios update for the board

found this for sale

SK Hynix 16GB PC4-2400T 2666V DDR4-2400MHz 2666MHz 288Pin 1.2V ECC UDIMM Server Memory RAM

does the 2666MHz matter? cauz the mobo supports up to 3200


so 2 of these and run them in dual channel?

its probably used. so ill have to do a memtest

Not my build but someone elses, but it would look roughly like this when installed. i plan to do something similar. i see that he placed his on a 1u shelf at the bottom (which i have), and i plan to do it like that too. not sure if he uses rails, but with the shelf it will at least be able to carry the weight.

someone was asking whether this silverstone has issues or not. this guy brought up some of the cons, mostly cabling, and that it cant handle big graphics cards.

still waiting to hear back from leo seller :cry:

this review is impressive, benchmarking the power consumption on the board. i wanted to know for the b650 tomahawk but he instead did it for the b550 :smiling_face_with_tear:

anyway the results were

IDLE: 15.6 W

on load, max: 34.2 W with the cpu set to a 95 W power limit.

that looks alright. wish there were results like this for the b650, so i’ll assume it would be roughly the same?

checked the kwh still within budget, seems ok?
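rough math for that check, as a sketch. the tariff below is a placeholder, not my actual rate; substitute your local price per kWh:

```python
# Monthly running cost at the ~15.6 W idle figure quoted above.
idle_watts = 15.6
hours_per_month = 24 * 30
rate_per_kwh = 0.15  # placeholder tariff, swap in your local rate

kwh = idle_watts * hours_per_month / 1000  # watt-hours -> kWh
cost = kwh * rate_per_kwh
print(f"{kwh:.1f} kWh/month, ~{cost:.2f}/month at {rate_per_kwh}/kWh")
```

even doubling that for drives and load spikes, an idle-efficient board keeps a 24/7 nas cheap to run.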

While Ryzen 7000 series consumed roughly 20w more than 12600k or 13600k, the author stated all 7000 series in this test were paired with a very high end power hungry X670 Extreme chipset. Still, unless someone does another system idle power comparison for 7000 series using different set of AMD motherboards, we won’t know for sure.

https://www.reddit.com/r/Amd/comments/10evt0z/ryzen_vs_intels_idle_power_consumption_whole/

:face_with_raised_eyebrow:

old thread but u can have it… for that reason i downgraded from 5800x to 5700g, which is monolithic:

5700g vs 5800x (both PBO disabled and at roughly 1.1V 4GHz). Sidenote: power consumption goes through the roof with PBO.

idle: 5w vs 27w

youtube: 10w vs 32w

fallout76: 18w vs 45w

and the best part of the 5700G vs 5800X: i can connect my 2 displays to the mainboard and my GPU goes full idle while working. And when booting up windows and starting games (thanks to hybrid graphics), the dGPU still does the computation for games despite the displays being connected to the MB.

with stuff like microsoft teams, my 1080 Ti would often sit at 70W when used on its own

unrelated side note:

:face_with_raised_eyebrow:

7600 am5 build for nas. pbo probably disabled, and also underclocked.

he also settled on the ASRock B650M-HDV/M.2, which i initially looked at. it was also rated as a top board by hardware unboxed

4x sata and 2x m.2 nvme.

in the video he also demoed how to install the silverstone rails. they didn’t come with my case, so i’m going to have to order them some other way :thinking: but the shelf will do for now.

demo for the m.2 nvme riser. an old video

maybe look for a riser like the one shown at the top of the video.

for heatsink i found this


though im not sure if it will fit in that tight space on the matx leo board :thinking: but the reviews say this can keep the m.2 ssd temps down to 40-50c

doing more searching on the MC12-LE0, some concerns.

  1. if the bios is not up to date, or even the ipmi is not up to date, im not sure how to update the bios to support the 5600x cpu. there is a thread discussing this problem. to be fair, once u have that sorted out, it’s smooth sailing.

  2. seller didnt reply back yet with the details im waiting on :sweat:

  3. i was looking for completed nas builds using this board, because i wanted to see how people set up their m.2 nvmes in it, including a riser. there is an m.2 slot right beside the pcie x16 slot to be used for the m.2 riser. with the heatsink i was looking at, if you put it on, wouldn’t it block the heatsink for the m.2 slot thats on the board? thats why i wanted to use other peoples builds as reference for this.

  4. would a seasonic focus + gx 750 psu work with this motherboard?

https://www.reddit.com/r/buildapc/comments/174enpf/is_it_worth_getting_atx_30_psu/

Ok i already ordered.

leo seller didnt get back to me, and im a bit doubtful i can get that board to work tbh. it’s good for nas and good value, just im not confident i can navigate around its quirks :smiling_face_with_tear:

settled on this, already ordered and on the way.

MAG B650 TOMAHAWK WIFI + 7600 am5 cpu

silverstone 4u case

KINGSTON FURY BEAST BLACK 32GB (2*16GB) DDR5 6000MHZ CL30

SEASONIC Focus+ GX Series 80+ 750w (checked the mobo compatibility and also confirmed with seller this works)

scythe mugen 6 black edition single tower (i wanted the single fan, but no stock, so went with this. it was also cheaper than the noctua)

cooleo m.2 heatsink (one of the m2 ssd slots does not have a heatsink, so using this for that)

Silicon Power NVMe PCIe Gen4x4 M.2 2280 SSD UD90 500gb (didnt cost too much more than 256gb)

Silicon Power NVMe PCIe Gen4x4 M.2 2280 SSD US75 1tb x2

ill order the seagate x12 12tb x4 at a later date. i just want to install truenas first before i commit (i’m already balls deep in … :rofl: )

inspired by Greg Salazar’s build (he made 2 videos: 1 using the same case i ordered, and the 2nd using a ryzen 7600 cpu. i went with a diff motherboard since i wanted more sata ports, but his pick was good as well). It’s not the perfect build, but it’s what i can work with without too much hassle. i don’t think i did too badly i hope. plz be kind :sob:

with the youtube videos i’ll try to install this myself :sweat:

i want to thank etorix and every1 else for their useful tips. i learned a lot. i wasn’t even aware ipmi was a thing before this :sweat_smile:

i’ll have to order the 10gtek sfp+ pcie card, an sfp+ transceiver, and om5 optic cable once the seller is back from their vacation :sweat_smile: till then i can use the 2.5gb port (yes it might be an awful ethernet port, but once my 10gtek arrives, i’ll be using that full time like i am on my qnap. no big deal)

not much experience with proxmox, and it would probably just add another layer of complexity, so ill stick to installing truenas bare metal for dedicated use. i’m aware that with proxmox you don’t limit yourself to truenas; you can also install other stuff like vms, e.g. a linux os. with my rack build that may be possible. ill look it up another time, but for now im sticking to what i know.

did ample research, and others are also using something like this, if not the exact same setup

The Ryzen 5700 plays quite nicely for me.
I disabled PBO and put it into eco mode so its running very efficiently.
My whole rack consumes about 160W.
Server:
5 HDDs 16TB each,
4 SSDs,
Intel X550-T2
And the 5700 + 128GB DDR4 ECC consuming probably less than half of that.
My Netgear 12 port 10GbE PoE switch, the Router and UPS using the other half.
Is there a command to check power consumption of the CPU?
I would assume the switch uses the most significant amount as it’s fully populated and running 4 Access Points off of PoE.

joe, i noticed in the old forum u mentioned ur interest in a ryzen 7000 series build for truenas. did u ever proceed with that? if so how did it go?

Anything that fits the size of the case, and your cooling model.

  • With fan (e.g. Asus Hyper M.2)
  • With a big heatsink
  • Plain (each drive brings its heatsink if desired)
  • Low-profile (2 drives on each side)

This is just traces on a PCB and a few components to ensure signal integrity, so basically anything should go. And this is still applicable to any other AM4/AM5 board.


That is my NVME System in my signature link below. It turned out very well.


:thinking:

i changed the 500gb ssd to 256gb. i checked google/reddit and every1 points to this, like you all mentioned. also saves money. if it dies, it dies, ill just get another. ill take this opportunity to test how long it lasts as a truenas boot drive.

intel optane looked interesting, but i opted out since the ones i found were second hand. i heard for swap u needed slightly more than 64gb or something (for a 32gb ram truenas setup we dont need to enable swap, right?). but the 100gb optane was a bit pricey. gave up on that and settled for the 256gb m.2 ssd.

for helping me figure out which psu w to go with

https://www.reddit.com/r/HomeServer/comments/1dei2py/will_a_750w_platinum_psu_use_more_power_than/

We actually have a resource for sizing power supplies:

As for boot, I happily use 16 GB Optane M10 with CORE (bought for 9.99€ a piece).
The 64 GB variety is more rare and pricey.
100 GB Optane would be DC P4801X, and such fine DC drives are best kept for SLOG if this feature is ever needed.
The first quality of a TrueNAS boot drive is to be cheap.


Pretty sure swap is dead by default on latest versions of scale due to not playing nice with zfs arc on linux

Pretty sure @mooglestiltzkin meant “L2ARC” rather than “swap”. In which case, he’s correct: At least 64 GB RAM recommended. But first check with arc_summary whether L2ARC would actually be useful.
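a minimal sketch of that check: `arc_summary` prints a cache hit ratio, and a consistently high one suggests L2ARC would add little. the sample line below is made up for illustration (real output formatting varies between versions):

```python
import re

# Made-up line in the style of `arc_summary` output; on a real box you'd
# run `arc_summary` and read the ARC hit ratio yourself.
sample = "Cache hit ratio: 98.7 %"

hit = float(re.search(r"([\d.]+)\s*%", sample).group(1))
# A very high ARC hit ratio means most reads are already served from RAM.
print("L2ARC likely unnecessary" if hit >= 95 else "L2ARC may be worth testing")
```

the 95% threshold here is just an illustrative cutoff, not an official recommendation; the point is to look at the real hit ratio before buying an L2ARC device.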

In this context, I think he meant swap.

TrueNAS used to offer to format a boot disk with swap if it was 64GB or larger. I think.

Maybe it still does. Installs blur together :wink:


Unless he was talking about the Swap file on the boot drive. 64GB is the magic number.


i was going through the truenas install again, prepping for the new build.

during the truenas install it asks if u want to allow a swap partition, yes or no.

it means if there isnt enough ram, u start dumping to hdd/ssd. we all know thats not ideal. if you have enough ram, it would never trigger. but in the slim chance u do run out of ram and swap is not enabled, truenas will simply crash.

wasnt sure if i needed to have swap on or off x-x;

then i read that for storage space they recommend something a little over 64gb to be able to use the swap. i was checking just how low u could go.

because i was looking at those intel optanes. the 100gb optane is hard to find and not particularly cheap at that price.

gave up on that and opted for a cheap 256gb m.2 nvme ssd. could have bumped to 500gb but im not sure thats money well spent, so i stuck to 256gb, which some said was alright.

Don’t stress on the boot drive, just periodically make a backup of your config. Swap is generally not recommended, as arc should auto-shrink if memory is needed, & there was something or other about swap & arc on linux not playing nice after the 50% max-arc restriction was lifted.

1 Like