Mooglestiltzkin's Build Log: TrueNAS build recommendation AM5 2024?

yeah i wasn't too sure how ARC worked in regard to the swap issue. now i know, thx :}

i'm going to have to study how to install these parts. i more or less know, but one area i'm a bit fuzzy on is the psu wiring :grimacing:

can i do raidz1 for 12tb x4?

i know raidz1 is fine for 4tb x4 (i've done this with raid5 for many years without issue). but i'm new to 12tb :thinking:

i do keep backups… so even if something did happen i would probably be alright.

This is what i normally use
https://magj.github.io/raid-failure/

the 12tb seagate exos x12 drives have a URE rating of 1 in 10^15

so input 4 drives, raid5, and that URE rate:

75% chance of a successful rebuild (i.e. no URE while reading the surviving disks)

seems ok :face_with_monocle:

for 4tb drives it was 91%

and if that fails, there is always the backup. so raidz1? then i also get more usable space
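
for anyone wondering where those percentages come from, the calculator is basically doing this math (assuming 1 URE per 10^15 bits read, and pessimistically treating a single URE as a failed raid5 rebuild):

P(rebuild reads everything cleanly) ≈ (1 - 10^-15)^(bits read) ≈ e^(-bits / 10^15)

12tb drives: 3 surviving disks x 12 x 10^12 bytes x 8 ≈ 2.9 x 10^14 bits, so e^(-0.29) ≈ 75%
4tb drives: 3 x 4 x 10^12 bytes x 8 ≈ 9.6 x 10^13 bits, so e^(-0.096) ≈ 91%

(zfs is also gentler than classic raid5 here: a URE hit during a raidz1 resilver typically costs you the affected blocks/files, not the whole pool.)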

Losing two drives in a Z2 vdev is incredibly unlikely to lose you an entire pool, but it might be enough to make existing on-disk corruption permanent: once you lose all parity, any remaining checksum errors are permanently unrepairable. That's unlikely to lose you the pool, because metadata blocks are also stored redundantly, above and beyond device-level redundancy. But it can easily leave individual files permanently corrupt.

This is why regular scrubbing is so important. Without the scrubs, running a vdev with its redundancy stripped away is vastly more dangerous… especially to data you access rarely, but it's still important.
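
(fwiw, TrueNAS should set up a scheduled scrub task for the pool automatically; worth double-checking under Data Protection → Scrub Tasks. you can also kick one off and watch it from the shell, where "tank" is just a placeholder pool name:)

zpool scrub tank     # start a scrub of the pool
zpool status tank    # shows scrub progress, and later "scrub repaired 0B ... with 0 errors"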

https://www.reddit.com/r/zfs/comments/12aurgh/raidz1_queston/

:thinking:

12tb x4

32.7 TiB (raidz1 / raid5)

21.8 TiB (raidz2 / raid6)

4tb x4

10.9 TiB (raidz1 / raid5)
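
(the reason those come out smaller than a straight 3 x 12 = 36 is the TB-to-TiB conversion plus parity:

12 x 10^12 bytes ≈ 10.9 TiB per 12tb drive (a 4tb drive ≈ 3.6 TiB)
raidz1 of 4 (1 parity): 3 x 10.9 ≈ 32.7 TiB
raidz2 of 4 (2 parity): 2 x 10.9 ≈ 21.8 TiB
4tb raidz1: 3 x 3.6 ≈ 10.9 TiB

actual usable space comes out a little lower still after zfs metadata and padding overhead.)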

so instead of tripling my usable space, it will only double from my current setup.

i guess for my current storage needs that's fine. i'm just wondering though if raidz1 is possible with a 12tb x4 configuration. any horror stories with that setup?

I know the worst-case scenario is the resilver fails, and then you're expected to go to backup.

i know the rule of thumb for switching between z1 and z2 is: the more drives you use, and the higher their capacity, the more you should lean z2. but so far i've stayed in the z1 zone, so i never reached the point of knowing where the limit is :sweat_smile:

https://www.reddit.com/r/zfs/comments/12aurgh/raidz1_queston/

https://www.reddit.com/r/zfs/comments/xvt0uq/raidz1_vs_raidz2/

:hushed:

good explanation on the ecc situation. while doing my research i found this

https://www.extremetech.com/computing/ddr5-prices-expected-to-skyrocket-as-sk-hynix-plans-dram-price-hike

https://www.reddit.com/r/truenas/comments/z38gc4/is_ddr5_ondie_ecc_enough_for_zfs/

that said the 32gb ddr5 kit was within my budget

KF560C30BBEK2-32

based on pcpartpicker the price for it is lower than it has ever been :face_with_monocle:

a few years back, if u were trying to get a 32gb kit you'd be crying blood :rofl:

yes i know i should have tried for ecc ram, but i couldn't find a good source for it at a price i was willing to pay. if it becomes available later, i can get it then and move this ram to my desktop pc. that's why i went this route ^^;

i almost went for corsair vengeance. when i had an issue with corsair ram before, they shipped me a replacement at no cost, no questions asked. very nice :blush:

It is hard to find this RAM in bulk at reasonable prices with a lifetime warranty claim. Supermicro is out of stock for 16GB tested name-brand sticks. Similar name-brand Hynix RAM is available on eBay in large quantities from tm_space for $63 per stick with no lifetime warranty claim. However, the price from NEMIX is currently $35 per stick on Amazon and Newegg, and large batches can be purchased. OWC also provides equivalent RAM on Amazon for $39 per stick, available in large batches. If OWC provides better customer service this is a path to consider (I may try a couple of these sticks). Several other sites that offer name-brand RAM sticks do not have enough supply to satisfy my use case (if you find one please share).

this guy sells it, but i heard there is no warranty. seems pricey too (maybe that is to be expected, i guess). considering my past experience trying to claim warranty through ebay, not too keen on that :sweat:
https://www.ebay.com.my/str/tmgalaxy?_trksid=p4429486.m3561.l161211

omg… just heard from 10gtek they r shutting down their shopee platform store. now i'll have to order through their main website instead :cry:

no idea what happened there, oh well. i can use the 2.5GbE nic temporarily, till i can order the sfp+ pcie card. i own 2 of those already and liked their performance and reliability with my switches.

only downside: they don't support recoding like the fs.com transceivers do (though that requires an additional purchase of their recoding box, and only makes sense if u plan to move transceivers to different switches often)

seems like i bought the last RM41-H08 in the store :sweat_smile:

alternative interesting nas cases. but the reason i didn't get something like that: no local availability, and i really wanted a rack case for my next nas build (this will be my first time doing this type of build, by the way)

Some quirks for the msi b650 tomahawk wifi

https://www.reddit.com/r/buildapc/comments/1alj6hk/msi_mag_b650_slow_boot_time_issue/

slow boot times of roughly 90 seconds. someone even mentioned a 4-minute boot, which is nuts.

Booting on the Tomahawk took something like 90 seconds with 32 GB of Kingston Fury when I first set everything up, but after updating the BIOS and activating MCR this has been cut down to roughly 30 seconds, without any fast boot windows BS. Could it be faster? Sure. But not by much, really.

https://www.reddit.com/r/buildapc/comments/1alj6hk/msi_mag_b650_slow_boot_time_issue/

possible solution (you may want to update to the latest bios as well)

Enable Memory Context Restore in BIOS. If you run into issues after enabling it, or right before you want to swap out parts, disable it or perform a CMOS reset.

Well, it's a well-known fact that you don't disable Power Down when enabling Memory Context Restore. those two features work together; it was even confirmed by an amd employee on reddit a long time ago.

Been running for a while now with zero issues with MCR. But it is allergic to Powerdown Off.

Either MCR On with Powerdown On
Or MCR Off with Powerdown Off

Boot time is long enough that honestly MCR Off is kinda unworkable depending on what you do with the PC

just an fyi: since AsRock does have MCR enabled by default, and I have used custom timings that lostswede helped me with, just wanted to give you an update that I have had and still have 0 issues with my rig. So, apparently AsRock is the only manufacturer to figure out MCR? i know before you said to keep it disabled, and that probably does apply to other brands, but yeah, just wanted to give you an fyi, my rig is still flawless. AsRock is the AM5 champion this round I guess

mixed comments on whether it helped or not.

so how do the other brands fare on boot times?

based on what I've heard, Gigabyte boards are the best for boot time in the 600 board series, Asus and Asrock are in the middle, and MSI is the slowest

anyway i don't think this is a major issue for me. just pointing it out in case anyone follows in my footsteps (which i don't particularly recommend, since there are probably better truenas builds out there). well, i mentioned it :sweat_smile:

i did try looking for something from asrock first, but nothing quite fit. my 2nd pick was gigabyte, same story. then my third choice was the msi, cause it met my requirements.

one big downside is it does not have a pcie gen 5 x16 slot; it's gen 4 instead. and afaik the m.2 nvme slots are all gen4 as well. this wasn't a deal breaker for me, but it might be for someone else.

the B650 tomahawk does not have a POST code display like higher end boards; instead it has debug LEDs that are color coded: red (cpu), yellow (ram), white (gpu), green (OS).

The debug led colors are documented in the motherboard manual.

https://www.reddit.com/r/MSI_Gaming/comments/1fsyx4l/msi_b650_tomahawk_wifi_has_problems_with_ram/

Some possible gotchas with that case… and ATX boards.

2 Likes

ty, this is exactly what i need. all the steps for me to consider when installing.

he is using an atx motherboard, so that is a good reference. i did check the case specs and they say it supports atx. now i have a clearer idea of the fit.

in one of the videos there was a wire for an alarm if someone opens the case. not planning to use that :sweat_smile:

off to a good start he almost body slammed that huge case onto the motherboard :open_mouth:

here are the other 2 youtube references for installing the case i ordered

these reference the case

this one is specifically covering the msi b650 tomahawk wifi motherboard

and this for the psu (if i do this wrong and plug the wrong cable into the motherboard etc… boom… :grimacing: )

For my psu, the product page has pics indicating which cable is for what, which helps. refer to your own psu maker's site for similar info.
https://seasonic.com/focus-gx/

so i have to go back and forth between these guides to get a clear idea :sweat_smile:

1 Like

Yeah, I've been figuring out if I want to go with EPYC 700[1-3], or a Siena EPYC 8004.

Basically, 128 lanes or 96. Siena is pretty much Zen 4c (ie 9004) in an SP3-sized socket… (ie same as EPYC 7003)

I would like at least PCIe4 and faster single core clocks.

Or there's ThreadRipper Pro :slight_smile:

(which I think may actually be the sweet spot, with lanes and single core clock speed)

I was interested in that case, but it seems it's too small to actually use with large boards.

With the µATX boards I think I would go with the CS382.

Meanwhile, with RaidZ expansion… you could add another 12TB later… Or use an M.2 SATA expansion card. I know they get a bad rap, but when you need to add just a handful of SATA ports, I think they can be a good solution.
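
(for reference, raidz expansion is only in the newer releases (OpenZFS 2.3 / the ElectricEel-era SCALE builds), and under the hood each added disk is a single attach. names below are placeholders, and on TrueNAS you would normally do this from the pool's vdev screen rather than the shell:)

zpool attach tank raidz1-0 /dev/disk/by-id/ata-NEWDISK   # grow the raidz1 vdev by one disk
zpool status tank                                        # shows the expansion/reflow progress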

2 Likes

There's been a lot of discussion surrounding NVMe in this thread. I've been foolin' with this stuff for a while now. Figured I'd post some musings…

The PLX-chip based card with 4 Samsung 9A1 SSDs configured as 2x mirrors (no compression, no dedupe) resulted in an average read of 1091 MB/s using my arbitrary FIO script. Going to 8 SSDs netted 2412 MB/s.
These results are from 2 years ago, running on Bluefin.


But that testing was done with AMD EPYC Rome, which has 8 memory channels. At ~25GB/s per channel (DDR4 3200) it has a maximum theoretical bandwidth of ~200GB/s. AM5 is faster per channel, but it has only 2 memory channels. At DDR5 6000 we're talking ~45GB/s per channel, or about 90GB/s total.
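
(quick sanity check on those numbers, since a DDR channel is 64 bits = 8 bytes wide:

DDR4-3200: 3200 MT/s x 8 bytes ≈ 25.6 GB/s per channel, x 8 channels ≈ 205 GB/s peak
DDR5-6000: 6000 MT/s x 8 bytes ≈ 48 GB/s per channel, x 2 channels ≈ 96 GB/s peak

those are theoretical peaks; real-world throughput lands a fair bit lower.)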

More on the previous work I've posted on the topic of NVMe:
TrueNAS Scale NVME Performance Scaling | TrueNAS Community

I'm currently also doing a new build. I've begun repurposing my older AM4 gaming PC: a Gigabyte Aorus Elite X570 board with a meager 32GB of (non-ECC) DDR4 3200 RAM. I just performed a slight refresh to a 5600G so I could save a PCI-E slot that would otherwise have been wasted on a GFX card. I currently have 4 Optane 905Ps (mirrors) plugged into one of these Linkreal PCIe Gen3 PLX cards.

This system has SCALE on it and is primarily just a dumb block storage device that serves my desktop's "D" drive.

While not apples-to-apples with my old data… It's clear that software has come a long way in the past 2 years.

Also gotta give AMD credit for better CPU IPC/single-thread performance. I'm sure that's at play here… even with the 2-channel memory configuration of desktop platforms limiting me. bw=2399MiB/s

Using my same old arbitrary FIO script:
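
(not the exact script, but it's roughly this shape: a 16-job random read/write mix at iodepth 32 for 60 seconds. block size, file size and path below are guesses reverse-engineered from the output, so treat it as a sketch:)

fio --name=randrw --directory=/mnt/optane_vm/fiotest --ioengine=libaio \
    --rw=randrw --bs=128k --size=8G --numjobs=16 --iodepth=32 \
    --time_based --runtime=60 --group_reporting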

randrw: (groupid=0, jobs=16): err= 0: pid=428919: Thu Nov  7 23:38:34 2024
  read: IOPS=19.2k, BW=2399MiB/s (2516MB/s)(141GiB/60112msec)
   bw (  MiB/s): min=  604, max= 7947, per=100.00%, avg=2404.39, stdev=75.82, samples=1824
   iops        : min= 4828, max=63572, avg=19228.09, stdev=606.56, samples=1824
  write: IOPS=19.2k, BW=2403MiB/s (2520MB/s)(141GiB/60112msec); 0 zone resets
   bw (  MiB/s): min=  607, max= 7895, per=100.00%, avg=2408.09, stdev=75.82, samples=1824
   iops        : min= 4856, max=63158, avg=19257.83, stdev=606.52, samples=1824
  cpu          : usr=1.68%, sys=0.32%, ctx=633595, majf=0, minf=584
  IO depths    : 1=0.9%, 2=1.8%, 4=4.0%, 8=21.3%, 16=57.0%, 32=15.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=96.6%, 8=0.3%, 16=0.5%, 32=2.6%, 64=0.0%, >=64=0.0%
     issued rwts: total=1153644,1155429,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
   READ: bw=2399MiB/s (2516MB/s), 2399MiB/s-2399MiB/s (2516MB/s-2516MB/s), io=141GiB (151GB), run=60112-60112msec
  WRITE: bw=2403MiB/s (2520MB/s), 2403MiB/s-2403MiB/s (2520MB/s-2520MB/s), io=141GiB (151GB), run=60112-60112msec
root@prod[/mnt]# zpool status optane_vm
  pool: optane_vm
 state: ONLINE
  scan: scrub repaired 0B in 00:12:10 with 0 errors on Sun Oct 27 00:12:12 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        optane_vm                                 ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            0a3ec38c-61bc-4c92-989e-bb886b22c241  ONLINE       0     0     0
            e33c70aa-009f-4032-8ae6-0465156ad9ca  ONLINE       0     0     0
          mirror-2                                ONLINE       0     0     0
            8a08efe1-8f61-4577-9180-0ed287029443  ONLINE       0     0     0
            a9db22ec-c458-4e1e-b1f9-45b8667e07ec  ONLINE       0     0     0

errors: No known data errors

Have fun with your builds :slight_smile:

2 Likes

well let us know how that goes. would be a very interesting read :blush:

hope you tally your budget. my wallet is crying :sob: but it's a long-term investment, oh well.

i've been gradually making progress on my homelab.

started out with a pc and no backup (yes, shocking. i didn't know better back then). then a nas (my first, a ts-509), then eventually a ts-877, then a server rack for my switches > 10G sfp+ fiber > and now adding a 4u nas case (where i am right now)

In all seriousness, TR 2950X/X399 sounds like an option to consider too, if you can find a good deal.

But EPYC Rome (and older TR) suffers in terms of single core performance, even when compared to my measly little 5600G :wink:

2 Likes

i'll just chime in on this cause i did some research on the subject.

the 5600g does not support ecc, but the 5000-series G PRO parts do. But if ecc isn't needed then that's fine. it comes with an igpu, yes?

then for am5, Wendell recommended the 7900, which has lots of cores and threads if you're not going xeon or epyc but want ryzen instead. but the reason i didn't go down that route is that my use case isn't as demanding, so a 7600 is fine.

for xeon/epyc, no comment since i have no info on that :sweat_smile:

if u r chasing down a motherboard for ryzen with ipmi, they exist. etorix pointed out a Gigabyte MC12-LE0 (AM4, MATX).

and joe pointed out the ASRock Rack B650D4U (AM5, MATX)

my case just arrived. the box is huge… i was worried, but when i opened the box the case itself is actually smaller. yes, it fits in my rack :blush:

the case in the box was bubble wrapped to the hilt. no dents, i'm glad to report.

i unscrewed it open and everything looked alright. just waiting for the other parts to arrive before i can put it together.

before then i've got to rewire and relocate my qnap nas :sweat: that's gonna be hell.

1 Like

60 lanes of PCIe3 though.

Probably the bare minimum of lanes that I'd consider, but PCIe4 is a minimum for me.

There are 64 lanes of PCI Express® Gen 3 connectivity in the AMD Ryzen™ Threadripper™ CPU. 4x lanes are reserved for communication with the AMD X399 chipset. Up to seven PCIe® devices may be connected to the system (not to exceed 60 lanes); additional concurrent devices may be supported with a PCIe® clkgen on the port or device. An additional 2x PCI Express® Gen3 lanes within the AMD X399 chipset may be used by the motherboard vendor to provide an additional 4x SATA or 2x SATA Express connections

But I am thinking that 128 lanes is probably more than I need, and I know 40 lanes is not enough :wink:

So, sweet spot for me is probably somewhere between 60 and 96, will need to put some more thought into that :wink:

…

I did build a bunch of Ryzen 3950x systems in the past… they've proven quite competitive :wink:

1 Like

This is true. But I'm not one who evangelizes ECC for every build. I've run many TrueNAS servers on old SFF corporate desktops that you'll often find for less than $100.
e.g.

Having good data hygiene and backup strategies is more important than ECC in my opinion. As an example… people forget, even Google had humble beginnings. Their first servers looked like this…


And let's just say that modern DDR5 is probably far less likely to flip a bit than those monstrosities were xD

Homelab doesn't have to be enterprise grade.

1 Like

In my opinion, you'll find bottlenecks elsewhere in most systems that render the extra bus speed a negligible performance difference in real-world use. Unless your use case is… more exotic than most. That, or we're not talking about servers in your house :stuck_out_tongue:

PCI-E 4 parts never got much widespread adoption… and they are an order of magnitude more expensive than their PCI-E 3 ancestors. PCI-E 5 is unobtanium unless you're buying net-new storage servers, and even then I'd bet you'll still find PCI-E 3 parts in current-generation products from many vendors. Storage servers don't follow the same yearly release cadence that desktops do.

Shoot, for most people the trusty 9211-8i is still very relevant… and that's Gen2 :stuck_out_tongue:

2 Likes

This.

I want to play with 25/100gbe :wink:

1 Like

gosh, ordering direct from 10gtek would cost a bomb in shipping.

ordered from aliexpress instead. for me the shipping was way cheaper (your mileage may vary), probably because the logistics from there are fine.

shame they shut down the local store :smiling_face_with_tear:

*good news, my purchase went through. now just have to wait for the 10gtek :blush:

Looks nice in advertising, but badly designed when you actually try to use it?
That's my experience with Silverstone cases. Never again.

Thanks for the information that their rackmounts are of the same breed as their desktops.

1 Like

in an amazon review, someone posted a comment about what's wrong with this case. i'll just have to find out when i install. this will be my first time though :sweat_smile:

but tl;dr: he said the problem is there's no space to route the cables, so he shoved them somewhere near the psu, cause there is nowhere else for them. and the graphics card length limit means only certain cards will fit.

he goes on to say the drive bay assembly isn't removable. but in another youtube video they showed it is removable, u just have to go through a bunch of screws to do so. heck, the left side can even be swapped for more hotswap trays if you need more hard drives (optional add-on purchase though).

but when i look at his install, despite all that, it looked fine, even if, as he said, it's not as nice as some other cases in that regard.


anyway, there is room for expansion to add more hdds or ssds, assuming your mobo supports that. looked ok for my needs.

realized i made a mistake: the cpu cooler is too tall. any suggestions for what will fit? preferably scythe brand or maybe noctua

Case

CPU cooler height limit:
Height w/ expansion card retainer: 130mm
Height w/o expansion card retainer: 148mm

Scythe Mugen 6

Overall dimensions (WxHxD): 132 x 154 x 132mm / 5.19 x 6.06 x 5.19 inch (incl. fan)

so at 154mm tall it's over the limit either way. :sob: no refunds…

found this

Noctua NH-L12Sx77 (the x77 variant is 77mm tall, so it clears the 130mm limit comfortably)

NH-L12S/NH-L12Sx77 general specifications

  1. Fan dimensions: 120mm x 120mm x 15mm

  2. Max rotational speed: 1850RPM

  3. Max rotational speed with L.N.A.: 1400RPM

  4. Min rotational speed (PWM): 450RPM

  5. Max air flow: 94.2m³/h

  6. Max air flow with L.N.A.: 70.8m³/h

  7. Max noise intensity: 23.9dBA

  8. Max noise intensity with L.N.A.: 16.8dBA

  9. Supply power voltage: 12V PWM

  10. Current: 0.13A

  11. Rated power: 1.56W

  12. Power supply interface: 4-pin PWM

  13. SSO2 bearing

  14. MTTF: 150,000 hours

  15. Package list: 1 x Cooler, 1 x Fan, 1 x Low Noise Adaptor (L.N.A.), 1 x Thermal paste, 1 x SecuFirm2™ Mounting Kit, 1 set x Noctua Metal Case-Badge