Help deciding on a custom build vs used server

Hello guys,

I’m planning on building a custom NAS for use with TrueNAS (probably Community). This will be my first NAS ever.
I’ll need 9x 4TB drives (right now I prioritize redundancy and some other features over raw space) for pictures and edited video footage (the data grows slowly).
It is mostly for storage, backup automation (to external drives), and Plex/Jellyfin.
Aiming at a 2.5 Gbps connection.

With the help of Google and GPT I put together this build:

| Component | Model/Details | Price per Unit | Quantity | Total Price |
| --- | --- | --- | --- | --- |
| Case | Fractal Design Define 7, Black | €180,00 | 1 | €180,00 |
| HDD Tray Kit | Fractal Design FDE HDD Tray Kit Type-B, Black | €21,96 | 2 | €43,92 |
| SSD | Western Digital Red SN700, 250GB, M.2 Gen3 x4 | €65,74 | 2 | €131,48 |
| Processor (CPU) | Intel Core i3-12100 (4C/8T, 3.30 GHz, 12MB cache, LGA1700, 60W) | €116,17 | 1 | €116,17 |
| Memory (RAM) | Micron Server Memory Module, 16GB, DDR4, 3200 MHz, CL22 | €163,94 | 2 | €327,88 |
| Power Supply Unit (PSU) | be quiet! Straight Power 12, 750W, 80PLUS Platinum | €181,98 | 1 | €181,98 |
| Motherboard | ASUS Pro WS W680M-ACE SE | €554,45 | 1 | €554,45 |

Total: €1535.88

I was ready to order, but thought it would be nice to check if cheaper options exist.
If I cut costs on the motherboard and RAM, I get no ECC.
So I checked the local store and found this used server:

Dell PowerEdge R730 SFF

Chipset: Intel C610 Dual
RAID Controller: Dell PERC H730
Number of CPUs: 2
CPU Model: Intel Xeon E5-2680 v4
RAM: 32GB DDR4 ECC Reg.
HDD Slots: 16
Number of PSUs: 2
PSU Power: 750W
Number of HDDs: 2
HDD Capacity: 300GB (2.5" 10000rpm)
HDD Interface: SAS 12Gbps
Hot-plug HDD: Yes
Video Adapter: Integrated
Network Adapter: Dual-port Gigabit + Dual-port 10Gbit SFP+

This thing costs €420.00
They don’t say anything about it being refurbished, so I assume it’s not; just a plain used unit.

What can you say about the second option? How safe is it, statistically, to rely on it?
If the custom build fails, I can either request a replacement or source a spare part, but what about server hardware? I have no clue how to deal with it, and I highly doubt this kind of hardware is widely available where I live.

€1536 vs €420 is quite a difference, but does it outweigh the cons?

The second question:
Could you verify if my choice of hardware is optimal?

P.S.
There’s also the option to buy a new PC (workstation) first and turn my existing PC into a NAS. But then, again, no ECC.

I am a very big fan of used servers removed from corporate use.

My primary systems are all SuperMicro, but that is a preference and not a recommendation (OK, maybe it is a recommendation).

I have been happy with The Server Store and ServerSupply.com for used/refurbished servers and parts. But you have to know what you want and make sure that is what you are getting.

In my opinion, a used server that is a few years old is a better solution for TrueNAS (which is a server OS) than a new non-server. The new system may be new, and have a faster CPU and RAM, but will it make a noticeable difference in your use case? Or would ECC memory, and more of it, and a CPU with more cores get you more usability? Would a used LSI HBA at 6Gbps actually work better for you than a new Marvell SATA controller? (I know the answer to that one from experience; I use only LSI HBAs flashed to IT mode now.)

Servers and server parts are built for a level of reliability that typical desktops are not. They are designed to run 24x7 and have sufficient cooling. They have hot-swap parts where possible and typically have user-replaceable parts where hot swap is not possible.

I also only use Enterprise HDD, those that came from the factory with a 5-year warranty, but I get them used as pulls from working systems from reputable vendors. Sometimes you learn who is reputable and who is not the hard way. I have had very good experience with Western Digital RE series (now Gold), I had bad experience with Seagate ES.2 but very good experience with Seagate ES.3. I have had good experience with HGST (now WD) and bad experience with WD Red but good with WD Red Pro.

P.S. I have little direct experience with DELL servers, so I have no informed opinion on the unit you listed. I have had experience with SuperMicro and Intel servers and like them both.

3 Likes

I had previously used the Define 7 with that many drives and it was very difficult to keep cool. I changed to the Fractal Meshify case, which is noisier (though I can’t hear it in another room) but runs at vastly lower temperatures.

Regarding the used system you found: I’m not familiar with that “RAID” card, but make sure it can be used in TrueNAS as non-RAID. RAID controllers are not good for TrueNAS unless they can be set to a suitable non-RAID mode. ZFS is not RAID.

I got used server hardware off eBay mostly. And most of it was cheap. For example, my server motherboard was like $125 or something like that, new in box (old model of course, an X10 Supermicro). Server hardware has a lot of advantages over consumer and gaming hardware. Your price is decent I suppose. I’d say the used system is far superior to the custom build you were planning myself, assuming it works.

Seconded.

Supermicro delivers:

  • good system documentation
  • empty HDD/SSD trays in all drive bays, not blind covers
  • boards in retail (not bulk) packaging come with all cables included
  • cases come with all screws you might ever need included
  • a reasonably sane IPMI implementation (X10 and newer)
  • long term availability of products, sometimes over a decade or more
  • support

We used to swear by Fujitsu, when they were still Fujitsu/Siemens and development was in Augsburg, Germany. The power efficiency was superior to any other brand at the time.

But that seemingly ridiculous drive tray issue becomes severe if you have a hundred servers and you cannot buy drives at market prices but need to get them from $VENDOR including the special tray you cannot get anywhere else.

Similarly, the BIOS warning about 3rd-party memory (ECC, server grade, all up to spec; that’s what specs are for) is just utterly annoying.

So for us it’s Supermicro for everything at the moment.

Last system I built for internal development was a bit funny: 1U rack case, mini-ITX motherboard, and then according to their docs I needed specific frames for the case fans and an “air shroud” (a piece of plastic) to funnel enough ventilation to the CPU. The part numbers were missing from the docs.
But support was very helpful, gave me all the part numbers, and then I found a reseller in Poland who was happy to deliver. European Union - what a concept :slight_smile:

Kind regards,
Patrick

1 Like

2 CPUs but only 2 HDDs? You’re not looking at the right kind of server.

Not at all. You picked an excessively expensive motherboard which can at least do ECC… to pair it with a CPU which cannot!
You’re paying way too much for the motherboard, DDR5 RAM, SSDs (65€ for 250 GB??), and even the extra HDD trays. (The Define 7 in “storage mode” is not fun to deal with.)
Which brings me to the last point: 9*4 TB. That makes no sense… (and I’m trying hard to suppress the expletives here)
Too many ridiculously small drives. That’s 36 TB raw and only 28 TB usable, assuming raidz2 since you want redundancy. (For reference, Seagate has just launched 30 TB hard drives.)

Make that FOUR drives, for raidz2, in the 16-20 TB range. Whatever has the best price per TB now.
Then, with the storage resized, scale down the motherboard (older generation, 6 SATA ports would do, with some margin), PSU, and quite possibly the case. Or buy a second hand desktop workstation, a small tower with some Xeon E(3) in there, and make that your NAS.
Or some rackmount server if noise isn’t an issue.
A cheap ~10€ SSD for boot (a single one is fine). And a pair of better-sized SSDs for apps, 2x 1 TB maybe.

2 Likes

With RaidZ2, and modern drive sizes, I really do think that 6-way is a sweet spot. Gets you 66% storage efficiency and dual disk redundancy.

Using 8TB drives would get you 32TB of space.
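
To spell out the arithmetic, here is a rough sketch in Python (nominal TB only; real usable space ends up a bit lower after ZFS metadata, slop space, and the TB/TiB difference; the 4 x 18 TB row is just one example of the fewer-bigger-drives suggestion above):

```python
# Back-of-the-envelope usable capacity for the layouts mentioned in this thread.
# Nominal drive sizes only; real usable space is lower after ZFS overhead.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Usable TB of a single RAIDZ vdev: (drives - parity) * drive size."""
    return (drives - parity) * drive_tb

layouts = [
    ("9 x 4 TB RAIDZ2 (original plan)", 9, 4, 2),
    ("6 x 8 TB RAIDZ2 (6-wide sweet spot)", 6, 8, 2),
    ("4 x 18 TB RAIDZ2 (fewer, bigger drives)", 4, 18, 2),
]

for name, n, size, parity in layouts:
    usable = raidz_usable_tb(n, size, parity)
    raw = n * size
    print(f"{name}: {raw} TB raw, ~{usable} TB usable ({usable / raw:.0%} efficiency)")
```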

And 6 drives means you can use boards that have 6 SATA ports.

And small cases like the Node 304. Lots of nice builds.

Used servers are the economical way to go… but it can also be fun to build your own. I’d suggest supermicro either way.

1 Like

Really not the best plan: $/TB is better at larger capacities (assuming spinners), power draw decreases, and you can potentially use a smaller chassis. Consider a pair of 20 TB spinners instead.

I third, or fourth, or whatever the recommendation for Supermicro gear; I’ve also been fairly satisfied with my Dell R630, and by extension other 13th-gen Dell servers. But you’d want the LFF version of the 730 (or the 730xd, which would be better for storage applications).

So this is the problem of living somewhere other than the US/Canada: you get far fewer choices. I found a couple of online stores that sell used gear, but the cheapest server is about ~900 euro (all prices already include VAT), and the selection is more limited. Buying from the US isn’t feasible as VAT will apply (a total of 24% or 25%, I don’t remember), plus the risk of shipping damage.

This is definitely attractive, but I’m always paranoid about used gear (not only IT hardware) being worn out to the point that I end up being the guy in whose arms it dies. It is impossible to measure the extent of wear, or at least I don’t have the experience to evaluate it. So, as I imagine it, I’m paying money for something that has a 50% probability of disappearing.

This is even worse: I’ve never seen used/refurbished enterprise/NAS HDDs being sold here. I’ll try to find something, but I highly doubt I will.

Thanks, I’ll take it into account if I go with a custom build.

I assume you’re in the EU too, so the SuperMicro world seems to be available here. But I’ll probably have to spend some time collecting all the necessary parts. This is a bit annoying and slightly risky: what if something breaks or is already broken? Then there’s all the fuss with returning things by mail, etc.

Canadians appear to disagree with that…
You have to find the refurbishers in your area. They do exist, but they may not advertise to consumers.
Here is a Dell R530 chassis at 95,60 €, from a Dutch shop:

A fully configured server without drives should end up around 500-600 (though I would need some help or sanity check with part numbers).

I return it, of course. Lots of resellers in Germany, but for single very specific parts which I ordered from Poland.

Jacob, Mindfactory, Cyberport, …

Oh crap… This is so embarrassing… I relied on stuff I’ve read on a few forums. People say that with 12th-gen CPUs ECC is supported, and that this particular CPU is good for a NAS (Plex, transcoding, efficiency), but I didn’t actually check the official info. Thank you for pointing this out! It makes this build useless.

If we set aside the fact that the CPU ruins everything, this is what it all costs… The RAM is DDR4; it was the cheapest I’ve found in the store. The SSDs are also among the cheapest. Of course, there’s also the WD Green, which would be ~30 euro, but I’ve read so much negative feedback on it that I perceive it as a faulty product by design.

I also relied on people’s opinions (Reddit, this forum); they swear by it. Going by the specs (and the quick-swap drive bays) it seemed to be the choice for a NAS.

The thing is, I’m not after storage size, but reliability, ZFS features (including snapshots), and automation (to copy snapshots to external drives). I was going to organize it this way:

  1. Pool “A” (3-way ZFS mirror, 4TB usable) for important data
  2. Pool “B” (2-way ZFS mirror, 4TB usable) for storing snapshots to
  3. Pool “C” (RAIDZ2, 8TB usable) for not important data (e.g. DVD/Blu-ray footage and movie rips, temporary storage for footage from camera)

I would then copy snapshots from pool “B” to three external drives. I thought it important to have the exact same snapshots at all backup locations. Also, copying from one pool to another is faster, as one is reading at the maximum possible speed while the other is writing at the maximum.

As for the “C” pool, I’d probably just duplicate (synchronize) content from “C” to two external drives. Or maybe snapshot it too.

Pool “B” could be a single drive, but what if bit rot occurs before the snapshot has gotten to all external backup locations? With a 2-way mirror this risk would be eliminated.

Probably a single RAIDZ2 pool could be used for all the data, but again, people say that splitting important and unimportant data into different pools is like not putting all your eggs in one basket. Also, people now claim that RAIDZ2/3 is prone to failures during resilvering, is very slow during that process, and is also slower in use, all compared to a mirror.

The available storage space is more than enough currently. The main reason to move to a NAS is that my primary storage sits in my workstation; I’d like the freedom to work on a stationary PC or a laptop without losing access to my data. I could use a DAS, but then I’d lose all the features TrueNAS and ZFS give, and it would have to stay at my workplace, where I could use that space for something more useful for work.

I’ll look into that, but I haven’t found anything yet; not much choice here. If a workstation supports ECC, maybe it’ll do the job. I just thought that only servers have ECC.

No need for apps (at the moment), but will a 10 euro SSD be capable/reliable enough? From what I read, many use 2x drives for redundancy.

First, hard drive selection. Choose only models made for NAS use, and avoid SMR drives.
SMR vs CMR ServeTheHome

I think you need to go over the primer and pool choices to get a better understanding of ZFS features and choosing a pool layout. Your pool A can lose two drives and be okay, and the same goes for pool C. Not sure why you want a pool just for snapshots. Do you mean replication of snapshots? There are also datasets in ZFS. RAIDZ1 with large drives is warned against, not RAIDZ2 or Z3.
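
To illustrate what I mean by datasets, here is a rough sketch only (pool and dataset names are made up; on TrueNAS you would normally create datasets and snapshot tasks from the web UI rather than a script):

```python
# Sketch only: one pool with datasets (not separate pools) to separate
# important data from bulk data. Names are placeholders.
import subprocess
from datetime import datetime

def zfs(*args: str) -> None:
    """Run a zfs command and raise if it fails."""
    subprocess.run(["zfs", *args], check=True)

# Datasets inherit the pool's redundancy, but each can have its own
# snapshot schedule, replication target, and properties.
zfs("create", "tank/photos")       # important: frequent snapshots, replicated
zfs("create", "tank/video-edits")  # important
zfs("create", "tank/media")        # replaceable rips: fewer snapshots

# Per-dataset snapshot, e.g. taken more often for the important datasets:
snap = datetime.now().strftime("manual-%Y%m%d-%H%M")
zfs("snapshot", f"tank/photos@{snap}")
```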

If the NAS is your primary storage, you back up the data elsewhere.

BASICS

iX Systems pool layout whitepaper

And all those ridiculous delivery costs!
I experience it when I watch a YouTube video from a US YouTuber…
I “check the link below,” and it leads to US eBay.
It is indeed 100 bucks, just:

  • Already out of stock
  • Does not deliver to Europe
  • They do, but the delivery cost is like 160+ USD for it

Well, if you compare the 1500+ USD price for the new HW and the 420 USD price for the used one, you have 1100 USD of headroom left to fix all those problems.
That means that even if you are not able to buy the specific component (even refurbished, but most of the time at least tested) from any vendor, you can still buy 3 other full systems to cannibalize for parts.
And then you are still at the same price point.

Well, then you are searching incorrectly.
They are around, and they are thoroughly recertified.
There are not too many suppliers, that is true, and even fewer who sell in the EU or deliver cheaply here, but they are definitely out there.
They are even present on Amazon.

1 Like

I think that was a really mean, pointless and petty cash grab from WD at the time.
It damaged their reputation a lot in my eyes.
Especially since they are way too expensive in general.

I have a Define 7 XL.

Airflow and heat are a problem if the server is in a room with no airflow (duh), but otherwise they aren’t. I’m using 6 fans (3 intake, 3 exhaust).

The storage mode is fine compared to other consumer cases, but not a patch on a proper rack-mount system. However, for the PSU I have a Corsair RMx Shift, so the power cables are a doddle. A normal ATX power supply would be a nightmare.

However, there is an effective limit of 16 HDDs and 8 SSDs in that case, so if you are planning to use small drives you will need to plan on replacing them later, or use an HBA and expander with an external box/cage to hold more HDDs.

I think I’ve found that 16TB drives are just starting to be edged out by some 20TB drives for cost effectiveness.

All discussion here is about my personal server at home.

My server has 3 pools, all mirrors.
My main data pool is 10 x 2-way mirrors of 2TB SAS HDD, plus a 500 GB SATA SSD L2ARC, a 200 GB SAS SLOG, and 2 x 2TB hot spares. Data is separated by type via datasets, and I do tune the recordsize for each as appropriate.
My VMs pool is 4 x 2-way mirrors of 2TB SAS HDD, plus a 500 GB SATA SSD L2ARC, a 200 GB SAS SLOG, and 1 x 2TB hot spare. This is where I will store my VM image data once I get VMs set up.
My Apps pool is 1 x 2-way mirror of 500 GB SATA SSDs. This is where Apps are configured to land.
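
As an example of the recordsize tuning I mentioned above (dataset names and values here are only illustrative, not my actual layout):

```python
# Illustrative only: per-dataset recordsize tuning via the zfs CLI.
import subprocess

def zfs_set(prop: str, dataset: str) -> None:
    subprocess.run(["zfs", "set", prop, dataset], check=True)

zfs_set("recordsize=1M", "data/media")    # large sequential files (video)
zfs_set("recordsize=16K", "vms/images")   # smaller records for VM-style random I/O
# Most general-purpose datasets do fine with the 128K default.
```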

When I started down this journey, over a decade ago, I used RAIDz2 (and I still do on my backup server). The first time I needed to grow, I added an external disk enclosure and added a 2nd RAIDz2 vdev. Then the fun started as the enclosure had a SATA port multiplier. I never lost any data but had a couple hair raising events. I was also using 5-wide RAIDz2 pools, so the storage efficiency was not much better than 2-way mirrors.

I converted to using 2-way and then 3-way mirrors. I had made a buy of 20 x WD RE3 1TB SAS drives removed from server service. These easily met my storage needs. As time has passed, I added vdevs, I upgraded to 2TB SAS HDD, I upgraded chassis. I now have a SuperMicro 16-bay 3RU server, a SuperMicro 12-bay SAS expansion chassis, and a home grown 10-bay SAS/SATA drive bay. So, with 38 drive bays available I can have hot spares and do not feel the need for 3-way mirrors anymore.

The issue for me is time to resilver after a failure event. A 4TB HDD drive will resilver in about 8 hours (I just had to do that yesterday, so the number is fresh in my head). If you have x1 redundancy (2-way mirror, RAIDz1) you have a window of vulnerability during which a second failure may cost you the pool. Look up Richard Elling and MTTDL (mean time to data loss) for a very good relative comparison of the various ZFS topologies. How long will an 18TB or a 22TB HDD take to resilver?
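
For a rough feel of that comparison, here is a toy version of the MTTDL arithmetic in the spirit of Elling’s simple model (made-up MTBF and resilver-time numbers; real drives do not fail this neatly, so only the relative ordering means anything):

```python
# Toy MTTDL (mean time to data loss) comparison: independent, exponentially
# distributed failures. All numbers below are placeholders, not measurements.

HOURS_PER_YEAR = 24 * 365

def mttdl_one_failure_tolerated(n: int, mtbf: float, mttr: float) -> float:
    """Vdev that survives exactly one failure (2-way mirror, RAIDZ1)."""
    return mtbf ** 2 / (n * (n - 1) * mttr)

def mttdl_two_failures_tolerated(n: int, mtbf: float, mttr: float) -> float:
    """Vdev that survives exactly two failures (3-way mirror, RAIDZ2)."""
    return mtbf ** 3 / (n * (n - 1) * (n - 2) * mttr ** 2)

MTBF = 1_000_000   # hours, a typical spec-sheet figure (placeholder)

mirror = mttdl_one_failure_tolerated(2, MTBF, mttr=8)    # 4 TB mirror, ~8 h resilver
raidz2 = mttdl_two_failures_tolerated(6, MTBF, mttr=48)  # big-drive Z2, ~48 h (guess)

print(f"2-way mirror, 8 h resilver:   ~{mirror / HOURS_PER_YEAR:.2e} years MTTDL")
print(f"6-wide RAIDZ2, 48 h resilver: ~{raidz2 / HOURS_PER_YEAR:.2e} years MTTDL")
# The absolute numbers are meaningless; the point is how strongly the
# resilver window and the number of failures tolerated drive the result.
```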

I do use RAIDz and larger drives, in my backup server where loss of a pool is not loss of data, but loss of a service and loss of redundancy. My backup used to be a 4-way RAIDz1 of 4TB HDD. I am (one HDD at a time) moving to 10TB SAS HDD and eventually I will move to a 5-way RAIDz2 of 10TB SAS HDD [I bought a batch of 10 removed from service Seagate EXOS 10TB SAS HDD].

The other big factor is performance. Performance scales (for certain operations) pretty linearly with the number of top level vdevs. A 2 x 2-way mirror will have about double the performance of the same drives in a 4-way RAIDz2, with the same capacity and slightly less reliability.

Mirrors are also easier to grow in that you can just add another vdev. I started with 6x and added a 7th, then later an 8th, and finally the last 2 to get to 10x; I did not start with a 10 x 2-way mirror topology. Now we can add devices to an existing RAIDz vdev, but there are conditions which I am not comfortable with.

All that to say, if the OP is more comfortable with more, smaller drives, in a configuration that may be faster and will recover from faults quicker, there is nothing wrong with that layout.

P.S. To the OP, to copy ZFS data, use snapshots and the ZFS replication features, even if you are just copying between datasets on one system.
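
A minimal sketch of what that looks like when scripted (dataset names are placeholders; TrueNAS has built-in periodic snapshot and replication tasks that do exactly this for you):

```python
# Minimal sketch: snapshot a dataset and replicate it to another pool with
# zfs send/receive. Dataset names are placeholders.
import subprocess
from datetime import datetime

SRC = "tank/photos"      # source dataset (placeholder)
DST = "backup/photos"    # destination dataset on another pool (placeholder)

snap = datetime.now().strftime("auto-%Y%m%d-%H%M%S")
subprocess.run(["zfs", "snapshot", f"{SRC}@{snap}"], check=True)

# Full send of the snapshot. For subsequent runs you would use an
# incremental send (zfs send -i <previous> ...) so only the changes move.
send = subprocess.Popen(["zfs", "send", f"{SRC}@{snap}"], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```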

I’ll try then. But how do you know whom to trust? Anyone can write “refurbished” on a web page, but not everyone can actually do the job.

Oh, I see. I have no airflow in the room for it either. So maybe I should look at something like a Meshify case if I need one but don’t have rack-mount storage available.

The thing is, I don’t know what I’m comfortable with unless I try and fail (or succeed). Let me explain the logic behind my hypothetical system.

Pool “A” for Important Data
3-way ZFS mirror of 4TB drives

Currently, there’s about 1.5TB of data, and it grows very slowly (~150GB/year). It’s important, so I’d rather go with a simpler pool setup, like a 3-way ZFS mirror. It can withstand as many failures as RAIDZ2 but is overall more reliable and faster during resilvering. It is also easier to “maintain.”

Pool “B” for Snapshots
2-way ZFS mirror of 4TB drives

Primary/intermediate storage for snapshots of pool “A.”

It’s necessary to copy the exact same snapshot to external (air-gapped) locations. If I just switch external destinations and repeat the snapshot, it will store different versions of the original data.

Compared to using the same pool as the destination, it’s faster to copy from one pool to another (from “A” to “B”).

It’s a 2-way mirror to protect those snapshots from bit rot until they get copied to all the external destinations. From there, it is possible to compare versions from different drives if bit rot is suspected. ZFS would detect bit rot even with a single drive, but it wouldn’t be able to repair it. Once the snapshot is lost, you can’t repeat it because the original data has already been changed.

Pool “C” for Unimportant Data
4-wide RAIDZ2 of 4TB drives

Basically, space is a priority, but 8TB usable is plenty for now. Not sure if I’ll populate it with that much “garbage.” Nonetheless, losing the garbage would cause a significant amount of discomfort, so I still need it to be protected. (I’ll also do snapshots to external drives, but probably in a simpler, rotating manner.)

P.S.

Forgot to mention that pools “A” and “C” are separated just to add more reliability through independence. If one pool gets corrupted, the other one survives.

I’m not saying this is a correct approach. Just explained it in more detail to get valuable notes and recommendations from you.

Try this: