Yes, I’ve had mixed results changing processor cores, RAM, and layout with my Windows VMs running in XCP-NG. It’s just a “don’t do it” thing for me now. I was very surprised when one VM gave me problems after just a RAM increase.
Back to the original topic, I have some older CORE servers running 16GB, and some newer CORE/SCALE mixes with 32GB. I’m planning out new servers to replace the old ones and going with 64GB (2x32GB).
My lab has 96GB of DDR3 running SCALE, but the ZFS cache never seems to fill past 64GB, leaving a bunch of free RAM. All sockets are full in my lab server, so I’m getting the best interleave possible (or whatever HP calls their magic memory stuff).
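If you want to see where the ceiling actually sits, here’s a minimal sketch of the check I’d run, assuming OpenZFS on Linux (which is what SCALE uses) and its usual /proc and /sys locations. A zfs_arc_max of 0 just means the module default, which on Linux has historically been roughly half of physical RAM, so an ARC that plateaus well below total memory wouldn’t surprise me:

```python
#!/usr/bin/env python3
"""Rough check of ARC size vs. its ceiling on a Linux/SCALE box (sketch only)."""

def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    """Parse the ARC kstat file into a {name: int} dict, skipping the two header lines."""
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

def gib(n_bytes):
    return n_bytes / 2**30

if __name__ == "__main__":
    arc = read_arcstats()
    with open("/sys/module/zfs/parameters/zfs_arc_max") as f:
        arc_max = int(f.read().strip())

    print(f"ARC size now : {gib(arc['size']):6.1f} GiB")
    print(f"ARC ceiling  : {gib(arc['c_max']):6.1f} GiB (c_max)")
    print(f"zfs_arc_max  : {arc_max} (0 = module default)")
```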
Dang! Now you got me thinking… maybe I should upgrade my motherboard/CPU/memory and maybe also my HDDs. Maybe first the SAS HDDs because they are really old… Maybe replace them with SSDs… I have some choices to make.
Replacing old HDDs with SSDs, or even with modern, much bigger HDDs, means a very different class of storage and may call for a different pool layout. So if you go down this road, that’s a whole new exercise in server design.
True. Since I am not in urgent need of much storage space, I figure replacing the current 2TB SAS drives with new 4TB SAS drives would be the easiest way to go for now. I could swap them over on the fly, one drive at a time. It is not possible to shut down my TrueNAS for days as I use Nextcloud for my business. This is also one of the reasons I am not changing over to SCALE, as that would mean installing Nextcloud from scratch with too much downtime. The only way that will ever work for me is building an entire new server first and then transferring all relevant files from the old server to the new one. Too much hassle for now. Maybe sometime next year after I’ve closed the books for 2024.
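For what it’s worth, when I’ve done on-the-fly swaps like that it’s strictly one drive at a time: replace, wait for the resilver, check status, then move on. Here’s a rough sketch of the sequence, just printed out as commands rather than executed, with a made-up pool name and made-up device names (on TrueNAS you’d do the same steps through the Storage UI):

```python
#!/usr/bin/env python3
"""Prints a one-at-a-time disk swap runbook (sketch only; nothing is executed).

Pool name and device names are hypothetical; on TrueNAS the same steps are
normally done through the GUI.
"""

POOL = "tank"                 # hypothetical pool name
SWAPS = [                     # (old 2TB disk, new 4TB disk) pairs, all made up
    ("da0", "da4"),
    ("da1", "da5"),
    ("da2", "da6"),
    ("da3", "da7"),
]

# Let the pool grow on its own once the last member has been replaced
print(f"zpool set autoexpand=on {POOL}\n")

for old, new in SWAPS:
    print(f"zpool replace {POOL} {old} {new}")
    print(f"zpool status {POOL}   # wait for the resilver to finish before the next swap\n")
```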
Agreed. I’d likely redo the pool, halve the number of spinners by choosing higher-capacity drives, and go to RAIDZ2. That would cut the drive count in half and reduce my power needs by about 30W without major changes.
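Quick back-of-the-envelope on that, using made-up numbers just to show where the 30W comes from (this is nobody’s actual pool): dropping from twelve spinners to six 8TB drives in a single RAIDZ2, at roughly 5W idle per drive, lands right around that figure.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope drive count / power / capacity numbers (all illustrative)."""

OLD_COUNT = 12                 # hypothetical current spinner count
NEW_COUNT, NEW_SIZE_TB = 6, 8  # hypothetical replacement: one 6-wide RAIDZ2 of 8TB drives
WATTS_PER_SPINNER = 5          # rough idle draw of a 3.5" drive

saved_watts = (OLD_COUNT - NEW_COUNT) * WATTS_PER_SPINNER
usable_tb = (NEW_COUNT - 2) * NEW_SIZE_TB      # RAIDZ2 gives up two drives' worth to parity

print(f"Drives removed : {OLD_COUNT - NEW_COUNT}")
print(f"Power saved    : ~{saved_watts} W")
print(f"Usable space   : ~{usable_tb} TB raw (before ZFS overhead and leaving headroom)")
```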
The 96GB in my lab is ECC DDR3 and cost me under $50 shipped. It was of course used RAM, but how often do you really get bad RAM these days?
As far as old storage goes, the budget people hate this. I’m trying to budget for two new servers with 4-5TB usable each to replace a 2016 and a 2017 server. Both of these are right on the edge where people should be nervous. Both have had drives changed out, one about four years ago and the other two years ago, but the other components age too. These both hold the VMs for our entire domain, so it’s kind of a bad day when they fail.
My lab is a different story and worth risking old stuff.
If you have both a MB and processor that support ECC then it is a no-brainer to buy it (even if it means discarding your existing memory).
But if you have a MB or a processor that doesn’t support ECC memory, then whether it makes sense to replace everything with ECC-capable hardware depends on how strict your availability/reliability requirements are.
If it is a family media server, where an occasional hang/crash means waiting 5 minutes for a reboot, and a cosmic ray flipping a bit before the data was written to disk means a few pixels or frames of your video going screwy, then is it really worth a few hundred $$ to have ECC?
OTOH, if it is a business mission-critical server with a few paid employees twiddling their thumbs for 10 minutes when it is down, then of course it is worth spending a few hundred $$ to get ECC hardware.
It’s easier to justify ECC RAM if you specify servers that only work with ECC RAM. Then the bean counters have no choice. Same applies if you are able to buy fully built servers: they can’t nickel-and-dime you on the price of drives because the drives are already built in. Good for warranty too.
It can be the kiss of death when bean counters save a few pennies but it costs the company a lot elsewhere.
ECC is an example - it is a NO BRAINER to buy ECC servers in any reasonably sized enterprise because a single outage prevented by ECC pays for the hardware several hundred times over.
Another example that comes to mind from a case study in an MBA book decades ago - but I can’t find an online link - was a manufacturer of industrial bearings or turbines or some such. Multiple customers started to complain of vibrations, and the company spent weeks of effort trying to work out what was causing them and where in the manufacturing process things were going wrong. Eventually it turned out that the bean counters had changed the grease for a very slightly cheaper (but significantly inferior) one, saving a few $ but costing several $million in staff time, brand damage, lost orders, and compensation.
Both Seagate and IBM had hard disk drives with spindle grease that hardened up after a while and would not let the drives spin up. At home I temporarily worked around my tens-of-MB Seagate HDD by using a screwdriver to kick-start the drive at power-on.
Using ECC RAM on a motherboard and processor that supports ECC memory? Good idea.
Having a UPS hooked up? Good idea.
Setting up RAIDZ to survive hard drive failure? Good idea.
Setting up periodic snapshots for critical directories (see the sketch after this list)? Good idea.
Backing up data to external hard drives? Good idea.
Backing up critical directories to an offsite provider like Backblaze? Good idea.
Doing all of the above? EXCELLENT IDEA!
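On the snapshot point above: TrueNAS has Periodic Snapshot Tasks built into the GUI, which is what I’d actually use, but if anyone wants to see what such a task boils down to, here’s a minimal sketch with made-up dataset names:

```python
#!/usr/bin/env python3
"""Minimal sketch of a dated, recursive snapshot of a couple of critical datasets.

Dataset names are hypothetical; on TrueNAS you would normally configure this
as a Periodic Snapshot Task rather than running it by hand.
"""
import subprocess
from datetime import datetime

DATASETS = ["tank/documents", "tank/nextcloud"]   # hypothetical critical datasets

stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
for ds in DATASETS:
    snap = f"{ds}@manual-{stamp}"
    # -r snapshots the dataset and all of its children
    subprocess.run(["zfs", "snapshot", "-r", snap], check=True)
    print(f"created {snap}")
```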
The cheap route is good when you are starting out with no idea of what you are doing. My first FreeNAS server consisted of an old Dell computer with 8GB of RAM, a Core 2 Quad processor, and a 1TB drive, but it was enough to get experience. When I actually built my server in 2016, I went a little overboard with the memory (64GB instead of 32GB), but the price was right. That server is working just fine eight years later, although the hard drives were replaced three years ago with larger ones.
Hilarious. That guy must get a platter of expensive cheeses and wines sent to him every Christmas from the electric company? Or perhaps he’s got his very own hydro-electric power plant? Just wow.
Either way, there is no harm in occasionally planning for an eventual replacement of all NAS components - drives, motherboard, PSU, whatever. Especially for folks in the USA who may be looking at some supply-chain issues going forward.