I recently ran out of storage on my 3TB NAS, and since my budget is extremely small, I went looking for inexpensive ways to get HDDs (SAS drives plus a controller were considered). I asked some colleagues at work if they knew a website where I could buy used drives that are still in good shape. Imagine my surprise and excitement when, out of the blue, somebody from the infrastructure team gave me 20 4TB HGST HUS726040ALA610 drives for free! They were about to be scrapped. Suffice it to say, I'm excited!
My current TrueNAS rig is a Dell PowerEdge T110 II with 16GB of ECC RAM; it has 4 "normal" SATA ports and 1 optical-drive SATA port.
I currently have 2x3TB desktop drives in a mirror acting as my main storage (photos, videos, documents), and a 120GB Intel SSD on which I have a Debian VM (as a WireGuard server) and some apps (qBittorrent for official Linux ISOs, of course, and Syncthing).
I also have a secondary rig, an HP with 4 SATA/SAS ports, that I will be using for backup storage. I will turn it off after each backup because it is much louder and consumes more power than the Dell.
I'm trying to decide on the best pool layout for my setup. Options are limited, since the main system only has 5 ports, so I'm thinking of going with a single 5x4TB RAIDZ2 vdev. My priorities are data safety/protection and capacity; performance is not really a concern, because it's just a storage system and the VM is mostly idle.
I'm going to use zfs send | zfs receive regularly to copy snapshots from my main rig to the backup rig. Also, very important data (financial documents, etc.) is already being backed up to the cloud regularly.
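Concretely, the plan looks something like this (pool, dataset, and host names are just placeholders, not my actual setup):

zfs snapshot -r tank/data@weekly-1    # recursive snapshot on the main rig
zfs send -R tank/data@weekly-1 | ssh backup-host zfs receive -F backuppool/data    # first full copy
zfs send -R -i @weekly-1 tank/data@weekly-2 | ssh backup-host zfs receive -F backuppool/data    # later runs only send the increment

Over 1G networking the first full send will take a while, but the weekly incrementals should stay small.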
Here are my options for the new pool:
1 pool of 2 mirror vdevs (4 drives total). The advantage of this is that I keep the 5th SATA port free; that one is the optical-drive port and I'm unsure whether it's SATA III 6Gbps or SATA II 3Gbps. I know 3Gbps is plenty for a hard drive, but I still worry that the difference in speed (if there is one!) might cause problems with ZFS.
3, 4, or even 5 drives in RAIDZ1. I don't think RAIDZ1 is a good idea with 4TB drives, even though I will be backing up the data regularly. It feels a bit risky.
5 drives in RAIDZ2. I am leaning towards this option; it gives me a good balance of capacity and redundancy. However, I've read many posts on the old forum from people swearing that mirrors are better than RAIDZ2 because RAIDZ2 apparently puts much more strain on the remaining drives during a resilver than mirrors do. While I'm sure the "more strain" part is true, my experience tells me that hard drives, especially enterprise drives, can usually handle the load fine as long as they're cooled properly. They are designed for continuous operation, after all, and drive failures, while not uncommon, are not the norm either. (I've sketched rough commands for both layouts right after this list.)
I am also considering going with 4 drives in RAIDZ2 and storing my VM and torrents on a separate drive, because I'm afraid that torrenting will put too much strain/wear on already old disks, which I could avoid by putting the 5th drive alone in a separate pool. However, I have plenty of spares in case any drives fail, so this option is not my favourite so far.
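For reference, here is roughly what the two layouts I'm weighing look like at the command line. Disk names are placeholders, and on TrueNAS I would actually build the pool from the web UI; this is just to show the vdev structure and usable space side by side:

zpool create tank raidz2 sda sdb sdc sdd sde      # 5-wide RAIDZ2: ~3 x 4TB usable, survives any 2 drive failures
zpool create tank mirror sda sdb mirror sdc sdd   # 2 x 2-way mirrors: ~2 x 4TB usable, 5th SATA port stays free

So for the same batch of drives, RAIDZ2 gives me roughly 4TB more usable space, at the cost of having to use the questionable optical-drive port.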
I guess my biggest questions are:
2x2 mirror vdevs or a 4/5-drive RAIDZ2 setup? Is RAIDZ2 really that hard on disks? Does the parity also use much CPU? My CPU is an Intel Xeon E3-1230v2 (4 cores, 8 threads), and my networking gear is 1G.
Is 16GB of RAM enough for this use case? (The VM consumes at most 1GB.)
Would my main system be able to run Jellyfin for streaming (1 stream) if there's no transcoding going on? (Mobile internet is pretty fast where I'm from, so I can watch 1080p videos without issues.)
Is there any benefit to running TrueNAS apps on an SSD for my use case, since I don't require performance?
Also, I am using a SATA-to-USB adapter to connect my boot pool to the system and I'm booting from that. Is this really a big risk, or is it fine as long as I back up my TrueNAS configuration regularly? (I am willing to spend the time to reinstall the server if needed, no problem.)
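(To be clear about what I mean by backing up the configuration: besides the export in the web UI, I'd pull a dated copy of the config database off the box now and then, something like the line below. The path is my understanding of where TrueNAS keeps it and the hostname is a placeholder, so treat both as assumptions.)

scp root@truenas-host:/data/freenas-v1.db ./truenas-config-$(date +%F).db    # path is an assumption; the web UI export is the supported route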
Also, I have to mention that these drives are 512n drives (I checked the official datasheet), so I plan on using ashift=9. Yes, I know it means the pool won't accept newer 4K-sector drives as replacements, but since I am broke, by the time I build a new storage system I'll probably remake it from scratch anyway. I don't plan on upgrading by swapping drives out for bigger ones. And since I have 2 systems (main and backup), when I do eventually upgrade to bigger drives, I will be able to easily zfs send my then-old pool to the new one, if needed.
First thing is to use the second system to do hard drive tests on all the drives you acquired. You need to assess their health. Make a spreadsheet or something with the serial numbers, drive models, etc.
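Something along these lines will dump the model, serial, firmware, and overall health verdict for every drive into one file you can paste into that spreadsheet (the /dev/sd{a..e} list is a placeholder for however your drives show up):

for d in /dev/sd{a..e}; do
    smartctl -i -H "$d"    # identity info plus the overall health assessment
done > drive-inventory.txt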
Thanks for the suggestion about the second system! I just realised that I might need to buy another SATA-to-USB adapter for its boot drive. I have a few SSDs sitting around, so I can spare one for that.
Regarding testing, I've already run the SMART short test on 5 of the drives; the other 15 are still at work, as I didn't have my car with me to carry them all, LOL! My colleagues from infra also did some tests, and I know the drives have been used for 3 to 4 years in a corporate environment. Yes, I know this is not ideal, but hey, I have 20 of them, so if any fail, I'll just replace them.
I will do the SMART long test, perhaps even a burn-in. Do you know any good Linux utilities for the burn-in test? Or could I run such a tool on TrueNAS directly?
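From what I've read so far, the usual Linux combo seems to be badblocks followed by a SMART long test, something like this per drive (please correct me if there's a better tool; /dev/sdX is a placeholder and the -w pass erases the disk):

badblocks -b 4096 -wsv /dev/sdX    # destructive write+verify over the whole disk; -b 4096 keeps the block count within badblocks' limit on 4TB drives
smartctl -t long /dev/sdX          # kick off the SMART long self-test afterwards
smartctl -a /dev/sdX               # check attributes and the self-test log once it finishes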
Also, thank you so much for the references to the documentation. I had already read the ZFS Primer and the comprehensive solution brief and guides, and the fact that I already know all this gives me confidence that I have a good grasp of the basics.
I'm running into a question: in the secondary (backup) system I have 4 SATA ports connected via a SAS connector, and 2 other normal SATA ports. Is it safe to create a RAIDZ2 pool using all 6 ports, even though 2 of them are different?
We need to know which SAS adapter is in your Dell: the model number, etc. You only want an adapter that is a plain HBA or can be flashed to "IT mode".
You should be able to get a detailed hardware listing from the Dell support website using the Service Tag number. It will be accurate if you have not made any changes since the system was ordered or purchased.
You can mix SATA and SAS if you meet that requirement.
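If digging through the support site is a pain, something like this from a Linux shell on each box will also show which storage controller is actually in there:

lspci | grep -i -E 'sas|raid|lsi'    # lists any SAS/RAID/LSI controllers the OS can see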
SAS
What's all the noise about HBA's, and why can't I use a RAID controller?
You can use the default ashift=12 with 512e drives… ZFS will just always write in 4K multiples. Don't set a trap for yourself later on.
Your drives come with 3-4 years of burn-in! Check the SMART reports and you're done.
To clarify, the main system is the Dell, which is already running TrueNAS with the 2x3TB mirror. It doesn't have a RAID controller (that's sold separately, or they give you "software RAID" as an option); the drives are set to AHCI and it works normally. The second system that I want to use for backups is an HP ML310e Gen8 v2. I found out how to set its SAS controller to "IT mode"; it doesn't need flashing, I just haven't done it yet because I gave up on this computer in favour of the Dell when I set up my TrueNAS, due to the noise and power consumption. I didn't realize until now, when I got the 20 drives, that I can use it as a backup server and shut it down after the backup (and scrubs) finish. I am a bit worried about power-cycling enterprise drives on a regular basis, so I was considering monthly backups instead of weekly, but I think I'll stick to weekly and accept possible drive failures. I have 20 of them; I will use 5 in the main system and 5 in the backup, so I'll have 10 left as spares.
@etorix Thank you also for the reply; I'm thinking similar things regarding the burn-in… we'll see. The HUS726040ALA610 drives are not 512e, they're true 512-byte native (512n) enterprise drives, manufactured in 2017. I found the official documentation, which gives the specs for the model. I didn't find any errors in the SMART reports of the first 5 drives; they look very clean! Only 12 power-on/spin-up cycles, and around 33,000 power-on hours on these 5 (some more, some less), with no bad sectors or any other issues. I can't wait to get the other 15 from work. I know ashift=12 gives better flexibility, but since I have 2 systems, whenever the time comes to upgrade, I can just replace the drives in one of them, make a new pool, and slowly copy the data over from the other over a couple of days. It's not that big of an issue, really. They're next to each other, so management is easy.
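For what it's worth, the drives report it themselves too; something like this from a Linux shell shows the logical/physical sector sizes, which should both read 512 bytes on true 512n drives (/dev/sdX is a placeholder):

smartctl -i /dev/sdX | grep -i 'sector size'
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC    # same information for every attached disk at once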
@prez02 Thank you for pointing that out! I'll use the article to make sure it's working as expected.
@etorix Using ashift=9 on 512B drives has several advantages (I've put a short sketch of setting and verifying it right after this list):
Matching sector size means ZFS's minimum allocation lines up with a single physical sector, with no padding. If I used ashift=12 and my data didn't fit naturally into 4K multiples (for example small files, which I have lots of, or even metadata), I'd waste space because ZFS has to pad allocations out to 4K.
Avoiding write amplification: with ashift=12 on a 512B drive, small writes (less than 4K) cause unnecessary work:
Example: A 1KB write on ashift=9 touches two 512B sectors.
But on ashift=12, the same 1KB write gets rounded up to a full 4K allocation.
Since ZFS is copy-on-write, the drive isn't literally doing a read-modify-write, but it still ends up writing (and later reading) 4K where two 512B sectors would have been enough.
This means writes are amplified: more data moved, more latency, and unnecessary drive wear.
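So the plan is simply to set it explicitly at pool creation and double-check what ZFS recorded, roughly like this (pool and disk names are placeholders, and zdb may need to be pointed at the right cache file depending on the platform):

zpool create -o ashift=9 tank raidz2 sda sdb sdc sdd sde    # force 512-byte allocation units
zdb -C tank | grep ashift                                   # should report ashift: 9 on the vdev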
I totally get being cautious and using ashift=12 for most use cases, because if one of your 512B drives fails and you have to replace it, you need another drive with 512-byte sectors. But my case is not the usual one; remember, I have 10 spare HDDs! And I'm fine with tinkering; I don't mind having to copy the whole pool to a future new pool using 2 computers/servers. I get that most of the time, going with the norm, the defaults, or what everybody usually recommends is fine. However, sometimes it's OK to use something else if you have an edge case and the "norm" doesn't fit your needs.