I have an ancient Synology DS413J to be replaced very soon. Everything keeps pointing to TrueNAS as a non-OEM NAS OS. I’m not doing anything too heavy; the primary usage is holding my 5TB photo+video library (my own camera photos/videos).
I have a mix of drives in my Synology in SHR so that’s what I’m used to - “throw in what’s available” sort of thing.
But the idea of moving to TrueNAS feels a bit daunting:
I have a spare 8TB to start right now. But it’s not a big drive, and I wouldn’t want to purchase a new “only” 8TB drive.
I almost bought a 14TB drive, which seems great value right now, and the size makes sense. But I wasn’t planning on buying 2 of them?!
I could see years down the line buying another drive to further expand. But by then I’d be even less likely to go for a 14TB… probably 20TB+?
So how does this actually work in practice? Buying another 8TB seems the biggest waste… so is my existing 8TB useless? Do I have to buy 2x 14TB drives to start off? What if I’ve got the 2x14TB up and running and down the road end up buying a 20TB drive… what then? I’d have to buy 2? But I couldn’t combine 2x20 AND 2x14, could I?
You don’t have to do anything besides providing a dedicated boot drive for TrueNAS, preferably a reasonably cheap and small SSD.
Apart from that it all revolves around what you want.
TrueNAS runs perfectly fine with a single data disk of any size. Only you won’t have any redundancy. And if that single disk fails it’s time for the backup - which hopefully you set up in advance.
If you want any sort of redundancy in TrueNAS, yes, it’s about disks of the same size.
But you can perfectly well combine e.g. 2x 14 TB and 2x 20 TB in a single pool. Actually a pool built from mirrored vdevs is the most flexible to expand. Start with e.g. 2x 14 TB and then add more pairs of disks later as needed.
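The mirrored-vdev growth path described above looks roughly like this on the command line (a sketch only; the pool name and device paths are placeholders, and TrueNAS normally does the same thing through its GUI rather than the shell):

```shell
# Start the pool with one mirrored pair (e.g. 2x 14 TB).
# "tank" and the /dev/sdX names are examples - check lsblk for your own.
zpool create tank mirror /dev/sda /dev/sdb

# Years later, grow the SAME pool by adding a second mirrored pair
# (e.g. 2x 20 TB). Vdev sizes don't need to match across the pool.
zpool add tank mirror /dev/sdc /dev/sdd
```

The key point is that `zpool add` attaches a whole new vdev to the existing pool, so the extra capacity shows up under the same pool and datasets rather than as a separate volume.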
You might want to read the ZFS primer to get started.
Yes, I do have a 256GB 2.5″ SSD, so at least I’ve got that part sorted!
I will take a look at that link, thanks. And frankly it may already “answer” it for me… but is it just me? The inability to mix drive sizes just feels like a huge headache.
I don’t WANT to have to go buy 2x14. I already have an 8, but don’t want to buy another that small. So, what do I even do with the 8?
I guess I COULD just concede to my perceived issue, bite the bullet, and buy 2x14TB, but that just feels… forced and non-ideal. What if a drive fails prematurely? I’d be forced to buy another 14TB… not a more future-oriented one? Not a 20TB that’s on sale for a good price?
Especially since, with the current RAM fiasco, HDDs are seeing price spikes too, and there may be more to come, so it’s not as straightforward as just “picking up a couple extra batteries”. They’re expensive!
You can mix drive sizes, there are just limitations depending on how you do it. In different vdevs you get the full capacity of each drive but no redundancy across them; in the same vdev you get redundancy but are limited to the smallest drive size (except stripe, I think, never tried). I’ve had a pool of mixed-size drives, 8x3TB and 4x2TB, each in their own Z2 vdev, for several years and never had any problems or issues with it.
A benefit to getting more 8TBs is they are cheaper, obviously, especially if you can get them in a lot, so for the same or less money you could have the same or more capacity as fewer larger drives, but with more redundancy and performance. For example, 2x14TB in a mirror gets you 12.25TiB with 1-drive redundancy and a 2x read speed gain, while 4x8TB in a Z1 gets you 20.5TiB, also with 1-drive redundancy but with a 3x read gain. If you want more security, 4x8TB in Z2 is 13.6TiB with 2-drive redundancy and 2x read. So there are upsides to going with more smaller disks, but the downsides include more power use (really not an issue unless your rates are exorbitant and/or you want to run a dozen+ drives 24/7) and technically more points of failure (but this is partially mitigated by RAIDZ).
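The capacity trade-offs above are easy to sanity-check with a little arithmetic. A rough sketch in Python (raw data capacity only; real pools lose a bit more to slop space and metadata, which is why the figures quoted above come out slightly lower):

```python
def raw_data_tib(n_drives: int, size_tb: float, parity: int) -> float:
    """Raw data capacity of a vdev in TiB: (n_drives - parity) drives'
    worth of space, converting vendor TB (10**12 bytes) to TiB (2**40
    bytes). Ignores ZFS slop space and metadata overhead."""
    return (n_drives - parity) * size_tb * 1e12 / 2**40

# 2x 14 TB mirror: one drive's worth of data capacity
print(round(raw_data_tib(2, 14, 1), 2))   # ~12.73 TiB before overhead

# 4x 8 TB RAIDZ1: three drives' worth
print(round(raw_data_tib(4, 8, 1), 2))    # ~21.83 TiB before overhead

# 4x 8 TB RAIDZ2: two drives' worth
print(round(raw_data_tib(4, 8, 2), 2))    # ~14.55 TiB before overhead
```

The gap between these raw numbers and the 12.25/20.5/13.6 TiB quoted above is exactly the ZFS overhead discussed a few posts down.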
You mention 12.25 from 14TB drives. How? Isn’t usable space on a 14TB drive more like 12.7?
You have those 2 pools, but in the equivalent of a Windows environment, that’s two entirely separate drive letters, right? So imagine I had a photo library, and eventually it just kept growing. Within the folder structure, it couldn’t just overflow onto the other, could it? It would be “out of space”?
Part of the drive pricing I’m seeing is related to shucking, and that often seems to be where the big deals are, when they’re basically just slapping a whitelabel NAS-quality drive inside an enclosure…
I feel like redundancy and speed are not the biggest factors for me. Some redundancy makes sense for the setup, but I’ll also have a local backup plus an offsite backup, and it’s not mission-critical data, so not having access to it during downtime is acceptable. I’m not moving around 4K footage constantly (yet / for the foreseeable future!), so the speed isn’t massively important to me either. Plus I feel like I’d be more likely to shift to an NVMe “working disk” sort of thing.
I imagine if I did have 2x8TB drives in Z1, I could add a 3rd and gain capacity? I don’t have to jump to 4?
Is there any difference between a 2-drive Z1 and a mirror?
Thank you for your insights. Still wrapping my head around it all, including all the random terminology that gets thrown out there.
ZFS itself reserves a small amount of space to make sure it can do operations and such even if the pool is otherwise completely full; it’s called slop space. This is the calculator I use that takes all of this into account and makes it easy: ZFS Capacity Calculator - WintelGuy.com. Also, those theoretical pool capacities were in TiB (tebibytes), not TB (terabytes). The calculator shows both TB and TiB.
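To make the TB-vs-TiB point concrete: a “14 TB” drive is 14×10¹² bytes, which is only about 12.73 TiB, and slop space then shaves off a bit more. A rough sketch, assuming current OpenZFS defaults (slop is 1/32 of the pool via `spa_slop_shift = 5`, with a 128 GiB cap in recent releases; the calculator linked above also subtracts metadata overhead, which is why its figure is lower still):

```python
TIB = 2**40

def slop_bytes(pool_bytes: int) -> int:
    """Approximate default OpenZFS slop reservation: 1/32 of the pool,
    at least 128 MiB, capped at 128 GiB (assumes recent OpenZFS)."""
    return min(max(pool_bytes // 32, 128 * 2**20), 128 * 2**30)

pool = 14 * 10**12            # usable side of a 2x 14 TB mirror, in bytes
print(round(pool / TIB, 2))   # ~12.73 TiB raw (the "12.7" figure)
print(round((pool - slop_bytes(pool)) / TIB, 2))  # ~12.61 TiB after slop
```

The remaining gap down to the calculator’s 12.25 TiB comes from the additional metadata and allocation overhead it models.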
No, a pool is a “drive” that gets assigned a “drive letter”, and a vdev is a collection of drives under that drive letter. You can have 1 vdev or 100 vdevs (or whatever the limit is, idk) with any number of drives inside a single pool, and it will still all be 1 drive letter, just with the data spread across all drives in the pool.
Can’t comment as I just buy old enterprise drives, but yes that is usually the case with shucking.
That’s fine, but even if you don’t need massive redundancy and speed, you can still get more space while maintaining at least 1 drive of safety by doing a Z1 vs a mirror (though you do need at least 3 drives for Z1).
You can add 1 drive at a time, there is no requirement to add in certain multiples. However there are minimum drive counts for raidz arrays.
Raidz1 needs 3 drives minimum, z2 is 4 and z3 is 5. Mirror is minimum 2.
For mixing drive sizes UNRAID is IMO the way to go. At least until the “Anyraid” feature gets implemented in ZFS.
With single drive stripes you are missing out on basically all the benefits of using ZFS.
2 wide raidz1 is only possible to create via CLI, not through the GUI.
It has the “advantage” of being able to be expanded 1 drive at a time (with some caveats and disadvantages), compared to a 2-wide mirror, which needs to be grown 2 drives at a time.
Then there is the copies=2 option. Basically, data mirrored on the same drive…
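For reference, the two CLI-only options just mentioned look roughly like this (a sketch; pool and dataset names are placeholders, and TrueNAS generally discourages managing pools from the shell):

```shell
# 2-wide RAIDZ1 - only creatable via the CLI, not the TrueNAS GUI:
zpool create tank raidz1 /dev/sda /dev/sdb

# copies=2 on a dataset: every block is stored twice, even on a single
# disk. Helps against bad sectors, NOT against whole-drive failure.
zfs set copies=2 tank/photos
```

Because `copies` is a dataset property, it only applies to data written after it is set; existing data keeps whatever copy count it was written with.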
So actually this alone may address my headaches! I originally thought you simply couldn’t do it.
I could buy a 14TB drive on sale, as I only “need” 1, and pair it with my existing 8TB. I am not at the limit of 8TB yet and when I approach it I can buy another 14TB or maybe a 20TB would be on sale. Then I jump from 8 to 14TB. And that’s totally fine with me. Why have the 14TB space from day one if you don’t need it?
And especially with the potential idea of shucking external drives: a) they’re cheaper; and b) often a larger drive goes on a great sale for even less than the smaller-capacity one.
I do understand I can’t just ADD the 3rd drive and get SOME incremental space like SHR, but I also feel like that’s far less of an issue than when I was dealing with 1-3TB drives and larger capacity was prohibitively expensive. I’m not entirely sure what the old rotated-out drive (8TB in this example) would be used for, but I assume I can find some use for it!