Go ahead! ZFS is designed to handle entities (volume size, file size, number of files, drives,…) that are far beyond today’s practical limits. As long as you have enough room for hard drives, you don’t have to worry about any limit.
Just some advice from experience… Having a home media library is nice; however, between the money for hardware, energy, cooling, and maintenance (and there is maintenance, it is not build-and-forget), it may be better to consider streaming services.
My server started out as a media server and backup server. I have reduced my storage significantly and now it is just a backup server.
Of course if I had a budget like you to spend on this kind of thing, I may do that too. But I’d rather have a new truck, mine is old as dirt.
LSI 3008 and expander on-board, exposed as… a long wall of SATA plugs!
I’m flabbergasted that someone had the idea to design that, that someone else approved it being manufactured, and that the contraption found at least one client.
Back to the question, you can have as many pools as you want, with as many vdevs in each pool as you want (because you may want to add vdevs to the existing pool rather than multiply pools). And when using proper SAS backplanes and SAS expanders you can have these multiple JBODs hanging on just one HBA. You do NOT need one SAS port per drive!
However if you’re ready to invest in tens of 18+ TB drives, you may want to upgrade to a motherboard which supports more RAM (preferably ECC).
What about the system/setup will cause TrueNAS to require more and different memory?
I am not doing any transcoding, and currently I'm testing out running Plex on a separate computer, targeting the server for files only.
Writing to the NAS will not take place during movie watching.
Also on cost, I am aware that this is not a one-time investment but has running costs, but compared to buying everything, which I used to do, it is a lot cheaper.
Streaming services are a big h3ll n0. The way they keep changing their plans, removing access to 4K HDR, Atmos, etc. at will, the only option for me is to store locally.
I will still buy 4K discs on sale to own; there are bad rips from time to time, HDR artifacts, etc.
I had 700+ TB with only 64 GB RAM. If you’re using it primarily for media, additional RAM isn’t absolutely necessary. 32 GB is a bit tight, but the system will run just fine…
If you’re not doing any transcoding, I really don’t understand why you’d run Plex from a separate computer and pull the media over the network: server → Plex → client. Just put Plex on the server. If you DO need video transcoding, you’ll probably want a cheap GPU (Intel Arc) that supports HEVC since I don’t believe the 4790K iGPU has accelerated decode.
With one 9300-16e you can connect an additional 64 drives (in JBODs) at 3 Gbps each, sufficient for spinning hard drives. You don’t need SAS drives; SATA drives are fine.
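Where that "64 drives at 3 Gbps" figure comes from, as a back-of-the-envelope sketch (the lane count and link rate are my assumptions, based on the 9300-16e being a 16-lane 12 Gb/s SAS3 HBA):

```python
# Sketch: per-drive bandwidth share when 64 drives hang off one 9300-16e.
# Assumptions: 16 external SAS lanes (4 x SFF-8644 ports), 12 Gbps SAS3
# link rate per lane, drives evenly spread behind expanders.

lanes = 16            # assumed: a -16e HBA exposes 16 external lanes
gbps_per_lane = 12    # assumed: SAS3 link rate
drives = 64

per_drive_gbps = lanes * gbps_per_lane / drives
print(f"{per_drive_gbps:.0f} Gbps per drive")  # ~3 Gbps raw
# ~3 Gbps raw per drive is still far more than a spinning disk can sustain.
```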
Writing to the NAS is fine while watching movies. Even the highest-bitrate remuxes average ~100 Mbps (12.5 MB/s).
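To see why a remux stream barely registers against a pool of spinning disks, here is a rough sketch (the ~100 Mbps figure is from the post above; the single-drive sequential throughput is an assumed ballpark, not a measurement):

```python
# Sketch: how much of one HDD's throughput a 4K remux stream consumes.
# Assumption: a modern HDD sustains roughly 150 MB/s sequential.

remux_mbps = 100              # megabits per second (figure from the post)
remux_MBps = remux_mbps / 8   # convert megabits to megabytes per second

hdd_MBps = 150                # assumed single-drive sequential throughput

print(f"Remux read load: {remux_MBps} MB/s")
print(f"Share of one drive: {remux_MBps / hdd_MBps:.0%}")
```

Even a single drive has an order of magnitude more headroom than one playback stream, so concurrent writes are not a problem.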
Watched one video on SAS expanders, but wouldn’t that require a motherboard in each JBOD cabinet?
With the reasonable price of LSI 16e cards on eBay, I was thinking of having a PSU + 20 drives in a case, then running the 20 cables from the back of the current unit into each JBOD enclosure, with a 24-pin PSU trigger card to power up the drives with the main system.
As long as the motherboard and TrueNAS can handle the 60 (64) extra SATA channels in addition to the 21 (20 + boot drive) currently in use…
And again, as long as memory isn’t an issue. If it is, I can always get another cheap older board with a decent CPU and memory and run 2 TrueNAS servers with 40 drives each instead of upgrading the current one to support 80 drives.
But 80 drives is still years in the future, just trying to future-proof… if there even is such a thing with computers.
Run 20 cables to a JBOD? I’m not sure how you would do that with a 4-port 16e card or how you’re planning to connect the JBODs.
My old system was the motherboard/system in a case with a single 16e card. From each port was a SAS fanout that split into two SAS connections. Each SAS connection went to an expander in a JBOD. Each expander had 2 x SAS-to-SATA fanout cables to connect 8 drives. Each JBOD needed a PSU, expander, and fan controller. Power consumption was around 800W.
My current system has a motherboard with 2 x 16e cards. A SAS cable connects from each port to a 12-disk JBOD. The JBOD has an external-to-internal SAS adapter. An internal SAS cable runs from the adapter to the backplane. Each JBOD has a Supermicro power board, fan controller, PSU, and SAS internal/external adapter. Power consumption is around 1300W.
Hell no! Each SAS cable (SFF-8088 or 8644) is worth 4 SAS lanes, and you can have more than one drive on each lane.
The JBOD enclosure only needs a backplane with a SAS expander, and you’re all set with one cable to the HBA: no extra motherboard.
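The bandwidth math for one such cable works out fine for spinning drives. A rough sketch (lane count comes from the SFF-8088/8644 cable format mentioned above; the 12 Gbps link rate and 20-drive JBOD size are assumptions matching the build being discussed):

```python
# Sketch: one external SAS cable feeding a 20-drive JBOD via an expander.
# The cable carries 4 narrow SAS lanes; the expander fans them out to
# however many bays the backplane has.

lanes_per_cable = 4    # SFF-8088 / SFF-8644 = 4 SAS lanes (from the thread)
gbps_per_lane = 12     # assumed: SAS3 end to end
drives_in_jbod = 20    # assumed: the 20-drive enclosure discussed above

uplink_gbps = lanes_per_cable * gbps_per_lane
per_drive_gbps = uplink_gbps / drives_in_jbod
print(f"Uplink: {uplink_gbps} Gbps shared by {drives_in_jbod} drives "
      f"= {per_drive_gbps:.1f} Gbps each")
```

Even fully oversubscribed, each drive's share is well above what a spinning disk can deliver, which is why one cable per JBOD is enough.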
Just watched a few more videos, but I’m still not understanding 100%.
So this guy explained a 2-port card connecting to a 6-port card to expand SAS connectivity, but is the card passive? Does it not need power to do the SAS expansion? Is it powered through the SAS port feeding it?
Expanders are active. In a JBOD chassis they would be powered by the chassis PSU, just like the drives. Expanders on add-on cards are powered either by the PCIe slot or by a power connector (with no PCIe involvement at all).
There’s no power in a SAS cable.