Trying to decide what OS to use on my new NAS for purely shared storage

I'll preface this by saying I'm a novice homelab user. I did go to college for my CIT/CIS degrees, but that was a long time ago, and I didn't do much in the field before pursuing other things. My knowledge is lacking, but I'm capable of learning and understanding.

I currently have a PowerEdge 710 LFF running ESXi 6.5.
Its specs are 2x L5640 CPUs, 96GB DDR3, and an H700 controller.

I set this up years ago. I created a hardware RAID 10 with 6x 3TB SATA drives and installed OMV in a VM to create a network share of the RAID volume so all my VMs and local PCs could use the storage via Windows networking, SMB, etc., plus 2 SSDs for the VMs. I can't say I was well versed in anything when I set all this up; however, it's been working for me without error.

Fast forward to now: I need more space for media and misc data that can be accessed from other VMs and by my Emby server running in a VM on the system above. So I recently purchased a PowerEdge 730xd LFF and 12 Seagate ST6000NM0095 6TB SAS3 enterprise drives (second hand but good condition). Waiting for it all to get delivered.

That server's specs are 2x E5-2609 v3, 96GB DDR4, H330 mini controller (not the PCIe version), iDRAC Express, T7-A17. I can provide BIOS and other firmware versions if needed.

I had considered just doing what I did on the other server, but do I really need two virtualization servers? This new server is more capable, with better CPU options and newer hardware. Anyway, I saw I can use TN in a VM similar to how I did OMV, but without making a hardware RAID setup like I did before. I only have gigabit networking, which is plenty for my needs, so raw performance above that isn't super important.

Collecting dust, I have a spare Samsung 970 Pro 1TB and another 1TB M.2 drive that can be utilized for this build. Since ESXi runs off a flash drive, I'd like to know if that is a viable option for TN. But if the ESXi route isn't bad, I can use the Samsung for VM space where TN would get deployed.

Will I be able to do what I would like fairly easily, without too much of a learning curve? Should I go bare metal or the ESXi route? I'm also open to other suggestions. Apologies if this sounds scatterbrained; I've been having some head fog.

1 Like

ESXi and hardware RAID are dead, aren't they? Unless you pay money for ESXi…

The learning curve for ZFS is a strange one. Some people take to it like a duck to water, others find it more difficult to grasp than rocket science.

It is not recommended to boot TrueNAS from a USB flash drive.

I'll say this, although it's clearly biased: TrueNAS is the only NAS OS I would run. If it didn't exist, I would be looking into OMV.

Edit: I would also not virtualize TrueNAS. It's been done successfully by others; I just see no reason to do so.

3 Likes

I would also lean toward not virtualizing TN. TN works well virtualized if done well (I used to do it myself for a long time with ESXi 7.0, but since their departure from homelabbers I ditched it). I did evaluate using Proxmox in a similar setup but came to the conclusion that TN on metal did everything I needed it to without the added complexity, especially since we now have (really well working) jails on SCALE with Jailmaker. If all you need is a couple of instances of Emby or similar, then by all means I would go TN SCALE on metal. Whether you serve up your media using VMs, apps, or jails is up to you, but personally I'd go with jails.

You can use one of the mentioned SSDs to install TN on, and then use those SAS drives for data. Just don't install TN on a USB stick; that used to be OK, but there's simply no reason to do so anymore.

If, however, you did decide you want to virtualize your TN, then always get a separate LSI HBA (a real one, using IT and not IR (RAID) firmware) and pass the whole HBA through to the TN VM. However, if you're not well versed in virtualization or TN, then I would really recommend NOT virtualizing TN until you are.

Then, of course, ditch your old heating system (that also served up your files and media), as it's really not an appropriate use of energy anymore.

2 Likes

I appreciate the replies. Seems my brain fog has cleared, so maybe this will be more understandable.

Doing some searching, I found the H330 mini can be set to HBA (IT) mode without flashing and passed through. However, it may not be the best option. I'm not really familiar with things like Docker or, in this case, jailed containers. I can see how, with this newer server, it might be best to ditch the old 710 if I were to upgrade the CPUs and use this as the main server. Is TNC/TNS capable of running the necessary Windows VMs I have on my old server? I use a Server 2012 R2 for my Emby and torrenting, a Server 2019 for my security camera system, and then I have some archival VMs for ArchiveBox and similar. Not sure whether I want to redo those fresh or not.

My original plan for this new machine was to have it as a network share that my other server could use as extended storage, and anyone connected to my network could access the single array. So if I had it as RAID 6, that would allow 60TB of space to dump media and other data into. I also want to pull all my media off my Google business drive, which is 31TB, and have it stored locally again rather than mounted with rclone on my Windows VM, which also has Emby on it.

So with TNC/TNS, whichever is most recommended, could I basically do that in a bare-metal install? Meaning as a network volume that can be connected to via Windows, SMB, etc.? I was researching unRAID, but it seems more for those who have mismatched drive sizes and brands and want a software RAID solution, which I don't need since all my drives are the same.

EDIT:
Just ordered the 2.5" flex bay kit for the 730xd so I can have two SSD slots in the back. I can use one as a boot drive and the other as cache, I suppose. I also didn't know there is a midplane kit for an additional 4x 3.5" drives; might consider that later. This server is full of surprises. Next on the list is a P2000/P4000 for hardware transcoding.

EDIT2:

Grabbed a matched pair of E5-2683 v4s and an HBA330 monolithic so I don't have to mess with flashing.

I really do want an AIO setup like I have with my 710. So I might try virtualizing with ESXi 8 with an enterprise key and see how it goes running TN with passthrough of the drives. If that doesn't go well, I'll rethink the bare-metal approach. I'll use this build as a test server to see how it all works in a few different ways.

1 Like

Actually, I am currently running my CORE system on an MLC USB drive… zero issues so far.

Actually, you really should flash it, and even that might not be enough… I suggest reading What's all the noise about HBAs, and why can't I use a RAID controller? | TrueNAS Community.

ZFS does not use hardware RAID; the equivalent here is called RAIDZ. It's important to use the correct terminology, because it's possible (but very, very bad) to run ZFS on top of hardware RAID: using the correct terminology tells us whether we need to raise alarms about your data. Terminology and Abbreviations Primer | TrueNAS Community

If you want to do block storage, mirrors are the best way. I suggest reading Resource - The path to success for block storage | TrueNAS Community.

SCALE is the only version being actively developed; FreeBSD 13 (on which CORE is built) will reach EOL in less than two years.

1 Like

For just running simple VMs, you can run them on SCALE.

And you can probably even port their disk images across.

And you can easily convert VM images to a zvol:

qemu-img convert -O raw <path to vhd/vmdk/etc> /dev/zvol/path/to/zvol

It can take a while, so it's a good idea to run it in tmux.
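
As a rough sketch (the pool, zvol name, and VMDK path below are just examples, not from this thread):

```
# create a zvol at least as large as the source virtual disk
zfs create -V 200G tank/win2012

# start a tmux session so the conversion survives a dropped SSH connection,
# then run the convert inside it (-p shows progress)
tmux new -s convert
qemu-img convert -p -O raw /mnt/tank/staging/win2012.vmdk /dev/zvol/tank/win2012
```

Point qemu-img at the small descriptor .vmdk; it will follow it to the large -flat.vmdk that holds the actual data.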

3 Likes

Yep.

This mirrors my experience; I ditched ESXi for running TN on metal.

And now with TNS and sandboxes I can run all my services essentially on metal too.

And VMs etc. all benefit from access to storage that is as fast as possible via virtio.

2 Likes

I think I'm just worried about complexity for something simple. Since it's been a long time, maybe the way I set up my older server was complex and I just forgot that. I just want to take a lot of disks and turn them into a network-accessible root folder. However, now seeing the power advantages this new server has over the older one with my upgrade choices, maybe that isn't the best use case anymore. To be completely honest, I think I last logged into my OMV VM GUI years ago. Everything just works, and the knowledge of how I did it has already been purged.

I heard of ZFS a long time ago but didn't dive into it much, so as far as which style of storage layer I want to go with, I have no idea. I basically want a datastore drive on an SSD, like I would have in ESXi, where my VMs would reside and be deployed to. Then the 12 6TB drives as a single pool, I guess it would be called, with one or two of the drives acting as parity like they would for RAID 5/6. The pool would be one large share that any VM could mount folders from and read/write to as storage, but not where the OSes reside. On my main PC I could mount it as a network drive, for instance. I know TN is completely capable of this, but moving away from ESXi, I think, is my crutch since it's so familiar. I should probably watch some videos on TN running VMs in the meantime.

Containers are not something I'm familiar with. Maybe once I have it running I will have a better understanding than from looking at guides and reading articles. I'm a hands-on learner, so without it running in front of me I can't quite visualize it.

Then I suggest either two RAIDZ2 VDEVs or four three-way mirrors.
I also suggest reading iX's ZFS Pool Layout White Paper.
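For a rough idea of what that first layout means, a single pool built from two 6-wide RAIDZ2 vdevs looks something like this at the CLI (pool and device names are placeholders; in TrueNAS you would normally build this in the web UI under Storage):

```
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
```

Both vdevs belong to the same pool, so the capacity shows up as one unit.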

1 Like

Thanks, I read it over and watched some videos on setting up the VDEVs. Seems less daunting now, which is nice. I also looked at VM usage and am fairly sold on it. Though I did read something that your VMs can't access your NAS shares/pools? I could be mistaken on what I heard, though. Also, is it possible to install your VMs on the same drive TNS uses as boot? So if I have a single 1TB SSD in there and install TNS to it from a flash drive, can the rest of it be used without messing with partitions? Not that I'm opposed to grabbing another SSD, but I don't want to waste a 1TB drive on just TNS.

As for the VDEV setup: the only downside I see, personally, for doing two 6-drive vdevs in Z2 is that I'm now losing four drives' worth of space versus making a single 12-drive pool in Z2; in essence a dual RAID 6, if this were hardware and not ZFS. That, and if the pools are split I assume I would have two SMB locations, or maybe you can bridge them; I haven't gotten that far in reading.
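
Rough capacity math, assuming 6TB per drive and ignoring ZFS overhead: two 6-wide RAIDZ2 vdevs give 2 × 4 × 6TB = 48TB usable, a single 12-wide RAIDZ2 gives 10 × 6TB = 60TB, and a 12-wide RAIDZ3 gives 9 × 6TB = 54TB.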

Raw space is more my concern, I guess, than redundancy. However, I see that IOPS are affected by doing it the way I want versus split. That, and while these are enterprise drives, they are used, so they have a higher likelihood of failure than if they were new. Also, do I need to have a cache drive, or is that optional but less optimal without one?

I have no idea how to quote or direct reply, so I apologize for not including quotes.

Select the passage you want to reply to and hit “Quote” or “Reply” below.

You will create the SMB share at the pool level, not the vdev level, so the storage space of the two vdevs will be combined in one SMB share.
You will gain speed and IOPS.
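
A minimal sketch of what that looks like (the dataset name is just an example; shares are normally set up in the web UI):

```
# a dataset created on the pool automatically spans both vdevs
zfs create tank/media
# then point an SMB share at /mnt/tank/media under Shares in the UI
```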

While there is a way to partition your boot drive to also use it as storage, it is not recommended at all.

VMs run best from fast drives with high IOPS. If you have a spare PCIe x8 slot, I suggest you put the VMs on a pair of mirrored NVMe drives via a simple adapter card (bifurcation needed).
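
For example, a dedicated VM pool on two NVMe drives is just a single mirror vdev, something like this (pool and device names are placeholders; again, normally done in the UI):

```
zpool create vmpool mirror nvme0n1 nvme1n1
```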

2 Likes

Not without configuring, like any other system on your network.

Not recommended. Also, it appeared to me you wanted to use your HDDs to host them…

…which is the reason you want to have multiple VDEVs instead of a single one: IOPS.

No, just a single pool with two VDEVs.

My suggestion, like many others', is to use a pair of SSDs for the VMs on a dedicated pool: this would allow you to have your data pool as a 12-wide RAIDZ3 (or RAIDZ2 if you feel adventurous) VDEV.

You should have at least 64GB of RAM in order to consider L2ARC.
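
If you do get there, an L2ARC device is just a cache vdev added to an existing pool, and it can be removed again without risk to the data (device name is a placeholder):

```
zpool add tank cache nvme2n1      # add an L2ARC device to the pool
zpool remove tank nvme2n1         # remove it later if it isn't helping
```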

Then don't do it, and buy a SATA DOM or a 250GB SSD instead. Although it's frowned upon, you could even go with a quality USB boot drive, where quality means MLC/enterprise-grade USB. I have to waste an entire drive just for booting? | TrueNAS Community

1 Like

Oh, OK, now I understand. The hierarchy is a bit confusing at first.

Sorry, I should have been more specific. I want to keep the spinning disks for storage and any solid-state stuff for VMs/jails.

I had never even heard of a SATA DOM; those are super cool. Found a 64GB one for Dells that goes in the yellow SATA port with the 4-pin power next to it. Cool little piece of tech. It might not even need the power connector, it seems, if you have a SATA port like the yellow one.

1 Like

I will try this out. The largest VM I have is ~150GB. The rest I can just spin up fresh.

I'm not sure why there are two VMDK files for my Server 2012 VM. Do I convert both, or just the one targeted by the VM as the disk file? All my VMs have two files.

I also read that a zvol or dd isn't required and you can use the raw image? Not sure what all that means right at this moment, though.

My server likely won't be here till the end of the week, so I'm making sure I get all my ducks in a row.


Might just be easier to abandon 2012 R2, update to a newer MS server OS, and start fresh. I just hate to have to start my Emby server over, but I might be able to migrate without too many issues.

Though I did read something that your VMs can't access your NAS shares/pools?

Set up a bridge in networking; add your interface to that bridge, remove all IPs from your interface, and put them on the bridge.

Your VM can now access your shares/pools if you properly set up SMB/NFS permissions for it to do so.
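
Once the bridge is in place and permissions are set, the VM reaches the share like any other client. As a minimal sketch from a Linux VM (the IP, share name, and mount point are made-up examples):

```
# NFS
sudo mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media
# or SMB
sudo mount -t cifs //192.168.1.50/media /mnt/media -o username=mediauser
```

From a Windows VM the equivalent is just mapping \\<NAS IP>\media as a network drive.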

1 Like

EDIT: I made a video describing how to set a static IP and make a network bridge so that you can access your NAS host from a VM or Sandbox/Jail.

1 Like

You’re just saying the same thing as I did but better :frowning:

Edit: Well, now you're just showing off. That being said, awesome work making new resources!

1 Like

That’s not how I meant it :slight_smile:

Same as what you said, but in extreme detail

3 Likes