Trying to decide what OS to use on my new NAS for purely shared storage

I appreciate the replies. Seems my brain fog has cleared so maybe this will be more understandable.

Doing some searching, I found the H330 Mini can be set to HBA (IT) mode without flashing and passed through. However, it may not be the best option. I’m not really familiar with things like Docker, or in this case jailed containers. I can see how, with this newer server, it might be best to ditch the old 710 if I were to upgrade the CPUs and use this as the main server. Is TNC/TNS capable of running the Windows VMs I have on my old server? I use a Server 2012 R2 VM for my Emby and torrenting, a Server 2019 VM for my security camera system, and then I have some archival VMs for ArchiveBox and similar. Not sure whether I want to redo those fresh or not.

My original plan for this new machine was to have it as an NFS server that my other server could use as extended storage, and that anyone connected to my network could access as a single array. So if I had it as RAID 6, that would allow 60TB of space to dump media and other data into. I also wanted to pull all my media off my Google business drive, which is 31TB, and have it stored locally again rather than mounted with rclone on my Windows VM, which also has Emby on it.

So with TNC/TNS, whichever is most recommended, could I basically do that in a bare-metal install? Meaning an NFS volume that can be connected to via Windows, SMB, etc.? I was researching unRAID, but it seems aimed at those with mismatched drive sizes and brands who want a software RAID solution, which I don’t need since all my drives are the same.

EDIT:
Just ordered the 2.5" flex kit for the 730xd so I can have two SSD slots in the back. I can use one as a boot drive and the other as cache, I suppose. I also didn’t know there is a midplane kit for an additional 4x 3.5" drives. Might consider that later. This server is full of surprises. Next on the list is a P2000/P4000 for hardware transcoding.

EDIT2:

Grabbed a matched pair of E5-2683 v4s and an HBA330 monolithic so I don’t have to mess with flashing.

I really do want an AIO setup like I have with my 710. So I might try virtualizing with ESXi 8 with an enterprise key and see how it goes running TN with passthrough of the drives. If that doesn’t go well I’ll rethink the bare metal approach. I’ll use this build as a test server to see how it all works in a few different ways.

1 Like

Actually, I am currently running my CORE system on an MLC USB drive… zero issues so far.

Actually, you really should flash it, and even that might not be enough… I suggest reading What's all the noise about HBAs, and why can't I use a RAID controller? | TrueNAS Community.

ZFS does not use hardware RAID; its parity scheme is called RAIDZ (RAIDZ1/2/3). It’s important to use the correct terminology, because it’s possible (but very, very bad) to run ZFS on top of hardware RAID: using the correct terminology tells us whether or not to raise alarms about your data. Terminology and Abbreviations Primer | TrueNAS Community

If you want to do block storage, mirrors are the best way. I suggest reading Resource - The path to success for block storage | TrueNAS Community.
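For a concrete picture of what that means (disk and pool names are made up, and you would normally build this through the UI rather than the shell), "mirrors" here means a pool striped across mirrored pairs, roughly:

zpool create blocks mirror sda sdb mirror sdc sdd mirror sde sdf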

SCALE is the only version being actively developed; FreeBSD 13 (on which CORE is built) will reach EOL in less than 2 years.

1 Like

For just running simple VMs, you can run them on SCALE.

And you can probably even port their disk images across.

And you can easily convert VM images to a zvol:

qemu-img convert -O raw <path to vhd/vmdk/etc> /dev/zvol/path/to/zvol

It can take a while, so it’s a good idea to run it in tmux.
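If it helps, here is roughly the whole round trip, run interactively (the pool, zvol and file names are made up, and the zvol should be at least as large as the image’s virtual size):

zfs create -s -V 200G tank/vms/win2012    # sparse zvol sized for the guest disk
tmux new -s convert                       # keeps the long copy alive if SSH drops
qemu-img convert -p -O raw /mnt/tank/staging/win2012.vmdk /dev/zvol/tank/vms/win2012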

3 Likes

Yep.

This mirrors my experience; I ditched ESXi for running TN on metal.

And now with TNS and sandboxes I can run all my services essentially on metal too.

And VMs etc. all benefit from access to storage that is as fast as possible via virtio.

2 Likes

I think I’m just worried about complexity for something simple. Since it’s been a long time, maybe the way I set up my older server was complex too and I just forgot. I just want to take a lot of disks and turn them into a network-accessible root folder. However, now that I see the power advantages this new server has over the older one with my upgrade choices, maybe that isn’t the best use case anymore. To be completely honest, I think the last time I logged into my OMV VM GUI was years ago. Everything just works, and the knowledge of how I did it has already been purged.

I heard of ZFS a long time ago but never dove into it, so as far as which style of storage layout I want to go with, I have no idea. I basically want a datastore drive on an SSD, like I would have in ESXi, where my VMs would reside and be deployed to. Then the 12 6TB drives as a single pool, I guess it would be called, with 1 or 2 of the drives being parity like they would be for RAID 5/6. The pool would be one large share that any VM could mount folders from and read/write to as storage, but not where the OSes reside. On my main PC I could mount it as a network drive, for instance. I know TN is completely capable of this, but moving away from ESXi I think is my crutch since it’s so familiar. I should probably watch some videos on TN running VMs in the meantime.

Containers are not something I’m familiar with. Maybe once I have it running I will have a better understanding than I get from looking at guides and reading articles. I’m a hands-on learner, so without it running in front of me I can’t quite visualize it.

Then I suggest either two RAIDZ2 VDEVs or four three-way mirrors.
I also suggest reading iX's ZFS Pool Layout White Paper.
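For a rough idea of what the first option looks like at the ZFS level (disk names are made up; TrueNAS builds this through the storage UI rather than the shell):

zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl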

1 Like

Thanks, I read it over and watched some videos on setting up the VDEVs. Seems less daunting now, which is nice. I also looked at VM usage and am fairly sold on it. Though I did read something that your VMs can’t access your NAS shares/pools? I could be mistaken about what I heard, though. Also, is it possible to install your VMs on the same drive TNS uses as its boot drive? So if I have a single 1TB SSD in there and install TNS to it from a flash drive, can the same drive also be used without messing with partitions? Not to say I’m opposed to grabbing another SSD, but I don’t want to waste a 1TB drive on just TNS.

As for the VDEV setup: the only downside I see personally for doing two 6-drive pools/vdevs in Z2 is that I’m now losing 4 drives’ worth of space to parity, versus 2 with a single 12-drive pool in Z2. In essence a dual RAID 6, if this were hardware and not ZFS. That, and the pools are split, so I assume I would have 2 SMB locations, or maybe you can bridge them; I haven’t gotten that far in my reading.

Raw space is more my concern, I guess, than redundancy. However, I see that IOPS are affected by doing it the way I want versus split. That, and while these are enterprise drives, they are used, so they have a higher likelihood of failure than if they were new. Also, do I need to have a cache drive, or is that optional but less optimal without one?

I have no idea how to quote or direct reply, so I apologize for not including quotes.

Select the passage you want to reply to and hit “Quote” or “Reply” below.

You will create the SMB share at the pool level, not the vdev level. So the storage space of the 2 vdevs will be combined in one SMB share.
You will gain speed and IOPS.
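Roughly, assuming a hypothetical pool named tank: you create a dataset and share that, and the share sees the free space of the whole pool regardless of how many vdevs sit behind it.

zfs create tank/media
# then point an SMB share at /mnt/tank/media from the Shares page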

While there is a way to partition your boot drive to also use it as storage, it is not recommended at all.

VMs run best from fast drives with high IOPS. If you have a spare PCIe x8 slot, I suggest you put the VMs on a pair of mirrored NVMe drives via a simple adapter card (bifurcation needed).

2 Likes

Not without configuration, just like any other system on your network.

Not recommended. Also, it appeared to me that you wanted to use your HDDs to host them…

…which is the reason you want to have multiple VDEVs instead of a single one: IOPS.

No, just a single pool with two VDEVs.

My suggestion, like many others’, is to use a pair of SSDs for the VMs on a dedicated pool: this would allow you to have your data pool in a 12-wide RAIDZ3 (or RAIDZ2 if you feel adventurous) VDEV.
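As a sketch with made-up device names, that boils down to two separate pools (again, you would normally build these through the UI):

zpool create fast mirror nvme0n1 nvme1n1
zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl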

You should have at least 64GB of RAM in order to consider L2ARC.

Then don’t do it and buy a SATADOM or a 250GB SSD instead. Although it’s frowned upon, you could even go with a quality USB boot drive, where quality means MLC/enterprise-grade USB. I have to waste an entire drive just for booting? | TrueNAS Community

1 Like

Oh, ok now I understand. The hierarchy is a bit confusing at first.

Sorry, I should have been more specific. I want to keep the spinning disks for storage and any solid-state stuff for VMs/jails.

I had never even heard of SATADOMs; those are super cool. Found a 64GB one for Dells that goes in the yellow SATA port and the 4-pin power next to it. Cool little piece of tech. It seems it might not even need the power connector if you have a SATA port like the yellow one.

1 Like

I will try this out. The largest VM I have is ~150GB. The rest I can just spin up fresh.

I’m not sure why there are two vmdk files for my Server 2012 VM. Do I convert both, or just the one targeted by the VM as the Disk File? All my VMs have two files.

I also read that a zvol or dd isn’t required and you can use the raw image? Not sure what all that means right at this moment, though.

My server won’t likely be here till the end of the week so making sure I get all my ducks in a row.


It might just be easier to abandon 2012 R2, update to a newer MS server OS, and start fresh. I just hate to have to start my Emby server over, but I might be able to migrate without too many issues.

Though I did read something that your VMs can’t access your NAS shares/pools?

Set up a bridge in Networking; add your interface to that bridge, remove all IPs from your interface, and put them on that bridge.

Your VM can now access your shares/pools if you properly set up SMB/NFS permissions for it to do so.
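Conceptually it amounts to something like this (interface name and address are made up; on TNS you make these changes through the Network UI so they persist, not from the shell):

ip link add br0 type bridge
ip link set enp3s0 master br0         # attach the physical NIC to the bridge
ip addr flush dev enp3s0              # remove the IP from the NIC...
ip addr add 192.168.1.10/24 dev br0   # ...and put it on the bridge instead
ip link set br0 up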

1 Like

EDIT: I made a video describing how to set a static IP and make a network bridge so that you can access your NAS host from a VM or Sandbox/Jail

1 Like

You’re just saying the same thing as I did but better :frowning:

Edit: Well now you’re just showing off. That being said, awesome work making new resources!

1 Like

That’s not how I meant it :slight_smile:

Same as what you said, but in extreme detail

3 Likes

Thanks, guys. It was very important that my VMs could access the pools/SMB. I had read people praise that your VMs get, in their words, direct access if you use TN to virtualize, but then I find out it’s really not quite that way and also needs a workaround (the bridge). So either way there isn’t direct access (correct?), but rather SMB shares are used like on any device connected to the same network that isn’t a VM. That isn’t really a big deal and is how I do it already with ESXi and OMV. I’ll read over the Accessing NAS From a VM | documentation.

Also, do I need SLOG/L2ARC or drives for metadata? I do have 96GB of RAM in it but would like to not use up a bunch, so the VMs can use it. With that said, I’ve been looking over build options and what everything means. I’m thinking more about a 12-drive RAIDZ2, or maybe 11 (RAIDZ2) + 1 hot spare. Redundancy isn’t as important as available space, and IOPS isn’t as important since nothing will be running off that space; it’s more about stream/transfer speeds.

Edit: I see there are “Intel HPE IBM P3600 1.6TB HHHL PCI-E NVMe SSD 100% Life Remaining” drives on eBay for 120 bucks, and I saw a 400GB one in your build, Stux.

That is from before Optane really took over the SLOG market. At the time the only Optane drive was the newly released P4800X (IIRC). Later on I used a P4801X M.2 drive for another build.

Unfortunately Optane is getting harder to source now :(, but it’s still probably the bee’s knees.

I have a few M.2 drives lying around, 500GB to 1TB. Any recommendation on a PCIe card to slap them into that doesn’t require external power or a SATA cable? Maybe a card with 2 or 4 M.2 ports?