Trying to decide what OS to use on my new NAS for purely shared storage

NVM, had to enable legacy in the network settings of TNC. I'm now able to sync my data to TNC.

However, it doesn't detect any IO or stats in the report dashboard. Does that mean I won't be able to detect drive issues or failures? I guess if it scrubs the data it should be OK. Then I can see a drive failure from iDRAC or the LEDs on the drive bays.

Going to mark this solved.

The decision was ESXi 8.0.2 with TNC virtualized and the HBA330 passed through. Transfers over SMB easily stay at 99 MB/s.

Apparently using PCIe passthrough does not pass SMART, or other information, to TN.

Well darn. Guess I can kind of rely on the server itself to monitor health then. I've personally never had a hard drive fail, but I know it does happen. My Hitachi NAS drives are going on six-ish years now, always spun up in the old R710 server. So fingers crossed. These Seagate enterprise SAS drives are rated for 550TB/year; they will never see those kinds of numbers. Granted, they are second hand.

Hmm, stats are showing for the drives now, even temperature. When I moved the server to the garage and everything booted back up, there was a database error for statistics, so I cleared it. It didn't show anything before that. Interesting. I might have it run a SMART test just for giggles.

I see some of the drives have tests already and they say:

Does that mean it can access SMART? I ran a manual short test on that one with no errors. Not sure what "lifetime" means? If that's power-on hours, then these drives are about five-ish years old. Hope they hold up for another five.

Had to zoom out to get them all. Same drive as above

It's the power-on hour count at which the test started.
It's good that you can get all the standard data!
I suggest using @joeschmuck's Multi-Report

Will do!

Forgot to thank everyone again for all the help. Very welcoming community and glad to be a part of it. I hope I can eventually return the favor and help others once I learn more and become acquainted with TrueNAS.

PS.
Cloud Sync is freaking AWESOME! So easy to use, and I can now selectively pull down from my 30TB of data on gdrive, all in the background. I've been over their new 5TB enterprise policy for some time and now have to pull what I want before it's deleted.

That doesn’t make any sense. If the controller is passed through, that’s it, the VM controls it, just like the host could if it were not passed through.

I read somewhere in this forum that if a drive is passed through you don't get SMART data… but I was sceptical, hence the "apparently". I could have been clearer.

Passing through a disk != passing through a disk controller. Passing through a disk is fraught with traps and gotchas; passing through a controller is pretty safe and well understood. One edge case is NVMe drives: doing PCIe passthrough of the drive is akin to passing through a SAS controller with disks attached, while passing through a block device hosted on the drive has all the same problems you would have doing that with SATA or SAS disks.
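
If you want to double-check that SMART really is reachable inside the VM once the HBA is passed through, a quick sketch like this run from the TrueNAS shell works. Python is only used here to wrap smartctl, and the /dev/da0 and /dev/da1 paths are assumptions on my part; adjust them to whatever `smartctl --scan` reports on your box.

```python
# Sketch: confirm SMART health is visible through the passed-through HBA.
import subprocess

def smart_health(device: str) -> str:
    """Return the SMART health line for a device, or the error output."""
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True,
        text=True,
    )
    # ATA drives print "SMART overall-health self-assessment test result: PASSED",
    # SAS drives print "SMART Health Status: OK".
    for line in result.stdout.splitlines():
        if "overall-health" in line or "Health Status" in line:
            return line.strip()
    return result.stderr.strip() or "no health line found"

if __name__ == "__main__":
    for dev in ["/dev/da0", "/dev/da1"]:  # assumed device paths; adjust to yours
        print(dev, "->", smart_health(dev))
```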

I was wondering about the 80% rule. Considering you already lose drives to parity, adding another 20% loss is quite a punch to the stomach. How much performance loss are we talking about as you get closer to 100% capacity? This being a pool that doesn't do much writing beyond the initial copy (media mostly), will it be that noticeable? I don't see myself hitting 80%+ for a little while, but it will happen.

I just added my 1TB Samsung Pro SSD back into the flexbay. I plan on using it for Emby metadata and other things I change a lot. No sense scattering all that across the SAS platters. TN is telling me a single-drive pool isn't a great idea, and I get that. But since it's connected through the HBA, it's only available to TN. Is that an OK idea? None of the data is critical if it decides to die.

It’s fine if you understand the risks, and it’s especially fine if you then replicate its contents to your rusty pool.

Additionally, as I understand it, the Samsung is a 970 Pro. Which is a PCIe device. Pass it in via PCIe Passthrough.

I misspoke earlier. It's an 860 Pro, so a SATA drive.

I created a pool, then a dataset for SMB. I gave the Windows group access like the SAS "WindowsShare", but it doesn't show up as an option when you open SMB, only the Windows Share folder. Maybe it just takes some time to update, or I need to restart the service.

Nevermind, I forgot that creating the dataset doesn't mean it creates a share. I still need to go into Sharing and set it up. Works now.

A bit off topic, but how do you add the arrow drop-down for system builds in your personal profile?

Performance loss will come with fragmentation, which can be expected to grow as the pool fills but there’s no strict relation between the two. If the pool accumulates writes without ever deleting or rewriting files, fragmentation may remain low.
Performance, however, is the wrong concern. The main issue is that as you go over 99.9% and into 100% the pool will lock itself with no easy way out. A CoW file system has to record new metadata before it can “delete” anything; so, unlike FAT/NTFS/HFS/APFS/ext#, ZFS needs free space to recover free space… You can add a new vdev to have free space again, but with raidz this operation is irreversible.
So the guidance is to always keep the pool below 80% (50% for block storage!) and expand well before you hit the wall at 100% full.
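
If you want a simple way to keep an eye on this without waiting for an alert, here's a minimal sketch that parses `zpool list` and flags the 80% / 90% marks mentioned above. The thresholds and the use of Python to wrap the CLI are just my choices, not anything TrueNAS-specific.

```python
# Sketch: warn when any pool crosses the commonly cited 80% / 90% capacity marks.
import subprocess

def pool_usage() -> dict:
    """Return {pool_name: capacity_percent} parsed from `zpool list`."""
    out = subprocess.run(
        ["zpool", "list", "-H", "-o", "name,capacity"],
        capture_output=True, text=True, check=True,
    ).stdout
    usage = {}
    for line in out.strip().splitlines():
        name, cap = line.split("\t")       # -H gives tab-separated output
        usage[name] = int(cap.rstrip("%")) # capacity is printed like "45%"
    return usage

if __name__ == "__main__":
    for name, pct in pool_usage().items():
        if pct >= 90:
            print(f"{name}: {pct}% full - expand now")
        elif pct >= 80:
            print(f"{name}: {pct}% full - start planning the expansion")
        else:
            print(f"{name}: {pct}% full - OK")
```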

I see. I had been reading about fragmentation and the like but never saw any data about performance numbers, so I was curious. I doubt I will keep it below 80%, so I accept that risk, but I will keep it from filling up completely. I have no room for expansion to mitigate this and doubt I will be building another server anytime soon, so I want to maximize data availability. I suppose I should have considered this before using TN, or ZFS.

To prevent a bunch of writes and changes, I'm going to adjust my workflow for writing to the pool. I grabbed a 2TB SSD for torrents, so nothing will be written to the pool the way I did with my OMV pool on the other server. Since I have to seed, I can't just rename media and have it read by Emby. So I will copy media that needs processing for proper naming with FileBot to my 1TB SSD, and from there move it to the zpool on the SAS drives. This way the pool mostly sees one-time writes and reads, and I'll try to limit deletions.
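
For what it's worth, the copy-then-move part of that workflow boils down to something like this rough sketch. All the paths are made up, and FileBot itself isn't invoked here, just marked where it would run.

```python
# Sketch of the staging workflow: copy off the torrent SSD (so seeding continues),
# rename on the staging SSD, then move onto the SAS pool in a single write.
import shutil
from pathlib import Path

# Assumed mount points - replace with your actual datasets/shares.
TORRENT_SSD = Path("/mnt/torrent-ssd/complete")
STAGING_SSD = Path("/mnt/staging-ssd/incoming")
MEDIA_POOL = Path("/mnt/tank/media")

def stage_and_move(filename: str) -> None:
    """Copy a finished torrent to the staging SSD, then move it to the pool."""
    src = TORRENT_SSD / filename
    staged = STAGING_SSD / filename
    shutil.copy2(src, staged)  # the original stays behind so seeding continues
    # ... run FileBot (or any renamer) against `staged` here ...
    shutil.move(str(staged), str(MEDIA_POOL / staged.name))  # one write onto the pool

if __name__ == "__main__":
    stage_and_move("example.mkv")
```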

Start thinking about your upgrade before 80% and do it before 90%. This obviously depends on how fast you fill the pool.

IIRC, ZFS will begin to throttle your writes at around 90% full.
