Trying to decide what OS to use on my new NAS for purely shared storage

I was wondering about the 80% rule. Considering you lose drives to parity, adding another 20% loss is quite a punch to the stomach. How much performance loss are we talking as you get closer to 100% capacity? This being a pool that doesn't see much writing beyond one-time writes (media mostly), will it be that noticeable? I don't see myself hitting 80%+ for a little while, but it will happen.
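To put rough numbers on the parity-plus-80% hit: here's a minimal sketch, assuming a single 12-wide RAIDZ2 vdev of 6TB drives (the exact layout isn't stated in this thread, so treat the figures as illustrative; real-world ZFS metadata overhead shaves off a bit more).

```python
# Rough usable-capacity math for the "80% rule" question.
# ASSUMPTION: one 12-wide RAIDZ2 vdev of 6 TB drives; ignores
# ZFS metadata/slop overhead, so numbers are illustrative only.

def usable_tb(drives: int, drive_tb: float, parity: int, fill_target: float) -> float:
    """Data capacity after parity, scaled by the recommended fill target."""
    data_drives = drives - parity
    return data_drives * drive_tb * fill_target

raw = 12 * 6.0                                                  # 72 TB raw
after_parity = usable_tb(12, 6.0, parity=2, fill_target=1.0)    # 60 TB
at_80_percent = usable_tb(12, 6.0, parity=2, fill_target=0.8)   # 48 TB

print(f"raw: {raw} TB, after RAIDZ2 parity: {after_parity} TB, "
      f"80% target: {at_80_percent} TB")
```

So under these assumptions, the "extra 20%" costs about 12TB of data capacity on top of the 12TB already lost to parity.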

I just added my 1TB Samsung Pro SSD back into the flexbay. I plan on using it for Emby metadata and other things I change a lot. No sense scattering all that across the SAS platters. TN is telling me a single-drive pool isn't a great idea, and I get that. But since it's connected through the HBA, it's only available to TN. Is that an OK idea? None of the data is critical if it decides to die.

It’s fine if you understand the risks, and it’s especially fine if you then replicate its contents to your rusty pool.

Additionally, as I understand it, the Samsung is a 970 Pro. Which is a PCIe device. Pass it in via PCIe Passthrough.


I misspoke earlier. It's an 860 Pro, so a SATA drive.

I created a pool, then a dataset as SMB. Gave the Windows group access like the SAS "WindowsShare", but it doesn't show up as an option when you open SMB, only the Windows Share folder. Maybe it just takes some time to update, or I need to restart the service.

image

Never mind, I forgot that doesn't mean it creates a share. You still need to go into Sharing and set it up. Works now.

A bit off topic, but how do you add the arrow drop-down for system builds in your personal profile?


Performance loss will come with fragmentation, which can be expected to grow as the pool fills, but there's no strict relation between the two. If the pool accumulates writes without ever deleting or rewriting files, fragmentation may remain low.
Performance, however, is the wrong concern. The main issue is that as you go over 99.9% and into 100% the pool will lock itself with no easy way out. A CoW file system has to record new metadata before it can “delete” anything; so, unlike FAT/NTFS/HFS/APFS/ext#, ZFS needs free space to recover free space… You can add a new vdev to have free space again, but with raidz this operation is irreversible.
So the guidance is to always keep the pool below 80% (50% for block storage!) and expand well before you hit the wall at 100% full.
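The guidance above can be summed up as a simple staged check. This is a sketch of the rules of thumb from this thread (80% general, 50% for block storage, throttling around 90%, lock-up at 100%), not hard ZFS limits; the exact percentages vary by workload.

```python
# Sketch of the fill-level guidance: thresholds are rules of thumb
# from this thread, not hard ZFS limits.

def pool_advice(capacity_pct: float, block_storage: bool = False) -> str:
    target = 50 if block_storage else 80          # stricter for block storage
    if capacity_pct < target:
        return "ok"
    if capacity_pct < 90:
        return "plan expansion"
    if capacity_pct < 100:
        return "expand now: writes may slow, and 100% is hard to recover from"
    return "full: a CoW filesystem needs free space even to delete"

print(pool_advice(65))                       # ok
print(pool_advice(85))                       # plan expansion
print(pool_advice(65, block_storage=True))   # plan expansion
```

On a live system the inputs would come from `zpool list` (the CAP and FRAG columns) rather than hand-typed percentages.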


I see. I had been reading about fragmentation and the like but never saw any data on performance numbers, so I was curious. I doubt I will keep it below 80%, so I accept that risk, but I will keep it from filling up completely. I have no room for expansion to mitigate this and doubt I will be building another server anytime soon, so I want to maximize data availability. I suppose I should have considered this before using TN, or ZFS.

What I'm going to do to prevent a bunch of writes and changes is adjust my workflow for writing to the pool. I grabbed a 2TB SSD for torrents, so nothing will be written to the pool the way I did with my OMV pool on my other server. Since I have to seed, I can't just rename media and have it read by Emby. So I will copy media that needs processing for proper naming with FileBot to my 1TB SSD, and from there move it to the zpool on my SAS drives. This way I limit the pool to mostly single writes and all reads, and I'll try to limit deletions.

Start thinking about your upgrade before 80% and do it before 90%. This depends on how fast you fill the pool, obviously.

IIRC, ZFS will begin to throttle your writes at 90% full.


I could probably fill it to 90% rather quickly if I brought down 30TB from the cloud plus the 10TB from my OMV pool on the other server. I'm going to be skipping a lot of media from the cloud because I don't use it. I really don't want to move away from TN, as I'm rather enjoying the ease of use and the WebUI.

Can the drives in a pool be upgraded? Let's say I eventually can afford larger-capacity drives. I know the larger size won't be reflected right away, since the pool takes on the smallest drive capacity. But if they all eventually get upgraded, would the new size be reflected, or is that not something ZFS can do?
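For what it's worth, this is exactly how the in-place upgrade path works: you `zpool replace` drives one at a time, and (with the pool's `autoexpand` property on) the extra space appears only once the last small drive is gone. A minimal sketch of the smallest-drive rule, again assuming a 12-wide RAIDZ2 layout:

```python
# Illustration of the smallest-drive rule for in-place upgrades.
# ASSUMPTION: 12-wide RAIDZ2 geometry; sizes in TB, overhead ignored.
# A vdev's per-drive capacity is bounded by its smallest member.

def vdev_data_tb(drive_sizes_tb: list, parity: int = 2) -> float:
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - parity)

drives = [6.0] * 12
print(vdev_data_tb(drives))     # 60.0 TB of data capacity

drives[0] = 12.0                # replace one 6 TB drive with a 12 TB
print(vdev_data_tb(drives))     # still 60.0 -- the smallest drive wins

drives = [12.0] * 12            # all twelve replaced
print(vdev_data_tb(drives))     # 120.0 -- now the new size is reflected
```

So yes: replace them all (resilvering after each swap) and the pool grows, but nothing changes until the final replacement completes.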

I should have gone with 8-12TB drives but they weren’t in the budget and couldn’t pass up $300 for 12x 6TB enterprise SAS drives.

EDIT:
I suppose I could use an external drive bay enclosure hooked up to a PCIe HBA and passed through to TN to expand the pool down the line, if not?

Both upgrade paths are viable: replacement with larger drives, or an external shelf.


Can I restart my router while doing that gdrive cloud sync task? Will it just time out until the connection is re-established? I set it up as a sync task and not a copy; I figured it would only pull down stuff not already on the destination if I had to rerun it. The reason is the EdgeRouter Lite's CPU is pegged at 100% and causing issues. It should not be at 100% for what the network is doing.

EDIT:
I want to try hardware offloading on the router to see if that helps, but I don't want to wait five more days until the cloud sync is done.

My internet has come to a crawl since really using TN and the new server. I have no idea what it is. I did run a flat Cat 6 cable to the AP, replacing the cable I made, to clean up the run. Thing is, I can download a torrent and max out my gigabit connection, but for anything else we're talking 50kbps. That's directly connected with the same flat cable. Guess I have to hope the gdrive sync doesn't force me to start all over when I restart this router.

Welp, restarted the router and the gdrive sync is maxing me out now, haha.
image

But that did not fix all the other slowdowns.

EDIT:
Nope, it wasn't the cable. Back to a crawl. The only thing new is the server. It's been like this for a handful of days now and has progressively gotten worse. Hmmm.

So TNC is sending out data and I don’t know why or where.

image

Seems there isn't anything causing issues on the fiber ISP side with my symmetrical line. I guess once the sync finishes I'll unplug the ethernet to the new server and see if that's the issue. It was never an issue until the new server was connected, which is the frustrating thing. Maybe the NIC is causing it. It's a BRCM GbE 2P 5720-t Adapter, 2-port.

Let's say it's time to upgrade my router to handle more throughput without maxing out the CPU. What would you guys suggest? Something budget friendly. The new server has the 10Gb SFP+ daughter card.

Currently using an EdgeRouter Lite and a 24-port unmanaged switch. The fiber line comes in through an ONT to the router, then the router to the switch.

Mikrotik CRS305 or 309 for SFP+. Or a QNAP QSW-M408 or M2108 for a mix of fibre and good old copper.

Little pricey but will see if any good deals come along.

Could this work? A TRENDnet TEG-30262, 24-port with 2x 10G SFP+, unmanaged? Found one for $65 before you mentioned those.

EDIT
I just wish I could make sense of it. The NAS is only pulling down 20MB/s. The rest of my devices are pretty idle but getting dial-up speeds. I hate networking so much sometimes.

Oh, I see the CRS305 is a router with only 10Gbit ports.

Snagged it for 50 shipped. Now to find a deal on that Mikrotik or similar router.