Novice: NFS and caching questions

Hi all,

This scream for help is aimed at those who run Proxmox and use TrueNAS as a dumb NAS only.
No Docker, No Apps, Proxmox only!!!

I just started down this NAS wormhole; I tried a TrueNAS alternative and hit some deal breakers.
Before I try TrueNAS, I would love to know whether some or all of my problems will be solved.

  1. I will run it as a dumb NAS and let Proxmox do what it does best.
    No Docker, No Apps, I only need a NAS

  2. Cache: TrueNAS (ZFS) uses RAM as cache, and I won’t throw away all the DDR4 I have just to spend AUD$500 on 64GB of ECC.
    I have 800GB of data on Proxmox to send over; how is my current 32GB of DDR4 going to handle that??
    From what I saw online, you cannot control or know what is in the cache versus on disk; close your eyes and pray haha (though see the sketch after this list).
    I am assuming TrueNAS will even allow me to use non-ECC memory.

  3. NFS: How good is the NFS support??
    I run Linux only and everywhere; I have no need for SMB.
    My main blocker with the current solution is that NFS does not support the cache; the shared folder just gets disconnected.
    I must save data to the slower disks instead, OR run everything locally via Docker, which I will not.
    I run everything via Proxmox as Infrastructure as Code with Ansible; I am not touching Docker just to have caching.
    This NFS issue also means I cannot use my 500GB NVMe for cache.

    Does TrueNAS let me use an NFS share AND caching, or must I write everything straight to the disks?

  4. 4 x 4TB WD RED: These are brand new disks, currently set to 1x parity and 2x data.
    From what I gathered, setting it to Raid1 will give me 12TB and that is all for life; there is no adding new disks down the road without setting up a whole new pool from scratch.

    I do want to set up a local Forgejo mirrored to its cloud and try out some services; their databases and config files will have offsite backups to Mega NZ, and Ansible takes care of the rest.

    Important documents are on Proton Drive (no Linux support, boooooo). Raid1 will allow me to lose 1x disk without losing any data.
    If things go really bad, I should have offsite backups, so not the end of the world.
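
For what it’s worth, the sketch I mentioned under point 2: from what I gathered, ZFS does let you inspect and cap its RAM cache (the ARC), so it is not entirely close-your-eyes-and-pray. The tool and parameter names below are my assumption of the standard OpenZFS tooling:

```sh
# Inspect the ZFS ARC (the RAM cache): current size, hit rates, contents.
arc_summary | head -n 40

# Cap the ARC at e.g. 16 GiB so it cannot eat all 32GB of RAM.
echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```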

Thanks a lot for any help. :slight_smile:

Couple of things:

  1. Make sure those disks are CMR drives and not SMR, like so many WD Red drives are. SMR drives are not a good fit for use with ZFS.
  2. With raidz expansion it’s now possible to add single disks to existing vdevs. There’s still a space reporting bug after the expansion, but it is possible to add disks.

Check if the disks are SMR or CMR. If SMR then don’t use ZFS.

If you mean 4 disks as in Raid1, that implies 2 mirror vdevs of 2 disks each (4 disks total). You can always add another vdev to this pool (presumably another 2 disks) as another mirror.
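
A minimal sketch of what that looks like, assuming a pool named “tank” and placeholder device names:

```sh
# Stripe a second 2-disk mirror vdev into the existing pool.
zpool add tank mirror /dev/sdc /dev/sdd
zpool status tank   # the new mirror appears alongside the first
```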

Beginner’s luck, I dodged 4 bullets without knowing it.

It turned out that all 4 disks are CMR:

  1. 1x WD40EFPX (2023)
  2. 3x WD40EPRX (2024)

I do not have the AX model, which appears to be SMR.
Well, another thing to keep in mind when ordering disks.

As for the questions I asked, I checked YT, and folks running TrueNAS seem to be largely using it as a… wait for it… NAS! :slight_smile:

Unfortunately though, I found post after post of folks having NFS issues, which sucks; it is the same issue I am having with the alternative solution.

Some seem to be hardware related, others seem to be TrueNAS itself timing out; it is safe to assume that the DIY solutions do not handle NFS as they should.

Some mention having no problems with SMB, which kind of sucks; I have no need for it and its overhead kills performance. In the worst case scenario, that will have to do.

Well shi….

I will give it a try anyway; better than not having a NAS or being forced to use Docker, which is not TrueNAS’s main focus. I have hope :slight_smile:

In the worst case scenario, I will follow a video where the guy uses raw Linux + RAID + CLI to build a NAS from scratch on Fedora Linux.

As for the RAID, it is just me and I won’t be doing insane I/O, which is RAIDZ’s weakness, so RAIDZ1 is more than enough for continuous writing. I don’t need MIRROR.

Until yesterday, I had no centralised NAS, and important stuff will be synced to the cloud (encrypted), so that gives me 12TB of the 16TB.

If storage space goes sideways (once you get one NAS, folks end up with 10x, haha), I will build a bigger one with more redundancy in place.

Good idea! It is complicated enough.
TrueNAS is a great NAS, ok hypervisor.
Proxmox is a great hypervisor, ok NAS.
Get both bare metal and you get the best of both worlds.

Not really. Async writes in 5s transaction groups are in RAM. So 1Gbit/s minus network overhead ≈ 115MB/s.

5s × 115MB/s ≈ 0.575GB of “cache” in ZFS ARC.
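
That 5s figure is the default transaction-group timeout; assuming OpenZFS on Linux, you can confirm it like so:

```sh
# Interval at which ZFS commits a transaction group to disk.
cat /sys/module/zfs/parameters/zfs_txg_timeout   # prints: 5
```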

Not sure how that would be connected to cache. fstab lets you mount shares without them dropping out; for example:
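
A minimal sketch, assuming a server reachable as truenas.local exporting /mnt/tank/share (names are placeholders); the “hard” option makes the client retry instead of dropping the mount when the server stalls:

```sh
# Persistent NFS mount: fstab entry, mountpoint, mount.
echo 'truenas.local:/mnt/tank/share /mnt/share nfs defaults,hard,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mkdir -p /mnt/share
sudo mount -a
```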

No. A mirror is, well, a mirror. That means you get 8TB.

wrong again.

I think you should really start with the basics, read the docs and start with a test host.


It does not like it; adding my NVMe to the disks kills the NFS/SMB connections, which is why I want to give TrueNAS a try.

RAIDZ1: you can afford to lose one disk, so 12TB from my 16TB of disks (4x 4TB).
At least this is what I understood from Techno Tim and other folks.
And yes, I know what a disk mirror is :slight_smile:

“Wrong again” adds no help, and it is public knowledge that you cannot just add disks as you please with TrueNAS.
Things might have changed, but that is the understanding: you cannot add a 4TB disk today and another 4TB disk tomorrow, and they must match sizes.

When setting up a TrueNAS server, you must have all the disks at once. Again, things might have changed recently, but this was not always the case, and it is why folks prefer other alternatives.

This is basic to me; mistakes will be made and that is how people learn.
I doubt everybody in here was born an expert in TrueNAS.

Please don’t get me wrong, but your replies aren’t adding help and sound more like “this is not a place for novices”.
If you cannot provide help, that is fine, but “wrong again” is not how I help novices in other hobbies like 3D printing, telescopes and the like.

Thanks anyway

Do a search for the RAID-Zx-expansion tag in the forum. As of recently, you can add additional disks to RAID-Z(1,2,3) pools. You can also try searching for ZFS RAIDZ expansion and see if you get internet hits.
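
A minimal sketch of that expansion, assuming an OpenZFS version with RAIDZ expansion support and placeholder pool/disk names:

```sh
zpool status tank                     # note the vdev name, e.g. raidz1-0
zpool attach tank raidz1-0 /dev/sde   # grow that raidz vdev by one disk
zpool status tank                     # shows expansion progress
```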

Hmm… your posts are a strange mixture of

  • I want your advice
  • I don’t want your advice, because I know better already
  • sloppy writing
  • sparse information

So my guess is you meant RAIDZ1, and not traditional RAID1 aka mirrors, by that.

Either way, since I am apparently not helpful and you ignored everything I wrote apart from the “wrong”, I wish you good luck with your project.

I think that maybe we should start with workload.

If your workload is a lot of small writes, or large sequential files, cache size isn’t really an issue in the traditional sense.

ZFS will use the available RAM in a very performant way, so the only real bottleneck on the write side is if you exceed the available RAM in aggregate.

ZFS is constantly storing and flushing as needed.

The same applies to reads, except there is a read cache in the form of the L2ARC. This can speed things up in a RAM-starved environment, but the same concept as the write cache still applies: the L2ARC still passes through RAM dynamically, but it lets you serve heavily aggregated demands at the speed of the SSD, as opposed to the HDD. RAM is obviously faster than both, which is why RAM is GOD in ZFS.
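
A minimal sketch of adding an L2ARC, assuming a pool named “tank” and a placeholder NVMe device; since L2ARC only caches reads, it can be removed again without risk to the pool:

```sh
zpool add tank cache /dev/nvme0n1   # attach the NVMe as L2ARC
zpool iostat -v tank                # it shows up under the "cache" section
```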

So it really does all boil down to your workload.

If using NFS exclusively, I would suggest a small, highly resilient SSD (Optane 16-32GB) to speed up the ZIL handling. You need low latency more than raw speed, and RESILIENCE, in the SLOG. It’s not a cache per se. It is a *trick* that ZFS does to tell NFS/SMB that the write occurred *before* it actually hits the pool, so that after a failure it can determine on the backend whether the write *actually* occurred, allowing you to check and reinitialize the transfer after a power failure or kernel panic.
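
A minimal sketch, again with “tank” and a placeholder Optane device; note that sync should stay at the default (standard) so NFS sync writes actually hit the SLOG:

```sh
zpool add tank log /dev/nvme1n1   # dedicate the Optane as SLOG
zfs get sync tank                 # should report sync=standard
```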

As for the expansion of a RaidZ1… expansion is easy and supported. But this is where vdev size comes into play.

If your vdevs are 4 wide, the traditional upgrade/expansion is to add another vdev in the same config (Z1 at 4 wide with same size/type drives). It is still the *recommended* method; *adding a disk* CAN be done, but is honestly ill-advised and your results may vary.

This is why planning your pool is important, with “Workload Type” being the #1 factor in *It Depends*.

I have the same type of setup as you on a much larger scale, and have not experienced any issues with NFS disconnecting.

I run a 5x Proxmox cluster and 2 bare-metal TrueNAS NAS-only mules. Network is usually the bottleneck.

Hope that helps a little.