TrueNAS SCALE memory requirements

I am running a TrueNAS SCALE VM in Proxmox with HBA passthrough, with 4x 12TB IronWolf, 4x 6TB WD Red, and 2x 4TB WD Red drives, each pair in its own mirror, totalling 80TB. How much memory should I be allocating to this TrueNAS SCALE VM …?

You also need to blacklist the HBA's driver so that Proxmox can't see the drives and auto-import the ZFS pools it finds on them before the VMs start.
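
A hedged sketch of what that usually looks like on the Proxmox host, assuming an LSI-based HBA that uses the mpt3sas driver (check yours with lspci -k; the PCI address and the 1000:0097 device ID below are placeholders for your card):

# On the Proxmox host: find the HBA's PCI address and the driver it loads
lspci -nn | grep -i -E 'sas|raid'
lspci -k -s 01:00.0

# Keep the host from loading the storage driver, so it never sees the disks
echo "blacklist mpt3sas" > /etc/modprobe.d/blacklist-hba.conf

# Optionally bind the card to vfio-pci by vendor:device ID instead
echo "options vfio-pci ids=1000:0097" > /etc/modprobe.d/vfio-hba.conf

# Rebuild the initramfs and reboot for the change to take effect
update-initramfs -u -k all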

Be sure to verify that all those WD Reds are CMR.
SMR drives, which some WD Reds are, should not be used with ZFS.
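
If you are not sure which models you have, a quick way to check from a shell (the /dev/sd? names are simply whatever the drives enumerate as in your VM):

# Print identity info (model number, capacity) for each drive
for d in /dev/sd?; do smartctl -i "$d" | grep -E 'Device Model|Capacity'; done

# 2-6TB WD Reds with model numbers ending in EFAX are SMR;
# EFRX (and the Red Plus / Red Pro lines) are CMR.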

How much RAM do you have? ZFS loves RAM, since it caches data in it. I wouldn't run 80TB with less than 32GB, but you can probably squeak by with the minimum 16GB. Less than that is doable but definitely not recommended, especially if you have any plans for the server to do more than basic file hosting.
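
If you want to see how much of whatever you allocate ZFS is actually using for its cache, a rough sketch from a shell inside the TrueNAS VM (output format varies a little between OpenZFS versions):

# Summarise the ARC (ZFS read cache): current size, target max, hit rates
arc_summary | head -n 40

# Raw counters if arc_summary is not to hand
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats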

Could I ask something? Over the last 5 months I have been buying HDDs for my TrueNAS SCALE VM on my Proxmox server, and in that time 4 HDDs of around 10TB each have come up with errors and failed on me. Two weeks ago I purchased 2x 20TB HDDs and 2x 12TB HDDs, and this morning I woke up to TrueNAS telling me that both of these mirrored RAID groups are degraded. Further investigation says the SMART test was aborted, so I ran a short SMART test on both, which succeeded, and I also ran a scrub with no errors, but it still says the storage pool on both is degraded. Could someone please tell me what it could be …?
May I also add that I am running two other pools, 2x 4TB and 2x 3TB mirrored, with no issues.

Additional details are required. Specifically:

  • TrueNAS SCALE version
  • How are the disks passed through from Proxmox to TrueNAS SCALE?
  • TrueNAS SCALE VM configuration (how many CPUs and how much RAM)
  • Make and model of all hard drives, including details for the WD Reds which could be SMR
  • Other hardware details, like CPU, system board, disk controller

There are plenty of poor hardware choices when it comes to ZFS. Some of it works right up until it doesn't. That is why we discourage certain hardware in the name of reliability.

The version of TrueNAS SCALE is the latest update.

The HDDs are passed through via the HBA to the TrueNAS VM.

TrueNAS SCALE is allocated 8 cores and 70GB of RAM.

The HDDs are as follows: 4x 12TB IronWolf Pro, 2x 20TB Toshiba MG, 2x 6TB WD Red, 2x 3TB WD Red, and 4x 10TB HUS drives. All the WD Reds are SMR.

The system running Proxmox is:
CPU: i9-10940X, 14 cores / 28 threads
Memory: 128GB DDR4 2666MHz
Motherboard: ASUS TUF X299 Mark 2
PSU: EVGA 1000 watt
HBA: Unbranded 16-port

The current version of TrueNAS is 25.04.0.
Please verify that this is the one you have; it's visible on the Dashboard, the first page of the UI.

You don’t want us assuming that’s what you mean and giving you unsuitable advice.

There’s even a “copy” button next to it, to save you the trouble of having to type it out (though I can’t see how “25.04.0” would be more trouble to type than “the latest update”).
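
If a shell is handier than the UI, the same string can also be read from inside the SCALE VM (a small sketch):

# Print the installed TrueNAS SCALE version
cat /etc/version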


It is unclear which disks are in which pool, and which disks are having trouble. Please supply the output of these 2 commands:

zpool status -v
zpool list -v

Plus, you mention “All the WD Reds are SMR”, which is usually a bad sign.


I’ve been running TrueNAS CORE VM on Proxmox for the last 3-4 years with just 4 cores and 64 GB RAM.

8 cores is overkill in my opinion, especially since you’re running on Proxmox, which means you probably won’t be using the Apps feature, and that is what tends to suck up CPU cycles in a normal install. Honestly, I could probably even run it on 2 cores since it’s mostly idle, but I gave it 4 just to give it a bit of headroom.


Quad-core E3 Xeon, 64GB, 700TB. Zero problems. Utilization 10-20% with a lot of containers.

Yes, it is version 25.04.0.

I am away at the moment so when I get back I will do this.

But could you explain why the SMR disks would be a problem, please? They have not been a problem so far. Less than two weeks ago I bought a brand new pair of 20TB Toshiba MG10ACA20TE enterprise hard drives (3.5" SATA, 7200rpm, 512MB buffer, OEM 85YMX), and now one of them is reporting a SMART error and has degraded the mirror pool. I have had this same problem in the past with second-hand drives, which is why I decided to buy new drives.
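
Before blaming the drive outright, it is worth pulling the full SMART record for the disk that tripped the alert. A rough sketch, with /dev/sdX standing in for the Toshiba that reported the error:

# Full SMART detail: overall health, attributes, and the device error log
smartctl -a /dev/sdX

# Extended report, including interface/transport error counters
smartctl -x /dev/sdX

# Queue a long self-test (this takes many hours on a 20TB drive)
smartctl -t long /dev/sdX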

Sure, they aren’t a problem–until they are.

OK, I see what you’re saying.

Here is another thought I had: could my HBA be the problem, since it’s a cheap, unbranded 16-port card?

And also, I have just seen a post that says when passing an HBA through to a TrueNAS VM, the HBA should be added to the blacklist in Proxmox. Is this the case?

I haven’t used TrueNAS under Proxmox, but I also understand that to be the case.

Yes, this can be a problem.

ZFS was not designed to have multiple servers importing a pool. Proxmox understands ZFS, so in theory, it could import a TrueNAS pool at the same time as TrueNAS. That would be bad.

There are some protections in place to prevent multiple importers, but they don’t always work.

When 2 servers import the same pool, this can cause un-fixable pool corruption.
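
If you want to confirm the Proxmox host is keeping its hands off the TrueNAS pools, a quick sketch of what to check on the host (the service name is the stock ZFS unit shipped with Proxmox/Debian):

# Pools the Proxmox host itself has imported - the TrueNAS pools should NOT be listed
zpool list

# Pools the host can see on attached disks but has not imported (scan only)
zpool import

# The unit that auto-imports pools found on attached disks at boot
systemctl status zfs-import-scan.service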

Honestly, I think that advice may be from an older version of Proxmox, prior to 7.3 maybe?

I’ve never had to “blacklist” it. I just set it as passthrough for my VM and I’ve had it like that for about 3 years and I’ve never had any problems.
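
For reference, passing the HBA through in Proxmox is just a hostpci entry on the VM. A sketch assuming VM ID 100 and a card at PCI address 01:00.0 (both placeholders; pcie=1 wants the q35 machine type):

# Attach the whole card to the VM in PCIe mode
qm set 100 -hostpci0 0000:01:00.0,pcie=1

# Which ends up as a line like this in /etc/pve/qemu-server/100.conf:
#   hostpci0: 0000:01:00.0,pcie=1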

Thank you for that. Can I ask what HBA you are using …?

And also, what wattage PSU are you using?

Not sure to whom this question is directed, because you neither tagged nor quoted anyone.

what HBA are you using …? [Whattteva]