Installing TrueNAS SCALE for the first time

I have been using Linux since 2009, but I’m not an expert.
Now I have 8x WD Red Pro NAS 4 TB drives and can start on my long-planned NAS.

Reading different threads and watching videos has confused me more than it has given me answers, so I come to the experts here; thank you in advance for any advice.

The most confusing topic (for me) is partitioning in general, and above all the fact that my OS/boot media are all 4Kn. Well, my questions:

My OS/boot DC (data carrier) will be a 4Kn NVMe PCIe drive = 1863 GiB. My intention is to set up a 4 GiB ef00 partition formatted as vfat, limit the root partition (bf00), and leave some empty space at the end of the DC as a reserve (sketched as sgdisk commands after the list). Partition scheme:

  • 4 GiB, type ef00, formatted as vfat.
  • 1850 GiB, type bf00, unformatted.
  • 9 GiB as reserve; partition type unknown, bf06 or bf07?
  1. During the installation of TrueNAS SCALE, can I choose the partitions where I want to install, or at least limit the size of the root partition?
  2. If yes, which partition code should the 9 GiB intended as reserve get?
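
A minimal sketch of the layout described above in sgdisk terms, assuming a hypothetical device name /dev/nvme0n1 (note that the replies below advise against doing any of this on a TrueNAS boot device):

  # Illustrative only -- TrueNAS expects to be given the whole device.
  sgdisk -n 1:0:+4G    -t 1:ef00 /dev/nvme0n1   # EFI system partition
  mkfs.vfat -F 32 /dev/nvme0n1p1                # format the ESP as vfat
  # (for 4Kn media, check the distro's recommended mkfs.vfat parameters)
  sgdisk -n 2:0:+1850G -t 2:bf00 /dev/nvme0n1   # Solaris root for ZFS, left unformatted
  # remaining ~9 GiB left as unpartitioned reserve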

My HDDs for data storage each have 3,725.29 GiB of space, which I want to limit to 3,720 GiB so that I can (one day) replace these HDDs.

  1. Should I partition them manually? If so, please give me the partition code. If manual partitioning is not necessary, can I choose the size to be used (in GiB) in the GUI, and where is that?

greetings

Let me simplify that topic, then: don’t. There’s no reason for you to be doing anything with partitions in TrueNAS. Not on the boot device, not on the data drives, not on cache or log devices. Give it whole devices, and let it do what it needs to with them.


@Terence64w Your mistake is in thinking that TrueNAS is Linux.

Whilst it is based on Debian Linux, TrueNAS is an appliance - a pre-packaged FIXED capability appliance which uses Debian Linux as a base, in the same way that Android uses Linux as a base. In essence, although a few things require use of a Linux shell, most things are either automated or done through the GUI.

Aside from thinking about Swap space (and even that is unnecessary from 24.04 onwards), install partitions are standardised, and data partitions are created only through the UI.

That said, if you accept the risks associated with doing so, you can use the boot drive for other things if you really have to (I have, and have not had any issues as yet), but IMO you would be better off buying the cheapest NVMe drive you can which is > 16GB (which is pretty much all of them) and saving your 2TB NVMe for something more useful.

I would also recommend that you take more advice than this on your specification and / or read Uncle Fester’s (incomplete) TrueNAS Guide. Changing e.g. your HDD choice once you have data on this will be difficult or impossible, so you need to get it right.

For example, how much memory does your system have? What are you trying to do with your TN server?


Please don’t take me wrong; this is exactly what I will not do: let partitions end out in the middle of nowhere, truncated (by the OS) wherever. That way I would never be able to replace any HDD, and the OS would have extra work each time, calculating the capacity and working out where the DC/partition ends; see also 4Kn formatting.

Look, every Linux lets me pre-partition the DC. Debian even recommends special formatting parameters for the EFI partition if it is 4Kn, and CachyOS also lets me format the EFI partition, create the root partition as bf00 (without any formatting), and install the OS on a ZFS root.

Please don’t take me wrong: I have never treated any partition like a waste basket or a public landfill, nor will I start doing so now.

I want to be sure, if one day my array fails, that I have done my best to avoid it.

What about reserve clusters, in case some clusters on the main partitions go bad?
Or do you mean that if WD, or another supplier, modifies the size of their HDDs, I have to replace the whole array instead of a single HDD?

I have also seen partitioning of an Optane on which both L2ARC and SLOG sit; then I read here in the forum to install additional applications on a separate NVMe… does this mean I have to populate my NVMe HBA with 4 NVMe drives, one for OS/boot, one for additional apps, one for L2ARC, and the last one for SLOG? Is this correct?

You’re using a lot of terminology very strangely, which is making it difficult to understand what you’re saying.

DC? TrueNAS does not let you partition the boot device, and ordinarily you shouldn’t. It doesn’t matter what Debian recommends. It doesn’t matter what CachyOS (whatever that is) does. Again, as I said above, you should never be manually messing with partitions in TrueNAS. Period. You don’t need to, and should not, partition the boot device. You don’t need to, and should not, partition the data drives. You don’t need to, and should not, partition anything else. I do not like them, Sam I am.

TrueNAS is an appliance OS.

Read the above slowly, several times.

TrueNAS SCALE is NOT “another Linux distro”. You do not manage partitions; you let the TrueNAS GUI do it for you. TrueNAS does manage some padding (swap) so it can replace a drive with a slightly smaller one if needed. Replacing with a larger drive is never an issue.
But the requirement for all of this to work smoothly is that you use whole drives, not partitions.

Correct. Except that you may not need a SLOG at all (hint: it is NOT a write cache), may not benefit from an L2ARC, and/or may not have the required resources (min. 64 GB RAM).
Assuming you do have an actual use for both SLOG and L2ARC, using partitions on an Optane drive (and nothing else!) is the edge case where it may make sense for a home lab to manually manage partitions with TrueNAS. But then your requirements and your workload are “business-class”, and you may as well do it in the business-validated manner: separate drives.
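
In plain ZFS terms, the separate-drives approach corresponds to something like the following (a sketch with hypothetical pool and device names; in TrueNAS you would attach these vdevs through the GUI rather than the shell):

  # Attach a SLOG (separate intent log) as a whole device
  zpool add tank log /dev/disk/by-id/nvme-SLOG_DRIVE
  # Attach an L2ARC (read cache) as another whole device
  zpool add tank cache /dev/disk/by-id/nvme-L2ARC_DRIVE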


DC is Data-Carrier

I have seen several videos where the capacity of the HDDs was limited via the TrueNAS GUI, in order to at least allow an exchange with a drive of approximately the same capacity.

Even HDDs of the same manufacturer, type, and series have minimally different capacities.

So a 4 TB HDD has about 3.6379 TiB, and my intention was/is to limit the capacity of all these HDDs to 3.632 TiB each; and you say that this is not possible, although in videos it is done with a mouse click?
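
For reference, the arithmetic behind that figure (drive vendors count a TB as 10^12 bytes, while a TiB is 2^40 bytes):

  # Marketing terabytes to binary tebibytes:
  echo '4 * 10^12 / 2^40' | bc -l    # ~3.6380 TiB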

This story about the “appliance” is beautifully told, and there are appliances for SoCs, embedded devices, smart TVs, routers, switches, firewalls, desktops, servers, etc., but every one of them also cooks only with water = UNIX.

I actually don’t give a damn what they all call themselves; as long as there is no Windows… everything is OK. The Debian base of TrueNAS SCALE was/is a plus point, and essential.

As for the RAM, I already have 64 GB now, and soon my Threadripper will get 128 GB.

I want to set up a system that lasts a very long time and therefore must be close to perfect. In order to realize this, I need to understand the system.

An NVMe with 16 GB seems definitely too small to me; even today’s USB sticks are bigger. That sounds like a joke, not future-oriented, but just for small tests.

You have been given sound advice by people with years of experience with TrueNAS.

Ultimately it is you who will build the system, and ultimately it will also be you who is gonna put (important?) data on it.

Format and partition it the way you like.
We just wanted to make sure your Threadripper is only gonna rip threads, and not your data.

:man_shrugging:


That’s what the swap size setting in TrueNAS is for. TN will by default create a 2G swap partition on each of your drives.

I change that to at least 16G before creating pools, because I’m old school, and while the system should in theory have enough memory not to swap, it can help in certain situations.
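
For reference, that setting lives under System Settings > Advanced (“Swap Size”) in the SCALE GUI. From the shell, the equivalent middleware call would look roughly like this (the swapondrive field name is an assumption based on the pre-24.04 middleware API):

  # Check, then change, the per-drive swap size (in GiB):
  midclt call system.advanced.config | jq '.swapondrive'
  midclt call system.advanced.update '{"swapondrive": 16}'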


You have 500 GB for the boot pool on your system, and you consider a 16 GB NVMe for the boot pool a professional answer?

And by the way, your answer doesn’t help anybody, least of all yourself.

Not one figure, not one explanation, not one rationale (cause/effect); nothing but polemic about my motivation.

Thank you

The swap partition is created on the data drives, not on the boot pool drives.

I have a 32G boot pool without redundancy for my SCALE system and a mirrored 32G boot pool for my CORE system.

The boot pool is just for booting the OS. You can pick any cheap SSD you can get, which nowadays will be large enough. Redundancy for boot is optional, because reinstall, import saved config, go, takes less than an hour. In a commercial/mission-critical environment just use a mirror.


Thank you first for your answer; I’m old school too.

On a spinning DC (data carrier) you surely put the swap at the beginning of the HDD!?
bf02 Solaris swap? Or maybe 8200 Linux swap, since TrueNAS SCALE is based on Debian?
And the data partition as bf00 Solaris root?
At least, I use bf00 for ZFS-on-Linux root.
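
(For reference, gdisk/sgdisk can list all of these GPT type codes; the grep pattern below is just one way to pull out the ones mentioned here:

  sgdisk -L | grep -iE '8200|bf0|ef00'
)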

Thanks again for your competent answer.

The boot pool just contains systemd-boot = an ef00 EFI system partition, and the CORE system is the SCALE OS on a bf00 Solaris root?

Well, the smallest NVMe I found has 128 GB at a cost of 14-15 €. Even if I mirror boot & CORE… I end up at 60 € plus a 20 € NVMe HBA. Or do you think I can use only 2 NVMe drives and have boot + CORE mirrored on those two?

That is clear: the swap goes where data is read and written.

Do you maybe know why TrueNAS doesn’t use zram with zstd compression, like some Linux distros do?
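
For context, what those distros do is roughly the following (a generic Linux zram swap setup, shown only as an illustration; TrueNAS does not configure this):

  # Create a compressed RAM-backed swap device using zstd:
  modprobe zram
  echo zstd > /sys/block/zram0/comp_algorithm
  echo 8G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0    # higher priority than disk swap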

Probably so - but why would you care? Just install TN SCALE and it will create the correct layout. No user serviceable parts inside. For example, here is the layout on my boot device:

Command (m for help): p

Disk /dev/sda: 29.82 GiB, 32017047552 bytes, 62533296 sectors
Disk model: TS32GSSD370S    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6336841E-1B5F-4764-AB8A-8BA8490870D4

Device       Start      End  Sectors  Size Type
/dev/sda1     4096     6143     2048    1M BIOS boot
/dev/sda2     6144  1054719  1048576  512M EFI System
/dev/sda3  1054720 62533262 61478543 29.3G Solaris /usr & Apple ZFS

You never use any hardware mirroring/RAID/whatever with TrueNAS and ZFS. Just install to two boot drives simultaneously and the installer will create a mirrored boot pool. It will also create a UEFI boot partition on each, so in case your first drive fails and you can get at the console via IPMI, it’s pretty trivial to boot the system from the second drive - assuming a modern BIOS and good UEFI support which should be standard in 2024.
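
To verify the result after installation (boot-pool is the default pool name on SCALE; the device names below are illustrative):

  zpool status boot-pool
  # expect a mirror vdev listing both drives, e.g.:
  #   boot-pool
  #     mirror-0
  #       sda3
  #       sdb3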

If you have enough free SATA ports and space in your enclosure you might consider SATA SSDs for boot and use the NVMe (M.2 I assume?) slots for something that can actually use the performance.

What would you suggest using that for, in the context of a storage appliance?