I wouldn’t mind paying for a more liberal and open TrueNAS version with good options and features for lots of different use cases, as long as it remained fully open source. That sounds great to me. I like what TrueNAS is foundationally, I really do, but I don’t like the imposing grip - it’s my hardware. I’m trying to play the harmonica and I’m being told no.
Keeping the OS separate from your data is a sensible thing. I haven’t been brainwashed by iX to think that.
You can be as liberal as you want, ignoring the recommendation and manually partitioning the boot device, putting your eggs in the same basket if you so choose - iX won’t stop you; they just won’t actively help you accomplish it.
Since you brought up “liberal”, you could take that line of thinking further and see that others can have different views from you, and that’s perfectly fine. After all, iX owns TrueNAS; it’s up to them to decide what they spend their time on and what data integrity practices they choose to promote in their appliance OS.
That final line about Apple… come on, tossing insults around is not going to win people over to your side.
…and then there are some who would take it as a compliment.
Not sure if I understood the issue here, but as far as I can tell it relates to the choice of whether a boot disk should be used for purposes other than booting, such as storing data on a spare partition.
Using Windows and flavors of Linux on my main PC, I long ago decided to store the Windows boot and program installs on the C: drive.
However, I have kept all my other data on drives other than C:. What this gives me is the ability to do a disk clone/backup/restore involving boot and Windows. My personal data, stored on different drives, remains unaffected. I am less prone to deleting data if I restore an older version of the C: drive.
With Truenas, the issue is a little bit different, but somewhat the same.
If I were to allocate the boot drive to store VM partitions, then having to restore/recover the boot drive would most likely wipe the rest of the drive.
While a 512GB SSD or NVMe drive is overkill for storing a few GB of partitions and boot environments, the real benefit is the drive’s ability to reallocate defective sectors.
While SSDs, HDDs and NVMe drives have hidden spare sectors for such recovery, I believe the unallocated portion of the drive can also be used to extend the usable life of the drive.
This may not be a noticeable feature in the short term, but in the long run it should prove an added benefit.
In the old NAS4Free and FreeNAS days, USB keys had a short-lived presence due to their high failure rate. SSDs now do the work USB keys once did, but can last decades because they are able to draw on sections of the drive that are not in use.
I use Apple products every single day, by choice.
But I also believe that it’s how you say something that determines what the intent is and by extension, essentially any word can be used as an insult.
My read of the usage here was that it was meant to deride. To reuse an old adage, it’s the thought that counts, and the thought here wasn’t positive.
It wasn’t an insult. What was incorrect about what I said? iX can do whatever they want, obviously, and I can choose to express what I think. I don’t think keeping OS and data separate is unreasonable at all; in fact, it’s very reasonable and smart. That isn’t my point. My point is that it isn’t every single use case, every scenario, or every single user. When I used the term liberal, what I meant was precisely that: everyone has a different view, with a different machine and a different use case. I think we ultimately agree here.
I agree, and have done the same previously. In regard to TrueNAS, if the boot and app pool dies, it’s not the end of the world for me. I keep a backup regardless. Thus it’s an investment in hardware for features I don’t really need, due to software limitations. But I do see the benefits of separation. Either way, I ended up using a PCIe 3.0 x1 NVMe adapter for boot. The difference in boot time between x4 and x1 seemed negligible when I tested it, which makes sense I suppose.
But… no one is actually stopping you? You still have every chance to do whatever you want. Manually partition your boot pool, enable dev mode, & apt-get whatever you feel like if you really want. The world is yours!
Just understand the risks (if any), and keep in mind that no one will offer any level of official support if you do. No one is stopping you. TrueNAS can & will work fine if you make that choice (unless you fuck it up) - if it does stop working, then your bug tickets will be ignored by the devs & everyone on the forums will say “I told you so”. Such is the price of freedom: being accountable for your failures (if any).
If you are unable to manually make partitions without the GUI holding your hand, then don’t cry that people aren’t letting you paint with all the colours of the wind or whatever.
Right, but iX is not accounting for it, which means there’s unpredictable behavior associated with it. What you’re suggesting is a “hacky” solution. Sure, I could do it, but it also wouldn’t take much from iX’s side to simply account for it. This isn’t just about boot; I’m seeing a trend: HDD spin-down disabled, the recycle bin vanishing, difficulty lowering the GRUB menu timeout… All of these require workarounds that may break sooner or later, despite being configurations that are very commonly used. I don’t think it’s good design to limit options that are functional and in use by users.
“if it does stop working, then your bug tickets will be ignored by the devs & everyone on the forums will say “I told you so”.”
This is just another way of saying: don’t do it in the first place. Which further proves my point. TrueNAS is not designed for “users running servers”; it’s designed for “servers with a pre-defined layout and a vision set in stone”. I simply disagree with the philosophy of intentionally imposing restrictions that have very little reason to exist in the first place. I don’t know what iX’s recent plans are in regards to “going closed source”. Either way, they’re not really catering to users in the first place, so I don’t quite understand the response. I get the feeling that iX wants to be paid for their product, which is perfectly fair and reasonable, but if the product isn’t designed for users… I don’t see how you can reconcile those two.
I really don’t get the point of the continued discussion–this is the way TrueNAS is, it’s the way FreeNAS was, and it’s the way m0n0wall was before that. “Dedicated boot device” has been the model for over 20 years, and this thread (or section of the forum) isn’t the place to try to convince iX otherwise.
If you want to try to convince them otherwise, post a Feature Request. But note it’s been done before:
Long-time TrueNAS user here – from the time before the name change.
Just want to add my 2¢ here:
Option #3 from above (2230/2242 M.2 SSDs with a SATA -> USB3 adapter) is the way to go. At one point, I ran 5 TrueNAS servers with them, and they haven’t let me down.
Advantages:
- You don’t lose a SATA port.
- You don’t have to cry about wasted disk space. I bought 32GB and 64GB SSDs dirt cheap on eBay. The adapters can be had on AliExpress (or Amazon).
Disadvantages:
- You cannot mirror your boot pool.
For any SOHO setup, a mirrored boot pool is a waste of resources. You have the config backed up; you don’t need more. Replacing the USB drive takes 15 minutes when you have a spare at hand.
Just want to share this and help others make an informed decision.
Installed TrueNAS today and found out about this while trying to install Scrutiny (SMART options were removed from the UI, but the docs still say they exist), when it asked me to make a “pool” to install an app offered inside the UI.
Who thought it would be a good idea to waste perfectly usable storage by nulling out entire drives when you’re only using 15GB? Are people specifically buying exact-size boot drives to run TrueNAS? Most people have 128-512GB+ drives, and there’s a pretty good chance they’ll reuse their existing stuff.
And it’s not like it’s physically impossible; the post literally tells you you can partition the drive, so clearly it’s not just me. The only issue is that you have to install it, find out that it works like that, and then search for why that is in the first place.
1. `boot.expand()` - The Boot Partition Fills the Entire Disk
middleware/src/middlewared/middlewared/plugins/boot.py:241-278
```python
disk_size = await self.middleware.call('disk.get_dev_size', disk)
if partitions[-1]['end'] > disk_size / 1.1:
    return  # already >90% of disk, skip
await run('sgdisk', '-d', '3', f'/dev/{disk}')  # delete partition 3
await run('sgdisk', '-N', '3', f'/dev/{disk}')  # recreate using ALL remaining space
```
The ZFS partition (partition 3) is deleted and recreated to consume all remaining space on the disk. The trigger threshold is `disk_size / 1.1`: if the partition doesn’t already cover >90.9% of the disk (1/1.1 ≈ 0.909), it gets expanded.
Is this technically necessary?
No. The `sgdisk -N 3` command explicitly means “use all remaining space.” Nothing about ZFS or the boot process requires the boot pool partition to be this large. The boot pool typically holds boot environments (OS images), which are far smaller than most disks; a 32 GB partition would be generous for most deployments. This is a policy choice - it’s the code actively claiming the space, not a fundamental constraint.
> [!IMPORTANT]
> If `boot.expand()` did not run, or if it capped the partition at a fixed size, there would be free space on the disk for a 4th partition.
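A minimal sketch of what a capped expand could look like - hypothetical, not the shipped middleware; `BOOT_POOL_CAP_GIB` and the function name are invented for illustration:

```python
# Hypothetical sketch, not the actual middleware: cap the ZFS partition at a
# fixed size instead of claiming all remaining space with '-N 3'.
import asyncio

BOOT_POOL_CAP_GIB = 32  # generous for boot environments, per the argument above

async def expand_capped(disk: str) -> None:
    """Recreate partition 3 at a fixed size, leaving the rest of the disk free."""
    proc = await asyncio.create_subprocess_exec(
        'sgdisk',
        '-d', '3',                           # delete partition 3
        '-n', f'3:0:+{BOOT_POOL_CAP_GIB}G',  # recreate capped (start 0 = first free sector)
        '-t', '3:BF01',                      # keep the ZFS type code
        f'/dev/{disk}',
    )
    if await proc.wait() != 0:
        raise RuntimeError(f'sgdisk failed on /dev/{disk}')
```

The key difference is the fixed `+{...}G` end specifier in place of `-N 3`’s “all remaining space”; the delete/recreate dance is the same one the middleware already performs.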
2. `disk.format()` - Wipes Before Partitioning
middleware/src/middlewared/middlewared/plugins/disk_/format.py:96-97
middleware/src/middlewared/middlewared/plugins/disk_/format.py:119
middleware/src/middlewared/middlewared/plugins/disk_/format.py:137
```python
# Step 1: Wipe the entire disk
self.middleware.call_sync('disk.wipe', disk, 'QUICK', False).wait_sync(raise_error=True)

# Step 2: Create partition 1 starting at sector 0
cmd += ["-n", f"1:0:+{int(size / 1024)}k", "-t", "1:BF01", f"/dev/{disk}"]

# Step 3: Assert exactly 1 partition exists
if len(self.middleware.call_sync('disk.get_partitions_quick', disk, 10)) != 1:
    ...
```
Three deliberate choices compound here:
- `disk.wipe` destroys existing partition tables, but `sgdisk` doesn’t require a clean disk
- `-n 1:0:` hardcodes partition number 1 at sector 0, but `sgdisk` supports any partition number and offset (e.g., `-n 4:0:+SIZE` would create a 4th partition in free space)
- the `!= 1` assertion enforces that the disk has exactly one partition afterwards, an explicit expectation of sole ownership
This is a policy choice. The same tool (`sgdisk`) is used by `boot.format()` to create partitions 1, 2, and 3 alongside each other:
middleware/src/middlewared/middlewared/plugins/boot_/format.py:98-105
```python
# boot.format() creates multiple partitions on the SAME disk without wiping between them:
['sgdisk', '-n1:0:+1024K', '-t1:EF02', f'/dev/{dev}'],    # BIOS boot
['sgdisk', '-n2:0:+524288K', '-t2:EF00', f'/dev/{dev}'],  # EFI
['sgdisk', f'-n3:0:{zfs_part_size}', '-t3:BF01', f'/dev/{dev}'],  # ZFS data
```
The boot formatter proves `sgdisk` can create a partition alongside existing ones. `disk.format()` for data pools could do the same thing: create a partition in unallocated space without wiping. It simply doesn’t offer that path.
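A sketch of what such a path could look like - `format_in_free_space()` is an invented name, but the `0` shorthands (first available partition number, first free sector) are standard `sgdisk` behavior:

```python
# Hypothetical sketch: add a ZFS data partition in free space without wiping.
# The sgdisk mechanics are the same ones boot.format() already relies on.
import subprocess

def format_in_free_space(disk: str, size_gib: int | None = None) -> None:
    """Create a new BF01 partition on `disk`, leaving existing partitions intact."""
    end = f'+{size_gib}G' if size_gib else '0'  # 0 = end of largest free block
    # Partition number 0 = first available number; start 0 = first free sector.
    subprocess.run(
        ['sgdisk', '-n', f'0:0:{end}', '-t', '0:BF01', f'/dev/{disk}'],
        check=True,
    )
```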
3. `get_reserved()` - Whole-Disk Reservation
middleware/src/middlewared/middlewared/plugins/disk_/availability.py:155-156
```python
async def get_reserved(self):
    return await self.middleware.call('boot.get_disks') + await self.middleware.call('pool.get_disks')
```
boot.get_disks() returns whole-disk names because resolve_block_path() walks sysfs to find the parent device:
middleware/src/middlewared/middlewared/plugins/zfs_/pool_status.py:20-21
```python
dev = Path(path).resolve().name  # "sda3" (partition)
resolved = Path(f'/sys/class/block/{dev}').resolve().parent.name  # "sda" (whole disk)
return resolved
```
check_disks_availability() then compares user-requested disks against this list of whole-disk names:
```python
disks_reserved = await self.middleware.call('disk.get_reserved')
already_used = disks_set - (disks_set - set(disks_reserved))
if already_used:
    verrors.add('topology', f'The following disks are already in use: ...')
```
This is a policy choice. The code deliberately resolves sda3 → sda to block the entire device. It could instead track which partitions are in use, or recognize that a disk with free unpartitioned space is partially available.
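As a rough sketch of that alternative (illustrative names, not the real middleware API):

```python
# Hypothetical sketch: reserve partitions ('sda3') rather than parent disks ('sda').
from pathlib import Path

def reserved_partitions(pool_device_paths: list[str]) -> set[str]:
    """Resolve pool member paths to partition names instead of whole disks."""
    # resolve_block_path() takes one extra .parent step to reach 'sda';
    # stopping a level earlier preserves partition granularity.
    return {Path(p).resolve().name for p in pool_device_paths}

def disk_is_fully_reserved(disk: str, reserved: set[str]) -> bool:
    """Only block a disk outright if it is reserved as a whole device."""
    return disk in reserved  # 'sda3' in the set would not block 'sda'
```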
4. WebUI DiskStore - Hides Boot Disks Entirely
webui/src/app/pages/storage/modules/pool-manager/store/disk.store.ts:32
```typescript
const disksWithExportedPools = usedDisks.filter((disk) => !disk.imported_zpool);
return sortBy([...unusedDisks, ...disksWithExportedPools], 'devname');
```
Any disk with imported_zpool set (including boot-pool disks) is excluded from the selectable list. The user never sees boot disks in the pool creation wizard.
This is a policy choice. The filter could be made more granular. For instance, showing boot disks with a “partial” indicator, or allowing selection of free space on disks with existing pools.
The Redundancy Pattern
All four mechanisms block the same scenario independently:
| # | Mechanism | What it does | Type |
|---|---|---|---|
| 1 | `boot.expand()` | Fills the disk so there’s no free space | Policy - `sgdisk` can create fixed-size partitions |
| 2 | `disk.format()` | Wipes the disk before partitioning | Policy - `sgdisk` can add partitions alongside existing ones (`boot.format()` proves this) |
| 3 | `get_reserved()` | Blocks by whole-disk name | Policy - resolves partition → parent device deliberately |
| 4 | WebUI DiskStore | Hides disk from UI | Policy - simple boolean filter |
If any single one were removed, the others would still block it. This redundancy means the prohibition is deeply embedded, but each layer is a choice, not a technical wall. The underlying tools (ZFS on partitions, sgdisk multi-partition support) already support the scenario the application prevents.
What the Code Proves Is Possible
The boot pool itself is the strongest evidence that this restriction is purely policy:
- ZFS runs on partition 3 (`BF01` type, GPT), not the whole disk
- `sgdisk` creates multiple partitions on the same disk (partitions 1, 2, 3 in `boot.format()`)
- `resolve_block_path()` already knows how to map partitions to disks; it could just as easily map disks to their free partitions
- `details_impl()` already queries `disk.list_partitions`; the partition information is available in the system
Nothing in the code suggests a fundamental incompatibility between having a boot pool partition and a data pool partition on the same physical disk.
On the comment in the OP
Where the post’s framing doesn’t match the code:
- “Clear engineering reasoning”, but the code contains no engineering reasoning. No comments explaining why, no risk checks, no safety guards. It’s four layers of “no” with zero explanation in the source. If there were clear engineering reasoning, you’d expect at least one comment somewhere saying “we don’t allow this because X.”
- “Would introduce lots of new complexity and edge cases”, but the actual delta is surprisingly small. `boot.expand()` needs a size cap instead of `-N 3`. `disk.format()` needs a path that skips the wipe and uses a higher partition number. `get_reserved()` needs partition-level granularity. The boot formatter already demonstrates all the underlying mechanics.
- The “appliance firmware” framing is doing heavy rhetorical lifting. The code is a Python middleware with a full Angular web UI, CRUD services, a plugin architecture, and database-backed state. That’s a full application stack, not firmware. The “it’s firmware, don’t question it” framing doesn’t match the codebase’s actual character.
What the post is really saying:
The honest core of the argument is in this one sentence: “iXsystems is trying to build an amazing product with a limited staff, and they need to be able to focus on the things that pay the bills.” That’s a resource allocation decision. It’s not that it can’t be done, or that it’s unsafe, or that it’s technically unsound. It’s that they don’t want to spend the time, and their enterprise customers don’t need it because they ship dedicated boot hardware.
The rest of the post, the “appliance firmware” framing, the “don’t be demanding,” the “clear engineering reasoning”, is dressing up a business decision as a technical one, and then preemptively discouraging further discussion about it. The code supports none of the technical mystique. It’s four straightforward policy filters.
So yeah, it turns out it’s not some NASA space engine, four-leaf-clover hidden technique that prohibits it. The smugness of the people I’ve seen talking about this is not very justified, but it’s what you come to expect from Linux adjacents.
@imi - You bring up an interesting point.
What if someone put in a feature request to limit the boot-pool to 16GB to 32GB, instead of full capacity of the boot device?
This solves one particular problem in boot device replacement: if you used a 128GB boot device and later have to replace it with a 64GB one, having the actual boot-pool be smaller makes sense. Simple replacement, without having to save the configuration, re-install and restore the configuration. It saves downtime too.
With this very minor change, the user could then manually create another partition and make a secondary pool. Clumsy, but much more likely to be implemented than full sharing of boot-pool devices.
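For anyone willing to go unsupported, the manual half of that workflow is short. A sketch, assuming a capped boot pool left free space on the device; the device and pool names here are examples:

```python
# Unsupported, hypothetical workflow sketch: carve a data partition out of the
# boot device's free space and build a pool on it. Run as root; a middleware
# update or a later boot.expand() run may conflict with it.
import subprocess

def make_secondary_pool(disk: str = 'nvme0n1', pool: str = 'apps') -> None:
    # Create a partition in the first free slot/space (sgdisk '0' shorthands).
    subprocess.run(['sgdisk', '-n', '0:0:0', '-t', '0:BF01', f'/dev/{disk}'], check=True)
    # The new partition number depends on the layout; 4 is typical after the
    # three boot partitions (adjust 'p4' to your device's naming scheme).
    subprocess.run(['zpool', 'create', pool, f'/dev/{disk}p4'], check=True)
```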
I will not go into sharing the boot device(s) with other uses. It is clearly obvious that on failure, replacement would require dealing with those other uses, besides the boot-pool. This is something TrueNAS does not do for any pool member. (Well, except Hot Spare device(s), which can be shared among different pools…)
I was quite lucky as my Dell Server had a DVD which I replaced with the following:
" Universal 9.5mm SATA to SATA 2nd SSD HDD Hard Drive Caddy Adapter Tray", and installed a 64GB m.2 adapter in the case.
Works a treat.
Have to laugh - my boot drive was born in January 2009. The WDC WD3200BEVT-00A23T0 is a 320GB 2.5-inch SATA laptop hard drive from Western Digital’s Scorpio Blue series. I have no plans to change it, and the server turns off and on each day to save me money on electricity. It’s been doing that for 2 years now. Trust the rust!