Striped boot-pool

Hello

I know what I am proposing is not the most ideal solution, but I would like to try and figure out how to do it anyway. Any help would be appreciated.

I have a little N100-based board with 2 NVMe slots, and I have 2 new 16 GB M10 Optane drives. These will work in mirrored mode or by themselves.

I am interested in running them in a striped boot-pool.
These M10 Optane drives have a sequential speed of about 150 MB/s write and 900 MB/s read.
This won’t speed up the system by any great deal but I would like to try just to see.

I note that the TrueNAS SCALE documentation does say that installing on a striped pool is not recommended unless you have a good reason.

With all that in mind, is it possible to do with SCALE? Potentially using the CLI?

I could of course buy additional drives etc., but I would just like to see what is possible with what I have. I will back up my config etc., but this is more of a fun wee side project.

Thanks

Any particular reason not to just use a mirrored boot pool, which gives you redundancy?

Anyway, TrueNAS does not expose the ability to stripe the boot pool. So you would have to do it manually.


Thanks for replying!

So no real reason, to be totally honest. I do like the idea of having a bit more space in my boot pool for future updates etc. These Optane drives have very high write endurance and are brand new, so it's a risk I am willing to take.

I know redundancy is key in nearly all situations; in my other 2 systems I mirror my boot pools, back up my configs, and run a tight ship.

I have been using TrueNAS SCALE for the last year, installed across 2 self-built NAS systems, so this third system is more of a testbench NAS (I have a NAS problem, it would seem).

Do you have any pointers on how I could do it? Not saying it's wise (it really isn't), but just to see what is possible.

I would imagine I would go into the CLI and create the pool before install. But the installer itself asks for disks, and if you give it 2, it automatically sets up a mirror (as 99.9% of people should).

Yeah.

Mirror the drives in the UI

Then, using the CLI, detach one of the ZFS partitions, then extend the boot pool with that same partition.

Then duplicate your boot environment in the GUI and delete the old one.

That should rebalance the data across both drives.
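The detach-and-extend step above can be sketched with `zpool` commands. This is only a sketch: the partition names (`nvme0n1p3`, `nvme1n1p3`) are assumed examples, so check `zpool status boot-pool` first for the real vdev names on your system.

```shell
# Sketch only -- partition names below are assumptions; confirm with:
zpool status boot-pool

# 1. Detach the second drive's partition from the mirror,
#    leaving a single-disk boot pool.
zpool detach boot-pool nvme1n1p3

# 2. Re-add that same partition as a second top-level vdev,
#    turning the pool into a two-disk stripe. -f forces past the
#    leftover pool labels on the freshly detached partition.
zpool add -f boot-pool nvme1n1p3

# 3. Verify: both partitions should now show as separate vdevs,
#    with no "mirror-0" entry.
zpool status boot-pool
```

Note that `zpool add` is one-way: once the pool is striped there is no supported way back to a mirror short of reinstalling, which is part of why the docs warn against it.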

Ok thanks for the tip! Really helpful.

I will try this as soon as the board arrives. Been waiting for it for a couple of weeks now so thoroughly in thought process mode right now.

I might change my mind again and just run a more normal setup.

Definitely going to try this though, just for fun.

:innocent:


:smiling_imp:


Which face do you trust more?

:sweat_smile:
They both live on my shoulder on a daily basis!


Just out of personal curiosity, is there any difference between SCALE and CORE that encourages mirroring the boot pool?

One thing people sometimes forget is that older alternate boot environments eventually become useless.

For example, at work I was requested to update the Solaris root pool version just before patching. This would mean that the current and future boot environments would be fine.

But I'd have to manually check the oldest alternate boot environments to see if they would still work, and any that cannot use the new root pool version might as well be destroyed. (We normally keep 2 to 3 old alternate boot environments; any more than that get destroyed by our patching policy.)

This also applies to TrueNAS boot pools. If a newly enabled feature or feature set is a blocker for an older boot environment, then that old ABE is useless, so it might as well be removed.

Basically keeping 2 old boot environments and current is really all that is needed.

Thus, if those 3 take up more than 16 GB of space, you might have a different problem.
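On TrueNAS, boot environments are ZFS datasets under the boot pool's `ROOT` dataset, so you can check how much space old ones hold from the shell. A hedged sketch (the environment name `23.10.1` is just an example; the pool is `boot-pool` on SCALE and `freenas-boot` on older CORE installs):

```shell
# List all boot environments with their space usage and creation date.
zfs list -r -o name,used,creation boot-pool/ROOT

# Destroy an old environment that can no longer boot (example name;
# prefer deleting via System > Boot in the GUI, which also guards
# against removing the active environment).
zfs destroy -r boot-pool/ROOT/23.10.1
```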


SCALE writes a lot more logs/Netdata data to the boot drive.
