How to add a 1 TB SSD to a RAIDz1

After migrating from RAIDz2 to RAIDz1, I would like to add an extra 1 TB SSD to the existing pool. I have tried several avenues, but I am running into issues:

  1. The drive is not formatted
  2. There is no gptid assigned to the drive
  3. I am unable to find an option to format the drive with a GPT partition table (see the sketch just below)
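
For reference, this is the manual route I would expect from the CORE (FreeBSD) shell, as a rough sketch with da3 standing in for the new SSD's device name:

# Wipe any old partition data from the new SSD (da3 is a placeholder)
gpart destroy -F da3
# Create a fresh GPT partition table
gpart create -s gpt da3
# Add a single freebsd-zfs partition spanning the disk
gpart add -t freebsd-zfs da3
# Confirm a gptid has been assigned to the new partition
glabel status | grep da3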

When I try it through the Web GUI, it gives an error:
“Caution: Adding data vdevs with different numbers of disks is not recommended. First vdev has 3 disks, new vdev has 1”

Can someone provide some guidance on how to add the SSD to the current pool?

Thank you in advance for your assistance.

We need a bit more information:

  1. Is your RAIDz1 pool all SSDs, or is that the only SSD you want to add?
  2. Which version of TrueNAS are you using? It sounds like you want to expand a vdev with a single disk, which is only possible on TrueNAS SCALE 24.10 RC1, released two days ago. It's a new feature and not well tested right now.

…and is still in pre-release status.

None of these are issues that would affect adding the SSD to the pool. But what's currently in the pool? If it's currently a pool of three spinners, there's no real use for a fourth device that's a 1 TB SSD: it's far too large to use as L2ARC (not to mention SLOG, even if it were suitable for that purpose, which is highly unlikely), and there's no sense at all in putting an SSD in a vdev with spinners. If it currently consists of three 1 TB SSDs, then it would make some sense, but then you'd want to wait for the release of 24.10 before proceeding.
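
For context, cache and log devices go into a pool as their own vdev types rather than into the data vdev; a rough sketch, with tank and da3 as placeholder names:

# Add an SSD as L2ARC (read cache); 'tank' and 'da3' are placeholders
zpool add tank cache da3
# Or as a SLOG (separate intent log), useful only for sync-heavy workloads
zpool add tank log da3
# Either kind can later be removed without harming the pool
zpool remove tank da3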


@LarsR

My apologies, it was late at night, I was still working on this issue, and it totally slipped my mind. Thank you for the reminder!

A1: Yes, the entire pool was moved from mechanical drives to SSDs.
A2: It is running the latest TrueNAS CORE, 13.0-U6.2.

I figured TNC might be on the brink of being phased out, as I see more and more of TNS. I personally don't like TNS, and before you ask: yes, I tested it, and the Web GUI is lacking many features.

Here is my question, though: is that feature limited to the Web GUI only? I ask because there are features in TNC that are blocked in the Web GUI but available via the CLI.

I am also noticing that the OpenZFS version is falling behind:

zfs-2.1.14-1
zfs-kmod-v2023120100-zfs_f4871096b

Meanwhile, the latest release is 2.2.6, which I think might be affecting the addition of the drive. Here is the error I am receiving at the CLI:

invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
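
For reference, this is roughly the command that produced it, with tank and da3 as placeholders for my pool and the new SSD:

# Attempting to add a bare disk to a raidz1 pool
zpool add tank da3
# Forcing it with -f would stripe a single-disk vdev next to the raidz1;
# losing that one disk would then take out the entire pool, so I did not:
# zpool add -f tank da3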

Thanks in advance for your assistance.

AFAIK vdev expansion is not available in CORE 13.0-U6.2 because it uses a version of ZFS that doesn't have that feature. You could try updating to CORE 13.3 and see whether the newer ZFS version includes raidz expansion.
I switched to SCALE over 3 years ago, so my CORE knowledge is pretty rusty, and I don't follow its release notes as closely as the ones for SCALE.

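A quick way to check what the installed ZFS supports, sketched with a placeholder pool name tank; on OpenZFS builds that have it, raidz expansion shows up as the raidz_expansion feature flag:

# Show the userland and kernel-module ZFS versions
zfs version
# Check for the raidz expansion feature flag; versions without
# the feature will report an invalid-property error instead
zpool get feature@raidz_expansion tank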


Hello dan,

It seems it is, given the error I am receiving from OpenZFS:

invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk

Regarding your point that mixing mechanical and SSD drives in the same pool is not recommended: agreed, as you mentioned. But there are advanced systems that use SSD or M.2 drives as a data cache. For this particular RAIDz1, all the disks are SSDs.

As previously mentioned, I prefer to stay on TNC rather than moving to TNS. But again, I can see the push to move to TNS more and more. It makes me wonder if TNC will be discontinued at some point in the future.

Thanks for the assistance.

There is no sensible way to use the additional SSD in your pool in CORE. In SCALE starting with 24.10 (not yet released), you’ll be able to add it.

It is not, and in all likelihood never will be, in CORE.

Thanks for the info, I will keep that in mind.

If I do move to SCALE, I would just go with pure ZFS rather than TrueNAS. The methods and Web GUI used by TrueNAS are very limiting and, IMO, useless in some cases. SCALE is based on Linux, I believe some flavor of Debian, which I am not that attracted to. If push comes to shove, I will move to XigmaNAS.


@dan

From what I found out in the FreeBSD forums, there are two methods:

  1. replace each individual drive in the existing raidz1 vdev with larger drives, which will expand the total available space in the pool once the last drive is replaced; or
  2. add another raidz1 vdev to the pool, using the same number of drives as the other vdev uses (both methods are sketched below)
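
Roughly sketched, the two methods would look like this; they are alternatives, and tank and the device names are placeholders:

# Method 1: swap each disk for a larger one, one resilver at a time
zpool set autoexpand=on tank
zpool replace tank da0 da4
zpool status tank   # wait for the resilver to finish, then do the next disk
# Method 2: add a second raidz1 vdev of the same width
zpool add tank raidz1 da4 da5 da6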

This means I would have to go through the entire painful process of replicating all the datasets, where TNC is horrible and very buggy at the Web GUI level, as I described in one of my previous posts, especially when it moves the sub-datasets outside the parent.

In any case, I will have to do the migration process again.

@dan

Makes sense; otherwise, why have an enterprise version?

With neither of the two methods you suggest do you need to replicate or migrate anything; the larger drives, or the additional vdev, simply become part of the same pool, so there's no need for you to move anything around.

As to CORE vs. SCALE, CORE is going away entirely, both the community and the Enterprise versions. The timeframe is unclear, but from iX's public statements, it appears highly unlikely there will be any more significant releases of CORE, though there may be bugfix releases.

@dan

Allow me to clarify:
First, it wasn't my suggestion; it was posted in the FreeBSD forums.
Second, yes, you do, in order to expand the RAIDz1, since it doesn't allow just adding a single drive: either a 3-drive RAIDz1 vdev must be added, or an entirely new zpool created, since adding a single drive can create issues.
Since I am already at the largest drive size I want for this system, I will have to migrate the datasets to the temp system, destroy the current RAIDz1, and create a new one with all the drives, which is very impractical for either home users or businesses.
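
Roughly, that destroy-and-recreate path looks like this, with tank and the device names as placeholders; everything on the pool is gone at the destroy step, so the replication has to be verified first:

# Only after the datasets are safely replicated to the temp system:
zpool destroy tank
# Recreate as a 4-wide RAIDz1 including the new SSD
zpool create tank raidz1 da0 da1 da2 da3
# Then replicate the datasets back from the temp system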

About your comment on TNC: yes, that has been on the horizon for a while. Since iX took over, FreeNAS has been in a downright decline. It has been noticeable for those using it since version 9.0, and even before that, when it was OpenNAS.

SCALE looks rough. It looks like it is still in beta testing, in contrast to TrueNAS CORE, which was a solid product. But, hey, $$ talks and BS walks. That is my perspective, as I have seen it in the IT industry for many decades.

Neither of the two suggestions you referred to in your post dealt with expanding the RAIDZ1. You mentioned replacing all the drives with larger ones, and adding a second vdev, as the two possibilities. Neither of those involves expanding the RAIDZ1, and CORE is quite capable of doing both (and has been for a good long time). If you want to turn a three-drive RAIDZ1 into a four-drive RAIDZ1, yes, that involves RAIDZ expansion, and that means SCALE 24.10 or later. Or, of course, destroy and rebuild the pool, which you can do under any version of Free/TrueNAS.
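
For what it's worth, on SCALE 24.10 the expansion itself is a single attach against the raidz vdev; a sketch with placeholder names, where raidz1-0 is the vdev label that zpool status shows:

# Attach a fourth disk to the existing raidz1 vdev
zpool attach tank raidz1-0 da4
# The expansion runs in the background; check progress with
zpool status tank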

If you really liked FreeNAS better before iX took over (which would have been before 8.0), why not use XigmaNAS? That’s the current version of that project.

@dan

You might want to revisit the initial post, which clearly stated the goal of adding a single drive (disk) to a pool, and obviously a preexisting pool at that.

Both methods mentioned relate directly to expanding the pool. That is the whole idea of adding a drive to the pool: to extend its capacity. Given that a single drive cannot be added directly to the pool, the options are either destroy and recreate, or add the same number of disks in a new RAIDz1 vdev. I would recommend reading the FreeBSD post.

As I mentioned, I joined TN at version 9.0; before that I stood still in time with FreeNAS. Then I tried NAS4Free for a while, and after that I decided to run both forks. As you can see, I already run XigmaNAS on some of my systems, since it is a bit lighter on resources than TNC. I run both NAS systems, each for a different purpose.

Even home users should have backups.
The current pool is 2 TB, so it should not be difficult to find one or two hard drives to replicate to.
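
A replication of that size is a straightforward recursive send; a sketch assuming a source pool tank and a backup pool named backup on the spare drive:

# Snapshot every dataset recursively, then send the whole tree
zfs snapshot -r tank@migrate1
zfs send -R tank@migrate1 | zfs recv -F backup/tank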


@dan

The point is moot at this time. I still have to go through the process of destroying and recreating the pool, not to mention downtime; even with backups, there is still downtime. Moving the data to the temp system is the smart move to prevent it. Once the replication is completed, all I have to do is change the IP in the DNS server and the move is complete, totally invisible to the end user.

The temp system is the twin of the production system. Restoring from backup would take longer, meaning a longer downtime for the system. The only downside of the temp system is its 1 TiB size; the idea is to move it to 3 TiB to accommodate more data.

The GUI is quite correctly preventing you from adding a single-disk VDev to a pool with a raidz VDev in it.

I assume what you actually want to do is increase the width of your 3-wide RaidZ1 VDev to 4-wide.

This is a feature in TrueNAS SCALE 24.10, Electric Eel.

And it’s a big deal that we’ve been waiting for for 5 years or so :wink:


@Stux

It is also the CLI, besides the Web GUI. For some reason I thought it was still running Oracle ZFS when, in reality, it is OpenZFS, which seems to be behind Oracle's.

And you are absolutely right: I want to expand the RAIDz1 vdev from 3-wide to 4-wide. It would be nice to have the feature in TNC, since that is what I prefer. I tested TNS and it only lasted about 20 minutes installed. It looks very basic, with fewer features, and is still under development. I guess it is still in its infancy, since 5 years is really not that long a time for a product.

Thanks for the info, though!

Just so you know, on Eel (which I am not migrating to until the .2 release, though I am testing it) ZFS is at 2.2. You are at the "end", so to speak, of the Dragonfish release. Yes, it's supported for a lot longer, but I doubt they'll update ZFS in Dragonfish.