How to import individual ZFS disks from XigmaNAS to TrueNAS SCALE?

I’m migrating from XigmaNAS to TrueNAS SCALE and have a question about importing my disks. In XigmaNAS, the disks are not part of a ZFS pool; each disk was set up individually using the ZFS format.

I’ve already installed TrueNAS SCALE on a separate SSD and connected the disks to the server, but I’m unsure how to proceed to mount and access the disks individually on TrueNAS SCALE. My questions are:

  1. Is it possible to import individual ZFS disks into TrueNAS SCALE without creating a pool?
  2. Are there specific steps or commands to mount these disks in SCALE?
  3. Can I configure new shares directly on these disks without recreating them or losing data?
  4. Any tips to avoid issues during this process?

I’d appreciate any guidance or links to materials that could help with this migration. :blush:

Thanks in advance!

Specific answers to specific questions:

  1. There is no such thing as a ZFS disk. There are only ZFS pools. But each disk is probably a separate pool.

  2. You should be able to import these pools through the TrueNAS UI. If that doesn’t work, please open System Settings / Shell, run sudo zpool import, and copy and paste the output here (see the example after this list).

  3. Yes. But this is NOT a good idea. Single disk ZFS pools are NOT the best configuration and you should move away from this.

  4. If the import works through the UI, then no. But if it doesn’t work, we can help you fix it.
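
In case the UI import fails and you end up at the command line, here is a minimal sketch of what the CLI side looks like. The pool name volume1 below is just a placeholder (use whatever names sudo zpool import actually lists), and importing through the UI is still the preferred route because it registers the pool with the TrueNAS middleware:

# List pools that are available for import, without importing anything
sudo zpool import

# Import a specific pool by name; -f forces it if the pool was last used on another system,
# and -R /mnt mounts it under /mnt where TrueNAS expects pools to live
sudo zpool import -f -R /mnt volume1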

However, once you have imported them and can access the data on them, you will need to decide what to do next. You certainly should not leave them as single-disk pools, because they have no redundancy.

If you can add your system specs to your signature (MB, memory, storage controller(s), SSDs, HDDs - including both the sizes and model numbers of the disks - and the % fill of each of the disks), we can almost certainly advise on how to proceed. Without this information we can only make generic recommendations, which might be wrong in your specific situation.

You are right, there are 4 ZFS pools.

Currently, I am migrating from old hardware. The old system is a Supermicro X7SBL with 8 GB of RAM and the disks are directly connected to the motherboard, without a RAID controller. The disks are:

  • 2 x Seagate IronWolf, NAS, 4TB, 3.5"
  • 2 x Seagate IronWolf, NAS, 2TB, 3.5"

The configured volumes are:

volume1

  • Total: 3.98 TB | Allocated: 849.04 GB | Free: 3.13 TB | State: ONLINE

volume2

  • Total: 3.98 TB | Allocated: 888.94 GB | Free: 3.09 TB | State: ONLINE

volume3

  • Total: 1.99 TB | Allocated: 175.5 GB | Free: 1.81 TB | State: ONLINE

volume4

  • Total: 1.99 TB | Allocated: 200.5 GB | Free: 1.78 TB | State: ONLINE

Now, I am migrating to more current hardware, which is a System X3300 with 32 GB of RAM and a Xeon CPU E5-2420. The new system also includes an LSI MegaRAID SAS 2008 controller. With this new hardware, I am planning to optimize the configuration and redundancy of the volumes.

What does this mean? Are you planning to make mirrors out of the four disks?
Move the data to a new Raid-Z2 VDEV and pool?

I am planning to migrate to TrueNAS first, and then later decide if I will create a Raid-Z2 pool. Initially, the goal is just to migrate the ZFS pools that are currently installed on XigmaNAS.

Did you check if it is flashed to IT mode?

You can check by running

sudo sas2flash -listall

You can paste the results back here using Preformatted text (Ctrl+E), which looks like </> on the toolbar when you post comments.

With these drives, your best choice is a pair of mirrors, which gets you 6TB of redundant and flexible storage.

And for what it’s worth, you should be able to consolidate the 2nd drive onto the first, then attach it as a mirror to the first, then consolidate the other two drives, and then add them as an additional mirror.

You should be able to use local replication for the consolidation.
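
To make the consolidation idea concrete, here is a rough sketch of that sequence using the CLI, assuming the pools are still named volume1…volume4 as above (the disk names are placeholders, and the TrueNAS UI’s Replication Tasks plus the pool extend/attach functions can do the same steps more safely):

# 1. Snapshot the second pool and replicate its data into the first pool
sudo zfs snapshot -r volume2@migrate
sudo zfs send -R volume2@migrate | sudo zfs receive -u volume1/volume2-data

# 2. Once the copy is verified, destroy the now-empty pool to free its disk
sudo zpool destroy volume2

# 3. Attach the freed disk to volume1's single disk, turning it into a mirror
sudo zpool attach volume1 <existing-4TB-disk> <freed-4TB-disk>

# 4. Repeat the same pattern for volume3/volume4, then add the freed 2TB pair as a second mirror vDev
sudo zpool add volume1 mirror <2TB-disk-1> <2TB-disk-2>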

Ensure your MegaRAID is running IT firmware.

@tngxsantos Just to put a little meat on the bones of what other people have said (and which I agree with):

  1. You really need to be using a redundant disk configuration so that if a disk dies, you don’t lose any data.

  2. If you had 4x drives of the same size, the best configuration for 4x 2TB or 4TB drives would be a single RAIDZ1 vDev - but unfortunately you have 2 pairs of drives of different sizes, and a RAIDZ1 vDev across all 4 drives would only use 2TB of the 4TB drives, which would be a waste. (A RAIDZ vDev essentially expects all drives to be the same size.)

    Note: In case you are relatively new to ZFS, a pool is made up of 1 or more Virtual Devices (or vDevs) - primarily data vDevs (though, for completeness, there are other types of vDev which you don’t need for your small requirement).

  3. So you need to use pairs of matching drives to make 2x vDevs. You can either have two pools with separate free space - if one pool gets full then you cannot use the space from the other pool, and instead you would need to manually move data to rebalance the free space - OR you can make a single pool with 2x vDevs, one 4TB in size and the other 2TB in size. This single pool is what I would personally recommend.

  4. However, contrary to the mirrored pairs other people are recommending, I would recommend that you create 2x vDevs, each of which is a RAIDZ1 pair of matching sized disks. (Creating a RAIDZ1 with only 2 disks is new in TrueNAS 24.10 Electric Eel.) Although the performance of a 2x RAIDZ1 is probably not quite as good as a mirror, the benefit of this is that if at a later date you need more space, then you can add e.g. another 1x 4TB drive to the 4TB RAIDZ1 vDev and actually get 4TB more space. If you use mirrored pairs, then you would have to add 2x 4TB drives to get 4TB more space.

    However, there is a down-side to this - namely that reports are that you cannot create a 2x RAIDZ1 through the TrueNAS UI, and you would have to use the command line to achieve it (see the sketch after this list). The good news is that we can talk you through this if you decide that this is the way to go.

  5. That said, a configuration with 2x 4TB in one vDev and 2x 2TB in another vDev would give you 6TB of useable space, but equally all 4x drives in a single RAIDZ1 vDev would also get you 6TB of useable space. The difference here is that if you later add a single 4TB drive to this vDev you would only get 2TB of additional useable space, so this alternative would make sense if your system can’t take more than 4x drives and, when you need more space at a later date, you want to replace the 2x 2TB drives with 2x 4TB drives - with 2x vDevs you would get 2TB more useable space, with a single vDev you would get 6TB of additional useable space.

  6. Another factor would be how many SATA drives your system can support - and to be able to determine this we will need to know what MB you are using for on-board SATA ports, and if your drives are not connected directly to the MB then what other controllers you might be using (and how many disk positions your case has).

  7. The good news is that whichever design you decide to go with, you have easily enough space to achieve it by moving data between these drives, and not have to move data to a different system. However, the route to achieving this will depend upon which design you wish to go with.
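
On the 2-wide RAIDZ1 point above: since reports are that the UI cannot create it, here is a minimal CLI sketch of what it would look like. The pool name tank and the disk paths are placeholders - check the real device IDs with lsblk first, and a pool created this way would then be exported and imported through the UI so that TrueNAS manages it:

# Create a pool with a single 2-disk RAIDZ1 vDev from the two 4TB drives
sudo zpool create tank raidz1 /dev/disk/by-id/<4TB-disk-1> /dev/disk/by-id/<4TB-disk-2>

# Add a second 2-disk RAIDZ1 vDev made of the two 2TB drives
sudo zpool add tank raidz1 /dev/disk/by-id/<2TB-disk-1> /dev/disk/by-id/<2TB-disk-2>

# Later, with RAIDZ expansion (OpenZFS 2.3 / TrueNAS 24.10+), a third 4TB drive could
# reportedly be attached to grow the first vDev, e.g.:
# sudo zpool attach tank raidz1-0 /dev/disk/by-id/<new-4TB-disk>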

I hope that this helps.

Not really. Last I heard it doesn’t work in the GUI, and it was always there in the background.

EDIT: oops. Jumping the gun!

Aside from that, I really don’t think there is much point buying more 2TB or 4TB disks. They’re uneconomical, so if one fails you’d want to replace it with something larger anyway.

Mirrors are very flexible. Upgrade in pairs. Remove a mirror, add a mirror, etc.

And RAIDZ expansion is not without its own issues.

@stux makes some valid points - RAIDZ expansion is new and currently has some issues with how the new space is reported - but these will likely be fixed in the next year or so. And as we both pointed out, you would need to use the CLI to create the 2x RAIDZ1 vDevs and the route to achieving this would need you to do more data moves.

That said, I think that 2x mirrored pairs vs. 2x RAIDZ1 pairs vs. 1x RAIDZ1 4-wide are all equally flexible, just flexible in different ways.

But I think what everyone would agree upon is that understanding the options and selecting the one which will meet your expansion needs in a couple of years is going to be a whole lot easier than discovering in a couple of years that you made the wrong choice and then finding it difficult (or impossible) to switch over.

On reflection I would suggest that you weigh up the following options:

  1. A single pool with 2x mirrored vDevs, first 2x 4TB and second 2x 2TB - 6TB useable space, easiest migration, most flexible - can add or remove drives as you wish, fixed 50% redundancy overhead. (A CLI sketch of this layout follows below.)
  2. A single pool with a single 4x RAIDZ1 vDev - 6TB useable space, best if you are going to stick with 4x drives total and later swap 2x2TB for 2x4TB to get 12TB of useable space.
  3. Two pools, each containing a single RAIDZ1 vDev - first 2x 4TB, second 2x 2TB - 6TB useable space, but two separate areas of free space so more manual balancing of data - best for adding additional 4TB or 2TB drives (rather than replacing existing).

(So I am no longer recommending a single pool consisting of two 2x RAIDZ1 vDevs - because it is inflexible e.g. you cannot remove the smaller vDev.)
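
For comparison, here is a quick sketch of what option 1 (one pool, two mirror vDevs) looks like at the CLI. The UI can build this layout entirely, so this is only to illustrate the end state, and tank plus the disk paths are again placeholders:

# Create the pool with a mirror vDev of the two 4TB drives
sudo zpool create tank mirror /dev/disk/by-id/<4TB-disk-1> /dev/disk/by-id/<4TB-disk-2>

# Add a second mirror vDev made of the two 2TB drives
sudo zpool add tank mirror /dev/disk/by-id/<2TB-disk-1> /dev/disk/by-id/<2TB-disk-2>

# Confirm the layout and the roughly 6TB of useable space
sudo zpool status tank
sudo zpool list -v tank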

Thank you for the explanation. I am new to ZFS. The new motherboard is an IBM SYSTEM x3300 M4, which has 4 SATA ports, in addition to the LSI MegaRAID SAS 2008 RAID controller.

From what I’ve read, it’s not ideal to use this controller for native RAID, correct? Should I disable it to use with ZFS?

Lastly, do you have any video recommendations that explain how to import ZFS pools?

No - you should be able to use the LSI MegaRAID SAS 2008 so long as you flash the firmware on it to “IT mode”. And the utility to do this is included in TrueNAS Scale.

Which ports are your disks currently connected to? MB or LSI SAS HBA?

Today, the board running XigmaNAS is an old one, a Supermicro X7SBL, which has 6 SATA ports. There is no add-on controller.

The LSI MegaRAID SAS 2008 is in another hardware where I will install TrueNAS.

Thank you very much for the explanation and help.

P.S. I just took a look at the IBM (or Lenovo) x3300 M4 - and it has 8x 3.5" drive bays.

According to this page, the LSI SAS 2008 is most likely a ServeRAID M1115 SAS/SATA Controller which has the following specifications:

  • Eight internal 6 Gbps SAS/SATA ports
  • Two x4 mini-SAS internal connectors (SFF-8087)
  • Supports RAID levels 0, 1, and 10
  • Supports RAID levels 5 and 50 with optional M1100 Series RAID 5 upgrades
  • 6 Gbps throughput per port
  • PCIe 3.0 x8 host interface
  • Based on the LSI SAS2008 6 Gbps ROC controller

So it looks like this will support 8x SATA / SAS drives @ 6Gb/s per port.

The spec does NOT seem to include any motherboard SATA ports, but there does seem to be a built-in ServeRAID C105 which supports 4x SATA. This is an important distinction because:

  1. This is also a RAID controller so you need to see if it can be flashed to IT mode; and
  2. The port speeds are only 3Gb/s.

There are no M.2 or mSATA ports, so you will need some 2.5" SATA SSDs for a boot drive and (if you are going to run VMs or Apps) for a mirrored app pool.

If boot times are not a major concern - and this is likely to be running 24x7 anyway - then the boot drive can probably live with being 3Gb/s, but the apps pool probably wants to be 6Gb/s.

So with the available HBAs, you are probably limited to 9 drives, with 5 drives needed for the boot drive and your existing HDDs, leaving 4 ports spare for the app pool and future expansion. But there are plenty of PCIe slots, so you should be able to add another HBA at a future date easily enough if you need more expansion.

NOTE: I think you need different drive cages for 3.5" HDDs and 2.5" SSDs so this may limit your choices (or if you have spare 3.5" bays then you can use an adapter to install 2.5" drives).