Assistance in migrating from Unraid to TrueNAS

Hi everyone!
I know this migration has been done many times before and has already been raised a few times on the forums… but I’m hoping for some assistance with my specific situation, and I’ll try to provide enough detail to make it easier for responders to help.

As the title suggests, I’m strongly contemplating migrating from Unraid to TrueNAS Scale. I have some questions about TrueNAS and the migration process. I’ve tried to research these but I can’t find the level of detail I’m after.
Before the questions, I think it’s best to share my setup and my reasoning.

Current Setup

  • Silverstone RM41-H08 Case
  • Gigabyte Z370M D3H Mobo
  • 32GB DDR4 RAM (slots available to upgrade to 64GB)
  • LSI 9207-8i HBA
  • Intel i7 8700k CPU
  • 5 installed drives (4TB SAS parity HDD, 4TB SAS data HDD, 2TB SATA data HDD, 500GB SATA data HDD, 500GB SATA cache SSD).
  • 2 spare HDDs (a 4TB SAS HDD with some bad sectors, on its way out, and a 1TB SATA HDD).
    My current arrangement allows for two 2.5in drives (occupied by the 500GB HDD and 500GB SSD), five 3.5in drives (2 spots available), and 2 NVMe drives (no plans to use these at this stage).

Planned Actions

  • Buy two 4TB SAS HDDs and place them in the 2 available drive spots. The four 4TB SAS drives will become my storage pool.
  • Install Proxmox on bare metal (on the 500GB SSD) and virtualise TrueNAS Scale Electric Eel (v24.10) to act as the NAS.
    I’m unsure whether TrueNAS can share the Proxmox boot drive, or whether it should have its own?
  • Install a Debian VM to run all Docker containers. I’ll likely use the 500GB HDD for this.
  • Install 2 other VMs for various activities (on the 2TB HDD).

Reasoning

  • I have the physical capability to use more drives, but am limited by the Unraid license and don’t want to pay for the upgrade.
  • Electric Eel addresses a limitation that originally turned me away from TrueNAS (it adds the ability to expand a pool by adding individual disks).
  • I’d like to utilise Proxmox for its superior VM capabilities (so I’ve read… I haven’t experienced TrueNAS or Proxmox myself yet), as well as being able to separate machines for dedicated responsibilities.
  • Virtualising Unraid isn’t officially supported and looks to be a real pain in the butt.
  • TrueNAS looks more polished and is visually appealing.
  • Mature ZFS and snapshots… I don’t think I need to elaborate there.
  • Not turned off by any required learning curve for TrueNAS & Proxmox. I like learning new stuff.

Questions

  1. Does TrueNAS have the ability to spin down drives after a set idle period, and does this work with both SAS and SATA drives? I rely on this feature in Unraid to reduce power consumption.
  2. Are there any considerations, comments or thoughts (in general, given the above info) as to why I should, or should not, proceed with the change? If the latter, any alternative suggestions? Currently, my only negative is “it’s a crap load of work” haha.
  3. Is my planned drive utilisation alright, or is there a better configuration with my current drive setup or with minimal cost outlay?
  4. I’d like 2-drive redundancy… would RAIDZ2 be best for this (as opposed to 4 drives in a mirrored config)?
  5. This is probably the biggest hurdle I face. How best can I migrate data from my existing array to a new ZFS pool in TrueNAS? The following should be considered:
  • I can only plug 1 more physical drive into my system before the Unraid array won’t start due to the drive count limitation.
  • All of my data can be moved to fit on the 4TB data drive in Unraid, freeing up the cache SSD, the 500GB drive, and the 2TB drive.
  • I don’t want all my data sitting on a single 4TB drive without backup/parity (especially the bad-sector drive).
  • I could possibly split up my data across backup drives, 1 at a time… 2.5TB of non-critical data on the 4TB bad SAS drive, and 1TB of critical data copied twice (to the 1TB spare drive and the currently used 2TB drive)… This is my current thought, but I’m hoping there may be better suggestions?

I apologise for such a long post… but I’ve got a few unknowns, and it’s a big change to make.

I look forward to your response!
Thank you :blush:

Edit:
The selected solution was marked just as a formality. In reality, every response has been extremely valuable given the range of questions above.

Hi!

Recently, quite a few people have lost their pools due to Proxmox changes (do a quick search for more detail). Honestly, I wouldn’t consider that a reliable option for my data, at least for now.
For sure, if you still want to try it that way, better to put TN on its own disk (32GB is more than enough for the purpose).

Moving on to your questions:

  1. Yes, you can spin down disks, and I don’t think it changes anything between SATA and SAS. Everything can be done via the GUI, per disk.
  2. The worst gap in your components is the lack of ECC support. Beyond that (leaving aside that these are consumer parts), the RAM amount is enough (and upgradable), the mainboard has an integrated Intel NIC, and the CPU has enough power to virtualise/run apps…
  3. First of all, ensure that all your disks are CMR. If you run apps and VMs on SSD/NVMe rather than spinning disks (or am I wrong?), you will obviously gain a lot in performance.
  4. Hard to answer that. In your place, with 4 disks, I would probably have chosen a pool of 2 mirrored vdevs; but if you lean more towards resiliency, RAIDZ2 is better.
  5. Going with a mirror pool would probably make things simpler there: you can create the pool with 2 disks, move data into it, check the data, then add the other 2 disks to it (and use those bad disks for extra backup). A rough sketch follows below.
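A minimal sketch of that mirror-first approach at the CLI (the pool name “tank” and the device paths are placeholders, and in TrueNAS you would normally do all of this through the GUI):

  # create the pool as a single 2-disk mirror
  zpool create tank mirror /dev/disk/by-id/DISK_A /dev/disk/by-id/DISK_B

  # ...copy the data over and verify it...

  # then grow the pool by adding a second 2-disk mirror vdev
  zpool add tank mirror /dev/disk/by-id/DISK_C /dev/disk/by-id/DISK_D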

Thanks @oxyde for the prompt and detailed reply!

I tried to search this but struggled to find specifics in recent cases (due to my poor searching no doubt).
It is a little worrying however.
Would it be safer to run TN on bare metal, then forget about my “single responsibility” goal and have TN host my Docker apps + VMs? I don’t really mind this approach, but I’m unsure of the VM performance vs Proxmox.
In terms of load, it would be Jellyfin, Nextcloud, the “arr suite”, a general use/test VM, and possibly a gaming VM (I can utilise a GTX 1070 Ti GPU, but I may drop this VM plan in favour of lower power consumption).

Regarding your remaining responses…

  1. Brilliant news :slight_smile:
  2. How important is ECC, and how strongly do you recommend it? I have read a lot of comments ranging from “it’s fine, it’s not required” to “don’t bother with ZFS if you don’t have ECC”.
  3. Drives are definitely all CMR; however, to date, I’ve only run VMs on spinning disks in Unraid. I believe the Docker apps are running on the SSD cache. I may add some more solid state storage to assist though. Thanks!
  4. The idea of creating a 2-drive mirror, moving data across, then expanding it when complete is quite enticing! Thanks for that idea. I may just read into the resilience differences between mirrored and Z2 to help inform my decision.

EE will be released in a few days; just wait a little, check if the new apps work for you, and see how the ecosystem develops.

Many people wait a little longer anyway to see if any major problems arise, even though the RCs were pretty stable already.

I’ll catch this one; this should link you to the other thread.

Totally your call. But you are already on a borderline setup; the risks outweigh the benefits IMHO.

My little 7th-gen i3 handles 2 jails (WireGuard, Nextcloud) and an Alpine VM with ~15 containers without problems. Excluding the gaming VM (if you are not talking about retro gaming), I don’t think you will have problems (but RAM can be a bit tight in this scenario if multiple instances run together).

I started exploring TN without it, but now I have it on both systems, so for me it’s a yes: better to have it.
The situation is exactly what you describe; it’s hard to find a definitive answer. There’s a recent thread about it, take a look if you want.

You can use 1 single striped disk for apps and VMs, and replicate its data to the storage pool. Or use a PCIe-to-NVMe adapter (the single-NVMe version!) to free up an extra SATA port (I have 2 xD).

(Btw, the bad disks can be imported/exported for local replication. OK, not the best backup solution, but still a free bonus backup.)
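A minimal sketch of that kind of local replication with snapshots (the dataset names “apps/vms” and “tank/backup/vms” are hypothetical; TrueNAS exposes the same mechanism as Replication Tasks in the GUI):

  # snapshot the app/VM dataset (recursively)
  zfs snapshot -r apps/vms@nightly

  # replicate it to the storage pool, or to an imported "bad disk" pool
  zfs send -R apps/vms@nightly | zfs recv -F tank/backup/vms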


Hi

I tried to spin down my SAS disks and it’s not working: the disks don’t spin down, and the power consumption doesn’t go down either. In my opinion, SAS drives (at least the ones I have, which came from an enterprise storage system) don’t have this feature (it doesn’t make sense to spin down disks in an enterprise environment).
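For what it’s worth, this can be tested by hand from a shell. A rough sketch (the device paths are placeholders; hdparm generally only talks to SATA drives, SAS drives need SCSI commands instead, and whether a given enterprise SAS drive honours them is firmware-dependent):

  # SATA: put the drive into standby now, then query its power state
  hdparm -y /dev/sda
  hdparm -C /dev/sda

  # SAS: issue a SCSI STOP UNIT instead
  sg_start --stop /dev/sdg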

@BlueBell_XA Nicholoas

Apologies for the lengthy response and for it being a bit of a brain dump.

  1. I agree with others - run TrueNAS on bare metal and use TN virtualisation if you want to run VMs.

  2. You can run Docker native on TrueNAS EE. No need to spin up a VM for this.

  3. I would personally use the following drives in your new setup: 2 new 4TB drives, your 2 existing 4TB SAS drives, 2x 500GB SSDs, PLUS 2 new M.2 128GB NVMe drives (the MB supports Optane, so think about these if you can). I would use these as follows (a rough ZFS sketch of this layout follows at the end of this list):

    • 4x 4TB drives as a RAIDZ1 (or RAIDZ2) HDD pool
    • 2x 500GB SSDs as a mirrored apps/VMs SSD pool
    • 1x NVMe as the boot drive
    • 1x NVMe as a SLOG for the SSD pool (because VMs do synchronous writes)

    I would probably connect all the SATA drives directly to the MB (which has SATA ports), and put only the SAS drives on the HBA. You will need to check that using the M.2 slots doesn’t disable SATA ports 4/5.

    If you want to keep using the other misc-sized drives for misc data, you can; they are not really suitable for redundant pools in TrueNAS, but they are great if you ever need a disk for temporary, less important data (experimental VMs, downloaded files which you can re-download if need be, etc.).

  4. 32GB memory to start with should be fine - keep an eye on your App/VM memory usage and your ARC hit rate to check when you need to add more memory.

  5. As you have already identified, you will need to think carefully about how you will migrate your data.

    You are probably running your HBA in RAID mode - you will need to reflash it to IT mode for TrueNAS / ZFS, and you will likely lose access to any existing data connected to the HBA when you do this.

    Ideally you will copy your data off the machine - either to another computer, or as two copies on separate unencrypted drives - preferably with a ZFS filesystem, but another filesystem like ext3 will do if need be (though for non-ZFS I think you may have to mount it manually to migrate the data back).

    I would imagine that if you set your HBA to see one of the new SAS drives as JBOD, and put a temporary ZFS file system on it, then there is a good chance that you will retain access after you switch the HBA to IT mode. But you cannot be sure of this, so put another copy of your data onto the smaller SATA drives just in case.

    Before installing TrueNAS, physically remove the drives holding your data so there is no chance of it being overwritten.
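In ZFS terms, the drive layout suggested above would look roughly like this (a sketch only: the pool names “tank” and “fast” and the device names are hypothetical, and TrueNAS builds all of this from the Storage UI rather than the CLI):

  # main HDD pool: 4x 4TB as RAIDZ2 (or RAIDZ1)
  zpool create tank raidz2 sda sdb sdc sdd

  # fast pool: mirrored SSDs for apps/VMs
  zpool create fast mirror sde sdf

  # add one NVMe as a SLOG to the SSD pool
  zpool add fast log nvme0n1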

I hope that this helps.


Sorry for hijacking this thread, but would that even be beneficial if the VMs are already running off SSDs? Also, for a SLOG I would assume you want something with very high endurance, like an Optane, and not just some random NVMe.

Thank you everyone for your valuable insights! I really appreciate the help and honesty.

@prez02
I’ll certainly be waiting for EE to reach stable release before taking any action at all.

@oxyde
Thank you greatly for the other thread references… I’ve had a read through those and along with the comments here, have concluded that:

  • I will not bother using Proxmox for my system.
  • I will likely build a system that uses ECC. I live in Australia and may need some assistance finding components that are cheap, fit my needs, and are power efficient… but that is beyond the scope of this post, and I may consult ServerBuilds.
  • I will either use my current components for a dedicated gaming PC, or sell them to recoup some money. Either way, I won’t bother with a gaming VM.

@MrTux
Thanks for this insight. In Unraid, it’s not too reliable but still sort of works. If that’s the case though, I will put more emphasis on overall power efficiency to offset it.
I’m aiming for <100W idle with all drives spun up.

@Protopia
Really appreciate the detailed response and recommended drive setup.
Firstly, my HBA is already flashed to IT mode, acting as a JBOD for my current Unraid system.
From my understanding then… I can add my dodgy 4TB drive in Unraid as ZFS and copy all my data to it. For my 1TB of critical data, I can just make a couple of extra copies on the mismatched drives as redundancy.
Then buy the extra 4TB drives and, when I build the TN system, create a new RAIDZ2 pool with four drives… could that 5th 4TB drive then be mounted to copy all the data into the pool?
Or would the ZFS drive with all my data just be absorbed into the pool? I may need to actually virtualise TN EE and see how this actually works to wrap my head around it.

  1. Even if you were using an SSD with the same performance for the SLOG, it might still give a small performance benefit, because you would be splitting the ZIL writes and the data writes onto two separate drives.

  2. But since an NVMe (or, even better, Optane) drive has far faster performance than an ordinary SATA SSD, you should (in theory at least) see significantly better disk write response times from the perspective of the workload.

  3. All ordinary SSDs have a Total Bytes Written lifetime because the NAND technology they are based on suffers from degradation of the memory circuits every time they are erased and rewritten. NVMe is an interface rather than a memory technology, so NVMe drives are generally NAND-based too; I have no idea how Optane drives differ. But NVMe and Optane drives are intended for SSD-style write usage (as opposed to e.g. USB flash drive usage), so I am unsure why they would have a worse lifetime than SATA NAND SSDs.
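If you want to see whether a SLOG is actually being used, a small sketch (the dataset and pool names “fast/vms” and “fast” are hypothetical, matching the layout suggested earlier):

  # force sync writes on the VM dataset so they go through the ZIL/SLOG
  zfs set sync=always fast/vms

  # watch per-vdev activity, including the log device, every 5 seconds
  zpool iostat -v fast 5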


I was actually suggesting that you junk the dodgy 4TB drive ASAP and temporarily use one of the new drives to hold your data.

You can then create a 3x 4TB RAIDZ1/2 with the remaining good 4TB drives, and after copying your data back onto that you can extend the vdev by adding the 4th 4TB drive back in.
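A minimal sketch of that sequence (pool and device names are hypothetical; RAIDZ expansion is the new OpenZFS feature that ships with EE, and TrueNAS exposes it in the Storage UI):

  # create a 3-wide RAIDZ1 from the good 4TB drives
  zpool create tank raidz1 sda sdb sdc

  # ...copy the data back into the pool and verify it...

  # then widen the RAIDZ vdev with the 4th drive
  zpool attach tank raidz1-0 sdd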

If it were me…

If I was running a mission critical business server, with a lot of active data, and where availability was crucial and downtime costs money, I would buy a brand new rack-mounted server that supports ECC. No question.

For a home server, where most data is at rest and much of it is media files where the very, very rare glitch is liveable…

  • If I already had hardware that didn’t support ECC I would use it and not splash out a whole lot extra to buy a new ECC-supporting MB/processor/memory.
  • If I was buying new kit and the cost of an ECC system was not much greater (and remember easily the biggest cost is the disks), I would definitely spend the extra.

Absolutely agree with you there.

I will look for components for a new build and compare the costs, the amount I could recoup by selling parts, and my risk tolerance before I land on a final decision.
My 1TB of critical data is irreplaceable family photos… I could keep multiple backups/snapshots of those on external sources. The rest of my data consists of movies/TV shows and can easily be replaced.

At the very least though, I’ve got all the answers now that I originally came for :slight_smile:

ECC essentially protects data in memory. A malicious cosmic ray would have to alter the data in memory in the second or two between it being sent to TrueNAS across the network and it actually being written to disk in order to corrupt the file on disk.

Of course, a cosmic ray can corrupt memory and cause data in ARC to be corrupted, but that is a temporary copy. Or it could corrupt a system area and TrueNAS could crash.

So ECC memory is a good thing for a permanently running server - no argument about that - but the risk profile for non-mission-critical home servers is pretty small IMO.

P.S. You can (apparently) get “bit-rot” in at-rest files stored on disk, hence ZFS’s checksums, its error correction from redundancy on read, and the recommendation to run regular scrubs to correct single errors long before random factors create a second error in the same block and make the file unreadable.
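A scrub is just one command run periodically; TrueNAS creates a default scheduled scrub task per pool (the pool name “tank” is hypothetical):

  # verify every block against its checksum, repairing from redundancy
  zpool scrub tank

  # check progress and any repaired or unrecoverable errors
  zpool status -v tank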


I could be wrong, but as far as I can recall, spinning down disks in TrueNAS is not recommended.


That’s more about 24/7 operation than TrueNAS specifically. Spinning up/down 30 times a day can cause more damage than never spinning down at all.

1 Like