I have seen some posts on this topic, but they don’t seem to answer my question or give quite the detail I need. I am new to TrueNAS SCALE and use it only at home for domestic purposes, so I don’t store critical business data and my needs are simple.
When I first set it up I had two physical 6TB disks and I chose to set up a mirror (IIRC I didn’t have a choice with only two disks). I have now got a third 6TB disk and would like to use it to expand the space available. I think this means I need to change from a mirror to RAIDZ1. I have seen that RAIDZ1 isn’t recommended for disks greater than 1TB, but given that this is for non-business use I suspect that this isn’t really an issue in my case (or is it?).
I am wondering what the best process is to convert to RAIDZ1 without losing data and without losing my configuration of datasets etc? I am happy to take the risk of being vulnerable to a disk failure during the conversion process, again because of the nature of the data.
Given all this, are the steps as follows?
Remove a disk from the mirrored pool.
Create a new pool from the three disks?
If so, then I am not sure about the second step in the sense that I don’t know how to make sure the data is preserved and the configuration of datasets etc is preserved. Should I just wipe the lot and start again? I would prefer not to if I can avoid this.
Business or not, how happy would you be if you lost the data? (Family pictures/movies, etc…)
Then your process is:
Remove a disk from the mirror (or split it).
Create a 3-wide raidz1 with the new disk, the disk you have removed and a sparse file. (CLI required. Here be dragons!)
Remove the sparse file, to end up with a degraded raidz1.
Replicate your data from the single-drive mirror to the degraded raidz1. A recursive replication from the root dataset will preserve your configuration.
Export the old pool, and add its drive to the new raidz1 pool to replace the missing sparse file; let the pool resilver.
Optionally, rename the new pool with the old name so you don’t have to adjust the path to your shares.
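For reference, the steps above can be sketched in CLI commands. This is a hedged outline, not a tested recipe: the pool names (`tank` for the old mirror, `newtank` for the raidz1) and the device names (`sda`, `sdb`, `sdc`) are placeholders you must replace with your actual values, and TrueNAS normally references disks by partuuid rather than by name, so check `zpool status` output carefully before each step.

```shell
# Hypothetical names throughout: old pool "tank" on sda+sdb (mirror),
# new disk sdc. Adjust everything to your system. Here be dragons!

# 1. Detach one disk from the mirror, leaving a single-drive pool:
zpool detach tank sdb

# 2. Create a sparse file the same size as the real disks (6 TB here);
#    sparse means it takes no space until written to:
truncate -s 6T /root/sparse.img

# 3. Create the 3-wide raidz1 from the new disk, the detached disk,
#    and the sparse file:
zpool create newtank raidz1 sdb sdc /root/sparse.img

# 4. Immediately offline and delete the sparse file, leaving a
#    degraded (but working) raidz1:
zpool offline newtank /root/sparse.img
rm /root/sparse.img

# 5. Snapshot the whole old pool recursively and replicate it,
#    preserving datasets and their properties:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank

# 6. Export the old pool, then use its remaining disk to replace the
#    missing sparse-file vdev; the pool resilvers onto it:
zpool export tank
zpool replace newtank /root/sparse.img sda

# 7. Optionally rename the new pool to the old name:
zpool export newtank
zpool import newtank tank
```

Until step 6 completes and the resilver finishes, both the source and destination are single points of failure, which is the risk acknowledged above.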
Very simple: you can’t preserve data in place. Drives are erased the moment they are added to a pool or used to create one. Hence the replication, and the multi-step process above.
Obviously, it would be easier and safer if you had more drives and could replicate without intermediate steps with degraded pools and/or end up with a raidz2.
Thanks, I will study your recommendations carefully. The data I have on the NAS is mostly just backups of other data I already have elsewhere. I could trash it and start again, but I would like to see if it is possible without doing this and perhaps learn some new stuff along the way.
This worked remarkably well in my case, going from 2x 2TB SSDs to a 3x 2TB SSD Z1. With about 1TB of data, even the replication and resilver time could be counted in minutes.
It did end up confusing the middleware, though. The GUI displays all the disks that are in the new raidz1 as unused. Probably from the CLI pool creation and/or CLI pool renaming.
I think you probably used the wrong disk names. You can solve this by exporting the pool, importing it on the CLI using partuuids (that’s TrueNAS’s default, I think), exporting it there again, and then importing it via the GUI.
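That export/re-import cycle might look something like the following. The pool name `newtank` is a placeholder, and this assumes a Linux-based TrueNAS SCALE system where partition UUIDs are exposed under `/dev/disk/by-partuuid`:

```shell
# Export the pool from the CLI first:
zpool export newtank

# Re-import it, telling ZFS to reference the member devices by
# partition UUID instead of by device name (sda, sdb, ...):
zpool import -d /dev/disk/by-partuuid newtank

# Verify the vdevs now show partuuids:
zpool status newtank

# Export once more, then do the final import from the TrueNAS web UI
# (Storage -> Import Pool) so the middleware registers the pool:
zpool export newtank
```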
I also think you meant TB instead of GB and yes, with SSDs this will be extremely fast - with large HDDs it will take a long time.
Correct, I’m showing my age thinking about GB as the normal unit of disk size
I did go through a cycle of export/import in the GUI hoping that would clear things up. The pool creation and such on the CLI I did indeed do using device names; that was easier for me to track as a human. It sounds logical that switching from names to UUIDs on the CLI would sync with the logic of the middleware. I’ll investigate a bit and try that out.
In the end I failed to change the pool disk references to UUIDs. The closest I got was using the by-id references, which is better than names but does not match the expectation of the GUI. After recreating the pool and restoring a snapshot from a different set of drives, all is well again.