TrueNAS SCALE server with two storage pools: one hosted in the server, the other on the backend array.
Ran out of space on the main pool (due to snapshots), so I used the “zfs-delete-snapshots -vR 1w Storage1” command to clear all snapshots older than 1 week.
The last snapshot would've been taken on Sunday.
It cleared a lot, and following a restart the dashboard showed that space had been freed.
However, my main windows-data pool that holds all the files is not listed in the dataset dropdown, and it's not accessible from the PC (so SMB is not working).
The files seem to be there on the server, however: when I go to /mnt/Storage1/Windows-data/ in the CLI, there are plenty of files.
So I'm running a cp -r to copy the Storage1 folder to Array1/bkp/ to ensure I keep a copy.
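Roughly the copy I'm running (paths assumed from my layout, and -a rather than plain -r so ownership and timestamps come across):

cp -a /mnt/Storage1 /mnt/Array1/bkp/    # lands as /mnt/Array1/bkp/Storage1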
Ok.
How do I add the dataset back? I've got the files; I just need the system to see the “Windows-data” folder on the drives as a dataset again.
Panicking a bit here, as I just spent 3 days resilvering a replacement drive in the main pool. I was happy that I didn't lose anything, but following the tidy-up I can't reference the files that are on the box…
Is that a new thing in SCALE? I’ve never seen such a command with TrueNAS Core or ZFS in general.
Is “windows-data” a pool or a dataset? Your next statement reveals it is a dataset.
Which dropdown are you referring to?
The dataset still exists (according to you), since you verified it at the command line.
What system needs to “see” “Windows-data”? The TrueNAS server itself? A client via SMB?
It honestly sounds like the dataset and its data still exist, since you confirmed this by listing the contents at the command line directly on the server.
So, is it only SMB that is not working or not listing the files?
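A quick way to settle it would be:

zfs list -r Storage1 -o name,mountpoint

If Storage1/Windows-data does not appear in that output, then “Windows-data” was only ever a directory (with an SMB share pointed at it), not a dataset.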
That command is a tool someone “handily” created; I found it on the old forum, as purging masses of snapshots was a pain.
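For anyone curious, a rough manual equivalent of what the script does (just a sketch assuming GNU date as shipped with SCALE; the actual script is rather more careful than this):

cutoff=$(date -d '1 week ago' +%s)
zfs list -Hp -t snapshot -o name,creation -r Storage1 \
  | awk -v c="$cutoff" '$2 < c {print $1}' \
  | xargs -r -n1 zfs destroy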
Looking at it, there’s a weird config issue with the shares.
Historically I wanted to share the Storage1 folder as “Windows-data”, but it wouldn't do that (can't share at the top level), so somehow I previously managed it with a “Windows-data” subfolder in Storage1 that seemed to be a referential link back to the folder it was in?
I've checked through a few folders and the sub-level version is missing files, but the parent-level “Storage1” seems to have everything in it.
Trying to re-plumb the SMB shares, I'll have to remove the parent-folder share that links to the top of the tree, and I guess flesh out the rest of the subfolders with their own shares, then make sure the PC share links are pointed correctly. I've also set up a mapped drive using iSCSI, so I might just do that instead, but of course I'll need to ensure I have space free for the task.
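For reference, the CLI equivalent of what I'm doing on the Shares page would be something like this (share name and path are from my layout; midclt just calls the same sharing.smb.create method the UI uses):

midclt call sharing.smb.create '{"path": "/mnt/Storage1/Windows-data", "name": "Windows-data"}'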
NAME      AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Storage1  7.25T  5.83T  51.4M     4.72T   0B             1.11T
So, looking at it, the data seems to be correct, I guess…
I'll check the shares for the PCs etc., then the apps for Plex etc…
[EINVAL] sharingsmb_create.path_local: SMB shares containing the apps dataset are not permitted.
That's what I get when trying to share any of the subfolders under SMB… I guess that's why I did the (can't remember how) weird link in the first place…
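If I'm reading that error right, it's because the apps dataset lives somewhere under the path being shared. A quick way to see where it actually sits (assuming the usual ix-applications / ix-apps naming SCALE uses for it):

zfs list -r Storage1 -o name | grep -i ix-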
I just gave the top row, as the rest is the breakdown of the folders etc…
There are a few child datasets, but mostly it's all under the single Storage1 dataset.
Yes, I need to split it up into more chunks/datasets/iSCSI drives etc.; I've just been struggling with drive capacity (until now) to be able to hold multiple copies of the data…
I might just do a quick shuffle of non-essential data and see…
I just need to work out the easiest way to move all the data to the new folders on the server. I guess I can just create new datasets, which will appear as folders under the Storage1 mount point, then mv the contents from the old folders to the new ones?
Or possibly just rename the old folders to match the new name?
Trying to think of the quickest and easiest approach that won't need me to move terabytes of data between the folders…
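Having thought about it, the rename idea won't save the copy: a dataset is its own filesystem, so an existing folder can't simply be renamed into one, and any mv across a dataset boundary copies the data and then deletes the source anyway. The rough sequence (names are just examples from my layout):

zfs create Storage1/upstream                           # new child dataset, mounts at /mnt/Storage1/upstream
mv /mnt/Storage1/uploads/* /mnt/Storage1/upstream/     # crosses filesystems, so this is really a copy then delete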
So, as I've got a bit of capacity free, I'm running an rsync job to copy everything to the new dataset. Then I'll test and set up ACL permissions etc., and if all is OK I can scrub the old version and do the same with the next folder.
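Roughly the rsync invocation (paths are examples from my layout; -A and -X try to carry ACLs and extended attributes across, and -x stops it wandering into other datasets):

rsync -avxHAX --progress /mnt/Storage1/uploads/ /mnt/Storage1/upstream/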
Still don't know how I managed to get the folder shared at the root level in the first place, mind you :).
Moved stuff with a few rsync jobs and backed up any changes to the backend array. Looks OK so far. Of course I had to make rsync replicas of all the data into new datasets (e.g. the uploads folder rsync'ed to the “upstream” dataset, which was then shared and is remotely accessible…).
Deleted some of the big folders and I'm running a scrub to check for issues. I guess I'll need to leave it for a while for the changes to show in the dashboard etc. It's still showing about 80% in use when it should be about 50% max…
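One thing I'll keep an eye on: if any snapshots still reference the folders I deleted, they'll keep that space pinned. The same breakdown as the table above shows it per dataset:

zfs list -o space -r Storage1          # USEDSNAP is the amount held by snapshots
zfs list -t snapshot -r Storage1       # anything still hanging around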
Managed to bag a JBOD expander on eBay for a song, with about 10TB of raw disk space. Going to set that up to replace the Heath-Robinson affront to the machine god I currently have set up (old PC case, mobo, PSU, SAS expander and 8 x 3.5-inch assorted drives).
The plan is to swap my main PC's 8TB SATA drive (for big file storage etc.) over to iSCSI from the server, then use the 8TB with the 6 in the old case, instead of the pile of 2s and 3s, as the occasional backup stack…
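On the iSCSI side the idea is to back the PC's drive with a zvol on the server and point an extent at it from the Shares page; something like the following (name and size are placeholders):

zfs create -s -V 8T Storage1/pc-storage    # sparse 8TB zvol to back the iSCSI extent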