Delete unwanted dataset

Hello all

During the rebuild of my backup server, I screwed up when creating an rsync task.

The rsync task created a dataset on Server B (my backup server) called 'Tv Shows'.

This is not a dataset that I can import, nor can I create an SMB share for it.

In the shell of Server B, when I go to /mnt/media, I see all my datasets, including 'Tv Shows' (as written).

I tried to run rm /mnt/media/'Tv Shows' -R
and I get: rm: descend into write-protected directory /mnt/media/'Tv Shows'?

If I answer y, it starts listing files, and I have to answer y for each one:
it will ask me to remove the directory? Y
it will then ask to remove a file? Y
then it tells me the file name /.~tmp
and finally tells me access denied.

How do I permanently delete /mnt/media/'Tv Shows' and all its contents?
I just want to delete this whole 'Tv Shows' dataset.

Thank you

First, I would recommend deleting the dataset from within the web GUI and seeing if that works. The hamburger menu on the right side of the dataset will take you through the steps to remove it.

Second, if you are using the rm command, try using -f with -R; that should do it. Just make sure there aren't any files open and no CIFS/SMB shares are attached to it.
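For example, something like this (note the quotes, since the path contains a space):

# force-remove the directory and everything beneath it
rm -R -f '/mnt/media/Tv Shows'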

@ThePhantom

Thank you for your prompt reply.

This dataset does not show up in the CLI or GUI at all.

I have now run rm /mnt/media/'Tv Shows' -f -R
and it did delete many files.

However, many subfolders contain files named .~tmp~, and I got permission denied for all of these files.

I believe these files are generated by rsync while the rsync task runs.

How can I change the permissions for this dataset, and therefore for all the .~tmp~ files?

No SMB share, or for that matter any other share, is applied to this dataset.

Thank you for your assistance

The Rsync Task created a dataset? Is this a new feature in SCALE or something?


What do you mean by a “dataset that I can import”?


Datasets or folders?


Why is your inclination to immediately run a destructive, recursive rm command? Especially while you're still diagnosing the underlying issue?


It sounds like you’re dealing with a directory (“folder”), not a dataset.

If you truly want to “delete” it, you can do so as the root user or via sudo. Again, be extra careful that you know what you're doing, and proceed with caution whenever using the rm command. Watch out for whitespace in paths, and don't accidentally hit Enter before you've reviewed exactly what you typed. (And for the love of all that is holy, make sure you don't put a space after a forward slash by mistake; otherwise, you'll delete everything under /mnt or /mnt/media.)
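To illustrate the whitespace pitfall (using the path from this thread; double-check before pressing Enter):

# DANGEROUS: the stray space after the slash splits the path into two
# arguments, one of which is /mnt/media/ itself!
#   rm -R -f /mnt/media/ 'Tv Shows'

# Safe: quote the entire path as a single argument
sudo rm -R -f '/mnt/media/Tv Shows'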

2 Likes

With a ZFS target, you should be passing the --inplace option in rsync.
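For example, a typical invocation might look like this (hypothetical source and destination):

# archive mode; --inplace writes changes directly into existing
# destination files instead of creating temp files and renaming them
rsync -a --inplace /mnt/media/ serverB:/mnt/media/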

Thank you all for replying

Sorry, I do not know what this sentence means.

I did an su root and ran the command line above, and it worked. Thank you for your patience, help, and teachings.

In the “Auxiliary Parameters”, add this option: --inplace

This is important for CoW-based filesystems, such as ZFS.

@winnielinnie

thank you for your immediate response.

Sorry to be very pedantic, but what does --inplace do?

Any files that have been modified on the source are modified “in place” on the destination. (No “temp” files are created and then renamed after the write operation completes.) It simply writes into the existing file, in place.

ZFS is “copy-on-write” (CoW), which means that if you do not do an “in place” modification to the file, every small modification will consume the entire file’s size in a snapshot, for example.

If it’s a 4 GiB “inbox mail file”, but only 1 MiB was modified or added? Doesn’t matter. Rsync is dealing with two distinct files that are 4 GiB in size, which means that if you had created a ZFS snapshot, it will consume 4 GiB for this minor 1 MiB modification! (And that’s only considering that one file!)

But if you use --inplace, then only 1 MiB is being written/modified, and only 1 MiB will be reflected in the snapshot’s usage.
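You can observe this yourself by taking a snapshot before a sync and then checking how much space it pins afterwards (hypothetical pool/dataset name):

# snapshot the destination dataset before the rsync run
zfs snapshot tank/media@pre-rsync
# ... run the rsync task ...
# 'used' shows how much unique space the snapshot now holds
zfs list -t snapshot -o name,used tank/media@pre-rsync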


EDIT: For non-CoW filesystems, --inplace isn’t as important. They’re not “block-based” or “copy-on-write”, so from a file-based perspective, not using --inplace has no real effects on space consumption or efficiency. (Other than perhaps network usage.)

1 Like

@winnielinnie

Wow, no wonder my rsync tasks were taking so long.

Thanks to your suggestion, I can survive until Tuesday, when I get new drives to extend the vdev on Server B (it was running low on space, hence I'm adding another 8 TB).

Thank you very much for this teaching.

--inplace won’t necessarily speed up your Rsync Tasks.

The files on the destination must still be read by the remote server. So it depends on what the main bottleneck is. (Network? VPN? Internet speed? Metadata not in the ARC? Drive speeds?)


The greatest boost to Rsync Tasks is to keep metadata in the ARC as long as possible, and for it to be prioritized over data.

What versions of TrueNAS are the source and destination?

@winnielinnie

Thank you for this explanation.

I have already added --inplace to all my rsync tasks (tomorrow is backup day).

Yes, until November my network will be the bottleneck.

My drives are all HGST 7200 RPM SAS (Server A is a Dell R720; Server B is a Supermicro X10).

I do have an issue with the connection to my Supermicro; it's not as solid as I'd like. Therefore, I'm thinking of adding a 4-port network card (2x 2.5 GbE and 2x 10 GbE) to both servers. This should improve my situation.

I am running Tailscale, which is involved in the rsync task.
As for metadata not in the ARC: I have no clue about this, and IMHO I do not think I would benefit from it.

I have a very simple environment. I just store my media and my documents.

I built this just to learn something and keep me busy during my retirement.

Once again thank you for all the help

Have a great weekend and you all be well

Anything that crawls or traverses a long list of files and folders will absolutely benefit from keeping the metadata in ARC. This is especially true for rsync, and even more so on both ends (both servers).

Not only will it increase performance, but it will require less reading from storage media.

See this for TrueNAS Core 13.0 and earlier. (It also goes into the importance of keeping metadata in the ARC, which is relevant for any version of ZFS, even into the future.)

See this for TrueNAS SCALE and TrueNAS Core 13.3+. [1]
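As a quick check, you can see how much of the ARC is currently holding metadata with something like:

# summarize the current ARC contents, filtering for metadata lines
arc_summary | grep -i meta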

I won’t go off on a tangent and ask why you’re using Rsync instead of ZFS replications. :wink:


  1. For what it’s worth, on TrueNAS Core 13.3, I’m using a value of 2000 for this parameter, and have been quite happy. (The default is 500.) ↩︎

@winnielinnie

Hi thank you for this information.

Both servers are running 24.04.2.2

Server A has 64 GB of RAM allocated and a ZFS cache of 55.1 GB. Server B is a bare-metal server with 24 GB of RAM, of which 18 GB are allocated to the ZFS cache.

I tried running the rsync task with the --inplace attribute. However, when I run the task, I get the following error:
[EFAULT] rsync command returned 1 - SYNTAX. Check logs for further information.

If I delete the --inplace attribute and run the task, the task is successful.

Watching the Lawrence Systems video on the ARC, it says that as of 24.04 the ARC is no longer an issue???

Thank you for your assistance.

Ah, for the GUI, you might have to add parameters without the double-dash.

So instead of --inplace, you would add inplace.

EDIT: Or type it in manually, just in case copying+pasting is inserting the incorrect (though visually similar) characters.

EDIT 2: You can check the logs to see the exact error message, which might clue you in.

That goes to this point:

There’s still a parameter you can adjust, which is defaulted to 500. You can always increase it in the future if you want to further prioritize metadata in the ARC. But if you’re happy, no need to change it. :slightly_smiling_face:
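For reference, the parameter here appears to be zfs_arc_meta_balance (OpenZFS 2.2+); a minimal sketch of checking and raising it on SCALE:

# show the current metadata-vs-data balance (default: 500)
cat /sys/module/zfs/parameters/zfs_arc_meta_balance

# favor metadata more strongly; note this does not persist across reboots
echo 2000 > /sys/module/zfs/parameters/zfs_arc_meta_balance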

@winnielinnie

Thank you

I tried inplace. No joy.

Without this parameter, the rsync tasks work.

I will keep them as they are, and hopefully I will not foobar this solution in the future by constantly tinkering with it.

Many thanks to all for all your help, patience and teachings

Be well, everyone, and have a great weekend.

That’s weird. Not sure why it won’t let you use a common option.

1 Like