I’m running into a limitation on TrueNAS where package-management tools are disabled, so I’m unable to install utilities like rename. Because of this, I’m restricted to whatever tools are already built into the system.
I need a command-line solution that can recursively rename all files and directories by converting any uppercase characters to lowercase, without making any other changes. Doing this manually would be impossible given the volume of data, so I’m hoping someone with more experience can point me in the right direction.
This is very important to me, and any help would be greatly appreciated.
Sounds easy enough to do with a shell script. You could also use Python if you have a more complex use case.
Here’s an example as a shell script. You can copy & paste it into a bash shell, then run rename_files_lowercase_recursive inside the folder where you want the renaming to occur.
For safety, the script will only print what it would do. I am leaving the removal of the safety as an exercise for the reader.
rename_files_lowercase() {
    echo "Folder: $PWD"
    for file in *; do
        RENAMED=$(printf '%s' "$file" | tr '[:upper:]' '[:lower:]')
        [ "$file" != "$RENAMED" ] && echo mv "$file" "$RENAMED"
    done
}
rename_files_lowercase_recursive() {
    rename_files_lowercase
    for file in *; do
        [ -d "$file" ] && (cd "$file" || exit 1; rename_files_lowercase_recursive)
    done
}
Also note that lowercasing is locale-sensitive. You’ll probably run into issues if your filenames contain characters that are encoded as multiple bytes.
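To make that limitation concrete: with the C locale pinned, tr translates single bytes, so multibyte UTF-8 characters pass through untouched. A quick sketch:

```shell
# tr operates on bytes, so only single-byte ASCII letters are
# lowercased; the two-byte UTF-8 character Ä is left unchanged.
# LC_ALL=C pins the character classes to plain ASCII so the result
# is deterministic across systems.
printf '%s\n' 'ÄBC.TXT' | LC_ALL=C tr '[:upper:]' '[:lower:]'
# -> Äbc.txt
```

If your filenames are plain ASCII this doesn’t matter; if they contain accented characters, those simply won’t be lowercased.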
I switched from zsh to bash and did exactly as instructed, but it doesn’t seem to apply recursively to all files and folders wherever I execute it. In each directory I ran it in, the renaming was only applied to that directory itself, not recursively, and recursion is really important for speed. Can that be helped?
There are two functions defined. The rename_files_lowercase function only applies to the current directory. Use the rename_files_lowercase_recursive function which will apply to the current directory and all subdirectories recursively.
I think you might have just picked the wrong function? If you did use rename_files_lowercase_recursive and it didn’t recurse into subdirectories then I don’t know why.
EDIT: In my initial testing it would appear that the script also works in zsh. I made sure to only use POSIX-compliant commands, so it should work outside of bash.
I’m running into strange anomalies now. The folders I ran this script in are now not showing any of my files in CLI mode, but they do appear via NFS shares. I tried rebooting and it’s still the same result.
/mnt/StoragePool/Backups/ This is where I ran it, and when I type ls in any of the subfolders here, it shows nothing in CLI mode, but in my GNOME file manager (Nautilus) I can still see my files. I just launched Midnight Commander and I can’t see the files there either.
To be sure, I checked other datasets in the CLI to see if they were affected, but they were fine.
How is that possible?
This is my CLI history
user@truenas ~ % history
241 cd /mnt/StoragePool/Backups/browsers/ && find . -depth -exec sh -c 'mv -- "$0" "$(dirname "$0")/$(basename "$0" | tr "[[:upper:]]" "[[:lower:]]")"' {} \;
245 find . -depth -exec sh -c 'old="$0"; new="$(dirname "$0")/$(basename "$0" | tr "[:upper:]" "[:lower:]")"; [ "$old" != "$new" ] && mv -- "$old" "$new"' {} \;
247 apt update
250 apt install
258 rename_files_lowercase() {\n echo "Folder: $PWD"\n for file in *; do\n RENAMED=$(printf '%s' "$file" | tr '[:upper:]' '[:lower:]')\n [ "$file" != "$RENAMED" ] && echo mv "$file" "$RENAMED"\n done\n}\n\nrename_files_lowercase_recursive() {\n rename_files_lowercase\n\n for file in *; do\n [ -d "$file" ] && (cd "$file" || exit 1; rename_files_lowercase_recursive)\n done\n}
260 cd
268 rename_files_lowercase_recursive
269 bash
277 ls -l /mnt/StoragePool/Backups/browsers\nls -l /mnt/StoragePool/Backups/browsers/browsers
278 ls -la /mnt/StoragePool/Backups/browsers
280 cd ..
282 cd Backups
286 cd browsers
287 ls
288 mc
289 clear
The rename_files_lowercase_recursive command you executed has no effect on any files or folders. It can be seen from your history output that all safeties are still on. The same cannot be said of your previous find .. -exec .. attempts; those ran without any safeties.
I don’t know what your file manager app shows. But it’s going to access the data via some kind of share; check in the share config what path is configured.
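For completeness, here is what the “safety off” version of the per-directory loop would look like, as a sketch: the only change is dropping the echo (plus a -- so a filename starting with a dash can’t be taken as an option). Test on scratch data first.

```shell
# Armed variant of rename_files_lowercase: identical to the dry-run
# version except the leading 'echo' is removed, so mv really runs.
# '--' stops mv from treating a leading '-' in a filename as an option.
rename_files_lowercase_armed() {
    for file in *; do
        RENAMED=$(printf '%s' "$file" | tr '[:upper:]' '[:lower:]')
        [ "$file" != "$RENAMED" ] && mv -- "$file" "$RENAMED"
    done
}

# quick demonstration in a throwaway directory
demo=$(mktemp -d) && cd "$demo"
touch README.TXT notes.txt
rename_files_lowercase_armed
ls    # -> notes.txt readme.txt
```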
My bad, I spoke too soon. I rolled back to a snapshot from yesterday midnight and it’s the same thing.
I’m not sure what this new issue I just came across is all about, but it’s preventing me from seeing files in this specific dataset via CLI but not via NFS, and I’m not sure why.
TrueNAS and my desktop both use the same UID:GID, so it shouldn’t be permissions. I do however have “mapuser to root” and “mapgroup to wheel” set in an NFS share for this dataset. Would that be causing this? Otherwise, I’m clueless.
I tried that and got the same result. The immediate dataset /Backups/ has about 13 folders inside it that sort all my stuff, and the CLI will show those, but as soon as I go into those folders, everything is completely empty. I’ve been using Linux for many years and this is a first.
It is a newer dataset I made not long ago and copied files from a different dataset via NFS. I haven’t done much else in it. This is very strange.
/Backups/ is the dataset and everything inside is just normal folders with their files.
That command does show folders inside the dataset, but that first layer of folders was always visible. It’s the folders and files one step further beneath those that are not visible in the CLI.
For example, when you do a ls /mnt/StoragePool/Backups/browsers you are listing the contents of the root folder that is stored inside the StoragePool/Backups/browsers dataset. Which is empty, hence you get an empty output.
Currently all your /mnt/StoragePool/Backups files live solely in your StoragePool/Backups dataset and not in any of the child datasets you created.
Your NFS setup doesn’t cross filesystem boundaries. That means when you “explore” the Backups/browsers folder via NFS you are actually viewing the contents of the browsers folder inside the StoragePool/Backups dataset and not the intended StoragePool/Backups/browsers dataset.
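You can actually see this boundary from the shell: POSIX df reports which mounted filesystem (dataset) a given path lives on. A small helper, as a sketch (the /mnt/StoragePool paths below are the ones from this thread, shown for illustration):

```shell
# Print the mount point of the filesystem that contains a path.
# With df -P (portable one-line-per-filesystem format), the 6th
# field of the 2nd output line is the mount point.
dataset_of() {
    df -P "$1" | awk 'NR==2 {print $6}'
}

dataset_of /    # -> / on any system
# On the box above, dataset_of /mnt/StoragePool/Backups/browsers would
# print /mnt/StoragePool/Backups/browsers if 'browsers' is a mounted
# child dataset, but /mnt/StoragePool/Backups if it is just a plain
# folder inside the parent dataset.
```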
For now, your easiest option is to CAREFULLY remove the empty datasets. After that you can at least see the files/folders again inside the shell.
If you wish to distribute the files into individual datasets you need to migrate them from one dataset to the other. After that, you also need to fix your NFS setup. As a best practice, only ever share leaf datasets. Leaf datasets are datasets which have no children. That avoids a lot of issues related to crossing dataset boundaries.
Oh wow, this is all starting to make a bit more sense.
So basically what I did was: I used to have my /Backups/ folder inside my /StoragePool/Personal/ dataset, as /StoragePool/Personal/Backups/. Then I decided I wanted to organize a bit more, so I moved the /Backups/ folder to its own dataset, which is the current /mnt/StoragePool/Backups/. Then I decided that since this new dataset is used for important data, maybe I should make the services’ backup folders into child datasets for snapshot purposes, and so I did.
How did I get all my files from the /Backups/ folder into the new /Backups/ dataset? I cut and pasted them on Fedora Linux through the mounted NFS shares.
Are you saying the screw-up is how I copied the files? Is the NFS transfer the issue here? If so, what was the proper way? A replication task?
Considering all this, are you still recommending I solve it the way you mentioned? Or does this change anything?
It is sort of a quirk of NFS. But in general sharing datasets that have children has its quirks. The manual might be able to explain it better than me:
NFS treats each dataset as its own file system. When creating the NFS share on the server, the specified dataset is the location that the client accesses. If you choose a parent dataset as the NFS file share location, the client cannot access any nested or child datasets beneath the parent.
If you need to create shares that include child datasets, SMB sharing is an option. Note that Windows NFS Client versions currently support only NFSv2 and NFSv3.
I wouldn’t call it a “screw-up”, but I suppose the proper way would have been to create 13 NFS shares - one for each child dataset. But that is probably pretty inconvenient, depending on your use case. SMB might work better for this use case. If you had used SMB or the 13 individual shares then the copy would have put the files in the correct location.
For now, you have the following options (as far as I see):
You delete the empty datasets and only keep the one big StoragePool/Backups dataset. This requires no change on the NFS part. This is pretty quick and easy to do, but it does mean you have no granularity when it comes to snapshot purposes.
You distribute all files from the StoragePool/Backups dataset into the child datasets. After that, StoragePool/Backups itself is going to be mostly empty and all data is stored in child datasets. For sharing, you need to remove the existing NFS share and create one share for each child dataset (13x). There is also the option of SMB, which can technically work with a single share (although I think one share per dataset might be a cleaner setup).
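For option 2, the migration itself has to happen locally on the TrueNAS shell, not through the NFS mount (which is what hid the files in the first place). A minimal sketch using throwaway directories; on the real pool the source would be a folder inside StoragePool/Backups and the destination the matching child dataset’s mount point:

```shell
# Move a folder's contents across a filesystem (dataset) boundary:
# copy preserving attributes, then remove the source. Temporary
# directories stand in for the two datasets here.
src=$(mktemp -d)   # stands in for a folder inside the parent dataset
dst=$(mktemp -d)   # stands in for the child dataset's mount point
printf 'data' > "$src/bookmarks.html"

cp -Rp "$src/." "$dst/"   # -R recursive, -p preserve modes and times
rm -rf "$src"

ls "$dst"    # -> bookmarks.html
```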
Wow, I learned something new today. I went with option 1 to keep things simple, but now I understand how it works. If it wasn’t for you, I might’ve accidentally nuked my data and never known why. Deleting the child datasets immediately allowed me to see the actual data in the Backups dataset.
Unfortunately, though, the renaming command still doesn’t seem to work recursively. I also tried using the root user to execute it, but it didn’t make a difference. I’m not sure why it renamed all the immediate folders before, but recursive renaming just doesn’t seem to happen. I tried both zsh and bash. Anyway, I really appreciate your help a lot.
There are many failure modes and edge cases for this seemingly simple process, but as long as all your files and directories are just that and are sanely named and ordered[1], something much like this should work:
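A sketch of such a command, assembled from the pieces described below (shown here against a throwaway directory so the dry run is visible; point find at your real path instead):

```shell
# Dry run: prints the mv that would be performed for every file and
# directory, deepest entries first. Replace "$demo" with your target.
demo=$(mktemp -d) && mkdir -p "$demo/MyDir" && touch "$demo/MyDir/File.TXT"

find "$demo" -mount -depth | while read file; do
    echo mv -v -n "$file" "$(dirname "$file")/$(basename "$file" | tr '[:upper:]' '[:lower:]')"
done
```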
This will show you how each file and directory will be renamed. Remove the echo to perform all the moves.
This will produce harmless errors for files which are already all lower case which won’t be renamed, for directories which are already all lower case which won’t be moved into themselves, and files which already exist as the lower case version (due to noclobber).
find /path/to/directory -mount -depth will list all files and directories before their parents (depth-first), so that each can be renamed while its upper- or mixed-case parent directory still has its original name, and -mount keeps it from crossing any filesystem boundaries
while read file … ; do … ; done performs the action on each file output by find
mv -v -n a b renames a file or directory a to b or moves file a into directory b unless b already exists, and tells you exactly what changed
quoting "$file" does its best to protect anything in the filename from the shell and the commands operating on it, particularly whitespace, which we would otherwise handle with -print0 and xargs -0 if we weren’t KISS
dirname "$file" is the parent directory and basename "$file" is the file or directory in it which we are renaming
tr '[:upper:]' '[:lower:]' is and does what you think it does because tr is great
Take a snapshot before running the real version, test on a small patch of fabric, and always mount a scratch monkey.
Please follow up with how this didn’t work and what you did to fix it.
[1] If you have named pipes and device special files with circular symlinks and filenames containing ASCII nulls and bells and emoji, then it might still work and I still get paid the same.