Shell command (on Mac OSX El Capitan) to recursively rename all my DOCX files - macos

I am trying to build a shell command (on Mac OSX El Capitan) to recursively rename all my DOCX files to have extension ZZZZ and then to immediately rename them back again to the DOCX extension. This is a workaround to hopefully fix a problem as follows:
I am doing this to try to get around a Mac Spotlight bug where it doesn't search content inside Mac Word 2011 files correctly. It gives intermittent results and seems to miss a lot of hits (this issue seems to have been well known for a few years on the Apple Mac forums). Renaming a file seems to kick-start Spotlight into action.
The macOS shell doesn't ship with a rename command like Linux does, so I am trying to use the "mv" command iteratively. I've had partial success with the following code but don't know how to tie it together...
cd ~/Documents/TESTING/
# FINDS MY DOCX'S RECURSIVELY IN TOP-LEVEL FOLDER AND ITS SUBFOLDERS. NOT SURE OF SYNTAX TO USE FOR "MV" COMMAND TO RENAME DOCX FILES
# find . -wholename '*.docx' -type f -exec mv UNSURE1HERE UNSURE2HERE \;
# WORKS BUT ONLY IN TOP-LEVEL FOLDER - I NEED IT TO WORK RECURSIVELY ON DOCX'S IN TOP-LEVEL FOLDER AND ITS SUBFOLDERS:
# for files in *.docx; do mv "$files" "${files%.docx}.zzzz"; done

You can do this using process substitution:
#!/bin/bash
cd ~/Documents/TESTING/
while IFS= read -r -d '' file; do
echo mv -- "$file" "${file%.*}.zzzz"
done < <(find . -iname '*.docx' -type f -print0)
If you're satisfied with the output, remove the echo before mv.
From .zzzz to .docx:
while IFS= read -r -d '' file; do
echo mv -- "$file" "${file%.*}.docx"
done < <(find . -iname '*.zzzz' -type f -print0)
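If the goal is only to kick Spotlight into re-indexing, the two passes can be combined so that each file is renamed away and immediately renamed back in a single loop. A minimal sketch, assuming the rename-and-rename-back round trip is enough to trigger Spotlight (as the question suggests):
#!/bin/bash
cd ~/Documents/TESTING/ || exit 1
while IFS= read -r -d '' file; do
tmp="${file%.*}.zzzz"   # temporary placeholder extension
mv -- "$file" "$tmp" && mv -- "$tmp" "$file"   # rename away, then straight back
done < <(find . -iname '*.docx' -type f -print0)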

Related

search and rename files

I've found a bad file-naming habit among the users of the QNAP Linux NAS system I'm administering.
We have a Mac OS network, and some folders and files are named using the "/" character, which I know causes problems for the Linux file system.
As a matter of fact, from the Mac OS side the files containing "/" simply disappear, since QTS automatically replaces all instances of "/" with ":".
I'd like to search for those filenames and rename them, replacing ":" with "_".
Searching the web I've found that I can connect to the Linux NAS over SSH from my Mac with Terminal and run a script like this:
for f in $(find /share/Public/demofind -name "*:*"); do mv $f ${f/:/_}; done
assuming that the files are in the demofind folder.
I launched the script, but got this error:
[/share/Public] # for f in $(find /share/Public/demofind/ -name "*:*"); do mv $f ${f/:/_}; done
mv: unable to rename `/share/Public/demofind/Redazionali': No such file or directory
mv: unable to rename `01:03:19.pdf': No such file or directory
mv: unable to rename `/share/Public/demofind/Redazionali': No such file or directory
mv: unable to rename `06:09:19.pdf': No such file or directory
By the way, the files to rename have a syntax like this: "Redazionali 06:09:19.pdf"
The NAS seems to be running BusyBox v1.01 (2021.12.12-04:24+0000) so I would need a solution which is compatible with this platform.
The immediate problem is that you have broken quoting, but your code has other problems too.
The robust solution would be:
find /share/Public/demofind -name "*:*" -depth -exec bash -c '
for f; do mv "$f" "${f//:/_}"; done' _ {} +
Putting the for loop inside find -exec works around pesky problems with file names that contain unusual characters, not just spaces (though your for loop would also break on plain spaces; you would have to separately tell the shell not to perform whitespace tokenization if you used the output of find in a command substitution, but again, the proper fix is simply not to do that).
For details, please review https://mywiki.wooledge.org/BashFAQ/020
The -depth option to find says to traverse the directory tree depth-first; this is important when renaming, so that you don't end up renaming directories before you rename the files within them, which then leads to errors when the file you want to rename no longer exists where it was originally found.
The parameter expansion ${f//:/_} is a Bash feature; if you don't have Bash, you have to use an external tool (or a really hairy function around the more pedestrian parameter expansion facilities of POSIX sh; but let's not go there).
find /share/Public/demofind -name "*:*" -depth -exec sh -c '
for f; do mv "$f" "$(echo "$f" | tr : _)"; done' _ {} +
If your platform doesn't even support find -exec you can manually loop over the output from find but again, see the FAQ link above for a number of caveats.
find /share/Public/demofind -name "*:*" -depth -print |
while IFS='' read -r f; do
mv "$f" "$(echo "$f" | tr : _)"
done
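As an aside, note the doubled slash in the replacement: the original loop's ${f/:/_} replaces only the first colon, while ${f//:/_} replaces them all, which matters for names like "Redazionali 06:09:19.pdf". A quick interactive check:
$ f='Redazionali 06:09:19.pdf'
$ echo "${f/:/_}"
Redazionali 06_09:19.pdf
$ echo "${f//:/_}"
Redazionali 06_09_19.pdf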

Recursively Rename Files and Directories with Bash on macOS

I'm writing a script that will perform some actions, and one of those actions is to find all occurrences of a string in both file names and directory names, and replace it with another string.
I have this so far
find . -name "*foo*" -type f -depth | while read file; do
newpath=${file//foo/bar}
mv "$file" "$newpath"
done
This works fine as long as the path to the file doesn't also contain foo, but that isn't guaranteed.
I feel like the way to approach this is to ONLY change the file names first, then go back through and change the directory names, but even then, if you have a structure that has more than one directory with foo in it, it will not work properly.
Is there a way to do this with built in macOS tools? (I say built-in, because this script is going to be distributed to some other folks in our organization and it can't rely on any packages to be installed).
Separate the path_name from the file_name, with something like this:
#!/usr/bin/env bash
while read -r file; do
path_name="${file%/*}"; printf 'Path is %s\n' "$path_name"
file_name="${file#"$path_name"}"; printf 'Filename is %s\n' "$file_name"
newpath="$path_name${file_name//foo/bar}"
echo mv -v "$file" "$newpath"
done < <(find . -name "*foo*" -type f)
Have a look at basename and dirname as well.
The printf calls are just there to show which part is the path and which is the filename.
The script just replaces foo with bar in the file_name. It can be done with the path_name as well; just use the same syntax.
newpath="${path_name//bar/more}${file_name//foo/bar}"
That renames both path_name and file_name.
Renaming the path_name first and then the file_name, as in your idea, is also an option:
path_name="${file%/*}"
file_name="${file#"$path_name"}"
new_pathname="${path_name//bar/more}"
mv -v "$path_name" "$new_pathname"
new_filename="${file_name//foo/bar}"
mv -v "${new_pathname%/*}$file_name" "$new_pathname$new_filename"
No additional external tools or utilities are used beyond the ones your script already uses.
Remove the echo if you're satisfied with the result/output.
You can use -execdir to run a command on just the filename (basename) in the relevant directory:
find . -depth -name '*foo*' -execdir bash -c 'mv -- "${1}" "${1//foo/bar}"' _ {} \;
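Because there is no -type f filter and -depth renames children before their parent directories, this single command covers both files and directories. To preview the changes first, the usual echo trick works here too; a sketch:
find . -depth -name '*foo*' -execdir bash -c 'echo mv -- "${1}" "${1//foo/bar}"' _ {} \;
Remove the echo once the printed commands look right.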

Recursively copy/backup all .php files to .php.bak files and keep them in their current paths

I am not sure how to word this question to find the solution easily online, so after much searching I thought I would ask here.
I access my website's files using the Bitvise SSH client, and I use the command line for various grep and sed tasks that I've recently been taught, but I can't seem to find a simple way to do this:
What is the command line to make a backup copy (.bak) of EVERY file that ends in .php? I am looking for the command to instantly make a backup of every php file at once, so when I go into my files I see things like...
index.php
index.php.bak
For every php file.
Also, what is the command line to do this for EVERY file at once, regardless of extension?
Would be awesome to see a solution that uses xargs or find's -exec.
But here is how you can do this with a shell loop and find.
Note: this recursively backs up files in subdirectories.
For .php files:
find . -iname '*.php' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files:
find . -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
For all files that have an extension:
find . -iname '*.*' -type f -print0 | while IFS= read -r -d '' file; do cp "$file" "$file.bak"; done
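Since the question also asked for a find -exec flavour, here is a minimal sketch of the same .php backup using -exec; the + form batches filenames, and a small in-line sh loop does the copying:
find . -iname '*.php' -type f -exec sh -c 'for f; do cp -- "$f" "$f.bak"; done' _ {} +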
You can just use the cp command
Enter the directory that holds all the .php files, then type
cp *.php temp/ where temp is a directory inside the current directory. The * means all files matching the pattern.
If you just want to copy the whole folder you could use
cp -R foldername destinationArea

Bash scripting, loop through files in folder fails

I'm looping through certain files (all files starting with MOVIE) in a folder with this bash script code:
for i in MY-FOLDER/MOVIE*
do
which works fine when there are files in the folder. But when there aren't any, it somehow goes on with one file which it thinks is named MY-FOLDER/MOVIE*.
How can I prevent it from executing the statements after
do
if there aren't any matching files in the folder?
With the nullglob option.
$ shopt -s nullglob
$ for i in zzz* ; do echo "$i" ; done
$
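Applied to the loop from the question, a minimal sketch looks like this:
shopt -s nullglob   # with no matches, the pattern expands to nothing and the loop body never runs
for i in MY-FOLDER/MOVIE*
do
echo "$i"
done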
for i in $(find MY-FOLDER -name 'MOVIE*' -type f); do
echo "$i"
done
The find utility is one of the Swiss Army knives of Linux. It starts at the directory you give it and finds all files in all subdirectories, according to the options you give it.
-type f will find only regular files (not directories).
As I wrote it, the command will find files in subdirectories as well; you can prevent that by adding -maxdepth 1
Edit, 8 years later (thanks for the comment, @tadman!)
You can avoid the loop altogether with
find . -type f -exec echo "{}" \;
This tells find to echo the name of each file by substituting its name for {}. The escaped semicolon is necessary to terminate the command that's passed to -exec.
for file in MY-FOLDER/MOVIE*
do
# Skip if not a file
test -f "$file" || continue
# Now you know it's a file.
...
done

How do I rename the extension for a bunch of files?

In a directory, I have a bunch of *.html files. I'd like to rename them all to *.txt
How can I do that? I use the bash shell.
If using bash, there's no need for external commands like sed, basename, rename, expr, etc.
for file in *.html
do
mv "$file" "${file%.html}.txt"
done
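If the files are spread across subdirectories as well, the same loop works recursively with Bash's globstar option; a sketch, assuming Bash 4+ (the stock macOS /bin/bash is 3.2 and lacks globstar):
shopt -s globstar nullglob   # ** matches across directories; nullglob avoids a literal pattern on no match
for file in **/*.html
do
mv "$file" "${file%.html}.txt"
done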
The following would also do it and does not require the system to have the rename program (although you would most often have it on a system):
for file in *.html; do
mv "$file" "$(basename "$file" .html).txt"
done
EDIT: As pointed out in the comments, this does not work for filenames with spaces in them without proper quoting (now added above). When working purely on your own files that you know have no spaces in their names this will work, but whenever you write something that may be reused later, do not skip proper quoting.
For a better solution (using only Bash functionality, as opposed to external calls), see one of the other answers.
rename 's/\.html$/\.txt/' *.html
does exactly what you want.
This worked for me on OSX from .txt to .txt_bak
find . -name '*.txt' -exec sh -c 'mv "$0" "${0%.txt}.txt_bak"' {} \;
You want to use rename:
rename -S <old_extension> <new_extension> <files>
rename -S .html .txt *.html
This does exactly what you want - it will change the extension from .html to .txt for all files matching *.html.
Note: Greg Hewgill correctly points out this is not a bash builtin; it is a separate Linux command. If you just need something on Linux this should work fine; if you need something more cross-platform then take a look at one of the other answers.
On a Mac...
Install rename if you haven't: brew install rename
rename -S .html .txt *.html
For Ubuntu users:
rename 's/\.html$/\.txt/' *.html
This is the slickest solution I've found that works on OSX and Linux, and it works nicely with git too!
find . -name "*.js" -exec bash -c 'mv "$1" "${1%.js}".tsx' - '{}' \;
and with git:
find . -name "*.js" -exec bash -c 'git mv "$1" "${1%.js}".tsx' - '{}' \;
This question explicitly mentions Bash, but if you happen to have ZSH available it is pretty simple:
zmv '(*).*' '$1.txt'
If you get zsh: command not found: zmv then simply run:
autoload -U zmv
And then try again.
Thanks to this original article for the tip about zmv.
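To touch only the .html files rather than everything containing a dot, the pattern can be narrowed; a sketch:
zmv '(*).html' '$1.txt'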
Here is an example of the rename command:
rename -n 's/\.htm$/\.html/' *.htm
The -n means that it's a test run and will not actually change any files. It will show you a list of files that would be renamed if you removed the -n. In the case above, it will convert all files in the current directory from a file extension of .htm to .html.
If the output of the above test run looked ok then you could run the final version:
rename -v 's/\.htm$/\.html/' *.htm
The -v is optional, but it's a good idea to include it because it is the only record you will have of changes that were made by the rename command as shown in the sample output below:
$ rename -v 's/\.htm$/\.html/' *.htm
3.htm renamed as 3.html
4.htm renamed as 4.html
5.htm renamed as 5.html
The tricky part in the middle is a Perl substitution with regular expressions, shown below:
rename -v 's/\.htm$/\.html/' *.htm
One line, no loops:
ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
Example:
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.json ac8453e2-0d82-4d43-b80e-205edb754700.json
$ ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.txt ac8453e2-0d82-4d43-b80e-205edb754700.txt
The command mmv seems to do this task very efficiently on a huge number of files (tens of thousands in a second). For example, to rename all .xml files to .html files, use this:
mmv ";*.xml" "#1#2.html"
The ";" will match the path, the "*" will match the filename, and these are referred to as #1 and #2 in the replacement name.
Answers based on exec or pipes were either too slow or failed on a very large number of files.
On Linux, Windows Git Bash, or Windows WSL, try the command below to change every matching file's extension in the current directory and all of its subdirectories with just one line of code:
find . -depth -name "*.html" -exec sh -c 'mv "$1" "${1%.html}.txt"' _ {} \;
Try this
rename .html .txt *.html
usage:
rename [find] [replace_with] [criteria]
After someone else's website crawl, I ended up with thousands of files missing the .html extension, across a wide tree of subdirectories.
To rename them all in one shot, except the files already having a .html extension (most of them had none at all), this worked for me:
cd wwwroot
find . -xtype f \! -iname '*.html' -exec mv -iv "{}" "{}.html" \; # batch rename files to append .html suffix IF MISSING
In the OP's case I might modify that slightly, to only rename *.txt files, like so:
find . -xtype f -iname '*.txt' -exec sh -c 'mv -iv "$1" "${1%.*}.html"' _ {} \;
Broken down (hammertime!):
-iname '*.txt'
- Means consider ONLY files already ending in .txt
sh -c 'mv -iv "$1" "${1%.*}.html"' _ {}
- When find passes a filename as {}, it becomes $1 inside the small sh script; ${1%.*} strips the extension, so the final command runs as: mv -iv "filename.txt" "filename.html"
Because "$1" is quoted inside the sh script, filenames containing spaces are handled as well.
This is a good way to modify multiple extensions at once:
for fname in *.{mp4,avi}
do
mv -v "$fname" "${fname%.???}.mkv"
done
Note: be careful that the extensions are all the same length, since ??? matches exactly three characters.
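If the extensions are not all the same length, stripping back to the last dot avoids the fixed-width ??? pattern; a sketch:
shopt -s nullglob   # avoid a literal *.mpeg etc. when a pattern has no matches
for fname in *.{mp4,avi,mpeg}
do
mv -v "$fname" "${fname%.*}.mkv"   # strip whatever extension is there, regardless of length
done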
Rename file extensions for all files under current directory and sub directories without any other packages (only use shell script):
Create a shell script rename.sh under current directory with the following code:
#!/bin/bash
while IFS= read -r -d '' file; do
mv "$file" "${file%"$1"}$2"
done < <(find . -type f -name "*$1" -print0)
Run it by ./rename.sh .old .new.
Eg. ./rename.sh .html .txt
A bit late to the party. You could do it with xargs:
ls *.html | xargs -I {} sh -c 'mv "$1" "$(basename "$1" .html).txt"' - {}
Or if all your files are in some folder
ls folder/*.html | xargs -I {} sh -c 'mv "$1" "folder/$(basename "$1" .html).txt"' - {}
Similarly to what was suggested before, this is how I did it:
find . -name '*OldText*' -exec sh -c 'mv "$0" "${0/OldText/NewText}"' {} \;
I first validated with
find . -name '*OldText*' -exec sh -c 'echo mv "$0" "${0/OldText/NewText}"' {} \;
Nice & simple!
find . -iname '*.html' -execdir sh -c 'mv "$1" "$(basename "$1" .html).txt"' _ {} \;
If you prefer Perl, there is a short Perl script (originally written by Larry Wall, the creator of Perl) that will do exactly what you want here:
tips.webdesign10.com/files/rename.pl.txt.
For your example the following should do the trick:
rename.pl 's/\.html$/.txt/' *.html
The easiest way is to use rename.ul; it is present in most Linux distros:
rename.ul -o -v [oldFileExtension] [newFileExtension] [expression to search for file to be applied with]
rename.ul -o -v .oldext .newext *.oldext
Options:
-o: don't overwrite preexisting .newext
-v: verbose
-n: dry run
Unfortunately it's not trivial to do portably. You probably need a bit of expr magic.
for file in *.html; do echo mv -- "$file" "$(expr "$file" : '\(.*\)\.html').txt"; done
Remove the echo once you're happy it does what you want.
Edit: basename is probably a little more readable for this particular case, although expr is more flexible in general.
Here is what I used to rename .edge files to .blade.php:
for file in *.edge; do mv "$file" "$(basename "$file" .edge).blade.php"; done
Works like a charm.
You can also make a function in Bash, add it to .bashrc or something and then use it wherever you want.
change-ext() {
for file in *.$1; do mv "$file" "$(basename "$file" .$1).$2"; done
}
Usage:
change-ext css scss
Source of code in function: https://stackoverflow.com/a/1224786/6732111
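A recursive variant of the same idea, using the find/read pattern from the earlier answers (the function name change-ext-r is made up here for illustration):
change-ext-r() {   # hypothetical recursive counterpart of change-ext
local old=$1 new=$2
while IFS= read -r -d '' f; do
mv -- "$f" "${f%."$old"}.$new"
done < <(find . -type f -name "*.$old" -print0)
}
Usage: change-ext-r css scss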
Here is a solution using AWK. Make sure the files are present in the working directory; otherwise, cd to the directory where the HTML files are located and then execute the command below:
for i in $(ls | grep '\.html$'); do j=$(echo "$i" | grep -oh "^\w*\." | awk '{print $1"txt"}'); mv "$i" "$j"; done
I wrote this code in my .bashrc
alias find-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; find $p_path -type f -name "*."$p_ext1'
alias rename-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; read -p "Change by ext. (unpunctured): " p_ext2; echo -en "\nFound files:\n"; find $p_path -type f -name "*.$p_ext1"; find $p_path -type f -name "*.$p_ext1" -exec sh -c '\''mv "$1" "${1%.'\''$p_ext1'\''}.'\''$p_ext2'\''" '\'' _ {} \;; echo -en "\nChanged Files:\n"; find $p_path -type f -name "*.$p_ext2";'
In a folder like "/home/<user>/example-files" having this structure:
/home/<user>/example-files:
file1.txt
file2.txt
file3.pdf
file4.csv
The commands would behave like this:
~$ find-ext
Path (dot for current): example-files/
Ext (unpunctured): txt
example-files/file1.txt
example-files/file2.txt
~$ rename-ext
Path (dot for current): ./example-files
Ext (unpunctured): txt
Change by ext. (unpunctured): mp3
Found files:
./example-files/file1.txt
./example-files/file2.txt
Changed Files:
./example-files/file1.mp3
./example-files/file2.mp3
~$
You could use a tool designed for renaming files in bulk, e.g. renamer.
To rename all file extensions in the current folder:
$ renamer --find ".html" --replace ".txt" --dry-run *
Many more usage examples here.
