How can I replace a part of the filename, for files of a certain type (.zip), with another string, recursively through all potentially nested subdirectories?
This is my filesystem structure:
dir/
├── subdir/
│   └── filename_strToReplace.zip
├── subdir/
│   ├── subdir/
│   │   └── filename_strToReplace.zip
│   └── filename_strToReplace.zip
└── filename_strToReplace.zip
So as you can see, files whose names need to be modified can be nested a few levels deep. I have some moderate terminal and shell experience, but no real scripting experience.
I believe the solution is a combination of mv, a regex (which I can use pretty decently), and a for loop.
For what it's worth, I am on a Mac, using the default Terminal (haven't messed with it) with Oh My Zsh.
Thanks!
Using the find and rename commands, you can achieve that:
find . -name '*strToReplace*' | xargs -I{} rename 's/strToReplace/replacement/' {}
find searches for all files whose names contain strToReplace.
Then rename uses a regex to rename those files.
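Note that rename here is the Perl-based rename (on a Mac it usually has to be installed first, e.g. via Homebrew). Assuming that's the variant you have, a dry run is a good idea before committing; a sketch that also avoids the whitespace pitfalls of piping into xargs:
find . -name '*strToReplace*' -exec rename -n 's/strToReplace/replacement/' {} +
The -n flag makes rename print what it would do without renaming anything; drop it once the output looks right.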
Use zmv:
autoload -Uz zmv
zmv -n '(dir/**/filename)_*.zip' '${1}_replacementStr.zip'
Remove the -n to actually perform the rename after verifying that the command will do what you want.
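If the string can appear anywhere in the filename rather than always after filename_, a sketch using zsh parameter substitution in the replacement should also work (strToReplace and replacementStr are the placeholders from the question):
autoload -Uz zmv
zmv -n 'dir/(**/)(*strToReplace*.zip)' 'dir/$1${2/strToReplace/replacementStr}'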
In bash you could achieve this using find + a custom function
#!/bin/bash
function namereplacer()
{
  for file in "$@"   # "$@" expands to all arguments passed in
  do
    mv "$file" "${file/%stringToReplace.zip/newstring.zip}"
  done
}
export -f namereplacer

find /base/path/ -depth -type f -name "*stringToReplace.zip" \
     -exec bash -c 'namereplacer "$@"' _ {} +
# The '-exec ... {} +' form batches filenames onto one command line; see the find man page
Note: replace /base/path with the path to your base folder.
I used rename, similar to sjsam's answer, to create a shell script. My use case was to remove the .bak extension from the end of the first filename that matched the *.tsx pattern:
dir=$1
extensionToChange=.bak
for file in $(find "$dir" -type f -name "*.tsx$extensionToChange"); do
  echo "$file"
  mv "$file" "${file/$extensionToChange/}"
  break  # only rename the first match
done
I had to grant execute permission on the script with chmod +x rename_first.sh.
Example execution: ./rename_first.sh ../UI/test/src
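If you wanted to strip the extension from every matching file rather than only the first, a find -exec sketch (same .bak/.tsx assumptions as above, and safe for filenames with spaces) could look like this:
find "$dir" -type f -name '*.tsx.bak' -exec sh -c 'for f in "$@"; do mv "$f" "${f%.bak}"; done' sh {} +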
How do I find and replace every occurrence of:
subdomainA.example.com
with
subdomainB.example.com
in every text file under the /home/www/ directory tree recursively?
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
-print0 tells find to print each of the results separated by a null character, rather than a new line. In the unlikely event that your directory has files with newlines in the names, this still lets xargs work on the correct filenames.
\( -type d -name .git -prune \) is an expression which completely skips over all directories named .git. You could easily expand it, if you use SVN or have other folders you want to preserve -- just match against more names. It's roughly equivalent to -not -path .git, but more efficient, because rather than checking every file in the directory, it skips it entirely. The -o after it is required because of how -prune actually works.
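For example, to also skip .svn directories, the prune expression might be extended like this:
find /home/www \( -type d \( -name .git -o -name .svn \) -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'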
For more information, see man find.
The simplest way for me is
grep -rl oldtext . | xargs sed -i 's/oldtext/newtext/g'
Note: Do not run this command on a folder including a git repo - changes to .git could corrupt your git index.
find /home/www/ -type f -exec \
sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
Compared to other answers here, this is simpler than most and uses sed instead of perl, which is what the original question asked for.
All the tricks are almost the same, but I like this one:
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} +
find <mydir>: look up in the directory.
-type f:
File is of type: regular file
-exec command {} +:
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending
each selected file name at the end; the total number of invocations of the command will be much less than the number of
matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of
`{}' is allowed within the command. The command is executed in the starting directory.
For me the easiest solution to remember is https://stackoverflow.com/a/2113224/565525, i.e.:
sed -i '' -e 's/subdomainA/subdomainB/g' $(find /home/www/ -type f)
NOTE: -i '' solves the OSX problem sed: 1: "...": invalid command code .
NOTE: If there are too many files to process, you'll get Argument list too long. The workaround: use the find -exec or xargs solutions described above.
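For example, on macOS a find -exec version of the same replacement would presumably look like:
find /home/www/ -type f -exec sed -i '' -e 's/subdomainA/subdomainB/g' {} +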
cd /home/www && find . -type f -print0 |
xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
For anyone using the Silver Searcher (ag):
ag SearchString -l0 | xargs -0 sed -i 's/SearchString/Replacement/g'
Since ag ignores git/hg/svn file/folders by default, this is safe to run inside a repository.
This one is compatible with git repositories, and a bit simpler:
Linux:
git grep -l 'original_text' | xargs sed -i 's/original_text/new_text/g'
Mac:
git grep -l 'original_text' | xargs sed -i '' -e 's/original_text/new_text/g'
(Thanks to http://blog.jasonmeridth.com/posts/use-git-grep-to-replace-strings-in-files-in-your-git-repository/)
To cut down on files to recursively sed through, you could grep for your string instance:
grep -rl <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
If you run man grep you'll notice you can also define an --exclude-dir="*.git" flag if you want to omit searching through .git directories, avoiding git index issues as others have politely pointed out.
Leading you to:
grep -rl --exclude-dir="*.git" <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
A straightforward method if you need to exclude directories (--exclude-dir=.folder) and might also have file names with spaces (solved by using a null byte for both, via grep -Z and xargs -0):
grep -rlZ oldtext . --exclude-dir=.folder | xargs -0 sed -i 's/oldtext/newtext/g'
One more nice one-liner as an extra, using git grep:
git grep -lz 'subdomainA.example.com' | xargs -0 perl -i'' -pE "s/subdomainA.example.com/subdomainB.example.com/g"
The simplest way to replace (all files, directories, recursively):
find . -type f -not -path '*/\.*' -exec sed -i 's/foo/bar/g' {} +
Note: sometimes you might need to ignore hidden files (i.e. .git); the command above does that.
If you want to include hidden files, use:
find . -type f -exec sed -i 's/foo/bar/g' {} +
In both cases the string foo will be replaced with the new string bar.
find /home/www/ -type f -exec perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
find /home/www/ -type f will list all files in /home/www/ (and its subdirectories).
The "-exec" flag tells find to run the following command on each file found.
perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
is the command run on the files (many at a time). The {} gets replaced by file names.
The + at the end of the command tells find to build one command for many filenames.
Per the find man page:
"The command line is built in much the same way that
xargs builds its command lines."
Thus it's possible to achieve your goal (and handle filenames containing spaces) without using xargs -0, or -print0.
I just needed this and was not happy with the speed of the available examples. So I came up with my own:
cd /var/www && ack-grep -l --print0 subdomainA.example.com | xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
Ack-grep is very efficient at finding relevant files. This command made replacements across ~145,000 files in a breeze, whereas others took so long I couldn't wait for them to finish.
or use the blazing fast GNU Parallel:
grep -rl oldtext . | parallel sed -i 's/oldtext/newtext/g' {}
grep -lr 'subdomainA.example.com' | while read file; do sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done
I guess most people don't know that they can pipe something into a "while read file" loop; it avoids those nasty -print0 args while preserving spaces in filenames.
Further adding an echo before the sed allows you to see what files will change before actually doing it.
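For instance, a preview pass over the same loop might look like this (nothing is modified; the sed commands are only printed):
grep -lr 'subdomainA.example.com' | while read file; do echo sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done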
Try this:
sed -i 's/subdomainA/subdomainB/g' `grep -ril 'subdomainA' *`
According to this blog post:
find . -type f | xargs perl -pi -e 's/oldtext/newtext/g;'
#!/usr/local/bin/bash -x

find /home/www -type f | while read -r files
do
  sedtest=$(sed -n '/^/,/$/p' "${files}" | sed -n '/subdomainA/p')
  if [ "${sedtest}" ]
  then
    sed 's/subdomainA/subdomainB/g' "${files}" > "${files}".tmp
    mv "${files}".tmp "${files}"
  fi
done
If you do not mind using vim together with grep or find tools, you could follow up the answer given by user Gert in this link --> How to do a text replacement in a big folder hierarchy?.
Here's the deal:
Recursively grep for the string that you want to replace in a certain path, and take only the complete paths of the matching files (that would be $(grep 'string' 'pathname' -Rl)).
(optional) If you want to make a pre-backup of those files in a centralized directory, maybe you can also use: cp -iv $(grep 'string' 'pathname' -Rl) 'centralized-directory-pathname'
After that you can edit/replace at will in vim, following a scheme similar to the one provided in the link given:
:bufdo %s#string#replacement#gc | update
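Putting the pieces together, a minimal end-to-end sketch ('string', 'replacement' and 'pathname' being placeholders) would be:
vim $(grep -Rl 'string' 'pathname')
:bufdo %s#string#replacement#gc | update
:qa
The first line opens every matching file as a buffer; the :bufdo line (typed inside vim) runs the confirmed substitution and saves each modified buffer, and :qa quits.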
You can use awk to solve this, as below:
for file in `find /home/www -type f`
do
  awk '{gsub(/subdomainA\.example\.com/,"subdomainB.example.com"); print $0;}' "$file" > ./tempFile && mv ./tempFile "$file";
done
Hope this helps!
To replace all occurrences in a git repository, you can use:
git ls-files -z | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
See List files in local git repo? for other options to list all files in a repository. The -z option tells git to separate the file names with a zero byte, which assures that xargs (with the option -0) can separate filenames, even if they contain spaces or whatnot.
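If you only want to touch a subset of the tracked files, git ls-files also accepts pathspecs; for example (the *.html pattern is just an illustration):
git ls-files -z -- '*.html' | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'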
A bit old school but this worked on OS X.
There are a few trickeries:
• It will only edit files with extension .sls under the current directory
• The dots (.) must be escaped to ensure sed does not evaluate them as "any character"
• A comma (,) is used as the sed delimiter instead of the usual /
Also note this is to edit a Jinja template to pass a variable in the path of an import (but this is off topic).
First, verify your sed command does what you want (this will only print the changes to stdout, it will not change the files):
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Edit the sed command as needed, once you are ready to make changes:
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed -i '' 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Note the -i '' in the sed command: I did not want to create a backup of the original files (as explained in In-place edits with sed on OS X, or in Robert Lujo's comment on this page).
Happy seding folks!
Just to avoid also changing
NearlysubdomainA.example.com
subdomainA.example.comp.other
while still changing
subdomainA.example.com.IsIt.good
(maybe not good, given the idea behind domain roots):
find /home/www/ -type f -exec sed -i 's/\bsubdomainA\.example\.com\b/subdomainB.example.com/g' {} \;
Here's a version that should be more general than most; it doesn't require find (using du instead), for instance. It does require xargs, which is only found in some versions of Plan 9 (like 9front).
du -a | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
If you want to add filters like file extensions use grep:
du -a | grep "\.scala$" | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
For Qshell (qsh) on IBMi, not bash as tagged by OP.
Limitations of qsh commands:
find does not have the -print0 option
xargs does not have -0 option
sed does not have -i option
Thus the solution in qsh:
BASE='your/path/here'   # avoid reusing PATH here, or command lookup will break
SEARCH='subdomainA.example.com'
REPLACE='subdomainB.example.com'
for file in $( find ${BASE} -P -type f ); do
  TEMP_FILE=${file}.${RANDOM}.temp_file
  if [ ! -e ${TEMP_FILE} ]; then
    touch -C 819 ${TEMP_FILE}
    sed -e "s/${SEARCH}/${REPLACE}/g" \
      < ${file} > ${TEMP_FILE}
    mv ${TEMP_FILE} ${file}
  fi
done
Caveats:
Solution excludes error handling
Not Bash as tagged by OP
If you wanted to use this without completely destroying your SVN repository, you can tell 'find' to ignore all hidden files by doing:
find . \( ! -regex '.*/\..*' \) -type f -print0 | xargs -0 sed -i 's/subdomainA.example.com/subdomainB.example.com/g'
Using a combination of grep and sed:
for pp in $(grep -Rl looking_for_string .)
do
sed -i 's/looking_for_string/something_other/g' "${pp}"
done
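If the matched filenames may contain spaces, a while-read variant of the same idea is safer:
grep -Rl looking_for_string . | while IFS= read -r pp; do sed -i 's/looking_for_string/something_other/g' "${pp}"; done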
perl -p -i -e 's/oldthing/new_thingy/g' `grep -ril oldthing *`
to change multiple files (and save a backup of each as *.bak):
perl -p -i.bak -e "s/\|/x/g" *
will take all files in the directory and replace | with x
called a "Perl pie" (easy as pie)
I am trying to remove files within a directory. Some of the files have double-quotes around their name while others do not. An example of these files would be:
"DDD344".csv
D2DW.csv
Both these files are located in sub-directories within the directory YM.
To find such files and remove them, I invoke find like so:
find YM -name "*.csv" -print | xargs rm
The above command results in a lot of No such file or directory errors.
I tried using sed in the following way:
find yum/yum_hyd -name "\"*\".csv" | sed 's/"/\"/g' | xargs rm
but to no avail. How do I remove the files?
The problem is that you're using xargs. xargs is a horribly broken program that should never be used for anything except in conjunction with the nonstandard -0 option. Even so, I can't think of any advantages to doing that in this case. You should just execute rm directly from find.
find . -type f -name '"*".csv' -exec rm -f -- {} +
Will work. If you have GNU find, you may also use -delete.
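With GNU find, that would simply be:
find . -type f -name '"*".csv' -delete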
Try this:
find yum/yum_hyd -name "\"*\".csv" |sed 's/"/\\"/g'|xargs rm
Explanation:
You want to replace " with \". But if you write \" directly, sed considers it a plain ", so you have to escape the backslash; hence \\" works.
I wasn't aware of this option until recently, but you can list the inode of each file in the following way:
$ ls -il
In the output you will see that the first column contains the inode value. You can then use that value with find -inum to locate the offending files and remove them.
Output
2616366 -rw-r--r-- 1 etc etc
$ find . -inum 2616366 -exec rm -f {} \;
This will remove the file with that specific inum.
As a test you can run the following to locate your files.
ls -il \"* | awk '{print $1}' | xargs -n1 -I {} find -inum {}
Replace the final portion of this command (the "find -inum {}") with the "rm" command once you are satisfied.
This is also similar to the question on SuperUser
In a directory, I have a bunch of *.html files. I'd like to rename them all to *.txt
How can I do that? I use the bash shell.
If using bash, there's no need for external commands like sed, basename, rename, expr, etc.
for file in *.html
do
mv "$file" "${file%.html}.txt"
done
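One caveat: if no .html files exist, the unexpanded pattern itself is passed to mv. In bash you can guard against that with nullglob, for example:
shopt -s nullglob
for file in *.html
do
  mv "$file" "${file%.html}.txt"
done
shopt -u nullglob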
For a better solution (using only bash functionality, as opposed to external calls), see one of the other answers.
The following would do and does not require the system to have the rename program (although you would most often have this on a system):
for file in *.html; do
mv "$file" "$(basename "$file" .html).txt"
done
EDIT: As pointed out in the comments, this does not work for filenames with spaces in them without proper quoting (now added above). When working purely on your own files that you know do not have spaces in the filenames this will work but whenever you write something that may be reused at a later time, do not skip proper quoting.
rename 's/\.html$/\.txt/' *.html
does exactly what you want.
This worked for me on OSX from .txt to .txt_bak
find . -name '*.txt' -exec sh -c 'mv "$0" "${0%.txt}.txt_bak"' {} \;
You want to use rename:
rename -S <old_extension> <new_extension> <files>
rename -S .html .txt *.html
This does exactly what you want - it will change the extension from .html to .txt for all files matching *.html.
Note: Greg Hewgill correctly points out this is not a bash builtin; and is a separate Linux command. If you just need something on Linux this should work fine; if you need something more cross-platform then take a look at one of the other answers.
On a Mac...
Install rename if you haven't: brew install rename
rename -S .html .txt *.html
For Ubuntu users:
rename 's/\.html$/\.txt/' *.html
This is the slickest solution I've found that works on OSX and Linux, and it works nicely with git too!
find . -name "*.js" -exec bash -c 'mv "$1" "${1%.js}".tsx' - '{}' \;
and with git:
find . -name "*.js" -exec bash -c 'git mv "$1" "${1%.js}".tsx' - '{}' \;
This question explicitly mentions Bash, but if you happen to have ZSH available it is pretty simple:
zmv '(*).*' '$1.txt'
If you get zsh: command not found: zmv then simply run:
autoload -U zmv
And then try again.
Thanks to this original article for the tip about zmv.
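Like the rename examples elsewhere on this page, zmv also supports a dry run via -n, which prints the renames without performing them:
zmv -n '(*).*' '$1.txt'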
Here is an example of the rename command:
rename -n 's/\.htm$/\.html/' *.htm
The -n means that it's a test run and will not actually change any files. It will show you a list of files that would be renamed if you removed the -n. In the case above, it will convert all files in the current directory from a file extension of .htm to .html.
If the output of the above test run looked ok then you could run the final version:
rename -v 's/\.htm$/\.html/' *.htm
The -v is optional, but it's a good idea to include it because it is the only record you will have of changes that were made by the rename command as shown in the sample output below:
$ rename -v 's/\.htm$/\.html/' *.htm
3.htm renamed as 3.html
4.htm renamed as 4.html
5.htm renamed as 5.html
The tricky part in the middle is a Perl substitution with regular expressions, highlighted below:
rename -v 's/\.htm$/\.html/' *.htm
One line, no loops:
ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
Example:
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.json ac8453e2-0d82-4d43-b80e-205edb754700.json
$ ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.txt ac8453e2-0d82-4d43-b80e-205edb754700.txt
The command mmv seems to do this task very efficiently on a huge number of files (tens of thousands in a second). For example, to rename all .xml files to .html files, use this:
mmv ";*.xml" "#1#2.html"
The ; will match the path, the * will match the filename, and these are referred to as #1 and #2 in the replacement name.
Answers based on exec or pipes were either too slow or failed on a very large number of files.
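Applied to the question here, the same pattern would presumably be:
mmv ';*.html' '#1#2.txt'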
On Linux, or on Windows with Git Bash or WSL, try the command below to change every matching file's extension in the current directory and all of its subdirectories with just one line of code:
find . -depth -name "*.html" -exec sh -c 'mv "$1" "${1%.html}.txt"' _ {} \;
Try this
rename .html .txt *.html
usage:
rename [find] [replace_with] [criteria]
After someone else's website crawl, I ended up with thousands of files missing the .html extension, across a wide tree of subdirectories.
To rename them all in one shot, except the files already having a .html extension (most of them had none at all), this worked for me:
cd wwwroot
find . -xtype f \! -iname '*.html' -exec mv -iv "{}" "{}.html" \; # batch rename files to append .html suffix IF MISSING
In the OP's case I might modify that slightly, to only rename *.txt files, like so:
find . -xtype f -iname '*.txt' -exec sh -c 'mv -iv "$1" "${1%.*}.html"' _ {} \;
Broken down (hammertime!):
-iname '*.txt'
- Means consider ONLY files already ending in .txt
sh -c 'mv -iv "$1" "${1%.*}.html"' _ {}
- find passes each filename as $1 to a small inline shell; ${1%.*} strips the extension, so the final command runs as: mv -iv "filename.txt" "filename.html"
Quoting "$1" also takes care of spaces in filenames.
This is a good way to modify multiple extensions at once:
for fname in *.{mp4,avi}
do
mv -v "$fname" "${fname%.???}.mkv"
done
Note: be careful that the extensions have the same length (the ??? pattern strips exactly three characters).
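A length-independent variant (stripping everything after the last dot instead of exactly three characters) would be:
for fname in *.{mp4,avi}
do
  mv -v "$fname" "${fname%.*}.mkv"
done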
Rename file extensions for all files under current directory and sub directories without any other packages (only use shell script):
Create a shell script rename.sh under current directory with the following code:
#!/bin/bash
for file in $(find . -name "*$1"); do
mv "$file" "${file%$1}$2"
done
Run it by ./rename.sh .old .new.
Eg. ./rename.sh .html .txt
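If the tree may contain filenames with spaces, a find -exec sketch with the same calling convention is safer (an illustrative variant, not the script above):
#!/bin/bash
old=$1
new=$2
find . -name "*$old" -exec sh -c '
  old=$1; new=$2; shift 2
  for f in "$@"; do
    mv "$f" "${f%"$old"}$new"
  done
' sh "$old" "$new" {} +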
A bit late to the party. You could do it with xargs:
ls *.html | xargs -I {} sh -c 'mv "$1" "$(basename "$1" .html).txt"' - {}
Or if all your files are in some folder
ls folder/*.html | xargs -I {} sh -c 'mv "$1" "folder/$(basename "$1" .html).txt"' - {}
Similarly to what was suggested before, this is how I did it:
find . -name '*OldText*' -exec sh -c 'mv "$0" "${0/OldText/NewText}"' {} \;
I first validated with
find . -name '*OldText*' -exec sh -c 'echo mv "$0" "${0/OldText/NewText}"' {} \;
Nice & simple!
find . -iname '*.html' -exec sh -c 'mv "$0" "${0%.html}.txt"' {} \;
If you prefer Perl, there is a short Perl script (originally written by Larry Wall, the creator of Perl) that will do exactly what you want here:
tips.webdesign10.com/files/rename.pl.txt.
For your example the following should do the trick:
rename.pl 's/\.html$/\.txt/' *.html
The easiest way is to use rename.ul; it is present in most Linux distros:
rename.ul -o -v [oldFileExtension] [newFileExtension] [expression to search for file to be applied with]
rename.ul -o -v .oldext .newext *.oldext
Options:
-o: don't overwrite preexisting .newext
-v: verbose
-n: dry run
Unfortunately it's not trivial to do portably. You probably need a bit of expr magic.
for file in *.html; do echo mv -- "$file" "$(expr "$file" : '\(.*\)\.html').txt"; done
Remove the echo once you're happy it does what you want.
Edit: basename is probably a little more readable for this particular case, although expr is more flexible in general.
Here is what I used to rename .edge files to .blade.php:
for file in *.edge; do mv "$file" "$(basename "$file" .edge).blade.php"; done
Works like a charm.
You can also make a function in Bash, add it to .bashrc or something and then use it wherever you want.
change-ext() {
for file in *.$1; do mv "$file" "$(basename "$file" .$1).$2"; done
}
Usage:
change-ext css scss
Source of code in function: https://stackoverflow.com/a/1224786/6732111
Here is a solution, using AWK. Make sure the files are present in the working directory. Else, cd to the directory where the html files are located and then execute the below command:
for i in $(ls | grep '\.html$'); do j=$(echo "$i" | grep -oh "^\w*\." | awk '{print $1"txt"}'); mv "$i" "$j"; done
I wrote this code in my .bashrc
alias find-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; find $p_path -type f -name "*."$p_ext1'
alias rename-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; read -p "Change by ext. (unpunctured): " p_ext2; echo -en "\nFound files:\n"; find $p_path -type f -name "*.$p_ext1"; find $p_path -type f -name "*.$p_ext1" -exec sh -c '\''mv "$1" "${1%.'\''$p_ext1'\''}.'\''$p_ext2'\''" '\'' _ {} \;; echo -en "\nChanged Files:\n"; find $p_path -type f -name "*.$p_ext2";'
In a folder like "/home/<user>/example-files" having this structure:
/home/<user>/example-files:
file1.txt
file2.txt
file3.pdf
file4.csv
The commands would behave like this:
~$ find-ext
Path (dot for current): example-files/
Ext (unpunctured): txt
example-files/file1.txt
example-files/file2.txt
~$ rename-ext
Path (dot for current): ./example-files
Ext (unpunctured): txt
Change by ext. (unpunctured): mp3
Found files:
./example-files/file1.txt
./example-files/file2.txt
Changed Files:
./example-files/file1.mp3
./example-files/file2.mp3
~$
You could use a tool designed for renaming files in bulk, e.g. renamer.
To rename all file extensions in the current folder:
$ renamer --find ".html" --replace ".txt" --dry-run *
Many more usage examples here.