Changing file content using sed in bash [duplicate]
How do I find and replace every occurrence of:
subdomainA.example.com
with
subdomainB.example.com
in every text file under the /home/www/ directory tree recursively?
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
-print0 tells find to print each of the results separated by a null character, rather than a new line. In the unlikely event that your directory has files with newlines in the names, this still lets xargs work on the correct filenames.
\( -type d -name .git -prune \) is an expression which completely skips over all directories named .git. You could easily expand it if you use SVN or have other folders you want to preserve -- just match against more names, as shown in the sketch below. It's roughly equivalent to -not -path '*/.git/*', but more efficient, because rather than checking every file in the directory, it skips it entirely. The -o after it is required because of how -prune actually works.
For more information, see man find.
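For example, if you also wanted to skip .svn directories, a sketch of the expanded prune expression might look like this (the directory names are just illustrative):
find /home/www \( -type d \( -name .git -o -name .svn \) -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'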
The simplest way for me is
grep -rl oldtext . | xargs sed -i 's/oldtext/newtext/g'
Note: Do not run this command on a folder including a git repo - changes to .git could corrupt your git index.
find /home/www/ -type f -exec \
sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
Compared to other answers here, this is simpler than most and uses sed instead of perl, which is what the original question asked for.
All the tricks are almost the same, but I like this one:
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} +
find <mydir>: search in the given directory.
-type f: file is of type: regular file.
-exec command {} +: This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory.
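To make the difference concrete, here is a sketch comparing the two -exec terminators (placeholders as above):
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} \;   # one sed process per file
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} +    # few sed processes, many files per invocation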
For me the easiest solution to remember is https://stackoverflow.com/a/2113224/565525, i.e.:
sed -i '' -e 's/subdomainA/subdomainB/g' $(find /home/www/ -type f)
NOTE: -i '' solves the OS X error sed: 1: "...": invalid command code .
NOTE: If there are too many files to process you'll get Argument list too long. The workaround: use the find -exec or xargs solutions described above.
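For instance, a sketch of the same replacement rewritten with find -exec, so the shell never has to expand the whole file list (BSD/macOS sed syntax, hence the -i ''):
find /home/www/ -type f -exec sed -i '' -e 's/subdomainA/subdomainB/g' {} +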
cd /home/www && find . -type f -print0 |
xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
For anyone using The Silver Searcher (ag):
ag SearchString -l0 | xargs -0 sed -i 's/SearchString/Replacement/g'
Since ag ignores git/hg/svn file/folders by default, this is safe to run inside a repository.
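If you only want to touch certain file types, ag's -G option (a regex applied to file names) should narrow the candidate list; a sketch, where the extension is just an example:
ag SearchString -l0 -G '\.html$' | xargs -0 sed -i 's/SearchString/Replacement/g'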
This one is compatible with git repositories, and a bit simpler:
Linux:
git grep -l 'original_text' | xargs sed -i 's/original_text/new_text/g'
Mac:
git grep -l 'original_text' | xargs sed -i '' -e 's/original_text/new_text/g'
(Thanks to http://blog.jasonmeridth.com/posts/use-git-grep-to-replace-strings-in-files-in-your-git-repository/)
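Because the files are tracked by git, you can also review or undo the whole operation afterwards, e.g.:
git diff            # inspect every substitution before committing
git checkout -- .   # throw all of them away if something went wrong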
To cut down on files to recursively sed through, you could grep for your string instance:
grep -rl <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
If you run man grep you'll notice you can also pass an --exclude-dir="*.git" flag if you want to omit searching through .git directories, avoiding the git index issues that others have politely pointed out.
Leading you to:
grep -rl --exclude-dir="*.git" <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
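The ^ delimiter used above also sidesteps escaping when the strings themselves contain slashes; a sketch with hypothetical path values:
grep -rl '/var/old' /path/to/folder | xargs sed -i 's^/var/old^/srv/new^g'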
A straightforward method if you need to exclude directories (--exclude-dir=.folder) and might also have file names with spaces (solved by using a NUL byte as separator, via grep -Z and xargs -0):
grep -rlZ oldtext . --exclude-dir=.folder | xargs -0 sed -i 's/oldtext/newtext/g'
One more nice one-liner as an extra, using git grep:
git grep -lz 'subdomainA.example.com' | xargs -0 perl -i'' -pE "s/subdomainA.example.com/subdomainB.example.com/g"
Simplest way to replace (all files, all directories, recursively):
find . -type f -not -path '*/\.*' -exec sed -i 's/foo/bar/g' {} +
Note: sometimes you need to ignore hidden files and directories, i.e. .git; for that, use the command above.
If you want to include hidden files, use:
find . -type f -exec sed -i 's/foo/bar/g' {} +
In both cases the string foo will be replaced with the new string bar.
find /home/www/ -type f -exec perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
find /home/www/ -type f will list all files in /home/www/ (and its subdirectories).
The "-exec" flag tells find to run the following command on each file found.
perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
is the command run on the files (many at a time). The {} gets replaced by file names.
The + at the end of the command tells find to build one command for many filenames.
Per the find man page:
"The command line is built in much the same way that
xargs builds its command lines."
Thus it's possible to achieve your goal (and handle filenames containing spaces) without using xargs -0, or -print0.
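As a quick check that {} + really copes with spaces, this sketch creates a throwaway file with a space in its name and rewrites it (the paths are illustrative):
mkdir -p /tmp/demo
echo 'subdomainA.example.com' > '/tmp/demo/a file.txt'
find /tmp/demo -type f -exec perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
cat '/tmp/demo/a file.txt'   # prints subdomainB.example.com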
I just needed this and was not happy with the speed of the available examples. So I came up with my own:
cd /var/www && ack-grep -l --print0 subdomainA.example.com | xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
Ack-grep is very efficient at finding relevant files. This command handled ~145,000 files in a breeze, whereas others took so long I couldn't wait for them to finish.
or use the blazing fast GNU Parallel:
grep -rl oldtext . | parallel sed -i 's/oldtext/newtext/g' {}
grep -lr 'subdomainA.example.com' | while read file; do sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done
I guess most people don't know that they can pipe something into a "while read file" loop; it avoids those nasty -print0 arguments while preserving spaces in filenames.
Further, adding an echo before the sed allows you to see which files will change before actually doing it.
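Concretely, a dry run of that loop might look like this; the echo prints each sed command instead of executing it:
grep -lr 'subdomainA.example.com' | while read file; do echo sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done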
Try this:
sed -i 's/subdomainA/subdomainB/g' `grep -ril 'subdomainA' *`
According to this blog post:
find . -type f | xargs perl -pi -e 's/oldtext/newtext/g;'
#!/usr/local/bin/bash -x
find /home/www -type f | while read -r files
do
# only rewrite files that actually contain the string
sedtest=$(sed -n '/subdomainA/p' "${files}")
if [ "${sedtest}" ]
then
sed 's/subdomainA/subdomainB/g' "${files}" > "${files}".tmp
mv "${files}".tmp "${files}"
fi
done
If you do not mind using vim together with grep or find tools, you could follow the answer given by user Gert in this link --> How to do a text replacement in a big folder hierarchy?.
Here's the deal:
recursively grep for the string that you want to replace in a certain path, and take only the complete paths of the matching files (that would be the $(grep 'string' 'pathname' -Rl) part), then open those files in vim.
(optional) if you want to make a pre-backup of those files on centralized directory maybe you can use this also: cp -iv $(grep 'string' 'pathname' -Rl) 'centralized-directory-pathname'
after that you can edit/replace at will in vim following a scheme similar to the one provided on the link given:
:bufdo %s#string#replacement#gc | update
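Putting the whole scheme together for the original question's strings, a sketch might be:
vim $(grep 'subdomainA.example.com' /home/www -Rl)
:bufdo %s#subdomainA.example.com#subdomainB.example.com#gc | update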
You can use awk to solve this as below:
for file in $(find /home/www -type f)
do
awk '{gsub(/subdomainA\.example\.com/,"subdomainB.example.com"); print $0;}' "$file" > ./tempFile && mv ./tempFile "$file";
done
(Note that the for loop will split file names containing whitespace.)
Hope this helps!
To replace all occurrences in a git repository you can use:
git ls-files -z | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
See List files in local git repo? for other options to list all files in a repository. The -z option tells git to separate the file names with a zero byte, which assures that xargs (with the option -0) can separate filenames, even if they contain spaces or whatnot.
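git ls-files also accepts pathspecs, so restricting the replacement to part of the repository should be as simple as this sketch (the glob is just an example):
git ls-files -z -- '*.html' | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'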
A bit old school but this worked on OS X.
There are a few tricks:
• Will only edit files with extension .sls under the current directory
• Each . must be escaped to ensure sed does not evaluate it as "any character"
• , is used as the sed delimiter instead of the usual /
Also note this is to edit a Jinja template to pass a variable in the path of an import (but this is off topic).
First, verify your sed command does what you want (this will only print the changes to stdout, it will not change the files):
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Edit the sed command as needed, once you are ready to make changes:
for file in $(find . -name '*.sls' -type f); do echo -e "\n$file: "; sed -i '' 's,foo\.bar,foo/bar/\"+baz+\"/,g' "$file"; done
Note the -i '' in the sed command, I did not want to create a backup of the original files (as explained in In-place edits with sed on OS X or in Robert Lujo's comment in this page).
Happy seding folks!
Just to avoid also changing
NearlysubdomainA.example.com
subdomainA.example.comp.other
but still changing
subdomainA.example.com.IsIt.good
(maybe not good in the idea behind the domain root), anchor the pattern with word boundaries:
find /home/www/ -type f -exec sed -i 's/\bsubdomainA\.example\.com\b/subdomainB.example.com/g' {} \;
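You can sanity-check the boundary behaviour without touching any files; a sketch (GNU sed assumed for \b):
printf '%s\n' 'NearlysubdomainA.example.com' 'subdomainA.example.com.IsIt.good' | sed 's/\bsubdomainA\.example\.com\b/subdomainB.example.com/g'
# only the second line is rewritten, to subdomainB.example.com.IsIt.good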
Here's a version that should be more general than most; it doesn't require find (using du instead), for instance. It does require xargs, which is only found in some versions of Plan 9 (like 9front).
du -a | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
If you want to add filters like file extensions, use grep:
du -a | grep "\.scala$" | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
For Qshell (qsh) on IBMi, not bash as tagged by OP.
Limitations of qsh commands:
find does not have the -print0 option
xargs does not have -0 option
sed does not have -i option
Thus the solution in qsh:
DIR='your/path/here'   # do not name this variable PATH, or the shell's command lookup breaks
SEARCH='subdomainA.example.com'
REPLACE='subdomainB.example.com'
for file in $( find ${DIR} -P -type f ); do
TEMP_FILE=${file}.${RANDOM}.temp_file
if [ ! -e ${TEMP_FILE} ]; then
touch -C 819 ${TEMP_FILE}
sed -e 's/'$SEARCH'/'$REPLACE'/g' \
< ${file} > ${TEMP_FILE}
mv ${TEMP_FILE} ${file}
fi
done
Caveats:
Solution excludes error handling
Not Bash as tagged by OP
If you wanted to use this without completely destroying your SVN repository, you can tell 'find' to ignore all hidden files by doing:
find . \( ! -regex '.*/\..*' \) -type f -print0 | xargs -0 sed -i 's/subdomainA.example.com/subdomainB.example.com/g'
Using a combination of grep and sed:
for pp in $(grep -Rl looking_for_string .)
do
sed -i 's/looking_for_string/something_other/g' "${pp}"
done
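If the file names may contain spaces, here is a sketch of the same loop reading the list line by line instead:
grep -Rl looking_for_string . | while IFS= read -r pp; do sed -i 's/looking_for_string/something_other/g' "$pp"; done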
perl -p -i -e 's/oldthing/new_thingy/g' `grep -ril oldthing *`
to change multiple files (saving a backup of each as *.bak):
perl -p -i.bak -e "s/\|/x/g" *
will take all files in the directory and replace | with x
this is called a "Perl pie" (easy as pie)
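And if a replacement turns out wrong, the *.bak copies let you roll everything back; a sketch:
for f in *.bak; do mv -- "$f" "${f%.bak}"; done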
Related
Recursive grep-sed yields error: sed: -e expression #1, char 1: unknown command: `?' [duplicate]
I run this command to find and replace all occurrences of 'apple' with 'orange' in all files in the root of my site:
find ./ -exec sed -i 's/apple/orange/g' {} \;
But it doesn't go through subdirectories. What is wrong with this command? Here are some lines of output of find ./:
./index.php
./header.php
./fpd
./fpd/font
./fpd/font/desktop.ini
./fpd/font/courier.php
./fpd/font/symbol.php
Your find should look like this, to avoid sending directory names to sed:
find ./ -type f -exec sed -i -e 's/apple/orange/g' {} \;
For larger search-and-replace tasks it's better and faster to use grep and xargs, for example:
grep -rl 'apples' /dir_to_search_under | xargs sed -i 's/apples/oranges/g'
Since there are also macOS folks reading this one (as I did), the following worked for me (on 10.14):
egrep -rl '<pattern>' <dir> | xargs -I# sed -i '' 's/<arg1>/<arg2>/g' #
All the other answers using -i and -e do not work on macOS.
This worked for me:
find ./ -type f -name '*.php' -exec sed -i '' 's#NEEDLE#REPLACEMENT#' {} \;
grep -e apple your_site_root/**/*.* -s -l | xargs sed -i "" "s|apple|orange|"
Found a great program for this called ruplacer: https://github.com/dmerejkowsky/ruplacer
Usage:
ruplacer before_text after_text        # prints out a list of things it will replace
ruplacer before_text after_text --go   # executes the replacements
It also respects .gitignore, so it won't mess up your .git or node_modules directories (find . by default will go into your .git directory and can corrupt it!).
I think we can do this with one simple one-liner:
for i in `grep -rl eth0 . 2> /dev/null`; do sed -i 's/eth0/eth1/' $i; done
On Linux:
sed -i 's/textSearch/textReplace/g' namefile
If sed does not work, try:
perl -i -pe 's/textSearch/textReplace/g' namefile
Bash find filter and copy - trouble with spaces
So after a lot of searching and trying to interpret others' questions and answers to my needs, I decided to ask for myself. I'm trying to take a directory structure full of images and place all the images (regardless of extension) in a single folder. In addition to this, I want to be able to remove images matching certain filenames in the process. I have a find command working that outputs all the filepaths for me:
find -type f -exec file -i -- {} + | grep -i image | sed 's/\:.*//'
but if I try to use that to copy files, I have trouble with the spaces in the filenames:
cp `find -type f -exec file -i -- {} + | grep -i image | sed 's/\:.*//'` out/
What am I doing wrong, and is there a better way to do this?
With the caveat that it won't work if files have newlines in their names:
find . -type f -exec file -i -- {} + | awk -vFS=: -vOFS=: '$NF ~ /image/{NF--;printf "%s\0", $0}' | xargs -0 cp -t out/
(Based on the answer by Jonathan Leffler and the subsequent comments discussion with him and @devnull.)
The find command works well if none of the file names contain any newlines. Within broad limits, the grep command works OK under the same circumstances. The sed command works fine as long as there are no colons in the file names. However, given that there are spaces in the names, the use of $(...) (command substitution, also indicated by back-ticks `...`) is a disaster. Unfortunately, xargs isn't readily a part of the solution; it splits on spaces by default. Because you have to run file and grep in the middle, you can't easily use the -print0 option to (GNU) find and the -0 option to (GNU) xargs. In some respects, it is crude, but in many ways, it is easiest if you write an executable shell script that can be invoked by find:
#!/bin/bash
for file in "$@"
do
    if file -i -- "$file" | grep -i -q "$file:.*image"
    then
        cp "$file" out/
    fi
done
This is a little painful in that it invokes file and grep separately for each name, but it is reliable. The file command is even safe if the file name contains a newline; the grep is probably not. If that script is called 'copyimage.sh', then the find command becomes:
find . -type f -exec ./copyimage.sh {} +
And, given the way the grep command is written, the copyimage.sh file won't be copied, even though its name contains the magic word 'image'.
Pipe the results of your find command to
xargs -l --replace cp "{}" out/
Example of how this works for me on Ubuntu 10.04:
atomic@atomic-desktop:~/temp$ ls
img.png  img space.png
atomic@atomic-desktop:~/temp$ mkdir out
atomic@atomic-desktop:~/temp$ find -type f -exec file -i \{\} \; | grep -i image | sed 's/\:.*//' | xargs -l --replace cp -v "{}" out/
`./img.png' -> `out/img.png'
`./img space.png' -> `out/img space.png'
atomic@atomic-desktop:~/temp$ ls out
img.png  img space.png
Displaying the result of find / replace over multiple documents on bash
I love to use the following command to do a find/replace across multiple files in bash:
find -wholename "*.txt" -print | xargs sed -i 's/foo/bar/g'
However, the above command processes everything in silence, and sometimes I would like it to print all the changes it made, in order to double-check that I did everything correctly. Can I know how I should improve the above command to allow it to dump such information? I tried the -v argument in the xargs command but it gives me the invalid option error.
You can do something like:
find -wholename "*.txt" | xargs sed -n '/foo/p;s/foo/bar/gp'
What this will do is print the line that you wish to substitute and print the substitution in the next line. You can use awk and get the filename as well:
find -wholename "*.txt" | xargs awk '/foo/{print FILENAME; gsub(/foo/,"bar");print}'
To print the entire file, remove the print and add 1:
find -wholename "*.txt" | xargs awk '/foo/{print FILENAME; gsub(/foo/,"bar")}1'
The regex will have to be modified as per your requirement, and in-file changes are only available in gawk version 4.1.
Test:
$ head file*
==> file1 <==
,,"user1","email"
,,"user2","email"
,,"user3","email"
,,"user4","email"
==> file2 <==
,,user2,location2
,,user4,location4
,,user1,location1
,,user3,location3
$ find . -name "file*" -print | xargs awk '/user1/{print FILENAME; gsub(/user1/,"TESTING");print}'
./file1
,,"TESTING","email"
./file2
,,TESTING,location1
In order to see the differences, you can redirect the output of sed to a new file for every input file and compare it with the original:
for i in `find -wholename "*.txt"`; do sed 's/foo/bar/g' ${i} > ${i}.new; diff -u ${i} ${i}.new; done
If the changes seem OK, move the new files to their original names:
for i in `find -wholename "*.new"` ; do mv ${i} ${i/.new}; done
All this can be done with find and sed; only a little modification is needed:
find -path "*.txt" -exec sed -i.bak 's/foo/bar/g' {} +
This calls sed with the maximum number of files (mind the + at the end of -exec), so xargs is not needed. In sed, -i.bak does in-place editing, renaming the original file as *.bak, so you can check the differences later if needed. In man find one can read:
-wholename pattern
    See -path. This alternative is less portable than -path.
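For instance, a sketch of checking those differences afterwards:
for f in $(find -name '*.bak'); do diff -u "$f" "${f%.bak}"; done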
"grep -R" replacement?
I have a machine with grep installed, but option -R is not compiled in and there is also no replacement switch. How can I replace it in bash? I tried:
for i in `find *`; do grep 'pattern' $i; done
but that is not the right re-interpretation, is it?
Try piping the output of find to xargs, so that grep only gets invoked a few times (xargs keeps reading input until it gets so much that more would not fit in an argument list):
find -type f | xargs grep foo
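One caveat: when xargs happens to hand grep a single file, grep omits the file name from its output; appending /dev/null as a second argument forces it back (a classic trick):
find -type f | xargs grep foo /dev/null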
We usually use:
find . -exec grep 'pattern' {} \;
That usually works similarly to grep -R.