Function in /etc/bashrc is not passing quotes through variables - bash

I am trying to write a bashrc function to add a safety check to the find command when it is paired with "-exec rm". The script cuts off everything beginning with "-exec rm" and replaces it with "-print0", which is then passed to sed to re-add the quotes that are lost when the user supplies a quoted expression.
I am running into an issue where the quotes I am adding via sed are not being passed through to the line that executes the find command.
Rewriting the find command and re-adding quotes:
FIND_VAR=$(echo "$@" | sed "s/-exec rm.*/-print0/g" | sed 's/\*.* / "&" /g' | sed 's/ "/"/g')
Running the find command with the modifications:
FIND_LIST=$(/bin/find $FIND_VAR | sed 's|\./| \./|g')
What I would like to accomplish is if the user types the following:
find ./ -type f -name "*.txt" -exec rm -rf {} \;
The command is re-written via the bashrc to run as:
find ./ -type f -name "*.txt" -print0 | sed 's|\./| \./|g'
This generates a space-separated list of files, which is passed to a modified rm function:
./file1.txt ./file2.txt ./file3.txt ...
The full function is listed below for reference:
function find () {
    if echo "$@" | grep -q '-exec rm' ; then
        echo "Found rm command as part of find"
        FIND_VAR=$(echo "$@" | sed "s/-exec rm.*/-print0/g" | sed 's/\*.* / "&" /g' | sed 's/ "/"/g')
        echo "/bin/find $FIND_VAR"
        FIND_LIST=$(/bin/find $FIND_VAR | sed 's|\./| \./|g')
        echo "$FIND_LIST"
        echo "rm$FIND_LIST"
    else
        /bin/find "$@"
    fi
}

Don't put a command in a string. It will not work. See http://mywiki.wooledge.org/BashFAQ/050 for more details and discussion.
You have the command in an array. Use it.
Walk the array: find the arguments you want to remove and remove them, add the arguments you want to add, and then run the new command.
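A minimal sketch of that approach (hypothetical handling: it simply drops everything from -exec rm onward and prints the matches, rather than parsing the full -exec syntax):

function find () {
    local all=("$@") args=() i intercepted=0
    for ((i = 0; i < ${#all[@]}; i++)); do
        # stop copying arguments as soon as "-exec rm" appears
        if [[ ${all[i]} == -exec && ${all[i+1]-} == rm ]]; then
            intercepted=1
            break
        fi
        args+=("${all[i]}")
    done
    if ((intercepted)); then
        echo "Found rm command as part of find; printing matches instead" >&2
        /bin/find "${args[@]}" -print
    else
        /bin/find "$@"
    fi
}

Because the arguments stay in an array, a quoted pattern like "*.txt" survives intact all the way to /bin/find.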

Related

Bash: Bad result of command substitution

I want to replace spaces in filenames. My test directory contains files with spaces:
$ ls
'1 2 3.txt' '4 5.txt' '6 7 8 9.txt'
For example this code works fine:
$ printf "$(printf 'spaces in file name.txt' | sed 's/ /_/g')"
spaces_in_file_name.txt
I replace spaces with underscores, and the command substitution returns the result into the double quotes as text. This construction is essential in the next case: commands such as find and xargs use a substitution marker, {} (curly braces), so the next command should be able to replace spaces in file names.
$ find ./ -name "*.txt" -print0 | xargs --null -I '{}' mv '{}' "$( printf '{}' | sed 's/ /_/g' )"
mv: './6 7 8 9.txt' and './6 7 8 9.txt' are the same file
mv: './4 5.txt' and './4 5.txt' are the same file
mv: './1 2 3.txt' and './1 2 3.txt' are the same file
But I get errors. To see the problem more clearly, I use echo (or printf) instead of mv:
$ find ./ -name "*.txt" -print0 | xargs --null -I '{}' echo "$( printf '{}' | sed 's/ /_/g' )"
./6 7 8 9.txt
./4 5.txt
./1 2 3.txt
As we can see, spaces were not replaced with underscores. But without command substitution, the replacement is correct:
$ find ./ -name "*.txt" -print0 | xargs --null -I '{}' printf '{}\n' | sed 's/ /_/g'
./6_7_8_9.txt
./4_5.txt
./1_2_3.txt
So command substitution with the curly braces corrupts the result (the first command showed that the replacement itself works), while without command substitution the result is correct. But why?
Your command substitution runs once, when the shell parses the command line, before find or xargs execute, so you're actually running:
mv '{}' "{}"
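You can see this directly: the substitution is performed on the literal string {}, which contains no spaces for sed to replace:

$ echo "$( printf '{}' | sed 's/ /_/g' )"
{}

xargs then substitutes each file name into that already-expanded {}.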
You could change the find command to match .txt files with at least one space character and use -exec and a small bash script to rename the files:
find . -type f -name "* *.txt" -exec bash -c '
for file; do
fname=${file##*/}
mv -i "$file" "${file%/*}/${fname// /_}"
done
' bash {} +
${file##*/} removes the parent directories (longest prefix pattern */) and leaves the filename (like the basename command)
${file%/*} removes the filename (shortest suffix pattern /*) and leaves the parent directories (like the dirname command)
${fname// /_} replaces all spaces with underscores
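For example, with a hypothetical path (the name is just for illustration):

$ file='./some dir/my report.txt'
$ echo "${file##*/}"      # like basename
my report.txt
$ echo "${file%/*}"       # like dirname
./some dir
$ fname=${file##*/}; echo "${fname// /_}"
my_report.txt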
It's quite fast and simple with a loop; just replace absolute_path with your path:
for f in absolute_path/*.txt; do mv "$f" "${f// /_}";done
The ${f// /_} part utilizes bash's parameter expansion mechanism to replace a pattern within a parameter with the supplied string.

How to find every file in my repo that has a specific word in the last line?

In other words, how do I combine the tail and find/grep commands in bash?
I want to find all the files (including files in subdirectories) in my repo that have a specific word, say FIX, in the last line. I tried grep -Rl "FIX" to display all the files containing "FIX", but I don't know how to combine the tail command with it. Can anyone help?
Run tail on all the files at once and then grep the output for FIX. When given multiple file names, tail prints a ==> filename <== header before each file's output, so grepping with one line of leading context shows which file each match came from:
find -type f -exec tail -n1 {} + | grep -B1 FIX
Or use ** to find all files and subdirectories, then run tail on each of them one at a time:
shopt -s globstar
for file in **; do
[[ -f $file ]] && tail -n1 "$file" | grep -q FIX && echo "$file"
done
Or use find to list all files and pipe them to a while read loop:
find -type f -print0 | while IFS= read -rd '' file; do
tail -n1 "$file" | grep -q FIX && echo "$file"
done
Or do the same thing but with -exec + and an explicit sub-shell:
find -type f -exec sh -c 'for file; do tail -n1 "$file" | grep -q FIX && echo "$file"; done' sh {} +
If you want to know if the last line matches a pattern, use sed and restrict the match to the last line with $. sed doesn't easily give a return value or do pretty printing of the filename like grep, but it gets the job done.
find . -exec sh -c "sed -n '$ { /FIX/p; }' {} | grep -q . " \; -print
Here, we use -n to suppress printing, and then print (with the p flag) only when the last line matches the pattern /FIX/. The output is piped to grep to get a return value that find uses to decide whether or not to -print the name.
Or, you can avoid using grep for the return by doing something like:
find . -exec awk 'END{ exit ! match($0, "FIX")}' {} \; -print
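A quick way to sanity-check either variant (assuming GNU awk, which keeps the last record available in the END block; the file names here are made up):

$ printf 'first\nFIX at end\n' > a.txt
$ printf 'first\nclean end\n' > b.txt
$ find . -name '*.txt' -exec awk 'END{ exit ! match($0, "FIX")}' {} \; -print
./a.txt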

Using a file's content in sed's replacement string

I've spent hours searching and can't find a solution to this. I have a directory with over 1,000 PHP files. I need to replace some code in these files as follows:
Find:
session_register("CurWebsiteID");
Replace with (saved in replacement.txt):
if(!function_exists ("session_register") && isset($_SERVER["DOCUMENT_ROOT"])){require_once($_SERVER["DOCUMENT_ROOT"]."/libraries/phpruntime/php_legacy_session_functions.php");} session_register("CurWebsiteID");
Using the command below, the pattern gets replaced with the literal text $(cat replacement.txt), whereas I want to replace it with the content of the text file.
Command being used:
find . -name "*.xml" | xargs -n 1 sed -i -e 's/mercy/$(cat replacement.txt)/g'
I've also tried using a variable instead (replacement=code_above) and running an adjusted version with $(echo $replacement), but that doesn't help either.
What is the correct way to achieve this?
You don't need command substitution here. You can use the sed r command to insert file content and d to delete the line matching the pattern:
find . -name "*.xml" | xargs -n 1 sed -i -e '/mercy/r replacement.txt' -e '//d'
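For example, on a hypothetical input (the r command queues the file's content, which is still printed at the end of the cycle even though d deletes the matching line):

$ cat replacement.txt
REPLACEMENT LINE
$ printf 'before\nmercy\nafter\n' | sed -e '/mercy/r replacement.txt' -e '//d'
before
REPLACEMENT LINE
after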
$(...) is not interpreted inside single quotes. Use double quotes:
find . -name "*.xml" | xargs -n 1 sed -i -e "s/mercy/$(cat replacement.txt)/g"
You can also do away with cat:
find . -name "*.xml" | xargs -n 1 sed -i -e "s/mercy/$(< replacement.txt)/g"
In case replacement.txt has a / in it, use a different delimiter in the sed expression, for example #:
find . -name "*.xml" | xargs -n 1 sed -i -e "s#mercy#$(< replacement.txt)#g"
See also:
Use slashes in sed replace

Changing file content using sed in bash [duplicate]

How do I find and replace every occurrence of:
subdomainA.example.com
with
subdomainB.example.com
in every text file under the /home/www/ directory tree recursively?
find /home/www \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
-print0 tells find to print each of the results separated by a null character, rather than a new line. In the unlikely event that your directory has files with newlines in the names, this still lets xargs work on the correct filenames.
\( -type d -name .git -prune \) is an expression which completely skips over all directories named .git. You could easily expand it, if you use SVN or have other folders you want to preserve -- just match against more names. It's roughly equivalent to -not -path .git, but more efficient, because rather than checking every file in the directory, it skips it entirely. The -o after it is required because of how -prune actually works.
For more information, see man find.
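For example, to also skip .svn directories (the same pattern, just one more name to match):

find /home/www \( -type d \( -name .git -o -name .svn \) -prune \) -o -type f -print0 | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'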
The simplest way for me is
grep -rl oldtext . | xargs sed -i 's/oldtext/newtext/g'
Note: Do not run this command on a folder including a git repo - changes to .git could corrupt your git index.
find /home/www/ -type f -exec \
sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
Compared to other answers here, this is simpler than most and uses sed instead of perl, which is what the original question asked for.
All the tricks are almost the same, but I like this one:
find <mydir> -type f -exec sed -i 's/<string1>/<string2>/g' {} +
find <mydir>: look up in the directory.
-type f:
File is of type: regular file
-exec command {} +:
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command. The command is executed in the starting directory.
For me the easiest solution to remember is https://stackoverflow.com/a/2113224/565525, i.e.:
sed -i '' -e 's/subdomainA/subdomainB/g' $(find /home/www/ -type f)
NOTE: -i '' solves the OS X problem sed: 1: "...": invalid command code .
NOTE: If there are too many files to process you'll get Argument list too long. The workaround: use the find -exec or xargs solutions described above.
cd /home/www && find . -type f -print0 |
xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
For anyone using the silver searcher (ag):
ag SearchString -l0 | xargs -0 sed -i 's/SearchString/Replacement/g'
Since ag ignores git/hg/svn file/folders by default, this is safe to run inside a repository.
This one is compatible with git repositories, and a bit simpler:
Linux:
git grep -l 'original_text' | xargs sed -i 's/original_text/new_text/g'
Mac:
git grep -l 'original_text' | xargs sed -i '' -e 's/original_text/new_text/g'
(Thanks to http://blog.jasonmeridth.com/posts/use-git-grep-to-replace-strings-in-files-in-your-git-repository/)
To cut down on the number of files to run sed through recursively, you could first grep for files containing your string:
grep -rl <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
If you run man grep you'll notice you can also define an --exclude-dir="*.git" flag if you want to omit searching through .git directories, avoiding git index issues as others have politely pointed out.
Leading you to:
grep -rl --exclude-dir="*.git" <oldstring> /path/to/folder | xargs sed -i s^<oldstring>^<newstring>^g
A straightforward method if you need to exclude directories (--exclude-dir=.folder) and might also have file names with spaces (solved by using the NUL byte as separator, with grep -Z and xargs -0):
grep -rlZ oldtext . --exclude-dir=.folder | xargs -0 sed -i 's/oldtext/newtext/g'
One more nice one-liner as an extra, using git grep:
git grep -lz 'subdomainA.example.com' | xargs -0 perl -i'' -pE "s/subdomainA.example.com/subdomainB.example.com/g"
The simplest way to replace text in all files, recursively:
find . -type f -not -path '*/\.*' -exec sed -i 's/foo/bar/g' {} +
Note: Sometimes you need to ignore hidden files (e.g. .git); the command above does that.
If you want to include hidden files, use:
find . -type f -exec sed -i 's/foo/bar/g' {} +
In both cases the string foo will be replaced with the new string bar.
find /home/www/ -type f -exec perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
find /home/www/ -type f will list all files in /home/www/ (and its subdirectories).
The "-exec" flag tells find to run the following command on each file found.
perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g' {} +
is the command run on the files (many at a time). The {} gets replaced by file names.
The + at the end of the command tells find to build one command for many filenames.
Per the find man page: "The command line is built in much the same way that xargs builds its command lines."
Thus it's possible to achieve your goal (and handle filenames containing spaces) without using xargs -0, or -print0.
I just needed this and was not happy with the speed of the available examples. So I came up with my own:
cd /var/www && ack-grep -l --print0 subdomainA.example.com | xargs -0 perl -i.bak -pe 's/subdomainA\.example\.com/subdomainB.example.com/g'
ack-grep is very efficient at finding the relevant files. This command replaced text in ~145,000 files in a breeze, whereas others took so long I couldn't wait for them to finish.
Or use the blazing fast GNU Parallel:
grep -rl oldtext . | parallel sed -i 's/oldtext/newtext/g' {}
grep -lr 'subdomainA.example.com' | while read file; do sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done
I guess most people don't know that they can pipe something into a "while read file" loop; it avoids those nasty -print0/-0 arguments while preserving spaces in file names.
Adding an echo before the sed also lets you see which files will change before actually doing it.
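For example, a dry run that prints the sed commands instead of executing them (same pattern as above):

grep -lr 'subdomainA.example.com' | while read file; do echo sed -i "s/subdomainA.example.com/subdomainB.example.com/g" "$file"; done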
Try this:
sed -i 's/subdomainA/subdomainB/g' `grep -ril 'subdomainA' *`
According to this blog post:
find . -type f | xargs perl -pi -e 's/oldtext/newtext/g;'
#!/usr/local/bin/bash -x
find /home/www -type f | while read files
do
    sedtest=$(sed -n '/subdomainA/p' "${files}")
    if [ "${sedtest}" ]
    then
        sed 's/subdomainA/subdomainB/g' "${files}" > "${files}".tmp
        mv "${files}".tmp "${files}"
    fi
done
If you do not mind using vim together with the grep or find tools, you could follow the answer given by user Gert in this link: How to do a text replacement in a big folder hierarchy?
Here's the deal:
Recursively grep for the string that you want to replace in a certain path, and take only the complete paths of the matching files (that would be the $(grep 'string' 'pathname' -Rl) part).
(Optional) If you want to make a backup of those files in a centralized directory first, you can also use: cp -iv $(grep 'string' 'pathname' -Rl) 'centralized-directory-pathname'
After that you can edit/replace at will in vim, following a scheme similar to the one provided in the link given:
:bufdo %s#string#replacement#gc | update
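Putting the steps together (a sketch; 'string', 'replacement', and 'pathname' are placeholders for your own values):

vim $(grep 'string' 'pathname' -Rl)
:bufdo %s#string#replacement#gc | update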
You can use awk to solve this, as below:
for file in $(find /home/www -type f)
do
    awk '{gsub(/subdomainA.example.com/,"subdomainB.example.com"); print $0;}' "$file" > ./tempFile && mv ./tempFile "$file"
done
Hope this helps!
To replace all occurrences in a git repository you can use:
git ls-files -z | xargs -0 sed -i 's/subdomainA\.example\.com/subdomainB.example.com/g'
See List files in local git repo? for other options to list all files in a repository. The -z option tells git to separate the file names with a zero byte, which assures that xargs (with the option -0) can separate filenames, even if they contain spaces or whatnot.
A bit old school, but this worked on OS X.
There are a few tricks:
• Will only edit files with extension .sls under the current directory
• . must be escaped to ensure sed does not evaluate them as "any character"
• , is used as the sed delimiter instead of the usual /
Also note this is to edit a Jinja template to pass a variable in the path of an import (but this is off topic).
First, verify your sed command does what you want (this will only print the changes to stdout, it will not change the files):
for file in $(find . -name "*.sls" -type f); do echo -e "\n$file: "; sed 's,foo\.bar,foo/bar/\"+baz+\"/,g' $file; done
Edit the sed command as needed, once you are ready to make changes:
for file in $(find . -name "*.sls" -type f); do echo -e "\n$file: "; sed -i '' 's,foo\.bar,foo/bar/\"+baz+\"/,g' $file; done
Note the -i '' in the sed command: I did not want to create a backup of the original files (as explained in In-place edits with sed on OS X or in Robert Lujo's comment on this page).
Happy sedding, folks!
Just to avoid also changing
NearlysubdomainA.example.com
subdomainA.example.comp.other
while still changing
subdomainA.example.com.IsIt.good
(maybe not good given the idea behind the domain root):
find /home/www/ -type f -exec sed -i 's/\bsubdomainA\.example\.com\b/subdomainB.example.com/g' {} \;
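A quick check of the word-boundary behaviour (\b is a GNU sed extension):

$ echo 'NearlysubdomainA.example.com subdomainA.example.com.IsIt.good' | sed 's/\bsubdomainA\.example\.com\b/X/g'
NearlysubdomainA.example.com X.IsIt.good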
Here's a version that should be more general than most; it doesn't require find (using du instead), for instance. It does require xargs, which is only found in some versions of Plan 9 (like 9front).
du -a | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
If you want to add filters like file extensions use grep:
du -a | grep "\.scala$" | awk -F' ' '{ print $2 }' | xargs sed -i -e 's/subdomainA\.example\.com/subdomainB.example.com/g'
For Qshell (qsh) on IBMi, not bash as tagged by OP.
Limitations of qsh commands:
find does not have the -print0 option
xargs does not have -0 option
sed does not have -i option
Thus the solution in qsh:
DIR='your/path/here'
SEARCH='subdomainA.example.com'
REPLACE='subdomainB.example.com'
for file in $( find ${DIR} -P -type f ); do
TEMP_FILE=${file}.${RANDOM}.temp_file
if [ ! -e ${TEMP_FILE} ]; then
touch -C 819 ${TEMP_FILE}
sed -e "s/${SEARCH}/${REPLACE}/g" \
< ${file} > ${TEMP_FILE}
mv ${TEMP_FILE} ${file}
fi
done
Caveats:
Solution excludes error handling
Not Bash as tagged by OP
If you wanted to use this without completely destroying your SVN repository, you can tell 'find' to ignore all hidden files by doing:
find . \( ! -regex '.*/\..*' \) -type f -print0 | xargs -0 sed -i 's/subdomainA.example.com/subdomainB.example.com/g'
Using a combination of grep and sed:
for pp in $(grep -Rl looking_for_string)
do
sed -i 's/looking_for_string/something_other/g' "${pp}"
done
perl -p -i -e 's/oldthing/new_thingy/g' `grep -ril oldthing *`
To change multiple files (and save a backup of each as *.bak):
perl -p -i.bak -e "s/\|/x/g" *
will take all files in the directory and replace | with x.
It's called a "Perl pie" (easy as pie).

Iterating over associative array in bash

I am renaming strings recursively using an associative array. The array part is working: when I echo $index and ${code_names[$index]} they print correctly. However, the files are not modified. When I run the find | sed command in the shell it works, but inside a bash script it doesn't.
Update
Also, the script runs OK if I just hardcode the string to be renamed: find . -name $file_type -print0 | xargs -0 sed -i 's/TEST/BAT/g'
#!/usr/bin/env bash
dir=$(pwd)
base=$dir"/../../new/repo"
file_type="*Kconfig"
cd $base
declare -A code_names
code_names[TEST]=BAT
code_names[FUGT]=BLANK
for index in "${!code_names[@]}"
do
find . -name $file_type -print0 | xargs -0 sed -i 's/$index/${code_names[$index]}/g'
done
The bare variable $file_type gets expanded by the shell; double quote it. Variables are not expanded in single quotes; use double quotes instead. Note that this can break if $index or ${code_names[$index]} contains characters with special meaning to sed (like /).
find . -name "$file_type" -print0 \
| xargs -0 sed -i "s/$index/${code_names[$index]}/g"
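If the values may contain /, switching the sed delimiter avoids the clash (a sketch using the same variables):

find . -name "$file_type" -print0 | xargs -0 sed -i "s|$index|${code_names[$index]}|g"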
