So I built a super simple script that searches every directory below the one it is run from, finds the first argument, and replaces it with the second one:
#!/usr/local/bin/bash -f
word_to_look_for=$1
substitue=$2
find ./ -type f -exec sed -i "" "s/$word_to_look_for/$substitute/g" {} \;
echo "Replaced ($word_to_look_for) with ($substitue)"
For some reason though, this bit -> "s/$word_to_look_for/$substitute/g"
would only expand to s/wordImlookingfor//g, and as a result sed would replace each match with empty text. To get this to work as intended I had to change the script to the following:
sed_arg="s/$word_to_look_for"
sed_arg="$sed_arg/$substitue/g"
find ./ -type f -exec sed -i "" "$sed_arg" {} \;
echo "Replaced ($word_to_look_for) with ($substitue)"
I'm just wondering, why did bash not seem to like the way I had it in the first version?
You misspelled substitute as substitue everywhere except for one place:
"s/$word_to_look_for/$substitute/g"
So bash expanded the variable $substitute, which was never set, while the other variable ($substitue) was set but never used.
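For reference, here is the script with the spelling made consistent throughout (a minimal corrected sketch; note that sed will still interpret characters like / or & inside the arguments):
#!/usr/local/bin/bash -f
word_to_look_for=$1
substitute=$2
# the search word and the replacement now use the same spelling everywhere
find ./ -type f -exec sed -i "" "s/$word_to_look_for/$substitute/g" {} \;
echo "Replaced ($word_to_look_for) with ($substitute)"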
for newfile in `find . -type f ! -path "./data/*" ! -name new_changes.txt`; do
    if ! grep -q "\$newfile" new_changes.txt; then
        rm \$newfile;
    fi
done
The above code works fine if sh """#!/bin/bash +x is given at the start of the code block. But when that is commented out, it throws the below error:
rm: cannot remove '$newfile': No such file or directory
Any suggestions on how we can modify this for loop to work without sh """#!/bin/bash +x?
@alaniwi already explained in his comment the error in your rm command: you are trying to remove a file with the literal name $newfile, and you probably don't have any file whose name starts with a dollar sign.
The other problem is similar, but not identical: Your grep command searches for a literal string $newfile, while you probably want to search for a string stored in the variable newfile. Hence you have to drop the \.
But this still means that the content of the variable newfile is subject to interpretation as a regular expression. For example, if newfile has the value abc.txt, grep would also succeed if new_changes.txt contained just abcdtxt. To avoid this, use the -F option, which makes grep treat the pattern as a fixed string instead of a regexp.
And still, there is one more error: say newfile has the value abc and new_changes.txt contains just xxxabc; they would match, since abc is a substring of xxxabc. To avoid this, use the -x option to grep, which forces the pattern to match the whole line.
Hence, your command should be grep -qFx "$newfile" new_changes.txt
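Putting it all together, the loop might look like this (a sketch; it still assumes no file name contains a newline, and that new_changes.txt lists paths in the same ./-prefixed form that find prints):
find . -type f ! -path "./data/*" ! -name new_changes.txt | while read -r newfile; do
    if ! grep -qFx "$newfile" new_changes.txt; then
        rm "$newfile"
    fi
done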
I am trying to remove specific characters from a file in bash but am not getting the desired result.
for file in /home/cmccabe/Desktop/NGS/API/test/*.vcf.gz; do
    mv -- "$file" "${file%%/*_variants_}.vcf.gz"
done
file name
TSVC_variants_IonXpress_004.vcf.gz
desired result
IonXpress_004.vcf.gz
current result (extension in filename repeats)
TSVC_variants_IonXpress_004.vcf.gz.vcf.gz
I have tried moving the * to the end and using /_variants_/, with the same results. Thank you :).
${var%%*foo} removes a string ending with foo from the end of the value of var. If there isn't a suffix which matches, nothing is removed. I'm guessing you want ${var##*foo} to trim from the beginning, up through foo. You'll have to add the directory path back separately if you remove it, of course.
mv -- "$file" "/home/cmccabe/Desktop/NGS/API/test/${file##*_variants_}"
find . -type f -name "*.vcf.gz" -exec bash -c 'var="$1"; mv "$var" "${var/TSVC_variants_/}"' _ {} \;
may do the job for you.
From time to time I have to append some text to the end of a bunch of files. I would normally find these files with find.
I've tried
find . -type f -name "test" -exec tail -n 2 /source.txt >> {} \;
This, however, results in the last two lines of /source.txt being written to a single file literally named {}, once for every file found matching the search criteria.
I guess I have to escape >> somehow but so far I wasn't successful.
Any help would be greatly appreciated.
-exec only takes one command (with optional arguments) and you can't use any bash operators in it.
So you need to wrap it in a bash -c '...' block, which executes everything between '...' in a new bash shell.
find . -type f -name "test" -exec bash -c 'tail -n 2 /source.txt >> "$1"' bash {} \;
Note: everything after the '...' block is passed to it as regular arguments, except that numbering starts at $0 instead of $1. So the bash after the closing quote is a placeholder that fills $0; this matches how you would expect arguments and error messages to work in a regular shell, i.e. $1 is the first argument, and errors are prefixed with bash or something else meaningful.
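You can see the placeholder behaviour directly:
bash -c 'echo "running as: $0, first argument: $1"' bash hello
# prints: running as: bash, first argument: hello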
If execution time is an issue, consider doing something like export variable="$(tail -n 2 /source.txt)" and using "$variable" in the -exec. This will also always write the same thing, unlike using tail in -exec, which could change if the file changes. Alternatively, you can use something like -exec ... + and pair it with tee to write to many files at once.
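The first suggestion might be sketched like this (the variable name lines is arbitrary):
lines="$(tail -n 2 /source.txt)"    # run tail once and reuse the result
export lines
find . -type f -name "test" -exec bash -c 'printf "%s\n" "$lines" >> "$1"' bash {} \;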
A more efficient alternative (assuming bash 4):
shopt -s globstar
to_augment=( **/test )
tail -n 2 /source.txt | tee -a "${to_augment[@]}" > /dev/null
First, you create an array with all the file names, using a simple pattern that should be equivalent to your call to find. Then, use tee to append the desired lines to all those files at once.
If you have more criteria for the find command, you can still use it; this version is not foolproof, as it assumes no filename contains a newline, but fixing that is best left to another question.
while read -r fname; do
    to_augment+=( "$fname" )
done < <(find ...)
I'm trying to connect the inputs/outputs of two bash functions with a pipe. Here is a complete program which illustrates my issue:
function print_info {
    files=$(ls);
    echo $files;
}
touch "file.pattern"
print_info | grep "pattern"
rm -f file.pattern
But this simply outputs a list of all files, not those that match "pattern". Can anyone help me understand why?
The reason this isn't working is that in
echo $files;
the variable $files is subject to shell expansion (i.e., it is split into individual arguments to echo), and echo prints the resulting tokens separated by single spaces. This means the whole file listing comes out as a single line; since at least one name on that line contains "pattern", grep matches and prints the entire line.
The least invasive fix is to use
echo "$files";
Don't parse the output of the ls command. You could list the matching files directly with find:
find . -maxdepth 1 -type f -name "*pattern*"
If you are getting file names from a function, grep its output (this works once the function quotes its echo, as shown above):
print_info | grep "pattern"
I'm new to bash scripts (and the *nix shell altogether) but I'm trying to write this script to make grepping a codebase easier.
I have written this
#!/bin/bash
args=("$#");
for arg in args
    grep arg * */* */*/* */*/*/* */*/*/*/*;
done
when I try to run it, this is what happens:
~/Work/richmond $ ./f.sh "\$_REQUEST\['a'\]"
./f.sh: line 4: syntax error near unexpected token `grep'
./f.sh: line 4: ` grep arg * */* */*/* */*/*/* */*/*/*/*;'
~/Work/richmond $
How do I do this properly?
And, I think a more important question is, how can I make grep recurse through subdirectories properly like this?
Any other tips and/or pitfalls with shell scripting and using bash in general would also be appreciated.
The syntax error is because you're missing do. As for searching recursively if your grep has the -R option you would do:
#!/bin/bash
for arg in "$#"; do
grep -R "$arg" *
done
Otherwise you could use find:
#!/bin/bash
for arg in "$#"; do
find . -exec grep "$arg" {} +
done
In the latter example, find will execute grep and replace the {} braces with the file names it finds, starting in the current directory ..
(Notice that I also changed arg to "$arg". You need the dollar sign to get the variable's value, and the quotes tell the shell to treat its value as one big word, even if $arg contains spaces or newlines.)
On recursive grepping:
Depending on your grep version, you can pass -R to your grep command to have it search Recursively (in subdirectories).
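For example, with the pattern from the question:
grep -R "\$_REQUEST\['a'\]" .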
The best solution is stated above, but try putting your statement in backticks:
`grep ...`
You should use 'find' plus 'xargs' to do the file searching.
for arg in "$#"
do
find . -type f -print0 | xargs -0 grep "$arg" /dev/null
done
The '-print0' and '-0' options assume you're using GNU find and GNU xargs and ensure that the script works even if there are spaces or other unexpected characters in your path names. Using xargs like this is more efficient than having find execute grep for each file; the /dev/null appears in the argument list so grep always reports the name of the file containing the match.
You might decide to simplify life - perhaps - by combining all the searches into one using either egrep or grep -E. An optimization would be to capture the output from find once and then feed that to xargs on each iteration.
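A sketch of that combined search (assuming the arguments are plain extended regular expressions with no stray | characters):
pattern=$(IFS='|'; echo "$*")    # join all script arguments into an alternation: foo|bar|baz
find . -type f -print0 | xargs -0 grep -E "$pattern" /dev/null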
Have a look at the findrepo script, which may give you some pointers.
If you just want a better grep and don't want to do anything yourself, use ack, which you can get at http://betterthangrep.com/.