grep of string that includes a tilde returns non-matching lines - bash

I'm trying to search for a specific string in a file. The string includes a tilde; I am trying to isolate the line that contains the string "~ ca_cert".
This is my script:
#!/bin/bash
LIST=("~ ca_cert" "backup_window")
FILE=./test
for x in "${LIST[@]}"; do
grep $x $FILE
done
When I run it, it returns other lines that contain tildes. For example, in a file that contains the following, it returns all of the lines, when my intention is for it to return only the bottom line, the one that contains ~ ca_cert:
./test:./terraform.tfplan: ~ update in-place
./test:./terraform.tfplan: ~ resource "aws_db_instance" "rds_instance" {
./test:./terraform.tfplan: ~ ca_cert_identifier = "rds-ca-2019" -> "rds-ca-2015"

The problem is that the pattern, i.e. $x, is not quoted in your grep command. The shell word-splits the unquoted expansion, so your command effectively runs as grep '~' ca_cert ./test, which finds all the lines matching ~ and also reports an error about the non-existent file ca_cert.
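If you do keep the loop, quoting "$x" (and optionally adding -F so the string is matched literally rather than as a regular expression) is enough; a minimal sketch:
#!/bin/bash
LIST=("~ ca_cert" "backup_window")
FILE=./test
for x in "${LIST[@]}"; do
    # Quote the expansion so "~ ca_cert" stays a single argument; -F matches it as a fixed string
    grep -F "$x" "$FILE"
done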
However you don't really need to run a loop here. Just use grep -f with process substitution:
grep -Ff <(printf '%s\n' "${LIST[@]}") ./test
./terraform.tfplan: ~ ca_cert_identifier = "rds-ca-2019" -> "rds-ca-2015"
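For reference, -F makes grep treat each pattern as a fixed string (so the ~ and any regex metacharacters are literal), and -f reads the patterns, one per line, from the process substitution; you can see exactly what grep receives by running the printf on its own:
printf '%s\n' "${LIST[@]}"
~ ca_cert
backup_window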

Related

looping with grep over several files

I have multiple files /text-1.txt, /text-2.txt ... /text-20.txt
and what I want to do is to grep for two patterns and stitch them into one file.
For example:
I have
grep "Int_dogs" /text-1.txt > /text-1-dogs.txt
grep "Int_cats" /text-1.txt> /text-1-cats.txt
cat /text-1-dogs.txt /text-1-cats.txt > /text-1-output.txt
I want to repeat this for all 20 files above. Is there an efficient way to do this in bash/awk, etc.?
#!/bin/bash
count=1
next () {
    [[ "${count}" -lt 21 ]] && main
    [[ "${count}" -eq 21 ]] && exit 0
}
main () {
    file="text-${count}"
    grep "Int_dogs" "${file}.txt" > "${file}-dogs.txt"
    grep "Int_cats" "${file}.txt" > "${file}-cats.txt"
    cat "${file}-dogs.txt" "${file}-cats.txt" > "${file}-output.txt"
    count=$((count+1))
    next
}
next
grep has some features you seem not to be aware of:
grep can be launched on lists of files, but the output will be different:
For a single file, the output will only contain the filtered line, like in this example:
cat text-1.txt
I have a cat.
I have a dog.
I have a canary.
grep "cat" text-1.txt
I have a cat.
For multiple files, the filename is also shown in the output. Let's add another text file:
cat text-2.txt
I don't have a dog.
I don't have a cat.
I don't have a canary.
grep "cat" text-*.txt
text-1.txt: I have a cat.
text-2.txt: I don't have a cat.
grep can be extended to search for multiple patterns in files, using the -E switch. The patterns need to be separated using a pipe symbol:
grep -E "cat|dog" text-1.txt
I have a dog.
I have a cat.
(a summary of the previous two points, plus the remark that grep -E is equivalent to egrep):
egrep "cat|dog" text-*.txt
text-1.txt:I have a dog.
text-1.txt:I have a cat.
text-2.txt:I don't have a dog.
text-2.txt:I don't have a cat.
So, in order to redirect this to an output file, you can simply say:
egrep "cat|dog" text-*.txt >text-1-output.txt
Assuming you're using bash.
Try this:
for i in $(seq 1 20) ;do rm -f text-${i}-output.txt ; grep -E "Int_dogs|Int_cats" text-${i}.txt >> text-${i}-output.txt ;done
Details
This one-line script does the following:
Original files are intended to have the following name order/syntax:
text-<INTEGER_NUMBER>.txt - Example: text-1.txt, text-2.txt, ... text-100.txt.
Creates a loop from 1 to <N>, where <N> is the number of files you want to process.
Warning: the rm -f text-${i}-output.txt command runs first and removes any existing output file, to ensure that only a freshly generated output file is present at the end of the process.
grep -E "Int_dogs|Int_cats" text-${i}.txt tries to match both strings in the original file, and >> text-${i}-output.txt redirects all matched lines to a newly created output file carrying the number of the original file. Example: if the original file is text-5.txt, then text-5-output.txt will be created and will contain the matched lines (if any).

Looking for a regex pattern, passing that pattern to a script, and replacing the pattern with the output of the script

Every time the pattern shows up (in this example, a two-digit number), I want to pass that match to a script and replace the pattern with the output of the script.
I'm using sed an example of what it should look like would be
echo 'siedi87sik65owk55dkd' | sed 's/[0-9][0-9]/.\/script.sh/g'
Right now this returns
siedi./script.shsik./script.showk./script.shdkd
But I would like it to return
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
This is what is in ./script.sh
#!/bin/bash
echo "!!!$1!!!"
It has to be replaced with the output. In this example I know I could just use a normal sed substitution but I don't want that as an answer.
sed is for simple substitutions on individual lines, that is all. Anything else, even if it can be done, requires arcane language constructs that became obsolete in the mid-1970s when awk was invented and are used today purely for the mental exercise. Your problem is not a simple substitution so you shouldn't try to use sed to solve it.
You're going to want something like:
awk '{
    head = ""
    tail = $0
    while ( match(tail,/[0-9]{2}/) ) {
        tgt = substr(tail,RSTART,RLENGTH)
        cmd = "./script.sh " tgt
        if ( (cmd | getline line) > 0 ) {
            tgt = line
        }
        close(cmd)
        head = head substr(tail,1,RSTART-1) tgt
        tail = substr(tail,RSTART+RLENGTH)
    }
    print head tail
}'
e.g. using an echo in place of your script.sh command:
$ echo 'siedi87sik65owk55dkd' |
awk '{
    head = ""
    tail = $0
    while ( match(tail,/[0-9]{2}/) ) {
        tgt = substr(tail,RSTART,RLENGTH)
        cmd = "echo !!!" tgt "!!!"
        if ( (cmd | getline line) > 0 ) {
            tgt = line
        }
        close(cmd)
        head = head substr(tail,1,RSTART-1) tgt
        tail = substr(tail,RSTART+RLENGTH)
    }
    print head tail
}'
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
Ed's awk solution is obviously the way to go here.
For fun, I tried to come up with a sed solution, and here is (a convoluted GNU sed) one that takes the pattern and the script to be run as parameters; the input is either read from standard input (i.e., you can pipe to it) or from a file supplied as the third argument.
For your example, we'd have infile with contents
siedi87sik65owk55dkd
siedi11sik22owk33dkd
(two lines to demonstrate how this works for multiple lines), then script with contents
#!/bin/bash
echo "!!!${1}!!!"
and finally the solution script itself, named so. Usage is
./so pattern script [input]
where pattern is an extended regular expression as understood by GNU sed (with the -r option), script is the name of the command you want to run for each match, and the optional input is the name of the input file if input is not standard input.
For your example, this would be
./so '[[:digit:]]{2}' script infile
or, as a filter,
cat infile | ./so '[[:digit:]]{2}' script
with output
siedi!!!87!!!sik!!!65!!!owk!!!55!!!dkd
siedi!!!11!!!sik!!!22!!!owk!!!33!!!dkd
This is what so looks like:
#!/bin/bash
pat=$1 # The pattern to match
script=$2 # The command to run for each pattern
infile=${3:-/dev/stdin} # Read from standard input if not supplied
# Use sed and have $pattern and $script expand to the supplied parameters
sed -r "
:build_loop # Label to loop back to
h # Copy pattern space to hold space
s/.*($pat).*/.\/\"$script\" \1/ # (1) Extract last match and prepare command
# Replace pattern space with output of command
e
G # (2) Append hold space to pattern space
s/(.*)$pat(.*)/\1~~~\2/ # (3) Replace last match of pattern with ~~~
/\n[^\n]*$pat[^\n]*$/b build_loop # Loop if string contains match
:fill_loop # Label for second loop
s/(.*\n)(.*)\n([^\n]*)~~~([^\n]*)$/\1\3\2\4/ # (4) Replace last ~~~
t fill_loop # Loop if there was a replacement
s/(.*)\n(.*)~~~(.*)$/\2\1\3/ # (5) Final ~~~ replacement
" < "$infile"
The sed command works with two loops. The first one copies the pattern space to the hold space, then removes everything but the last match from the pattern space and prepares the command to be run. After the substitution with (1) in its comment, the pattern space looks like this:
./script 55
The e command (a GNU extension) then replaces the pattern space with the output of this command. After this, G appends the hold space to the pattern space (2). The pattern space now looks like this:
!!!55!!!
siedi87sik65owk55dkd
The substitution at (3) replaces the last match with a string hopefully not equal to the pattern and we get
!!!55!!!
siedi87sik65owk~~~dkd
The loop repeats if the last line of the pattern space still has a match for the pattern. After three loops, the pattern space looks like this:
!!!87!!!
!!!65!!!
!!!55!!!
siedi~~~sik~~~owk~~~dkd
The second loop now replaces the last ~~~ with the second to last line of the pattern space with substitution (4). The command uses lots of "not a newline" ([^\n]) to make sure we're not pulling the wrong replacement for ~~~.
Because of the way command (4) is written, the loop ends with one last substitution to go, so before command (5), we have this pattern space:
!!!87!!!
siedi~~~sik!!!65!!!owk!!!55!!!dkd
Command (5) is a simpler version of command (4), and after it, the output is as desired.
This seems to be fairly robust and can deal with spaces in the name of the script to be run as long as it's properly quoted when calling:
./so '[[:digit:]]{2}' 'my script' infile
This would fail if
The input file contains ~~~ (solvable by replacing all occurrences at the start, putting them back at the end)
The output of script contains ~~~
The pattern contains ~~~
i.e., the solution very much depends on ~~~ being unique.
Because nobody asked: so as a one-liner.
#!/bin/bash
sed -re ":b;h;s/.*($1).*/.\/\"$2\" \1/;e" -e "G;s/(.*)$1(.*)/\1~~~\2/;/\n[^\n]*$1[^\n]*$/bb;:f;s/(.*\n)(.*)\n([^\n]*)~~~([^\n]*)$/\1\3\2\4/;tf;s/(.*)\n(.*)~~~(.*)$/\2\1\3/" < "${3:-/dev/stdin}"
Still works!
A conceptually simpler multi-utility solution:
Using GNU utilities:
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' |
xargs -d'\n' -I% sh -c 'echo '\"%\"
Using BSD utilities (also works with GNU utilities):
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' | tr '\n' '\0' |
xargs -0 -I% sh -c 'echo '\"%\"
The idea is to use sed to translate the tokens of interest lexically into a string containing shell command substitutions that invoke the target script with the token, and then pass the result to the shell for evaluation.
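To see the intermediate stage, running just the sed part on its own (purely for illustration) shows the command-substitution string that the shell subsequently evaluates:
echo 'siedi87sik65owk55dkd' | sed 's|[0-9]\{2\}|$(./script.sh &)|g'
siedi$(./script.sh 87)sik$(./script.sh 65)owk$(./script.sh 55)dkd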
Note:
Any embedded " and $ characters in the input must be \-escaped.
xargs -d'\n' (GNU) and tr '\n' '\0' / xargs -0 (BSD) are only needed to correctly preserve whitespace in the input; if that is not needed, the following POSIX-compliant solution will do:
echo 'siedi87sik65owk55dkd' |
sed 's|[0-9]\{2\}|$(./script.sh &)|g' | tr '\n' '\0' |
xargs -I% sh -c 'printf "%s\n" '\"%\"

bash script to modify and extract information

I am creating a bash script to modify and summarize information with grep and sed. But it gets stuck.
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
#Extract lines starting with ">#HWI"
ONLY=`grep -v ^\>#HWI`
#replaces A and G with R in lines
ONLYR=`sed -e s/A/R/g -e s/G/R/g $ONLY`
grep R $ONLYR | wc -l
The correct way to write a shell script to do what you seem to be trying to do is:
awk '
!/^>#HWI/ {
    gsub(/[AG]/,"R")
    if (/R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
Just put that in the file myscript.sh and execute it as you do today.
To be clear: the bulk of the above code is an awk script; the shell script part is the first and last lines, where the shell just calls awk and passes it the input file names.
If you WANT to have intermediate variables then you can create/print them with:
awk '
!/^>#HWI/ {
    only = $0
    onlyR = only
    gsub(/[AG]/,"R",onlyR)
    print "only:", only
    print "onlyR:", onlyR
    if (/R/) {
        ++cnt
    }
}
END { print cnt+0 }
' "$@"
The above will work robustly, portably, and efficiently on all UNIX systems.
First of all, as @fedorqui commented, you're not providing grep with a source of input against which it will perform line matching.
Second, there are some problems in your script, which will result in unwanted behavior in the future, when you decide to manipulate some data:
Store matching lines in an array, or a file from which you'll later read values. The variable ONLY is not the right data structure for the task.
By convention, environment variables (PATH, EDITOR, SHELL, ...) and internal shell variables (BASH_VERSION, RANDOM, ...) are fully capitalized. All other variable names should be lowercase. Since variable names are case-sensitive, this convention avoids accidentally overriding environment and internal variables.
Here's a better version of your script, considering these points, but with an open question regarding what you were trying to do in the last line, grep R $ONLYR | wc -l:
#!/bin/bash
# This script extracts some basic information
# from text files and prints it to screen.
#
# Usage: ./myscript.sh </path/to/text-file>
input_file=$1
# Read lines not matching the provided regex, from $input_file
mapfile -t only < <(grep -v '^\>#HWI' "$input_file")
#replaces A and G with R in lines
for ((i=0; i<${#only[@]}; i++)); do
    only[i]="${only[i]//[AG]/R}"
done
# DEBUG
printf '%s\n' "Here are the lines, after replace:"
printf '%s\n' "${only[@]}"
# I'm not sure what you were trying to do here. Am I guessing right that you wanted
# to count the number of R's in ALL lines?
# grep R $ONLYR | wc -l
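If the intent of that last line was indeed to count R's, either of the following would do (a guess at the goal, reusing the only array built above), depending on whether you want matching lines or individual characters:
# Count the lines that contain at least one R
printf '%s\n' "${only[@]}" | grep -c R
# Count every individual R character across all lines
printf '%s\n' "${only[@]}" | grep -o R | wc -l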

Replace strings in multiple files with corresponding caps using bash on MacOSX

I have multiple .txt files, in which I want to replace the strings
old -> new
Old -> New
OLD -> NEW
The first step is to only replace one string Old->New. Here is my current code, but it does not do the job (the files remain unchanged). The sed line works only if I replace the variables with the actual strings.
#!/bin/bash
old_string="Old"
new_string="New"
sed -i '.bak' 's/$old_string/$new_string/g' *.txt
Also, how do I convert a string to all upper-case and all lower-case?
Thank you very much for your advice!
To complement @merlin2011's helpful answer:
If you wanted to create the case variants dynamically, try this:
# Define search and replacement strings
# as all-lowercase.
old_string='old'
new_string='new'
# Loop 3 times and create the case variants dynamically.
# Build up a _single_ sed command that performs all 3
# replacements.
sedCmd=
for (( i = 1; i <= 3; i++ )); do
    case $i in
        1) # as defined (all-lowercase)
            old_string_variant=$old_string
            new_string_variant=$new_string
            ;;
        2) # initial capital
            old_string_variant="$(tr '[:lower:]' '[:upper:]' <<<"${old_string:0:1}")${old_string:1}"
            new_string_variant="$(tr '[:lower:]' '[:upper:]' <<<"${new_string:0:1}")${new_string:1}"
            ;;
        3) # all-uppercase
            old_string_variant=$(tr '[:lower:]' '[:upper:]' <<<"$old_string")
            new_string_variant=$(tr '[:lower:]' '[:upper:]' <<<"$new_string")
            ;;
    esac
    # Append to the sed command string. Note the use of _double_ quotes
    # to ensure that variable references are expanded.
    sedCmd+="s/$old_string_variant/$new_string_variant/g; "
done
# Finally, invoke sed.
sed -i '.bak' "$sedCmd" *.txt
Note that bash 4 supports case conversions directly (as part of parameter expansion), but OS X, as of 10.9.3, is still on bash 3.2.51.
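For reference, on bash 4+ the case variants could be built with parameter expansion alone, without tr (shown only as a sketch; it will not run on the stock OS X bash 3.2):
# bash 4+ only: ^ capitalizes the first character, ^^ uppercases the whole string
old_string='old'
echo "${old_string^}"     # Old
echo "${old_string^^}"    # OLD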
Alternative solution, using awk to create the case variants and synthesize the sed command:
Aside from being shorter, it is also more robust: through appropriate escaping it correctly handles strings that happen to contain regex metacharacters (characters with special meaning in a regular expression, e.g., *) or characters with special meaning in the replacement-string parameter of sed's s function (e.g., \); without escaping, the sed command would not work as expected.
Caveat: This doesn't support strings with embedded \n chars (though that could be fixed, too).
# Define search and replacement strings as all-lowercase literals.
old_string='old'
new_string='new'
# Synthesize the sed command string, utilizing awk and its tolower() and toupper()
# functions to create the case variants.
# Note the need to escape \ chars to prevent awk from interpreting them.
sedCmd=$(awk \
-v old_string="${old_string//\\/\\\\}" \
-v new_string="${new_string//\\/\\\\}" \
'BEGIN {
printf "s/%s/%s/g; s/%s/%s/g; s/%s/%s/g",
old_string, new_string,
toupper(substr(old_string,1,1)) substr(old_string,2), toupper(substr(new_string,1,1)) substr(new_string,2),
toupper(old_string), toupper(new_string)
}')
# Invoke sed with the synthesized command.
# The inner sed command ensures that all regex metacharacters in the strings
# are escaped so that sed treats them as literals.
sed -i '.bak' "$(sed 's#[][(){}^$.*?+\]#\\&#g' <<<"$sedCmd")" *.txt
If you want to do bash variable expansion inside the argument to sed, you need to use double quotes " instead of single quotes '.
sed -i '.bak' "s/$old_string/$new_string/g" *.txt
In terms of getting matches on all three of the literal substitutions, the cleanest solution may be just to run sed three times in a loop like this.
declare -a olds=(old Old OLD)
declare -a news=(new New NEW)
for i in `seq 0 2`; do
sed -i "s/${olds[$i]}/${news[$i]}/g" *.txt
done;
Update: The solution above works on Linux, but apparently OS X has different requirements. Additionally, as @mklement0 mentioned, my for loop is silly. Here is an improved version for OS X.
declare -a olds=(old Old OLD)
declare -a news=(new New NEW)
for (( i = 0; i < ${#olds[@]}; i++ )); do
sed -i '.bak' "s/${olds[$i]}/${news[$i]}/g" *.txt
done;
Assuming each string is separated by spaces from your other strings, that you don't want partial matches within longer strings, that you don't care about preserving white space on output, and that if an "old" string matches a "new" string produced by a previous conversion operation then the string should be changed again:
$ cat tst.awk
BEGIN {
    split(tolower(old),oldStrs)
    split(tolower(new),newStrs)
}
{
    for (fldNr=1; fldNr<=NF; fldNr++) {
        for (stringNr=1; stringNr in oldStrs; stringNr++) {
            oldStr = oldStrs[stringNr]
            if (tolower($fldNr) == oldStr) {
                newStr = newStrs[stringNr]
                split(newStr,newChars,"")
                split($fldNr,fldChars,"")
                $fldNr = ""
                for (charNr=1; charNr in fldChars; charNr++) {
                    fldChar = fldChars[charNr]
                    newChar = newChars[charNr]
                    $fldNr = $fldNr ( fldChar ~ /[[:lower:]]/ ?
                                      newChar : toupper(newChar) )
                }
            }
        }
    }
    print
}
$ cat file
The old Old OLD smOLDering QuICk brown FoX jumped
$ awk -v old="old" -v new="new" -f tst.awk file
The new New NEW smOLDering QuICk brown FoX jumped
Note that the "old" in "smOLDering" did not get changed. Is that desirable?
$ awk -v old="QUIck Fox" -v new="raBid DOG" -f tst.awk file
The old Old OLD smOLDering RaBId brown DoG jumped
$ awk -v old="THE brown Jumped" -v new="FEW dingy TuRnEd" -f tst.awk file
Few old Old OLD smOLDering QuICk dingy FoX turned
Think about whether or not this is your expected output:
$ awk -v old="old new" -v new="new yes" -f tst.awk file
The yes Yes YES smOLDering QuICk brown FoX jumped
A few lines of sample input and expected output in the question would be useful to avoid all the guessing and assumptions.

Grep (Bash) error

I have a file like this called new.samples.dat
-4.5000000000E-01 8.0000000000E+00 -1.3000000000E-01
5.0000000000E-02 8.0000000000E+00 3.4000000000E-01
...
I have to search all this numbers of this file in another file called Remaining.Simulations.dat and copy them in another file. I did like this
for sample_index in $(seq 1 100)
do
sample=$(awk 'NR=='$sample_index'' new.samples.dat)
grep "$sample" Remaining.Simulations.dat >> Previous.Training.dat
done
It works almost fine, but it does not copy all of the $sample lines into Previous.Training.dat, even though I am sure they are present in Remaining.Simulations.dat.
These errors appear:
grep: invalid option -- '.'
Usage: grep [OPTION]... PATTERN [FILE]...
Try `grep --help' for more information.
Do you have any idea how to solve it? Thank you.
It's because you're trying to grep for something like -4.5 and grep is treating that as an option rather than a search string. If you use -- to indicate there are no more options, this should work okay:
pax> echo -4.5000000000E-01 | grep -4.5000000000E-01
grep: invalid option -- '.'
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
pax> echo -4.5000000000E-01 | grep -- -4.5000000000E-01
-4.5000000000E-01
In addition, if you pass the string 7.2 to grep, it will match any line containing 7 followed by any character followed by 2, since:
Regular expressions treat . as a special character; and
Without start and end markers, 7.2 will also match 47.2, 7.25 and so on.
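Combining both points with the loop from the question (a sketch that keeps the grep approach; the file and variable names are those from the question), pass -- and match the number as a fixed string:
# -F: treat $sample as a literal string (no regex metacharacters); --: end of options
grep -F -- "$sample" Remaining.Simulations.dat >> Previous.Training.dat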
With awk you can try something like:
awk '
NR==FNR {
    for (i=1;i<=NF;i++) {
        numbers[$i]++
    }
    next
}
{
    for (number in numbers)
        if (index($0,number) > 0) {
            print $0
        }
}' new.samples.dat Remaining.Simulations.dat > anotherfile
