Find files that contain string match1 but do not contain match2 - bash

I am writing a shell script to find files which contain the string "match1" but do not contain "match2".
I can do this in 2 parts:
grep -lr "match1" * > /tmp/match1
grep -Lr "match2" * > /tmp/match2
comm -12 /tmp/match1 /tmp/match2
Is there a way I can achieve this directly, without creating temporary files?

With bash's process substitution:
comm -12 <(grep -lr "match1" *) <(grep -Lr "match2" *)
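A quick self-contained check, using three throwaway files (the names and contents are made up for the demo); comm is satisfied here because the shell expands * in sorted order, so both file lists come out sorted:

```shell
# Scratch directory with three sample files (names are illustrative)
cd "$(mktemp -d)"
printf 'match1\n'         > a.txt   # match1 only  -> should be listed
printf 'match1\nmatch2\n' > b.txt   # both strings -> excluded
printf 'other\n'          > c.txt   # neither      -> excluded

# Lines common to both lists: files that have match1 and lack match2
result=$(comm -12 <(grep -lr "match1" *) <(grep -Lr "match2" *))
echo "$result"    # a.txt
```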

Using GNU awk for multi-char RS:
awk -v RS='^$' '/match1/ && !/match2/ {print FILENAME}' *
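The idea behind RS='^$': as a regex it can never match inside a non-empty file, so GNU awk reads each file's entire contents as a single record, and the two patterns then test the whole file at once. A disposable demo (file names are made up):

```shell
cd "$(mktemp -d)"
printf 'hello\nmatch1\n'  > keep.txt   # match1 present, match2 absent
printf 'match1\nmatch2\n' > drop.txt   # both present

# Whole-file records, so /match1/ && !/match2/ applies per file (GNU awk)
result=$(awk -v RS='^$' '/match1/ && !/match2/ {print FILENAME}' *)
echo "$result"    # keep.txt
```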

I would use find together with awk. awk can check both matches in a single run, meaning you don't need to process all the files twice:
find -maxdepth 1 -type f -exec awk '/match1/{m1=1}/match2/{m2=1} END {if(m1 && !m2){print FILENAME}}' {} \;
Better explained in multiline version:
# Set flag if match1 occurs
/match1/{m1=1}
# Set flag if match2 occurs
/match2/{m2=1}
# After all lines of the file have been processed print the
# filename if match1 has been found and match2 has not been found.
END {if(m1 && !m2){print FILENAME}}
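One subtlety: with -exec … {} \; awk starts fresh for every file, so the flags are naturally per-file, but it costs one awk process per file. If you batch files with {} + instead, the flags must be reset at each file boundary; a sketch of that variant, resetting on FNR == 1:

```shell
cd "$(mktemp -d)"
printf 'match1\n'         > keep.txt
printf 'match1\nmatch2\n' > both.txt
printf 'nothing\n'        > none.txt

# One awk process scans all files; flags reset whenever a new file starts
result=$(find . -maxdepth 1 -type f -exec awk '
    FNR == 1 { if (NR > 1 && m1 && !m2) print prev; m1 = m2 = 0; prev = FILENAME }
    /match1/ { m1 = 1 }
    /match2/ { m2 = 1 }
    END      { if (m1 && !m2) print prev }
' {} +)
echo "$result"
```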

Is there a way I can achieve this directly, without creating temporary files?
Yes. You can use pipelines and xargs:
grep -lr "match1" * | xargs grep -L "match2"
The first grep prints the names of files containing match1 to its standard output, as you know. The xargs command reads those file names from its standard input and converts them into arguments for the second grep, appending them after the ones already provided; -L then prints only the names of files that contain no match2.

You can initially search for the files containing match1 and then, using xargs, pass them to another grep with the -L (--files-without-match) option:
grep -lr "match1" * | xargs grep -L "match2"
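Both pipelines of this kind break on file names containing whitespace, because xargs splits its input on blanks. With GNU grep and xargs, -Z (NUL-terminate the file names printed by -l) pairs with xargs -0 for a safe variant; a sketch:

```shell
cd "$(mktemp -d)"
printf 'match1\n'         > 'my file.txt'   # name contains a space
printf 'match1\nmatch2\n' > other.txt

# NUL-delimited file names survive whitespace intact
result=$(grep -lrZ "match1" . | xargs -0 grep -L "match2")
echo "$result"    # ./my file.txt
```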


How to extract codes using the grep command?

I have a file with below input lines.
John|1|R|Category is not found for local configuration/code/123.NNN and customer 113
TOM|2|R|Category is not found for local configuration/code/123.NNN and customer 114
PETER|3|R|Category is not found for local configuration/code/456.1 and customer 115
I need to extract only the highlighted code values (123.NNN and 456.1) using the grep command.
I tried the command below and didn't get the proper result: there are 2 extra unwanted characters in the output. Please suggest another way to achieve this through grep.
find ./ -type f -name <FileName> -exec cut -f 4 -d'|' {} + |
grep -o 'Category is not found for local configuration/code/...\....' |
grep -o '...\....' | sort | uniq
Current Output:
123.NNN
456.1 a
Expected output:
123.NNN
456.1
You can use another grep regular expression.
find ./ -type f -name f -exec cut -f 4 -d'|' {} + |
grep -o 'Category is not found for local configuration/code/...\.[^ ]*' |
grep -o '...\..*' | sort | uniq
. matches any character; [^ ]* matches any sequence of characters up to the first space.
Output:
123.NNN
456.1
Your regex specifies a fixed character width for strings of variable width. Based on your examples, something like
[0-9]\+\.[A-Z0-9]\+
would seem like a better regex. However, we could probably also simplify this by merging the cut and multiple grep commands into a single Awk script.
find etc etc -exec awk -F '|' '
$4 ~ /Category is not found for local configuration\/code\/[0-9]{3}\.[0-9A-Z]/ {
split($4, a, /\/code\/);
split(a[2], b); print b[1] }' {} + |
sort -u
The two split operations are just a cheap way to pick out the text between /code/ and the next whitespace character; we have already established by way of the regex match that the string after /code/ matches the pattern we're after.
Notice also how sort has a -u option which allows you to replace (trivial cases of) uniq.
The regex variant supported by Awk differs slightly from the one supported by POSIX grep: the backslashed \+ of grep's BRE dialect is a plain + in the dialect called ERE, which is (more or less) what Awk and grep -E support. If you have grep -P, you can use a third variant, which has a convenient feature:
find etc etc -exec grep -oP '^([^|]*[|]){3}[^|]*Category is not found for local configuration/code/\K[0-9]{3}\.[0-9A-Z]+' {} + |
sort -u
The \K says "match up through here, but forget everything before this" and so only prints the part after this token.
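A stripped-down illustration of \K on one of the sample lines:

```shell
# Everything before \K must match, but only what follows it is printed
result=$(echo 'local configuration/code/123.NNN and customer 113' |
    grep -oP 'code/\K[0-9]{3}\.[0-9A-Z]+')
echo "$result"    # 123.NNN
```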
With sed:
sed -E -n 's#.*code/(.*)\s+and.*#\1#p' file.txt | uniq
Output:
123.NNN
456.1
I'd use the -P option:
grep -oP '/code/\K\S+' file | sort -u
You want to extract the non-whitespace characters following /code/
An awk using match():
$ awk 'match($0,/[0-9]+\.[A-Z0-9]+/)&&++a[(b=substr($0,RSTART,RLENGTH))]==1{print b}' file
Output:
123.NNN
456.1
Pretty printed for slightly better readability:
$ awk '
match($0,/[0-9]+\.[A-Z0-9]+/) && ++a[(b=substr($0,RSTART,RLENGTH))]==1 {
print b
}' file
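Here match() records where the pattern hit (RSTART) and how long it was (RLENGTH), substr() extracts exactly that text, and the array a[] lets each code through only the first time it is seen. A quick run on fabricated input:

```shell
# Duplicate 123.NNN should print once; 456.1 is new and prints too
result=$(printf 'x 123.NNN y\nx 123.NNN y\nx 456.1 y\n' |
    awk 'match($0,/[0-9]+\.[A-Z0-9]+/) && ++a[(b=substr($0,RSTART,RLENGTH))]==1 {print b}')
echo "$result"
```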
This is hard to do with plain grep; AWK handles it naturally:
awk '{split($7, ar, "/"); print ar[3]}' FILE
Explanation:
The split function splits on a string, here $7, the 7th field, placing the result in an array ar, and using the string / as delimiter.
Then prints the 3rd field of the array.
Note:
I am assuming that all of your input looks like the samples you have given us, i.e.:
aaa|b|c|ddd is not found for local configuration/code/111.nnn and customer nnn
Where aaa and ddd will not contain whitespace.
I also assume you really do have a file FILE containing those lines. It's a bit unclear.
Input:
▶ cat FILE
John|1|R|Category is not found for local configuration/code/123.NNN and customer 113
TOM|2|R|Category is not found for local configuration/code/123.NNN and customer 114
PETER|3|R|Category is not found for local configuration/code/456.1 and customer 115
Output:
▶ awk '{split($7, ar, "/"); print ar[3]}' FILE
123.NNN
123.NNN
456.1
A single sed can do the filtering.
(The pattern can be generalized further, as others have suggested, if that is an option. But be careful not to oversimplify it to the point that it matches unexpected input.)
sed -nE 's#(\S+\s+){6}configuration/code/(\S+)\s.*#\2#p' input.txt
To replace your exact command,
find ./ -type f -name <Filename> -exec cat {} \; | sed -nE 's#(\S+\s+){6}configuration/code/(\S+)\s.*#\2#p' | sort | uniq
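To see the groups at work on a single sample line (GNU sed, for \S and \s support):

```shell
# Group 1 consumes the six whitespace-separated words before "configuration";
# group 2 captures the non-space code that follows /code/
result=$(printf '%s\n' 'John|1|R|Category is not found for local configuration/code/123.NNN and customer 113' |
    sed -nE 's#(\S+\s+){6}configuration/code/(\S+)\s.*#\2#p')
echo "$result"    # 123.NNN
```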
Simple substitutions on individual lines is the job sed is best suited for. This will work using any sed in any shell on any UNIX box:
$ cat file
John|1|R|Category is not found for local configuration/code/123.NNN and customer 113
TOM|2|R|Category is not found for local configuration/code/123.NNN and customer 114
PETER|3|R|Category is not found for local configuration/code/456.1 and customer 115
$ sed -n 's:.*Category is not found for local configuration/code/\([^ ]*\).*:\1:p' file | sort -u
123.NNN
456.1

Grep to Print all file content [duplicate]

This question already has answers here:
Colorized grep -- viewing the entire file with highlighted matches
(24 answers)
Closed 4 years ago.
How can I make grep print the whole file when any line matches the pattern, instead of printing just the matching line?
I tried using (say) grep -C2 to print two lines above and two below, but this doesn't always work, as the number of lines is not fixed.
I am not just searching a single file; I am searching an entire directory in which some files may contain the given pattern, and I want those files printed completely.
I am also feeding one grep's result into another grep, without printing the first grep's output.
A simple grep + cat combination (for a single file):
grep -q 'pattern' file && cat file
-q keeps grep quiet, so cat prints the file only if the pattern is found.
Use grep's -l option to list the paths of files with matching contents, then print the contents of these files using cat.
grep -lR 'regex' 'directory' | xargs -d '\n' cat
The command above cannot handle filenames with newlines in them.
To overcome the filename with newlines issue and also allow more sophisticated checks you can use the find command.
The following command prints the content of all regular files in directory.
find 'directory' -type f -exec cat {} +
To print only the content of files whose content matches the regexes regex1 and regex2, use
find 'directory' -type f \
-exec grep -q 'regex1' {} \; -and \
-exec grep -q 'regex2' {} \; \
-exec cat {} +
The line breaks are only for better readability; without the \ you can write everything on one line.
Note the -q for grep: that option suppresses grep's output, and grep's exit status tells find whether to act on the file or not.
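A throwaway demonstration that only files matching both patterns get printed (file names are made up):

```shell
cd "$(mktemp -d)"
printf 'foo\nbar\n' > both.txt   # matches both regexes -> printed
printf 'foo\n'      > one.txt    # matches only one     -> skipped

# Implicit AND between the find tests; cat runs only for files passing both
result=$(find . -type f \
    -exec grep -q 'foo' {} \; \
    -exec grep -q 'bar' {} \; \
    -exec cat {} +)
echo "$result"
```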

Applying awk pattern to all files with same name, outputting each to a new file

I'm trying to recursively find all files with the same name in a directory, apply an awk pattern to them, and then output to the directory where each of those files lives a new updated version of the file.
I thought it was better to use a for loop than xargs, but I don't exactly know how to make this work...
for f in $(find . -name FILENAME.txt );
do awk -F"\(corr\)" '{print $1,$2,$3,$4}' ./FILENAME.txt > ./newFILENAME.txt $f;
done
Ultimately I would like to be able to remove multiple strings from the file at once using -F, but also not sure how to do that using awk.
Also is there a way to remove "(cor*)" where the * represents a wildcard? Not sure how to do while keeping with the escape sequence for the parentheses
Thanks!
To use (corr*) as a field separator where * is a glob-style wildcard, try:
awk -F'[(]corr[^)]*[)]' '{print $1,$2,$3,$4}'
For example:
$ echo '1(corr)2(corrTwo)3(corrThree)4' | awk -F'[(]corr[^)]*[)]' '{print $1,$2,$3,$4}'
1 2 3 4
To apply this command to every file under the current directory named FILENAME.txt, use:
find . -name FILENAME.txt -execdir sh -c 'awk -F'\''[(]corr[^)]*[)]'\'' '\''{print $1,$2,$3,$4}'\'' "$1" > ./newFILENAME.txt' Awk {} \;
Notes
Don't use:
for f in $(find . -name FILENAME.txt ); do
If any file or directory has whitespace or other shell-active characters in it, the results will be an unpleasant surprise.
Handling both parens and square brackets as field separators
Consider this test file:
$ cat file.txt
1(corr)2(corrTwo)3[some]4
To eliminate both types of separators and print the first four columns:
$ awk -F'[(]corr[^)]*[)]|[[][^]]*[]]' '{print $1,$2,$3,$4}' file.txt
1 2 3 4

one command line grep and word count recursively

I can do the following using a for loop
for f in *.txt; do grep 'RINEX' $f |wc -l; done
Is there any way to get a per-file report with a one-liner?
Meaning that I want to grep & wc one file at a time, in a similar fashion to
grep 'RINEX' *.txt
UPDATE:
grep -c 'RINEX' *.txt
returns the name of each file and its corresponding match count. Thanks @Evert.
grep is not the right tool for this task, because grep does line-based matching: e.g. grep 'o' <<< "fooo" reports 1 matching line, even though the line contains 3 os.
This one-liner should do what you want:
awk -F'RINEX' 'FILENAME!=f{if(f)print f,s;f=FILENAME;s=0}
{s+=(NF-1)}
END{print f,s}' /path/*.txt
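To see the per-file occurrence counts it produces (splitting each line on 'RINEX' yields NF-1 occurrences per line), here is a throwaway run; file names and contents are made up:

```shell
cd "$(mktemp -d)"
printf 'RINEX RINEX\nRINEX\n' > a.txt   # 3 occurrences across 2 lines
printf 'nothing here\n'       > b.txt   # 0 occurrences

result=$(awk -F'RINEX' 'FILENAME!=f{if(f)print f,s;f=FILENAME;s=0}
    {s+=(NF-1)}
    END{print f,s}' *.txt)
echo "$result"
```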

how to find the last modified file and then extract it

Say I have 3 archive files:
a.7z
b.7z
c.7z
What I want is to find the last modified archive file and then extract it:
1st: find the last modified
2nd: extract it
1st:
ls -t | head -1
My question is how to chain the 2nd step onto the end of the 1st command using "|".
You can do it like that:
7z e "$(ls -t | head -1)"
Use command substitution, $( ), to embed the first command; the quotes keep a file name with spaces in one piece.
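To sanity-check the newest-first selection without 7z installed, here is a stand-in demo (printf takes the extractor's place); quoting the substitution keeps a name with spaces as one argument, though names with newlines would still break:

```shell
cd "$(mktemp -d)"
printf x > 'older archive.7z'
sleep 1                               # ensure distinct modification times
printf y > 'newest archive.7z'

# ls -t sorts newest first; the quotes keep the whole name together
newest=$(ls -t | head -n 1)
printf '%s\n' "$newest"               # where you would run: 7z e "$newest"
```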
You can also write more than one command together on a single line:
ls -t | head -1 && 7z e <file_name>.tar.7z
The second command extracts the .7z file; you still type its name as reported by the first.
Here is a safer method of extracting last modified file in a directory:
find . -maxdepth 1 -type f -printf "%T@\0%p\0\0" |
awk -F '\0' -v RS='\0\0' '$1 > maxt{maxt=$1; maxf=$2} END{printf "%s%s", maxf, FS}' |
xargs -0 7z e
This requires GNU find and GNU awk.
The -printf format uses a single NUL character (\0) as the field separator and two NUL characters (\0\0) as the record separator for awk.
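To verify the selection logic without 7z installed, the same pipeline can end in printf instead of the extractor (file names here are made up):

```shell
cd "$(mktemp -d)"
printf a > old.7z
sleep 1                               # guarantee distinct modification times
printf b > new.7z

# Records are mtime\0path\0\0; awk keeps the record with the largest mtime
result=$(find . -maxdepth 1 -type f -printf "%T@\0%p\0\0" |
    awk -F '\0' -v RS='\0\0' '$1 > maxt{maxt=$1; maxf=$2} END{printf "%s%s", maxf, FS}' |
    xargs -0 printf '%s\n')
echo "$result"    # ./new.7z
```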
