Bash Loop Through End of File - bash

I'm working on a script that will find a pattern from a keyword and a list of other keywords kept in a separate file.
File1 has the list, one word per line. File2 has the other list, the one that I actually want to search.
while read LINE; do
    grep -q $LINE file2
    if [ $? -eq 0 ]; then
        echo "Found $LINE in file2."
        grep $LINE file2 | grep example
        if [ $? -eq 0 ]; then
            echo "Keeping $LINE"
        else
            echo "Deleting $LINE"
            sed -i "/$LINE/d" file2
        fi
    else
        echo "Did not find $LINE in file2."
    fi
done < file1
What I want is to take each word from file1 and search for every instance of it in file2. From those instances, I want to find the ones that also contain the word example. Any instances that don't contain example, I want to delete.
My code takes a word from file1 and searches for an instance of it in file2. Once it finds that instance, the loop moves on to the next word in file1, when it should keep searching file2 for the previous word; it should only move on to the next file1 word once it has finished searching file2 for the current word.
Any help on how to achieve this?

Suggesting an awk script that scans each file only once.
awk 'FNR == NR {wordsArr[++wordsCount] = $0}  # read file1 lines into array
     FNR != NR && /example/ {                 # file2 lines matching regexp /example/
         for (i in wordsArr) {                # scan all words in the array
             if ($0 ~ wordsArr[i]) {          # if a word matches the current line
                 print                        # print the current line
                 next                         # skip the remaining words, read next line
             }
         }
     }' file1 file2
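If the kept lines should then replace file2 (mirroring the sed -i deletions in the question), one possible follow-up is to write the output to a temporary file and move it into place. This is only a sketch; it assumes the awk output above is exactly the set of lines you want to keep, and keep.awk is a hypothetical file holding the program shown above:
# keep.awk is a hypothetical file containing the awk program above
awk -f keep.awk file1 file2 > file2.new &&
    mv file2.new file2    # replace file2 only if awk succeeded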

Related

How to compare 2 files word by word and store the different words in a result output file

Suppose there are two files:
File1.txt
My name is Anamika.
File2.txt
My name is Anamitra.
I want result file storing:
Result.txt
Anamika
Anamitra
I use PuTTY, so I can't use wdiff. Is there any other alternative?
Not my greatest script, but it works. Others might come up with something more elegant.
#!/bin/bash
if [ $# != 2 ]
then
    echo "Arguments: file1 file2"
    exit 1
fi
file1=$1
file2=$2
# Do this for both files
for F in $file1 $file2
do
    if [ ! -f $F ]
    then
        echo "ERROR: $F does not exist."
        exit 2
    else
        # Create a temporary file with every word from the file
        for w in $(cat $F)
        do
            echo $w >> ${F}.tmp
        done
    fi
done
# Compare the temporary files, since they are now 1 word per line
# The grep keeps only the lines where diff marks a difference (< or >)
# The awk keeps only the word (i.e. removes < or >)
# The sed removes any character that is not alphanumeric,
# e.g. a . at the end
diff ${file1}.tmp ${file2}.tmp | grep -E "<|>" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g' > Result.txt
# Cleanup!
rm -f ${file1}.tmp ${file2}.tmp
This uses a trick with the for loop. If you use a for loop on a file's contents, it loops over each word, NOT each line, as beginners in bash tend to believe. Here that is actually useful, since it transforms the files into one word per line.
Ex: file content == This is a sentence.
After the for loop is done, the temporary file will contain:
This
is
a
sentence.
Then it is trivial to run diff on the files.
One last detail: your sample output did not include the . at the end, hence the sed command to keep only alphanumeric characters.
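For reference, the same one-word-per-line transformation can be done without the inner loop. A minimal sketch, assuming GNU tr is available:
# squeeze every run of whitespace into a single newline: one word per line
tr -s '[:space:]' '\n' < "$F" > "${F}.tmp"
Unlike the for loop, this does not perform glob expansion on the words, which is usually what you want.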

Read each line of a column of a file and execute grep

I have file.txt, for example:
This line contains ABC
This line contains DEF
This line contains GHI
and here the following list.txt:
contains ABC<TAB>ABC
contains DEF<TAB>DEF
Now I am writing a script that executes the following commands for each line of this external file list.txt:
take the string from column 1 of list.txt and search for it in a third file, file.txt
if the first command finds a match, return the string from column 2 of list.txt
So my output.txt is:
ABC
DEF
This is my code for grep/echo with the query/return strings put in manually:
if grep -i -q 'contains abc' file.txt
then
    echo ABC >output.txt
else
    echo -n
fi
if grep -i -q 'contains def' file.txt
then
    echo DEF >>output.txt
else
    echo -n
fi
I have about 100 search terms, which makes the task laborious if done manually. So how do I combine while read line; do [commands]; done <list.txt with the commands that handle column 1 and column 2 inside that script?
I would like to use simple grep/echo/awk commands if possible.
Something like this?
$ awk -F'\t' 'FNR==NR { a[$1] = $2; next } {for (x in a) if (index($0, x)) {print a[x]}} ' list.txt file.txt
ABC
DEF
For the lines of the first file (FNR==NR), read the key-value pairs into array a. Then for the lines of the second file, loop through the array, check if the key is found on the line, and if so, print the stored value. index($0, x) tries to find the contents of x in (the current line) $0. $0 ~ x would instead treat x as a regex to match against.
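As a small illustration of that difference (not part of the original answer, purely an illustrative snippet):
$ echo 'price is 1.50' | awk '{ print (index($0, "1.50") ? "substring hit" : "no hit") }'
substring hit
$ echo 'price is 1x50' | awk '$0 ~ "1.50" { print "regex hit: the . matches any character" }'
regex hit: the . matches any character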
If you want to do it in the shell, starting a separate grep for each and every line of list.txt, something like this:
while IFS=$'\t' read k v ; do
grep -qFe "$k" file.txt && echo "$v"
done < list.txt
read k v reads a line of input and splits it (based on IFS) into k and v.
grep -F takes the pattern as a fixed string, not a regex, and -q prevents it from outputting the matching line. grep returns true if any matching lines are found, so $v is printed if $k is found in file.txt.
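As a usage sketch, the whole loop can be redirected once to produce output.txt as in the question (add -i only if you want the case-insensitive behaviour of the manual greps):
while IFS=$'\t' read -r k v ; do
    grep -qiFe "$k" file.txt && echo "$v"   # -i mirrors the manual 'grep -i' version
done < list.txt > output.txt                # one redirect for the whole loop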
Using awk and grep:
for text in $(awk '{print $4}' file.txt)
do
    grep "contains $text" list.txt | awk -F $'\t' '{print $2}'
done

How to ignore a newline character in the compare script below

#!/bin/bash
function compare {
    for file1 in /dir1/*.csv
    do
        file2=/dir2/$(basename "$file1")
        if [[ -e "$file2" ]]    ### proceed only if a file2 with the same filename as file1 is present ###
        then
            awk 'BEGIN {FS=","} NR == FNR {arr[$0]; next} ! ($0 in arr)' "$file1" "$file2" > "/dirDiff/$(basename "$file1")_diff"
        fi
    done
}
function removeNULL {
    for i in /dirDiff/*_diff
    do
        if [[ ! -s "$i" ]]      ### if the file exists with zero size ###
        then
            \rm -- "$i"
        fi
    done
}
compare
removeNULL
file1 and file2 are formatted files from two different sources. Source1 is inserting an arbitrary newline character that splits one record into two, which causes the script to fail and generate a wrong diff output.
I want my script to compare file1 and file2 while ignoring the newline induced by Source1. But I am not sure how my script can distinguish between an actual new record and the spuriously induced newline.
file1:-
11447438218480362,6005560623,6005560623,11447438218480362,5,20160130103044,100,195031,,1,0,00,49256,0
,195031_5_00_6,0.1,6;
11447691224860640,6997557634,6997557634,11447691224860640,601511,20160130103457,500,195035,,2,0,00,45394,0
,195035_601511_00_6,0.5,6;
file2:-
11447438218480362,6005560623,6005560623,11447438218480362,5,20160130103044,100,195031,,1,0,00,49256,0,195031_5_00_6,0.1,6;
11447691224860640,6997557634,6997557634,11447691224860640,601511,20160130103457,500,195035,,2,0,00,45394,0,195035_601511_00_6,0.5,6;
Appreciate your support.
You could preprocess your file1, joining lines not ending in ; with the next line:
sed -r ":again; /;$/! { N; s/(.+)[\r\n]+(.+)/\1\2/g; b again; }" file1
so that file1 and file2 are comparable.
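If you would rather not rewrite file1 on disk, a sketch (assuming bash, since it relies on process substitution; the unused FS setting from the original awk is dropped because whole lines are compared) that feeds the joined version straight into the comparison could look like this:
awk 'NR == FNR {arr[$0]; next} ! ($0 in arr)' \
    <(sed -r ':again; /;$/! { N; s/(.+)[\r\n]+(.+)/\1\2/g; b again; }' "$file1") \
    "$file2" > "/dirDiff/$(basename "$file1")_diff"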

Bash script to remove redundant lines

Good afternoon,
I'm trying to make a bash script that cleans out some data output files. The files look like this:
/path/
/path/to
/path/to/keep
/another/
/another/path/
/another/path/to
/another/path/to/keep
I'd like to end up with this:
/path/to/keep
/another/path/to/keep
I want to cycle through lines of the file, checking the next line to see if it contains the current line, and if so, delete the current line from the file. Here's my code:
for LINE in $(cat bbutters_data2.txt)
do
    grep -A1 ${LINE} bbutters_data2.txt
    if [ $? -eq 0 ]
    then
        sed -i '/${LINE}/d' ./bbutters_data2.txt
    fi
done
Assuming that your input file is sorted in the way that you have shown:
$ awk 'NR>1 && substr($0,1,length(last))!=last {print last;} {last=$0;} END{print last}' file
/path/to/keep
/another/path/to/keep
How it works
awk reads through the input file line by line. Every time we read a new line, we compare it to the last one. If the new line does not start with the last line, then we print the last line. In more detail:
NR>1 && substr($0,1,length(last))!=last {print last;}
If this is not the first line, and if the last line read, stored in last, is not a prefix of the current line, $0, then print the last line.
last=$0
Update the variable last to the current line.
END{print last}
After we finish reading the file, print the last line.
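As a usage sketch, assuming it is acceptable to overwrite the original file via a temporary one (the file names here are illustrative, reusing the question's bbutters_data2.txt):
awk 'NR>1 && substr($0,1,length(last))!=last {print last;} {last=$0;} END{print last}' \
    bbutters_data2.txt > bbutters_data2.clean &&   # write the kept paths to a temp file
    mv bbutters_data2.clean bbutters_data2.txt     # then replace the original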
I like the awk solution, but bash itself can handle the task. Note: both solutions (awk and bash) require that the lesser-included paths be listed in increasing order. Here is an alternative bash solution (bash only due to the glob match operation):
#!/bin/bash

fn="${1:-/dev/stdin}"       ## accept filename or stdin
[ -r "$fn" ] || {           ## validate file is readable
    printf "error: file not found: '%s'\n" "$fn"
    exit 1
}

declare -i cnt=0            ## flag for 1st iteration

while read -r line; do      ## for each line in file
    ## on the 1st iteration, fill 'last', increment 'cnt', continue
    [ $cnt -eq 0 ] && { last="$line"; ((cnt++)); continue; }
    ## while 'line' is a child of 'last', continue, else print
    [[ $line = "${last%/}"/* ]] || printf "%s\n" "$last"
    last="$line"            ## update last=$line
done <"$fn"

[ ${#line} -eq 0 ] &&       ## print last line (updated for non-POSIX line end)
    printf "%s\n" "$last" ||
    printf "%s\n" "$line"

exit 0
Output
$ bash path_uniql.sh < dat/incpaths.txt
/path/to/keep
/another/path/to/keep
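The heart of the script is the glob comparison on the [[ $line = "${last%/}"/* ]] line. A quick illustration of the prefix test it performs (illustrative values only):
$ last=/another/path/ line=/another/path/to
$ [[ $line = "${last%/}"/* ]] && echo "child of last (keep scanning)" || echo "new branch (print last)"
child of last (keep scanning)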

Finding text files with less than 2000 rows and deleting them

I have A LOT of text files, each with just one column.
Some text files have 2000 lines (consisting of numbers), and some others have fewer than 2000 lines (also consisting only of numbers).
I want to delete all the text files with fewer than 2000 lines of numbers in them.
EXTRA INFO
The files that have fewer than 2000 lines of numbers are not empty: they all have line breaks up to row 2000. Also, my files have somewhat complicated names, like: Nameofpop_chr1_window1.txt
I tried using awk to count the lines of each text file first, but because of the padding line breaks I get the same result, 2000, for every file.
awk 'END { print NR }' Nameofpop_chr1_window1.txt
Thanks in advance.
You can use this awk to count non-empty lines:
awk 'NF{i++} END { print i }' Nameofpop_chr1_window1.txt
OR this awk to count only those lines that have only numbers
awk '/^[[:digit:]]+$/ {i++} END { print i }' Nameofpop_chr1_window1.txt
To delete all files with fewer than 2000 lines of numbers, use this awk inside a loop:
for f in *.txt; do
    [[ -n $(awk '/^[[:digit:]]+$/{i++} END {if (i<2000) print FILENAME}' "$f") ]] && rm "$f"
done
You can use expr $(cat filename|sort|uniq|wc -l) - 1, or cat filename|grep -v '^$'|wc -l; either will give you the number of lines per file, and based on that you can decide what to do.
You can use Bash:
for f in $files; do
    n=0
    while read line; do
        [[ -n $line ]] && ((n++))
    done < "$f"
    [ $n -lt 2000 ] && rm "$f"
done