Extracting lines from 2 files using AWK just returns the last match - bash

I'm a bit new to AWK and I'm trying to print the lines of file2 whose first field exists in file1. I copied examples that I found here exactly, but I don't know why it's printing only the last match.
File1
58000
72518
94850
File2
58000;123;abc
69982;456;rty
94000;576;ryt
94850;234;wer
84850;576;cvb
72518;345;ert
Result Expected
58000;123;abc
94850;234;wer
72518;345;ert
What I'm getting
94850;234;wer
awk -F';' 'NR==FNR{a[$1]++; next} $1 in a' file1 file2
What am I doing wrong?

awk, while usable here, isn't the correct tool for the job; grep with the -f option is. The -f file option will read the patterns from file, one per line, and search the input file for matches.
So in your case you want:
$ grep -f file1 file2
58000;123;abc
94850;234;wer
72518;345;ert
(note: I removed the trailing '\' from the data file, replace it if it wasn't a typo)
Using awk
If you did want to rewrite what grep is doing using awk, that is fairly simple. Just read the contents of file1 into an array, and then, when processing records from the second file, check whether field 1 is in the array; if so, print the record (the default action), e.g.
$ awk -F';' 'FNR==NR {a[$1]=1; next} $1 in a' file1 file2
58000;123;abc
94850;234;wer
72518;345;ert
(same note about the trailing slash)

Thanks @RavinderSingh13!
file1 really did have some hidden characters, and I could see them using cat -v:
$ cat -v file1
58000^M
72518^M
94850^M
I removed them using sed -e "s/\r//g" file1 (note that plain sed writes to standard output, so redirect to a new file or use sed -i to change file1 in place) and the awk command worked perfectly.
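As an aside, the carriage returns can also be stripped inside awk itself, so the original one-liner copes with a CRLF file1 directly; a minimal sketch:

$ awk -F';' 'NR==FNR{sub(/\r$/,""); a[$1]; next} $1 in a' file1 file2
58000;123;abc
94850;234;wer
72518;345;ert

Here sub(/\r$/,"") deletes a trailing carriage return from each file1 record before its first field is stored as an array key.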

Related

Diff to get changed line from second file

I have two files, file1 and file2. I want to print the new lines added to file2 using diff.
file1
/root/a
/root/b
/root/c
/root/d
file2
/root/new
/root/new_new
/root/a
/root/b
/root/c
/root/d
Expected output
/root/new
/root/new_new
I looked into the man page but there was no info on this.
If you don't need to preserve the order, you could use the comm command like:
comm -13 <(sort file1) <(sort file2)
comm compares 2 sorted files and prints 3 columns of output: first the lines unique to file1, then the lines unique to file2, then the lines common to both. You can suppress any of the columns, so we turn off 1 and 3 in this example with -13 and see only the lines unique to the second file.
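With the sample files above, that looks like this (the output follows sort order, which happens to match here):

$ comm -13 <(sort file1) <(sort file2)
/root/new
/root/new_new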
or you could use grep:
grep -wvFf file1 file2
Here we use -f to have grep read its patterns from file1. We then tell it to treat them as fixed strings with -F instead of as patterns, to match whole words with -w, and to print only lines with no matches with -v.
The following awk may help with the same. It prints all lines that are present in Input_file2 but not in Input_file1.
awk 'FNR==NR{a[$0];next} !($0 in a)' Input_file1 Input_file2
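With the sample files (substituting file1 and file2 for the Input_file names), this preserves the order of the second file:

$ awk 'FNR==NR{a[$0];next} !($0 in a)' file1 file2
/root/new
/root/new_new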
Try using a combination of diff and sed.
The raw diff output is:
$ diff file1 file2
0a1,2
> /root/new
> /root/new_new
Add sed to strip out everything but the lines beginning with ">":
$ diff file1 file2 | sed -n -e 's/^> //p'
/root/new
/root/new_new
This preserves the order. Note that it also assumes you are only adding lines to the second file.

Compare 2 csv files and delete rows - Shell

I have 2 csv files. One has several columns, the other is just one column with domains. Simplified data of these files would be
file1.csv:
John,example.org,MyCompany,Australia
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US
file2.csv:
example.org
google.es
mysite.uk
The output should be
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US
I have tried this solution
grep -v -f file2.csv file1.csv >output-file
Found here
http://www.unix.com/shell-programming-and-scripting/177207-removing-duplicate-records-comparing-2-csv-files.html
But since there is no explanation whatsoever about how the script works, and I suck at shell, I cannot tweak it to make it work for me.
A solution for this would be highly appreciated, a solution with some explanation would be awesome! :)
EDIT:
I have tried the line that was supposed to work, but for some reason it does not. Here's the output from my terminal. What's wrong with this?
Desktop $ cat file1.csv ; echo
John,example.org,MyCompany,Australia
Lenny ,domain.com,OtherCompany,US
Martha,mysite.com,ThirCompany,US
Desktop $ cat file2.csv ; echo
example.org
google.es
mysite.uk
Desktop $ grep -v -f file2.csv file1.csv
John,example.org,MyCompany,Australia
Lenny ,domain.com,OtherCompany,US
Martha,mysite.com,ThirCompany,US
Why doesn't grep remove the line
John,example.org,MyCompany,Australia
The line you posted works just fine.
$ grep -v -f file2.csv file1.csv
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US
And here's an explanation. grep will search for a given pattern in a given file and print all lines that match. The simplest example of usage is:
$ grep John file1.csv
John,example.org,MyCompany,Australia
Here we used a simple pattern that is matched literally against each line, but you can also use regular expressions (basic, extended, and even perl-compatible ones).
To invert the logic, and print only the lines that do not match, we use the -v switch, like this:
$ grep -v John file1.csv
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US
To specify more than one pattern, you can use the option -e pattern multiple times, like this:
$ grep -v -e John -e Lenny file1.csv
Martha,site.com,ThirdCompany,US
However, if there is a larger number of patterns to check for, we might use the -f file option, which reads all the patterns from the specified file.
So when we combine all of those, reading patterns from a file with -f and inverting the matching logic with -v, we get the line you need.
One in awk:
$ awk -F, 'NR==FNR{a[$1];next}($2 in a==0)' file2 file1
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US
Explained:
$ awk -F, ' # using awk, comma-separated records
NR==FNR { # process the first file, file2
a[$1] # hash the domain to a
next # proceed to next record
}
($2 in a==0) # process file1, if domain in $2 not in a, print the record
' file2 file1 # file order is important
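The ($2 in a==0) test relies on operator precedence; an equivalent and arguably more readable form uses explicit negation:

$ awk -F, 'NR==FNR{a[$1];next} !($2 in a)' file2 file1
Lenny,domain.com,OtherCompany,US
Martha,site.com,ThirdCompany,US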

egrep -v: match lines containing some of the same text on each line

So I have two files.
Example of file 1 content.
/n01/mysqldata1/mysql-bin.000001
/n01/mysqldata1/mysql-bin.000002
/n01/mysqldata1/mysql-bin.000003
/n01/mysqldata1/mysql-bin.000004
/n01/mysqldata1/mysql-bin.000005
/n01/mysqldata1/mysql-bin.000006
Example of file 2 content.
/n01/mysqlarch1/mysql-bin.000004
/n01/mysqlarch1/mysql-bin.000001
/n01/mysqlarch2/mysql-bin.000005
So I want to match based only on mysql-bin.00000X and not the rest of the file path, as the paths differ between file1 and file2.
Here's the command I'm trying to run
cat file1 | egrep -v file2
The output I'm hoping for here would be...
/n01/mysqldata1/mysql-bin.000002
/n01/mysqldata1/mysql-bin.000003
/n01/mysqldata1/mysql-bin.000006
Any help would be much appreciated.
Just compare based on everything after the last /:
$ awk -F/ 'FNR==NR {a[$NF]; next} !($NF in a)' f2 f1
/n01/mysqldata1/mysql-bin.000002
/n01/mysqldata1/mysql-bin.000003
/n01/mysqldata1/mysql-bin.000006
Explanation
This reads file2 in memory and then compares with file1.
-F/ sets the field separator to /.
FNR==NR {a[$NF]; next} while reading the first file (file2), store the last field of every line into the array a[]. Since we set the field separator to /, this is the mysql-bin.00000X part.
!($NF in a) when reading the second file (file1), check whether the last field (the mysql-bin.00000X part) is in the array a[]. If it is not, print the line.
I'm having one problem that I've noticed when testing. If file2 is
empty, nothing is returned at all, whereas I would expect every line
in file1 to be returned. Is this something you could help me with
please? – user2841861
Then the problem is that FNR==NR also matches when reading the second file: if file2 is empty, awk reads no records from it, so NR still equals FNR for every line of file1. To prevent this, just cross-check that the "read into the a[] array" action is done on the first file:
awk -F/ 'FNR==NR && ARGV[1]==FILENAME {a[$NF]; next} !($NF in a)' f2 f1
                    ^^^^^^^^^^^^^^^^^
From man awk:
ARGV
The command-line arguments available to awk programs are stored in an
array called ARGV. ARGC is the number of command-line arguments
present. See section Other Command Line Arguments. Unlike most awk
arrays, ARGV is indexed from zero to ARGC - 1
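As a quick sanity check of the fix (assuming f1 holds the six sample paths), emptying f2 now returns every line of f1, since a[] stays empty and !($NF in a) is true for each record:

$ : > f2
$ awk -F/ 'FNR==NR && ARGV[1]==FILENAME {a[$NF]; next} !($NF in a)' f2 f1
/n01/mysqldata1/mysql-bin.000001
/n01/mysqldata1/mysql-bin.000002
/n01/mysqldata1/mysql-bin.000003
/n01/mysqldata1/mysql-bin.000004
/n01/mysqldata1/mysql-bin.000005
/n01/mysqldata1/mysql-bin.000006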

Using cut and grep commands in unix

I have a file (file1.txt) with text as:
aaa,,,,,
aaa,10001781,,,,
aaa,10001782,,,,
bbb,10001783,,,,
My file2 contents are:
11111111
10001781
11111222
I need to search for the second field of file1 in file2 and delete the line from file1 if the pattern matches. So the output will be:
aaa,,,,,
aaa,10001782,,,,
bbb,10001783,,,,
Can I use grep and cut commands for this?
This prints lines from file1.txt only if the second field is not in file2:
$ awk -F, 'FNR==NR{a[$1]=1; next;} !a[$2]' file2 file1.txt
aaa,,,,,
aaa,10001782,,,,
bbb,10001783,,,,
How it works
This works by reading file2 and keeping track of all lines seen in an associative array a. Then, lines in file1.txt are printed only if their column 2 is not in a. In more detail:
FNR==NR{a[$1]=1; next;}
When reading file2, set a[$1] to 1 to signal that we have seen the value on this line. We then instruct awk to skip the rest of the commands and start over on the next line.
This section is only run for file2 because file2 is listed first on the command line and FNR==NR only when we are reading the first file listed on the command line. This is because FNR is the number of lines read from the current file and NR is the total number of lines read so far. These two are equal only for the first file.
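A quick way to see this is to print the counters themselves; with the sample data (3 lines in file2, 4 in file1.txt) the two values coincide only while the first file is being read:

$ awk '{print FILENAME, NR, FNR}' file2 file1.txt
file2 1 1
file2 2 2
file2 3 3
file1.txt 4 1
file1.txt 5 2
file1.txt 6 3
file1.txt 7 4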
!a[$2]
When reading file1.txt, a[$2] evaluates to true if column 2 was seen in file2. Since ! is negation, !a[$2] evaluates to true when column 2 was not seen. When this evaluates to true, the line is printed.
Alternative
This is the same logic, expressed in a slightly different style, as suggested in the comments by Tom Fenech:
$ awk -F, 'FNR==NR{a[$1]; next;} !($2 in a)' file2 file1.txt
aaa,,,,,
aaa,10001782,,,,
bbb,10001783,,,,
Solution with grep
$ grep -vf file2 file1.txt
aaa,,,,,
aaa,10001782,,,,
bbb,10001783,,,,
John1024's awk solution would be faster for large files though.
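Since the values in file2 are literal strings rather than regular expressions, the grep variant gets a bit safer with -F (fixed strings) and -w (whole-word matches), so that 10001781 cannot accidentally match inside a longer number; a sketch:

$ grep -vwFf file2 file1.txt
aaa,,,,,
aaa,10001782,,,,
bbb,10001783,,,,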

How to search for a pattern containing special characters in awk

I have to match file1 with file2 line by line, but file1 is in the format below. If I use an awk command to search for such a line in file2, it throws a syntax error at '='.
File1 :
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102058100
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102091000
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102067000
File2:
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102058100
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102091000
I took the whole line from file1 as the search pattern to search in file2 using the command below:
awk "/$line/ {print ;}" file2
Here the 3rd record of file1 is not found in file2, so I need to know these differences.
I am very new to shell scripting, so please advise me on this.
This is really a job for comm, assuming you can sort both input files. (The syntax error, by the way, comes from the / characters in the data: in awk "/$line/", each / in the expanded $line terminates the regex literal early, and the rest of the line is then parsed as awk code.) But if you want to use awk, something like this might do it, depending on your unstated requirements:
awk 'NR==FNR {file1[NR]=$0; next} $0 != file1[FNR]' file1 file2
If I understand correctly, you want to print the lines that are common to both files. In that case, awk is really not the best tool. You could instead do one of
comm <(sort file1) <(sort file2)
or
grep -Fxf file1 file2
If you really want to do it with awk, you could try
awk 'FNR==NR{a[$0]; next} $0 in a' file1 file2
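With the sample files, either of the last two commands prints the lines common to both, which here is all of file2:

$ awk 'FNR==NR{a[$0]; next} $0 in a' file1 file2
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102058100
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102091000

To get the file1 lines missing from file2 (the 3rd record the question asks about), swap the file order and negate the test:

$ awk 'FNR==NR{a[$0]; next} !($0 in a)' file2 file1
Country_code=US/base_div_nbr=18/retail_channel_code=1/visit_date=2010-01-02/load_time_stamp=20100102067000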
