Find a match between two columns of two files - bash

I have two files:
file 1:
a1 45 1/1 895
a1 65 0/1 478
a1 80 1/1 474
a2 45 0/1 145
a3 50 1/1 415
a3 32 0/1 547
file 2:
a1 45 1/1 784
a1 65 0/1 454
a1 89 1/1 354
a1 105 0/1 365
a2 45 0/1 478
a2 65 0/1 985
a3 32 0/1 658
a3 65 0/1 985
I want to compare the first two columns of file 1 against file 2, and only if both match do I want to print the whole line from file 1.
output:
a1 45 1/1 895
a1 65 0/1 478
a2 45 0/1 145
a3 32 0/1 547
This is the solution I am thinking about in awk:
awk 'FNR==NR{a[$1$2];next} (($1$2 in a))' file1 file2
I was wondering whether there are other ways of doing this in bash.
Thanks!

If the full line needs to be matched, then grep -f is the simpler option. Note that grep -F does plain substring matching, so this works cleanly when the files contain just the key columns (the output below was produced with both files trimmed to their first two columns):
grep -Ff file1 file2
a1 45
a1 65
a2 45
a3 32
EDIT: Using ($1,$2) as the array key (awk joins the two fields with SUBSEP) avoids the false matches that plain concatenation can produce ("a1" "45" and "a14" "5" both become "a145"). To print the matching lines from file1, as requested, read file2 first:
awk 'FNR==NR{a[$1,$2];next} (($1,$2) in a)' file2 file1
a1 45 1/1 895
a1 65 0/1 478
a2 45 0/1 145
a3 32 0/1 547
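Another bash option, offered as a sketch (it assumes single-space-separated columns and bash process substitution): turn file2's key columns into line-anchored grep patterns and match them against file1:
grep -f <(cut -d' ' -f1,2 file2 | sed 's/^/^/; s/$/ /') file1
The trailing space appended to each pattern keeps a shorter key (say "a1 4") from matching a line beginning "a1 45"; the output is the four file1 lines shown above.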

Related

filter multiline record file based on whether one of the lines meets a condition (word count)

Hi everyone,
I am looking for a way to keep the records from a txt file that meet the following condition.
This is the example of the data:
aa bb cc
11 22 33
44 55 66
77 88 99
aa bb cc
11 22 33 44 55 66 77
44 55 66 66
77 88 99
aa bb cc
11 22 33 44 55
44 55 66
77 88 99 77
...
Basically, it's a file where each record consists of 5 lines in total: 4 lines contain strings/numbers with a tab delimiter, and the last is a blank line (\n).
The first line of a record always has 3 elements, while the number of elements in the 2nd, 3rd and 4th lines can differ.
What I need to do is remove every record (5-line block) where the total number of elements in the second line is > 3 (I don't care about the number of elements in the remaining lines). The output of the example should look like this:
aa bb cc
11 22 33
44 55 66
77 88 99
...
so only the records where the second line has 3 elements are kept and written to the new txt file.
I tried to do it with awk by modifying FS and RS values like this:
awk 'BEGIN {RS="\n\n"; FS="\n";}
{if(length($2)==3) print $2"\n\n"; }' test_filter.txt
but if(length($2)==3) is not correct, as I should count the number of entries in the 2nd field instead of taking its length, which I can't find out how to do. Any help would be much appreciated!
thanks in advance,
You can use the split() function to break a line/field/string into components; in this case:
n=split($2,arr," ")
Where:
- we split field #2, using a single space (" ") as the delimiter ...
- the components are stored in the array arr[] and ...
- n is the number of elements in the array
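For example, a quick standalone check of that return value:
$ echo '11 22 33 44 55 66 77' | awk '{print split($0, arr, " ")}'
7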
Pulling this into OP's current awk code, along with a couple small changes, we get:
awk 'BEGIN {ORS=RS="\n\n"; FS="\n"} {n=split($2,arr," "); if (n>=4) next}1' test_filter.txt
With an additional block added to our sample:
$ cat test_filter.txt
aa bb cc
11 22 33
44 55 66
77 88 99
aa bb cc
11 22 33 44 55 66 77
44 55 66 66
77 88 99
aa bb cc
111 222 333
444 555 665
777 888 999
aa bb cc
11 22 33 44 55
44 55 66
77 88 99 77
This awk solution generates:
aa bb cc
11 22 33
44 55 66
77 88 99
aa bb cc
111 222 333
444 555 665
777 888 999
# blank line here
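An equivalent one-liner, offered as a sketch: awk's paragraph mode (RS="") likewise treats blank-line-separated blocks as records, and the return value of split() can serve directly as the filter condition:
awk 'BEGIN{RS=""; FS="\n"; ORS="\n\n"} split($2, arr, " ") <= 3' test_filter.txt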

APDU command to read Track1/track2 data from NFC card and MobileApp

The application is in C, and needs to read only the NFC card details (card number & date). I am following the steps below:
1. Select the PSE:
CardRead("1PAY.SYS.DDF01", "PSE1");
Ex: APDU - (0x00, 0xA4, 0x04, 0x00, PSE1, 00) or
CardRead("2PAY.SYS.DDF01", "PSE2");
Ex: APDU - (0x00, 0xA4, 0x04, 0x00, PSE2, 00)
2. Get the AID from the response data.
3. Select the AID:
Ex: APDU - (0x00, 0xA4, 0x04, 0x00, AID, 00)
4. ReadRecord - I want to know how to calculate the SFI and the P1, P2 values.
Is a PDOL required, or is the Read Record command alone enough to read track1/track2 data?
After step 3, the data received is: 3 6F 38 84 7 A0 0 0 0 4 10 10 A5 2D 50 A 4D 41 53 54 45 52 43 41 52 44 87 1 1 5F 2D 2 65 6E 9F 38 9 9F 1D 8 9F 1A 2 9F 35 1 BF C A 9F 6E 7 8 40 0 0 32 31 0 90 0 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 30 9F 65 2 0 E0 9F 66 2 F 1E 9F 67 1 4 9F 6B 13 51 80 84 8 2 59 9 27 D2 20 92 1 0 0 0 0 0 0.
This is the PDOL information: 9F 38 9 9F 1D 8 9F 1A 2 9F 35 1.
Please let me know how to frame the next command (PDOL/ReadRecord) from the above data to read track1/track2 data.
Download EMV Book 3 and read section 10.2, Read Application Data. It has it all. Find it below in case you can't get the document.
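As a supplementary sketch (this reflects my reading of ISO 7816-4 / EMV Book 3, not the answer's attachment): READ RECORD uses P1 = record number and P2 = (SFI << 3) | 0x04, with the SFIs and record ranges taken from the AFL that GET PROCESSING OPTIONS returns. Computing a header quickly in the shell:
# sfi and rec are placeholder values; P2 packs the SFI into bits 8-4
# with bit 3 set, meaning "P1 is a record number"
sfi=1; rec=1
printf '00 B2 %02X %02X 00\n' "$rec" $(( (sfi << 3) | 0x04 ))
# prints: 00 B2 01 0C 00  (CLA INS P1 P2 Le)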

sort text file by groups of 2 lines

I have a file with the following structure:
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 75 159 dsadasd/2 0 +
B 78 852 dsadasd/1 0 -
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 87 52 dsadasd/2 0 +
A 52 15 dsadasd/1 0 -
I would like to sort it by the 4th field (which basically means sorting by the trailing /1 or /2), two lines at a time, to produce the following result:
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 78 852 dsadasd/1 0 -
B 75 159 dsadasd/2 0 +
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 52 15 dsadasd/1 0 -
A 87 52 dsadasd/2 0 +
TIA
There should be an easier way, but this works:
$ awk '{c+=p!=$1; p=$1; print c "\t" $0}' file | sort -k1,1 -k5 | cut -f2-
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 78 852 dsadasd/1 0 -
B 75 159 dsadasd/2 0 +
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 52 15 dsadasd/1 0 -
A 87 52 dsadasd/2 0 +
This creates a group id based on runs of the first field, sorts by that id first and then by the key field, and finally removes the dummy id with cut.
awk + sort
$ awk ' { $(NF+1)=int((NR+1)/2) } 1 ' angel.txt | sort -k7,7 -k4,4 | awk ' {$NF=""}1 '
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 78 852 dsadasd/1 0 -
B 75 159 dsadasd/2 0 +
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 52 15 dsadasd/1 0 -
A 87 52 dsadasd/2 0 +
$ cat angel.txt
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 75 159 dsadasd/2 0 +
B 78 852 dsadasd/1 0 -
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 87 52 dsadasd/2 0 +
A 52 15 dsadasd/1 0 -
$
Try Perl; note that this preserves the spacing in your input:
perl -0777 -ne ' while( /(.+?)\n(.+?)\n/gms ) { $a=$1;$b=$2; (split(/\s+/,$a))[3] gt (split(/\s+/,$b))[3] ? print "$b\n$a\n" : print "$a\n$b\n" }'
with inputs
$ cat angel.txt
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 75 159 dsadasd/2 0 +
B 78 852 dsadasd/1 0 -
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 87 52 dsadasd/2 0 +
A 52 15 dsadasd/1 0 -
$ perl -0777 -ne ' while( /(.+?)\n(.+?)\n/gms ) { $a=$1;$b=$2; (split(/\s+/,$a))[3] gt (split(/\s+/,$b))[3] ? print "$b\n$a\n" : print "$a\n$b\n" }' angel.txt
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 78 852 dsadasd/1 0 -
B 75 159 dsadasd/2 0 +
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 52 15 dsadasd/1 0 -
A 87 52 dsadasd/2 0 +
$
In awk:
$ awk '{
    k=NR%2; a[k]=$4; b[k]=$0                  # store compare value and record
  }                                           # for indices 0 and 1
  !(NR%2) {                                   # on even line numbers we compare
    print b[(a[0]>a[1])] ORS b[(a[0]<=a[1])]  # and print the smaller first
  }' file
A 35 74 dsadasd/1 0 +
A 95 74 dsadasd/2 0 -
B 78 852 dsadasd/1 0 -
B 75 159 dsadasd/2 0 +
C 12 789 dsadasd/1 0 +
C 91 546 dsadasd/2 0 -
A 52 15 dsadasd/1 0 -
A 87 52 dsadasd/2 0 +
This should work with awk:
awk '{if(p==""){p=$0; p4=$4}
      else{
        if(p4>$4){print $0 "\n" p}
        else{print p "\n" $0}
        p=p4=""
      }}' file
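One more variant, a sketch (it assumes an even number of lines and tab-free data): paste joins each pair onto one tab-separated line, awk orders the two halves by their 4th field, and the pair is split back apart:
paste - - < angel.txt |
awk -F'\t' '{ split($1, x, " "); split($2, y, " ")
              if (x[4] <= y[4]) print $1 "\n" $2
              else              print $2 "\n" $1 }'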

transpose lines to columns [duplicate]

I am trying to transpose a table (10K rows x 10K cols) using the following script.
A simple data example:
$ cat rm1
t1 t2 t3
n1 1 2 3
n2 2 3 44
n3 1 1 1
$ sh transpose.sh rm1
n1 n2 n3
t1 1 2 1
t2 2 3 1
t3 3 44 1
However, I am getting a memory error. Any help would be appreciated.
awk -F "\t" '{
for (f = 1; f <= NF; f++)
a[NR, f] = $f
}
NF > nf { nf = NF }
END {
for (f = 1; f <= nf; f++)
for (r = 1; r <= NR; r++)
printf a[r, f] (r==NR ? RS : FS)
}'
Error
awk: cmd. line:2: (FILENAME=input FNR=12658) fatal: dupnode: r->stptr: can't allocate 10 bytes of memory (Cannot allocate memory)
Here's one way to do it, as I mentioned in my comments: in chunks. Here I show the mechanics on a tiny 12r x 10c file, but I also ran a chunk of 1000 rows on a 10K x 10K file in not much more than a minute (Mac PowerBook).
EDIT: The following was updated to handle an M x N matrix with unequal numbers of rows and columns; the previous version only worked for an N x N matrix.
$ cat et.awk
BEGIN {
start = chunk_start
limit = chunk_start + chunk_size - 1
}
{
n = (limit > NF) ? NF : limit
for (f = start; f <= n; f++) {
a[NR, f] = $f
}
}
END {
n = (limit > NF) ? NF : limit
for (f = start; f <= n; f++)
for (r = 1; r <= NR; r++)
printf a[r, f] (r==NR ? RS : FS)
}
$ cat t.txt
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49
50 51 52 53 54 55 56 57 58 59
60 61 62 63 64 65 66 67 68 69
70 71 72 73 74 75 76 77 78 79
80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99
A0 A1 A2 A3 A4 A5 A6 A7 A8 A9
B0 B1 B2 B3 B4 B5 B6 B7 B8 B9
C0 C1 C2 C3 C4 C5 C6 C7 C8 C9
$ cat et.sh
inf=$1
outf=$2
rm -f $outf
for i in $(seq 1 2 12); do
echo chunk for rows $i $(expr $i + 1)
awk -v chunk_start=$i -v chunk_size=2 -f et.awk $inf >> $outf
done
$ sh et.sh t.txt t-transpose.txt
chunk for rows 1 2
chunk for rows 3 4
chunk for rows 5 6
chunk for rows 7 8
chunk for rows 9 10
chunk for rows 11 12
$ cat t-transpose.txt
10 20 30 40 50 60 70 80 90 A0 B0 C0
11 21 31 41 51 61 71 81 91 A1 B1 C1
12 22 32 42 52 62 72 82 92 A2 B2 C2
13 23 33 43 53 63 73 83 93 A3 B3 C3
14 24 34 44 54 64 74 84 94 A4 B4 C4
15 25 35 45 55 65 75 85 95 A5 B5 C5
16 26 36 46 56 66 76 86 96 A6 B6 C6
17 27 37 47 57 67 77 87 97 A7 B7 C7
18 28 38 48 58 68 78 88 98 A8 B8 C8
19 29 39 49 59 69 79 89 99 A9 B9 C9
And then running the first chunk on the huge file looks like:
$ time awk -v chunk_start=1 -v chunk_size=1000 -f et.awk tenk.txt > tenk-transpose.txt
real 1m7.899s
user 1m5.173s
sys 0m2.552s
Doing that ten times with the next chunk_start set to 1001, etc. (and appending with >> to the output, of course) should finally give you the full transposed result; a driver loop like the sketch below will do it.
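A possible driver (a sketch; the file names are the ones assumed above):
rm -f tenk-transpose.txt
for start in $(seq 1 1000 10000); do
    awk -v chunk_start=$start -v chunk_size=1000 -f et.awk tenk.txt >> tenk-transpose.txt
done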
There is a simple and quick algorithm based on sorting:
1) Make a pass through the input, prepending the column number and row number to each field. The output is a (column, row, value) triple for each cell in the matrix. Write the output to a temporary file.
2) Sort the temporary file by column, then row.
3) Make a pass through the sorted temporary file, reconstructing the transposed matrix.
The two outer passes are done by awk. The sort is done by the system sort, keyed numerically on the column and then the row, so it keeps working past nine rows or columns. Here's the code:
$ echo '1 2 3
2 3 44
1 1 1' |
awk '{ for (i=1; i<=NF; i++) print i, NR, $i}' |
sort -k1,1n -k2,2n |
awk ' NR>1 && $2==1 { print "" }; { printf "%s ", $3 }; END { print "" }'
1 2 1
2 3 1
3 44 1

how to add a 0 digit to a single-character hex value where it is missing - bash

I have a some file with the following content
$ cat somefile
28 46 5d a2 26 7a 192 168 2 2
0 15 e c8 a8 a3 192 168 100 3
54 4 2b 8 c 26 192 168 20 3
As you can see, the values in the first six columns are represented in hex and the values in the last four columns in decimal. I just want to prepend a 0 to every single-character hexadecimal value.
Thanks beforehand.
This one should work out for you:
while read -a line
do
    hex=(${line[@]:0:6})
    printf "%02x " ${hex[@]/#/0x}
    echo ${line[@]:6:4}
done < somefile
Example:
$ cat somefile
28 46 5d a2 26 7a 192 168 2 2
0 15 e c8 a8 a3 192 168 100 3
54 4 2b 8 c 26 192 168 20 3
$ while read -a line
> do
>     hex=(${line[@]:0:6})
>     printf "%02x " ${hex[@]/#/0x}
>     echo ${line[@]:6:4}
> done < somefile
28 46 5d a2 26 7a 192 168 2 2
00 15 0e c8 a8 a3 192 168 100 3
54 04 2b 08 0c 26 192 168 20 3
Here is a way with awk if that is an option:
awk '{for(i=1;i<=6;i++) if(length($i)<2) $i=0$i}1' file
Test:
$ cat file
28 46 5d a2 26 7a 192 168 2 2
0 15 e c8 a8 a3 192 168 100 3
54 4 2b 8 c 26 192 168 20 3
$ awk '{for(i=1;i<=6;i++) if(length($i)<2) $i=0$i}1' file
28 46 5d a2 26 7a 192 168 2 2
00 15 0e c8 a8 a3 192 168 100 3
54 04 2b 08 0c 26 192 168 20 3
Please try this too, if it helps (bash version 4.1.7(1)-release):
#!/bin/bash
while read line; do
    arr=($line)
    i=0
    for num in "${arr[@]}"; do
        if [ $i -lt 6 ]; then
            if [ ${#num} -eq 1 ]; then
                arr[i]='0'${arr[i]}
            fi
        fi
        i=$((i+1))
    done
    echo "${arr[*]}"
done < your_file
This might work for you (GNU sed):
sed 's/\b\S\s/0&/g' file
It finds a single non-space character followed by whitespace and prepends a 0. Note that it is not limited to the first six columns: a single-digit decimal field followed by a space (such as the penultimate 2 on the sample's first line) gets padded as well, so use it only if that is acceptable.
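For completeness, a GNU-awk sketch that normalizes via a numeric round-trip instead of string padding (strtonum is gawk-specific):
awk '{for(i=1;i<=6;i++) $i=sprintf("%02x", strtonum("0x" $i))}1' file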
