I have a data file (file.txt) that contains the lines below:
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis
I'm expecting to get the first column ($1) only if the ETA number is greater than 15; here, only the first column of the 2nd and 3rd lines is expected:
345
456
I tried cat file.txt | awk -F [,TPF=]' '{print $1}' but it prints the whole line when ETA is at the end.
Using awk
$ awk -F"[=, ]" '{for (i=1;i<NF;i++) if ($i=="ETA") if ($(i+1) > 15) print $1}' input_file
345
456
With your shown samples, please try the following GNU awk code. It uses GNU awk's match function with the regex (^[0-9]+).*\<ETA=([0-9]+):[0-9]+, which creates 2 capturing groups and saves their values into the array arr. If the 2nd element of arr is greater than 15, the 1st element is printed, as per the requirement.
awk '
match($0,/(^[0-9]+).*\<ETA=([0-9]+):[0-9]+/,arr) && arr[2]+0>15{
print arr[1]
}
' Input_file
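With the samples shown, this should print:
345
456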
I would harness GNU AWK for this task the following way. Let file.txt content be
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=02:00,dom=sss.co.uk,user2=lis
then
awk 'substr($0,index($0,"ETA=")+4,2)+0>15{print $1}' file.txt
gives output
345
Explanation: I use the string functions index, to find where ETA= starts, and substr, to get the 2 characters following it (the offset of 4 is used because ETA= is 4 characters long and index gives the start position). I use +0 to convert that substring to a number and compare it with 15. Disclaimer: this solution assumes every row has ETA= followed by exactly 2 digits.
(tested in GNU Awk 5.0.1)
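If the two-digit assumption is a concern, a POSIX-awk variant using match with RSTART/RLENGTH (my sketch, not part of the original answer) handles any number of digits:
awk 'match($0, /ETA=[0-9]+/) && substr($0, RSTART+4, RLENGTH-4)+0 > 15 {print $1}' file.txt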
Whenever input contains tag=value pairs as yours does, it's best to first create an array of those mappings (v[] below); then you can just access the values by their tags (names):
$ cat tst.awk
BEGIN {
FS = "[, =]+"
OFS = ","
}
{
delete v
for ( i=2; i<NF; i+=2 ) {
v[$i] = $(i+1)
}
}
v["ETA"]+0 > 15 {
print $1
}
$ awk -f tst.awk file
345
456
With that approach you can trivially enhance the script in the future to access whatever values you like by their names, test them in whatever combinations you like, output them in whatever order you like, etc. For example:
$ cat tst.awk
BEGIN {
FS = "[, =]+"
OFS = ","
}
{
delete v
for ( i=2; i<NF; i+=2 ) {
v[$i] = $(i+1)
}
}
(v["pro"] ~ /b/) && (v["ETA"]+0 > 15) {
print $1, v["team"], v["dom"]
}
$ awk -f tst.awk file
345,abc,sbc.int
456,efg,sss.co.uk
Think about how you'd enhance any other solution to do the above or anything remotely similar.
It's unclear why you think your attempt would do anything of the sort. Your attempt uses a completely different field separator and does not compare anything against the number 15.
You'll also want to get rid of the useless use of cat.
When you specify a column separator with -F, that changes what the first column $1 actually means: it is then everything before the first occurrence of the separator. So you probably want to separately split the line to obtain the first space-separated column.
awk -F 'ETA=' '$2+0 > 15 { split($0, n, /[ \t]+/); print n[1] }' file.txt
The value in $2 will be the data after the first separator (and up to the next one, if any). The +0 forces a numeric conversion, which simply ignores any non-numeric text after the number at the beginning of the field. So, for example, on the first line $2 is literally 12:00, team=xyz,user1=tom,dom=dby.com, but the condition effectively checks whether 12 is larger than 15 (which is obviously false). Without the +0, awk would compare the field as a string, which happens to work for two-digit hours but would misorder single-digit ones.
When the condition is true, we split the original line $0 into the array n on sequences of whitespace, and then print the first element of this array.
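To see what the numeric coercion actually operates on, you can print the converted value for each line (a throwaway diagnostic):
$ awk -F 'ETA=' '{print $2+0}' file.txt
12
23
22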
Using awk you could match ETA= followed by 1 or more digits, then take the match without the ETA= part, check whether the number is greater than 15, and if so print the first field.
awk 'match($0, /ETA=[0-9]+/) {
  if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file
Output
345
456
If the first field should start with a number:
awk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {
  if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file
I have 2 files that are reports of database sizes (one file is from yesterday, one from today).
I want to see how the size of each database changed, so I want to calculate the difference.
File looks like this:
"DATABASE","Alloc MB","Use MB","Free MB","Temp MB","Hostname"
"EUROPE","9133508","8336089","797419","896120","server3"
"ASIA","3740156","3170088","570068","354000","server5"
"AFRICA","4871331","4101711","769620","318412","server4"
Other file is the same, only the numbers are different.
I want to see how the database size changed (so ONLY column "Use MB").
I guess I cannot use plain "diff" or "awk" options since the numbers may change dramatically each day. The only good 'algorithm' I can think of is to subtract the numbers between the 5th and 6th double quotes ("); how do I do that?
You can do this (using awk):
paste file1 file2 -d ',' | awk -F ',' '{gsub(/"/, "", $3); gsub(/"/, "", $9); print $3 - $9}'
paste puts the two files next to each other on each line, separated by a comma (-d ','). So you will have:
"DATABASE","Alloc MB","Use MB","Free MB","Temp MB","Hostname","DATABASE","Alloc MB","Use MB","Free MB","Temp MB","Hostname"
"EUROPE","9133508","8336089","797419","896120","server3","EUROPE","9133508","8336089","797419","896120","server3"
...
gsub(/"/, "", $3) removes the quotes around column 3
And finally we print column 3 minus column 9
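If you also want the database name and to skip the header row, a small extension of the same idea (my variant, not from the original answer) could be:
paste file1 file2 -d ',' | awk -F ',' 'NR>1 {gsub(/"/, "", $1); gsub(/"/, "", $3); gsub(/"/, "", $9); print $1, $3 - $9}'
With the sample files shown in the next answer, this would print lines like EUROPE 1000.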
Maybe I missed something, but I don't get why you could not use awk, as it can totally do this:
The only good 'algoritm' I can think of is to subtract numbers between
5th and 6th double quote ("), how do I do that?
Let's say that file1 is:
"DATABASE","Alloc MB","Use MB","Free MB","Temp MB","Hostname"
"EUROPE","9133508","8336089","797419","896120","server3"
"ASIA","3740156","3170088","570068","354000","server5"
"AFRICA","4871331","4101711","769620","318412","server4"
And file2 is:
"DATABASE","Alloc MB","Use MB","Free MB","Temp MB","Hostname"
"EUROPE","9133508","8335089","797419","896120","server3"
"ASIA","3740156","3170058","570068","354000","server5"
"AFRICA","4871331","4001711","769620","318412","server4"
Command
awk -F'[",]' 'NR>2&&NR==FNR{db[$2]=$8;next}FNR>2{print $2, db[$2]-$8}' file1 file2
gives you result :
EUROPE 1000
ASIA 30
AFRICA 100000
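Field numbering with -F'[",]' can be surprising because adjacent delimiters produce empty fields. A quick throwaway check shows why $2 and $8 are the fields of interest:
$ echo '"EUROPE","9133508","8336089"' | awk -F'[",]' '{print $2, $8}'
EUROPE 8336089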
You can also use this answer to deal more properly with quote characters in awk.
If your awk version does not support multiple field delimiters, you can try this:
awk -F, 'NR>1&&NR==FNR{db[$1]=$3;next}FNR>1{print $1, db[$1]-$3}' <(sed 's,",,g' file1) <(sed 's,",,g' file2)
I need to know if I can match an awk value while inside a piped command, like below:
somebinaryGivingOutputToSTDOUT | grep -A3 "sometext" | grep "somemoretext" | awk -F '[:|]' 'BEGIN{OFS=","; print "Col1,Col2,Col3,Col4"}{print $4,$6,$4*10^10+$6,$8}'
From here I need to check whether the computed value $4*10^10+$6 is present in (matches) any column value of another file. If it is present, print; otherwise just move forward.
File where value needs to be matched is as below:
a,b,c,d,e
1,2,30000000000,3,4
I need to match with the 3rd column of the above file.
I would ideally like this to be in the same command, because without this check it prints more than 100 million rows (a very large file).
I have already read this question.
Adding more info:
Breaking my command into parts
part1-command:
somebinaryGivingOutputToSTDOUT | grep -A3 "sometext" | grep "Something:"
part1-output (just showing the output of 1 iteration):
Something:38|Something1:1|Something2:10588429|Something3:1491539456372358463
part2-command: now I use awk
awk -F '[:|]' 'BEGIN{OFS=","; print "Col1,Col2,Col3,Col4"}{print $4,$6,$4*10^10+$6,$8}'
part2-command output: currently the values below are printed (see how I multiplied 1*10^10, added 10588429, and got 10010588429):
1,10588429,10010588429,1491539456372358463
3,12394810,30012394810,1491539456372359082
1,10588430,10010588430,1491539456372366413
Now I need to put a check (within the command, near awk) to print only if 10010588429 is present in another file (say another_file.csv, below):
another_file.csv
A,B,C,D,E
1,2, 10010588429,4,5
x,y,z,z,k
10,20, 10010588430,40,50
output should only be
1,10588429,10010588429,1491539456372358463
1,10588430,10010588430,1491539456372366413
So for every row of the awk output, we check for an entry in column C of file2.
Using the associative array approach from the previous question, include a hyphen in place of the first file to direct AWK to the input stream.
Example:
grep -A3 "sometext" | grep "somemoretext" | awk -F '[:|]'
'BEGIN{OFS=","; print "Col1,Col2,Col3,Col4"}
NR==FNR {
query[$4*10^10+$6]=$4*10^10+$6;
out[$4*10^10+$6]=$4 FS $6 FS $4*10^10+$6 FS $8;
next
}
query[$3]==$3 {
print out[$3]
}' - another_file.csv > output.csv
More info on the merging process in the answer cited in the question:
Using AWK to Process Input from Multiple Files
I'll post a template which you can utilize for your computation:
awk 'BEGIN {FS=OFS=","}
NR==FNR {lookup[$3]; next}
/sometext/ {c=4}
c&&c--&&/somemoretext/ {
    value = $4*10^10+$6    # your computation here (the question's expression shown as an example)
    if (value in lookup)
        print "what you want"
}' lookup.file FS=':' grep.files...
Here awk loads the values in the third column of the first file (which is comma delimited) into the lookup array (a hashmap in disguise). For the next set of files it sets the delimiter to : and, similar to grep -A3, looks within 3 lines of the first pattern for the second pattern, does the computation, and prints what you want.
In awk you also have more control over which column your pattern matches; here I just replicated the grep example.
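For instance, instead of the second grep you could anchor the pattern to a specific field (a sketch using the sample record from the question):
$ echo 'Something:38|Something1:1|Something2:10588429|Something3:1491539456372358463' | awk -F'[:|]' '$1 == "Something" {print $2}'
38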
This is another simplified example to focus on the core of the problem.
awk 'BEGIN{for(i=1;i<=1000;i++) print int(rand()*1000), rand()}' |
awk 'NR==FNR{lookup[$1]; next}
$1 in lookup' perfect.numbers -
The first process creates 1000 random records, and the second one filters to those whose first field is in the lookup table.
28 0.736027
496 0.968379
496 0.404218
496 0.151907
28 0.0421234
28 0.731929
For the lookup file:
$ head perfect.numbers
6
28
496
8128
The piped data is substituted as the second file at the -.
You can pipe your grep or awk output into a while read loop, which gives you some degree of freedom. There you can decide whether to forward a line:
grep -A3 "sometext" | grep "somemoretext" | while read LINE; do
COMPUTED=$(echo $LINE | awk -F '[:|]' 'BEGIN{OFS=","}{print $4,$6,$4*10^10+$6,$8}')
if grep $COMPUTED /the/file/to/search &>/dev/null; then
echo $LINE
fi
done | cat -
My text file would read as:
111
111
222
222
222
333
333
My resulting file would look like:
1,111
2,111
1,222
2,222
3,222
1,333
2,333
Or the resulting file could alternatively look like the following:
1
2
1
2
3
1
2
I've specified a comma as a delimiter here, but it doesn't matter what the delimiter is; I can modify that at a future date. In reality, I don't even need the original text file contents, just the line numbers, because I can paste the line numbers against the original text file.
I am just not sure how to go about numbering the lines based on repeated entries.
All items in the list are duplicated at least once; there are no single occurrences of a line in the file.
$ awk -v OFS=',' '{print ++cnt[$0], $0}' file
1,111
2,111
1,222
2,222
3,222
1,333
2,333
Use a variable to save the previous line and compare it to the current line. If they're the same, increment the counter; otherwise reset it to 1.
awk '{if ($0 == prev) counter++; else counter = 1; prev=$0; print counter}'
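Applied to the sample input, this prints the second desired form:
1
2
1
2
3
1
2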
Perl solution:
perl -lne 'print ++$c{$_}' file
-n reads the input line by line
-l removes the newline from input lines and adds one to each print
++$c{$_} increments the value stored for the contents of the current line $_ in the hash table %c, so the first occurrence prints 1, the second 2, and so on
Software tools method, given textfile as input (note that cut -d' ' -f7 relies on uniq -c's default six-space count padding, so it assumes single-digit counts):
uniq -c textfile | cut -d' ' -f7 | xargs -L 1 seq 1
Shell loop-based variant of the above:
uniq -c textfile | while read a b ; do seq 1 $a ; done
Output (of either method):
1
2
1
2
3
1
2
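If the hard-coded cut column bothers you (counts of 10 or more would shift the field), an awk post-processor over uniq -c sidesteps the issue; a sketch:
uniq -c textfile | awk '{for (i = 1; i <= $1; i++) print i}'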
I have some trouble in my script.
I am currently using:
awk '{anum=substr($1,3,22); sub(/^0+/, "", anum); print anum}' file1 | grep -nf file2 | cut -d: -f1 | awk 'FNR==NR{a[$1];next};FNR in a' - file1
file1
5000000000009855892590xxxx xxx
5000000000000068582654xxxx xxx
5000000000009855892580xxxx xxx
5000000000000765432100xxxx xxx
file2
9855892588
985589259
8265
76543210
Using the two files above (file1 and file2), I am getting this output:
5000000000009855892590xxxx xxx
5000000000000068582654xxxx xxx
5000000000000765432100xxxx xxx
But my expected output is just:
5000000000009855892590xxxx xxx
5000000000000765432100xxxx xxx
My problem is that it matches 8265 in the middle of 5000000000000068582654xxxx, which is wrong. What else can I use in place of grep -nf to meet my condition? The numbers in file2 should match a prefix of (or the whole of) the number formed by the 3rd to 22nd digits of file1 (without leading zeros).
This will work for your example, but as I'm not really sure exactly how you determine what's valid, it may not be very robust.
gawk 'NR==FNR{a[$1]=$1;next}{match($0,/0+([1-9][0-9]+)0/,b)}a[b[1]]' file{2,1}
5000000000009855892590xxxx xxx
5000000000000765432100xxxx xxx
It creates an array of all the first fields in the first file (file2), then matches the string that I have guessed is your valid string in the second file. If that string has been saved in the array, the line is printed.
A non-gawk version:
awk 'NR==FNR{a[$1]=$1;next}{n=substr($1,3,22); sub(/^0+/, "", n)
for(i in a) if(n ~ "^"a[i]) print}' file2 file1
Same start as the other; then remove the leading part of the number as the OP has done, and for each saved element check whether the newly created string starts with it.
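With file2 and file1 as above, this version should print the same two expected lines:
5000000000009855892590xxxx xxx
5000000000000765432100xxxx xxx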