Shell Script add column values

I have a text file which contains lines like below:
{"userId":"f1fcab","count":"3","type":"Stack"}
{"userId":"fcab","count":"2","type":"Stack"}
{"userId":"abcd","count":"5","type":"Stack"}
I want to get the sum of the count values.
I am using awk to achieve this, like below:
$ awk -F "," '{print $4}' test.txt
How can I extract only the integer values using awk and add them all up?
My script should give me:
sum=10

You could try the below:
$ awk -F'"' '{sum = sum + $8;}END{print "sum="sum+0}' file
sum=10
-F'"' Sets the double quotes as FS value. Awk splits the row into colunms according to the value of FS variable.
sum = sum + $8 Calculate the sum of all the values in column no 8 and store it into a variable called sum
Finally by printing the variable sum at the end will give you the desired output.
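Since each line is actually a JSON object, a more robust alternative, assuming jq is available, is to let a real JSON parser do the summing (a sketch, not required by the question):
$ jq -rs '"sum=" + (map(.count | tonumber) | add | tostring)' test.txt
sum=10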

You can get the value of the count key by using the double quote (") as the delimiter, so that the eighth column holds the value to count on:
$ awk -F"\"" 'BEGIN {sum=0} {sum+=$8} END {print sum}' fd
10

Assuming consistent use of double quote characters, you can use:
awk -F\" '{s += $8} END{print "sum=" s+0}' inputFile
This will generate:
sum=10
This works because a quote delimiter gives you the fields:
1  2       3  4       5  6      7  8  ...
{  userId  :  f1fcab  ,  count  :  3  ...

awk -F'[:"]' '{sum+=$10} END{print "sum=" sum}' File
Setting ':' and '"' as delimiters, awk takes the 10th field, which is the count value, adds the values up in sum, and prints it at the end.
Example:
sdlcb@ubuntu:~/AMD_C/SO$ cat File
{"userId":"f1fcab","count":"3","type":"Stack"}
{"userId":"fcab","count":"2","type":"Stack"}
{"userId":"abcd","count":"5","type":"Stack"}
sdlcb@ubuntu:~/AMD_C/SO$ awk -F'[:"]' '{sum+=$10} END{print "sum=" sum}' File
sum=10

Related

awk to get first column if a specific number in the line is greater than a digit

I have a data file (file.txt) contains the below lines:
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis
I'm expecting to get the first column ($1) only if the ETA= number is greater than 15; here, only the first column of the 2nd and 3rd lines is expected:
345
456
I tried cat file.txt | awk -F [,TPF=]' '{print $1}' but it prints the whole line which has ETA at the end.
Using awk
$ awk -F"[=, ]" '{for (i=1;i<NF;i++) if ($i=="ETA") if ($(i+1) > 15) print $1}' input_file
345
456
With your shown samples, please try the following GNU awk code. It uses GNU awk's match function with the regex (^[0-9]+).*ETA=([0-9]+):[0-9]+, which creates 2 capturing groups and saves their values into the array arr. Then, if the 2nd element of arr is greater than 15, it prints the 1st element of arr, as per the requirement.
awk '
match($0,/(^[0-9]+).*\<ETA=([0-9]+):[0-9]+/,arr) && arr[2]+0>15{
print arr[1]
}
' Input_file
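With the posted samples this prints:
345
456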
I would harness GNU AWK for this task in the following way. Let file.txt content be
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=02:00,dom=sss.co.uk,user2=lis
then
awk 'substr($0,index($0,"ETA=")+4,2)+0>15{print $1}' file.txt
gives output
345
Explanation: I use the string functions index, to find where ETA= is, and substr, to get the 2 characters after ETA=; 4 is used because ETA= is 4 characters long and index gives the start position. I use +0 to convert the result to an integer, then compare it with 15. Disclaimer: this solution assumes every row has ETA= followed by exactly 2 digits.
(tested in GNU Awk 5.0.1)
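If the hour part may not always be exactly 2 digits, a variant of the same idea (a sketch, relying on +0 converting just the leading digits of a string) could be:
awk '{ p = index($0, "ETA=") } p && substr($0, p+4)+0 > 15 { print $1 }' file.txt
Here substr($0, p+4) returns everything after ETA=, and +0 keeps only the leading number for the comparison.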
Whenever input contains tag=value pairs as yours does, it's best to first create an array of those mappings (v[] below); then you can just access the values by their tags (names):
$ cat tst.awk
BEGIN {
    FS = "[, =]+"
    OFS = ","
}
{
    delete v
    for ( i=2; i<NF; i+=2 ) {
        v[$i] = $(i+1)
    }
}
v["ETA"]+0 > 15 {
    print $1
}
$ awk -f tst.awk file
345
456
With that approach you can trivially enhance the script in future to access whatever values you like by their names, test them in whatever combinations you like, output them in whatever order you like, etc. For example:
$ cat tst.awk
BEGIN {
    FS = "[, =]+"
    OFS = ","
}
{
    delete v
    for ( i=2; i<NF; i+=2 ) {
        v[$i] = $(i+1)
    }
}
(v["pro"] ~ /b/) && (v["ETA"]+0 > 15) {
    print $1, v["team"], v["dom"]
}
$ awk -f tst.awk file
345,abc,sbc.int
456,efg,sss.co.uk
Think about how you'd enhance any other solution to do the above or anything remotely similar.
It's unclear why you think your attempt would do anything of the sort. Your attempt uses a completely different field separator and does not compare anything against the number 15.
You'll also want to get rid of the useless use of cat.
When you specify a column separator with -F, that changes what the first column $1 actually means; it is then everything before the first occurrence of the separator. You probably want to separately split the line to obtain the first, space-separated column.
awk -F 'ETA=' '$2+0 > 15 { split($0, n, /[ \t]+/); print n[1] }' file.txt
The value in $2 will be the data after the first separator (and up until the next one); adding +0 forces a numeric comparison, which ignores any non-numeric text after the number at the beginning of the field. So, for example, on the first line, $2 is literally 12:00, team=xyz,user1=tom,dom=dby.com, but the comparison effectively checks whether 12 is larger than 15 (which is obviously false).
When the condition is true, we split the original line $0 into the array n on sequences of whitespace, and then print the first element of this array.
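With the posted file this gives:
$ awk -F 'ETA=' '$2+0 > 15 { split($0, n, /[ \t]+/); print n[1] }' file.txt
345
456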
Using awk, you could match ETA= followed by 1 or more digits, take the match without the ETA= part, check whether the number is greater than 15, and print the first field.
awk 'match($0, /ETA=[0-9]+/) {
  if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file
Output
345
456
If the first field should start with a number:
awk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {
  if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file

Using AWK to compare values in the same column

I'm using AWK and I've been trying to compare the previous value in a column with the next one until it finds the highest, but I haven't been able to.
awk '$6 >= $6 {print $6}'
With the above, it returns every single value.
For example:
money:
49
90
30
900
I would like it to return 900
$ awk '(NR>1) && ((NR==2) || ($1>max)){max=$1} END{if (max != "") print max}' file
900
The above is based on your posted example (including the money: header line) but would also work even if all input values were 0 or negative or the input file was empty. Change $1 to $6 if the real field number you're interested in is 6.
Also consider:
$ tail -n +2 file | cut -f1 | sort -rn | head -1
900
and change -f1 to -f6.
Set separator chars in awk with -F'<char>' and cut with -d'<char>' if necessary.
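For example, for a comma-separated file with the value in column 6 (file.csv is hypothetical here):
$ tail -n +2 file.csv | cut -d',' -f6 | sort -rn | head -1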
Like this:
awk 'int($6) && $6 > n{n=$6}END{print n}' file
Assuming the columns are separated by the default (white-space) separator, the following awk line checks the 6th value in each data row to see if it is greater than any previously seen value. current is set to zero in a BEGIN block and updated each time a greater 6th value is encountered; the END block prints the final value held in current (which is the greatest value of the 6th field):
awk 'BEGIN {current=0} NR>1{if ($6>current) current=$6} END {print current}' dataFile.txt
For .csv data, the file separator can be defined as a comma with FS="," in the BEGIN block:
awk 'BEGIN {FS=","; current=0} NR>1{if ($6>current) current=$6} END {print current}' dataFile.csv
$ awk 'NR==2  {max=$6}
       $6>max {max=$6}
       END    {print max}' file

Add string to columns in bash

I have a comma-delimited file to which I want to append a string in specific columns. I am trying to do something like the below, but haven't managed it so far.
re1,1,a1e,a2e,AGT
re2,2,a1w,a2w,AGT
re3,3,a1t,a2t,ACGTCA
re12,4,b1e,b2e,ACGTACT
And I want to append 'some_string' to columns 3 and 4:
re1,1,some_stringa1e,some_stringa2e,AGT
re2,2,some_stringa1w,some_stringa2w,AGT
re3,3,some_stringa1t,some_stringa2t,ACGTCA
re12,4,some_stringb1e,some_stringb2e,ACGTACT
I was trying something similar to the suggested solution, but to no avail:
awk -v OFS=$'\,' '{ $3="some_string" $3; print}' $lookup_file
Also, I would like my string to be added to both columns. How would you do this with awk or bash?
Thanks a lot in advance
You can do that with (almost) what you have:
pax> echo 're1,1,a1e,a2e,AGT
re2,2,a1w,a2w,AGT
re3,3,a1t,a2t,ACGTCA
re12,4,b1e,b2e,ACGTACT' | awk 'BEGIN{FS=OFS=","}{$3 = "pre3:"$3; $4 = "pre4:"$4; print}'
re1,1,pre3:a1e,pre4:a2e,AGT
re2,2,pre3:a1w,pre4:a2w,AGT
re3,3,pre3:a1t,pre4:a2t,ACGTCA
re12,4,pre3:b1e,pre4:b2e,ACGTACT
The BEGIN block sets the input and output field separators, the two assignments massage fields 3 and 4, and the print outputs the modified line.
You need to set FS to comma, not just OFS. There's a shortcut for setting FS: the -F option.
awk -F, -v OFS=',' '{ $3="some_string" $3; $4 = "some_string" $4; print}' "$lookup_file"
awk's default behavior is to concatenate adjacent strings, so you can simply place strings next to each other and they'll be treated as one. 1 means true, so with no {action} it will assume print. You can use Bash's brace expansion to assign multiple variables after the script:
awk '{$3 = "three" $3; $4 = "four" $4} 1' {O,}FS=, "$lookup_file"

awk combine 2 commands for csv file formatting

I have a CSV file which has 4 columns. I want to:
print the first 10 items of each column
only print the items in the third column
My method is to pipe the first awk command into another, but I didn't get exactly what I wanted:
awk 'NR < 10' my_file.csv | awk '{ print $3 }'
The only missing thing was the -F.
awk -F "," 'NR < 10' my_file.csv | awk -F "," '{ print $3 }'
You don't need to run awk twice.
awk -F, 'NR<=10{print $3}'
This prints the third field for every line whose record number (line) is less than or equal to 10.
Note that < is different from <=. The former matches records one through nine, the latter matches records one through ten. If you need ten records, use the latter.
Note that this will walk through your entire file, so if you want to optimize your performance:
awk -F, 'NR>10{exit} {print $3}'
This exits as soon as the record number exceeds 10; until then, it prints the third column. This does not step through your entire file.
Note also that awk's "CSV" matching is very simple; awk does not understand quoted fields, so the record:
red,"orange,yellow",green
has four fields, two of which have double quotes in them. YMMV depending on your input.
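If you do have quoted fields, GNU awk's FPAT variable lets you describe what a field looks like instead of what separates fields; a gawk-only sketch:
$ echo 'red,"orange,yellow",green' | gawk -v FPAT='([^,]+)|("[^"]+")' '{ print NF; print $2 }'
3
"orange,yellow"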

creating a ":" delimited list in bash script using awk

I have the following lines:
380:<CHECKSUM_VALIDATION>
393:</CHECKSUM_VALIDATION>
437:<CHECKSUM_VALIDATION>
441:</CHECKSUM_VALIDATION>
I need to format it as below
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Is it possible to achieve the above output using "awk"? [I'm using bash]
Thank you!
Here you go:
awk -F '[:<>/]+' '{ n = $1; getline; print $2 ":" n ":" $1 }'
Explanation:
Set the field separator with -F to a sequence of a mix of :<>/ characters; this way the first field will be the number, and the second will be CHECKSUM_VALIDATION.
Save the first field in variable n and read the next line (which would overwrite $1).
Print the line: a combination of the number from the previous line and the fields on the current line, as the run below shows.
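For example, with the sample input saved as file.txt:
$ awk -F '[:<>/]+' '{ n = $1; getline; print $2 ":" n ":" $1 }' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441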
Another approach without using getline:
awk -F '[:<>/]+' 'NR % 2 { n = $1 } NR % 2 == 0 { print $2 ":" n ":" $1 }'
This one uses the record counter NR to determine whether it's time to print: if NR is odd, save the first field in n; if NR is even, print.
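With the sample input it produces the same output:
$ awk -F '[:<>/]+' 'NR % 2 { n = $1 } NR % 2 == 0 { print $2 ":" n ":" $1 }' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441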
You can try this sed,
sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
Test:
sat:~$ sed 'N; s/\([0-9]\+\):<\(.*\)>\n\([0-9]\+\):<\(.*\)>/\2:\1:\3/' file.txt
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Another way:
awk -F: '/<C/ {printf "CHECKSUM_VALIDATION:%d:",$1; next} {print $1}'
Here is one GNU awk approach:
awk -F"[:\n<>]" 'NR==1{print $3,$1,$5;f=$3;next} $3{print f,$3,$7}' OFS=":" RS="</CH" file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
Based on Jonas' post and avoiding getline, this awk should do:
awk -F '[:<>/]+' '/<C/ {f=$1;next} { print $2,f,$1}' OFS=\: file
CHECKSUM_VALIDATION:380:393
CHECKSUM_VALIDATION:437:441
