how to sum each column in a file using bash

I have a file in the following format:
id_1,1,0,2,3,lable1
id_2,3,2,2,1,lable1
id_3,5,1,7,6,lable1
and I want the summation of each column (I have over 300 columns):
9,3,11,10,lable1
How can I do that using bash?
I tried using what is described here, but it didn't work.

Using awk:
$ awk -F, '{for (i=2;i<NF;i++) a[i]+=$i} END{for (i=2;i<NF;i++) printf "%s,", a[i]; print $NF}' file
9,3,11,10,lable1
This will print the sum of each column (from i=2 .. i=n-1) as a comma-separated list, followed by the value of the last column from the last row (i.e. lable1). (Using "%s," as the printf format rather than the value itself avoids treating data as a format string; NF is still usable in END because it keeps its value from the last input line.)
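The same idea can be written with OFS so the separators are handled uniformly; a minimal variant of the above, under the same assumptions (comma-delimited file, label in the last column):
awk -F, -v OFS=, '
    { for (i=2; i<NF; i++) a[i]+=$i }       # accumulate each numeric column
    END {
        out = a[2]
        for (i=3; i<NF; i++) out = out OFS a[i]
        print out, $NF                      # append the label from the last row
    }' file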

If the totals need to be grouped by the label in the last column, you could try this:
awk -F, '
{
    L[$NF]                                  # remember each label seen
    for (i=2; i<NF; i++) T[$NF,i] += $i
}
END {
    for (i in L) {
        s = i
        for (j=NF-1; j>1; j--) s = T[i,j] FS s
        print s
    }
}
' file
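For instance, with a hypothetical input where the rows carry two different labels:
id_1,1,0,2,3,lableA
id_2,3,2,2,1,lableA
id_3,5,1,7,6,lableB
this prints one total row per label (the for (i in L) traversal order is implementation-defined, so the lines may come out in either order):
4,2,4,4,lableA
5,1,7,6,lableB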
If the labels in the last column are sorted, then you could try processing one group at a time and save memory:
awk -F, '
function labelsum() {
    s = p
    for (i=NF-1; i>1; i--) s = T[i] FS s
    print s
    split(x, T)                             # split an empty string into T to clear it
}
p != $NF {                                  # label changed: flush the previous group
    if (p) labelsum()
    p = $NF
}
{
    for (i=2; i<NF; i++) T[i] += $i
}
END {
    labelsum()
}
' file
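If the file isn't already grouped by label, you can sort it on that column first, e.g. (assuming the label is field 6, as in the sample data): sort -t, -k6,6 file | awk -F, '...' with the script above in place of the '...'.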

Here's a Perl one-liner:
<file perl -lanF, -E 'for ( 0 .. $#F ) { $sums{ $_ } += $F[ $_ ]; } END { say join ",", map { $sums{ $_ } } sort { $a <=> $b } keys %sums; }'
It will only do sums, so the first and last columns in your example will be 0.
This version will follow your example output:
<file perl -lanF, -E 'for ( 1 .. $#F - 1 ) { $sums{ $_ } += $F[ $_ ]; } END { $sums{ $#F } = $F[ -1 ]; say join ",", map { $sums{ $_ } } sort { $a <=> $b } keys %sums; }'
(The numeric sort matters with 300 columns: a plain string sort of the keys would put column 10 before column 2.)

A modified version based on the solution you linked:
#!/bin/bash
colnum=6
filename="temp"
for ((i=2; i<colnum; ++i))
do
    # one pass per column: extract field i, then sum it with paste+bc
    sum=$(cut -d ',' -f "$i" "$filename" | paste -sd+ | bc)
    echo -n "$sum,"
done
head -1 "$filename" | cut -d ',' -f "$colnum"
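Note that this rereads the file once per column, so with 300+ columns it makes 300+ passes; the single-pass awk answers above will be much faster on big files.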

Pure bash solution:
#!/usr/bin/bash
while IFS=, read -r -a arr
do
    for ((i=1; i<${#arr[*]}-1; i++))
    do
        ((farr[i] += arr[i]))               # running total per column
    done
    farr[i]=${arr[i]}                       # keep the label from the last column
done < file
(IFS=,; echo "${farr[*]}")
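Bear in mind that bash (( ... )) arithmetic is integer-only; if any of the values have decimal points, use one of the awk or perl answers instead.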

Related

How can I use awk to remove duplicate entries in the same field with data separated with commas?

I am trying to call awk from a bash script to remove duplicate data entries of a field in a file.
Data Example in file1
data1 a,b,c,d,d,d,c,e
data2 a,b,b,c
Desired Output:
data1 a,b,c,d,e
data2 a,b,c
First I removed the first column so that only the second remained.
cut --complement -d$'\t' -f1 file1 &> file2
This worked fine, and now I just have the following in file2:
a,b,c,d,d,d,c,e
a,b,b,c
So then I tried this code that I found but do not understand well:
awk '{
for(i=1; i<=NF; i++)
printf "%s", (!seen[$1]++? (i==1?"":FS) $i: "" )
delete seen; print ""
}' file2
The problem is that this code was for a space delimiter and mine is now a comma delimiter with variable values on each row. This code just prints the file as is and I can see no difference. I also tried to make the FS a comma by doing this, to no avail:
printf "%s", (!seen[$1]++? (i==1?"":FS=",") $i: ""
This is similar to the code you found.
awk -F'[ ,]' '
{
    s = $1 " " $2
    seen[$2]++
    for (i=3; i<=NF; i++)
        if (!seen[$i]++) s = s "," $i
    print s
    delete seen
}
' data-file
-F'[ ,]' - split input lines on spaces and commas
s = ... - we could use printf like the code you found, but building a string is less typing
!seen[x]++ is a common idiom - it returns true only the first time x is seen
to avoid special-casing when to print a comma (as your sample code does with spaces), we simply add $2 to the print string and set seen[$2]
then for the remaining columns (3 .. NF), we add comma and column if it hasn't been seen before
delete seen - clear the array for the next line
That code is basically right; you just need to specify the delimiter and change $1 to $i.
$ awk -F ',' '{
    for (i=1; i<=NF; i++)
        printf "%s", (!seen[$i]++ ? (i==1 ? "" : FS) $i : "")
    delete seen; print ""
}' /tmp/file1
data1 a,b,c,d,e
data2 a,b,c
Using GNU sed, if available:
$ sed -E ':a;s/((\<[^,]*\>).*),\2/\1/;ta' input_file
data1 a,b,c,d,e
data2 a,b,c
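Here the :a label and the ta test form a loop: each substitution removes one field (\2) that already occurred earlier on the line (the \< and \> word boundaries stop b from matching inside bb), and ta branches back to :a for as long as a substitution succeeded.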
I did something similar lately: sanitizing the output of the GNU prime-factoring program when it prints out every single copy of a bunch of small primes:
gawk -Mbe '
BEGIN {
__+=__+=__+=(__+=___=_+=__=____=_^=_<_)-+-++_
__+=__^=!(___=__-=_+=_++)
for (_; _<=___; _+=__) {
if ((_%++__)*(_%(__+--__))) {
print ____*=_^_
}
}
}' | gfactor | sanitize_gnu_factor
58870952193946852435332666506835273111444209706677713:
7^7
11^11
13^13
17^17
116471448967943114621777995869564336419122830800496825559417754612566153180027:
7^7
11^11
13^13
17^17
19^19
2431978363071055324951111475877083878108827552605151765803537946846931963403343871776360412541253748541645309:
7^7
11^11
13^13
17^17
19^19
23^23
6244557167645217304114386952069758950402417741892127946837837979333340639740318438767128131418285303492993082345658543853142417309747238004933649896921:
7^7
11^11
13^13
17^17
19^19
23^23
29^29
823543:
7^7
234966429149994773:
7^7
11^11
71165482274405729335192792293569:
7^7
11^11
13^13
And the core sanitizer does basically the same thing, intra-row duplicate removal:
sanitize_gnu_factor() # i implemented it as a shell function
{
mawk -Wi -- '
BEGIN {
______ = "[ ]+"
___= _+= _^=__*=____ = FS
_______ = FS = "[ \v"(OFS = "\f\r\t")"]+"
FS = ____
} {
if (/ is prime$/) {
print; next
} else if (___==NF) {
$NF = " - - - - - - - \140\140\140"\
"PRIME\140\140\140 - - - - - - - "
} else {
split("",_____)
_ = NF
do { _____[$_]++ } while(--_<(_*_))
delete _____[""]
sub("$"," ")
_^=_<_
for (__ in _____) {
if (+_<+(___=_____[__])) {
sub(" "(__)"( "(__)")+ ",
sprintf(" %\47.f^%\47.f ",__,___))
} }
___ = _+=_^=__*=_<_
FS = _______
$__ = $__
FS = ____ } } NF = NF' |
mawk -Wi -- '
/ is prime$/ { print
next } /[=]/ { gsub("="," ")
} $(_^=(_<_)) = \
(___=length(__=$_))<(_+=_++)^(_+--_) \
?__: sprintf("%.*s......%s } %\47.f dgts ",
_^=++_,__, substr(__,++___-_),--___)' FS='[:]' OFS=':'
}

add columns to csv, rewriting headers with the filename as prefix

I'd prefer a solution that uses bash rather than converting to a dataframe in python, etc., as the files are quite big.
I have a folder of CSVs that I'd like to merge into one CSV. The CSVs all have the same header, save a few exceptions, so I need to rewrite the name of each added column with the filename as a prefix to keep track of which file the column came from.
head file1.csv file2.csv
==> file1.csv <==
id,max,mean,90
2870316.0,111.77777777777777
2870317.0,63.888888888888886
2870318.0,73.6
2870319.0,83.88888888888889
==> file2.csv <==
ogc_fid,id,_sum
"1","2870316",9.98795110916615
"2","2870317",12.3311055738527
"3","2870318",9.81535963468479
"4","2870319",7.77729743926775
The id column of each file might be in a different "datatype" but in every file the id matches the line number. For example, line 2 is always id 2870316.
Anticipated output:
file1_id,file1_90,file2_ogc_fid,file2_id,file2__sum
2870316.0,111.77777777777777,"1","2870316",9.98795110916615
2870317.0,63.888888888888886,"2","2870317",12.3311055738527
2870318.0,73.6,"3","2870318",9.81535963468479
2870319.0,83.88888888888889,"4","2870319",7.77729743926775
I'm not quite sure how to do this, but I think I'd use the paste command at some point. I'm surprised that I couldn't find a similar question on Stack Overflow, but I guess it's not that common to have CSVs with the same id on the same line number.
edit:
I figured out the first part.
paste -d , * > ../rasterjointest.txt achieves what I want, but the header needs to be replaced.
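Building on that edit: since paste already lines up the data, one sketch for fixing just the header (assuming everything matching *.csv in the current directory is an input file; combined.csv is just an example name):
{
    for f in *.csv; do
        # prefix each header field with the file's base name
        head -n1 "$f" | awk -v p="${f%.csv}" 'BEGIN{FS=OFS=","} {for (i=1; i<=NF; i++) $i = p "_" $i} 1'
    done | paste -sd, -              # join the per-file headers into one line
    paste -d, *.csv | tail -n +2     # then the pasted data rows, minus the old header line
} > combined.csv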
$ cat tst.awk
BEGIN { FS=OFS="," }
FNR==1 {
fname = FILENAME
sub(/\.[^.]+$/,"",fname)
for (i=1; i<=NF; i++) {
$i = fname "_" $i
}
}
{ row[FNR] = (NR==FNR ? "" : row[FNR] OFS) $0 }
END {
for (rowNr=1; rowNr<=FNR; rowNr++) {
print row[rowNr]
}
}
$ awk -f tst.awk file1.csv file2.csv
file1_id,file1_max,file1_mean,file1_90,file2_ogc_fid,file2_id,file2__sum
2870316.0,111.77777777777777,"1","2870316",9.98795110916615
2870317.0,63.888888888888886,"2","2870317",12.3311055738527
2870318.0,73.6,"3","2870318",9.81535963468479
2870319.0,83.88888888888889,"4","2870319",7.77729743926775
To use minimal memory in awk:
$ cat tst.awk
BEGIN {
FS=OFS=","
for (fileNr=1; fileNr<ARGC; fileNr++) {
filename = ARGV[fileNr]
if ( (getline < filename) > 0 ) {
fname = filename
sub(/\.[^.]+$/,"",fname)
for (i=1; i<=NF; i++) {
$i = fname "_" $i
}
}
row = (fileNr==1 ? "" : row OFS) $0
}
print row
exit
}
$ awk -f tst.awk file1.csv file2.csv; paste -d, file1.csv file2.csv | tail -n +2
file1_id,file1_max,file1_mean,file1_90,file2_ogc_fid,file2_id,file2__sum
2870316.0,111.77777777777777,"1","2870316",9.98795110916615
2870317.0,63.888888888888886,"2","2870317",12.3311055738527
2870318.0,73.6,"3","2870318",9.81535963468479
2870319.0,83.88888888888889,"4","2870319",7.77729743926775
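This version never holds the data rows in memory: the BEGIN block reads just the first line of each file via getline, prefixes and joins the headers, prints them, and exits; the following paste ... | tail -n +2 then streams all the data rows.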

how to find common columns and their records from two files using awk

I have two files:
File 1:
id|name|address|country
1|abc|efg|xyz
2|asd|dfg|uio
File 2 (only headers):
id|name|country
Now, I want an output like:
OUTPUT:
id|name|country
1|abc|xyz
2|asd|uio
Basically, I have a user record file (file1) and a header file (file2). Now, I want to extract from file1 only those columns whose headers match the ones in the header file.
I want to do this using awk or bash.
I tried using:
awk 'BEGIN { OFS="..."} FNR==NR { a[(FNR"")] = $0; next } { print a[(FNR"")], $0 > "test.txt"}' header.txt file.txt
and have no idea what to do next.
Thank You
The following awk may help you here:
awk -F"|" 'FNR==NR{for(i=1;i<=NF;i++){a[$i]};next} FNR==1 && FNR!=NR{for(j=1;j<=NF;j++){if($j in a){b[++p]=j}}} {for(o=1;o<=p;o++){printf("%s%s",$b[o],o==p?ORS:OFS)}}' OFS="|" File2 File1
Here is the same solution in non-one-liner form:
awk -F"|" '
FNR==NR{
for(i=1;i<=NF;i++){
a[$i]};
next}
FNR==1 && FNR!=NR{
for(j=1;j<=NF;j++){
if($j in a){ b[++p]=j }}
}
{
for(o=1;o<=p;o++){
printf("%s%s",$b[o],o==p?ORS:OFS)}
}
' OFS="|" File2 File1
Edit by Ed Morton: FWIW here's the same script written with normal indenting/spacing and a couple of more meaningful variable names:
BEGIN { FS=OFS="|" }
NR==FNR {
for (i=1; i<=NF; i++) {
names[$i]
}
next
}
FNR==1 {
for (i=1; i<=NF; i++) {
if ($i in names) {
f[++numFlds] = i
}
}
}
{
for (i=1; i<=numFlds; i++) {
printf "%s%s", $(f[i]), (i<numFlds ? OFS : ORS)
}
}
with (lots of) unix pipes, as Doug McIlroy intended...
$ function p() { sed 1q "$1" | tr '|' '\n' | cat -n | sort -k2; }
$ cut -d'|' -f"$(join -j2 <(p header) <(p file) | sort -k2n | cut -d' ' -f3 | paste -sd,)" file
id|name|country
1|abc|xyz
2|asd|uio
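To unpack that: p prints a file's header as one name per line, numbered by cat -n and sorted by name; join -j2 keeps only the names common to both headers, sort -k2n restores the header file's column order, cut -d' ' -f3 picks out the matching field numbers of the data file, and paste -sd, turns them into the comma-separated list that cut -f expects.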
Solution using bash>4:
IFS='|' headers1=($(head -n1 $file1))
IFS='|' headers2=($(head -n1 $file2))
IFS=$'\n'
# find idxes we want to output, ie. mapping of headers1 to headers2
idx=()
for i in $(seq 0 $((${#headers2[@]}-1))); do
    for j in $(seq 0 $((${#headers1[@]}-1))); do
        if [ "${headers2[$i]}" == "${headers1[$j]}" ]; then
            idx+=($j)
            break
        fi
    done
done
# idx=(0 1 3) for example
# simple join output function from https://stackoverflow.com/questions/1527049/join-elements-of-an-array
join_by() { local IFS="$1"; shift; echo "$*"; }
# first line - output headers
join_by '|' "${headers2[@]}"
isfirst=true
while IFS='|' read -r -a vals; do
    # ignore first (header) line
    if $isfirst; then
        isfirst=false
        continue
    fi
    # filter from line only columns with idx indices
    tmp=()
    for i in "${idx[@]}"; do
        tmp+=("${vals[$i]}")
    done
    # join output with '|'
    join_by '|' "${tmp[@]}"
done < $file1
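(Here $file1 and $file2 are assumed to already hold the paths of the record file and the header file, e.g. file1=File1; file2=File2.)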
This one respects the order of columns in the header file. To demonstrate, the column order was changed; note that the file roles are swapped here relative to the question: file1 is the header file and file2 is the record file.
$ cat file1
id|country|name
The awk:
$ awk '
BEGIN { FS=OFS="|" }
NR==1 { # file1
n=split($0,a)
next
}
NR==2 { # file2 header
for(i=1;i<=NF;i++)
b[$i]=i
}
{ # output part
for(i=1;i<=n;i++)
printf "%s%s", $b[a[i]], (i==n?ORS:OFS)
}' file1 file2
id|country|name
1|xyz|abc
2|uio|asd
(Another version using cut for the output can be found in the revision history.)
This is similar to RavinderSingh13's solution, in that it first reads the headers from the shorter file, and then decides which columns to keep from the longer file based on the headers on the first line of it.
It however does the output differently. Instead of constructing a string, it shifts the columns to the left if it does not want to include a particular field.
BEGIN { FS = OFS = "|" }
# read headers from first file
NR == FNR { for (i = 1; i <= NF; ++i) header[$i]; next }
# mark fields in second file as "selected" if the header corresponds
# to a header in the first file
FNR == 1 {
for (i = 1; i <= NF; ++i)
select[i] = ($i in header)
}
{
skip = 0
pos = 1
for (i = 1; i <= NF; ++i)
if (!select[i]) { # we don't want this field
++skip
$pos = $(pos + skip) # shift fields left
} else
++pos
NF -= skip # adjust number of fields
print
}
Running this:
$ mawk -f script.awk file2 file1
id|name|country
1|abc|xyz
2|asd|uio
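(Shrinking NF like this to drop the skipped trailing fields works in gawk, mawk, and busybox awk, but rebuilding $0 after decrementing NF is not guaranteed in every old awk, so test with your awk of choice.)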

awk to process the first two lines then the next two and so on

Suppose I have a file which I created from two files, one old and the other updated, by using cat and sort on the primary key.
File1
102310863||7097881||6845193||271640||06007709532577||||
102310863||7097881||6845123||271640||06007709532577||||
102310875||7092992||6840808||023740||10034500635650||||
102310875||7092992||6840818||023740||10034500635650||||
So the pattern of this file is: line 1 = old record, line 2 = updated record, and so on.
Now I want to process the file in such a way that awk first processes the first two lines of the file and finds the difference, then moves on to the next two lines.
The process is:
if ($[old record] != $[new record])
    i = [new record]#[old record];
Desired output
102310863||7097881||6845123#6845193||271640||06007709532577||||
102310875||7092992||6840818#6840808||023740||10034500635650||||
$ cat tst.awk
BEGIN { FS="[|][|]"; OFS="||" }
NR%2 { split($0,old); next }
{
for (i=1;i<=NF;i++) {
if (old[i] != $i) {
$i = $i "#" old[i]
}
}
print
}
$
$ awk -f tst.awk file
102310863||7097881||6845123#6845193||271640||06007709532577||||
102310875||7092992||6840818#6840808||023740||10034500635650||||
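The trick is NR%2, which is true on the odd (old) lines: those are split into old[] and skipped with next, so the main block only runs on the even (updated) lines, tagging each field that differs from the stashed old value.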
This awk could help:
$ awk -F '\\|\\|' '{
getline new;
split(new, new_array, "\\|\\|");
for(i=1;i<=NF;i++) {
if($i != new_array[i]) {
$i = new_array[i]"#"$i;
}
}
} 1' OFS="||" < input_file
102310863||7097881||6845123#6845193||271640||06007709532577||||
102310875||7092992||6840818#6840808||023740||10034500635650||||
I think you are good enough in awk to understand the above code, so I am skipping the explanation.
Updated version, and thanks @martin for the double | trick:
$ cat join.awk
BEGIN {new=0; FS="[|]{2}"; OFS="||"}
new==0 {
split($0, old_data, "[|]{2}")
new=1
next
}
new==1 {
split($0, new_data, "[|]{2}")
for (i = 1; i <= 7; i++) {
if (new_data[i] != old_data[i]) new_data[i] = new_data[i] "#" old_data[i]
}
print new_data[1], new_data[2], new_data[3], new_data[4], new_data[5], new_data[6], new_data[7]
new = 0
}
$ awk -f join.awk data.txt
102310863||7097881||6845123#6845193||271640||06007709532577||||
102310875||7092992||6840818#6840808||023740||10034500635650||||

Swap the 2 columns in output -- sed or awk?

Input file:
GET /static_register_ad_request_1_2037_0_0_0_1_1_4_8335086462.gif?pa=99439_50491&country=US&state_fips_code=US_CA&city_name=Los%2BAngeles&dpId=2&dmkNm=apple&dmlNm=iPod%2Btouch&osNm=iPhone%2BOS&osvNm=5.1.1&bNm=Safari&bvNm=null&spNm=SBC%2BInternet%2BServices&kv=0_0&sessionId=0A80187E0138A0AE42E4DE3F783E7A08&sdk_version=4.0.5.6%20&domain=805AOEtUaMu&ad_catalog=99439_50491&make=APPLE&width=320&height=460&slot_type=PREROLL&model=iPod%20touch%205.1.1&iabcat=artsandentertainment&iabsubcat=music&age=113&gender=2&zip=92869 HTTP/1.1
Output file:
domain sdk_version
805AOEtUaMu 4.0.5.6%20
I could use sed -n 's/.*sdk_version=\([^&]*\).*domain=\([^&]*\).*/\1 \2/p' to get the result, but that puts sdk_version in the first column; what I need is to swap the sdk_version and domain columns in the output file.
Could anyone help me with this? Thank you so much in advance:)
Just swap your backreferences:
sed -n 's/.*sdk_version=\([^&]*\).*domain=\([^&]*\).*/\2 \1/p'
One way using awk:
awk '
BEGIN {
FS = "&";
}
{
for ( i = 15; i <= 16; i++ ) {
split( $i, f, /=/ );
printf( "%s ", f[2] );
}
}
END {
printf "\n";
}
' infile
Output:
4.0.5.6%20 805AOEtUaMu
If you want to handle the parameters appearing in arbitrary order (rather than relying on fixed field positions), I would suggest a key-based lookup in awk or Perl.
perl -ne '$d = $1 if m/[?&]domain=([^&]+)/;
          $s = $1 if m/[?&]sdk_version=([^&]+)/;
          print "$d\t$s\n"' logfile
(The statement-modifier form sidesteps a precedence trap: m/.../ && $d = $1 does not compile, because && binds tighter than =.)
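For completeness, the same position-independent lookup works in awk; a sketch (parameter names taken from the sample request line, infile assumed):
awk -F'[?&]' '{
    split("", v)                    # clear the lookup table (portable array reset)
    for (i=2; i<=NF; i++) {         # fields 2..NF are key=value pairs
        split($i, kv, "=")
        v[kv[1]] = kv[2]
    }
    print v["domain"], v["sdk_version"]
}' infile
This prints domain first and sdk_version second, no matter where they occur in the URL.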
