Add up every 5 rows in a column of integers - bash

I am writing a parser and have to do some fancy stuff. I am trying not to use Python, but I might have to at this point.
Given output on stdout that looks like this:
1
0
2
3
0
0
1
0
0
2
0
3
0
4
0
5
0
2
.
.
.
For about 100,000 lines. What I need to do is add up every 5 rows, like so:
1 - start
0 |
2 | - 6
3 |
0 - end
0 - start
1 |
0 | - 3
0 |
2 - end
0 - start
3 |
0 | - 7
4 |
0 - end
5
0
2
.
.
.
The -, |, start, and end are just for visual representation; I only need the sums as a column:
6
3
7
.
.
.
I currently have a method of doing this using an incrementing head -n $i plus tail -n 5 to cut 5 rows out of the list, then paste -sd+ - | bc to add up the values. But this is way too slow because there are 100,000 rows.
If anyone has anything to add I would appreciate it. Let me know if more info is needed.
Thank you

It looks like awk is a natural tool to use:
awk '{ sum += $1 } NR % 5 == 0 { print sum; sum = 0 }'
Add values in column 1 to sum. If the record number modulo 5 is 0, print the sum and reset it to 0. Note that if the last group of records is short (1-4 elements in the group), their sum is not printed. If you want the sum for the short group printed, add END { if (NR % 5 != 0) print sum } to the script.
Since this makes a single pass over the data file using a single command, it will be hard to beat it. Using Perl might be a little faster. I don't know how Python would fare against either Awk or Perl.
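For reference, a minimal sketch of the full command with the short-group handling added (some_command is just a stand-in for whatever produces the 100,000 lines on stdout):
some_command |
awk '{ sum += $1 }                         # accumulate column 1
     NR % 5 == 0 { print sum; sum = 0 }    # every 5th record, emit and reset
     END { if (NR % 5 != 0) print sum }'   # flush a trailing short group, if any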

You can use awk for it.
Say a file named file1 contains:
1
0
2
3
0
0
1
0
0
2
0
3
0
4
0
5
0
.
.
.
The awk command then goes like this:
awk 'BEGIN{sum=0} {sum+=$1; if(NR%5==0){print sum; sum=0}}' file1

Related

select multiple patterns with grep

I have a file that looks like this:
t # 3-7, 1
v 0 104
v 1 92
v 2 95
u 0 1 2
u 0 2 2
u 1 2 2
t # 3-8, 1
v 0 94
v 1 13
v 2 19
v 3 5
u 0 1 2
u 0 2 2
u 0 3 2
t # 3-9, 1
v 0 94
v 1 13
v 2 19
v 3 7
u 0 1 2
u 0 2 2
u 0 3 2
t corresponds to the header of each block.
I would like to extract multiple patterns from the file and output the transactions that contain all of the required patterns together.
I tried the following code:
ps | grep -e 't\|u 0 1 2' file.txt
and it works well to extract the headers and the pattern 'u 0 1 2'. However, when I add one more pattern, the output lists only the headers starting with t #. My modified code looks like this:
ps | grep -e 't\|u 0 1 2 && u 0 2 2' file.txt
I tried sed and awk solutions, but they did not work for me either.
Thank you for your help!
Olha
Use | as the separator before the third alternative, just like the second alternative.
grep -E 't|u 0 1 2|u 0 2 2' file.txt
Also, it doesn't make sense to specify a filename and also pipe ps to grep. If you provide filename arguments, it doesn't read from the pipe (unless you use - as a filename).
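To illustrate the difference (a minimal sketch; some_command is just a placeholder for whatever produces input on stdout):
grep -E 't|u 0 1 2|u 0 2 2' file.txt          # reads file.txt; anything piped in is ignored
some_command | grep -E 't|u 0 1 2|u 0 2 2' -  # the explicit - makes grep read stdin instead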
You can use grep with multiple -e expressions to grep for more than one thing at a time:
$ printf '%d\n' {0..10} | grep -e '0' -e '5'
0
5
10
Expanding on @kojiro's answer, you'll want to use an array to collect arguments:
mapfile -t lines < file.txt
for line in "${lines[@]}"
do
arguments+=(-e "$line")
done
grep "${arguments[#]}"
You'll probably need a condition within the loop to check whether the line is one you want to search for, but that's it.
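A sketch of that idea, assuming (purely for illustration) that the patterns of interest are listed in a file called patterns.txt and all start with u:
arguments=(-e 't')                    # always keep the t block headers
while IFS= read -r line; do
    [[ $line == u* ]] || continue     # hypothetical filter; adjust to your criteria
    arguments+=(-e "$line")
done < patterns.txt
grep "${arguments[@]}" file.txt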

bash search output for similar text and perform calculation between the 2

I am working on a script that runs pm2 list and assigns the output to a variable, waits X seconds, and runs it again, assigning that to a different variable. Then I run those through comm <(echo "$pm2_1") <(echo "$pm2_2") -3, which gives me only the output that differs between the two, in a nice format:
name ID restart count
prog-name 0 1
prog-name 0 2
prog-name-live 10 1
prog-name-live 10 8
prog-name-live 3 1
prog-name-live 3 4
prog-name-live 6 1
prog-name-live 6 6
What I need is a way to compare the restart counts on the two lines with the same ID, e.g.:
name ID restart count
prog-name 0 1
prog-name 0 2
prog-name-worker 10 1
prog-name-worker 10 8
Any ideas would be very helpful!
Thanks
awk supports hashes (associative arrays); hope that helps:
awk '{k=$1" "$2; a[k]=$3; print k, a[k]}'
Here is an example of using it to find the difference; you can adapt the logic as needed:
awk '{k=$1" "$2; if (a[k]==0)a[k]=$3; else {a[k]-=$3; q=a[k]>0?a[k]:a[k]*-1;print k,q}}'

How do I filter tab-separated input by the count of fields with a given value?

My data (tab-separated):
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0
...
How can I grep the lines with exactly, for example, five '1's?
Ideal output:
1 0 0 1 0 1 1 0 1
Also, how can I grep lines with five or more (>=5) '1's?
Ideal output:
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
I tried:
grep 1$'\t'1$'\t'1$'\t'1$'\t'1
However, this only matches consecutive '1's, which is not all I want.
I wonder if there is a simple method to achieve this. Thank you!
John Bollinger's helpful answer and anishane's answer show that it can be done with grep, but, as has been noted, that is quite cumbersome, given that regular expressions aren't designed for counting.
awk, by contrast, is built for field-based parsing and counting (often combined with regular expressions to identify field separators, or, as below, the fields themselves).
Assuming you have GNU awk, you can use the following:
Exactly 5 1s:
awk -v FPAT='\\<1\\>' 'NF==5' file
5 or more 1s:
awk -v FPAT='\\<1\\>' 'NF>=5' file
Special variable FPAT is a GNU awk extension that allows you to identify fields via a regex that describes the fields themselves, in contrast with the standard approach of using a regex to define the separators between fields (via special variable FS or option -F):
'\\<1\\>' identifies any "isolated" 1 (surrounded by non-word characters) as a field, based on word-boundary assertions \< and \>; the \ must be doubled here so that the initial string parsing performed by awk doesn't "eat" single \s.
Standard variable NF contains the count of input fields in the line at hand, which allows easy numerical comparison. If the conditional evaluates to true, the input line at hand is implicitly printed (in other words: NF==5 is implicitly the same as NF==5 { print } and, more verbosely, NF==5 { print $0 }).
A POSIX-compliant awk solution is a little more complicated:
Exactly 5 1s:
awk '{ l=$0; gsub("[\t0]", "") }; length($0)==5 { print l }' file
5 or more 1s:
awk '{ l=$0; gsub("[\t0]", "") }; length($0)>=5 { print l }' file
l=$0 saves the input line ($0) in its original form in variable l.
gsub("[\t0]", "") replaces all \t and 0 chars. in the input line with the empty string, i.e., effectively removes them, and only leaves (directly concatenated) 1 instances (if any).
length($0)==5 { print l } then prints the original input line (l) only if the resulting string of 1s (i.e., the count of 1s now stored in the modified input line ($0)) matches the specified count.
You can use grep. But that would be an abuse of regex.
$ cat countme
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
0 0 0 0 0 0 0 0 0
$ grep -P '^[0\t]*(1[0\t]*){5}[0\t]*$' countme # Match exactly 5
1 0 0 1 0 1 1 0 1
$ grep -P '^[0\t]*(1[0\t]*){5,}[0\t]*$' countme # Match >=5
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
You can do this to get lines with exactly five '1's:
grep '^[^1]*\(1[^1]*\)\{5,5\}[^1]*$'
You can simplify that to this for at least five '1's:
grep '\(1[^1]*\)\{5,\}'
The enumerated quantifier (\{n,m\}) enables you to conveniently specify a particular number or range of numbers of consecutive matches to a sub-pattern. To avoid matching lines with extra matches to such a pattern, however, you must also anchor it to the beginning and end of the line.
The other trick is to make sure the gaps before the first 1, between the 1s, and after the last 1 are matched. In your case, all of those gaps can be represented pretty simply as ranges of zero or more characters other than 1: [^1]*. Putting those pieces together gives you the above regular expressions.
Do
sed -nE '/^([^1]*1[^1]*){5}$/p' your_file
for exactly 5 matches and
sed -nE '/^([^1]*1[^1]*){5,}$/p' your_file
for 5 or more matches.
Note: In GNU sed you may not see the -E option in the manpage, but it is supported. Using -E is for portability to, say, Mac OSX.
with perl
$ perl -ane 'print if (grep {$_==1} @F) == 5' ip.txt
1 0 0 1 0 1 1 0 1
$ perl -ane 'print if (grep {$_==1} @F) >= 5' ip.txt
1 0 0 1 0 1 1 0 1
1 1 0 1 0 1 0 1 1
1 1 1 1 1 1 1 1 1
-a automatically splits the input line on whitespace and saves the fields to the @F array
grep {$_==1} @F returns an array containing the elements of @F that are exactly equal to 1
(grep {$_==1} @F) == 5 evaluates the grep in scalar context, so the comparison is done on the number of elements in the array
See http://perldoc.perl.org/perlrun.html#Command-Switches for details on -ane options

Unix: Count occurrences of similar entries in first column, sum the second column

I have a file with two columns of data. I would like to count the occurrences of identical entries in the first column. When two entries in the first column match, I would also like to sum the values of the second column for those matched entries.
Example list:
2013-11-13-03 1
2013-11-13-06 1
2013-11-13-13 2
2013-11-13-13 1
2013-11-13-15 1
2013-11-13-15 1
2013-11-13-15 1
2013-11-13-17 1
2013-11-13-23 1
2013-11-14-01 1
2013-11-14-04 6
2013-11-14-07 1
2013-11-14-08 1
2013-11-14-09 1
2013-11-14-09 1
I would like the output to read similar to the following
2013-11-13-03 1 1
2013-11-13-06 1 1
2013-11-13-13 2 3
2013-11-13-15 3 3
2013-11-13-17 1 1
2013-11-13-23 1 1
2013-11-14-01 1 1
2013-11-14-04 1 6
2013-11-14-07 1 1
2013-11-14-08 1 1
2013-11-14-09 2 2
Column 1 is the matched value from column 1 of the earlier example, column 2 is the count of matches of that value (1 if there are no other matches), and column 3 is the sum of column 2 over the matched entries from the earlier example. Anyone have any tips on doing this using awk or a mixture of uniq and awk?
Here's a quickie with awk and sort:
awk '
{
counts[$1]++; # Increment count of lines.
totals[$1] += $2; # Accumulate sum of second column.
}
END {
# Iterate over all first-column values.
for (x in counts) {
print x, counts[x], totals[x];
}
}
' file.txt | sort
You can skip the sort if you don't care about the order of output lines.
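If you happen to be on GNU awk, a variation of the same idea can sort inside awk itself instead of piping to sort (a sketch; PROCINFO["sorted_in"] is gawk-specific):
awk '{ counts[$1]++; totals[$1] += $2 }
     END {
       PROCINFO["sorted_in"] = "@ind_str_asc"   # gawk-only: iterate keys in ascending string order
       for (x in counts) print x, counts[x], totals[x]
     }' file.txt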
Here is a pure Bash solution:
$ cat t
2013-11-13-03 1
2013-11-13-06 1
2013-11-13-13 2
2013-11-13-13 1
2013-11-13-15 1
2013-11-13-15 1
2013-11-13-15 1
2013-11-13-17 1
2013-11-13-23 1
2013-11-14-01 1
2013-11-14-04 6
2013-11-14-07 1
2013-11-14-08 1
2013-11-14-09 1
2013-11-14-09 1
$ declare -A SUM CNT
$ while read ts vl; do (( SUM[$ts]+=$vl )); (( CNT[$ts]++ )); done < t
$ for i in "${!CNT[@]}"; do echo "$i ${CNT[$i]} ${SUM[$i]}"; done | sort
2013-11-13-03 1 1
2013-11-13-06 1 1
2013-11-13-13 2 3
2013-11-13-15 3 3
2013-11-13-17 1 1
2013-11-13-23 1 1
2013-11-14-01 1 1
2013-11-14-04 1 6
2013-11-14-07 1 1
2013-11-14-08 1 1
2013-11-14-09 2 2

Another split file in bash - based on difference between rows of column x

Hello stackoverflow users!
Generally, I would like to tune up a script I am using, just to make it less sensitive to missing data.
My example data looks like this (tab-delimited file with a header):
ColA ColB ColC
6 0 0
3 5.16551 12.1099
1 10.2288 19.4769
6 20.0249 30.6543
3 30.0499 40.382
1 59.9363 53.2281
2 74.9415 57.1477
2 89.9462 61.3308
6 119.855 64.0319
4 0 0
8 5.06819 46.8086
6 10.0511 60.1357
9 20.0363 71.679
6 30.0228 82.1852
6 59.8738 98.4446
3 74.871 100.648
1 89.9973 102.111
6 119.866 104.148
3 0 0
1 5.07248 51.9168
2 9.92203 77.3546
2 19.9233 93.0228
6 29.9373 98.7797
6 59.8709 100.518
6 74.7751 100.056
3 89.9363 99.5933
1 119.872 100
I use an awk script found elsewhere, as follows:
awk 'BEGIN { fn=0 }
NR==1 { next }
NR==2 { delim=$2 }
$2 == delim {
f=sprintf("file_no%02d.txt",fn++);
print "Creating " f
}
{ print $0 > f }'
This gives me the output I want: skip the first line, take the second column of the first data row, and use its value as the delimiter (in this example it will be '0'):
file_no00.txt
6 0 0
3 5.16551 12.1099
1 10.2288 19.4769
6 20.0249 30.6543
3 30.0499 40.382
1 59.9363 53.2281
2 74.9415 57.1477
2 89.9462 61.3308
6 119.855 64.0319
file_no01.txt
4 0 0
8 5.06819 46.8086
6 10.0511 60.1357
9 20.0363 71.679
6 30.0228 82.1852
6 59.8738 98.4446
3 74.871 100.648
1 89.9973 102.111
6 119.866 104.148
file_no02.txt
3 0 0
1 5.07248 51.9168
2 9.92203 77.3546
2 19.9233 93.0228
6 29.9373 98.7797
6 59.8709 100.518
6 74.7751 100.056
3 89.9363 99.5933
1 119.872 100
To make the script more robust (imagine that the rows with 0's are deleted), I would need to split the file based on the difference between rows 'n+1' and 'n' of the second column: basically, if (value_row_n+1) - value_row_n < 0, start a new file. Of course I would also need to maintain the file naming. The preferred way is bash with awk. Any advice? Thanks in advance!
Cheers!
Here is an awk command that you can use:
cat file
ColA ColB ColC
3 5.16551 12.1099
1 10.2288 19.4769
6 20.0249 30.6543
3 30.0499 40.382
1 59.9363 53.2281
2 74.9415 57.1477
2 89.9462 61.3308
6 119.855 64.0319
8 5.06819 46.8086
6 10.0511 60.1357
9 20.0363 71.679
6 30.0228 82.1852
6 59.8738 98.4446
3 74.871 100.648
1 89.9973 102.111
6 119.866 104.148
1 5.07248 51.9168
2 9.92203 77.3546
2 19.9233 93.0228
6 29.9373 98.7797
6 59.8709 100.518
6 74.7751 100.056
3 89.9363 99.5933
1 119.872 100
awk 'NR == 1 {
next
}
!p || $2 < p {
f = sprintf("file_no%02d.txt",fn++);
print "Creating " f
}
{
p = $2;
print $0 > f
}' file
I suggest small modifications to your current script:
awk 'BEGIN { fn=0; f=sprintf("file_no%02d.txt",fn++); print "Creating " f }
NR==1 { next }
NR==2 { delim=$2 }
$2 - delim < 0 {
f=sprintf("file_no%02d.txt",fn++);
print "Creating " f
}
{ print $0 > f; delim = $2 }' infile
First, create the first file name just before starting the processing.
Second, in the last rule, save the value of the current line so it can be compared with the value of the next line.
Third, instead of comparing with zero, subtract the previous value from the current one and check whether the result is less than zero.
It yields:
==> file_no00.txt <==
6 0 0
3 5.16551 12.1099
1 10.2288 19.4769
6 20.0249 30.6543
3 30.0499 40.382
1 59.9363 53.2281
2 74.9415 57.1477
2 89.9462 61.3308
6 119.855 64.0319
==> file_no01.txt <==
4 0 0
8 5.06819 46.8086
6 10.0511 60.1357
9 20.0363 71.679
6 30.0228 82.1852
6 59.8738 98.4446
3 74.871 100.648
1 89.9973 102.111
6 119.866 104.148
==> file_no02.txt <==
3 0 0
1 5.07248 51.9168
2 9.92203 77.3546
2 19.9233 93.0228
6 29.9373 98.7797
6 59.8709 100.518
6 74.7751 100.056
3 89.9363 99.5933
1 119.872 100
