bash awk moving average with skipping

I am trying to calculate a moving average over a data set. But in addition, I want it to skip a fixed number of data points each time the average 'window' moves. For example, if my data set is a column of numbers from 1 to 20 and my window size is 5, then the current calculation averages (1-5), (2-6), (3-7), (4-8)...
But I want to skip some data each time the window moves; say I want to skip 2, then the new averages would be (1-5), (4-8), (6-10), (8-12)...
Here is the awk file I am currently using. Can anyone help me edit it so that it skips data each time the window moves? I want to be able to change both the skip size and the window size. Thank you very much!
#!/usr/bin/awk -f
BEGIN {
    N = 5                 # the window size
}
{
    n[NR] = $1            # store the value in an array
}
NR >= N {                 # for records where NR >= N
    x = 0                 # reset the sum variable
    delete n[NR-N]        # delete the element that fell out of the window of N
    for (i in n)          # all remaining array elements
        x += n[i]         #   ... must be summed
    print x/N             # print the average for the current window
}

I think your ranges are not well specified, but what you want to achieve can be done with parallel windowing, as below:
awk '{sum[1]+=$1}
     !(NR%5){print NR-4"-"NR, sum[1]/5; sum[1]=0}
     NR>3{sum[4]+=$1}
     NR>3 && !((NR-3)%5){print NR-4"-"NR, sum[4]/5; sum[4]=0}' <(seq 15)
This will give the output below; you can remove the printed ranges, which are there for debugging.
1-5 3
4-8 6
6-10 8
9-13 11
11-15 13
To make the window size and skip count variable:
awk -v w=5 -v s=3 'function pr(x) {print (NR-w+1)"-"NR, sum[x]/w; sum[x]=0}
     {sum[1]+=$1}
     NR>s {sum[s+1]+=$1}
     !(NR%w) {pr(1)}
     NR>s && !((NR-s)%w){pr(s+1)}' file
The first window always starts at 1 and the second window starts at s+1. This can be generalized to more than two windows as well; perhaps you can find someone to do it...
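For what it's worth, here is one hedged sketch of that generalization, assuming a uniform stride of s rows between window starts (note this differs slightly from the interleaved two-window output above, whose effective stride alternates). It opens a new running sum every s rows and closes each one after it has seen w values:

awk -v w=5 -v s=3 '
    (NR-1)%s==0 { sum[NR] = 0 }          # open a new window every s rows
    { for (j in sum) sum[j] += $1 }      # feed the value to every open window
    { for (j in sum)                     # close any window that now holds w values
          if (NR-j+1 == w) { print j"-"NR, sum[j]/w; delete sum[j] } }' <(seq 15)

For seq 15 this prints 1-5 3, 4-8 6, 7-11 9 and 10-14 12.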

I see that you want to print the MA every K ticks instead of printing it for every tick (K=1). So you could add a condition NR%K==0 before printing in your existing code.
But it would be better to keep an array of N elements and overwrite them instead of deleting, using NR%N as the array index. This way, when K is not 1 and the MA is not calculated for a given row, you avoid having to work out which elements to delete, etc.
awk -v n=5 -v k=2 '{ a[NR%n]=$0 }
NR>=n && (NR-n)%k==0 { s=0; for (i in a) s+=a[i]; print NR ":\t" s/n }' file
Update: the condition is now (NR-n)%k==0 so that output always starts from the first tick where the MA can be calculated (that is, NR=n).
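For reference, a quick run of that command against a toy input (using seq, so the averages are easy to verify by hand):

$ seq 12 | awk -v n=5 -v k=2 '{ a[NR%n]=$0 }
    NR>=n && (NR-n)%k==0 { s=0; for (i in a) s+=a[i]; print NR ":\t" s/n }'
5:      3
7:      5
9:      7
11:     9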

Related

How can I find both identical and similar strings in a particular field in a text file in Linux?

My apologies ahead of time - I'm not sure that there is an answer for this one using only Linux command-line fu. Please note I am not a programmer, but I have been playing around with bash and python a bit over the last few years.
I have a large text file with rows and columns that resemble the following (note - fields are separated with tabs):
1074 Beetle OOB11061MNH 12/22/16 Confirmed
3430 Hightop 0817BESTYET 08/07/17 Queued
3431 Hightop 0817BESTYET 08/07/17 Queued
3078 Copland 2017GENERAL 07/07/17 Confirmed
3890 Bartok FOODS 09/11/17 Confirmed
5440 Alphapha 00B1106IMNH 01/09/18 Queued
What I want to do is find and output only those rows where the third field is either identical OR similar to another in the list. I don't really care whether the other fields are similar or not, but they should all be included in the output. By similar, I mean no more than [n] characters are different in that particular field (for example, no more than 3 characters are different). So the output I would want would be:
1074 Beetle OOB11061MNH 12/22/16 Confirmed
3430 Hightop 0817BESTYET 08/07/17 Queued
3431 Hightop 0817BESTYET 08/07/17 Queued
5440 Alphapha 00B1106IMNH 01/09/18 Queued
The line beginning 1074 has a third field that differs by 3 characters from 5440's, so both of them are included. 3430 and 3431 are included because they are exactly identical. 3078 and 3890 are eliminated because they are not similar to anything else.
Through googling the forums I've managed to piece together this rather longish pipeline to be able to find all of the instances where field 3 is exactly identical:
cat inputfile.txt | awk 'BEGIN { OFS=FS="\t" } {if (count[$3] > 1) print $0; else if (count[$3] == 1) { print save[$3]; print $0; } else save[$3] = $0; count[$3]++; }' > outputfile.txt
I must confess I don't really understand awk all that well; I'm just copying and adapting from the web. But that seemed to work great at finding exact duplicates (i.e., it would output only 3430 and 3431 above). But I have no idea how to approach trying to find strings that are not identical but that differ in no more than 3 places.
For instance, in my example above, it should match 1074 and 5440 because they would both fit the pattern:
??B1106?MNH
But I would want it to be able to match also any other random pattern of matches, as long as there are no more than three differences, like this:
20?7G?N?RAL
These differences could be arbitrarily in any position.
The reason for needing this is we are trying to find a way to automatically find typographical errors in a serial-number-like field. There might be a mis-key, or perhaps a letter "O" replaced with a number "0", or the like.
So... any ideas? Thanks for the help!
You can use this script:
$ more hamming.awk
function hamming(x,y,xs,ys,min,max,h,   nx,mx,i) {
    if(x==y) return 0;
    else {
        nx=split(x,xs,"");
        mx=split(y,ys,"");
        min=nx<mx?nx:mx;
        max=nx<mx?mx:nx;
        for(i=1;i<=min;i++) if(xs[i]!=ys[i]) h++;
        return h+(max-min);
    }
}
BEGIN {FS=OFS="\t"}
NR==FNR {
    if($3 in a) nrs[NR];
    for(k in a)
        if(hamming(k,$3)<4) {
            nrs[NR];
            nrs[a[k]];
        }
    a[$3]=NR;
    next
}
FNR in nrs
Usage:
$ awk -f hamming.awk file{,}
It's a double-scan algorithm that finds the Hamming distance (the one you described) between keys. Note that it's an O(n^2) algorithm, so it may not be suitable for very large data sets; however, I'm not sure any other algorithm can do better.
NB: an additional note, based on a comment I had missed in the post. This algorithm compares the keys character by character, so displacements won't be identified. For example, 123 and 23 will give a distance of 3.
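You can see that limitation directly by calling the function on its own, a throwaway check reusing hamming() from the script above (split with an empty separator is gawk behavior, which the script already relies on):

$ awk 'function hamming(x,y,xs,ys,min,max,h,   nx,mx,i) {
        if(x==y) return 0
        nx=split(x,xs,""); mx=split(y,ys,"")
        min=nx<mx?nx:mx; max=nx<mx?mx:nx
        for(i=1;i<=min;i++) if(xs[i]!=ys[i]) h++
        return h+(max-min)
    }
    BEGIN { print hamming("123","23") }'
3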
Levenshtein distance, aka "edit distance", suits your task best. The Perl script below requires installing the module Text::Levenshtein (on Debian/Ubuntu: sudo apt install libtext-levenshtein-perl).
use Text::Levenshtein qw(distance);

$maxdist = shift;
@ll = (<>);
@k = map {
    $k = (split /\t/, $_)[2];
    # $k =~ s/O/0/g;
    $k;
} @ll;
for ($i = 0; $i < @ll; ++$i) {
    for ($j = 0; $j < @ll; ++$j) {
        if ($i != $j and distance($k[$i], $k[$j]) <= $maxdist) {
            print $ll[$i];
            last;
        }
    }
}
Usage:
perl lev.pl 3 inputfile.txt > outputfile.txt
The algorithm is the same O(n^2) as in @karakfa's post, but the matching is more flexible.
Also note the commented-out line # $k =~ s/O/0/g;. If you uncomment it, all O's in the key become 0's, which fixes keys damaged by an O->0 mis-key. When working with damaged data I always use small rules like this to fix the data gradually, refining the rules from run to run, to the point where the data is almost perfect and fuzzy matching is no longer needed.
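As a hedged illustration of that idea, a separate pre-cleaning pass could normalize look-alike characters in field 3 before any fuzzy matching (the O->0 rule is just an example; inputfile.txt and cleaned.txt are placeholder names):

awk 'BEGIN{FS=OFS="\t"} { gsub(/O/,"0",$3); print }' inputfile.txt > cleaned.txt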

Input to different attributes values from a random.sample list

So this is what I'm trying to do, and I'm not sure how, because I'm new to Python. I've searched through a few options and I'm not sure why this doesn't work.
I have 6 different nodes in Maya, called aiSwitch. I need to generate different random numbers from 0 to 5 and input each value into aiSwitch*.index.
In short the result should be
aiSwitch1.index = (random number from 0 to 5)
aiSwitch2.index = (another random number from 0 to 5 different than the one before)
And so on until aiSwitch6.index.
I tried the following:
import maya.cmds as mc
import random

allswitch = mc.ls('aiSwitch*')

for i in allswitch:
    print i
    S = range(0,6)
    print S
    shuffle = random.sample(S, len(S))
    print shuffle
    for w in shuffle:
        print w
        mc.setAttr(i + '.index', w)
This is the result I get from the prints:
aiSwitch1 <-- from print i
[0,1,2,3,4,5] <--- from print S
[2,3,5,4,0,1] <--- from print Shuffle (random.sample results)
2
3
5
4
0
1 <--- from print w, each separate item in the random.sample list.
Now, this happens for every aiSwitch, because it's in a loop of course. And the random numbers are a different list each time, because a new sample is generated every time the loop runs.
So where is the problem then?
aiSwitch1.index = 1
All the other aiSwitch*.index always end up with only the last item in the list by the time I get to the setAttr. It seems that w is retaining the last value of the inner for loop. I don't quite understand how to:
Get a random value from 0 to 5
Input that value in aiSwitch1.index
Get another random value from 0 to 5, different from the one before
Input that value in aiSwitch2.index
Repeat until aiSwitch6.index.
I did get it to work with the following form:
allSwitch = mc.ls('aiSwitch*')
for i in allSwitch:
    mc.setAttr(i + '.index', random.uniform(0,5))
This gave a random number from 0 to 5 to every aiSwitch*.index, but some of them repeat. I think this works because the value is generated fresh every time the loop runs, and the attribute is set with that random number. But the numbers repeat, and I was trying to avoid that. I also tried a shuffle but failed to get any values from it.
My main mistake seems to be that I'm generating a list and sampling it, but failing to assign each different item from that list to a different aiSwitch*.index node. And I'm running out of ideas for this.
Any clues would be greatly appreciated.
Thanks.
Jonathan.
Here is a somewhat Pythonic way: shuffle the list of indices, then iterate over it using zip (which is useful for iterating over structures in parallel, which is what you need to do here):
import random

index = list(range(6))
random.shuffle(index)

allSwitch = mc.ls('aiSwitch*')
for i,j in zip(allSwitch,index):
    mc.setAttr(i + '.index', j)

Using awk to interpolate data based on if statement

So I am trying to automate a data collection process by using awk to search a file for a certain pattern and plug values into the linear interpolation formula. The data in question tracks time versus position, and I need to interpolate the time at which the position equals zero. Example:
100 0.5
200 0.2
300 -0.3
400 -0.7
Then, my interpolation looks like this:
interpolated_time = 200 + (0 - 0.2) * (300 - 200) / (-0.3 - 0.2)
I am going to write the script in bash and use the bc calculator for the arithmetic. However, I am inexperienced with awk and cannot figure out how to correctly search the file.
I want to do something like
awk '{if ($2 > 0) #add another statement to test if $2 < 0 on next line#}'
# If test is successful, store entries in variables or an array
The interpolation may need to be performed multiple times in one file. I may need to output all values in question to an array, and then input the paired indexes into the interpolation formula. (i.e. indices [1,2] [3,4] [5,6] are paired together for separate interpolations)
I know that awk works on a line-by-line test loop, but I don't know if there is a way to incorporate the previous or next line in the test (perhaps something like next or getline?).
Any suggestions or comments would be greatly appreciated!
This will give you the result 240:
awk '{if(p2>0 && $2<0) print p1-p2*($1-p1)/($2-p2); p1=$1; p2=$2}'
It doesn't handle the case where 0 is already in the data set, and it assumes the transition is from positive to negative.
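If those cases matter, here is a hedged extension of the same idea that handles a crossing in either direction and a position that is exactly zero (a sketch, untested against edge cases such as consecutive zeros):

awk '((p2>0 && $2<0) || (p2<0 && $2>0)) {   # sign change in either direction
         print p1 - p2*($1-p1)/($2-p2)      # same interpolation formula
     }
     $2==0 { print $1 }                     # zero is already in the data
     { p1=$1; p2=$2 }' file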

In a column of numbers, find the closest value to some target value

Let's say I have some numerical data in columns, something like
11.100000 36.829657 6.101642
11.400000 36.402069 5.731998
11.700000 35.953025 5.372652
12.000000 35.482082 5.023737
12.300000 34.988528 4.685519
12.600000 34.471490 4.358360
12.900000 33.930061 4.042693
13.200000 33.363428 3.738985
13.500000 32.770990 3.447709
13.800000 32.152473 3.169312
I also have a single target value and a column index. Given this set of data, I want to find the closest value to the target value in the column with the specified index.
For example, if my target value is 11.6 in column 1, then the script should output 11.7. If there are two numbers equidistant from the target value, then the higher value should be output.
I have a feeling that awk has the necessary functionality to do this, but any solution that works in a bash script is welcome.
try this:
awk -v c=2 -v t=35 'NR==1{d=$c-t;d=d<0?-d:d;v=$c;next}{m=$c-t;m=m<0?-m:m}m<d{d=m;v=$c}END{print v}' file
The -v c=2 and -v t=35 can be dynamic values: they are the column index (c) and your target value (t). In the line above, the parameters are column 2 and target 35. They could be shell variables.
The output of the above line, based on the given input data, is:
kent$ awk -v c=2 -v t=35 'NR==1{d=$c-t;d=d<0?-d:d;v=$c;next}{m=$c-t;m=m<0?-m:m}m<d{d=m;v=$c}END{print v}' f
34.988528
kent$ awk -v c=1 -v t=11.6 'NR==1{d=$c-t;d=d<0?-d:d;v=$c;next}{m=$c-t;m=m<0?-m:m}m<d{d=m;v=$c}END{print v}' f
11.700000
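For example, the same command fed from shell variables (a trivial sketch):

col=2
target=35
awk -v c="$col" -v t="$target" 'NR==1{d=$c-t;d=d<0?-d:d;v=$c;next}{m=$c-t;m=m<0?-m:m}m<d{d=m;v=$c}END{print v}' file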
EDIT
If there are two numbers equidistant from the target value, then the higher value should be output
The above code didn't check this requirement; the one below should work:
awk -v c=1 -v t=11.6 '{a[NR]=$c}END{
asort(a);d=a[NR]-t;d=d<0?-d:d;v = a[NR]
for(i=NR-1;i>=1;i--){
m=a[i]-t;m=m<0?-m:m
if(m<d){
d=m;v=a[i]
}
}
print v
}' file
test:
kent$ awk -v c=1 -v t=11.6 '{a[NR]=$c}END{
asort(a);d=a[NR]-t;d=d<0?-d:d;v = a[NR]
for(i=NR-1;i>=1;i--){
m=a[i]-t;m=m<0?-m:m
if(m<d){
d=m;v=a[i]
}
}
print v
}' f
11.700000
A short explanation.
I won't explain what each line of code does, just the idea behind the job:
first, read all elements in the given column and save them in an array
sort the array
take the last element of the array (the greatest number), assign it to the variable v, and compute the absolute difference between it and the given target, saving it in d
loop from the second-to-last element of the array down to the first; if the absolute difference between an element and the target is less than d, overwrite d with that difference and save the current element into v
print v; after the loop, v is the answer
Some notes:
there is room to optimize the logic; e.g. we don't have to loop through the whole array: since it is sorted, we can stop as soon as a new difference is greater than d.
due to the sort, this algorithm is O(n log n); in fact the problem can be solved in O(n). If your input data were huge, with a worst case (e.g. your column has values in the range 500-99999999999, but your target is 1), you might want to avoid the sort. But I assume performance is not an issue for you.
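For completeness, a hedged one-pass O(n) sketch that also implements the tie rule (on equal distance, keep the higher value), with no sort needed:

awk -v c=1 -v t=11.6 '
    { m = $c - t; m = m<0 ? -m : m }                     # absolute distance to target
    NR==1 || m < d || (m == d && $c > v) { d=m; v=$c }   # closer wins; ties go to the higher value
    END { print v }' file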
Perl solution:
#!/usr/bin/perl
use warnings;
use strict;

@ARGV == 2 or die "Usage: closest column value < input\n";
my ($column, $target) = (shift, shift);
my $closest;
while (<>) {
    my $value = (split)[$column - 1];
    if ($. == 1
        or abs($closest - $target) > abs($target - $value)
        or abs($closest - $target) == abs($target - $value)
           && $value > $closest) {
        $closest = $value;
    }
}
print $closest, "\n";
Note that comparing floats with == might not work (see What Every Computer Scientist Should Know About Floating-Point Arithmetic). You might need something like abs(abs($closest - $target) - abs($target - $value)) < 1e-14.
Let's try another way, although Kent's answer must be shorter and sharper :)
awk -vc=1 -vv=13.6 '
BEGIN{l=$c; ld=99}
{d=($c-v>=0) ? ($c-v) : v-$c; if (d <= ld) {ld=d; l=$c}}
END{print l}' file
We provide the c (=column) and v (=value) parameters in the beginning.
Then we keep track of the closest value l and the lowest distance ld. For each row we calculate the distance d to the target value and, if it is lower than the previous ld, we update ld and save the new closest value in l. Finally we print l.
The d=($c-v>=0) ? ($c-v) : v-$c is a way to save the distance as an absolute value: if c-v is negative, save it as positive. It is based on the ternary variable = (condition) ? if_yes : if_no structure.
Tests
$ awk -vc=2 -vv=13.6 'BEGIN{l=$c; ld=99} {d=($c-v>=0) ? ($c-v) : v-$c; if (d <= ld) {ld=d; l=$c}} END{print l}' file
32.152473
$ awk -vc=3 -vv=10.6 'BEGIN{l=$c; ld=99} {d=($c-v>=0) ? ($c-v) : v-$c; if (d <= ld) {ld=d; l=$c}} END{print l}' file
3.169312

Reading delimited value from file into a array variable

I want to read data.txt, which has a 2x2 matrix of numbers inside, delimited by tabs, like this:
0.5 0.1
0.3 0.2
Is there any way to read this file in bash, store it into an array, process it a little, and then export it to a file again? For example, in MATLAB:
a = dlmread('data.txt')    % read file into array variable a
for i = 1:2
    for j = 1:2
        b(i,j) = a(i,j) + 100;
    end
end
dlmwrite('data2.txt', b)   % export array value b to data2.txt
If the extent of your processing is something simple like adding 100 to every entry, a simple awk command like this might work:
awk '{ for(i = 1; i <= NF - 1; i++) { printf("%.1f%s", $i + 100, OFS); } printf("%.1f%s", $NF+100, ORS); }' < matrix.txt
This just loops through each row and adds 100. It's possible to do more complex operations too, but if you really want to process matrices there are better tools (like python+numpy or octave).
It's also possible to use bash arrays, but to do any of the operations you'd have to call an external program anyway, since bash doesn't handle floating-point arithmetic; a sketch of that approach is below.
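A hedged sketch of that bash-array route (assumes bash 4+ for mapfile; the file names data.txt and data2.txt are from the question, and the arithmetic is still delegated to awk):

#!/bin/bash
# read the matrix into a bash array, one row per element
mapfile -t rows < data.txt

# add 100 to every entry; awk does the floating-point work
for r in "${rows[@]}"; do
    awk -v OFS='\t' '{ for (i = 1; i <= NF; i++) $i += 100; print }' <<< "$r"
done > data2.txt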
