I have an analog power signal from a motor. The signal ramps up quickly but powers off slowly over the course of several seconds; the descent looks almost like a series of plateaus. The problem is that the signal doesn't settle back to zero. It settles back to an intermediate level that is unknown and varies from motor to motor. See the sample data below.
I'm trying to find a way to determine when the motor is off and at that intermediate level.
My thought is to find and store the max point and calculate the slopes thereafter, until the slope is steeper (more negative) than some large negative value like -160 (~ -60 degrees), and then declare that the motor must be powering off. The sample points below have all duplicates removed (there are about 5,000 samples typically).
My problem is determining the X values. In the formula (y2 - y1) / (x2 - x1), the x values could be far enough apart in time that the slope never appears steeper than -30 degrees. Picking an absolute spacing like 10 would fix this, but is there a more mathematically sound method?
The data below shows the slope calculated with the method described above, relative to the max of 921, i.e. (y2 - y1) / ((10+1) - 10). In this scheme, at data point 9 I would say the motor is "off". I'm looking for a more precise means of determining an X spacing rather than arbitrarily picking 10, for instance.
+---+-----+----------+
| X | Y | Slope |
+---+-----+----------+
| 1 | 65 | 856.000 |
| 2 | 58 | 863.000 |
| 3 | 57 | 864.000 |
| 4 | 638 | 283.000 |
| 5 | 921 | 0.000 |
| 6 | 839 | -82.000 |
| 7 | 838 | -83.000 |
| 8 | 811 | -110.000 |
| 9 | 724 | -197.000 |
+---+-----+----------+
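For concreteness, here is a minimal sketch of the peak-then-slope scheme described above (Python; the function name and the fixed spacing dx are illustrative only, and choosing dx is exactly the open question):

def detect_off_index(samples, slope_threshold=-160, dx=1):
    """Return the index at which the motor appears to be powering off, or None.

    Mirrors the scheme in the table above: store the max, then compare each
    later sample against it using a *fixed* x-spacing dx (the table uses 1,
    the text mentions 10) instead of the true time distance.
    """
    peak = max(range(len(samples)), key=lambda i: samples[i])
    for i in range(peak + 1, len(samples)):
        slope = (samples[i] - samples[peak]) / dx
        if slope <= slope_threshold:
            return i
    return None

# With the Y column above, this reports "off" at data point 9 (slope -197 <= -160):
samples = [65, 58, 57, 638, 921, 839, 838, 811, 724]
print(detect_off_index(samples))   # -> 8 (0-based index), i.e. data point 9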
EDIT: A much simpler answer:
Since your motor is either ON or OFF, and ON wattages are strictly higher than OFF wattages, you should be able to discriminate between ON and OFF wattages by maintaining an average wattage, reporting ON if the current measurement is higher than the average and OFF if it is lower.
Count = 0
Average = 500

Whenever a measurement comes in:
    Count = Count + 1
    Average = Average + (Measurement - Average) / Count
    Return Measurement > Average ? ON : OFF
This represents an average of all the values the wattage has ever been. If we want to eventually "forget" the earliest values (before the motor was ever turned on), we could either keep a buffer of recent values and use that for a moving average, or approximate a moving average with an IIR like
Average = (1-X) * Average + X * Measurement
for some X between 0 and 1 (closer to 0 to change more slowly).
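A minimal runnable sketch of both variants (the cumulative average and the IIR / exponential moving average), written in Python for concreteness; the class name and parameter values are illustrative, not from any library:

class OnOffDetector:
    """Report ON/OFF by comparing each measurement against a running average."""

    def __init__(self, initial_average=500.0, alpha=None):
        # alpha=None    -> true cumulative average (the pseudocode above)
        # 0 < alpha < 1 -> IIR: Average = (1 - alpha) * Average + alpha * Measurement
        self.average = initial_average
        self.alpha = alpha
        self.count = 0

    def update(self, measurement):
        self.count += 1
        if self.alpha is None:
            self.average += (measurement - self.average) / self.count
        else:
            self.average = (1 - self.alpha) * self.average + self.alpha * measurement
        return "ON" if measurement > self.average else "OFF"

# Example: detector = OnOffDetector(alpha=0.05); detector.update(921) -> "ON"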
Original answer:
You could treat this as an online clustering problem, where you expect three clusters (before the motor turns on, when the motor is on, and when the motor is turned off), or perhaps four (before the motor turns on, peak power, when the motor is running normally, and when the motor turns off). In effect, you're trying to learn what it looks like when a motor is on (or off).
If you don't have any other information about whether the motor is on or off (which could be used to train a model), here's a simple approach:
Define an "Estimate" to contain:
float Value
int Count
Define an "Estimator" to contain:
float TotalError = 0.0
Estimate COLD_OFF = {Value = 0, Count = 1}
Estimate ON = {Value = 1000, Count = 1}
Estimate WARM_OFF = {Value = 500, Count = 1}
a function Update_Estimate(float Measurement)
Find the Estimate E such that E.Value is closest to Measurement
Update TotalError = TotalError + (E.Value - Measurement)*(E.Value - Measurement)
Update E.Value = (E.Value * E.Count + P) / (E.Count + 1)
Update E.Count = E.Count + 1
return E
This takes initial guesses for what the wattages of these stages should be and updates them with the measurements. However, this has some problems. What if our initial guesses are off?
You could initialize some number of Estimators with different possible (e.g. random) guesses for COLD_OFF, ON, and WARM_OFF; after receiving a measurement, let each Estimator update itself and aggregate their values somehow. This aggregation should reward the better estimates. Since you're storing TotalError for each estimate, you could just pick the output of the Estimator that has the lowest TotalError so far, or you could let the Estimators vote (giving each Estimator's vote a weight proportional to 1/(TotalError + 1) or something like that).
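Here is a sketch of that ensemble idea in Python (the class names mirror the pseudocode above; the ranges used for the random initial guesses are made-up illustrations, not tuned values):

import random

class Estimate:
    def __init__(self, value):
        self.value = value
        self.count = 1

class Estimator:
    def __init__(self, cold_off, on, warm_off):
        self.total_error = 0.0
        self.estimates = {"COLD_OFF": Estimate(cold_off),
                          "ON": Estimate(on),
                          "WARM_OFF": Estimate(warm_off)}

    def update(self, measurement):
        # Pick the estimate whose value is closest to the measurement,
        # accumulate its squared error, then pull it toward the measurement.
        label, e = min(self.estimates.items(),
                       key=lambda kv: abs(kv[1].value - measurement))
        self.total_error += (e.value - measurement) ** 2
        e.value = (e.value * e.count + measurement) / (e.count + 1)
        e.count += 1
        return label

# Several estimators with different random guesses; trust the one with the
# lowest total error so far (the simplest aggregation mentioned above).
estimators = [Estimator(random.uniform(0, 200),
                        random.uniform(800, 1100),
                        random.uniform(300, 700))
              for _ in range(20)]

def classify(measurement):
    labels = [est.update(measurement) for est in estimators]
    best = min(range(len(estimators)),
               key=lambda i: estimators[i].total_error)
    return labels[best]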
I want to transform the estimates from a GEE model into estimates that are easy to interpret. I am analyzing in Stata v17 (for Mac) the data from a therapeutic intervention in a pilot study with 26 individuals, randomized to either the treatment or a placebo. The outcome is a set of inflammatory proteins, whose expression has been normalized on an arbitrary log2 scale. There are longitudinal measurements at weeks 0, 1, 2 and 3.
To evaluate the impact of treatment on each protein, I have used GEE models. For each protein, the code looks like this:
xtgee log2_biomarker treatment##c.week, family(gaussian) link(identity) corr(ar 1)
And the model output
GEE population-averaged model Number of obs = 104
Group and time vars: id week Number of groups = 26
Family: Gaussian Obs per group:
Link: Identity min = 4
Correlation: AR(1) avg = 4.0
max = 4
Wald chi2(3) = 11.38
Scale parameter = 1.018093 Prob > chi2 = 0.0098
----------------------------------------------------------------------------------
log2_biomarker | Coefficient Std. err. z P>|z| [95% conf. interval]
-----------------+----------------------------------------------------------------
treatment |
Placebo | .1010699 .379534 0.27 0.790 -.6428031 .8449428
week | -.1841974 .125257 -1.47 0.141 -.4296967 .0613018
|
treatment#c.week |
Placebo | .3919614 .1771402 2.21 0.027 .044773 .7391497
|
_cons | .063295 .268371 0.24 0.814 -.4627026 .5892926
----------------------------------------------------------------------------------
The interaction term "treatment#c.week" indicates that this protein increases more over time in the placebo arm. In order to put it in context with the estimates from the models for the other proteins, I would like to translate this 0.39 coefficient into something like this:
"Subjects in the placebo arm experience a X % (or X-fold) greater protein increase per week".
But, having a log2-transformed outcome, I am struggling to come up with the correct formula.
Thanks!
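For what it's worth, a common back-transformation for a log2-scale coefficient is to exponentiate with base 2; a sketch of that arithmetic using the output above (this assumes the usual multiplicative reading of a log-scale slope, nothing Stata-specific):

2^{\hat\beta_{\text{placebo} \times \text{week}}} = 2^{0.392} \approx 1.31

Under that reading, subjects in the placebo arm experience roughly a 1.31-fold (about 31%) greater protein increase per week than the treatment arm; exponentiating the confidence limits the same way gives approximately 2^{0.045} \approx 1.03 to 2^{0.739} \approx 1.67.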
I have the below question:
In the first part of the question, it says the probability that the selected person will be a male is 0.44, which means the number of males is 25*0.44 = 11. That's fine.
In the second part, the probability that the selected person will be a male who was born before 1960 is 0.28. Does that mean 0.28 out of the total number, which is 25, or out of the number of males?
I mean, should the number of males who were born before 1960 equal 25*0.28 or 11*0.28?
I find it easiest to think of these sorts of problems as contingency tables.
You use a matrix layout to express the distributions in terms of two or more factors or characteristics, each having two or more categories. The table can be constructed either with probabilities (proportions) or with counts, and switching back and forth is easy based on the total count in the table. Entries in the table are the intersections of the categories, corresponding to "and" in a verbal description. The numbers to the right or at the bottom of the table are called marginals, because they're found in the margins of the table, and they are always the sum of the table row or column entries in which they occur. The total probability (or count) in the table is found by summing across all the rows and columns. The marginal distribution of gender would be found by summing across rows, and the marginal distribution of birthdays would be found by summing across the columns.
Based on this, you can inferentially determine other values as indicated by the entries in parentheses below. With one more entry, either for gender or in the marginal row for birthdays, you'd be able to fill in the whole table inferentially. (This is related to the concept of degrees of freedom - how many pieces of info can you fill in independently before the others are determined by the known constraint that the totals are fixed or that probability adds to 1.)
Probabilities
Birthday
< 1960 | >= 1960
_______________________
G | | |
e F | | | (0.56)
n __|_________|__________|
d | | |
e M | 0.28 | (0.16) | 0.44
r __|_________|__________|______
? ? | 1.00
Counts
Birthday
< 1960 | >= 1960
_______________________
G | | |
e F | | | (14)
n __|_________|__________|
d | | |
e M | 7 | (4) | 11
r __|_________|__________|_____
? ? | 25
Conditional probability corresponds to limiting yourself to the subset of rows or columns specified in the condition. If you had been asked what is the probability of a birthday < 1960 given the gender is male, i.e., P{birthday < 1960 | M} in relatively standard notation, you'd be restricting your focus to just the M row, so the answer would be 7/11 = 0.28/0.44. Computationally, you take the probabilities or counts in the qualifying table entries and express them as a proportion of the probabilities or counts of the specified (given) marginal entries. This is often written in prob & stats texts as P(A|B) = P(AB)/P(B), where AB is a set shorthand for A and B (intersection).
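A quick sketch of the same arithmetic (Python, purely illustrative):

total = 25
p_male = 0.44              # marginal P(M)
p_male_lt_1960 = 0.28      # joint P(M and birthday < 1960)

males = round(total * p_male)                   # 11
males_lt_1960 = round(total * p_male_lt_1960)   # 7, i.e. 0.28 is out of all 25

# Conditional probability: P(birthday < 1960 | M) = P(M and < 1960) / P(M)
p_lt_1960_given_male = p_male_lt_1960 / p_male  # 0.636... = 7/11
print(males, males_lt_1960, round(p_lt_1960_given_male, 3))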
0.44 = 11 / 25 people are male.
0.28 = 7 / 25 people are male & born before 1960.
I am trying to implement a kernel density estimation. However, my code does not produce the answer it should. It is written in Julia, but the code should be self-explanatory.
Here is the algorithm (a kernel density estimate with a uniform kernel):

f_h(x) = 1 / (n * h) * sum_{i=1}^{n} K( (x - X_i) / h ),   where   K(u) = 1/2 if |u| <= 1, and 0 otherwise.

So the algorithm tests whether the distance between x and an observation X_i, weighted by some constant factor (the bandwidth h), is at most one. If so, it contributes 0.5 / (n * h) to the estimate at that point, where n is the number of observations.
Here is my implementation:
#Kernel density function.
#Purpose: estimate the probability density function (pdf)
#of given observations
##param data: observations for which the pdf should be estimated
##return: returns an array with the estimated densities
function kernelDensity(data)

    #Uniform kernel function.
    ##param x: current x value
    ##param observation: x value of observation i
    ##param width: bandwidth
    ##return: returns 1 if the absolute distance from
    #x (current) to x (observation), weighted by the bandwidth,
    #is at most 1. Else it returns 0.
    function uniformKernel(x, observation, width)
        u = (x - observation) / width
        abs(u) <= 1 ? 1 : 0
    end

    #number of observations in the data set
    n = length(data)

    #bandwidth (set arbitrarily to 0.1)
    h = 0.1

    #vector that stores the pdf
    res = zeros(Real, n)

    #counter variable for the loop
    counter = 0

    #lower and upper limit of the x axis
    start = floor(minimum(data))
    stop  = ceil(maximum(data))

    #main loop
    ##linspace: generates n equally spaced points from start to stop
    for x in linspace(start, stop, n)
        counter += 1
        for observation in data
            #count all observations for which the kernel
            #returns 1 and multiply by 0.5, because the
            #kernel computes the absolute difference, which can be
            #either positive or negative
            res[counter] += 0.5 * uniformKernel(x, observation, h)
        end
        #divide by n times h
        res[counter] /= n * h
    end
    #return results
    res
end
#run function
##rand: generates 10 uniform random numbers between 0 and 1
kernelDensity(rand(10))
and this is being returned:
> 0.0
> 1.5
> 2.5
> 1.0
> 1.5
> 1.0
> 0.0
> 0.5
> 0.5
> 0.0
the sum of which is 8.5. (The cumulative distribution function; it should be 1.)
So there are two bugs:
The values are not properly scaled. Each number should be around one tenth of its current value. In fact, if the number of observations increases by a factor of 10^n, n = 1, 2, ..., then the cdf also increases by 10^n.
For example:
> kernelDensity(rand(1000))
> 953.53
They don't sum up to 10 (or to one, if it were not for the scaling error). The error becomes more evident as the sample size increases: approximately 5% of the observations are not being included.
I believe that I implemented the formula 1:1, hence I really don't understand where the error is.
I'm not an expert on KDEs, so take all of this with a grain of salt, but a very similar (but much faster!) implementation of your code would be:
function kernelDensity{T<:AbstractFloat}(data::Vector{T}, h::T)
    n = length(data)
    res = zeros(T, n)            # similar(data) would leave res uninitialized
    lb = minimum(data); ub = maximum(data)
    for (i, x) in enumerate(linspace(lb, ub, n))
        for obs in data
            res[i] += abs((obs - x) / h) <= 1. ? 0.5 : 0.
        end
        res[i] /= (n * h)
    end
    sum(res)
end
If I'm not mistaken, the density estimate should integrate to 1, that is, we would expect kernelDensity(rand(100), 0.1)/100 to get at least close to 1. In the implementation above I'm getting there, give or take 5%, but then again we don't know that 0.1 is the optimal bandwidth (using h = 0.135 instead I get there to within 0.1%), and the uniform kernel is known to be only about 93% "efficient".
In any case, there's a very good Kernel Density package in Julia available here, so you probably should just do Pkg.add("KernelDensity") instead of trying to code your own Epanechnikov kernel :)
To point out the mistake: you have n bins B_i of size 2h covering [0,1], and a random point X lands in an expected number of about 2hn of those bins. You divide by 2nh.
For n points, the expected value of your function is therefore about n * 2hn / (2nh) = n.
Actually, you have some bins of size < 2h (for example, if start = 0, half of the first bin is outside of [0,1]); factoring this in gives the bias.
Edit: By the way, the bias is easy to calculate if you assume that the bins have random locations in [0,1]: then the bins are on average missing h/2 = 5% of their size, which is consistent with the observed total of about 953.5 instead of 1000 for n = 1000.
This question is a follow-up question from Stata: replace, if, forvalues. Consider this data:
set seed 123456
set obs 5000
g firmid = "firm" + string(_n) /* Observation (firm) id */
g nw = floor(100*runiform()) /* Number of workers in a firm */
g double lat = 39+runiform() /* Latitude in decimal degree of a firm */
g double lon = -76+runiform() /* Longitude in decimal degree of a firm */
The first 10 observations are:
+--------------------------------------+
| firmid nw lat lon |
|--------------------------------------|
1. | firm1 81 39.915526 -75.505018 |
2. | firm2 35 39.548523 -75.201567 |
3. | firm3 10 39.657866 -75.17988 |
4. | firm4 83 39.957938 -75.898837 |
5. | firm5 56 39.575881 -75.169157 |
6. | firm6 73 39.886184 -75.857255 |
7. | firm7 27 39.33288 -75.724665 |
8. | firm8 75 39.165549 -75.96502 |
9. | firm9 64 39.688819 -75.232764 |
10. | firm10 76 39.012228 -75.166272 |
+--------------------------------------+
I need to calculate the distances between firm 1 and all other firms. So, the vincenty command looks like:
. scalar theLat = 39.915526
. scalar theLon = -75.505018
. vincenty lat lon theLat theLon, hav(distance_km) inkm
The vincenty command creates the distance_km variable that holds the distance between each observation and firm 1. Here, I manually copy and paste the two numbers, 39.915526 and -75.505018.
Question 1: What's the syntax that extracts those numbers?
Now, I can keep observations where distance_km <= 2. And,
. egen near_nw_sum = sum(nw)
will create the sum of workers within 2 kilometers of firm 1. (Or, the collapse command may do the job.)
Question 2: I have to do this for all firms, and the final data should look like:
+-----------------------------------------------------------------+
| firmid nw lat lon near_nw_sum |
|-----------------------------------------------------------------|
1. | firm1 81 39.915526 -75.505018 (# workers near firm1) |
2. | firm2 35 39.548523 -75.201567 (# workers near firm2) |
3. | firm3 10 39.657866 -75.17988 (# workers near firm3) |
4. | firm4 83 39.957938 -75.898837 (# workers near firm4) |
5. | firm5 56 39.575881 -75.169157 (# workers near firm5) |
6. | firm6 73 39.886184 -75.857255 (# workers near firm6) |
7. | firm7 27 39.33288 -75.724665 (# workers near firm7) |
8. | firm8 75 39.165549 -75.96502 (# workers near firm8) |
9. | firm9 64 39.688819 -75.232764 (# workers near firm9) |
10. | firm10 76 39.012228 -75.166272 (# workers near firm10) |
+-----------------------------------------------------------------+
Creating the near_nw_sum variable is my final goal. I need your help here, given my weak data-management skills.
The following is basically the same strategy found here and is based on your "final goal". Again, it can be useful depending on the size of your original dataset: joinby creates observations, so you may exceed the Stata limit. However, I believe it does what you want.
clear all
set more off
set seed 123456
set obs 10
g firmid = _n /* Observation (firm) id */
g nw = floor(100*runiform()) /* Number of workers in a firm */
g double lat = 39+runiform() /* Latitude in decimal degree of a firm */
g double lon = -76+runiform() /* Longitude in decimal degree of a firm */
gen dum = 1
list
* joinby procedure
tempfile main
save "`main'"
rename (firmid lat lon nw) =0
joinby dum using "`main'"
drop dum
* Pretty print
sort firmid0 firmid
order firmid0 firmid
list, sepby(firmid0)
* Uncomment if you do not want to include workers in the "base" firm.
*drop if firmid0 == firmid
* Compute distance
vincenty lat0 lon0 lat lon, hav(distance_km) inkm
keep if distance_km <= 40 // an arbitrary distance
list, sepby(firmid0)
* Compute workers of nearby-firms
collapse (sum) nw_sum=nw (mean) nw0 lat0 lon0, by(firmid0)
list
What it does is form pairwise combinations of firms to compute distances and sum the workers of nearby firms. There is no need here to extract scalars as asked in Question 1. Also, there is no need to complicate the firmid variable by converting it to a string.
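For comparison only, the same nearby-workers logic outside Stata would look roughly like this (a Python sketch; the haversine great-circle formula is, as far as I know, what the hav() option stores, and the O(n^2) double loop mirrors the joinby pairing):

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_nw_sum(firms, radius_km=2.0):
    """firms: list of dicts with keys 'nw', 'lat', 'lon'.
    Returns, for each firm, the total workers of all firms within radius_km
    (including the firm itself, as in the joinby version above)."""
    out = []
    for f in firms:
        total = sum(g["nw"] for g in firms
                    if haversine_km(f["lat"], f["lon"], g["lat"], g["lon"]) <= radius_km)
        out.append(total)
    return out

# Example: near_nw_sum([{"nw": 81, "lat": 39.915526, "lon": -75.505018},
#                       {"nw": 35, "lat": 39.548523, "lon": -75.201567}], 2.0)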
The following overcomes the problem of the Stata limit on number of observations.
clear all
set more off
* Create empty database
gen x = .
tempfile results
save "`results'", replace
* Create input for exercise
set seed 123456
set obs 500
g firmid = _n /* Observation (firm) id */
g nw = floor(100*runiform()) /* Number of workers in a firm */
g double lat = 39+runiform() /* Latitude in decimal degree of a firm */
g double lon = -76+runiform() /* Longitude in decimal degree of a firm */
gen dum = 1
*list
* Save number of firms
local size = _N
display "`size'"
* joinby procedure
tempfile main
save "`main'"
timer clear 1
timer clear 2
timer clear 3
timer clear 4
quietly {
timer on 1
forvalues i=1/`size'{
timer on 2
use "`main'" in `i', clear // assumed sorted on firmid
rename (firmid lat lon nw) =0
joinby dum using "`main'", unmatched(using)
drop _merge dum
order firmid0 firmid
timer off 2
timer on 3
vincenty lat0 lon0 lat lon, hav(dist) inkm
timer off 3
keep if dist <= 40 // an arbitrary distance
timer on 4
collapse (sum) nw_sum=nw (mean) nw0 lat0 lon0, by(firmid0)
append using "`results'"
save "`results'", replace
timer off 4
}
timer off 1
}
use "`results'", clear
sort firmid0
drop x
list
timer list
However inefficient, some testing using timer shows that most of the computation time goes into the vincenty command, which you won't be able to escape. The following is the time (in seconds) for 10,000 observations with an Intel Core i5 processor and a conventional hard drive (not SSD). Timer 1 is the total, while 2, 3, 4 are the components (approximately). Timer 3 corresponds to vincenty:
. timer list
1: 1953.99 / 1 = 1953.9940
2: 169.19 / 10000 = 0.0169
3: 1669.95 / 10000 = 0.1670
4: 94.47 / 10000 = 0.0094
Of course, note that in both codes duplicate computations of distances are made (e.g. both the distance between firm1 and firm2 and the distance between firm2 and firm1 are computed), and this you can probably avoid. As it stands, for 110,000 observations it will take a long time. On the positive side, I noticed that this second setup demands very little RAM compared to the same number of observations in the first setup; in fact, my 4GB machine freezes with the first one.
Also note that even though I use the same seed as you do, data is different because I create different numbers of observations (not 5000), which makes a difference in the variable creation process.
(By the way, if you wanted to save the value as a scalar you could use subscripting: scalar latitude = lat[1]).
I have a problem where I have multiple (a few thousand) quantities that I need to split between a set number of recipients, such that each quantity is split into whole numbers and using the same proportions.
I need to find an algorithm that implements this reliably and efficiently (don't we all? :-))
This is to solve a problem in financial markets (stock exchange orders) where an order might get thousands of "fills" and at the end of the day must be distributed to a few clients while maintaining the order's average price. Here's an example:
Total Order Quantity 37300
Quantities filled by the Stock Exchange
Execution 1. 16700 shares filled at price 75.84
Execution 2. 5400 shares filled at price 75.85
Execution 3. 4900 shares filled at price 75.86
Execution 4. 10300 shares filled at price 75.87
Total 37300 shares filled at average price = (16700*75.84 + 5400*75.85 + 4900*75.86 + 10300*75.87) / 37300 = 75.85235925
Suppose I need to split these quantities between 3 clients such that :
Client1: 15000 shares
Client2: 10000 shares
Client3: 12300 shares
Each execution must be split individually (I can't just take each clients requested quantity priced at average price)
My first thought was to split each execution proportionally:
Client 1 gets 15000/37300=0.402144772
Client 2 gets 10000/37300=0.268096515
Client 3 gets 12300/37300=0.329758713
Which would lead to
Client1 - 15000 Client2 - 10000 Client3 - 12300
Ratio : 0.402144772 Ratio : 0.268096515 Ratio : 0.329758713
Splits (Sorry about the formatting - this was the best I could do in the Post editor)
+-------------+-------------+-------------+
| Client 1 | Client 2 | Client 3 |
+-------------+-------------+-------------+
| 6715.817694 | 4477.211796 | 5506.970509 |
| 2171.581769 | 1447.72118 | 1780.697051 |
| 1970.509383 | 1313.672922 | 1615.817694 |
| 4142.091153 | 2761.394102 | 3396.514745 |
+-------------+-------------+-------------+
| Totals: | | |
| 15000 | 10000 | 12300 |
+-------------+-------------+-------------+
The problem with this is that I can't assign fractional quantities to clients, so I need a smart algorithm that adjusts the quantities such that the fractional part of these splits is 0. I understand that this may be impossible in many scenarios, so this requirement can be relaxed a little bit so that a certain client gets a little more (or less).
Does anybody know of an algorithm that I can use as a starting point for this problem ?
You can round all the numbers (ratio[n] * totalQuantity) except the last one (possibly the smallest). The last one must be totalQuantity minus the sum of the others. This will give you whole-number quantities while keeping the total correct and staying as close as possible to the ratios you chose.
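A sketch of that rule in Python (the names are illustrative); every client but the last is rounded, and the last client absorbs the remainder so each execution still sums exactly to its filled quantity:

def split_execution(quantity, ratios):
    """Split one execution's quantity into whole-number client pieces.

    ratios: one ratio per client (e.g. client_total / order_total), summing to 1.
    All clients but the last get round(ratio * quantity); the last client gets
    the remainder, so the pieces always add up to `quantity` exactly.
    """
    shares = [round(r * quantity) for r in ratios[:-1]]
    shares.append(quantity - sum(shares))
    return shares

# Executions and client ratios from the question:
ratios = [15000 / 37300, 10000 / 37300, 12300 / 37300]
for qty in (16700, 5400, 4900, 10300):
    print(qty, split_execution(qty, ratios))

With the question's numbers the per-client totals come out to 15,001 / 10,000 / 12,299, i.e. within one share of the requested split, which is the "a little more (or less)" relaxation mentioned above.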
Try to look at this from a different angle. You already know how many shares each client is getting. You want to calculate what's the fair total amount each has to pay and do this without rounding errors.
You therefore want these total dollar amounts to have no rounding issues, i.e. to be accurate to 0.01.
Prices can then be computed using the dollar amounts and displayed to the required precision.
The opposite (calculate prices, then derive amounts) will always yield rounding issues with the dollar amounts.
Assuming price is per 100 units, here's one way to accomplish this:
Calculate total $ for the order (16,700*75.84/100 + 5,400*75.85/100 + 4,900*75.86/100 + 10,300*75.87/100) = $28,292.93
Allocate all clients except one, based on the ratio of the client's ordered quantity to the total quantity filled:
Client 2 = $28,292.93 / 37,300 * 10,000 = $7,585.24
Price = $7,585.24 / 10,000 * 100 = 75.8524.
Client 3 = $28,292.93 / 37,300 * 12,300 = $9,329.84
Price = $9,329.84 / 12,300 * 100 = 75.85235772
Calculate the last client as the remaining $$$:
$28,292.93 - ($7,585.24 + $9,329.84) = $11,377.85.
Price = $11,377.85 / 15,000 * 100 = 75.85233333
Here I arbitrarily picked Client 1, the one with the largest quantity, to be the object of the remainder calculation.
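And a sketch of this dollar-first allocation in Python (names are illustrative; Decimal keeps the cent-level arithmetic exact); the residual client, here Client 1, absorbs whatever is left so the dollars always reconcile:

from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def allocate_dollars(executions, client_qty, residual_client="Client1"):
    """executions: list of (shares, price_per_100_units) pairs.
    client_qty: dict mapping client -> shares allocated to that client.
    Returns a dict mapping client -> dollar amount, exact to the cent."""
    total_qty = sum(client_qty.values())
    total_dollars = sum(Decimal(q) * Decimal(str(p)) / 100 for q, p in executions)
    total_dollars = total_dollars.quantize(CENT, rounding=ROUND_HALF_UP)

    amounts = {}
    for client, qty in client_qty.items():
        if client == residual_client:
            continue
        amounts[client] = (total_dollars * qty / total_qty).quantize(CENT, rounding=ROUND_HALF_UP)
    # The residual client absorbs the rounding difference.
    amounts[residual_client] = total_dollars - sum(amounts.values())
    return amounts

executions = [(16700, 75.84), (5400, 75.85), (4900, 75.86), (10300, 75.87)]
print(allocate_dollars(executions, {"Client1": 15000, "Client2": 10000, "Client3": 12300}))
# -> Client2 $7,585.24, Client3 $9,329.84, Client1 $11,377.85 (matching the worked example above)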