Mapping one continuous data range to another nonlinearly - algorithm

Sorry about the vague title. I'm not sure how to concisely word what I'm about to ask. This is more of a math/algorithms question than a programming question.
In an app that I'm developing, we have a value that can fluctuate anywhere between 0 and a predetermined maximum (in testing it has usually hovered around 100, so let's just say 100). This range of data is continuous, meaning there are infinitely many possible values as long as they fall between 0 and 100.
Right now, any value returned from this is mapped to a different range that is also continuous, from 1000 down to 200. So if the value from the first set is 100, I map it to 200, and if the value from the first set is 0, it gets mapped to 1000, and of course everything in between. This is what the code looks like:
-(float)mapToRange:(float)val withMax:(float)maxVal{
    // Establish range constants.
    const int upperBound = 1000;
    const int lowerBound = 200;
    const int bandwidth = upperBound - lowerBound;
    // Make sure we don't go above the calibrated maximum.
    if(val > maxVal)
        val = maxVal;
    // Scale the original value to our new boundaries.
    float scaled = val/maxVal;
    float ret = upperBound - scaled*bandwidth;
    return ret;
}
Now, what I want to do is make it so that the higher original values (closer to 100) increase in larger increments than the lower original values (closer to 0). Meaning if I slowly start decreasing from 100 to 0 at a steady rate, the new values starting at 200 move quickly toward 1000 at first but go in smaller increments the closer they get to 1000. What would be the best way to go about doing this?

Your value scaled is basically the 0-100 value represented in the range 0-1 so it's good to work with. Try raising this to an integer power, and the result will increase faster near 1 and slower near 0. The higher the power, the larger the effect. So something like:
float scaled = val/maxVal;
float bent = scaled*scaled*scaled*scaled; // or however you want to write x^4
float ret = upperBound - bent*bandwidth;
Here's a sketch of the idea:
That is, the span A to B maps to the smaller span a to b, while the span C to D maps to the larger span c to d. The larger the power of the polynomial, the more the curve will be bent into the lower right corner.
The advantage of using the 0 to 1 range is that the endpoints stay fixed since x^n=x when x is 0 or 1, but this, of course, isn't necessary as anything could be compensated for by the appropriate shifting and scaling.
Note also that this map isn't symmetric (though my drawing sort of looks that way), though of course a symmetric curve could be chosen. If you want the curve to bend the other way, choose a power less than 1.
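For reference, here is the whole mapping as a minimal Python sketch with the exponent pulled out as a parameter (the function name, the clamping of negative inputs, and the power default are mine, added just for illustration; the language is switched from the Objective-C above):
import math

def map_to_range(val, max_val, power=4.0, upper=1000.0, lower=200.0):
    """Map val in [0, max_val] onto [lower, upper], bending the curve with `power`.

    power > 1: output moves quickly away from `lower` near the top of the input range
    and slowly near the bottom; power < 1 bends the other way; power == 1 reproduces
    the original linear mapping.
    """
    val = min(max(val, 0.0), max_val)   # clamp into the calibrated range
    scaled = val / max_val              # 0..1
    bent = scaled ** power              # apply the curve
    return upper - bent * (upper - lower)

# e.g. map_to_range(100, 100) == 200.0 and map_to_range(0, 100) == 1000.0,
# while map_to_range(90, 100) is already around 475 for power=4.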

Related

How to calculate the length of cable on a winch given the rotations of the drum

I have a cable winch system and I would like to know how much cable is left given the number of rotations that have occurred, and vice versa. This system will run on a low-cost microcontroller with limited computational resources and should be able to update quickly; long for/while loop iterations are not ideal.
The inputs are cable diameter, inner drum diameter, inner drum width, and drum rotations. The output should be the length of the cable on the drum.
At first, I was calculating the maximum number of wraps of cable per layer based on the cable diameter and inner drum width; I could then use this to calculate the length of cable per layer. The issue comes when I calculate the total length, as I have to loop through each layer, a costly operation (there could be hundreds of layers).
My next approach was to precalculate a table with each layer, then perform a degree-3 to degree-5 polynomial regression down to an easy-to-calculate formula.
This appears to work for the most part; however, there are slight inaccuracies at the low and high end (0 rotations could be plus or minus a few units of cable length). The real issue comes when I try to reverse the function to get the current rotations of the drum given the length. So far, my reversed formula does not seem to be the inverse of the forward formula (I am swapping X and Y before fitting the polynomial).
I have looked high and low and cannot seem to find any formulas for cable length to rotations that do not use recursion or loops. I can't figure out how to reverse my polynomial function to get the reverse value without losing precision. If anyone happens to have an insight/ideas or can help guide me in the right direction that would be most helpful. Please see my attempts below.
// Units are not important
CableLength = 15000
CableDiameter = 5
DrumWidth = 50
DrumDiameter = 5
CurrentRotations = 0
CurrentLength = 0
CurrentLayer = 0
PolyRotations = Array
PolyLengths = Array
PolyLayers = Array
WrapsPerLayer = DrumWidth / CableDiameter
While CurrentLength < CableLength // Calculate layer length for each layer up to cable length
    CableStackHeight = CableDiameter * CurrentLayer
    DrumDiameterAtLayer = DrumDiameter + (CableStackHeight * 2) // Assumes cables stack vertically
    WrapDiameter = DrumDiameterAtLayer + CableDiameter // Center point of cable
    WrapLength = WrapDiameter * PI
    LayerLength = WrapLength * WrapsPerLayer
    CurrentRotations += WrapsPerLayer // 1 rotation per wrap
    CurrentLength += LayerLength
    CurrentLayer++
    PolyRotations.Push(CurrentRotations)
    PolyLengths.Push(CurrentLength)
    PolyLayers.Push(CurrentLayer)
End
// Using 5 degree polynomials, any lower = very low precision
PolyLengthToRotation = CreatePolynomial(PolyLengths, PolyRotations, 5) // 5 Degrees
PolyRotationToLength = CreatePolynomial(PolyRotations, PolyLengths, 5) // 5 Degrees
// 40 Rotations should equal about 3141.593 units
RealRotation = 40
RealLength = 3141.593
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 3141.593 // Good
// CalculatedRotations = 41.069 // No good
// CalculatedRotations != RealRotation // These should equal
// 0 Rotations should equal 0 length
RealRotation = 0
RealLength = 0
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 1.172421e-9 // Very close
// CalculatedRotations = 1.947, // No good
// CalculatedRotations != RealRotation // These should equal
Side note: I have a "spool factor" parameter to calibrate for the actual cable spooling efficiency that is not shown here (the cable is not guaranteed to lie mathematically perfectly).
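For reference, here is the pseudocode above as a runnable Python sketch (same assumptions: cables stack vertically, one rotation per wrap; I also push a (0, 0) starting entry, which is my addition, so the table covers an empty drum):
import math

CABLE_LENGTH = 15000.0
CABLE_DIAMETER = 5.0
DRUM_WIDTH = 50.0
DRUM_DIAMETER = 5.0

def build_layer_table():
    """Return parallel lists of cumulative (rotations, length) at each layer boundary."""
    wraps_per_layer = DRUM_WIDTH / CABLE_DIAMETER
    rotations, length, layer = 0.0, 0.0, 0
    poly_rotations, poly_lengths = [0.0], [0.0]
    while length < CABLE_LENGTH:
        stack_height = CABLE_DIAMETER * layer
        drum_diameter_at_layer = DRUM_DIAMETER + stack_height * 2   # cables stack vertically
        wrap_diameter = drum_diameter_at_layer + CABLE_DIAMETER     # centre line of the cable
        layer_length = wrap_diameter * math.pi * wraps_per_layer
        rotations += wraps_per_layer                                # 1 rotation per wrap
        length += layer_length
        layer += 1
        poly_rotations.append(rotations)
        poly_lengths.append(length)
    return poly_rotations, poly_lengths

# With the numbers above, the entry for 40 rotations comes out at about 3141.6 units.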
@Bathsheba may have meant cable, but a table is a valid option (also, experimental numbers are probably more interesting in the real world).
A bit slow, but you could always do it manually. There are only 40 rotations (though optionally, for better experimental results, repeat 3 times and take the average...). Reel it completely in. Then do a rotation (or, depending on the diameter of your drum, half a rotation). Measure and mark how far it spooled out (tape), and record it. Repeat for the next 39 rotations. You now have a lookup table in which you can find the length in O(log N) via binary search (by sorting the data) and a bit of interpolation (i.e. 1.5 rotations is about halfway between 1 and 2 rotations).
You can also use this to derive your own experimental data. Do the same thing, but with a cable half as thin (perhaps proportional to the ratio of the inner diameter and the cable radius?). What effect does it have on the numbers? How about twice or half the diameter? Math says circumference is linear (2πr), so half the radius, half the amount per rotation. It might be easier to adjust the table data.
The gist is that it may be easier for you to have a real-world reference for your numbers rather than relying purely on an abstract mathematical model (not to say the model would be wrong, but cables don't always wind up perfectly; perhaps you'll even find a quirk of your winch that would have led to errors in a purely mathematical approach). Who knows, you might be able to derive the formula yourself :) with a fudge factor for the real world, even.
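A Python sketch of the lookup-table plus binary-search/interpolation idea (it reuses build_layer_table from the sketch above, but a measured table would drop in the same way; under the layer model in the question, length grows linearly with rotations within a layer, so the interpolation is exact there, and the table is trivially invertible by swapping the two columns):
from bisect import bisect_right

def interpolate(x, xs, ys):
    """Piecewise-linear lookup of x in the sorted table xs -> ys."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x)                       # O(log N) binary search
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

poly_rotations, poly_lengths = build_layer_table()
length_at_40 = interpolate(40.0, poly_rotations, poly_lengths)             # ~3141.6
rotations_back = interpolate(length_at_40, poly_lengths, poly_rotations)   # ~40.0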

Algorithm to smooth a curve while keeping the area under it constant

Consider a discrete curve defined by the points (x1,y1), (x2,y2), (x3,y3), ... ,(xn,yn)
Define a constant SUM = y1+y2+y3+...+yn. Say we change the value of some k number of y points (increase or decrease) such that the total sum of these changed points is less than or equal to the constant SUM.
What would be the best possible manner to adjust the other y points given the following two conditions:
The total sum of the y points (y1'+y2'+...+yn') should remain constant ie, SUM.
The curve should retain as much of its original shape as possible.
A simple solution would be to define some delta as follows:
delta = (ym1' + ym2' + ym3' + ... + ymk') - (ym1 + ym2 + ym3 + ... + ymk)
and to distribute this delta over the rest of the points equally. Here ymi' is the value of a modified point after modification and ymi is its value before modification, so delta is the total change across the modified points.
However, this would not ensure a totally smoothed curve, as the area near the changed points would appear ragged. Does a better solution/algorithm exist for this problem?
I've used the following approach, though it is a bit OTT.
Consider adding d[i] to y[i], to get s[i], the smoothed value.
We seek to minimise
S = Sum{ 1<=i<N-1 | sqr( s[i+1] - 2*s[i] + s[i-1]) } + f*Sum{ 0<=i<N | sqr( d[i]) }
The first term is a sum of the squares of (an approximate) second derivative of the curve, and the second term penalises moving away from the original. f is a (positive) constant. A little algebra recasts this as
S = sqr( ||A*d - b||)
where the matrix A has a nice structure, and indeed A'*A is penta-diagonal, which means that the normal equations (ie d = Inv(A'*A)*A'*b) can be solved efficiently. Note that d is computed directly, there is no need to initialise it.
Given the solution d to this problem we can compute the solution d^ to the same problem but with the constraint One'*d = 0 (where One is the vector of all ones) like this
d^ = d - (One'*d/Q) * e
e = Inv(A'*A)*One
Q = One'*e
What value to use for f? Well, a simple approach is to try out this procedure on sample curves for various values of f and pick one that looks good. Another approach is to pick an estimate of smoothness, for example the rms of the second derivative, that the result should attain, and then search for an f that gives that value. As a general rule, the bigger f is, the less smooth the smoothed curve will be.
Some motivation for all this. The aim is to find a 'smooth' curve 'close' to a given one. For this we need a measure of smoothness (the first term in S) and a measure of closeness (the second term). Why these measures? Well, each is easy to compute, and each is quadratic in the variables (the d[]); this means that the problem becomes an instance of linear least squares, for which there are efficient algorithms available. Moreover, each term in each sum depends only on nearby values of the variables, which in turn means that the 'inverse covariance' (A'*A) will have a banded structure and so the least squares problem can be solved efficiently. Why introduce f? Well, if we didn't have f (or set it to 0) we could minimise S by setting d[i] = -y[i], getting a perfectly smooth curve s[] = 0, which has nothing to do with the y curve. On the other hand, if f is gigantic, then to minimise S we should concentrate on the second term and set d[i] = 0, and our 'smoothed' curve is just the original. So it's reasonable to suppose that as we vary f, the corresponding solutions will vary between being very smooth but far from y (small f) and being close to y but a bit rough (large f).
It's often said that the normal equations, whose use I advocate here, are a bad way to solve least squares problems, and this is generally true. However, with 'nice' banded systems -- like the one here -- the loss of stability through using the normal equations is not so great, while the gain in speed is considerable. I've used this approach to smooth curves with many thousands of points in a reasonable time.
To see what A is, consider the case where we had 4 points. Then our expression for S comes down to:
sqr( s[2] - 2*s[1] + s[0]) + sqr( s[3] - 2*s[2] + s[1]) + f*(d[0]*d[0] + .. + d[3]*d[3]).
If we substitute s[i] = y[i] + d[i] in this we get, for example,
s[2] - 2*s[1] + s[0] = d[2]-2*d[1]+d[0] + y[2]-2*y[1]+y[0]
and so we see that for this to be sqr( ||A*d-b||) we should take (writing g = sqrt(f), so that the penalty rows square back to the f*sqr(d[i]) terms)
A = ( 1 -2  1  0)
    ( 0  1 -2  1)
    ( g  0  0  0)
    ( 0  g  0  0)
    ( 0  0  g  0)
    ( 0  0  0  g)
and
b = ( -(y[2]-2*y[1]+y[0]))
( -(y[3]-2*y[2]+y[1]))
( 0 )
( 0 )
( 0 )
( 0 )
In an implementation, though, you probably wouldn't want to form A and b, as they are only going to be used to form the normal equation terms, A'*A and A'*b. It would be simpler to accumulate these directly.
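For concreteness, here is a small NumPy sketch of that procedure (my own code, not the answerer's implementation): it accumulates A'*A and A'*b directly, uses a dense solve for clarity rather than exploiting the pentadiagonal structure, and then applies the d^ correction above so the sum is preserved.
import numpy as np

def smooth_least_squares(y, f):
    """Smooth y by minimising ||D2*(y+d)||^2 + f*||d||^2, then apply the
    sum-preserving correction d^ and return y + d^."""
    n = len(y)
    # Second-difference operator: (D2 v)[i] = v[i+2] - 2 v[i+1] + v[i]
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    AtA = D2.T @ D2 + f * np.eye(n)   # normal-equation matrix (pentadiagonal)
    Atb = -D2.T @ (D2 @ y)            # A'*b, since b = -D2*y in the top block
    d = np.linalg.solve(AtA, Atb)
    # Constrain the correction to have zero sum, so y+d keeps the original SUM
    e = np.linalg.solve(AtA, np.ones(n))
    d_hat = d - (np.sum(d) / np.sum(e)) * e
    return y + d_hat

y = np.array([0.0, 1.0, 5.0, 2.0, 1.5, 4.0, 3.0, 0.5])
s = smooth_least_squares(y, f=0.5)
assert abs(s.sum() - y.sum()) < 1e-8   # the sum (area) is preserved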
This is a constrained optimization problem. The functional to be minimized is the integrated difference between the original curve and the modified curve. The constraints are the area under the curve and the new locations of the modified points. It is not easy to write such code on your own. It is better to use some open-source optimization code, like this one: ool.
What about keeping the same dynamic range?
1. compute the original min0, max0 y-values
2. smooth the y-values
3. compute the new min1, max1 y-values
4. linearly interpolate all values to match the original min/max: y = min0 + (y - min1)*(max0 - min0)/(max1 - min1)
That is it.
Not sure about the area, but this should keep the shape much closer to the original one. I got this idea just now while reading your question, and since I'm facing a similar problem I'll try to code it right away. Anyway, +1 for giving me this idea :)
You can adapt this and combine it with the area.
So before this, compute the area, apply steps #1..#4, and after that compute the new area. Then multiply all values by the old_area/new_area ratio. If you also have negative values and are not computing the absolute area, then you have to handle positive and negative areas separately and find a multiplication ratio that best fits the original area for both at once.
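A short Python sketch of steps #1..#4 plus the area correction (my code; the 3-point moving average is just a placeholder for whatever smoothing is actually used, and the area step assumes non-negative values):
def smooth_keep_range_and_area(y, passes=1):
    y = list(y)
    min0, max0 = min(y), max(y)          # #1 original min/max
    area0 = sum(y)                       # original area (sum of samples)
    for _ in range(passes):              # #2 smooth (placeholder 3-point average)
        y = [y[0]] + [(y[i - 1] + y[i] + y[i + 1]) / 3.0
                      for i in range(1, len(y) - 1)] + [y[-1]]
    min1, max1 = min(y), max(y)          # #3 new min/max
    if max1 > min1:                      # #4 rescale back to the original range
        y = [min0 + (v - min1) * (max0 - min0) / (max1 - min1) for v in y]
    area1 = sum(y)
    if area1 != 0:                       # optional area correction (non-negative data)
        y = [v * area0 / area1 for v in y]
    return y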
[edit1] some results for constant dynamic range
As you can see, the shape shifts slightly to the left. Each image is after applying a few hundred smoothing operations. I am thinking of subdividing into local min/max intervals to improve this ...
[edit2] I have finished the filter for my own purposes
#include <math.h> // for fabs()

void advanced_smooth(double *p,int n)
{
int i,j,i0,i1;
double a0,a1,b0,b1,dp,w0,w1;
double *p0,*p1,*w; int *q;
if (n<3) return;
p0=new double[n<<2]; if (p0==NULL) return;
p1=p0+n;
w =p1+n;
q =(int*)((double*)(w+n));
// compute original min,max
for (a0=p[0],i=0;i<n;i++) if (a0>p[i]) a0=p[i];
for (a1=p[0],i=0;i<n;i++) if (a1<p[i]) a1=p[i];
for (i=0;i<n;i++) p0[i]=p[i]; // store original values for range restoration
// compute local min max positions to p1[]
dp=0.01*(a1-a0); // min delta threshold
// compute first derivative
p1[0]=0.0; for (i=1;i<n;i++) p1[i]=p[i]-p[i-1];
for (i=1;i<n-1;i++) // eliminate glitches
if (p1[i]*p1[i-1]<0.0)
if (p1[i]*p1[i+1]<0.0)
if (fabs(p1[i])<=dp)
p1[i]=0.5*(p1[i-1]+p1[i+1]);
for (i0=1;i0;) // remove zeros from derivative
for (i0=0,i=0;i<n;i++)
if (fabs(p1[i])<dp)
{
if ((i> 0)&&(fabs(p1[i-1])>=dp)) { i0=1; p1[i]=p1[i-1]; }
else if ((i<n-1)&&(fabs(p1[i+1])>=dp)) { i0=1; p1[i]=p1[i+1]; }
}
// find local min,max to q[]
q[n-2]=0; q[n-1]=0; for (i=1;i<n-1;i++) if (p1[i]*p1[i-1]<0.0) q[i-1]=1; else q[i-1]=0;
for (i=0;i<n;i++) // set sign as +max,-min
if ((q[i])&&(p1[i]<-dp)) q[i]=-q[i]; // this shifts smooth curve to the left !!!
// compute weights
for (i0=0,i1=1;i1<n;i0=i1,i1++) // loop through all local min,max intervals
{
for (;(!q[i1])&&(i1<n-1);i1++); // <i0,i1>
b0=0.5*(p[i0]+p[i1]);
b1=fabs(p[i1]-p[i0]);
if (b1>=1e-6)
for (b1=0.35/b1,i=i0;i<=i1;i++) // compute weights bigger near local min max
w[i]=0.8+(fabs(p[i]-b0)*b1);
}
// smooth few times
for (j=0;j<5;j++)
{
for (i=0;i<n ;i++) p1[i]=p[i]; // store data to avoid shifting by using half filtered data
for (i=1;i<n-1;i++) // FIR smooth filter
{
w0=w[i];
w1=(1.0-w0)*0.5;
p[i]=(w1*p1[i-1])+(w0*p1[i])+(w1*p1[i+1]);
}
for (i=1;i<n-1;i++) // avoid local min,max shifting too much
{
if (q[i]>0) // local max
{
if (p[i]<p[i-1]) p[i]=p[i-1]; // cannot be lower than neighbours
if (p[i]<p[i+1]) p[i]=p[i+1];
}
if (q[i]<0) // local min
{
if (p[i]>p[i-1]) p[i]=p[i-1]; // cannot be higher than neighbours
if (p[i]>p[i+1]) p[i]=p[i+1];
}
}
}
for (i0=0,i1=1;i1<n;i0=i1,i1++) // loop through all local min,max intervals
{
for (;(!q[i1])&&(i1<n-1);i1++); // <i0,i1>
// restore original local min,max
a0=p0[i0]; b0=p[i0];
a1=p0[i1]; b1=p[i1];
if (a0>a1)
{
dp=a0; a0=a1; a1=dp;
dp=b0; b0=b1; b1=dp;
}
b1-=b0;
if (b1>=1e-6)
for (dp=(a1-a0)/b1,i=i0;i<=i1;i++)
p[i]=a0+((p[i]-b0)*dp);
}
delete[] p0;
}
so p[n] is the input/output data. There are a few things that can be tweaked, like:
the weight computation (the constants 0.8 and 0.35 mean the weights are in <0.8, 0.8+0.35/2>)
the number of smoothing passes (now 5 in the for loop)
the bigger the weight, the less the filtering (1.0 means no change)
The main idea behind it is:
find local extremes
compute weights for smoothing
so that near the local extremes there is almost no change in the output
smooth
repair the dynamic range per interval between the local extremes
[Notes]
I also tried to restore the area, but that is incompatible with my task because it distorts the shape a lot. So if you really need the area, then focus on that and not on the shape. The smoothing mostly causes the signal to shrink, so after area restoration the shape rises in magnitude.
The current filter state has no noticeable sideways shifting of the shape (which was the main goal for me). Some images for a more bumpy signal (the original filter was extremely poor on this):
As you can see, there is no visible shifting of the signal shape. The local extremes have a tendency to create sharp spikes after very heavy smoothing, but that was expected.
Hope it helps ...

Multilateration implementation with inaccurate distance data

I am trying to create an Android smartphone application which uses Apple's iBeacon technology to determine its current indoor location. I already managed to get all available beacons and calculate the distance to them via the rssi signal.
Currently I face the problem that I am not able to find any library or implementation of an algorithm that calculates the estimated 2D location from 3 (or more) distances to fixed points, with the condition that these distances are not accurate (which means that the three "trilateration circles" do not intersect in one point).
I would be deeply grateful if anybody could post me a link to, or an implementation of, that in any common programming language (Java, C++, Python, PHP, Javascript or whatever). I already read a lot on stackoverflow about that topic, but could not find any answer I was able to convert into code (only some mathematical approaches with matrices, inverting them, calculating with vectors or stuff like that).
EDIT
I thought about my own approach, which works quite well for me but is not that efficient or scientific. I iterate over every meter (or, as in my example, every 0.1 meter) of the location grid and score how likely each location is to be the actual position of the handset by comparing the distance from that location to all beacons with the distance I calculate from the received rssi signal.
Code example:
public Location trilaterate(ArrayList<Beacon> beacons, double maxX, double maxY)
{
    for (double x = 0; x <= maxX; x += .1)
    {
        for (double y = 0; y <= maxY; y += .1)
        {
            double currentLocationProbability = 0;
            for (Beacon beacon : beacons)
            {
                // distance difference between calculated distance to beacon transmitter
                // (rssi-calculated distance) and current location:
                // |sqrt(dX^2 + dY^2) - distanceToTransmitter|
                double distanceDifference = Math
                        .abs(Math.sqrt(Math.pow(beacon.getLocation().x - x, 2)
                                + Math.pow(beacon.getLocation().y - y, 2))
                                - beacon.getCurrentDistanceToTransmitter());
                // weight the distance difference with the beacon's calculated rssi-distance. The
                // smaller the calculated rssi-distance is, the more the distance difference
                // will be weighted (it is assumed that nearer beacons measure the distance
                // more accurately)
                distanceDifference /= Math.pow(beacon.getCurrentDistanceToTransmitter(), 0.9);
                // sum up all weighted distance differences for every beacon in
                // "currentLocationProbability"
                currentLocationProbability += distanceDifference;
            }
            addToLocationMap(currentLocationProbability, x, y);
            // The previous line is my approach: I keep a set of the 5 most probable locations
            // to estimate the accuracy of the measurement afterwards. If that is not necessary,
            // a simple variable assignment for the most probable location would do the job as well.
        }
    }
    Location bestLocation = getLocationSet().first().location;
    bestLocation.accuracy = calculateLocationAccuracy();
    Log.w("TRILATERATION", "Location " + bestLocation + " best with accuracy "
            + bestLocation.accuracy);
    return bestLocation;
}
Of course, the downside is that on a 300 m² floor I have 30,000 locations to iterate over, measuring the distance to every single beacon I got a signal from (if that were 5, I would do 150,000 calculations just to determine a single location). That's a lot, so I will leave the question open and hope for further solutions, or a good improvement of this existing solution, to make it more efficient.
Of course it does not have to be a trilateration approach, as the original title of this question suggested; an algorithm that includes more than three beacons for the location determination (multilateration) is also fine.
If the current approach is fine except for being too slow, then you could speed it up by recursively subdividing the plane. This works sort of like finding nearest neighbors in a kd-tree. Suppose that we are given an axis-aligned box and wish to find the approximate best solution in the box. If the box is small enough, then return the center.
Otherwise, divide the box in half, either by x or by y depending on which side is longer. For both halves, compute a bound on the solution quality as follows. Since the objective function is additive, sum lower bounds for each beacon. The lower bound for a beacon is the distance of the circle to the box, times the scaling factor. Recursively find the best solution in the child with the lower lower bound. Examine the other child only if the best solution in the first child is worse than the other child's lower bound.
Most of the implementation work here is the box-to-circle distance computation. Since the box is axis-aligned, we can use interval arithmetic to determine the precise range of distances from box points to the circle center.
P.S.: Math.hypot is a nice function for computing 2D Euclidean distances.
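A Python sketch of this subdivision idea (my code, not a reference implementation): the per-beacon lower bound is the distance from the box to the beacon's circle, divided by the same rssi-distance^0.9 weight used in the question; eps, the box bounds and the names are mine.
import math

def circle_box_lower_bound(cx, cy, r, x0, y0, x1, y1):
    """Minimum of |dist(p, centre) - r| over the box, via the interval of distances."""
    d_near = math.hypot(max(x0 - cx, 0.0, cx - x1), max(y0 - cy, 0.0, cy - y1))
    d_far = math.hypot(max(abs(x0 - cx), abs(x1 - cx)), max(abs(y0 - cy), abs(y1 - cy)))
    if d_near <= r <= d_far:
        return 0.0                               # circle passes through the box
    return d_near - r if r < d_near else r - d_far

def cost(x, y, beacons):
    """The question's weighted sum of distance differences at a single point."""
    return sum(abs(math.hypot(bx - x, by - y) - r) / (r ** 0.9) for bx, by, r in beacons)

def lower_bound(box, beacons):
    x0, y0, x1, y1 = box
    return sum(circle_box_lower_bound(bx, by, r, x0, y0, x1, y1) / (r ** 0.9)
               for bx, by, r in beacons)

def best_in_box(box, beacons, eps=0.1, best=None):
    """Branch and bound: split the box recursively, pruning by the lower bound."""
    if best is None:
        best = [float("inf"), None]              # [best cost so far, best point]
    if lower_bound(box, beacons) >= best[0]:
        return best                              # this box cannot beat the current best
    x0, y0, x1, y1 = box
    if max(x1 - x0, y1 - y0) <= eps:             # box small enough: evaluate its centre
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        c = cost(cx, cy, beacons)
        if c < best[0]:
            best[0], best[1] = c, (cx, cy)
        return best
    if x1 - x0 >= y1 - y0:                       # split along the longer side
        xm = (x0 + x1) / 2.0
        children = [(x0, y0, xm, y1), (xm, y0, x1, y1)]
    else:
        ym = (y0 + y1) / 2.0
        children = [(x0, y0, x1, ym), (x0, ym, x1, y1)]
    children.sort(key=lambda b: lower_bound(b, beacons))  # visit the more promising child first
    for child in children:
        best_in_box(child, beacons, eps, best)
    return best

# beacons as (x, y, rssi-estimated distance); the bounds are illustrative
beacons = [(0.0, 0.0, 7.0), (0.0, 10.0, 7.0), (10.0, 5.0, 16.0)]
cost_value, point = best_in_box((0.0, 0.0, 20.0, 20.0), beacons, eps=0.05)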
Instead of taking confidence levels of individual beacons into account, I would try to assign an overall confidence level to your result after you make the best guess you can with the available data. I don't think the only available metric (perceived power) is a good indication of accuracy. With poor geometry or a misbehaving beacon, you could be trusting poor data highly. It might make better sense to come up with an overall confidence level based on how well the perceived distances to the beacons line up with the calculated point, assuming you trust all beacons equally.
I wrote some Python below that comes up with a best guess based on the provided data in the 3-beacon case, by calculating the two points of intersection of the circles for the first two beacons and then choosing the point that best matches the third. It's meant to get you started on the problem and is not a final solution. If the beacons don't intersect, it slightly increases the radius of each until they do meet or a threshold is reached. Likewise, it makes sure the third beacon agrees within a settable threshold. For n beacons, I would pick 3 or 4 of the strongest signals and use those. There are tons of optimizations that could be done, and I think this is a trial-by-fire problem due to the unwieldy nature of beaconing.
import math
beacons = [[0.0,0.0,7.0],[0.0,10.0,7.0],[10.0,5.0,16.0]] # x, y, radius
def point_dist(x1,y1,x2,y2):
    x = x2-x1
    y = y2-y1
    return math.sqrt((x*x)+(y*y))

# determines two points of intersection for two circles [x,y,radius]
# returns None if the circles do not intersect
def circle_intersection(beacon1,beacon2):
    r1 = beacon1[2]
    r2 = beacon2[2]
    dist = point_dist(beacon1[0],beacon1[1],beacon2[0],beacon2[1])
    heron_root = (dist+r1+r2)*(-dist+r1+r2)*(dist-r1+r2)*(dist+r1-r2)
    if ( heron_root > 0 ):
        heron = 0.25*math.sqrt(heron_root)
        xbase = (0.5)*(beacon1[0]+beacon2[0]) + (0.5)*(beacon2[0]-beacon1[0])*(r1*r1-r2*r2)/(dist*dist)
        xdiff = 2*(beacon2[1]-beacon1[1])*heron/(dist*dist)
        ybase = (0.5)*(beacon1[1]+beacon2[1]) + (0.5)*(beacon2[1]-beacon1[1])*(r1*r1-r2*r2)/(dist*dist)
        ydiff = 2*(beacon2[0]-beacon1[0])*heron/(dist*dist)
        return (xbase+xdiff,ybase-ydiff),(xbase-xdiff,ybase+ydiff)
    else:
        # no intersection, need to pseudo-increase beacon power and try again
        return None

# find the two points of intersection between beacon0 and beacon1
# will use beacon2 to determine the better of the two points
failing = True
power_increases = 0
while failing and power_increases < 10:
    res = circle_intersection(beacons[0],beacons[1])
    if ( res ):
        intersection = res
    else:
        beacons[0][2] *= 1.001
        beacons[1][2] *= 1.001
        power_increases += 1
        continue
    failing = False

# make sure the best fit is within x% (10% of the total distance from the 3rd beacon in this case)
# otherwise the results are too far off
THRESHOLD = 0.1
if failing:
    print('Bad Beacon Data (Beacon0 & Beacon1 don\'t intersect after many "power increases")')
else:
    # finding best point between beacon1 and beacon2
    dist1 = point_dist(beacons[2][0],beacons[2][1],intersection[0][0],intersection[0][1])
    dist2 = point_dist(beacons[2][0],beacons[2][1],intersection[1][0],intersection[1][1])
    if ( math.fabs(dist1-beacons[2][2]) < math.fabs(dist2-beacons[2][2]) ):
        best_point = intersection[0]
        best_dist = dist1
    else:
        best_point = intersection[1]
        best_dist = dist2
    best_dist_diff = math.fabs(best_dist-beacons[2][2])
    if best_dist_diff < THRESHOLD*best_dist:
        print(best_point)
    else:
        print('Bad Beacon Data (Beacon2 distance to best point not within threshold)')
If you want to trust closer beacons more, you may want to calculate the intersection points between the two closest beacons and then use the farther beacon to tie-break. Keep in mind that almost anything you do with "confidence levels" for the individual measurements will be a hack at best. Since you will always be working with very bad data, you will definitely need to loosen up the power_increases limit and threshold percentage.
You have 3 points: A(xA,yA,zA), B(xB,yB,zB) and C(xC,yC,zC), which are respectively at approximately dA, dB and dC from your goal point G(xG,yG,zG).
Let's say cA, cB and cC are the confidence rate ( 0 < cX <= 1 ) of each point.
Basically, you might take something really close to 1, like {0.95,0.97,0.99}.
If you don't know, try different coefficients depending on the average distance. If the distance is really big, you're likely to be less confident about it.
Here is the way I'd do it:
var sum = (cA*dA) + (cB*dB) + (cC*dC);
dA = cA*dA/sum;
dB = cB*dB/sum;
dC = cC*dC/sum;
xG = (xA*dA) + (xB*dB) + (xC*dC);
yG = (yA*dA) + (yB*dB) + (yC*dC);
zG = (zA*dA) + (zB*dB) + (zC*dC);
Basic, and not really smart but will do the job for some simple tasks.
EDIT
You can take any confidence coefficient you want in [0, inf), but IMHO restricting it to [0, 1] is a good idea to keep the result realistic.

How to modify d3js fisheye distortion so that it will support radius

I am trying to modify the fisheye in this project so that I can use the radius function to increase the fisheye size. My aim is to see more cells enlarged around the mouse. The current implementation does not support the radius function. If I use circular instead of scale, I can use the radius function, but in that case I don't know how to use circular.
Either way, help is appreciated :)
Thanks!
The radius parameter on the circular fisheye puts a boundary to the magnification effects. In contrast, in the scale/Cartesian fisheye, the entire graph is modified. The focus cell is enlarged, and other cells are compressed according to how far away they are from the focus. There is no boundary, the compression continues smoothly (getting progressively more compressed) until the edge of the plot. See http://bost.ocks.org/mike/fisheye/#cartesian
If what you want is that cells near to the focus aren't compressed as much (so you can still compare adjacent cells effectively), then the parameter to change is the distortion parameter. Lower distortion will reduce the amount by which the focus cell is magnified, and therefore leave more room for adjacent cells. The default distortion parameter is 3, you're using higher values, which increases the magnification of the focus cell at the expense of all the others.
If changing the distortion doesn't satisfy you, try changing the scale type by using d3.fisheye.scale(d3.scale.sqrt); this will change the function determining how the image magnification changes as you move away from the focus point. (I couldn't get other scale types to work -- log gives an error, and with power scales there is no way to set the exponent.)
Edit
After additional playing around, I'm not satisfied with the results from changing the input scale type. I misunderstood how that would affect it: it doesn't change the scale function for the distortion, but for the raw data, so that changes are different for points above the focus compared to points below the focus. The scale type you give as a parameter to the fisheye scale should be the underlying scale type that makes sense for the data, and is distinct from the fisheye effects.
Instead, I've tried some custom code to add an exponent into the calculation. To understand how it works, you need to first break down the original function:
The original code for the fisheye scale is:
function fisheye(_) {
var x = scale(_),
left = x < a,
range = d3.extent(scale.range()),
min = range[0],
max = range[1],
m = left ? a - min : max - a;
if (m == 0) m = max - min;
return a + (left ? -1 : 1) * m * (d + 1) / (d + (m / Math.abs(x - a)));
}
The _ is the input value, scale is usually a linear scale for which domain and range have been set, a is the focus point in the output range, and d is the distortion parameter.
In other words: to determine the point at which a value is drawn on the distorted scale:
calculate the range position of the value based on the default/undistorted scale;
calculate its distance from the focal point ({distance}, Math.abs(x-a));
calculate the distance between edge of the graph and the focal point ({total distance}, m);
the returned value is offset from the focal point by {total distance} multiplied by
(d + 1) / (d + ({total distance} / {distance}) );
adjust as appropriate depending on whether the value is below or above the focal point.
For an input point that is half-way between the focal point and the edge of the graph on the undistorted scale, the inner fraction {total distance}/{distance} will equal 2. The outer fraction will therefore be (d+1)/(d+2). If d is 0 (no distortion), this will equal 1/2, and the output point will also be half-way between the focal point and the edge of the graph. As the distortion parameter, d, increases, that fraction also increases: at d=1, the output point would be 2/3 of the way from the focal point to the edge of the graph; at d=2, it would be 3/4 of the way to the edge of the graph; and so on.
In contrast, when the input value is very close to the focal point, {distance} is nearly 0, so the inner fraction approaches infinity and the outer fraction approaches 0, so the returned point will be very close to the focal point.
Finally, when the input value is very close to the edge of the graph, {distance} is nearly {total distance}, and both the inner and outer fractions will be nearly 1, so the returned point will also be very close to the edge of the graph.
Those last two identities we want to keep. We just want to change the relationship in between -- how the offset from focal point changes as the input point gets farther away from the focal point and closer to the edge of the graph. Changing the distortion parameter changes the amount of distortion in both near and far values equally. If you reduce the distortion parameter you also reduce the overall magnification, since all the data still has to fit in the same space.
The OP wanted to reduce the rate of change in magnification between cells near the focal point. Reducing the distortion parameter does this, but only by reducing the magnification overall. The ideal approach would be to change the relationship between distance from the focal point and degree of distortion.
My changed code for the same function is:
function fisheye(_) {
var x = scale(_),
left = x < a,
range = d3.extent(scale.range()),
min = range[0],
max = range[1],
m = left ? a - min : max - a,
dp = Math.pow(d, p);
if (m == 0) return left? min : max;
return a + (left ? -1 : 1) * m *
Math.pow(
(dp + 1)
/ (dp + (m / Math.abs(x-a) ) )
, p);
}
I've changed two things: I raise the fraction (d + 1)/(d + {total distance}/{distance}) to a power, and I also replace the original d value with it raised to the same exponent (dp). The first change is what changes the relationship, the second is just an adjustment so that a given distortion parameter will have approximately the same overall magnification effect regardless of the power parameter.
The fraction raised to the power will still be close to zero if the fraction is close to zero, and will still be close to one if the fraction is close to one, so the basic identities remain the same. However, when the power parameter is less than one, the rate of change will be shallower at the edges, and steeper in the middle. For a power parameter greater than 1, the rate of change will be quite steep at the edges and shallower near the focal point.
Example here: http://codepen.io/AmeliaBR/pen/zHqac
The horizontal fisheye scale has a square-root power function (p = 0.5), while the vertical has a square function (p = 2). Both have the same unadjusted distortion parameter (d = 6).
The effect of the square-root function is that even the farthest columns still have some visible width, but the change in column width near the focal point is significant. The effect of the power-of-2 function is that the rows far away from the focal point are compressed to nearly invisible height, but the rows above and below the focus are still of significant size. I think this latter version achieves what @piedpiper was hoping for.
I've of course also added a .power function to the fisheye scale in order to set the p parameter, and have set the default value for p to 1, which gives the same results as the original fisheye scale. I use the name power for the method to distinguish it from the exponent method of power scales, which would be used if the underlying scale (before distortion) had a power relationship.
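For experimenting with the formula outside of D3, here is the modified calculation as a standalone Python sketch (my code; it assumes x is already the undistorted range position produced by scale(_), and the function name and defaults are mine):
def fisheye_position(x, focus, lo, hi, distortion=3.0, power=1.0):
    """Distorted range position of x, given the focus point and the range [lo, hi]."""
    left = x < focus
    m = (focus - lo) if left else (hi - focus)
    if m == 0:
        return lo if left else hi
    if x == focus:
        return focus
    dp = distortion ** power
    frac = (dp + 1.0) / (dp + m / abs(x - focus))
    return focus + (-1.0 if left else 1.0) * m * frac ** power

# With the defaults (d=3, p=1) this matches the worked example above:
# the halfway point maps 4/5 of the way out, e.g. fisheye_position(75, 50, 0, 100) == 90.0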

Ideas for algorithm to generate random flower

Can anyone suggest any links, ideas or algorithms to generate flowers randomly, like the one in my profile pic? The profile-pic flower uses only a 10 x 10 grid and the algorithm is not truly random. I would also prefer that the new algorithm use a grid of about 500 x 500, or even better, allow the user to pick the size of the grid.
[Plant[][] is declared as int plant[10][10];]
public void generateSimpleSky(){
for(int w2=0;w2<10;w2++)
for(int w3=0;w3<10;w3++)
plant[w2][w3]=5;
}
public void generateSimpleSoil(){
for(int q=0;q<10;q++)
plant[q][9]=1;
}
public void generateSimpleStem(){
int ry=rand.nextInt(4);
plant[3+ry][8]=4;
xr=3+ry;
for(int u=7;u>1;u--){
int yu=rand.nextInt(3);
plant[xr-1+yu][u]=4;
xr=xr-1+yu;
}
}
public void generateSimpleFlower(){
plant[xr][2]=3;
for(int q2=1;q2<4;q2++)
if((2-q2)!=0)
plant[xr][q2]=2;
for(int q3=xr-1;q3<=xr+1;q3++)
if((xr-q3)!=0)
plant[q3][2]=2;
}
It sounds like a reasonably simple problem where you just generate 1 parameter at a time, possibly based on the output of the previous variables.
My model of a flower will be: It has just a reasonably upright stem, a perfectly round center, some amount of leaves on the stem on alternating sides, petals perfectly distributed around the center.
random() is just a random number within some chosen bounds, the bounds may be unique for each variable. random(x1, x2, ..., xn) generates a random number within some bounds dependent on the variables x1, x2, ..., xn (as in stemWidth < stemHeight/2, a reasonable assumption).
The Stem
stemXPosition = width / 2
stemHeight = random()
stemWidth = random(stemHeight)
stemColour = randomColour()
stemWidthVariationMax = random(stemWidth, stemHeight)
stemWidthVariationPerPixel = random(stemWidth, stemHeight)
stemWidthVariationMax/-PerPixel are for generating a stem that isn't perfectly straight (if you want to do something that complicated, a low PerPixel is for smoothness). Generate the stem using these as follows:
pixelRelative[y-position][0] := left x-position at that y-position relative to the stem
pixelRelative[y-position][1] := right x-position at that y-position relative to the stem
pixelRelative[0][0] = randomInRange(-stemWidthVariationMax, stemWidthVariationMax)
for each y > 0:
    pixelRelative[y][0] = max(min(randomInRange(pixelRelative[y-1][0] - stemWidthVariationPerPixel,
                                                pixelRelative[y-1][0] + stemWidthVariationPerPixel),
                                  stemWidthVariationMax),
                              -stemWidthVariationMax)
// pixelRelative[0][1] and pixelRelative[y][1] are generated the same way as pixelRelative[y][0]
for each y:
    pixelAbsolute[y][0] = width / 2 - stemWidth / 2 + pixelRelative[y][0]
    pixelAbsolute[y][1] = width / 2 + stemWidth / 2 + pixelRelative[y][1]
You can also use arcs to simplify things and go more than 1 pixel at a time.
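Here is the stem outline described above as a runnable Python sketch (my code; the random bounds are placeholders, and the per-pixel clamp keeps the wobble within +/- stemWidthVariationMax as in the pseudocode):
import random

def stem_outline(height, width, stem_width, variation_max, variation_per_pixel):
    """Return (left_x, right_x) pairs of the stem edges for each y pixel."""
    def wobble(previous):
        step = random.uniform(-variation_per_pixel, variation_per_pixel)
        return max(-variation_max, min(variation_max, previous + step))
    left = [random.uniform(-variation_max, variation_max)]
    right = [random.uniform(-variation_max, variation_max)]
    for _ in range(1, height):
        left.append(wobble(left[-1]))     # left edge drifts a little each pixel row
        right.append(wobble(right[-1]))   # right edge drifts independently
    centre = width / 2.0
    return [(centre - stem_width / 2.0 + l, centre + stem_width / 2.0 + r)
            for l, r in zip(left, right)]

outline = stem_outline(height=200, width=100, stem_width=8,
                       variation_max=4, variation_per_pixel=1)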
The Top
centerRadius = random(stemHeight)
petalCount = random() // probably >= 3
petalSize = random(centerRadius, petalCount)
It's not too easy to generate the petals: you need to step from 0 to 2*PI with a step size of 2*PI/petalCount and generate arcs around the circle. It requires either a good graphics API or some decent maths; the placement step is sketched below.
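A small Python sketch of just that placement step (my code): walk around the centre in steps of 2*PI/petalCount and compute each petal's centre point; drawing the actual arcs is left to whatever graphics API is used.
import math

def petal_centres(cx, cy, centre_radius, petal_count, petal_size):
    """Centre point and angle of each petal, spaced evenly around the flower centre."""
    petals = []
    for i in range(petal_count):
        angle = i * 2.0 * math.pi / petal_count
        px = cx + (centre_radius + petal_size / 2.0) * math.cos(angle)
        py = cy + (centre_radius + petal_size / 2.0) * math.sin(angle)
        petals.append((px, py, angle))
    return petals

# e.g. 6 petals of size 20 around a centre of radius 10 at (50, 40)
print(petal_centres(50, 40, 10, 6, 20))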
Here are some nicely generated flower tops, though seemingly not open source. Note that they don't have a center at all (or centerRadius = 0).
The Leaves
You could probably write an entire paper on this (like this one), but a simple idea would just be to generate a half circle, extend lines outward from there to meet at 2* the radius of the circle, and draw parallel lines on it.
Once you have a leaf generation algorithm:
leafSize = random(stemHeight) // either all leaves are the same size or generate the size for each randomly
leafStemLength = random(leafSize) // either all leaves have the same stem length or generate for each randomly
leafStemWidth = random(leafStemLength)
leaf[0].YPosition = random(stemHeight)
leaf[0].XSide = randomly either left or right
leaf[0].rotation = random between say 0 and 80 degrees
for each leaf i > 0:
    leaf[i].YPosition = random(stemHeight, leaf[i-1]) // only generate new leaves above previous leaves
    leaf[i].XSide = opposite of leaf[i-1].XSide
Last words
The way to determine the bounds of each random would be either to argue it out, or give it some fixed value, generate everything else randomly a few times, keep increasing / decreasing it until it starts to look weird.
10 x 10 versus 500 x 500 would probably require greatly different algorithms; I wouldn't recommend the above for anything below 100 x 100. Maybe generate a bigger image and simply shrink it using averaging or something.
Code
I started writing some Java code, but realised it might take a bit longer than I would like to spend on this, so I'll show you what I have so far.
// some other code, including these functions to generate random numbers:
float nextFloat(float rangeStart, float rangeEnd);
int nextInt(int rangeStart, int rangeEnd);
...
// generates a color somewhere between green and brown
Color stemColor = Color.getHSBColor(nextFloat(0.1, 0.2), nextFloat(0.5, 1), nextFloat(0.2, 0.8));
int stemHeight = nextInt(height/2, 3*height/4);
int stemWidth = nextInt(height/20, height/20 + height/5);
Color flowerColor = ??? // I just couldn't use the same method as above to generate bright colors, but I'm sure it's not too difficult
int flowerRadius = nextInt(Math.min(stemHeight, height - stemHeight)/4, 3*Math.min(stemHeight, height - stemHeight)/4);
