I'm trying to get an accurate location with the help of 3 or more iBeacons. There are two steps:
Getting accurate distance from iBeacon's RSSI.
Applying Trilateration algorithm to calculate the location.
For now I'm not getting a precise distance. iOS's "accuracy" value is not truly a distance. I'm applying different formulas to calculate distance, but I'm unable to get precise values. So far I can get a reasonably precise distance up to 10 meters, but I need distances of at least 20 meters.
(I'm using the iBeacon's default txPower of -59 dBm, a measured power of C3, and a broadcast interval of 300 ms.)
Any help would be highly appreciated. Thanks!
Unfortunately, it is really not possible to get very accurate distance estimates over 10 meters. The problem is that the signal gets relatively weak at that distance and the noise overwhelms the RSSI measurement -- at least for quick responses. The signal-to-noise ratio, as it is known, is a general problem in all radio applications. For Bluetooth LE applications, you either have to accept inaccurate distance estimates at greater distances, or average the RSSI samples over a very long period to help filter out the noise.
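If it helps, here is a minimal sketch of that kind of smoothing (my own illustration, not tied to any particular SDK): an exponential moving average of the raw RSSI, converted to a rough distance with the log-distance path-loss model. The path-loss exponent n and the 1 m reference power of -59 dBm are assumptions you would calibrate for your own beacons and environment.

#include <cmath>

// Exponential moving average of RSSI plus a rough distance estimate.
struct RssiFilter {
    double smoothed = 0.0;
    bool   initialized = false;
    double alpha = 0.1;               // smaller alpha = heavier smoothing

    void addSample(double rssi) {
        if (!initialized) { smoothed = rssi; initialized = true; }
        else              { smoothed = alpha * rssi + (1.0 - alpha) * smoothed; }
    }

    // Log-distance path-loss model: rssi = txPower - 10 * n * log10(d)
    // txPower is the expected RSSI at 1 m; n is the (assumed) path-loss exponent.
    double estimateDistance(double txPower = -59.0, double n = 2.5) const {
        return std::pow(10.0, (txPower - smoothed) / (10.0 * n));
    }
};

The longer you feed samples into the filter (or the smaller you make alpha), the more noise you average out, at the cost of slower response to actual movement.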
Once you have your distances, you could use the following code sample to find the co-ordinates of the trilaterated point:
// Trilateration of a point from three access points, all assumed to lie in the same plane.
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    // Co-ordinates of the three access points (the 1st is preferably set as the origin)
    double xa = 0,  ya = 0;   // AP 1
    double xb = 10, yb = 0;   // AP 2
    double xc = 5,  yc = 10;  // AP 3

    double p1[2] = {xa, ya};
    double p2[2] = {xb, yb};
    double p3[2] = {xc, yc};
    double triPt[2] = {0, 0}; // trilaterated point

    // Distances to each access point (measured using RSSI values)
    double distA = 5 * 1.41;  // distance from AP 1
    double distB = 5 * 1.41;  // distance from AP 2
    double distC = 5;         // distance from AP 3

    double ex[2], ey[2];
    double i = 0, k = 0, x = 0, y = 0;

    // ex = unit vector from AP 1 to AP 2
    for (int j = 0; j < 2; j++)
        ex[j] = p2[j] - p1[j];
    double d = sqrt(pow(ex[0], 2) + pow(ex[1], 2));
    for (int j = 0; j < 2; j++)
        ex[j] = ex[j] / d;

    // i = signed distance of AP 3 along ex
    for (int j = 0; j < 2; j++)
        i += (p3[j] - p1[j]) * ex[j];

    // ey = unit vector perpendicular to ex, in the plane of the three access points
    for (int j = 0; j < 2; j++)
        ey[j] = p3[j] - p1[j] - i * ex[j];
    double eyNorm = sqrt(pow(ey[0], 2) + pow(ey[1], 2));
    for (int j = 0; j < 2; j++)
        ey[j] = ey[j] / eyNorm;

    // k = signed distance of AP 3 along ey
    for (int j = 0; j < 2; j++)
        k += ey[j] * (p3[j] - p1[j]);

    // Intersection of the three circles, expressed in the (ex, ey) frame
    x = (pow(distA, 2) - pow(distB, 2) + pow(d, 2)) / (2 * d);
    y = (pow(distA, 2) - pow(distC, 2) + pow(i, 2) + pow(k, 2)) / (2 * k) - (i / k) * x;

    // Co-ordinates of the trilaterated point
    for (int j = 0; j < 2; j++)
        triPt[j] = p1[j] + x * ex[j] + y * ey[j];

    // Print the values
    cout << triPt[0] << endl << triPt[1] << endl;
    return 0;
}
Related
I'm reading the book Realistic Ray Tracing and I couldn't understand the box filter code:
void boxFilter(Vector2* samples, int num_samples)
{
    for (int i = 0; i < num_samples; i++)
    {
        samples[i].x = samples[i].x - 0.5f;
        samples[i].y = samples[i].y - 0.5f;
    }
}
In my opinion, a "filter" is an array of weights: sampling generates positions used to produce rays, and the filter combines the results (so the filter method should return a float[], but the function above returns a Vector2[]). What does this code mean?
The basic idea of box filtering is that no matter where on an image plane "pixel" the sample lands, a box filter causes the renderer to act as if it landed in the exact center of that pixel.
I haven't read that particular book, but I'm guessing that in that code fragment, sample[].x and .y are the int (or previously floor()ed) locations where the returned ray hits the image plane, in pixel coordinates. Therefore, subtracting .5 from each puts the sample in the geometric center of each pixel, hence, it's a box filter.
For an in-depth discussion of box filtering (and other filters), see Physically-Based Rendering, Chapter 7, "Sampling and Reconstruction".
I want a distance measure to find similarity between images.
What I have tried so far:
1) Low-level distance metrics such as normalized cross-correlation: this retrieves similar images based on some threshold values, but it can't retrieve images which are rotated or shifted. Also, if the brightness of an image is reduced, that image is not retrieved even though it is of the same type.
2) Bhattacharyya coefficient: it retrieves rotated or shifted images, but does not detect images whose intensity (brightness) is reduced.
3) Global features like SURF: these handle rotated (30 degrees) and transformed images, but are no help for images with an intensity difference.
What I need: a distance metric for image similarity which recognizes images whose brightness is reduced as well as images which are transformed (rotated and shifted).
I would like a combination of these two metrics (cross-correlation + Bhattacharyya coefficient).
Will mutual information help me with this issue? Or can anyone please suggest a new similarity metric for this problem? I've tried Googling, but the issue is broad and the answers were irrelevant. Can anyone guide me here? Thanks in advance.
I implemented mutual information and the Kullback-Leibler distance to find similarity in facades. It worked really well; how it works is explained here:
Image-based Procedural Modeling of Facades
All the steps are explained in the paper. They are not for the similarity of whole images but for the symmetry of image parts, though it may work for image comparison as well. It is just an idea; you should try it. One thing where I really see a problem is rotation: I don't think this procedure is rotation invariant. You could also look into visual information retrieval techniques for your problem.
First you have to compute the mutual information. For that you create an accumulator array of size 257 x 257: one bin for every pair of gray values (the joint distribution), plus an extra row and column at index 256 for the marginal distributions.
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++)
    {
        int value1 = image1[y * width + x];
        int value2 = image2[y * width + x];
        // first the joint distribution
        distributionTable[value1][value2]++;
        // and now the marginal distributions
        distributionTable[value1][256]++;
        distributionTable[256][value2]++;
    }
Now that you have the distribution table, you can compute the Kullback-Leibler distance (here size is the number of pixels, i.e. width * height):
double sum = 0;
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++)
    {
        int value1 = image1[y * width + x];
        int value2 = image2[y * width + x];
        // normalize the counts to probabilities (note the floating-point division)
        double ab = distributionTable[value1][value2] / (double) size;
        double a  = distributionTable[value1][256]    / (double) size;
        double b  = distributionTable[256][value2]    / (double) size;
        // Kullback-Leibler distance; skip empty bins to avoid log(0)
        if (ab > 0 && a > 0 && b > 0)
            sum += ab * Math.log(ab / (a * b));
    }
A smaller sum tells you that the similarity/symmetry between the two images/regions is very high. This should work well if the images just have a brightness difference. Maybe there are other distances which are invariant against rotation.
Maybe you should try SURF, SIFT or something like this instead; then you can match the feature points. The higher the match score, the higher the similarity. I think this is a better approach, because you don't have to care about scale, brightness and rotation differences. It is also quick to implement with OpenCV.
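As a rough illustration of that feature-matching idea (my own sketch, not code from the answer or the paper), something like the following OpenCV snippet computes a crude similarity score. It uses ORB rather than SURF/SIFT because ORB ships with the default OpenCV build, and the score (fraction of cross-checked matches) is only one possible choice.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Returns a value in [0, 1]: the fraction of keypoints that find a
// cross-checked match in the other image.
double matchScore(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);
    if (desc1.empty() || desc2.empty()) return 0.0;

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // The more keypoints that find a partner, the more similar the images
    // are assumed to be.
    return static_cast<double>(matches.size()) /
           std::max<std::size_t>(1, std::min(kp1.size(), kp2.size()));
}

You could additionally filter the matches by distance or with a RANSAC homography check before counting them, which tends to make the score more robust.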
I am trying to implement Bezier curves for an assignment. I want to move a ball (using Bezier curves) by giving my function an array of key frames. The function should give me all the frames in between the key frames (or control points), but although I'm using the formula found on Wikipedia, it is not really working.
Here's my code:
private void interpolate(){
    float x, y, b, t = 0;
    frames = new Frame[keyFrames.length];
    for(int i = 0; i < keyFrames.length; ++i){
        t += 0.001;
        b = Bint(i, keyFrames.length, t);
        x = b * keyFrames[i].x;
        y = b * keyFrames[i].y;
        frames[i] = new Frame(x, y);
    }
}

private float Bint(int i, int n, float t){
    float Cni = fact(n) / (fact(i) * fact(n - i));
    return Cni * pow(1 - t, n - i) * pow(t, i);
}
Also, I've noticed that the frames[] array should be much bigger, but I can't find any other text on this that is more programmer-friendly.
Thanks in advance.
There are lots of things that don't look quite right here.
Doing it this way, your interpolation will pass exactly through the first and last control points, but not through the others. Is that what you want?
If you have lots of key frames, you're using a very high-degree polynomial for your interpolation. Polynomials of high degree are notoriously badly behaved; you may get your position oscillating wildly in between the key frame positions. (This is one reason why the answer to question 1 should probably be no.)
Assuming for the sake of argument that you really do want to do this, your value of t should go from 0 at the start to 1 at the end. Do you happen to have exactly 1001 of these key frames? If not, you'll be doing the wrong thing.
Evaluating these polynomials with lots of calls to fact and pow is likely to be inefficient, especially if n is large.
I'm reluctant to go into much detail about what you should do without knowing more about the scope of your assignment -- it will do no one any good for Stack Overflow to do your homework for you! What have you already been told about Bezier curves? What exactly does your assignment ask you to do?
EDITED to add:
The simplest way to do interpolation using Bezier curves is probably this. Have one (cubic) Bezier curve between each pair of key-points. The endpoints (first and last control points) of each Bezier curve are those keypoints. You need two more control points. For motion to be smooth as you move through a given keypoint, you need (keypoint minus previous control point) = (next control point minus keypoint). So you're choosing a single vector at each keypoint, which will determine where the previous and subsequent control points go. As you move through each keypoint, you'll be moving in the direction of that vector, and the longer the vector is the faster you'll be moving. (If the vector is zero then your cubic Bezier degenerates into a simple straight-line path.)
Choosing that vector so that everything looks nice is highly nontrivial, but you probably aren't really being asked to do that at this stage. So something pretty simple will probably be good enough. You might, e.g., take the vector to be proportional to (next keypoint minus previous keypoint). You'll need to do something a bit different at the start and end of your path if you do that.
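To make this concrete, here is a minimal sketch of that piecewise-cubic idea (my own illustration, in C++ rather than the question's Processing code). Each keypoint's tangent is taken proportional to (next keypoint minus previous keypoint), clamped at the ends; the 1/6 factor below is one common convention for turning that tangent into Bezier control points, not the only possible choice.

#include <vector>

struct Pt { double x, y; };

// Evaluate one cubic Bezier segment at parameter t in [0, 1].
Pt cubicBezier(Pt p0, Pt c0, Pt c1, Pt p1, double t) {
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0 * p0.x + b1 * c0.x + b2 * c1.x + b3 * p1.x,
             b0 * p0.y + b1 * c0.y + b2 * c1.y + b3 * p1.y };
}

// Sample 'samplesPerSegment' frames on each segment between consecutive keypoints.
std::vector<Pt> interpolate(const std::vector<Pt>& key, int samplesPerSegment) {
    std::vector<Pt> out;
    for (std::size_t i = 0; i + 1 < key.size(); ++i) {
        // Tangents derived from neighbouring keypoints (clamped at the ends).
        Pt prev = (i == 0) ? key[i] : key[i - 1];
        Pt next = (i + 2 < key.size()) ? key[i + 2] : key[i + 1];
        Pt c0 = { key[i].x     + (key[i + 1].x - prev.x)   / 6.0,
                  key[i].y     + (key[i + 1].y - prev.y)   / 6.0 };
        Pt c1 = { key[i + 1].x - (next.x       - key[i].x) / 6.0,
                  key[i + 1].y - (next.y       - key[i].y) / 6.0 };
        for (int s = 0; s < samplesPerSegment; ++s)
            out.push_back(cubicBezier(key[i], c0, c1, key[i + 1],
                                      s / double(samplesPerSegment)));
    }
    if (!key.empty()) out.push_back(key.back());
    return out;
}

Because every segment passes through its two keypoints and the tangents agree on both sides of each keypoint, the motion goes through every key frame and stays smooth, unlike a single high-degree curve.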
Finally got what I needed! Here's what I did:
private void interpolate() {
    float t = 0;
    float x, y, b;
    for(int f = 0; f < frames.length; f++) {
        x = 0;
        y = 0;
        for(int i = 0; i < keyFrames.length; i++) {
            b = Bint(i, keyFrames.length - 1, map(t, 0, time, 0, 1));
            x += b * keyFrames[i].x;
            y += b * keyFrames[i].y;
        }
        frames[f] = new Frame(x, y);
        t += partialTime;
    }
}

private void createInterpolationData() {
    time = keyFrames[keyFrames.length - 1].time - keyFrames[0].time;
    noOfFrames = 60 * time;
    partialTime = time / noOfFrames;
    frames = new Frame[ceil(noOfFrames)];
}
Currently, I am working on a project that groups 3D points from a dataset by specifying connectivity as a minimum Euclidean distance. My algorithm right now is simply a 3D adaptation of the naive flood fill.
size_t PointSegmenter::growRegion(size_t & seed, size_t segNumber) {
    size_t numPointsLabeled = 0;
    //alias for points to avoid retyping
    vector<Point3d> & points = _img.points;
    deque<size_t> ptQueue;
    ptQueue.push_back(seed);
    points[seed].setLabel(segNumber);
    while (!ptQueue.empty()) {
        size_t currentIdx = ptQueue.front();
        ptQueue.pop_front();
        points[currentIdx].setLabel(segNumber);
        numPointsLabeled++;
        vector<int> newPoints = _img.queryRadius(currentIdx, SEGMENT_MAX_DISTANCE, MATCH_ACCURACY);
        for (int i = 0; i < (int)newPoints.size(); i++) {
            int newIdx = newPoints[i];
            Point3d &newPoint = points[newIdx];
            if (!newPoint.labeled()) {
                newPoint.setLabel(segNumber);
                ptQueue.push_back(newIdx);
            }
        }
    }
    //NOTE to whoever wrote the other code, the compiler optimizes i++
    //to ++i in cases like these, so please don't change them just for speed :)
    for (size_t i = seed; i < points.size(); i++) {
        if (!points[i].labeled()) {
            //search for an unlabeled point to serve as the next seed
            seed = i;
            return numPointsLabeled;
        }
    }
    return numPointsLabeled;
}
This code snippet is then run again for the new seed, and _img.queryRadius() is a fixed-radius search using the ANN library:
vector<int> Image::queryRadius(size_t index, double range, double epsilon) {
    int k = kdTree->annkFRSearch(dataPts[index], range*range, 0);
    ANNidxArray nnIdx = new ANNidx[k];
    kdTree->annkFRSearch(dataPts[index], range*range, k, nnIdx);
    vector<int> outPoints;
    outPoints.reserve(k);
    for (int i = 0; i < k; i++) {
        outPoints.push_back(nnIdx[i]);
    }
    delete[] nnIdx;
    return outPoints;
}
My problem with this code is that it runs way too slowly for large datasets. If I'm not mistaken, this code will do a search for every single point, and each search is O(N log N), giving a total time complexity of O(N^2 log N).
In addition, deletions are relatively expensive in k-d trees if I remember correctly, but not deleting points creates its own problem: each point can be searched hundreds of times, once by every neighbor close to it.
So my question is, is there a better way to do this? Especially in a way that will grow linearly with the dataset?
Thanks for any help you may be able to provide
EDIT
I have tried using a simple sorted list like dash-tom-bang suggested, but the result was even slower than what I was using before. I'm not sure if it was the implementation, or if it is simply too slow to iterate through every point and check the Euclidean distance (even when just using the squared distance).
Are there any other ideas? I'm honestly stumped right now.
I propose the following algorithm:
1) Compute the 3D Delaunay triangulation of your data points.
2) Remove all the edges that are longer than your threshold distance; this is O(N) when combined with step 3.
3) Find the connected components in the resulting graph, which is O(N) in size; this is done in O(N α(N)).
The bottleneck is step 1, which can be done in O(N^2) or even O(N log N) according to this page: http://www.ncgia.ucsb.edu/conf/SANTA_FE_CD-ROM/sf_papers/lattuada_roberto/paper.html. However, it's definitely not a 100-line algorithm.
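As a sketch of steps 2 and 3 (my own illustration, assuming step 1 has already produced an edge list of point-index pairs from the triangulation), a union-find pass does the thresholding and labelling:

#include <cmath>
#include <utility>
#include <vector>

struct P3 { double x, y, z; };

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { for (int i = 0; i < n; ++i) parent[i] = i; }
    int find(int a) {
        if (parent[a] != a) parent[a] = find(parent[a]);  // path compression
        return parent[a];
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

std::vector<int> label(const std::vector<P3>& pts,
                       const std::vector<std::pair<int,int>>& delaunayEdges,
                       double maxDist) {
    UnionFind uf(static_cast<int>(pts.size()));
    for (const auto& e : delaunayEdges) {
        const P3& a = pts[e.first];
        const P3& b = pts[e.second];
        double d2 = (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z);
        if (d2 <= maxDist * maxDist)        // step 2: keep only short edges
            uf.unite(e.first, e.second);    // step 3: merge components
    }
    std::vector<int> segment(pts.size());
    for (std::size_t i = 0; i < pts.size(); ++i)
        segment[i] = uf.find(static_cast<int>(i));
    return segment;                          // points sharing a value share a segment
}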
When I did something along these lines, I chose an "origin" outside of the dataset somewhere and sorted all of the points by their distance to that origin. Then I had a much smaller set of points to choose from at each step, and I only had to go through the "onion skin" region around the point being considered. You would check neighboring points until the distance to the closest point is less than the width of the range you're checking.
While that worked well for me, a similar version of that can be achieved by sorting all of your points along one axis (which would represent the "origin" being infinitely far away) and then just checking points again until your "search width" exceeds the distance to the closest point so far found.
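A minimal sketch of that axis-sort idea (my own illustration): sort the points by x once, then for each query point scan outwards in both directions and stop as soon as the x difference alone exceeds the search radius, since no later point can possibly be in range.

#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

// 'sortedByX' must already be sorted by the x co-ordinate, e.g. with
// std::sort(points.begin(), points.end(),
//           [](const P3& a, const P3& b) { return a.x < b.x; });
std::vector<int> neighboursWithin(const std::vector<P3>& sortedByX,
                                  std::size_t queryIdx, double radius) {
    std::vector<int> result;
    const P3& q = sortedByX[queryIdx];
    // Scan outwards in both directions along the sorted axis.
    for (int dir = -1; dir <= 1; dir += 2) {
        for (long i = static_cast<long>(queryIdx) + dir;
             i >= 0 && i < static_cast<long>(sortedByX.size()); i += dir) {
            const P3& p = sortedByX[i];
            if (std::fabs(p.x - q.x) > radius) break;  // nothing further can be in range
            double d2 = (p.x-q.x)*(p.x-q.x) + (p.y-q.y)*(p.y-q.y) + (p.z-q.z)*(p.z-q.z);
            if (d2 <= radius * radius) result.push_back(static_cast<int>(i));
        }
    }
    return result;  // indices into the sorted array
}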
The points should be better organized. To search more efficiently, instead of a vector<Point3d> you need some sort of hash map where a hash collision implies that two points are close to each other (so you use hash collisions to your advantage). You can, for instance, divide the space into cubes with sides equal to SEGMENT_MAX_DISTANCE, and use a hash function that returns a triplet of ints instead of just one int, where each part of the triplet is calculated as point.<corresponding_dimension> / SEGMENT_MAX_DISTANCE.
Now for each point in this new set you search only for points in the same cube, and in adjacent cubes of space. This greatly reduces the search space.
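A minimal sketch of that cube-hashing idea (my own illustration; the hash combination below is just one simple choice):

#include <cmath>
#include <unordered_map>
#include <vector>

struct P3 { double x, y, z; };

struct CellKey {
    long x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct CellHash {
    std::size_t operator()(const CellKey& k) const {
        std::size_t h = std::hash<long>()(k.x);
        h = h * 31 + std::hash<long>()(k.y);
        h = h * 31 + std::hash<long>()(k.z);
        return h;
    }
};

using Grid = std::unordered_map<CellKey, std::vector<int>, CellHash>;

// Which cube a point falls into, for a cube side of 'cellSize'
// (e.g. SEGMENT_MAX_DISTANCE).
CellKey cellOf(const P3& p, double cellSize) {
    return { static_cast<long>(std::floor(p.x / cellSize)),
             static_cast<long>(std::floor(p.y / cellSize)),
             static_cast<long>(std::floor(p.z / cellSize)) };
}

Grid buildGrid(const std::vector<P3>& pts, double cellSize) {
    Grid grid;
    for (std::size_t i = 0; i < pts.size(); ++i)
        grid[cellOf(pts[i], cellSize)].push_back(static_cast<int>(i));
    return grid;
}

// For a query point, the candidate neighbours are the points stored in its own
// cell and the 26 adjacent cells; only those need an exact distance check.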
I have a device that records GPS data. A reading is taken every 2-10 seconds. For an activity taking 2 hours there are a lot of GPS points.
Does anyone know of an algorithm for compressing the dataset by removing redundant data points? i.e. if a series of data points all lie on a straight line, then only the start and end points are required.
Check out the Douglas-Peucker algorithm, which is used to simplify a polyline. I've used it successfully to reduce the number of GPS waypoints transmitted to clients for display purposes.
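For reference, here is a compact sketch of Douglas-Peucker (my own 2D, recursive illustration, not production code): any point farther than epsilon from the chord between the first and last point forces a split; otherwise the whole run collapses to its endpoints.

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double perpDistance(const Pt& p, const Pt& a, const Pt& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

static void simplify(const std::vector<Pt>& pts, std::size_t first, std::size_t last,
                     double epsilon, std::vector<Pt>& out) {
    double maxDist = 0.0;
    std::size_t index = first;
    for (std::size_t i = first + 1; i < last; ++i) {
        double d = perpDistance(pts[i], pts[first], pts[last]);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist > epsilon) {
        simplify(pts, first, index, epsilon, out);   // keep the farthest point,
        simplify(pts, index, last, epsilon, out);    // recurse on both halves
    } else {
        out.push_back(pts[last]);                    // drop everything in between
    }
}

std::vector<Pt> douglasPeucker(const std::vector<Pt>& pts, double epsilon) {
    std::vector<Pt> out;
    if (pts.empty()) return out;
    out.push_back(pts.front());
    if (pts.size() > 1) simplify(pts, 0, pts.size() - 1, epsilon, out);
    return out;
}

For GPS tracks you would typically run this on projected (metric) coordinates and pick epsilon to match the accuracy you need, e.g. a few metres.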
You probably want to approximate your path x(t), y(t) with a polynomial. Are you looking for something like this: http://www.youtube.com/watch?v=YtcZXlKbDJY ?
You can remove redundant points by performing a very basic simplification based on the slope between subsequent points.
Here is an incomplete bit of C++ code presenting a possible algorithm:
struct Point
{
    double x;
    double y;
};

double calculate_slope(Point const& p1, Point const& p2)
{
    //      dy     y2 - y1
    // m = ---- = ---------
    //      dx     x2 - x1
    // (note: this divides by zero for vertical segments, so guard against that in real code)
    return ((p2.y - p1.y) / (p2.x - p1.x));
}

// 1. Read the first two points from the GPS stream source
Point p0 = ... ;
Point p1 = ... ;

// 2. Accept p0, as it's the first point

// 3. Calculate the slope
double m0 = calculate_slope(p0, p1);

// 4. The next point now becomes the previous one
p0 = p1;

// 5. Read another point
p1 = ... ;
double m1 = calculate_slope(p0, p1);

// 6. Compare the slopes
double const tolerance = 0.1; // choose your tolerance
double const diff = m0 - m1;
if (!((diff <= tolerance) && (diff >= -tolerance)))
{
    // 7. The slope changed, so accept p1 and carry its slope forward;
    //    otherwise p1 is eliminated as redundant
    m0 = m1;
}

// Repeat steps 4 to 7 for the rest of the points.
I hope it helps.
There is a research paper on Compressing GPS Data on Mobile Devices.
Additionally, you can look at this CodeProject article on Writing GPS Applications. I think the problem you will have is not for straight points, but curved roads. It all depends on how precise you want your path to be.
The code given above has a couple of issues that might make it unsuitable:
"same slope" tolerance measures difference in gradient rather than angle, so NNE to NNW is considered a much bigger difference than NE to SE (assuming y axis runs North-South).
One way of addressing this would be for the tolerance to measure how the dot product of two segments compares with the product of their lengths (a short sketch of this test follows the list below). It may help to remember that the dot product of two vectors is the product of their lengths and the cosine of the angle between them. However, see the next point.
Considers only slope error rather than position error, so a long ENE segment followed by long ESE segment is just as likely to be compressed to a single segment as a string of short segments alternating between ENE and ESE.
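A small sketch of that dot-product test, as referenced above (my own illustration; the cosine tolerance value is just an example):

#include <cmath>

struct Vec { double x, y; };

// Two segments point in "the same direction" when the cosine of the angle
// between them exceeds the tolerance, which avoids the gradient problems
// described in the list above.
bool sameDirection(Vec a, Vec b, double cosTolerance /* e.g. 0.998, about 3.6 degrees */) {
    double dot = a.x * b.x + a.y * b.y;
    double lenProduct = std::hypot(a.x, a.y) * std::hypot(b.x, b.y);
    if (lenProduct == 0.0) return true;          // degenerate segment
    return dot >= cosTolerance * lenProduct;     // cos(angle) >= tolerance
}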
The approach that I was thinking of would be to look at what vector graphics applications do to convert a list of cursor coordinates into a sequence of curves. E.g. see lib2geom's bezier-utils.cpp. Note that (i) it's almost entirely position-based rather than direction-based; and (ii) it gives cubic bézier curves as output rather than a polyline, though you could use the same approach to give polyline output if that's preferred (in which case the Newton-Raphson step becomes essentially just a simple dot product).