I need to determine a location based on N Wi-Fi access points and their signal strengths. Is the best way to do this using a least-squares fit algorithm? How would I go about implementing such an algorithm?
Least squares fitting should work if the noise in your signal strength is Gaussian, i.e. follows a normal distribution.
What you are really looking for is the maximum likelihood estimator of the mean of the signal strength, and you hope that that estimate corresponds in some way to a distance.
"Least squares corresponds to the maximum likelihood criterion if the
experimental errors have a normal distribution." -- http://en.wikipedia.org/wiki/Least_squares
So if your signal strength noise is not normally distributed (Gaussian) you are out of luck.
Of course you will also have a standard deviation for your estimate, which will let you know how sure you can be of your location estimate. The more Wi-Fi signals, and the more data points from each signal you can record, the better your estimate will be.
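As an illustration only, here is a minimal sketch (in C++) of such a least-squares position fix: each RSSI reading is converted to a distance with a log-distance path-loss model, and the sum of squared range residuals is minimised by plain gradient descent. The txPower and path-loss exponent are made-up calibration values, and a real implementation would use a proper solver (Gauss-Newton or Levenberg-Marquardt) rather than fixed-step descent.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct AccessPoint { double x, y, rssi; };

    // Convert RSSI (dBm) to an estimated distance in metres using a log-distance
    // path-loss model; txPower and n are made-up calibration values.
    double rssiToDistance(double rssi, double txPower = -40.0, double n = 2.5) {
        return std::pow(10.0, (txPower - rssi) / (10.0 * n));
    }

    // Minimise sum_i (||p - ap_i|| - d_i)^2 with crude gradient descent.
    void locate(const std::vector<AccessPoint>& aps, double& px, double& py) {
        px = py = 0.0;
        for (const AccessPoint& ap : aps) { px += ap.x; py += ap.y; }
        px /= aps.size(); py /= aps.size();          // start at the centroid

        const double step = 0.01;
        for (int iter = 0; iter < 5000; ++iter) {
            double gx = 0.0, gy = 0.0;
            for (const AccessPoint& ap : aps) {
                double d   = rssiToDistance(ap.rssi);
                double dx  = px - ap.x, dy = py - ap.y;
                double r   = std::sqrt(dx * dx + dy * dy) + 1e-9;
                double res = r - d;                  // range residual for this AP
                gx += 2.0 * res * dx / r;            // gradient of the squared residual
                gy += 2.0 * res * dy / r;
            }
            px -= step * gx;
            py -= step * gy;
        }
    }

    int main() {
        std::vector<AccessPoint> aps = {
            {0, 0, -55}, {10, 0, -60}, {0, 10, -62}, {10, 10, -70}};
        double x, y;
        locate(aps, x, y);
        std::printf("estimated position: (%.2f, %.2f)\n", x, y);
    }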
I have been trying to find time to do this too, please do tell how it turns out.
I have 2 digital pseudorandom signals. The second signal is a copy of the first but it is shifted in time as well as scaled in time and has random noise added to it.
In mathspeak: s1=f(t) while s2=f(a*t+c) + noise.
I begin my sampling of this pair of signals at an arbitrary time t thus the relation of this time to the "beginning" of the pseudorandom sequence is not known.
When a=1, I can just use cross-correlation techniques to find c, but a!=1 throws a monkey wrench into the problem.
What would be the optimal approach to find a and c given these two signals?
Right now I am brute-forcing many combinations of a and c and it takes hours on modern computers to find them.
I am not looking for a ready code to solve this. Just a good general algorithm.
P.S.
I can read C and C++ well
There is not much information about your signals (for example, are both signals the same length, is your time shift cyclic, or do you use zero padding when you shift in time...). But I can give general advice. You can try using a minimization or fitting package/library. I have used MPFIT: A MINPACK-1 Least Squares Fitting Library in C with success for a task similar to yours.
Your task has 2 parameters: a and c. Your s2 data set is the "observed data points" in MPFIT, and f(a*t+c) is the model. MPFIT finds the best a and c, which minimize the sum of squared differences between s2 and f(a*t+c). One disadvantage of this method is that you must set initial values for a and c, but a good initial approximation is usually known.
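To make the model concrete, here is a sketch (C++, with hypothetical names) of the quantity such a fitter minimises: s1 plays the role of f(t), it is sampled at a*i + c by linear interpolation (with zero padding outside the record) and compared against s2. This is only the residual computation, not the MPFIT call itself; in practice you would hand a residual function like this to the library, seeded with a rough initial guess for a and c (for example from a coarse grid or cross-correlation search).

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Linearly interpolate s1 at a fractional index t (zero outside the record).
    double sampleS1(const std::vector<double>& s1, double t) {
        if (s1.size() < 2 || t < 0.0 || t >= double(s1.size() - 1)) return 0.0;
        int i = static_cast<int>(t);
        double frac = t - i;
        return (1.0 - frac) * s1[i] + frac * s1[i + 1];
    }

    // Sum of squared residuals for a candidate (a, c); the fitter minimises this.
    double sumSquaredError(const std::vector<double>& s1,
                           const std::vector<double>& s2, double a, double c) {
        double sse = 0.0;
        for (size_t i = 0; i < s2.size(); ++i) {
            double r = s2[i] - sampleS1(s1, a * i + c);
            sse += r * r;
        }
        return sse;
    }

    int main() {
        std::vector<double> s1, s2;
        for (int i = 0; i < 200; ++i) s1.push_back(std::sin(0.1 * i) + std::sin(0.37 * i));
        for (int i = 0; i < 150; ++i) s2.push_back(sampleS1(s1, 1.02 * i + 5.0));   // a=1.02, c=5
        std::printf("SSE at the true (a, c): %f\n", sumSquaredError(s1, s2, 1.02, 5.0));
        std::printf("SSE at a wrong guess:   %f\n", sumSquaredError(s1, s2, 1.00, 0.0));
    }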
I have a set of angles. The distribution could be roughly described as:
there are usually several values very close (0.0-1.0 degrees apart) to the correct solution
there are also noisy values that are very far from the correct result, even in the opposite direction
Is there a common solution/strategy for such a problem?
For multidimensional data I would use RANSAC, but I have the impression that it is unusual to apply RANSAC to 1-dimensional data. Another problem is computing the mean of an angle. I have read some other posts about how to calculate the mean of angles using vectors, but I wonder if there isn't a particular fitting solution that deals with both issues already.
You can use RANSAC even in this case; all the necessary conditions (minimal sample, error of a data point, consensus set) are met. Your minimal sample will be 1 point, a randomly picked angle (although you can try all of them; it might be fast enough). Then all the angles (data points) with an error (you can use just the absolute distance, modulo 360) smaller than some threshold (e.g. 1 degree) will be considered inliers, i.e. part of the consensus set.
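A minimal sketch of that idea in C++ (the 1-degree threshold is just an example value): every angle is tried as the minimal sample, inliers are counted with a circular distance, and the winning consensus set is averaged on the unit circle.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // Angular distance in degrees, taking the 360-degree wrap-around into account.
    double circularDistanceDeg(double a, double b) {
        double d = std::fabs(std::fmod(a - b, 360.0));
        return d > 180.0 ? 360.0 - d : d;
    }

    double ransacAngleDeg(const std::vector<double>& angles, double threshold = 1.0) {
        std::vector<double> bestInliers;
        for (double hypothesis : angles) {                     // minimal sample = 1 angle
            std::vector<double> inliers;
            for (double a : angles)
                if (circularDistanceDeg(a, hypothesis) <= threshold) inliers.push_back(a);
            if (inliers.size() > bestInliers.size()) bestInliers = inliers;
        }
        // Mean of the consensus set, computed via unit vectors to handle periodicity.
        double sx = 0.0, sy = 0.0;
        for (double a : bestInliers) {
            sx += std::cos(a * kPi / 180.0);
            sy += std::sin(a * kPi / 180.0);
        }
        return std::atan2(sy, sx) * 180.0 / kPi;
    }

    int main() {
        std::vector<double> angles = {359.2, 0.4, 1.1, 0.0, 183.7, 92.5, 0.7};
        std::printf("robust angle estimate: %.2f degrees\n", ransacAngleDeg(angles));
    }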
If you want to play with it a bit more, you can make the results more stable by adding some local optimisation, see e.g.:
Lebeda, Matas, Chum: Fixing the Locally Optimized RANSAC, BMVC 2012.
You could try other approaches, e.g. the median, or fitting a mixture of a Gaussian and a uniform distribution, but you would have to deal with the periodicity of the signal somehow, so I guess RANSAC should be your choice.
I have to calculate the phase difference between two signals. I'm not very strong mathematically, but I understand the FFT algorithm and am interested in implementing it on my electronic signals to calculate the exact phase difference between them. I have read lots of documents and papers. From some papers I gained the following understanding:
1. The FFT is good when an integer number of periods is sampled.
2. And when your frequency of interest falls exactly on one of the FFT bins.
3. There are other methods, such as the 3- and 4-parameter sine-wave fits, which claim accurate phase difference based on the LSE (least squared error) method.
I need to calculate the phase difference between two signals (current and voltage) in real time, where the frequency of my signals won't be constant, but at any instant both signals will have the same frequency (~50 kHz).
Considerations:
My signals will be filtered using an FIR filter, and the SNR will be moderate.
Noise: the first harmonic of the fundamental + Gaussian noise
My questions and concerns are:
1. What should the sampling frequency be?
2. How long should the FFT be / how many cycles of the input signals should be sampled?
According to this document, SWFM is the best method:
http://www.metrology.pg.gda.pl/full/2005/M&MS_2005_427.pdf
As I'm weak in mathematics, can you please help me understand the basics of this method? What are the input signals to this algorithm?
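This is not an answer to the SWFM question, but to make point 2 above concrete: when both records contain an integer number of periods and the frequency is known, the phase difference can be read off a single DFT bin, which over a whole number of periods is essentially a least-squares fit of a sine and cosine at that frequency. A minimal C++ sketch, with made-up values for fs and f0:

    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    // Correlate the samples with a complex exponential at frequency f0 (one DFT bin).
    std::complex<double> singleBin(const std::vector<double>& x, double f0, double fs) {
        std::complex<double> acc(0.0, 0.0);
        for (size_t n = 0; n < x.size(); ++n) {
            double w = 2.0 * kPi * f0 * n / fs;
            acc += x[n] * std::complex<double>(std::cos(w), -std::sin(w));
        }
        return acc;
    }

    // Phase of the voltage minus phase of the current, in radians.
    double phaseDifference(const std::vector<double>& voltage,
                           const std::vector<double>& current, double f0, double fs) {
        return std::arg(singleBin(voltage, f0, fs)) - std::arg(singleBin(current, f0, fs));
    }

    int main() {
        const double fs = 1.0e6, f0 = 50.0e3;        // example: 1 MHz sampling, 50 kHz signals
        std::vector<double> voltage, current;
        for (int n = 0; n < 2000; ++n) {             // 2000 samples = 100 whole periods
            double t = n / fs;
            voltage.push_back(std::sin(2.0 * kPi * f0 * t));
            current.push_back(std::sin(2.0 * kPi * f0 * t - 0.5));   // 0.5 rad lag
        }
        std::printf("phase difference: %f rad (expected 0.5)\n",
                    phaseDifference(voltage, current, f0, fs));
    }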
We have a set of radio nodes in close proximity to each other and would like to allocate the frequencies for them to minimize overlap. To get complete coverage of the area, radio channels need to be oversubscribed and so we will have nearby radios transmitting on the same frequency.
Sample data:
5 Frequencies
343 Radios
4158 Edges
My current best guess is to randomly generate a population of frequency allocations and to swap frequencies between radios until the best score does not improve for 10 generations. The score is the sum of 1/range^2 for radios on the same frequency.
Each edge is the distance between two radios, corrected for walls and floors. Edges longer than twice the maximum radio range have been culled from the list.
Is there a better way?
This is basically a graph-coloring problem with a twist. Rather than all proper colorings being equally good, some proper colorings are better than others, as defined by your scoring algorithm.
I think your genetic approach is practical and will yield good (if not provably optimal) solutions, but I would definitely suggest looking at some graph-coloring papers and seeing how applicable they are. It is very likely that you will get some great ideas for deciding how your algorithm should consider the available choices.
I agree that a simulation based on random initial assignment followed by some optimization is a good approach, but the optimization procedure you describe does not seem ideal, if I understand correctly (you're planning to swap frequencies at random, if I read you right). At each optimization step you could pick a "reasonable" improvement by taking one radio from each frequency group and considering the 5*4/2 = 10 possible swaps of frequencies between two of them, then either choosing the best, or (say) one of those with a positive delta in score, with probabilities proportional to the deltas.
In the spirit of simulated annealing, once the overall score seems to have more or less stabilized, you may want to switch for a small number of steps to a "high temperature" (high randomness) mode, where you just pick a set of 5 radios and swap all of their frequencies, e.g. with a circular permutation of the assignments. Do that a few times, then go back to the "cooling down" procedure of the previous paragraph (which tries to get a cheap simulation of a steepest-descent step ;-).
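A sketch of that "best of the 5*4/2 = 10 pairwise swaps" step in C++ (the Radio layout, the random test data and the 1/range^2 score are illustrative stand-ins for the asker's actual structures, and the "high temperature" shuffles are omitted):

    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    struct Radio { double x, y; int freq; };

    // Sum of 1/range^2 over pairs of radios sharing a frequency (lower is better).
    double score(const std::vector<Radio>& r) {
        double s = 0.0;
        for (size_t i = 0; i < r.size(); ++i)
            for (size_t j = i + 1; j < r.size(); ++j)
                if (r[i].freq == r[j].freq) {
                    double dx = r[i].x - r[j].x, dy = r[i].y - r[j].y;
                    s += 1.0 / (dx * dx + dy * dy);
                }
        return s;
    }

    // One "cooling" step: pick one radio per frequency, evaluate the 10 pairwise
    // frequency swaps among them, and keep the best improving one (if any).
    bool bestSwapStep(std::vector<Radio>& radios, int numFreqs, std::mt19937& rng) {
        std::vector<int> pick(numFreqs, -1);
        for (int f = 0; f < numFreqs; ++f) {
            std::vector<int> members;
            for (size_t i = 0; i < radios.size(); ++i)
                if (radios[i].freq == f) members.push_back(static_cast<int>(i));
            if (members.empty()) return false;
            pick[f] = members[std::uniform_int_distribution<size_t>(0, members.size() - 1)(rng)];
        }
        double best = score(radios);
        int bi = -1, bj = -1;
        for (int i = 0; i < numFreqs; ++i)
            for (int j = i + 1; j < numFreqs; ++j) {
                std::swap(radios[pick[i]].freq, radios[pick[j]].freq);
                double s = score(radios);
                if (s < best) { best = s; bi = i; bj = j; }
                std::swap(radios[pick[i]].freq, radios[pick[j]].freq);   // undo
            }
        if (bi < 0) return false;                     // no improving swap found
        std::swap(radios[pick[bi]].freq, radios[pick[bj]].freq);
        return true;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> pos(0.0, 100.0);
        std::vector<Radio> radios(40);
        for (Radio& r : radios)
            r = {pos(rng), pos(rng), std::uniform_int_distribution<int>(0, 4)(rng)};
        std::printf("initial score: %f\n", score(radios));
        for (int step = 0; step < 2000; ++step) bestSwapStep(radios, 5, rng);
        std::printf("final score:   %f\n", score(radios));
    }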
My quick stab at it would be to use a thin plate spline (or possibly a similar, cleverer linear algebra technique) to fit a plane to the function of frequency density. The average 'altitude' of each plane (per frequency) would then tell you whether a frequency is overused (i.e. when it's higher than the others); the slope would be an indication of the spatial distribution.
I'm reading data from a device which measures distance. My sample rate is high so that I can measure large changes in distance (i.e. velocity) but this means that, when the velocity is low, the device delivers a number of measurements which are identical (due to the granularity of the device). This results in a 'stepped' curve.
What I need to do is to smooth the curve in order to calculate the velocity. Following that I then need to calculate the acceleration.
How to best go about this?
(Sample rate up to 1000Hz, calculation rate of 10Hz would be ok. Using C# in VS2005)
The Wikipedia entry from moogs is a good starting point for smoothing the data, but it does not help you make a decision.
It all depends on your data, and the needed processing speed.
Moving Average
This will flatten the top values. If you are interested in the minimum and maximum values, don't use it. Also, I think a moving average will influence your measurement of the acceleration, since it flattens the data (a bit), so the acceleration will appear smaller. It all comes down to the needed accuracy.
Savitzky–Golay
A fast algorithm, about as fast as the moving average, that preserves the heights of peaks. It is somewhat harder to implement, and you need the correct coefficients. I would pick this one.
Kalman filters
If you know the distribution, this can give you good results (it is used in GPS navigation systems). Maybe somewhat harder to implement. I mention this because I have used them in the past. But they are probably not a good choice for a starter in this kind of stuff.
The above will reduce noise on your signal.
The next thing you have to do is detect the start and end points of the "acceleration". You could do this by creating a derivative of the signal. The point(s) where the derivative crosses zero are probably the peaks in your signal, and might indicate the start and end of the acceleration.
You can then take the second derivative to get the minimum and maximum acceleration itself.
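Since the coefficients are the fiddly part of Savitzky-Golay, here is a minimal C++ sketch using the standard 5-point quadratic smoothing coefficients (-3, 12, 17, 12, -3)/35, followed by a central-difference derivative to get velocity and acceleration. The window length, polynomial order and fake data are example choices only.

    #include <cstdio>
    #include <vector>

    // 5-point quadratic Savitzky-Golay smoothing (coefficients -3, 12, 17, 12, -3 / 35).
    std::vector<double> sgSmooth5(const std::vector<double>& x) {
        static const double c[5] = {-3.0, 12.0, 17.0, 12.0, -3.0};
        std::vector<double> y(x);                    // copy: the edges are left unsmoothed
        for (size_t i = 2; i + 2 < x.size(); ++i) {
            double acc = 0.0;
            for (int k = -2; k <= 2; ++k) acc += c[k + 2] * x[i + k];
            y[i] = acc / 35.0;
        }
        return y;
    }

    // Central-difference derivative; dt is the sample interval in seconds.
    std::vector<double> derivative(const std::vector<double>& x, double dt) {
        std::vector<double> d(x.size(), 0.0);
        for (size_t i = 1; i + 1 < x.size(); ++i)
            d[i] = (x[i + 1] - x[i - 1]) / (2.0 * dt);
        return d;
    }

    int main() {
        // Fake "stepped" distance data sampled at 1000 Hz.
        std::vector<double> distance;
        for (int i = 0; i < 50; ++i) distance.push_back((i / 5) * 0.01);

        std::vector<double> smoothed = sgSmooth5(distance);
        std::vector<double> velocity = derivative(smoothed, 0.001);
        std::vector<double> accel    = derivative(velocity, 0.001);
        std::printf("velocity at sample 25:     %f\n", velocity[25]);
        std::printf("acceleration at sample 25: %f\n", accel[25]);
    }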
You need a smoothing filter. The simplest would be a "moving average": just calculate the average of the last n points.
The question here is how to determine n; can you tell us more about your application?
(There are other, more complicated filters. They vary on how they preserve the input data. A good list is in Wikipedia)
Edit: For 10 Hz, average the last 100 values.
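A minimal sketch of that moving average in C++ (the 100-sample window corresponds to averaging the last 100 values of a 1000 Hz stream; the stepped test data is made up):

    #include <cstdio>
    #include <deque>

    // Running average over the last `window` samples.
    class MovingAverage {
    public:
        explicit MovingAverage(size_t window) : window_(window), sum_(0.0) {}
        double add(double sample) {
            buf_.push_back(sample);
            sum_ += sample;
            if (buf_.size() > window_) { sum_ -= buf_.front(); buf_.pop_front(); }
            return sum_ / buf_.size();
        }
    private:
        size_t window_;
        double sum_;
        std::deque<double> buf_;
    };

    int main() {
        MovingAverage avg(100);                       // last 100 samples at 1000 Hz
        for (int i = 0; i < 500; ++i) {
            double distance = (i / 50) * 0.01;        // fake stepped readings
            double smooth = avg.add(distance);
            if (i % 100 == 99) std::printf("sample %d: smoothed %.4f\n", i, smooth);
        }
    }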
Moving averages are generally terrible, but they work well for white noise. Both moving averages and Savitzky-Golay boil down to a correlation, and are therefore very fast and can be implemented in real time. If you need higher-order information like the first and second derivatives, SG is a good choice. The magic of SG lies in the constant correlation coefficients needed for the filter: once you have decided the length and the degree of the polynomial to fit locally, the coefficients only need to be found once. You can compute them using R (sgolay) or Matlab.
You can also estimate a noisy signal's first derivative via the Savitzky-Golay best-fit polynomials - these are sometimes called Savitzky-Golay derivatives - and typically give a good estimate of the first derivative.
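For what it's worth, the simplest of those Savitzky-Golay derivative filters is the 5-point quadratic one, whose centre-point coefficients are (-2, -1, 0, 1, 2)/10. A minimal C++ sketch (window length and test data are example choices):

    #include <cstdio>
    #include <vector>

    // 5-point quadratic Savitzky-Golay first derivative:
    // slope at the centre = (-2*x[i-2] - x[i-1] + x[i+1] + 2*x[i+2]) / (10*dt).
    std::vector<double> sgDerivative5(const std::vector<double>& x, double dt) {
        std::vector<double> d(x.size(), 0.0);
        for (size_t i = 2; i + 2 < x.size(); ++i)
            d[i] = (-2.0 * x[i - 2] - x[i - 1] + x[i + 1] + 2.0 * x[i + 2]) / (10.0 * dt);
        return d;
    }

    int main() {
        std::vector<double> x;
        for (int i = 0; i < 20; ++i) x.push_back(0.5 * i * i);   // position ~ 0.5*t^2
        std::vector<double> v = sgDerivative5(x, 1.0);
        std::printf("estimated derivative at sample 10: %f (exact: %f)\n", v[10], 10.0);
    }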
Kalman filtering can be very effective, but it's heavier computationally - it's hard to beat a short convolution for speed!
Paul
CenterSpace Software
In addition to the above articles, have a look at Catmull-Rom Splines.
You could use a moving average to smooth out the data.
In addition to GvS's excellent answer above, you could also consider smoothing/reducing the stepping effect of your averaged results using some general curve fitting, such as cubic or quadratic splines.