How to detect the offset in a digital signal?

Currently I am writing an algorithm to detect an offset in the values coming from an ADC. An example of a typical signal is shown in the figure below.
Such a signal can acquire an offset at any point in time due to external conditions. An example is shown in the figure below.
I would like to determine the exact point at which an offset is added to the signal.
Approach I have tried:
Calculate the moving average of around 50 values and compare it to the old mean value. If the difference is too large, conclude that there is an offset.
Problem with this approach: it also treats a peak in the signal as an offset, which is not really the case.
The offset has to be detected in real time. I am currently coding in C.
I have spent almost a week trying to figure out the solution, but as a last way out I am asking you guys.

This is a well-studied problem in signal processing, known as step detection:
https://en.wikipedia.org/wiki/Step_detection
There are many algorithms that deal with this problem, and whichever one you choose, you will probably have to do some parameter tweaking before it suits your needs. I would recommend starting with a sliding-window algorithm; a sample implementation of Student's t-test can be found here, and perhaps you can build on that.
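As an illustration (not the linked implementation), here is a minimal C sketch of the sliding-window idea using Welch's t-statistic; the window length and threshold are assumptions you would tune against recorded data.

#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Welch's t statistic between two adjacent windows of the signal. A large
   |t| means the window means differ by more than their spread explains,
   i.e. a likely sustained offset rather than a short peak. */
static double t_statistic(const double *a, const double *b, size_t n)
{
    double ma = 0.0, mb = 0.0, va = 0.0, vb = 0.0;
    for (size_t i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    for (size_t i = 0; i < n; i++) {
        va += (a[i] - ma) * (a[i] - ma);
        vb += (b[i] - mb) * (b[i] - mb);
    }
    va /= (n - 1); vb /= (n - 1);
    return (ma - mb) / sqrt(va / n + vb / n + 1e-12);
}

#define N 50            /* window length, matching the 50-value average */
#define T_THRESH 10.0   /* assumed threshold; tune on real data */

/* Slide a pair of N-sample windows over the signal and report the
   indices where the t statistic crosses the threshold. */
void detect_steps(const double *x, size_t len)
{
    for (size_t i = 0; i + 2 * N <= len; i++) {
        double t = t_statistic(&x[i], &x[i + N], N);
        if (fabs(t) > T_THRESH)
            printf("possible step near sample %zu (t = %.1f)\n", i + N, t);
    }
}

Note that a short peak inflates the within-window variance, which shrinks t, so a transient spike is less likely to trip the threshold than a sustained offset; that is exactly the failure mode of the plain moving-average comparison described in the question.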

Related

How to deal with asynchronous data in a Kalman filter

I'm implementing a Kalman filter which fuses 3D position data (provided by two different computer vision algorithms). I am modeling the problem with a 9-dimensional state vector (position, velocity, and acceleration). However, the data from each sensor does not arrive at the same time. Since I compute the velocity from the time step between the previous data point and the current one, two consecutive data points can be quite different yet separated by only a very small time step, making it seem like the position has changed rapidly.
I am wondering if anyone has insight into the best way to approach this problem: will the Kalman filter itself be tolerant of this behaviour? Or should I place all data received within a time window into a bin and perform the update/predict cycle less frequently on a batch of data? The resources I've seen for using Kalman filters in object tracking have used only one camera (i.e. synchronous data), so I'm having trouble finding information related to my use case.
Any help is very much appreciated! Thank you!
From what I gathered from your question and our conversation in the comments, let me first briefly describe the issue and then suggest a solution.
A quick recap
You have a system with two independent sensors, which take measurements at different rates (30 Hz and 5 Hz), possibly with some time jitter. The good news is that each such measurement is completely sufficient to perform an update step of your Kalman filter. Each measurement has a time stamp.
Another important point is that the measurements (maybe) have poor precision, so the change in position does not look plausible.
A possible solution
Define the smallest time interval for calling your Kalman filter, so that none of the received measurements has to wait too long to be processed. A rate of 100 Hz looks like a good first choice to me; in this case your dt would be 0.01 s.
Design your F and Q matrices based on the chosen dt (they both strongly depend on this value).
In each call without a measurement, execute only the prediction step. As soon as a measurement arrives, do an update. Your call sequence would look like this:
init()
predict()
predict()
predict()
predict()
update(sensor1)
predict()
update(sensor2)
update(sensor1)
predict()
predict()
update(sensor1)
predict()
and so on...
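A minimal sketch of this scheduling logic in C; kf_predict, kf_update and queue_pop are hypothetical placeholders for your own filter and input-queue code, not a real API:

#include <stdbool.h>

#define DT 0.01  /* 100 Hz base rate, as suggested above */

/* Hypothetical measurement record; both sensors produce the same shape. */
typedef struct { double pos[3]; double t; int sensor_id; } meas_t;

/* Hypothetical filter API: predict advances the state by DT using F and Q;
   update corrects the state with one measurement using that sensor's R. */
void kf_predict(void);
void kf_update(const meas_t *m);

/* Hypothetical queue: returns true and fills *out while a measurement
   with timestamp <= now is waiting. */
bool queue_pop(double now, meas_t *out);

/* Called every DT seconds by a timer. */
void kf_tick(double now)
{
    kf_predict();               /* always predict one DT step */
    meas_t m;
    while (queue_pop(now, &m))  /* apply every measurement that arrived */
        kf_update(&m);          /* sensor_id selects the proper R matrix */
}

The point of the fixed tick is that F and Q are designed once for a constant dt, while each sensor only contributes an update (with its own R) whenever its data happens to arrive.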
To deal with the precision issue you could use a reference signal (the ground truth). Analyze the error in each sensor reading for each channel (x, y, z) compared to the reference. A Kalman filter can work well ONLY with readings whose error is normally distributed with a zero mean. If you see some systematic offset, maybe you can get rid of it. From the observed error you can calculate the standard deviation (and the variance), so you can tell your filter how good the measurements are. This will be your R matrix.
If you don't have a reference you can take some measurements while standing still on the same place. So your reference position would be constant and you could have a look at the dispersion of the readings.
Tune the elements of your Q matrix to describe the possible dynamics of your state elements. A smaller Q element for position tells the filter not to change it too fast, so the (possibly) poor performance of your sensors will be partially compensated (think of a low-pass filter as intuition).
I hope it can help you. Please correct me if I understood something wrong.
It would be helpful to see a plot of your sensor readings (and if possible of the reference trajectory).

Find a peak in a signal

I want to know if there is any algorithm to find a peak in a signal in Java (Android). I'm working with ECG signals, using a real-time algorithm to draw the signal: I draw each point as I receive it, so I don't have the data for future points.
The signal is like this
From past experience, I can suggest the following simplistic idea off the top of my head. I'm assuming you're looking for the big spike, right? If not, I think the process below will still work; you just need to change your thresholds. Bear in mind, the idea below comes from experience but isn't tested!
1. Run the signal through a moving-average filter to smooth it out. (Critical!)
2. Find the discrete differential of this filtered signal.
3. Run the discrete differential through another moving-average filter. (Also critical!)
4. At each zero-crossing of the smoothed differential, compare the point n (either the left or the right sample of the zero-crossing) to its equivalent in the filtered signal from step 1. If this point is greater than some predetermined threshold, n is your big spike.
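A minimal C sketch of these four steps; the window size W and THRESH are assumptions to tune to your ECG's sampling rate and amplitude:

#include <stddef.h>
#include <stdio.h>

#define W 8             /* moving-average half-window; an assumed value */
#define THRESH 0.6      /* amplitude threshold for the "big spike" */

/* Centered moving average, used for steps 1 and 3. */
static void smooth(const double *x, double *y, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double acc = 0.0; size_t cnt = 0;
        for (size_t k = (i < W ? 0 : i - W); k <= i + W && k < n; k++) {
            acc += x[k]; cnt++;
        }
        y[i] = acc / cnt;
    }
}

/* f, d, ds are caller-provided scratch buffers of length n. */
void find_peaks(const double *x, double *f, double *d, double *ds, size_t n)
{
    if (n < 2) return;
    smooth(x, f, n);                    /* step 1: filtered signal */
    for (size_t i = 0; i + 1 < n; i++)  /* step 2: discrete differential */
        d[i] = f[i + 1] - f[i];
    d[n - 1] = 0.0;
    smooth(d, ds, n);                   /* step 3: smoothed differential */
    for (size_t i = 0; i + 1 < n; i++)  /* step 4: +/- zero crossings */
        if (ds[i] > 0.0 && ds[i + 1] <= 0.0 && f[i] > THRESH)
            printf("peak near sample %zu (value %.2f)\n", i, f[i]);
}

For strictly real-time use you would run the same logic sample by sample with ring buffers, at the cost of a fixed delay of roughly W samples per smoothing pass.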
I hope this helps. Feel free to ask if you have any questions. I implemented a similar algorithm in C++; it might help to look at: https://github.com/sawbg/avda/blob/master/src/process.hpp

Accurate parallel swathing algorithm for (GPS) guidance needed

I wrote a delphi program generating a gpx file as input for a "poor man's guidance system" for aerial spray by means of ultralight plane.
By and large, it produces a route (parallel swaths) using a gpx file as output.
The route engine is based on the Vincenty algorithm, which works fine for any WGS84 computation, but I can't match the accuracy of the grid generated by ExpertGPS of Topografix (a requirement).
I assume a 2D computation on the ellipsoid:
1) From the starting rtept (route point), compute the next rtept given a bearing and an arbitrary distance (swath length).
2) Compute the next rtept relative to the previous bearing (90° turn) and another arbitrary distance (swath distance).
3) Redo 1) with the last rtept as the starting point, but in the opposite direction, and so on.
What's wrong with it?
You do not describe your Pascal implementation of Vincenty's earth ellipsoid model, so the following is speculation:
The model makes use of numerous trig functions (atan2, cos, sin, etc.). Depending on whether you use the internal Delphi functions or your own versions, there is a possibility of insufficient precision in the calculations. The precision of the value of pi used in your calculations could also affect the accuracy you require.
Floating-point arithmetic can cause decimal-place errors. It will make a difference whether you use Single, Double or Real. I believe some of the internal Delphi functions have changed between versions, so the version of Delphi you are using may affect how an internal function is implemented.
If implemented accurately, Vincenty's formula is supposed to be accurate to within 0.5 mm: amazing accuracy. If there are rounding errors or a lack of precision in your Delphi implementation, the positional errors can be significantly larger.
Consider the accuracy of your GPS information. Depending on how many satellites the GPS receiver is using at any one time, the accuracy of the positional information changes; errors on the order of 50 feet or more are possible. Additionally, the refresh of positional information on the GPS receiver is not necessarily instantaneous, so if the swath 'turns' occur rapidly, you will have to ensure the GPS has updated at the turning point.
Your procedure for calculating the pattern seems reasonable, so look at your implementation of Vincenty's algorithm in your Delphi code.
This list is not exhaustive; I imagine others can improve it dramatically. What I mention is based on my experience with GPS and various versions of Delphi, and on what I could recall off the top of my head.
Something you might try is to compare your calculations of distance/bearing against examples provided on the Internet; there are several online calculators. If you have not been there, the Aviation Formulary is an excellent place to find examples of other navigational tricks: http://williams.best.vwh.net/avform.htm. A comparison will let you gain confidence in the precision of the Delphi implementation of Vincenty's algorithm against data calculated by mathematicians. Simply put, your implementation of Vincenty may not be precise. Then again, the error may be elsewhere.
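One cheap cross-check along those lines (a sketch, written in C here but trivial to port to Pascal): compute the spherical haversine distance alongside your Vincenty distance. The sphere differs from the WGS84 ellipsoid by at most roughly 0.5%, so a much larger disagreement would point at a bug in the Vincenty code rather than at the model.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define EARTH_R 6371008.8            /* mean earth radius, metres */
#define DEG2RAD (M_PI / 180.0)

/* Spherical (haversine) great-circle distance in metres between two
   points given in decimal degrees. Less accurate than Vincenty on the
   ellipsoid, but good enough to catch gross implementation errors. */
double haversine_m(double lat1, double lon1, double lat2, double lon2)
{
    double p1 = lat1 * DEG2RAD, p2 = lat2 * DEG2RAD;
    double dp = (lat2 - lat1) * DEG2RAD, dl = (lon2 - lon1) * DEG2RAD;
    double a = sin(dp / 2) * sin(dp / 2)
             + cos(p1) * cos(p2) * sin(dl / 2) * sin(dl / 2);
    return 2.0 * EARTH_R * atan2(sqrt(a), sqrt(1.0 - a));
}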
I am doing similar farm GPS guidance for a ground rig, just with Android. It is great for a second tractor helping to follow previous A-B tracks, especially when they disappear for a bit.
GPS repeatability from one day to the next will give you larger distances. Expensive systems use dGPS (2 cm-10 cm); without dGPS you can be 5-30 metres different. A simple solution is to recalibrate at a known location; cheaper light bars use this method.
Drift: as above, except it relates to movement during the job. It is mostly unnoticeable (<20 cm over 3 hrs), but it can rarely jump 1-2 metres, I think when satellites connect or disconnect. Again, recalibrate regularly at known coordinates, e.g. the spray fill point.
GPS update speed: most phones update at 1 Hz, some only every ~3 seconds between fixes; at say 50 km/h that is 41.66 m between fixes. A ground rig at 18 km/h is easier, and there will be tracks after the first run. Try a Bluetooth GPS at 10 Hz, check the update speed, and as mentioned, fast turns are a problem.
The accuracy of your inputs, and whether your guidance uses dGPS, will make a huge difference.
Once you are off your line, say 5 metres with 100 metres to the next point, then at 50 metres you are still 2.5 metres off, unless your guidance takes you back to the route rather than just to the next coordinates.
I am not using Vincenty, as I can 'bump' back onto the line manually, and over 1 km across the difference is <30 cm according to the only reference I saw; instead I take 2 points and create parallel points across.
Hope these ideas help your situation.

Comparing 2 one dimensional signals

I have the following problem: I have 2 signals over time. They are from the same source so they should be the same. I want to check if they really are.
Complications:
they may be measured with different sample rates
the start/end times do not correlate; the measurements do not start and end at the same time
there may be a time offset between the two signals
My thoughts revolve around Fourier transforms, convolution, and statistical methods for comparison. Can someone post some links where I can find more information on how to handle this?
You can easily correct for the phase by just shifting them so their centers of mass line up. (Or alternatively, in the Fourier domain just multiplying by the inverse of the phase of the first coefficient.)
Similarly, if you want to line up the signals given only partial data, you can just cross-correlate and take the maximal value (which is again easy to do in the Fourier domain).
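For illustration, a brute-force time-domain version of that cross-correlation in C; as noted, an FFT-based version is preferable for long signals:

#include <stddef.h>

/* Returns the lag (in samples) at which b best matches a, searching
   lags in [-maxlag, maxlag]. Correlations are normalized by the overlap
   length so short overlaps at large lags are not unfairly favored. */
long best_lag(const double *a, const double *b, size_t n, long maxlag)
{
    long best = 0;
    double bestv = -1e300;
    for (long lag = -maxlag; lag <= maxlag; lag++) {
        double acc = 0.0;
        long count = 0;
        for (size_t i = 0; i < n; i++) {
            long j = (long)i + lag;
            if (j >= 0 && (size_t)j < n) { acc += a[i] * b[j]; count++; }
        }
        if (count > 0) acc /= count;
        if (acc > bestv) { bestv = acc; best = lag; }
    }
    return best;
}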
That leaves the only tricky part of this process as dealing with the sampling rates. If you know a priori what the sample rates are (and if they are related by a rational number), you can just use sinc interpolation/downsampling to rescale them to a common sampling rate:
https://ccrma.stanford.edu/~jos/st/Bandlimited_Interpolation_Time_Limited_Signals.html
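A naive sketch of bandlimited (sinc) interpolation in C; resampling to a new rate is then y[m] = sinc_interp(x, n, m * rate_in / rate_out). This is O(n) per output sample, so practical resamplers use windowed-sinc kernels as described at the link above:

#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Evaluate the bandlimited reconstruction of x (n samples) at the
   fractional position t, measured in input-sample units. */
double sinc_interp(const double *x, size_t n, double t)
{
    double acc = 0.0;
    for (size_t k = 0; k < n; k++) {
        double u = M_PI * (t - (double)k);
        acc += x[k] * (fabs(u) < 1e-9 ? 1.0 : sin(u) / u);
    }
    return acc;
}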
If you don't know the sampling rate, you may be a bit screwed. Technically, you can try just brute forcing over all the different rescalings of your signal, but doing this tends to be either slow or else give mediocre results.
As a last suggestion, if you just want to match sounds exactly you can try using the cepstrum and verifying that the peaks of the signal are close enough to within some tolerance. This type of analysis is used a lot in sound and speech recognition, with some refinements to make it operate a bit more locally. It tends to work best with frequency modulated data like speech and music:
http://en.wikipedia.org/wiki/Cepstrum
Fourier transformation does sound like the right way.
There is too much mathematical background to just start explaining here, so if you really want to know what's going on (because I don't think you can just use the FT without understanding it), you should use this reference from MIT OpenCourseWare: http://ocw.mit.edu/courses/mathematics/18-103-fourier-analysis-theory-and-applications-spring-2004/lecture-notes/
Hope it helped.
If you are working on a Linux box and the waveforms that need to be processed have already been recorded, you can try using the file command to display details about the recording. It gives you the sampling rate when invoked on a wav file, though I am not sure what format you are recording in.
If the signals are time-shifted with respect to each other, you may try convolving one with a delta function at increasing delays and then comparing. In MATLAB, conv and the like should be good enough.
These are just 'crude' attempts (almost like hacking at the problem). There may be shift-invariant algorithms that do a better job.
Hope that helps.

Algorithms needed on filtering the noise caused by the vibration

For example, you measure the data coming from some device; it can be the mass of an object moving on a bridge. Because the object is moving, the data will vibrate with an amplitude that depends on the mass of the object: the bigger the mass, the bigger the vibrations.
Are there any methods for filtering this kind of noise from the data?
Maybe using some vibration formulas? I have no idea what kind of formulas or algorithms (filters) can be used here. Please suggest anything.
EDIT 2:
A better picture; I just drew it for better understanding:
Not a very good picture, but from that graph you can see that the frequency is the same every time, while the amplitude changes periodically. Something like that is what I get when there are no objects on the moving road (conveyor belt): vibration near the zero value.
When an object moves, there are the same waves with changing amplitude.
The graph suggests that there may be some force applied to the system which produces forced oscillations, so I am interested in removing that kind of noise. I do not know what force causes these oscillations. Soon I hope to get some data on the non-moving road, with and without an object on it, for comparison with the moving-road case.
What you have in your last plot is basically an amplitude modulated oscillation coming from a function like:
f[x] := 10 * (4 + Sin[x]) * Sin[80 * x]
The constants have been chosen to match your plot (using just a rule of thumb); a plot of this function reproduces the shape of your graph.
That isn't "noise" (although maybe some noise is there too), but it can be filtered easily.
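As a sketch of one easy way to do that filtering in C (assuming the quantity of interest is the slowly varying amplitude, i.e. the 4 + Sin[x] envelope, rather than the 80x carrier): rectify the signal and low-pass the result. ALPHA is an assumed smoothing constant, to be set well below the carrier frequency; the rectified envelope also carries a constant scale factor (2/pi for a pure sine) that calibration absorbs.

#include <math.h>
#include <stddef.h>

#define ALPHA 0.02   /* one-pole low-pass coefficient; an assumption */

/* Crude AM demodulation: full-wave rectification followed by a
   single-pole IIR low-pass. env[i] tracks the modulation envelope
   while the fast carrier oscillation is smoothed away. */
void envelope(const double *x, double *env, size_t n)
{
    double y = 0.0;
    for (size_t i = 0; i < n; i++) {
        double r = fabs(x[i]);   /* rectify */
        y += ALPHA * (r - y);    /* low-pass */
        env[i] = y;
    }
}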
Let's see your data for the static and moving payloads ....
Edit
Based on your response to several comments, and based in my previous experience with weighting devices:
You are interfacing with the physical world, not just getting input from a mouse and keyboard. It is very important that you understand the device, how it works, and how it is designed.
You need a calibration procedure. You have to use several master weights to be sure that the device works properly and linearly over the whole scale, and that the static case is measured much better than your dynamic needs require.
You'll not be able to predict whether you can measure several loads on the conveyor until you do some experiments and look very carefully at the resulting plots.
You need to be sure that a load placed anywhere on the conveyor shows the same reading, or at least you should be able to correlate reading and position.
As I said before, you need a lot of information, and it seems it is not available. I always worked as a team with the engineers designing the device.
Don't hesitate to add more info ...
Have you tried filters with lowpass characteristics? There are different approaches to smoothing data (e.g. Savitzky-Golay, Gaussian, moving average), but often a simple N-point median filter is already sufficient.
It really depends on what you're after.
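For instance, a minimal N-point median filter in C; N is an assumed window length (odd, so the median is a single sample), and unlike a moving average the median simply discards short spikes:

#include <stddef.h>
#include <string.h>

#define N 5   /* window length; an assumption to tune */

/* Median of one N-sample window via insertion sort (fine for small N). */
static double median_of(const double *w)
{
    double s[N];
    memcpy(s, w, sizeof s);
    for (int i = 1; i < N; i++) {
        double v = s[i];
        int j = i - 1;
        while (j >= 0 && s[j] > v) { s[j + 1] = s[j]; j--; }
        s[j + 1] = v;
    }
    return s[N / 2];
}

/* Slide the window over the input; edge samples are left untouched. */
void median_filter(const double *x, double *y, size_t n)
{
    for (size_t i = 0; i + N <= n; i++)
        y[i + N / 2] = median_of(&x[i]);
}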
Take a look at this book:
The Scientist and Engineer's Guide to Digital Signal Processing
You can download it for free. In particular, check chapters 14 and 15.
If the frequency changes with mass and you're trying to measure mass, why not measure the frequency of the oscillations and use that as your primary measure?
Otherwise you need a tunable notch filter: figure out the frequency of the "noise" and tune the notch filter to that.
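A sketch of such a tunable notch in C, using the widely published audio-EQ-cookbook biquad coefficients; f0 and q are the values you would set from the measured vibration frequency (retuning is just another notch_init call):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Second-order (biquad) notch filter. f0 = frequency to reject,
   fs = sample rate, q = sharpness of the notch. */
typedef struct { double b0, b1, b2, a1, a2, x1, x2, y1, y2; } notch_t;

void notch_init(notch_t *f, double f0, double fs, double q)
{
    double w = 2.0 * M_PI * f0 / fs;
    double alpha = sin(w) / (2.0 * q);
    double a0 = 1.0 + alpha;
    f->b0 = 1.0 / a0;
    f->b1 = -2.0 * cos(w) / a0;
    f->b2 = 1.0 / a0;
    f->a1 = -2.0 * cos(w) / a0;
    f->a2 = (1.0 - alpha) / a0;
    f->x1 = f->x2 = f->y1 = f->y2 = 0.0;
}

/* Process one sample (direct form I). */
double notch_step(notch_t *f, double x)
{
    double y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
             - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1; f->x1 = x;
    f->y2 = f->y1; f->y1 = y;
    return y;
}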
Another book to try is Lyons' Understanding Digital Signal Processing.
In order to smooth the signal, I'd average the previous 2*n samples, where n is the maximum expected wavelength (in samples) of the vibrations.
This should eliminate most of the noise.
If you have some idea of the range of frequencies, you could use a simple average, as long as the measurement period is sufficiently long to give you the level of accuracy you want. The more wavelengths' worth of data you average over, the smaller the error contributed by a partial wavelength.
I'd suggest first simulating/modeling this in software like Matlab.
Data you'll need to consider:
The expected range of vibration frequencies
The measurement accuracy you want to achieve
The expected range of mass you'll want to measure
The relationship between mass and vibration amplitude
You should be able to apply the same principles as noise-cancelling microphones: put two sensors out, then subtract the secondary sensor's (farther away from the good signal source) signal from the primary sensor's (closer to the good signal source) signal.
Obviously, this works best if the "noise" will reach both sensors fairly equally while the "signal" reaches the primary sensor much more strongly.
For things like sound, this is pretty easy to do in the sensor itself, which makes your software simpler and faster. Depending on what you're measuring, it might be easier to use multiple sets of hardware and do the cancellation in software.
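In software, the simplest form of this cancellation is a scaled subtraction; a sketch in C, where the gain k would be calibrated from a noise-only recording (a least-squares estimate is shown):

#include <stddef.h>

/* Least-squares gain: how much of the reference appears in the primary,
   estimated from a recording with no object present. */
double calibrate_k(const double *p, const double *r, size_t n)
{
    double pr = 0.0, rr = 0.0;
    for (size_t i = 0; i < n; i++) { pr += p[i] * r[i]; rr += r[i] * r[i]; }
    return pr / rr;
}

/* Subtract the scaled reference (noise-only) sensor from the primary. */
void cancel(const double *primary, const double *reference,
            double *out, size_t n, double k)
{
    for (size_t i = 0; i < n; i++)
        out[i] = primary[i] - k * reference[i];
}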
If you can characterize the frequency spectra of the unwanted vibration noise, you might be able to synthesize a set of (near) minimum phase notch or band reject filter(s) to allow you to acquire your desired signal at your desired S/N ratio with minimized latency or data set size.
Filtering noisy digital signals is straightforward, as previous posters have noted, and there are lots of references. You have not, however, stated your objectives clearly, so we cannot point you in a good direction. Are you looking for a single measurement of a single object on a bridge? (Then see the other answers.)
Are you monitoring traffic on this bridge and weighing each entity as it passes by? Then you need to determine when entities are on the sensor and when they are not. Typically, as long as the sensor's noise floor is significantly lower than the signal you're measuring, this can be accomplished by simple thresholding (see the sketch after this answer).
Are you trying to measure the vibrations of the bridge caused by other vehicles? In which case you need either a more expensive sensor if you're having problems doing this, or a clearer measuring objective.
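A sketch of that thresholding in C, with hysteresis (two thresholds) so that noise near a single threshold doesn't make the decision chatter; T_HI and T_LO are assumptions to set from the sensor's noise floor:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define T_HI 5.0   /* enter "entity present" above this level */
#define T_LO 2.0   /* leave it below this level */

/* Report the samples where an entity enters and leaves the sensor. */
void segment(const double *x, size_t n)
{
    bool present = false;
    for (size_t i = 0; i < n; i++) {
        if (!present && x[i] > T_HI) {
            present = true;
            printf("entity enters at sample %zu\n", i);
        } else if (present && x[i] < T_LO) {
            present = false;
            printf("entity leaves at sample %zu\n", i);
        }
    }
}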
