I have a little problem with this kind of signal
As you can see, there are some missing values and later a big amplitude. What's the best way to filter this kind of signal while preventing ringing artifacts? I was thinking about a median filter.
Thank you
I'm implementing a Kalman Filter which fuses 3d position data (provided from 2 different computer vision algorithms). I am modeling the problem with a 9-dimensional state vector (position, velocity, and acceleration). However, the data from each sensor does not come at the same time. Since I compute the velocity by considering the time step between the reception of the previous data and the current data point, two consecutive data points can be quite different but separated by only a very small time step, thus making it seem like the position has changed rapidly.
I am wondering if anyone has insight or direction on the best way to approach this problem: will the Kalman filter itself be tolerant of this behaviour? Or should I place all data received within a time window into a bin and perform the update/predict cycle less frequently on a batch of data? The resources I've seen for using a Kalman filter in object tracking have used only one camera (i.e. synchronous data), so I'm having trouble finding information related to my use case.
Any help is very much appreciated! Thank you!
From what I gathered from your question and our conversation in the comments, let me first briefly describe the issue and then suggest a solution.
A quick recap
You have a system with two independent sensors, which take measurements at different rates (30 Hz and 5 Hz) and maybe with some time jitter. The good news is that each such measurement is completely sufficient to perform an update step of your Kalman filter. Each measurement has a time stamp.
Another important point is that the measurements may have poor precision, so that the change in position looks implausible.
A possible solution
Define a smallest time interval for calling your Kalman filter, so that none of the received measurements has to wait too long to be processed. A 100 Hz rate looks to me like a good first choice. In this case your dt would be 0.01 s.
Design your F and Q matrices based on the chosen dt (they both strongly depend on this value), as in the sketch below.
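For instance, here is a minimal untested sketch in Python/NumPy of F and Q for a constant-acceleration model, assuming your 9-dimensional state is ordered per axis as [position, velocity, acceleration]; the noise magnitude sigma_a is an assumption you would have to tune:

    import numpy as np

    dt = 0.01  # the 100 Hz call rate suggested above

    # One axis of a constant-acceleration model:
    # state = [position, velocity, acceleration]
    F_axis = np.array([[1.0, dt, 0.5 * dt**2],
                       [0.0, 1.0, dt],
                       [0.0, 0.0, 1.0]])

    # Discrete white-noise-acceleration process noise for one axis;
    # sigma_a is an assumed acceleration-noise magnitude you must tune.
    sigma_a = 1.0
    G = np.array([[0.5 * dt**2], [dt], [1.0]])
    Q_axis = sigma_a**2 * (G @ G.T)

    # Full 9-state matrices, block-diagonal over the x, y, z axes.
    F = np.kron(np.eye(3), F_axis)
    Q = np.kron(np.eye(3), Q_axis)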
In each call without a measurement, execute the prediction step. As soon as a measurement arrives, do an update. So your call sequence would look like:
init()
predict()
predict()
predict()
predict()
update(sensor1)
predict()
update(sensor2)
update(sensor1)
predict()
predict()
update(sensor1)
predict()
and so on...
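In code, the scheduling could look roughly like the following sketch. The kf object, its predict()/update() methods, and the sensor names are hypothetical placeholders for your own filter implementation:

    import numpy as np

    def run_filter(kf, measurements, dt=0.01, t_end=10.0):
        # 'kf' is a hypothetical filter object exposing predict() and
        # update(z, R); 'measurements' is a time-sorted list of
        # (timestamp, sensor_name, z) tuples from both sensors.
        R = {
            "sensor1": np.eye(3) * 0.01,  # placeholder noise covariances,
            "sensor2": np.eye(3) * 0.05,  # see the R discussion below
        }
        i, t = 0, 0.0
        while t < t_end:
            kf.predict()  # prediction runs at the fixed base rate
            # consume every measurement whose time stamp has passed
            while i < len(measurements) and measurements[i][0] <= t:
                _, sensor, z = measurements[i]
                kf.update(z, R[sensor])
                i += 1
            t += dt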
To deal with the precision issue you could use a reference signal (the ground truth). Analyze the error in each sensor reading for each signal (x, y, z) compared to the reference. A Kalman filter can work well ONLY with readings whose error is normally distributed with zero mean. If you see some systematic offset, maybe you can get rid of it. From the observed error you can calculate the standard deviation (and the variance), so you can tell your filter how good the measurements are. That will be your R matrix.
If you don't have a reference, you can take some measurements while standing still in the same place. Your reference position would then be constant, and you could look at the dispersion of the readings.
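A quick sketch of estimating R that way; the file name is a placeholder for your logged static readings:

    import numpy as np

    # Hypothetical log of (N, 3) x/y/z readings taken while standing
    # still, so the true position is constant.
    readings = np.loadtxt("sensor1_static.csv", delimiter=",")

    residuals = readings - readings.mean(axis=0)  # removes a constant offset
    R_sensor1 = np.cov(residuals, rowvar=False)   # 3x3 measurement covariance

    print("per-axis standard deviation:", residuals.std(axis=0))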
Tune the elements of your Q matrix to describe the possible dynamics of your state elements. A smaller Q element for position tells the filter not to change it too fast, so the (possibly) poor precision of your sensors will be partially suppressed (think of a low-pass filter as intuition).
I hope this helps. Please correct me if I have misunderstood something.
It would be helpful to see a plot of your sensor readings (and if possible of the reference trajectory).
I have received-signal-level data which looks like the plot below:
From this signal, I would like to separate only the peaks. For instance, we can see that the signal level deteriorates from time step 47 and becomes worse around 53. I would like to separate this out from the original signal. I was wondering whether a wavelet transform could be a possible solution to my problem. Please share your thoughts if you have some other, better algorithms to solve this problem.
Your suggestion on this is highly appreciated.
You can certainly use wavelets, although I'm not sure it's necessary. If you are just trying to identify the time instants where these two peaks occur, they are distinct enough for direct identification. If you'd like a cleaner separation, you can pass the signal through a wavelet filter, identify the peaks in time and frequency, define a threshold for amplitude separation (but since there are other peaks following them, there's bound to be some mixing), and finally inverse-transform to get the filtered signal.
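If you want to try the wavelet route, a rough sketch with PyWavelets could look like this; the wavelet, the decomposition level, and which detail bands to keep are guesses you would have to tune to your sampling rate:

    import numpy as np
    import pywt

    def wavelet_separate(signal, wavelet="db4", level=4, keep=(1, 2)):
        # Decompose, zero all coefficient bands except the chosen detail
        # levels, and inverse-transform. Index 0 is the approximation;
        # indices 1..level run from the coarsest to the finest details.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        filtered = [c if i in keep else np.zeros_like(c)
                    for i, c in enumerate(coeffs)]
        return pywt.waverec(filtered, wavelet)[:len(signal)]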
I want to know if there is any algorithm to find a peak in a signal in Java (Android). I'm working on ECG signals, and I'm using a real-time algorithm to draw the signal: I draw each point directly as I receive it, so I don't have data for the points that follow.
The signal is like this
From past experience, I can suggest the following simplistic idea off the top of my head. I'm assuming you're looking for the big spike, right? Even if not, I think the process below will work; you just need to change your thresholds. Bear in mind, the idea below comes from experience but isn't tested!
Run the signal through a moving-average filter to smooth it out. (Critical!)
Find the discrete differential of this filtered signal.
Run the discrete differential through another moving average filter. (Also critical!)
At each zero-crossing in the smoothed differential signal, compare the point n (either the left or the right sample of the zero-crossing) to its equivalent in the filtered signal from step 1. If this point is greater than some predetermined threshold, n is your big spike.
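A batch-mode sketch of steps 1-4 in Python/NumPy (your real-time Java version would process sample by sample with ring buffers instead; the window size and threshold are placeholders to tune on real ECG data):

    import numpy as np

    def find_spike(x, win=5, threshold=0.5):
        kernel = np.ones(win) / win
        smoothed = np.convolve(x, kernel, mode="same")          # step 1
        diff = np.diff(smoothed)                                # step 2
        smoothed_diff = np.convolve(diff, kernel, mode="same")  # step 3
        # step 4: zero-crossings of the smoothed derivative, checked
        # against the filtered signal from step 1
        crossings = np.where(np.diff(np.sign(smoothed_diff)) != 0)[0]
        return [n for n in crossings if smoothed[n] > threshold]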
I hope this helps. Feel free to ask if you have any questions. I implemented a similar algorithm in C++; it might help to look at: https://github.com/sawbg/avda/blob/master/src/process.hpp.
Given two byte arrays of data captured from a microphone, how can I determine which one has more spikes in noise? I would assume there is an algorithm I can apply to the data, but I have no idea where to start.
Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room.
If it helps, I am using the Microsoft.Xna.Framework.Audio.Microphone class to capture the sound.
You can convert each sample (normalised to the range -1.0 to 1.0) into a decibel rating by applying the formula:
dB = 20 * log10(|sample value|)
To be honest, so long as you don't mind the occasional false positive, and your microphone is set up OK, you should have no problem telling the difference between a baby crying and ambient background noise, without going through the hassle of doing an FFT.
I'd recommend having a look at the source code for a noise gate, which does pretty much what you are after, with configurable attack times and thresholds.
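To make the dB idea concrete, here is a tiny untested sketch (in Python rather than your XNA/C# setup); the gate level and fraction are guesses to tune against real recordings:

    import math

    def to_db(sample):
        # sample is assumed normalised to -1.0..1.0; the floor avoids log(0)
        return 20.0 * math.log10(max(abs(sample), 1e-6))

    def is_loud(samples, gate_db=-20.0, min_fraction=0.25):
        # Crude gate: flag a buffer if enough of its samples exceed the
        # threshold; both parameters need tuning on real recordings.
        loud = sum(1 for s in samples if to_db(s) > gate_db)
        return loud / len(samples) >= min_fraction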
First use a Fast Fourier Transform to transform the signal into the frequency domain.
Then check if the signal in the typical "cry-frequencies" is significantly higher than the other amplitudes.
The preprocessor of the speex codec supports noise vs signal detection, but I don't know if you can get it to work with XNA.
Or, if you really want some kind of loudness measure, calculate the sum of squares of the amplitudes at the frequencies you're interested in (for example 50-20000 Hz). If the average of that over the last 30 seconds is significantly higher than the average over the last 10 minutes, or exceeds a certain absolute threshold, sound the alarm.
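For example, a sketch of the band-energy computation in Python/NumPy (the 50-20000 Hz example only makes sense if your sample rate is above 40 kHz, so the band is clipped to Nyquist):

    import numpy as np

    def band_energy(samples, sample_rate, lo=50.0, hi=20000.0):
        # Sum of squared spectral magnitudes between lo and hi,
        # clipped to the Nyquist frequency.
        hi = min(hi, sample_rate / 2.0)
        spectrum = np.fft.rfft(samples * np.hanning(len(samples)))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        mask = (freqs >= lo) & (freqs <= hi)
        return float(np.sum(np.abs(spectrum[mask]) ** 2))

The alarm logic above would then keep a short-term (30 s) and a long-term (10 min) running average of this value per buffer and compare the two.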
Louder at what point? The signal's average amplitude will tell you which one is louder on average, but that is a rather dumb, brute-force way to go about it. It may work for you in practice, though.
Getting down to it, I need to be able to determine when a baby is crying vs ambient noise in the room.
Ok, so, I'm just throwing out ideas here; I am by no means an expert on audio processing.
If you know your input, i.e., a baby crying (relatively loud with a high pitch) versus ambient noise (relatively quiet), you should be able to analyze the signal in terms of pitch (frequency) and amplitude (loudness). Of course, if during the recording someone drops some pots and pans onto the kitchen floor, that will be tough to discern.
As a first pass, I would simply traverse the signal, maintaining a running standard deviation of pitch and amplitude throughout, and then set a flag when those deviations jump beyond some threshold that you will have to define. When they come back down, you may be able to safely assume that you captured the baby's cry.
Again, just throwing you an idea here. You will have to see how it works in practice with actual data.
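Something like this rolling-deviation flag, as an untested sketch; the window length and the k threshold are arbitrary starting points:

    import numpy as np
    from collections import deque

    class DeviationFlag:
        # Rolling mean/std of one feature (e.g. a buffer's RMS amplitude);
        # flags when the current value jumps k standard deviations above
        # the rolling baseline.
        def __init__(self, window=100, k=3.0):
            self.history = deque(maxlen=window)
            self.k = k

        def update(self, value):
            flagged = False
            if len(self.history) >= 10:  # wait for a minimal baseline
                mean = np.mean(self.history)
                std = np.std(self.history)
                flagged = value > mean + self.k * max(std, 1e-9)
            self.history.append(value)
            return flagged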
I agree with #Ed Swangren; it will take a lot of playing with samples of data from a lot of sources. To me, it sounds like the trick will be to limit, or hopefully eliminate, false positives. My experience with babies is that they are much louder crying than the environment. So keep track of the average measurements (freq/amp/??) of the normal environment, and then classify how well the changes match the characteristics of a crying baby, which change from kid to kid, so you'll probably want a system that 'learns'. Best of luck.
Update: you might find this library useful: http://naudio.codeplex.com/
For example, you measure the data coming from some device; it could be the mass of an object moving on a bridge. Because it is moving, the mass reading will vibrate with some amplitude depending on the mass of the object. The bigger the mass, the bigger the vibrations.
Are there any methods for filtering such kind of noise from that data?
Maybe by using some formulas for vibrations? I have no idea what kind of formulas or algorithms (filters) can be used here. Please suggest anything.
EDIT 2:
Better picture, I just drew it for better understanding:
Not a very good picture. From that graph you can see that the frequency is the same every time, but the amplitude changes periodically. I see something like that when there are no objects on the moving road (conveyor belt): it vibrates near the zero value. When an object moves, there are the same waves with changing amplitude.
The graph suggests that there may be some force applied to the system which produces forced oscillations, so I am interested in removing that kind of noise. I do not know what force causes these oscillations. Soon I hope to get some data on the non-moving road, with and without an object on it, for comparison with the moving-road case.
What you have in your last plot is basically an amplitude-modulated oscillation coming from a function like:
f[x] := 10 * (4 + Sin[x]) * Sin[80 * x]
The constants have been chosen to match your plot (using just a rule of thumb).
The plot of this function is:
That isn't "noise" (although maybe some noise is there too), but it can be filtered easily.
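You can check the separation idea numerically: the magnitude of the analytic signal (via a Hilbert transform) recovers the slow 10 * (4 + sin x) modulation, separated from the fast carrier. A quick Python sketch using the model function above:

    import numpy as np
    from scipy.signal import hilbert

    x = np.linspace(0, 20, 4000)
    f = 10 * (4 + np.sin(x)) * np.sin(80 * x)  # the model function above

    # The magnitude of the analytic signal recovers the slowly varying
    # 10 * (4 + sin(x)) envelope.
    envelope = np.abs(hilbert(f))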
Let's see your data for the static and moving payloads ....
Edit
Based on your response to several comments, and based on my previous experience with weighing devices:
You are interfacing with the physical world, not just getting input from a mouse and keyboard. It is very important that you understand the device, how it works, and how it is designed.
You need a calibration procedure. You have to use several master weights to be sure that the device works properly and linearly over the whole scale, and that the static case is measured much more accurately than your dynamic requirements demand.
You won't be able to predict whether you can measure with several loads on the conveyor until you do some experiments and look very carefully at the resulting plots.
You need to be sure that a load placed anywhere on the conveyor shows the same reading, or at least you should be able to correlate reading and position.
As I said before, you need a lot of info, and it seems it is not available. I have always worked as a team with the engineers designing the device.
Don't hesitate to add more info ...
Have you tried filters with low-pass characteristics? There are different approaches to smoothing data (e.g. Savitzky-Golay, Gaussian, moving average), but often a simple N-point median filter is already sufficient.
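For example, with SciPy (the data file is a placeholder, and the kernel and window sizes need tuning to your vibration period):

    import numpy as np
    from scipy.signal import medfilt, savgol_filter

    noisy = np.loadtxt("readings.txt")  # placeholder for your measured data

    smoothed_median = medfilt(noisy, kernel_size=5)  # N-point median filter
    smoothed_savgol = savgol_filter(noisy, window_length=11, polyorder=3)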
It really depends on what you're after.
Take a look at this book:
The Scientist and Engineer's Guide to Digital Signal Processing
You can download it for free. In particular, check chapters 14 and 15.
If the frequency changes with mass and you're trying to measure mass, why not measure the frequency of the oscillations and use that as your primary measure?
Otherwise you need a tunable notch filter: figure out the frequency of the "noise" and tune the notch filter to that.
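A sketch with SciPy's notch design; the sample rate and noise frequency below are made-up values you would replace with your measured ones:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 1000.0     # assumed sample rate in Hz
    f_noise = 12.7  # assumed measured frequency of the oscillation

    noisy_signal = np.loadtxt("readings.txt")   # placeholder data file
    b, a = iirnotch(w0=f_noise, Q=30.0, fs=fs)  # Q sets the notch width
    clean = filtfilt(b, a, noisy_signal)        # zero-phase filtering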
Another book to try is Lyons' Understanding Digital Signal Processing.
In order to smooth the signal, I'd average the previous 2 * n samples, where n is the maximum expected wavelength (in samples) of the vibrations.
This should cause most of the noise to be eliminated.
If you have some idea of the range of frequencies, you could do a simple average, as long as the measurement period is sufficiently long to give you the level of accuracy you want to achieve. The more wavelengths' worth of data you average over, the smaller the error contributed by a partial wavelength.
I'd suggest first simulating/modeling this in software like Matlab.
Data you'll need to consider:
The expected range of vibration frequencies
The measurement accuracy you want to achieve
The expected range of mass you'll want to measure
The function mapping mass to vibration amplitude
You should be able to apply the same principles as noise-cancelling microphones: put two sensors out, then subtract the secondary sensor's (farther away from the good signal source) signal from the primary sensor's (closer to the good signal source) signal.
Obviously, this works best if the "noise" will reach both sensors fairly equally while the "signal" reaches the primary sensor much more strongly.
For things like sound, this is pretty easy to do in the sensor itself, which makes your software a lot easier and more performant. Depending on what you're measuring, this might be easier to do with multiple sets of hardware and doing the cancellation in software.
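As a software sketch of the subtraction (assuming both channels are already time-aligned; the file names are placeholders, and the least-squares gain is one simple way to scale the reference before subtracting):

    import numpy as np

    # Hypothetical time-aligned captures: 'primary' is near the signal
    # source, 'reference' is placed to pick up mostly the shared noise.
    primary = np.loadtxt("primary.txt")
    reference = np.loadtxt("reference.txt")

    # Least-squares gain so the reference best matches the noise
    # component in the primary channel, then subtract it out.
    g = np.dot(primary, reference) / np.dot(reference, reference)
    signal_estimate = primary - g * reference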
If you can characterize the frequency spectrum of the unwanted vibration noise, you might be able to synthesize a set of (near) minimum-phase notch or band-reject filters that let you acquire your desired signal at your desired S/N ratio with minimized latency or data-set size.
Filtering noisy digital signals is straightforward, as previous posters have noted, and there are lots of references. You have not, however, stated your objectives clearly, so we cannot point you in a good direction. Are you looking for a single measurement of a single object on a bridge? [Then see the other answers.]
Are you monitoring traffic on this bridge and weighing each entity as it passes by? Then you need to determine when entities are on the sensor and when they are not. Typically, as long as the sensor's noise floor is significantly lower than the signal you're measuring, this can be accomplished by simple thresholding.
Are you trying to measure the vibrations of the bridge caused by other vehicles? In that case you need either a more expensive sensor (if you're having problems doing this) or a clearer measurement objective.