I have some signals of varying lengths that are to be propagated over a distance. I have a Time Variable Gain (TVG) function that compensates for the transmission loss over the range of propagation. I am investigating the SNR of these signals when the TVG is applied, at different ranges: for example, at 10 m, 50 m, and so on. The noise is modeled as white noise, whose covariance matrix is simply the identity matrix (I).
My understanding is that the TVG would be applied to both the signal (the signals of different lengths) and the noise (the white noise) at the respective ranges, before we calculate the signal power and the expected noise power, and then take their ratio.
I am using the simple SNR formula given on Wikipedia:
SNR = s^H R_{vv} s
where s is the signal vector and R_{vv} is the noise covariance matrix.
How do I apply this TVG to the signal and to the noise covariance matrix in this formula to calculate the SNR?
The gain would be normalized and applied to the noise covariance matrix. While keeping the overall magnitude of the noise covariance matrix the same, the TVG is applied over the duration of the signal, using the particular portion of the TVG curve you are interested in. That means deciding at what range on the TVG curve you are interested, and for how long your signal is affected by the TVG.
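To make that concrete, here is a minimal sketch, assuming a real-valued signal, unit-variance white noise (R_vv = I), and a sampled TVG curve tvg[n] covering the portion of the curve at the range of interest; all names are illustrative:

#include <cstddef>
#include <vector>

// Apply the TVG element-wise: s'[n] = g[n] * s[n]. For unit-variance white
// noise, the gained noise covariance is diag(g[n]^2), so its trace is the
// total noise power and the ratio below is the post-TVG SNR.
double snrAfterTvg(const std::vector<double>& signal,
                   const std::vector<double>& tvg) {
    double sigPow = 0.0;   // power of the gained signal
    double noisePow = 0.0; // trace of the gained noise covariance
    for (std::size_t n = 0; n < signal.size(); ++n) {
        double gs = tvg[n] * signal[n];
        sigPow += gs * gs;
        noisePow += tvg[n] * tvg[n]; // g^2 * sigma^2, with sigma^2 = 1
    }
    return sigPow / noisePow;
}

To evaluate the SNR at 10 m, 50 m, and so on, you would slice tvg from the corresponding part of the gain curve, with the slice length equal to the signal length.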
I am analyzing a voltage output that I get from a SPICE simulator, and I want to quantize the time-sampled voltage data so that I can convert the given trapezoidal wave to a square wave.
I have tried differentiation as a method to detect when the data is at a flat level (one of the voltage levels) and when it is in transition, but since there are inherent glitches in the voltage-vs-time data I am not able to get the exact levels that I want. The waveform is extracted from the simulator, and I am doing all the analysis in C.
Voltage waveform with a glitch
There are a number of methods to deal with this kind of problem, but a nice solution is the following:
1. Convolve the signal with a Gaussian kernel, cancelling out the noise at the cost of making the transitions between the levels slower.
2. Due to the convolution, the transition will not be linear, but there will be a place in the transition where the curvature changes sign, which is the location of the original edge. To find these locations, compute the Laplacian, which is simply the second derivative in 1D, and look for places where the Laplacian crosses 0.
3. To avoid the undershoot and overshoot, take a buffer of a few samples away from the found edges and compute the mean, thus finding the height of the level.
Steps 1 and 2 can be combined if you convolve with the Laplacian of a Gaussian kernel, as in the sketch below. A very good explanation of this edge-detection algorithm, by Shree K. Nayar, is shown here.
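As a minimal illustration of steps 1 and 2 combined (the sigma, radius, and minSlope parameters are illustrative and need tuning to your data):

#include <cmath>
#include <cstddef>
#include <vector>

// 1-D Laplacian-of-Gaussian kernel (second derivative of a Gaussian);
// the overall scale is irrelevant because we only look for zero crossings.
std::vector<double> logKernel(double sigma, int radius) {
    std::vector<double> k(2 * radius + 1);
    for (int i = -radius; i <= radius; ++i) {
        double x2 = double(i) * i, s2 = sigma * sigma;
        k[i + radius] = (x2 / s2 - 1.0) / s2 * std::exp(-x2 / (2.0 * s2));
    }
    return k;
}

// Convolve with the LoG kernel, then report zero crossings whose slope is
// large enough: these are the estimated locations of the original edges.
std::vector<std::size_t> findEdges(const std::vector<double>& v,
                                   double sigma, int radius, double minSlope) {
    std::vector<double> k = logKernel(sigma, radius), r(v.size(), 0.0);
    for (std::size_t n = radius; n + radius < v.size(); ++n)
        for (int i = -radius; i <= radius; ++i)
            r[n] += v[n + i] * k[i + radius];
    std::vector<std::size_t> edges;
    for (std::size_t n = 1; n < r.size(); ++n)
        if ((r[n - 1] < 0.0) != (r[n] < 0.0) &&
            std::fabs(r[n] - r[n - 1]) > minSlope) // reject noise crossings
            edges.push_back(n);
    return edges;
}

Step 3 would then average a few samples midway between consecutive edges to recover the level heights for the square wave.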
Can anyone explain how to use the bilateral filter equation on an image? How is each coefficient in the equation applied to the array, and how does the filter leave the edges without smoothing them? Also, why is the term in (Ip - Iq) multiplied by Iq? Can anyone help me, please?
The filter computes a weighted sum of the pixel intensities. A normalization factor is always required in a weighted average, so that a constant signal keeps the same value.
The space factor makes sure that the filter value is influenced by nearby pixels only, with a smooth decrease of the weighting. The range factor makes sure that the filter value is influenced by pixels with close gray value only, with a smooth decrease of the weighting.
The idea behind this filter is to average the pixels belonging to the same homogeneous region as the center pixel, as if performing a local segmentation. The averaging performs smoothing/noise reduction, while restricting it to the same region avoids spoiling the edges by mixing in pixels of a very different intensity.
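A minimal sketch of the filter for a row-major grayscale image (names and border handling are illustrative):

#include <cmath>
#include <vector>

// Brute-force bilateral filter: each output pixel is a normalized,
// doubly-weighted average of its neighborhood.
std::vector<float> bilateral(const std::vector<float>& in,
                             int w, int h, float sigmaS, float sigmaR) {
    std::vector<float> out(in.size());
    int r = int(std::ceil(3.0f * sigmaS)); // spatial kernel radius
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float Ip = in[y * w + x], sum = 0.0f, norm = 0.0f;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int qx = x + dx, qy = y + dy;
                    if (qx < 0 || qx >= w || qy < 0 || qy >= h) continue;
                    float Iq = in[qy * w + qx];
                    // space factor: nearby pixels count more
                    float ws = std::exp(-(dx * dx + dy * dy)
                                        / (2.0f * sigmaS * sigmaS));
                    // range factor: this is where (Ip - Iq) enters, and
                    // this combined weight is what multiplies Iq
                    float d = Ip - Iq;
                    float wr = std::exp(-d * d / (2.0f * sigmaR * sigmaR));
                    sum += ws * wr * Iq;
                    norm += ws * wr; // normalization factor
                }
            out[y * w + x] = sum / norm;
        }
    return out;
}

Across an edge, (Ip - Iq) is large, so wr is nearly zero and pixels from the other side contribute almost nothing; that is exactly how the edge survives the smoothing.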
I implemented a bootstrap particle filter in C++ by reading a few papers, and I first implemented a 1D mouse tracker, which performed really well. I used a normal Gaussian for weighting in this example.
I then extended the algorithm to track a face using two features: local motion and an HSV 32-bin histogram. In this example my weighting function becomes the probability of motion x the probability of the histogram (is this correct?).
In case that is correct, I am confused about the resampling function. At the moment my resampling function is as follows:
For N = 50 particles:
1. Compute the CDF of the weights.
2. Generate a random number X (via a Gaussian).
3. Update the particle at index X.
4. Repeat for all N particles.
This is my resampling function at the moment. Note: in the second step I am using a random number drawn from a Gaussian distribution to get the index, while my weighting function is the probability of motion and histogram.
My question is: should I generate the random number using the probability of motion and histogram, or is a random number via a Gaussian OK?
In the SIR (Sequential Importance Resampling) particle filter, resampling aims to replicate particles that have gained high weight, while removing those with low weight.
So, when you have your particles weighted (typically with the likelihood you have used), one way to do resampling is to build the cumulative distribution of the weights, then generate a random number following a uniform distribution and pick the particle corresponding to that slot of the CDF. This way there is a higher probability of selecting a particle that has a larger weight.
Also, don't forget to add some noise after generating replicas of particles, otherwise your point-estimate might be biased for a period of time.
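A minimal sketch of this resampling step for a 1D state (the weights are assumed non-negative; the jitter magnitude is an illustrative choice):

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Multinomial resampling via the CDF of the particle weights.
std::vector<double> resample(const std::vector<double>& particles,
                             std::vector<double> weights,
                             std::mt19937& rng) {
    // Cumulative distribution of the weights (in place).
    std::partial_sum(weights.begin(), weights.end(), weights.begin());

    std::uniform_real_distribution<double> uni(0.0, weights.back());
    std::normal_distribution<double> jitter(0.0, 0.05); // roughening noise

    std::vector<double> out;
    out.reserve(particles.size());
    for (std::size_t i = 0; i < particles.size(); ++i) {
        double u = uni(rng); // uniform draw, not Gaussian
        // First CDF slot exceeding u: heavier particles occupy wider
        // slots and are therefore replicated more often.
        std::size_t j = std::lower_bound(weights.begin(), weights.end(), u)
                        - weights.begin();
        out.push_back(particles[j] + jitter(rng)); // avoid identical copies
    }
    return out;
}

So, for the question above: the weighting function (motion x histogram likelihood) determines the CDF, but the draw itself should be uniform, not Gaussian.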
So for this wind monitoring project I'm getting data from a couple of 3D sonic anemometers, specifically two R.M. Young 81000 units. The data output is digital, with a sampling frequency of 10 Hz over periods of 10 min. After all the pre-processing (coordinate rotation, trend removal, ...) I get three orthogonal time series of the turbulent data. Right now I'm using stationary data from 2 hours of measurements, with windows of 4096 points and 50% overlap, to obtain the frequency spectra in all three directions. After obtaining a spectrum I apply a logarithmic frequency-smoothing algorithm, which averages the spectrum over logarithmically spaced intervals.
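For reference, a minimal sketch of that logarithmic smoothing step (the names and the binsPerDecade parameter are illustrative):

#include <cmath>
#include <cstddef>
#include <vector>

// Average (freq, psd) points inside logarithmically spaced frequency bins.
void logSmooth(const std::vector<double>& freq,
               const std::vector<double>& psd,
               int binsPerDecade,
               std::vector<double>& fOut, std::vector<double>& pOut) {
    double f0 = freq.front() > 0.0 ? freq.front() : freq[1]; // skip DC
    double step = std::pow(10.0, 1.0 / binsPerDecade);
    for (double lo = f0; lo < freq.back(); lo *= step) {
        double hi = lo * step, fSum = 0.0, pSum = 0.0;
        int n = 0;
        for (std::size_t i = 0; i < freq.size(); ++i)
            if (freq[i] >= lo && freq[i] < hi) {
                fSum += freq[i]; pSum += psd[i]; ++n;
            }
        if (n > 0) { fOut.push_back(fSum / n); pOut.push_back(pSum / n); }
    }
}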
I have two questions:
The spectra I obtain from the measured data show a clear downward trend at the highest frequencies, as seen in the attached figure. I wonder if this loss of energy could have anything to do with an internal filter in the sonic anemometer, or with something else? Is there a way to compensate for this loss, or is it better just to consider the spectrum up to the "break frequency"?
http://i.stack.imgur.com/B11uP.png
When applying the curve-fitting algorithm to determine the integral length scales according to the von Karman equation, what is the correct procedure: curve fitting the original data, which gives more weight to the higher-frequency data points, or using the logarithmically frequency-smoothed data to approximate the von Karman equation, which gives equal weight to the data on a logarithmic scale? In some cases I obtain very different estimates of the integral length scales with the two approaches (e.g., original: Lu=113.16, Lv=42.68, Lw=9.23; frequency smoothed: Lu=148.60, Lv=30.91, Lw=14.13).
Curve fitting with logarithmic frequency smoothing and with the original data:
http://i.imgur.com/VL2cf.png
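For concreteness, this is roughly the kind of fit I mean: a sketch assuming the standard von Karman longitudinal form S(f) = 4 var (L/U) / (1 + 70.8 (f L / U)^2)^(5/6), with the weights vector switching between the two procedures (the names and the search range are illustrative):

#include <cmath>
#include <cstddef>
#include <vector>

// Grid-search fit of the integral length scale L against measured
// (freq, psd) points; uniform weights reproduce the raw fit, while
// weights taken from the log-smoothed spectrum equalize the decades.
double fitLengthScale(const std::vector<double>& freq,
                      const std::vector<double>& psd,
                      const std::vector<double>& weights,
                      double var, double U) {
    double bestL = 1.0, bestErr = 1e300;
    for (double L = 1.0; L <= 500.0; L += 0.5) {
        double err = 0.0;
        for (std::size_t i = 0; i < freq.size(); ++i) {
            double nL = freq[i] * L / U;
            double model = 4.0 * var * (L / U)
                           / std::pow(1.0 + 70.8 * nL * nL, 5.0 / 6.0);
            double d = std::log(psd[i]) - std::log(model); // log-space error
            err += weights[i] * d * d;
        }
        if (err < bestErr) { bestErr = err; bestL = L; }
    }
    return bestL;
}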
Let me know if something is not clear. I'm relatively new to this field, and I might be making some mistakes in my approach, so if you could give me some advice or tips it would be amazing.
I'm interested: how is the dual input modeled in a sensor fusion setup with a Kalman filter?
Say, for instance, that you have an accelerometer and a gyro and want to present the "horizon level", like in an airplane; there is a good demo of something like this here.
How do you actually harvest the two sensors' positive properties and minimize the negative ones?
Is this modeled in the observation model matrix (usually denoted by a capital H)?
Remark: This question was also asked without any answers at math.stackexchange.com
Usually, the sensor fusion problem is derived from Bayes' theorem. Essentially, your estimate (in this case the horizon level) will be a weighted sum of your sensor readings, which is characterized by the sensor model. For dual sensors, you have two common choices: model a two-sensor system and derive the Kalman gain for each sensor (using the system model as the predictor), or run two correction stages using different observation models. You should take a look at Bayesian predictors (a little more general than the Kalman filter), which are derived precisely by minimizing the variance of an estimate given two different information sources. If you have a weighted sum and minimize the variance of the sum for two sensors, then you get the Kalman gain.
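As a sketch of that last point: for two scalar, unbiased readings x_1 and x_2 of the same quantity, with independent noise variances sigma_1^2 and sigma_2^2, minimizing the variance of the weighted sum gives

x_hat = (sigma_2^2 x_1 + sigma_1^2 x_2) / (sigma_1^2 + sigma_2^2) = x_1 + K (x_2 - x_1), with K = sigma_1^2 / (sigma_1^2 + sigma_2^2)

which is exactly the form of a Kalman correction step, with K playing the role of the Kalman gain.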
The properties of the sensor can be "seen" in two parts of the filter. First, you have the error matrix for your observations. This is the matrix that represents the noise in the sensor's observations (it is assumed to be zero-mean Gaussian noise, which isn't too big an assumption, given that with calibration you can achieve zero-mean noise).
The other important matrix is the observation covariance matrix. This matrix gives you insight into how good the sensor is at giving you information (information meaning something "new" and not dependent on the other sensor's readings).
About "harvesting the good characteristics": what you should do is a good calibration and noise characterization of the sensors. The best way to get a Kalman filter to converge is to have a good noise model for your sensors, and that is 100% experimental. Try to determine the variance for your system (don't always trust datasheets).
Hope that helps a bit.
The gyro measures the rate of angle change (e.g., in radians per second), while from the accelerometer reading you can calculate the angle itself. Here is a simple way of combining these measurements:
At every gyro reading received:
angle_radians+=gyro_reading_radians_per_sec * seconds_since_last_gyro_reading
At every accelerometer reading received:
angle_radians+=0.02 * (angle_radians_from_accelerometer - angle_radians)
The 0.02 constant is for tuning - it selects the tradeoff between noise rejection and responsiveness (you can't have both at the same time). It also depends on the accuracy of both sensors, and the time intervals at which new readings are received.
These two lines of code implement a simple 1-dimensional (scalar) Kalman filter. It assumes that
the gyro has very low noise compared to the accelerometer (true of most consumer-grade sensors), so we do not model gyro noise at all, but instead use the gyro in the state transition model (usually denoted by F);
accelerometer readings are received at generally regular time intervals, and the accelerometer noise level (usually R) is constant;
angle_radians has been initialised with an initial estimate (e.g., by averaging angle_radians_from_accelerometer over some time);
therefore the estimate covariance (P) and the optimal Kalman gain (K) are also constant, which means we do not need to keep the estimate covariance in a variable at all.
As you see, this approach is simplified. If the above assumptions are not met, you should learn some Kalman filter theory, and modify the code accordingly.
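For reference, a minimal self-contained version of those two update rules (the atan2 mapping from accelerometer axes to an angle is an illustrative assumption; use whichever axes match your mounting):

#include <cmath>

// Scalar constant-gain filter combining gyro integration with
// accelerometer correction, as in the two lines above.
struct AngleEstimator {
    double angleRad = 0.0; // initialise from averaged accelerometer readings
    double gain = 0.02;    // tuning: noise rejection vs. responsiveness

    // The gyro gives the rate of change: integrate it (state transition).
    void onGyro(double rateRadPerSec, double dtSec) {
        angleRad += rateRadPerSec * dtSec;
    }

    // The accelerometer gives an absolute but noisy angle: nudge the
    // estimate toward it with a constant gain (the constant Kalman gain K).
    void onAccel(double ax, double ay) {
        double measured = std::atan2(ay, ax); // angle from the gravity vector
        angleRad += gain * (measured - angleRad);
    }
};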
The horizon line is G' * (u, v, f) = 0, where G is the gravity vector, u and v are image-centred coordinates, and f is the focal length. Now the pros and cons of the sensors: the gyro is super fast and accurate but drifts, while the accelerometer is less accurate but (if calibrated) has zero bias and doesn't drift (given no acceleration except gravity). They measure different things: the accelerometer measures acceleration, and thus orientation relative to the gravity vector, while the gyro measures rotation speed, and thus the change in orientation. To convert the gyro output to orientation one has to integrate its values (thankfully it can be sampled at a high rate, like 100-200 Hz). Thus a Kalman filter, which is supposed to be linear, is not directly applicable to the gyro. For now we can simply treat sensor fusion as a weighted sum of readings and predictions.
You can combine the two readings - the accelerometer angle and the integrated gyro - with the model prediction using weights that are inversely proportional to the data variances, as in the sketch below. You will also have to use a compass occasionally, since the accelerometer doesn't tell you much about the azimuth, but I guess that is irrelevant for the calculation of a horizon line. The system should be responsive and accurate, and for this purpose, whenever the orientation changes fast the weights for the gyro should be large; when the system settles down and rotation stops, the weights for the accelerometer go up, allowing more integration of the zero-bias readings and killing the drift from the gyro.
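A minimal sketch of that weighting scheme (how you estimate the per-sensor variances, e.g. from short sliding windows, is up to you; the names are illustrative):

// Inverse-variance weighted fusion of two angle estimates: during fast
// rotation the gyro's variance estimate is relatively small and it
// dominates; at rest the accelerometer's variance shrinks and its
// zero-bias readings pull the gyro drift back out.
double fuseAngles(double gyroAngle, double gyroVar,
                  double accelAngle, double accelVar) {
    double wg = 1.0 / gyroVar;
    double wa = 1.0 / accelVar;
    return (wg * gyroAngle + wa * accelAngle) / (wg + wa);
}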