I want to calculate the damping ratio of the following transfer function
(110s + 5.5e8) / (3.142e-7 s^2 + 0.55s + 2.75e6)
But I am not sure how to convert it into the standard form.
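For a second-order denominator a*s^2 + b*s + c, dividing through by a gives the standard form s^2 + 2*zeta*wn*s + wn^2, so wn = sqrt(c/a) and zeta = b/(2*a*wn). A minimal MATLAB sketch, assuming the denominator above reads 3.142e-7 s^2 + 0.55s + 2.75e6 and that the Control System Toolbox is available:
den = [3.142e-7 0.55 2.75e6];   % assumed coefficients a, b, c
a = den(1); b = den(2); c = den(3);
wn   = sqrt(c/a);               % natural frequency
zeta = b / (2*a*wn);            % damping ratio
% With the Control System Toolbox, damp() reports the same quantities:
G = tf([110 5.5e8], den);
damp(G)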
I have this image:
I want to calculate the SNR of it. For this I used the following code:
img=imread('noicy.JPG');
img=double(img(:));
ima=max(img(:));
imi=min(img(:));
ims=std(img(:));
snr=20*log10((ima-imi)./ims)
Is that the correct code to calculate SNR?
The definition of SNR can be found in standard references. Both the standard and the industry definition can be used: 10*log10(x) and 20*log10(x), respectively.
Now, the signal is equal to the mean of the pixel values (mean(img(:))), and the noise is the standard deviation of the pixel values (std(img(:))).
You may use either the plain ratio, or SNR = 10*log10(signal/noise) to express the result in decibels.
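As a minimal MATLAB sketch of that estimate (reusing the question's file name 'noicy.JPG'; whether 10*log10 or 20*log10 applies depends on which convention you follow):
img = double(imread('noicy.JPG'));  % load and convert to double precision
img = img(:);                       % flatten to a vector of pixel values
signal = mean(img);                 % signal: mean pixel value
noise  = std(img);                  % noise: standard deviation of pixel values
snr_ratio = signal / noise;         % plain ratio
snr_db    = 10 * log10(snr_ratio)   % in decibels (20*log10 for the industry convention)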
I have some data set where each object has a Value and Price. I want to apply Gaussian Blur to their Price using their Value. Since my data has only 1 component to use in blurring, I am trying to apply 1D Gaussian blur.
My code does this:
totalPrice = 0;
totalValue = 0;
for each object.OtherObjectsWithinPriceRange():
    totalPrice += price;
    totalValue += Math.Exp(-value*value);
price = totalPrice / totalValue;
I see good results, but the 1D Gaussian blur algorithms I see online seem to use standard deviations, sigma, PI, etc. Do I need them, or are they strictly for 2D Gaussian blurs? They combine these 1D blur passes as vertical and horizontal, so they are still accounting for 2D.
Also, I display the results as colors, but the white areas come out a little over 1 (white). How can I normalize this? Should I just clamp the values to 1? That's why I am wondering whether I am using the correct formula.
Your code applies some sort of blur, though definitely not a Gaussian one. A Gaussian blur would look something like
kindaSigma = 1;
priceBlurred = object.price;
for each object.OtherObjectsWithinPriceRange():
    priceBlurred += price * Math.Exp(-value*value/(kindaSigma*kindaSigma));
and that is only assuming that value is proportional to a "distance" between the object and the other objects within the price range, whatever this "distance" means in your application.
To your questions.
2D Gaussian blur is completely equivalent to a combination of vertical and horizontal 1D Gaussian blurs done one after another. That's how the 2D Gaussian blur is usually implemented in practice.
You don't need PI or sigma as a multiplicative factor in front of the Gaussian: such factors merely scale the image and can safely be ignored.
The sigma (standard deviation) under the exponent has a major impact on the result, but it is not possible for me to tell you if you need it or not. It depends on your application.
Want more blur: use larger kindaSigma in the snippet above.
Want less blur: use smaller kindaSigma.
When kindaSigma is too small, you won't notice any blur at all. When kindaSigma is too large, the Gaussian blur effectively transforms itself into a moving average filter.
Play with it and choose what you need.
I am not sure I understand your normalization question. In image processing it is common to store each color component (R,G,B) as an unsigned char, so black is represented by (0,0,0) and white by (255,255,255). Of course, you are free to choose a different representation and take white as 1. But keep in mind that for visualization packages that use the standard 8-bit representation, a value of 1 means an almost black color, so you will likely need to renormalize your image before display.
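As a sketch of a properly normalized Gaussian-weighted average in MATLAB (the prices and values vectors here are made up for illustration): dividing by the sum of the weights keeps the result within the range of the inputs, which removes the need to clamp at 1.
prices = [1.0 1.2 0.9 1.5 1.1];  % hypothetical neighbor prices
values = [0.0 0.5 1.0 1.5 2.0];  % hypothetical "distance" of each neighbor
kindaSigma = 1;
weights = exp(-values.^2 / kindaSigma^2);              % unnormalized Gaussian weights
priceBlurred = sum(weights .* prices) / sum(weights);  % weighted average stays in range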
I am studying inverse filtering and was trying to code it, so I sought out some references from the net. Everyone considers the optical transfer function, which is nowhere to be seen in the Gonzalez book I am referring to.
% Inverse_Filter_Demo-
clc
clear all
close all
original_image=imread('cameraman.jpg'); %loading the original (un-blurred) image
original_image=double(original_image);
%The blur function (PSF- Point Spread Function) that will be added to the original image
PSF=fspecial('motion',20,45);
%Adding blur to the original image
degraded_image = imfilter(original_image,PSF,'circular','conv');
OTF= psf2otf(PSF,[size(degraded_image,1) size(degraded_image,2)]);%Getting OTF from PSF
Inverse_Filter=conj(OTF)./((abs(OTF)).^2); %The inverse filter
%Performing the Fourier transform of the degraded image
FT_degraded_image=fftn(degraded_image);
%Performing the restoration itself
restored_image=abs(ifftn(FT_degraded_image.*Inverse_Filter));
%Presenting the restoration results:
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(original_image,[0 255]);
truesize;
title('Original image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(degraded_image,[0 255]);
truesize;
title('Degraded image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(restored_image,[0 255]);
truesize;
title('Restoration of the degraded image (using Inverse Filter)');
Your question is unclear, and probably more appropriate for dsp.stackexchange.com. However, if what you are asking is "what is the optical transfer function?", then the Wikipedia article on OTFs is a fine place to start.
The simplest way to think about it is that the optical transfer function (OTF) is the Fourier transform of the point spread function (PSF). Usually the PSF is a filter (convolution kernel) that describes how a single pinhole-type light source would get smeared into the actual image produced by some device.
The OTF is just the amplitude/phase representation of that smearing process. It is the filter that the image's Fourier transform is multiplied by in the frequency domain to produce the smeared output image's Fourier transform (instead of convolving with the PSF, which is what you do in the spatial domain). Applying the inverse Fourier transform after applying the OTF should give you the actual image the device would produce.
For mathematical convenience, and sometimes for processing efficiency, it can be more expedient to work with the OTF instead of the spatial-domain PSF. This is why you'll see some algorithms and textbooks describe their methods in terms of the OTF rather than the PSF.
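A minimal MATLAB sketch of that equivalence (assuming the Image Processing Toolbox; 'cameraman.tif' is a built-in test image): circular convolution with the PSF in the spatial domain should match multiplication by the OTF in the frequency domain.
img = double(imread('cameraman.tif'));
PSF = fspecial('motion', 20, 45);                        % same blur kernel as in the question
blurredSpatial = imfilter(img, PSF, 'circular', 'conv'); % convolution with the PSF
OTF = psf2otf(PSF, size(img));                           % Fourier transform of the PSF
blurredFreq = real(ifft2(fft2(img) .* OTF));             % multiplication by the OTF
max(abs(blurredSpatial(:) - blurredFreq(:)))             % should be near zero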
I have an image. I want to resize it to double the original size, filling in the new pixels by interpolation. I need to specify which type of interpolation I want to use.
I see the imresize function, which has an option for 'method'. The problem is, there are only 3 options: nearest, bilinear, bicubic. Bilinear and bicubic are averaging/mean methods, but is there any way to set the neighborhood size / weighting?
The main problem is, I need to do it with a 'median' interpolation method, instead of mean. How can I tell it to use this method?
The way IMRESIZE implements interpolation is to calculate, for each pixel of the output image (inverse mapping), the indices of the input pixels that will be involved in the interpolation, along with the contributing weights.
The neighborhood and the weights are determined by the type of interpolation kernel used, which, as @Albert points out, can be passed to the IMRESIZE function (the 'Method' property can accept {f,w}, a cell array with the kernel function and the kernel width).
These two components are then used to compute a linear combination of the involved input pixels to fill in each output pixel. This process is performed along each dimension separately, one at a time (vertically, then horizontally).
Now the problem for you is that you can never obtain a median by a linear combination, because the median is a non-linear ordering filter. So your only option is to write your own implementation...
Amro is right that the median filter cannot be computed as a weighted response. But MATLAB has a specific function for the median filter: medfilt2.
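As a hedged sketch of one way to combine the two (not a standard "median interpolation", just an approximation): upsample with nearest-neighbor so no new values are invented, then take the median of a small neighborhood with medfilt2.
img = imread('cameraman.tif');       % built-in test image, standing in for yours
up  = imresize(img, 2, 'nearest');   % double the size without inventing values
out = medfilt2(up, [3 3]);           % each pixel becomes the median of its 3x3 neighborhood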
imresize has a third way of passing the interpolation method: a "Two-element Cell Array Specifying Interpolation Kernel". You can read more about it in MATLAB's documentation.
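For example (a sketch; the triangle kernel here is just an illustration, and reproduces bilinear interpolation):
tri = @(x) max(1 - abs(x), 0);    % custom kernel: weight as a function of distance
img = imread('cameraman.tif');
out = imresize(img, 2, {tri, 2}); % {kernel function handle, kernel width}
Note that whatever kernel you pass, the result is still a linear combination of input pixels, so this mechanism cannot produce a median.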