I have an image. I am calculating the I, u, v components for it:
I = (R+G+B)/3
u = R-G
v = G-B
Now I want to find two-dimensional histograms over the chromatic information (u, v).
Thanks in advance.
You can use sparse to create a sparse 2D matrix that counts the u-v entries.
Note that you'll have to adjust the indices in the u-v dimensions to be in the range 1...|u| and 1...|v| (and not negative or fractional).
[uu, ~, ui] = unique( round(u(:)) ); % third output maps each pixel to its bin index in uu
[vv, ~, vi] = unique( round(v(:)) );
twoDhist = sparse( ui, vi, 1, numel(uu), numel(vv) ); % accumulate counts per (u,v) bin
twoDhist = spfun( @(x) x/numel(ui), twoDhist ); % normalize hist to sum to 1
figure;
imagesc( vv, uu, twoDhist ); colormap jet; colorbar; axis image
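If your MATLAB version has histcounts2 (R2015b or newer), the same 2D histogram can be built without adjusting indices. A minimal sketch, assuming peppers.png as a stand-in RGB image and 64 bins per dimension:
img = im2double(imread('peppers.png')); % any RGB test image
R = img(:,:,1); G = img(:,:,2); B = img(:,:,3);
u = R - G; v = G - B;
[counts, uEdges, vEdges] = histcounts2( u(:), v(:), 64, 'Normalization', 'probability' ); % sums to 1
figure;
imagesc( vEdges, uEdges, counts ); colormap jet; colorbar; axis image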
I am trying to solve a problem which I may struggle to describe, so I will attempt to describe it with the aid of the following picture (please bear with me!):
I have two matrices which are defined on different coordinate spaces (u,v) for matrix A and (x,y) for matrix B. They have different grid sizes and different numbers of pixels. My goal is to apply a scaling factor S to the matrix A, and then to simply add it to matrix B. (For context, this is an optical imaging problem, where matrix A is located at an object plane, matrix B is located at an image plane, and S is the magnification).
So, I would like to create a new matrix C which is the equivalent of A but brought into the new coordinates (x,y). Matrix C should have the same number of rows and columns as B.
A minimal example of A and B is shown below, where the red dashed lines on the right illustrate the effective physical regions occupied by matrix A's pixels:
This is produced by the following code:
%%% Inputs for matrix A %%%
M = 4; % num columns in matrix A
N = 4; % num rows in matrix A
du = 13; % horizontal size of a pixel in matrix A [mm]
dv = 13; % vertical size of a pixel in matrix A [mm]
%%% Set up matrix A %%%
Lu = (M-1)*du; % physical hor. coord. of centre of last pixel [mm]
Lv = (N-1)*dv; % physical ver. coord. of centre of last pixel [mm]
u = -Lu/2:du:Lu/2; % hor. coordinates for matrix A [mm]
v = -Lv/2:dv:Lv/2; % ver. coordinates for matrix A [mm]
A = zeros(N,M);
A(1,1) = 1; % Set a few values to 1 for testing
A(2,3) = 1;
A(3,4) = 1;
%%% Inputs for matrix B %%%
dx = 0.1; % grid step in matrix B [mm]
dy = 0.1; % grid step in matrix B [mm]
Lx = 6; % physical hor. coord. of centre of last pixel [mm]
Ly = 6; % physical ver. coord. of centre of last pixel [mm]
%%% Set up matrix B %%%
x = -Lx/2:dx:Lx/2;
y = -Ly/2:dy:Ly/2;
B = rand(length(y),length(x));
figure('color','w');
subplot(1,2,1);imagesc(u, v, A); axis equal tight;
subplot(1,2,2);imagesc(x, y, B); axis equal tight;
S = 1/20; % scale factor from matrix A's coordinates to matrix B's
% C = ?
In this example, I have set the pixel size of matrix A to 13 mm and the scaling factor to 1/20. This means that in B's coordinates each of A's pixels should be 13/20 = 0.65 mm. This is bigger than the grid step dx = 0.1 mm, so after mapping each pixel should span multiple grid points. Any region outside the total extent of matrix A should be padded with zeros.
Is there a simple way (or built-in function) which would quickly generate matrix C in Matlab (ideally without using loops over each pixel, or interpolation)?
I can simply scale the coordinates, which matches the physical dimensions, but the matrices still have different numbers of rows and columns:
u_scaled = u*S;
v_scaled = v*S;
subplot(1,3,3);imagesc(u_scaled, v_scaled, A); axis equal tight;
You can use interp2 or griddedInterpolant:
C = interp2(u, v, A, x * 20, y.' * 20, 'nearest', 0);
With a small modification, a better result can be produced:
AA = padarray(A, [1 1], 0);
uu = [-du+u(1) u u(end)+du];
vv = [-dv+v(1) v v(end)+dv];
C = interp2(uu, vv, AA, x * 20, y.' * 20, 'nearest', 0);
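The answer also mentions griddedInterpolant; a minimal sketch of that route, assuming the padded AA, uu, vv from above and the scale factor S from the question (query points outside A's extent return NaN and are then zeroed):
F = griddedInterpolant( {vv, uu}, AA, 'nearest', 'none' ); % rows of AA follow vv, columns follow uu
C = F( {y / S, x / S} ); % evaluate on B's grid, expressed in A's coordinates
C(isnan(C)) = 0; % zero-pad everything outside A's extent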
clc
clear
a = imread('004_1.bmp');
I2 = imcrop(a,[80 17 101 180]); % crop the region of interest
[i,j] = size(I2);
x_hist = sum(I2,1);    % column-wise intensity profile
y_hist = (sum(I2,2))'; % row-wise intensity profile
x = 1:j; y = 1:i;
centx = sum(x.*x_hist)/sum(x_hist) % intensity-weighted centroid (x)
centy = sum(y.*y_hist)/sum(y_hist) % intensity-weighted centroid (y)
BW = edge(I2,'Canny',0.329);
bw2 = imcomplement(BW); % edges become 0, background becomes 1
circle = int32([centx,centy,40]); % circle centred on the centroid, radius 40
shapeInserter = vision.ShapeInserter('Fill',false);
release(shapeInserter);
set(shapeInserter,'Shape','Circles');
K = step(shapeInserter,bw2,circle);
figure, imshow(K)
I have this program and I want to find the values at the intersection between the circle and the binary image. Does anyone know how to find them?
You can use find to obtain the indices of the desired pixels as follows:
bwCircle = step(shapeInserter,true(size(bw2)),circle); % construct binary image of circle only
[i, j] = find ((bw2 | bwCircle) == 0); % find the indices of the intersection between the binary image and the circle
figure
imshow(bw2 & bwCircle) % plot the combination of both images
hold on
plot(j, i, 'r*') % plot the intersection points
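If the Computer Vision toolbox is not available, the circle outline can also be built analytically; a minimal sketch, assuming BW, centx and centy from the question and the radius 40:
[cols, rows] = meshgrid(1:size(BW,2), 1:size(BW,1));
r = sqrt((cols - centx).^2 + (rows - centy).^2); % distance of every pixel from the centroid
circleMask = abs(r - 40) < 0.5;  % roughly one-pixel-wide circle outline
intersection = BW & circleMask;  % edge pixels that lie on the circle
[i, j] = find(intersection);
figure, imshow(BW), hold on, plot(j, i, 'r*')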
I have to use an inverse filter to remove the blurring from an image (not shown here).
Unfortunately, I have to figure out the transfer function H of the imaging system used to capture the image; it should be Gaussian. So, I should determine the approximate width of the Gaussian by trying different Gaussian widths in an inverse filter and judging which resulting image looks "best".
The best result will be optimally sharp, i.e. edges will look sharp but will not have visible ringing.
I tried 3 approaches:
1. I created an N-by-N transfer function (N odd, for simplicity) by building an N-by-N grid and applying the Gaussian function to it. After that, I pad this transfer function with zeros so that it has the same size as the original image. However, after applying the filter to the original image, I just see noise (too many artifacts).
2. I created a transfer function with the same size as the original image, by building a grid of that size. If sigma is small, the magnitude of the PSF's FFT is wide; otherwise it gets thinner. With a small sigma the image becomes even more blurred, but with a very high sigma value I get back the same image (not better at all).
3. I used the fspecial function, playing with sigma and the kernel size. But I still do not get anything sharper than the original blurred image.
Any ideas?
Here is the code used for creating the transfer function in Approach 1:
%Create Gaussian Filter
function h = transfer_function(N, sigma, I) %N is the dimension of the kernel
%create a 2D-grid that is the same size as the Gaussian filter matrix
grid = -floor(N/2) : floor(N/2);
[x, y] = meshgrid(grid, grid);
arg = -(x.*x + y.*y)/(2*sigma*sigma);
h = exp(arg); %gaussian 2D-function
kernel = h/sum(h(:)); %Normalize so that total weight equals 1
[rows,cols] = size(I);
add_zeros_w = (rows - N)/2;
add_zeros_h = (cols - N)/2;
h = padarray(kernel,[add_zeros_w add_zeros_h],0,'both'); % h = kernel_final_matrix
end
And this is the code for every approach:
I = imread('lena_blur.jpg');
I1 = rgb2gray(I);
figure(1),
I1 = double(I1);
%---------------Approach 1
% N = 5; %Dimension Assume is an odd number
% sigma = 20; %The bigger number, the thinner the PSF in FREQ
% H = transfer_function(N, sigma, I1);
%I1=I1(2:end,2:end); %To simplify operations
imagesc(I1); colormap('gray'); title('Original Blurred Image')
I_fft = fftshift(fft2(I1)); %Shift the image in Fourier domain to let its DC part in the center of the image
% %FILTER-----------Approach 2---------------
% N = 5; %Dimension Assume is an odd number
% sigma = 20; %The bigger number, the thinner the PSF in FREQ
%
%
% [x,y] = meshgrid(-size(I,2)/2:size(I,2)/2-1, -size(I,1)/2:size(I,1)/2-1);
% H = exp(-(x.^2+y.^2)*sigma/2);
% %// Normalize so that total area (sum of all weights) is 1
% H = H /sum(H(:));
%
% %Avoid zero freqs
% for i = 1:size(I,2) %Cols
% for j = 1:size(I,1) %Rows
% if (H(i,j) == 0)
% H(i,j) = 1e-8;
% end
% end
% end
%
% [rows columns z] = size(I);
% G_filter_fft = fft2(H,rows,columns);
%FILTER---------------------------------
%Filter--------- Aproach 3------------
N = 21; %Dimension Assume is an odd number
sigma = 1.25; %The bigger number, the thinner the PSF in FREQ
H = fspecial('gaussian',N,sigma)
[rows columns z] = size(I);
G_filter_fft = fft2(H,rows,columns);
%Filter--------- Aproach 3------------
%DISPLAY FFT PSF MAGNITUDE
figure(2),
imshow(fftshift(abs(G_filter_fft)),[]); title('FFT PSF magnitude 2D');
% Yest = Y_blurred/Gaussian_Filter
I_restoration_fft = I_fft./G_filter_fft;
I_restoration = (ifft2(I_restoration_fft));
I_restoration = abs(I_restoration);
I_fft = abs(I_fft);
% Display of Frequency domain (To compare with the slides)
figure(3),
subplot(1,3,1);
imagesc(I_fft);colormap('gray');title('|DFT Blurred Image|')
subplot(1,3,2)
imshow(log(fftshift(abs(G_filter_fft))+1),[]) ;title('| Log DFT Point Spread Function + 1|');
subplot(1,3,3)
imagesc(abs(I_restoration_fft));colormap('gray'); title('|DFT Deblurred|')
% imshow(log(I_restoration+1),[])
%Display PSF FFT in 3D
figure(4)
hf_abs = abs(G_filter_fft);
%270x270
surf([-134:135]/135,[-134:135]/135,fftshift(hf_abs));
% surf([-134:134]/134,[-134:134]/134,fftshift(hf_abs));
shading interp, camlight, colormap jet
xlabel('PSF FFT magnitude')
%Display Result (it should be the de-blurred image)
figure(5),
%imshow(fftshift(I_restoration));
imagesc(I_restoration);colormap('gray'); title('Deblurred Image')
%Pseudo Inverse restoration
% cam_pinv = real(ifft2((abs(G_filter_fft) > 0.1).*I_fft./G_filter_fft));
% imshow(fftshift(cam_pinv));
% xlabel('pseudo-inverse restoration')
A possible solution is deconvwnr. I will first show its performance starting from an undistorted lena image, so that I know the Gaussian blurring function exactly. Note that setting estimated_nsr to zero will destroy the performance completely due to quantisation noise.
I_ori = imread('lenaTest3.jpg'); % Download an original undistorted lena file
N = 19;
sigma = 5;
H = fspecial('gaussian',N,sigma);
estimated_nsr = 0.05;
I = imfilter(I_ori, H);
wnr3 = deconvwnr(I, H, estimated_nsr);
figure
subplot(1, 4, 1);
imshow(I_ori)
subplot(1, 4, 2);
imshow(I)
subplot(1, 4, 3);
imshow(wnr3)
title('Restoration of Blurred, Noisy Image Using Estimated NSR');
subplot(1, 4, 4);
imshow(H, []);
The best parameters I found for your problem, by trial and error, are:
N = 19;
sigma = 2;
H = fspecial('gaussian',N,sigma)
estimated_nsr = 0.05;
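A minimal sketch of applying these parameters to the blurred image from the question (assuming lena_blur.jpg is the image to restore):
I_blur = rgb2gray(imread('lena_blur.jpg'));
H = fspecial('gaussian', 19, 2);
restored = deconvwnr(I_blur, H, 0.05);
figure, imshowpair(I_blur, restored, 'montage')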
EDIT: calculating the blurring filter that was actually used
If you download an undistorted lena image and take its FFT (I_original_fft), you can calculate the blurring filter that was used as follows:
G_filter_fft = I_fft./I_original_fft;
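A minimal sketch of that idea, assuming lena_blur.jpg and an undistorted grayscale lenaTest3.jpg of the same size are both available:
I_blur = double(rgb2gray(imread('lena_blur.jpg')));
I_orig = double(imread('lenaTest3.jpg'));
I_fft = fft2(I_blur);
I_original_fft = fft2(I_orig);
G_filter_fft = I_fft ./ (I_original_fft + eps); % eps guards against division by zero
figure, imshow(log(1 + abs(fftshift(G_filter_fft))), []), title('|H| (log scale)')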
So I want to measure the vertical edges of an image, to use later as a depth cue for 2D to 3D conversion.
To do so, I have to compute the average horizontal gradient value for each block, which measures the vertical edges, as follows:
$\bar{g}(x,y) = \frac{1}{N} \sum_{(x',y') \in \Omega(x,y)} g(x',y')$
Where:
g(x',y') is the horizontal gradient at pixel location (x',y'),
Ω(x,y) is the neighborhood of the pixel location (x,y),
and N is the number of pixels in Ω(x,y).
So here is what I did in MATLAB:
I = im2double(imread('landscape.jpg'));
% convert RGB to gray
gI = rgb2gray(I);
[nrow, ncol] = size(gI);
% divide the image into 4-by-4 blocks
gI = mat2tiles(gI,[4,4]); % mat2tiles (File Exchange) splits the matrix into a cell array of 4x4 tiles
N = 4*4; % block size
% For each block, compute the horizontal gradient
gI = reshape([gI{:}],4*4, []);
mask = fspecial('sobel');
g = imfilter(gI, mask);
g_bar = g./N;
g_bar = reshape(g_bar,nrow, ncol);
I'm new to MATLAB, so I'm not sure if my code expresses the equation in the right way.
Can you please let me know if you think it is correct, as I'm not sure how to test the output?
There's no need to decompose your image into 4 x 4 blocks. The horizontal gradient can be computed with a Sobel or Prewitt filter, which is 3 x 3 and can be passed directly to imfilter. imfilter performs the 2D convolution / filtering with a specified mask / kernel for you, so tiling is not necessary. As such, you can just use imfilter with the mask defined through fspecial, and define N = 9. Therefore:
I = im2double(imread('landscape.jpg'));
% convert RGB to gray
gI = rgb2gray(I);
N = 9;
mask = fspecial('sobel');
g = imfilter(gI, mask);
g_bar = g./N;
From experience, increasing the size of your gradient mask won't give you much better results. You want to ensure that the mask is as small as possible to capture as many local changes as possible.
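If you do want the literal neighbourhood average from the formula, you can take a local mean of the gradient with a second imfilter pass. A minimal sketch, assuming a hypothetical 4-by-4 neighbourhood and using the transposed Sobel mask (which responds to vertical edges, i.e. the horizontal gradient):
I = im2double(imread('landscape.jpg'));
gI = rgb2gray(I);
mask = fspecial('sobel')'; % transposed Sobel -> horizontal gradient / vertical edges
g = imfilter(gI, mask);
g_bar = imfilter(abs(g), ones(4)/16); % mean of |g| over each 4x4 neighbourhood
figure, imagesc(g_bar), colormap gray, axis image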
I am plotting a 7x7 pixel 'image' in MATLAB, using the imagesc command:
imagesc(conf_matrix, [0 1]);
This represents a confusion matrix, between seven different objects. I have a thumbnail picture of each of the seven objects that I would like to use as the axes tick labels. Is there an easy way to do this?
I don't know an easy way. The axes property XTickLabel, which determines the labels, can only contain strings.
If you want a not-so-easy way, you could do something in the spirit of the following incomplete code (incomplete in the sense that it is not a full solution), which creates one label:
h = imagesc(rand(7,7));
axh = gca;
figh = gcf;
xticks = get(gca,'xtick');
yticks = get(gca,'ytick');
set(gca,'XTickLabel','');
set(gca,'YTickLabel','');
pos = get(axh,'position'); % position of current axes in parent figure
pic = imread('coins.png');
x = pos(1);
y = pos(2);
dlta = (pos(3)-pos(1)) / length(xticks); % square size in units of parent figure
% create image label
lblAx = axes('parent',figh,'position',[x+dlta/4,y-dlta/2,dlta/2,dlta/2]);
imagesc(pic,'parent',lblAx)
axis(lblAx,'off')
One problem is that the label will have the same colormap as the original image.
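One possible workaround (a sketch, not part of the original answer): display the label picture as a truecolor image, which is unaffected by the axes colormap.
pic = imread('coins.png');
picRGB = repmat(im2double(pic), [1 1 3]); % grayscale -> MxNx3 truecolor
image(picRGB, 'parent', lblAx);           % truecolor data bypasses the colormap
axis(lblAx, 'off')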
@Itmar Katz gives a solution very close to what I want to do, which I've marked as 'accepted'. In the meantime, I made this dirty solution using subplots, which I give here for completeness. It only works up to a certain input matrix size, though, and only displays well when the figure is square.
conf_mat = randn(5);
A = imread('peppers.png');
tick_images = {A, A, A, A, A};
n = length(conf_mat) + 1;
% plotting axis labels at left and top
for i = 1:(n-1)
subplot(n, n, i + 1);
imshow(tick_images{i});
subplot(n, n, i * n + 1);
imshow(tick_images{i});
end
% generating logical array for where the confusion matrix should be
idx = 1:(n*n);
idx(1:n) = 0;
idx(mod(idx, n)==1) = 0;
% plotting the confusion matrix
subplot(n, n, find(idx~=0));
imshow(conf_mat);
axis image
colormap(gray)