I want the RGB values of an image placed into a histogram, and then have that histogram compared to another image's histogram.
Currently this is the code:
if (size(cimg, 3) ~= 3)
error('rgbhist:numberOfSamples', 'Input image must be RGB.')
end
nBins = 256;
rHist = imhist(cimg(:,:,1), nBins);
gHist = imhist(cimg(:,:,2), nBins);
bHist = imhist(cimg(:,:,3), nBins);
hFig = figure;
subplot(1,2,1);imshow(cimg)
subplot(1,2,2);
hold on
h(1) = stem(1:256, rHist);
h(2) = stem((1:256) + 1/3, gHist, 'g'); %parenthesize so the offset is actually applied
h(3) = stem((1:256) + 2/3, bHist, 'b');
hold off
set(h, 'marker', 'none')
set(h(1), 'color', [1 0 0])
set(h(2), 'color', [0 1 0])
set(h(3), 'color', [0 0 1])
axis square
The code outputs the image along with its RGB histogram. How can I use that histogram to compare it with other histograms, so that I could classify an image as having nearly the same colors as another image?
You could use the Kullback-Leibler divergence to calculate the distance between two histograms.
This is easy, as you can treat a histogram as a probability distribution.
Since the KL divergence isn't symmetric, you could compute it twice (namely D(X||Y) and D(Y||X)) and take the average.
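For illustration, here is a minimal sketch, assuming rHist1/gHist1/bHist1 come from the code above and rHist2/gHist2/bHist2 are the corresponding histograms of a second image (the *2 names are hypothetical):
% Stack the channel histograms and normalise them into probability distributions.
p = [rHist1; gHist1; bHist1]; p = p/sum(p) + eps; % eps avoids log(0)
q = [rHist2; gHist2; bHist2]; q = q/sum(q) + eps;
kl = @(a,b) sum(a .* log(a ./ b)); % KL divergence D(a||b)
d = (kl(p,q) + kl(q,p)) / 2; % symmetrised distance; smaller means more similar colors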
Find pixel locations that have value/color = white
for i=1:row
for j=1:colo
if i==x %if the row of the rgb image is the same as the pixel location row
if j==y %if the column of the rgb image is the same as the pixel location column
end
end
end
end
What's wrong?
You can use logical indexing.
For logical indexing to work, you need the mask (bw2) to be the same size as RGB.
Since RGB is a 3D matrix, you need to replicate bw2 three times.
Example:
%Read sample image.
RGB = imread('autumn.tif');
%Build mask.
bw2 = zeros(size(RGB, 1), size(RGB, 2));
bw2(1+(end-30)/2:(end+30)/2, 1+(end-30)/2:(end+30)/2) = 1;
%Convert bw2 mask to same dimensions as RGB
BW = logical(cat(3, bw2, bw2, bw2));
RGB(BW) = 255;
figure;imshow(RGB);
Result (just decoration):
In case you want to fix your for loops implementation, you can do it as follows:
[x, y] = find(bw2 == 1);
[row, colo, z]=size(RGB); %size of rgb image
for i=1:row
for j=1:colo
if any(i==x) %if the row of the rgb image matches a pixel-location row
if any(j==y(i==x)) %if the column matches a pixel-location column in that row
RGB(i,j,1)=255; %set Red color channel to 255
RGB(i,j,2)=255; %set Green color channel to 255
RGB(i,j,3)=255; %set Blue color channel to 255
end
end
end
end
[x, y] = find(bw2 == 1)
x and y are arrays unless only one pixel is white.
However, if i==x and if j==y compare a single number with an array. This is wrong.
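For example, a minimal sketch of the pitfall (values made up for illustration):
x = [2 5 7];
i = 5;
i == x % returns the logical vector [0 1 0], not a scalar
if i == x % `if` on a vector only passes when ALL elements are true
    disp('reached only if i equals every element of x')
end
any(i == x) % true -- the intended membership test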
As Anthony pointed out, x and y are arrays so i==x and j==y won't work as intended. Furthermore RGB(i,j) only uses the first two dimensions, but RGB images have three dimensions. Lastly, from an optimization point of view, the for-loops are unnecessary.
%% Create some mock data.
% Generate a black/white image.
bw2 = rand(10);
% Introduce some 1's in the BW image
bw2(1:5,1:5)=1;
% Generate a RGB image.
RGB = rand(10,10,3)*255;
%% Do the math.
% Build a mask from the bw2 image
bw_mask = bw2 == 1;
% Set RGB at bw_mask pixels to white.
RGB2 = bsxfun(@plus, bw_mask*255, bsxfun(@times, RGB, ~bw_mask)); % MATLAB 2016a and earlier
RGB2 = bw_mask*255 + RGB .* ~bw_mask; % MATLAB 2016b and later (implicit expansion)
I have to use an inverse filter to remove the blurring from this image.
Unfortunately, I have to figure out the transfer function H of the imaging
system used to get these sharper images; it should be Gaussian. So, I should determine the approximate width of the Gaussian by trying different Gaussian widths in an inverse filter and judging which resulting images look the “best”.
The best result will be optimally sharp – i.e., edges will look sharp but will not have visible ringing.
I tried three approaches:
1. I created a transfer function of N dimensions (an odd number, for simplicity) by building an N-by-N grid and applying the Gaussian function to it. After that, I padded this transfer function with zeros to match the size of the original image. However, after applying the filter to the original image, I just see noise (too many artifacts).
2. I created a transfer function the same size as the original image by building a grid of that size. If sigma is too small, the FFT magnitude of the PSF is wide; otherwise it gets thinner. If sigma is small the image becomes even more blurred, but with a very high sigma value I get back the same image (no improvement at all).
3. I used the fspecial function, playing with the size and sigma of h. But I still do not get anything sharper than the original blurred image.
Any ideas?
Here is the code used for creating the transfer function in Approach 1:
%Create Gaussian Filter
function h = transfer_function(N, sigma, I) %N is the dimension of the kernel
%create a 2D-grid that is the same size as the Gaussian filter matrix
grid = -floor(N/2) : floor(N/2);
[x, y] = meshgrid(grid, grid);
arg = -(x.*x + y.*y)/(2*sigma*sigma);
h = exp(arg); %gaussian 2D-function
kernel = h/sum(h(:)); %Normalize so that total weight equals 1
[rows,cols] = size(I);
add_zeros_w = (rows - N)/2;
add_zeros_h = (cols - N)/2;
h = padarray(kernel,[add_zeros_w add_zeros_h],0,'both'); % h = kernel_final_matrix
end
And this is the code for all three approaches:
I = imread('lena_blur.jpg');
I1 = rgb2gray(I);
figure(1),
I1 = double(I1);
%---------------Approach 1
% N = 5; %Dimension Assume is an odd number
% sigma = 20; %The bigger number, the thinner the PSF in FREQ
% H = transfer_function(N, sigma, I1);
%I1=I1(2:end,2:end); %To simplify operations
imagesc(I1); colormap('gray'); title('Original Blurred Image')
I_fft = fftshift(fft2(I1)); %Shift the image in Fourier domain to let its DC part in the center of the image
% %FILTER-----------Approach 2---------------
% N = 5; %Dimension Assume is an odd number
% sigma = 20; %The bigger number, the thinner the PSF in FREQ
%
%
% [x,y] = meshgrid(-size(I,2)/2:size(I,2)/2-1, -size(I,1)/2:size(I,1)/2-1);
% H = exp(-(x.^2+y.^2)*sigma/2);
% %// Normalize so that total area (sum of all weights) is 1
% H = H /sum(H(:));
%
% %Avoid zero freqs
% for i = 1:size(I,2) %Cols
% for j = 1:size(I,1) %Rows
% if (H(i,j) == 0)
% H(i,j) = 1e-8;
% end
% end
% end
%
% [rows columns z] = size(I);
% G_filter_fft = fft2(H,rows,columns);
%FILTER---------------------------------
%Filter--------- Aproach 3------------
N = 21; %Dimension Assume is an odd number
sigma = 1.25; %The bigger number, the thinner the PSF in FREQ
H = fspecial('gaussian',N,sigma);
[rows columns z] = size(I);
G_filter_fft = fft2(H,rows,columns);
%Filter--------- Aproach 3------------
%DISPLAY FFT PSF MAGNITUDE
figure(2),
imshow(fftshift(abs(G_filter_fft)),[]); title('FFT PSF magnitude 2D');
% Yest = Y_blurred/Gaussian_Filter
I_restoration_fft = I_fft./G_filter_fft;
I_restoration = (ifft2(I_restoration_fft));
I_restoration = abs(I_restoration);
I_fft = abs(I_fft);
% Display of Frequency domain (To compare with the slides)
figure(3),
subplot(1,3,1);
imagesc(I_fft);colormap('gray');title('|DFT Blurred Image|')
subplot(1,3,2)
imshow(log(fftshift(abs(G_filter_fft))+1),[]) ;title('| Log DFT Point Spread Function + 1|');
subplot(1,3,3)
imagesc(abs(I_restoration_fft));colormap('gray'); title('|DFT Deblurred|')
% imshow(log(I_restoration+1),[])
%Display PSF FFT in 3D
figure(4)
hf_abs = abs(G_filter_fft);
%270x270
surf([-134:135]/135,[-134:135]/135,fftshift(hf_abs));
% surf([-134:134]/134,[-134:134]/134,fftshift(hf_abs));
shading interp, camlight, colormap jet
xlabel('PSF FFT magnitude')
%Display Result (it should be the de-blurred image)
figure(5),
%imshow(fftshift(I_restoration));
imagesc(I_restoration);colormap('gray'); title('Deblurred Image')
%Pseudo Inverse restoration
% cam_pinv = real(ifft2((abs(G_filter_fft) > 0.1).*I_fft./G_filter_fft));
% imshow(fftshift(cam_pinv));
% xlabel('pseudo-inverse restoration')
A possible solution is deconvwnr. I will first show its performance starting from an undistorted lena image, so I know the exact Gaussian blurring function. Note that setting estimated_nsr to zero will destroy the performance completely due to quantisation noise.
I_ori = imread('lenaTest3.jpg'); % Download an original undistorted lena file
N = 19;
sigma = 5;
H = fspecial('gaussian',N,sigma);
estimated_nsr = 0.05;
I = imfilter(I_ori, H);
wnr3 = deconvwnr(I, H, estimated_nsr);
figure
subplot(1, 4, 1);
imshow(I_ori)
subplot(1, 4, 2);
imshow(I)
subplot(1, 4, 3);
imshow(wnr3)
title('Restoration of Blurred, Noisy Image Using Estimated NSR');
subplot(1, 4, 4);
imshow(H, []);
The best parameters I found for your problem, by trial and error, are:
N = 19;
sigma = 2;
H = fspecial('gaussian',N,sigma);
estimated_nsr = 0.05;
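For reference, a hedged sketch applying those parameters to the blurred image from your code (lena_blur.jpg):
I_blur = im2double(rgb2gray(imread('lena_blur.jpg')));
H = fspecial('gaussian', 19, 2);
deblurred = deconvwnr(I_blur, H, 0.05); % Wiener deconvolution with estimated NSR
figure, imshow(deblurred), title('Wiener deconvolution, N = 19, sigma = 2')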
EDIT: calculating the exact blurring filter that was used
If you download an undistorted lena and take its FFT (I_original_fft), you can calculate the blurring filter that was used as follows:
G_filter_fft = I_fft./I_original_fft
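A hedged sketch of that computation, assuming lenaTest3.jpg is the undistorted counterpart at the same size as the blurred grayscale image (note that I_fft in your code is fftshift-ed, so both spectra must be shifted consistently):
I_original = imread('lenaTest3.jpg');
if ndims(I_original) == 3, I_original = rgb2gray(I_original); end % in case the file is RGB
I_original_fft = fftshift(fft2(double(I_original)));
G_filter_fft = I_fft ./ (I_original_fft + eps); % eps guards against near-zero bins
h_est = real(ifft2(ifftshift(G_filter_fft))); % estimated spatial-domain PSF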
The following is my code for the sample covariance matrix of a single pixel. I have taken 10 neighboring pixels for pixel (1,1), including the first pixel of the stacked image. y_1, y_2, y_3 and y_4 are my four images. Kindly let me know if the question is not clear.
y_cal=cat(3, y_1, y_2, y_3, y_4);
Y_new=reshape(y_cal, [5586, 4]);
Y_new_cov=Y_new(1:10,:);
Y_new_cell = arrayfun(@(ri) Y_new_cov(ri, :)', 1:10, 'UniformOutput', 0);
Y_new_cell_tr = cellfun(@ctranspose, Y_new_cell, 'UniformOutput', 0);
Y_covariance_initial = cellfun(@mtimes, Y_new_cell, Y_new_cell_tr, 'UniformOutput', 0);
Y_covariance_final = sum(cat(3, Y_covariance_initial{:}), 3); % sum the 10 outer products
Here the 10 pixels were selected manually before the covariance was computed. My image dimensions are 114 x 49, so the final covariance matrix should be 114 x 49 x 4 x 4. How should I apply a square window to select the neighboring pixels of a target pixel, and then continue over the other pixels as well?
Kindly provide the necessary assistance; it took me two months to write this code, coming from a non-coding background. Your help will be highly appreciated.
The standard way would be to use nlfilter. For this function, you supply your function (the one that computes the covariance), and it will apply it to a sliding window of the size you specify. For example:
octave> img = rand (64, 64);
octave> img_cov = nlfilter (img, [10 10], @(x) cov (x(:)));
Will call cov (x(:)) for each sliding block of size [10 10] (after padding the original image with zeros), and return an array of size [64 64] (same as the input image) with those results. Since you are using Octave, your window and image may have any number of dimensions. So you can do this:
octave> img = rand (64, 64, 3, 4);
octave> img_cov = nlfilter (img, [10 10 3 4], @(x) cov (x(:)));
An alternative is to get all the sliding windows of your n-dimensional image into columns (using im2col), use a function that works along each column, and then build the image back with col2im. This may, or may not, be faster, but it gives you a bit more flexibility if you can wrap your head around it:
octave> img = rand (64, 64);
octave> im_cols = im2col (img, [10 10], "sliding");
octave> im_cov = you_nd_cov_function (im_cols);
octave> img_cov = col2im (im_cov, [1 1], [55 55], "sliding");
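If you need the full 114 x 49 x 4 x 4 result rather than one scalar per window, a hedged sketch with explicit loops (assuming y_cal is the 114 x 49 x 4 stack from the question; the half-window w is a free parameter) could look like this:
w = 1; % half-window, i.e. up to (2*w+1)^2 neighbours per pixel
[R, C, B] = size(y_cal);
Y_cov = zeros(R, C, B, B);
for r = 1:R
    for c = 1:C
        rr = max(1, r-w) : min(R, r+w); % clip the window at the image borders
        cc = max(1, c-w) : min(C, c+w);
        block = reshape(y_cal(rr, cc, :), [], B); % N-by-4 matrix of samples
        Y_cov(r, c, :, :) = block' * block; % sum of outer products, matching your accumulation
    end
end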
Suppose I would like to draw an image like the following,
where the pixel values are 0 for black and 1 for white.
The lines are drawn with a specific radius and angle.
Now I create an 80 x 160 matrix:
texturematrix = zeros(80,160);
Then I want to change particular elements to 1 according to the line conditions.
But how do I draw the lines repeatedly, a specific distance apart from each other, efficiently?
This might not be what you are looking for, but generating such an image could be done by plotting a set of lines, as follows:
% grid sizes
m = 6;
n = 5;
% line length and angle
len = 1;
theta = .1*pi;
[a,b] = meshgrid(1:m,1:n);
x = reshape([a(:),a(:)+len*cos(theta),nan(numel(a),1)]',[],1);
y = reshape([b(:),b(:)+len*sin(theta),nan(numel(b),1)]',[],1);
h = figure();
plot(x,y,'k', 'LineWidth', 2);
But this has nothing to do with a texture matrix yet. So, we capture the figure and construct a matrix of the desired size:
set(gca, 'position',[0 0 1 1], 'units','normalized', 'YTick',[], 'XTick',[]);
frame = frame2im(getframe(h));
im = imresize(frame,[80 160]);
M = ~(im(2:end,2:end,1)==255);
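Alternatively, here is a hedged sketch that writes the segments straight into the 80 x 160 matrix, skipping the figure/getframe round-trip (it reuses m, n, len and theta from above; the pixel scaling is an assumption):
texturematrix = zeros(80, 160);
sx = 160/(m+1); sy = 80/(n+1); % grid spacing in pixels
t = linspace(0, 1, 50); % sample points along each segment
for a0 = 1:m
    for b0 = 1:n
        px = round((a0 + t*len*cos(theta)) * sx);
        py = round((b0 + t*len*sin(theta)) * sy);
        ok = px >= 1 & px <= 160 & py >= 1 & py <= 80; % keep in-bounds samples
        texturematrix(sub2ind([80 160], py(ok), px(ok))) = 1;
    end
end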
I have an image that I want to import into MATLAB, using the following code. The problem is that when I convert the image to grayscale, everything changes and the converted image is not similar to the original one. In other words, I want to keep the values (or let's say the image) as they are in the original image. Is there any way to do this?
I = imread('myimage.png');
figure, imagesc(I), axis equal tight xy
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
Your original image already uses a jet colormap. The problem is that when you convert it to grayscale, you lose some crucial information. See the image below.
In the original image you have a heatmap. Blue areas generally indicate "low values", whereas red areas indicate "high values". But when converted to grayscale, both areas indicate low values, as they approach dark pixels (see the arrows).
A possible solution is this:
You take every pixel of your image, find the nearest (closest) color value in the jet colormap, and use its index as a gray value.
I will show you first the final code and the results. The explanation goes below:
I = im2double(imread('myimage.png'));
map = jet(256);
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
Igray = zeros(size(I, 1), size(I, 2), 'uint8');
for ii = 1:size(Irgb, 1)
[~, idx] = min(sum((bsxfun(@minus, Irgb(ii, :), map)) .^ 2, 2));
Igray(ii) = idx - 1;
end
clear Irgb;
subplot(2,1,1), imagesc(I), axis equal tight xy
subplot(2,1,2), imagesc(Igray), axis equal tight xy
Result:
>> whos I Igray
Name Size Bytes Class Attributes
I 110x339x3 894960 double
Igray 110x339 37290 uint8
Explanation:
First, you get the jet colormap, like this:
map = jet(256);
It will return a 256x3 colormap with the possible colors of the jet palette, where each row is an RGB pixel. map(1,:) would be a kind of dark blue, and map(256,:) would be a kind of dark red, as expected.
Then, you do this:
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
... to turn your 110x339x3 image into a 37290x3 matrix, where each row is an RGB pixel.
Now, for each pixel, you take the Euclidean distance of that pixel to the map pixels. You take the index of the nearest one and use it as a gray value. The minus one (-1) is because the index is in the range 1..256, but a gray value is in the range 0..255.
Note: the Euclidean distance takes a square root at the end, but since we are just trying to find the closest value, there is no need to do so.
EDIT:
Here is a 10x faster version of the code:
I = im2double(imread('myimage.png'));
map = jet(256);
[C, ~, IC] = unique(reshape(I, size(I, 1) * size(I, 2), 3), 'rows');
equiv = zeros(size(C, 1), 1, 'uint8');
for ii = 1:numel(equiv)
[~, idx] = min(sum((bsxfun(@minus, C(ii, :), map)) .^ 2, 2));
equiv(ii) = idx - 1;
end
Igray = reshape(equiv(IC), size(I, 1), size(I, 2));
Igray = Igray(end:-1:1,:);
clear equiv C IC;
It runs faster because it exploits the fact that the colors in your image are restricted to the colors of the jet palette. It finds the unique colors and matches only those against the palette values. With fewer pixels to match, the algorithm runs much faster. Here are the times:
Before:
Elapsed time is 0.619049 seconds.
After:
Elapsed time is 0.061778 seconds.
In the second image, you're using the default colormap, i.e. jet. If you want grayscale, then try using colormap(gray).
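A minimal usage sketch:
I2 = rgb2gray(imread('myimage.png')); % as in the question
figure, imagesc(I2), axis equal tight xy
colormap(gray) % grayscale display instead of the default jet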