Reduce the calculation time for MATLAB code - performance

To calculate an enhancement function for an input image, I have written the following piece of code:
Ig = rgb2gray(imread('test.png'));
N = numel(Ig);
meanTotal = mean2(Ig);
[row,cal] = size(Ig);
IgTransformed = Ig;
n = 3;
a = 1;
b = 1;
c = 1;
k = 1;
for ii = 2:row-1
    for jj = 2:cal-1
        window = Ig(ii-1:ii+1,jj-1:jj+1);
        IgTransformed(ii,jj) = ((k*meanTotal)/(std2(window) + b))*abs(Ig(ii,jj)-c*mean2(window)) + mean2(window).^a;
    end
end
How can I reduce the calculation time?
Obviously, one of the limiting factors is the small (3x3) window that has to be extracted inside the loop on every iteration.

Here you go -
Igd = double(Ig);
std2v = colfilt(Igd, [3 3], 'sliding', @std); % sliding-window standard deviation
mean2v = conv2(Igd, ones(3), 'same')/9;       % sliding-window mean
Ig_out = uint8((k*meanTotal)./(std2v + b).*abs(Igd - c*mean2v) + mean2v.^a);
This will change the boundary elements too; if that is not desired, they can be set back to the original values with a few additional steps, like so -
Ig_out(:,[1 end]) = Ig(:,[1 end]);
Ig_out([1 end],:) = Ig([1 end],:);
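Alternatively, if the Image Processing Toolbox is available (it already supplies the mean2 and std2 used above), stdfilt computes the sliding-window standard deviation directly and is typically faster than colfilt with a function handle. A minimal sketch of the same computation, reusing a, b, c, k and meanTotal from the question:
Igd = double(Ig);
std2v = stdfilt(Igd);                   % 3x3 local standard deviation (default neighborhood)
mean2v = conv2(Igd, ones(3)/9, 'same'); % 3x3 local mean
Ig_out = uint8((k*meanTotal)./(std2v + b).*abs(Igd - c*mean2v) + mean2v.^a);
Ig_out(:,[1 end]) = Ig(:,[1 end]);      % restore boundary columns
Ig_out([1 end],:) = Ig([1 end],:);      % restore boundary rows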

Related

How to properly process images with mixed noise types

The picture with noise is like this.
Noised picture: Image3.bmp
I was doing image processing in MATLAB with some built-in and self-implemented filters.
I have already tried a combination of bilateral, median, and Gaussian filters; the bilateral and gaussian code is at the end of this post.
img3 = double(imread('Image3.bmp')); % this is the noised image
lena = double(imread('lena_gray.jpg')); % this is the original one
img3_com = bilateral(img3, 3, 2, 80);
img3_com = medfilt2(img3_com, [3 3], 'symmetric');
img3_com = gaussian(img3_com, 3, 0.5);
img3_com = bilateral(double(img3_com), 6, 100, 13);
SNR3_com = snr(img3_com,img3_com - lena); % 17.1107
However, the result is not promising, with an SNR of only 17.11.
Filtered image: img3_com
The original picture is like this.
Clean original image: lena_gray.jpg
Could you please give me any ideas about how to process it? For example, what noise generators might have produced the noised image, and what filtering or image processing methods I could use to deal with it. I'd appreciate it!
My bilateral function bilateral.m
function img_new = bilateral(img_gray, window, sigmaS, sigmaI)
imgSize = size(img_gray);
img_new = zeros(imgSize);
for i = 1:imgSize(1)
    for j = 1:imgSize(2)
        wSum = 0;    % weighted sum of neighbor intensities (renamed from 'sum', which shadows the built-in)
        simiSum = 0; % sum of the weights, for normalization
        for a = -window:window
            for b = -window:window
                x = i + a;
                y = j + b;
                p = img_gray(i,j);
                if x < 1 || y < 1 || x > imgSize(1) || y > imgSize(2)
                    continue; % neighbor falls outside the image
                else
                    q = img_gray(x,y);
                end
                % spatial (domain) Gaussian times intensity (range) Gaussian
                gaussianFilter = exp(-double(a^2 + b^2)/(2*sigmaS^2) - (double(p - q)^2)/(2*sigmaI^2));
                wSum = wSum + gaussianFilter * q;
                simiSum = simiSum + gaussianFilter;
            end
        end
        img_new(i,j) = wSum/simiSum;
    end
end
% display SNR against the clean reference
lena = double(imread('lena_gray.jpg'));
SNR1_4_ = snr(img_new, img_new - lena);
disp(SNR1_4_);
My gaussian implementation gaussian.m
function img_gau = gaussian(img, hsize, sigma)
h = fspecial('gaussian', hsize, sigma);
img_gau = conv2(img, h, 'same');
% display SNR against the clean reference
lena = double(imread('lena_gray.jpg'));
SNR1_4_ = snr(img_gau, img_gau - lena);
disp(SNR1_4_);
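Since the clean reference lena_gray.jpg is available here, one quick diagnostic (a sketch, assuming both images are grayscale and the same size) is to look at the residual directly; a bell-shaped histogram suggests Gaussian noise, while isolated spikes at the extremes suggest salt-and-pepper noise:
img3 = double(imread('Image3.bmp'));
lena = double(imread('lena_gray.jpg'));
residual = img3 - lena;  % the noise that was actually added
histogram(residual(:));  % inspect the shape of the noise distribution
fprintf('residual mean = %.3f, std = %.3f\n', mean(residual(:)), std(residual(:)));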

Is there a faster alternative to find() function in MATLAB?

I'm running a kinetic Monte Carlo simulation code in which I have a large, sparsely populated array; I first calculate its cumsum() and then find the first element greater than or equal to a given value using find().
vecIndex = find(cumsum(R) >= threshold, 1);
Since I'm calling the function a large number of times, I'd like to speed up my code. Is there a faster way to carry out this operation?
The complete function:
function Tr = select_transition(Fr,Rt,R)
N_dep = (1/(Rt+1))*Fr;      %N flux-rate
Ga_dep = (1-(1/(Rt+1)))*Fr; %Ga flux-rate
Tr = zeros(4,1);
RVec = R(:, :, :, 3);
RVec = RVec(:);
sumR = Fr + sum(RVec);      %Sum of the rates of all possible transitions
format long
sumRx = rand * sumR;        %for randomly selecting one of the transitions
%disp(sumRx);
if sumRx <= Fr              %adatom addition
    Tr(1) = 0;
    if sumRx <= Ga_dep
        Tr(2) = 10;         %Ga deposition
    else
        Tr(2) = -10;        %N deposition
    end
else
    Tr(1) = 1;              %adatom hopping
    vecIndex = find(cumsum(RVec) >= sumRx - Fr, 1);
    [Tr(2), Tr(3), Tr(4)] = ind2sub(size(R(:, :, :, 3)), vecIndex); %determines specific hopping transition
end
end
If RVec is sparsely populated, it is more efficient to extract its nonzero values and their indices and apply cumsum to those values only; skipping the zeros does not change where the cumulative sum first crosses the threshold. Note that for the indexing below to work, RVec must be the 3-D slice R(:, :, :, 3) itself rather than the flattened RVec(:): with three outputs, find returns the first-dimension subscript in r and a linear index over the remaining two dimensions in c.
Tr(1) = 1;
[r,c,v] = find(RVec);           % extract nonzeros; r: dim-1 subscript, c: linear index over dims 2-3
cum = cumsum(v);
f = find(cum >= sumRx - Fr, 1); % first nonzero whose cumulative rate crosses the threshold
Tr(2) = r(f);
sz = size(R);
[Tr(3), Tr(4)] = ind2sub(sz(2:3), c(f));
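Whether this pays off depends on how sparsely populated the rate array actually is. A quick timing sketch, with made-up sizes and roughly 5% nonzeros (adjust to match your R):
R3 = (rand(4, 50, 50) < 0.05) .* rand(4, 50, 50); % stand-in for R(:, :, :, 3)
threshold = 0.5 * sum(R3(:));
tic % original approach: cumsum over every element
for t = 1:1e4
    idx = find(cumsum(R3(:)) >= threshold, 1);
end
toc
tic % proposed approach: cumsum over the nonzeros only
for t = 1:1e4
    [r, c, v] = find(R3);
    f = find(cumsum(v) >= threshold, 1);
end
toc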

How to calculate the mean of 3D matrices in an image without NaN?

I need to calculate the mean of a 3D matrix (the last step in the code). However, there are many NaNs in the (diff_dataframe./dataframe_vor) calculation, so when I use this code some results will be NaN. How can I calculate the mean of this matrix while ignoring the NaNs? I have attached the code below.
S.amplitude = 1:20;
S.blocksize = [1 2 3 4 5 6 8 10 12 15 20];
S.frameWidth = 1920;
S.frameHeight = 1080;
S.quality = 0:10:100;
image = 127*ones(S.frameHeight,S.frameWidth,3);
S.yuv2rgb = [1 0 1.28033; 1 -0.21482 -0.38059; 1 2.12798 0];
i_bs = 0;
for BS = S.blocksize
    i_bs = i_bs + 1;
    hblocks = S.frameWidth / BS;
    vblocks = S.frameHeight / BS;
    i_a = 0;
    dataU = randi([0 1],vblocks,hblocks);
    dataV = randi([0 1],vblocks,hblocks);
    dataframe_yuv = zeros(S.frameHeight, S.frameWidth, 3);
    for x = 1 : hblocks
        for y = 1 : vblocks
            dataframe_yuv((y-1)*BS+1:y*BS, ...
                (x-1)*BS+1:x*BS, 2) = dataU(y,x) * 2 - 1;
            dataframe_yuv((y-1)*BS+1:y*BS, ...
                (x-1)*BS+1:x*BS, 3) = dataV(y,x) * 2 - 1;
        end
    end
    dataframe_rgb(:,:,1) = S.yuv2rgb(1,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(1,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(1,3) * dataframe_yuv(:,:,3);
    dataframe_rgb(:,:,2) = S.yuv2rgb(2,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(2,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(2,3) * dataframe_yuv(:,:,3);
    dataframe_rgb(:,:,3) = S.yuv2rgb(3,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(3,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(3,3) * dataframe_yuv(:,:,3);
    for A = S.amplitude
        i_a = i_a + 1;
        i_q = 0;
        image1p = round(image + dataframe_rgb * A);
        image1n = round(image - dataframe_rgb * A);
        dataframe_vor = ((image1p-image1n)/2)/255;
        for Q = S.quality
            i_q = i_q + 1;
            namestrp = ['greyjpegs/Img_BS' num2str(BS) '_A' num2str(A) '_Q' num2str(Q) '_1p.jpg'];
            namestrn = ['greyjpegs/Img_BS' num2str(BS) '_A' num2str(A) '_Q' num2str(Q) '_1n.jpg'];
            imwrite(image1p/255,namestrp,'jpg', 'Quality', Q);
            imwrite(image1n/255,namestrn,'jpg', 'Quality', Q);
            error_mean(i_bs, i_a, i_q) = mean2((abs(diff_dataframe./dataframe_vor)));
        end
    end
end
mean2 is a shortcut function that's part of the Image Processing Toolbox; it finds the average of an entire 2D region and has no special handling for NaN. In that case, simply remove all values that are NaN and find the average of what remains. Note that removing the NaNs unrolls the 2D region into a 1D vector, so we can simply use mean here. As an additional check, let's make sure there are no divide-by-zero artifacts, so also check for Inf as well.
Therefore, replace this line:
error_mean(i_bs, i_a, i_q) = mean2((abs(diff_dataframe./dataframe_vor)));
... with:
tmp = abs(diff_dataframe ./ dataframe_vor);
mask = ~isnan(tmp) & ~isinf(tmp); % keep only finite values (equivalently: mask = isfinite(tmp))
tmp = tmp(mask);
if isempty(tmp)
    error_mean(i_bs, i_a, i_q) = 0;
else
    error_mean(i_bs, i_a, i_q) = mean(tmp);
end
We first assign the desired operation to a temporary variable, use isnan and isinf to filter out the offending values, then find the average of the rest. Note that the two conditions must be combined with & (logical AND): a value is kept only if it is neither NaN nor Inf. One intricacy is that if your entire region is NaN or Inf, removing all of these entries leaves an empty vector, and the mean of an empty vector is undefined (NaN). A separate check makes sure that if the vector is empty, the value 0 is assigned instead.
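On newer MATLAB releases (R2015a and later), mean also accepts an 'omitnan' flag, so a shorter equivalent (a sketch; Inf values still need to be masked separately) would be:
tmp = abs(diff_dataframe ./ dataframe_vor);
tmp(isinf(tmp)) = NaN;       % treat Inf the same as NaN
m = mean(tmp(:), 'omitnan'); % returns NaN if every entry was NaN/Inf
if isnan(m), m = 0; end
error_mean(i_bs, i_a, i_q) = m;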

Matlab Multivariable Minimization

I'm trying to develop the adaptive unsharp algorithm described by Polesel et al. in the article "Image Enhancement via Adaptive Unsharp Masking" (link to the article). The core of the algorithm is the minimization of a cost function defined as:
J(m,n) = E[e(m,n)^2] = E[(gd(m,n)-gy(m,n))^2]
where E[] is the statistical expectation and gy(m,n) is:
gy(m,n) = gx(m,n) + lambda1(m,n)*gzx(m,n) + lambda2(m,n)*gzy(m,n);
I want to find lambda1 and lambda2 for each pixel in order to minimize the cost function at each pixel.
Here is the code that I have written so far:
function [ o_sharpened_image ] = AdaptativeUnsharpMask( i_image , t1, t2)
%ADAPTATIVEUNSHARPMASK Summary of this function goes here
%   Detailed explanation goes here
if isa(i_image,'dip_image')
    i_image = dip_array(i_image);
end
if ~isfloat(i_image)
    i_image = im2double(i_image);
end
adh = 4;
adl = 3;
g = [-1 -1 -1; -1 8 -1; -1 -1 -1];
dim = size(i_image);
lambda_x = 0.5*ones(dim);
lambda_y = 0.5*ones(dim);
z_x = conv2(i_image,[-1 2 -1],'same');
z_y = conv2(i_image,[-1; 2; -1],'same');
g_x = conv2(i_image,g,'same');
g_zx = conv2(z_x,g,'same');
g_zy = conv2(z_y,g,'same');
a = ones(dim);
variance_map = colfilt(i_image,[3 3],'sliding',@var);
a(variance_map >= t1 & variance_map < t2) = adh;
a(variance_map >= t2) = adl;
g_d = a.*g_x;
lambda = [lambda_x lambda_y];
lambda0 = lambda;
lambda_min = lsqnonlin(@(lambda) UnsharpCostFunction(lambda,g_d,g_zx,g_zy),lambda0);
o_sharpened_image = i_image + lambda_min(:,1:size(i_image,2)).*z_x + lambda_min(:,size(i_image,2)+1:end).*z_y;
end
Here is the code of the cost function:
function [ J ] = UnsharpCostFunction( i_lambda, i_gd, i_gzx, i_gzy )
%UNSHARPCOSTFUNCTION Summary of this function goes here
gy = i_gd + i_lambda(:,1:size(i_gd,2)).*i_gzx + i_lambda(:,size(i_gd,2)+1:end).*i_gzy;
J = mean((i_gd(:) - gy(:)).^2);
end
At each iteration I print the value of the J function to the command window, and it is always the same. What am I doing wrong?
Thank you.

Solve wrong type argument 'cell'

I write some values into the variable 'O' using
for i = 1:size(I,1)
    for j = 1:size(I,2)
        h = i * j;
        O{h} = I(i, j) * theta(h);
    end
end
I and theta are both of type double.
I need to sum() all the values in 'O', but when I do, it gives me an error: sum: wrong type argument 'cell'.
How can I sum() it?
P.S. When I display O, it gives me
O =
{
[1,1] = 0.0079764
[1,2] = 0.0035291
[1,3] = 0.0027539
[1,4] = 0.0034392
[1,5] = 0.017066
[1,6] = 0.0082958
[1,7] = 1.4764e-04
[1,8] = 0.0024597
[1,9] = 1.1155e-04
[1,10] = 0.0010342
[1,11] = 0.0039654
[1,12] = 0.0047713
[1,13] = 0.0054305
[1,14] = 3.3794e-04
[1,15] = 0.014323
[1,16] = 0.0026826
[1,17] = 0.013864
[1,18] = 0.0097778
[1,19] = 0.0058029
[1,20] = 0.0020726
[1,21] = 0.0016430
etc...
The exact answer to your question is to use cell2mat:
sum (cell2mat (your_cell_o))
However, this is the wrong way to solve your problem. The thing is that you should not have created a cell array in the first place; you should have created a numeric array:
O = zeros (size (I), class (I));
for i = 1:rows (I)
    for j = 1:columns (I)
        h = i * j;
        O(h) = I(i, j) * theta(h);
    endfor
endfor
But even this is really bad and slow. Octave is a language for vectorized operations. Instead, you should have:
h = (1:rows (I))' .* (1:columns (I)); # automatic broadcasting
O = I .* theta (h);
This assumes your function theta behaves properly: if given a matrix, it will compute the value for each element of h and return something of the same size.
If you get an error about nonconformant sizes, my guess is that you have an old version of Octave that does not perform automatic broadcasting. If so, update Octave. If you really can't, then use:
h = bsxfun (@times, (1:rows (I))', 1:columns (I));
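A quick sanity check (a sketch; theta here is a made-up function for illustration) that the vectorized form matches the elementwise intent of the loop:
I = rand (3, 4);
theta = @(h) 1 ./ h; % hypothetical theta, just for the check
h = (1:rows (I))' .* (1:columns (I));
O_vec = I .* theta (h);
O_loop = zeros (size (I));
for i = 1:rows (I)
    for j = 1:columns (I)
        O_loop(i, j) = I(i, j) * theta (i * j);
    endfor
endfor
assert (max (abs (O_vec(:) - O_loop(:))) < eps)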
