Can someone help me vectorize / speed up this Matlab Loop? - performance

correlation = zeros(length(s1), 1);
sizeNum = 0;
for i = 1 : length(s1) - windowSize - delta
    s1Dat = s1(i : i + windowSize);
    s2Dat = s2(i + delta : i + delta + windowSize);
    if length(find(isnan(s1Dat))) == 0 && length(find(isnan(s2Dat))) == 0
        if(var(s1Dat) ~= 0 || var(s2Dat) ~= 0)
            sizeNum = sizeNum + 1;
            correlation(i) = abs(corr(s1Dat, s2Dat)) ^ 2;
        end
    end
end
What's happening here:
Run through every value in s1. For each value, take a slice of s1 from that
position up to position + windowSize.
Do the same for s2, but take the slice starting delta positions later.
If there are no NaNs in either slice and they aren't both flat,
compute the correlation between them and store it in the
correlation vector.

This is not an answer; I am trying to understand what is being asked.
Take some data:
N = 1e4;
s1 = cumsum(randn(N, 1)); s2 = cumsum(randn(N, 1));
s1(randi(N, 50, 1)) = NaN; s2(randi(N, 50, 1)) = NaN;
windowSize = 200; delta = 100;
Compute correlations:
tic
corr_s = zeros(N - windowSize - delta, 1);
for i = 1:(N - windowSize - delta)
    s1Dat = s1(i:(i + windowSize));
    s2Dat = s2((i + delta):(i + delta + windowSize));
    corr_s(i) = corr(s1Dat, s2Dat);
end
inds = isnan(corr_s);
corr_s(inds) = 0;
corr_s = corr_s .^ 2; % square of correlation coefficient??? Why?
sizeNum = sum(~inds);
toc
This is what you want to do, right? A moving window correlation function? This is a very interesting question indeed …
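Building on that, here is a minimal vectorized sketch of the same moving-window correlation (my own reading of the intent, not verified against the original data; it uses movsum, so MATLAB R2016a or later). The Pearson correlation is computed from moving sums, so the explicit loop over i disappears; windows that contain NaN or have zero variance come out as NaN and are then zeroed, as in the loop above:
a = s1(1 : end - delta);            % window starting at i covers a(i : i + windowSize)
b = s2(1 + delta : end);            % shifted copy, same window positions
n = windowSize + 1;                 % samples per window
Sa  = movsum(a,    [0 windowSize], 'Endpoints', 'discard');
Sb  = movsum(b,    [0 windowSize], 'Endpoints', 'discard');
Saa = movsum(a.^2, [0 windowSize], 'Endpoints', 'discard');
Sbb = movsum(b.^2, [0 windowSize], 'Endpoints', 'discard');
Sab = movsum(a.*b, [0 windowSize], 'Endpoints', 'discard');
% Pearson correlation reconstructed from the moving sums
r = (n*Sab - Sa.*Sb) ./ sqrt((n*Saa - Sa.^2) .* (n*Sbb - Sb.^2));
sizeNum = sum(~isnan(r));           % windows with no NaN and non-zero variance
r(isnan(r)) = 0;                    % NaN windows (NaN values or flat data) -> 0
corr_vec = abs(r).^2;               % same squared magnitude as the loop
The moving-sum formula can differ from corr by a small floating-point amount, so it is worth spot-checking the result against the loop on a short sample before trusting it.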

Related

How to properly process images with mixed noise types

Noisy picture: Image3.bmp
I was doing image processing in MATLAB with a mix of built-in and self-implemented filters.
I have already tried a combination of bilateral, median, and Gaussian filtering; the bilateral and Gaussian code is at the end of this post.
img3 = double(imread('Image3.bmp')); % this is the noised image
lena = double(imread('lena_gray.jpg')); % this is the original one
img3_com = bilateral(img3, 3, 2, 80);
img3_com = medfilt2(img3_com, [3 3], 'symmetric');
img3_com = gaussian(img3_com, 3, 0.5);
img3_com = bilateral(double(img3_com), 6, 100, 13);
SNR3_com = snr(img3_com,img3_com - lena); % 17.1107
However, the result is not promising, with an SNR of only 17.11.
Filtered image: img3_com
Clean original image: lena_gray.jpg
Could you please give me any ideas about how to process it? For example, what kind of noise might have produced the noisy image, and what filtering or other image-processing methods could I use to deal with it? Much appreciated!
My bilateral function bilateral.m
function img_new = bilateral(img_gray, window, sigmaS, sigmaI)
imgSize = size(img_gray);
img_new = zeros(imgSize);
for i = 1:imgSize(1)
    for j = 1:imgSize(2)
        sum = 0;
        simiSum = 0;
        for a = -window:window
            for b = -window:window
                x = i + a;
                y = j + b;
                p = img_gray(i,j);
                q = 0;
                if x < 1 || y < 1 || x > imgSize(1) || y > imgSize(2)
                    % q=0;
                    continue;
                else
                    q = img_gray(x,y);
                end
                gaussianFilter = exp( - double((a)^2 + (b)^2)/ (2 * sigmaS^2 ) - (double(p-q)^2)/ (2 * sigmaI^2 ));
                % gaussianFilter = gaussian((a^2 + b^2)^(1/2), sigma) * gaussian(abs(p-q), sigma);
                sum = sum + gaussianFilter * q;
                simiSum = simiSum + gaussianFilter;
            end
        end
        img_new(i,j) = sum/simiSum;
    end
end
% disp SNR
lena = double(imread('lena_gray.jpg'));
SNR1_4_ = snr(img_new, img_new - lena);
disp(SNR1_4_);
My gaussian implementation gaussian.m
function img_gau = gaussian(img, hsize, sigma)
h = fspecial('gaussian', hsize, sigma);
img_gau = conv2(img,h,'same');
% disp SNR
lena = double(imread('lena_gray.jpg'));
SNR1_4_ = snr(img_gau,img_gau - lena);
disp(SNR1_4_);

Solving Project Euler #12 with Matlab

I am trying to solve Problem #12 of Project Euler with MATLAB, and this is what I came up with to find the number of divisors of a given number:
function [Divisors] = ND(n)
p = primes(n); % returns a row vector containing all the prime numbers less than or equal to n
i = 1;
count = 0;
Divisors = 1;
while n ~= 1
    while rem(n, p(i)) == 0 % rem(a, b) returns the remainder after division of a by b
        count = count + 1;
        n = n / p(i);
    end
    Divisors = Divisors * (count + 1);
    i = i + 1;
    count = 0;
end
end
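As a quick sanity check of ND (this example is mine, not from the original post): the divisor count is the product of (exponent + 1) over the prime factorization.
% 28 = 2^2 * 7, so it should have (2 + 1) * (1 + 1) = 6 divisors: 1, 2, 4, 7, 14, 28
ND(28)   % returns 6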
After this, I created a function that evaluates the number of divisors of the triangular number n * (n + 1) / 2 and stops when that divisor count exceeds a given limit:
function [solution] = Solution(limit)
n = 1;
product = 0;
while(product < limit)
    if rem(n, 2) == 0
        product = ND(n / 2) * ND(n + 1);
    else
        product = ND(n) * ND((n + 1) / 2);
    end
    n = n + 1;
end
solution = n * (n + 1) / 2;
end
I already know the answer, and it's not what comes back from the function Solution. Could someone help me find what's wrong with the code?
When I run Solution(500) (500 is the limit specified in the problem), I get 76588876, but the correct answer should be 76576500.
The trick is quite simple, though it also bothered me for a while: the increment in your while loop is misplaced, which makes the solution slightly bigger than the true answer. Because n is incremented after the divisor count is computed, by the time the loop exits n has already moved one past the index of the triangular number that satisfied the condition.
function [solution] = Solution(limit)
n = 1;
product = 0;
while(product < limit)
    n = n + 1; %%% But here
    if rem(n, 2) == 0
        product = ND(n / 2) * ND(n + 1);
    else
        product = ND(n) * ND((n + 1) / 2);
    end
    %n = n + 1; %%% Not here
end
solution = n * (n + 1) / 2;
end
The output in MATLAB R2015b:
>> Solution(500)
ans =
76576500

How to calculate the mean of 3D matrices in an image without NaN?

I need to calculate the mean of a 3D matrix (last step in the code). However, there are many NaNs in the (diff_dataframe./dataframe_vor) calculation, so some of the results will be NaN. How can I calculate the mean of this matrix while ignoring the NaNs? The code is attached below.
S.amplitude = 1:20; %:20;
S.blocksize = [1 2 3 4 5 6 8 10 12 15 20];
S.frameWidth = 1920;
S.frameHeight = 1080;
S.quality = 0:10:100;
image = 127*ones(S.frameHeight,S.frameWidth,3);
S.yuv2rgb = [1 0 1.28033; 1 -0.21482 -0.38059; 1 2.12798 0];
i_bs = 0;
for BS = S.blocksize
    i_bs = i_bs + 1;
    hblocks = S.frameWidth / BS;
    vblocks = S.frameHeight / BS;
    i_a = 0;
    dataU = randi([0 1],vblocks,hblocks);
    dataV = randi([0 1],vblocks,hblocks);
    dataframe_yuv = zeros(S.frameHeight, S.frameWidth, 3);
    for x = 1 : hblocks
        for y = 1 : vblocks
            dataframe_yuv((y-1)*BS+1:y*BS, ...
                (x-1)*BS+1:x*BS, 2) = dataU(y,x) * 2 - 1;
            dataframe_yuv((y-1)*BS+1:y*BS, ...
                (x-1)*BS+1:x*BS, 3) = dataV(y,x) * 2 - 1;
        end
    end
    dataframe_rgb(:,:,1) = S.yuv2rgb(1,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(1,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(1,3) * dataframe_yuv(:,:,3);
    dataframe_rgb(:,:,2) = S.yuv2rgb(2,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(2,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(2,3) * dataframe_yuv(:,:,3);
    dataframe_rgb(:,:,3) = S.yuv2rgb(3,1) * dataframe_yuv(:,:,1) + ...
        S.yuv2rgb(3,2) * dataframe_yuv(:,:,2) + ...
        S.yuv2rgb(3,3) * dataframe_yuv(:,:,3);
    for A = S.amplitude
        i_a = i_a + 1;
        i_q = 0;
        image1p = round(image + dataframe_rgb * A);
        image1n = round(image - dataframe_rgb * A);
        dataframe_vor = ((image1p-image1n)/2)/255;
        for Q = S.quality
            i_q = i_q + 1;
            namestrp = ['greyjpegs/Img_BS' num2str(BS) '_A' num2str(A) '_Q' num2str(Q) '_1p.jpg'];
            namestrn = ['greyjpegs/Img_BS' num2str(BS) '_A' num2str(A) '_Q' num2str(Q) '_1n.jpg'];
            imwrite(image1p/255, namestrp, 'jpg', 'Quality', Q);
            imwrite(image1n/255, namestrn, 'jpg', 'Quality', Q);
            error_mean(i_bs, i_a, i_q) = mean2((abs(diff_dataframe./dataframe_vor)));
        end
    end
end
mean2 is a convenience function from the Image Processing Toolbox that computes the average of an entire 2D region; it has no special handling for NaN. In that case, simply remove all values that are NaN and average what remains. Note that removing the NaNs unrolls the 2D region into a 1D vector, so we can just use mean here. As an additional check, to be sure there are no divide-by-zero artifacts, remove Inf values as well.
Therefore, replace this line:
error_mean(i_bs, i_a, i_q) = mean2((abs(diff_dataframe./dataframe_vor)));
... with:
tmp = abs(diff_dataframe ./ dataframe_vor);
mask = ~isnan(tmp) & ~isinf(tmp);   % keep only finite values
tmp = tmp(mask);
if isempty(tmp)
    error_mean(i_bs, i_a, i_q) = 0;
else
    error_mean(i_bs, i_a, i_q) = mean(tmp);
end
We first assign the desired operation to a temporary variable, use isnan and isinf to filter out the offending values, then take the mean of what is left. One subtlety is that if the entire region is NaN or Inf, removing all of those entries leaves an empty vector, and the mean of an empty vector is undefined. The separate isempty check ensures that, in that case, a value of 0 is assigned instead.
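As a side note (my addition, assuming MATLAB R2015a or later), mean also accepts an 'omitnan' flag that handles the NaN part directly; the Inf values and the all-invalid case still need explicit treatment:
tmp = abs(diff_dataframe ./ dataframe_vor);
tmp = tmp(~isinf(tmp));               % drop Inf values from divide-by-zero
m = mean(tmp(:), 'omitnan');          % ignore NaN when averaging
if isnan(m)                           % every entry was NaN or Inf
    m = 0;
end
error_mean(i_bs, i_a, i_q) = m;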

MATLAB code running slow on MacBookPro, triple while loop

I have been running a MATLAB program for almost six hours now, and it is still not complete. It cycles through three nested while loops (the outer two loops are n = 855, the inner loop is n = 500). Is it surprising that it is taking this long? Is there anything I can do to increase the speed? I am including the code below, as well as the variable data types underneath that.
while i < (numAtoms + 1)
    pointAccessible = ones(numPoints,1);
    j = 1;
    while j < (numAtoms + 1)
        if (i ~= j)
            k = 1;
            while k < (numPoints + 1)
                if (pointAccessible(k) == 1)
                    sphereCoord = [cell2mat(atomX(i)) + p + sphereX(k), cell2mat(atomY(i)) + p + sphereY(k), cell2mat(atomZ(i)) + p + sphereZ(k)];
                    neighborCoord = [cell2mat(atomX(j)), cell2mat(atomY(j)), cell2mat(atomZ(j))];
                    coords(1,:) = [sphereCoord];
                    coords(2,:) = [neighborCoord];
                    if (pdist(coords) < (atomRadius(j) + p))
                        pointAccessible(k) = 0;
                    end
                end
                k = k + 1;
            end
        end
        j = j + 1;
    end
    remainingPoints(i) = sum(pointAccessible);
    i = i + 1;
end
Variable Data Types:
numAtoms = 855
numPoints = 500
p = 1.4
atomRadius = <855 * 1 double>
pointAccessible = <500 * 1 double>
atomX, atomY, atomZ = <1 * 855 cell>
sphereX, sphereY, sphereZ = <500 * 1 double>
remainingPoints = <855 * 1 double>
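Not part of the original post, but here is a hedged sketch of one possible speed-up (assuming the computation is intended exactly as written above, and using pdist2 from the Statistics and Machine Learning Toolbox plus implicit expansion, so R2016b or later): convert the cell arrays to numeric vectors once, then vectorize the two inner loops so only the loop over atoms remains.
% Convert cell arrays of coordinates to plain numeric vectors once (1 x numAtoms)
ax = cell2mat(atomX); ay = cell2mat(atomY); az = cell2mat(atomZ);
remainingPoints = zeros(numAtoms, 1);
for i = 1:numAtoms
    % numPoints x 3 probe-point coordinates around atom i
    pts = [ax(i) + p + sphereX, ay(i) + p + sphereY, az(i) + p + sphereZ];
    others = [1:i-1, i+1:numAtoms];                   % every atom except i
    neighbors = [ax(others)', ay(others)', az(others)'];
    d = pdist2(pts, neighbors);                       % numPoints x (numAtoms-1) distances
    blocked = any(d < (atomRadius(others)' + p), 2);  % point blocked by any neighbor
    remainingPoints(i) = sum(~blocked);
end
The distance matrix here is only about 500 x 854 doubles per atom, so memory should not be an issue, and the repeated cell2mat calls inside the innermost loop (the likely bottleneck) disappear entirely.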

How to accelerate matlab code?

I'm using MATLAB to implement a multilayer neural network. In the code I represent
the value of each node as netValue{k}
the weight between layer k and k + 1 as weight{k}
etc.
Since these data are three-dimensional, I have to use cell arrays to hold the 2D matrices so that matrix multiplication still works.
This makes training the model really slow, which I expect results from the use of cell arrays.
Can anyone tell me how to accelerate this code? Thanks.
clc;
close all;
clear all;
input = [-2 : 0.4 : 2; -2 : 0.4 : 2];
ican = 4;
depth = 4; % total layers - 1, by convention
[featureNum, sampleNum] = size(input);
levelNum(1) = featureNum;
levelNum(2) = 5;
levelNum(3) = 5;
levelNum(4) = 5;
levelNum(5) = 2;
weight = cell(0);
for k = 1 : depth
    weight{k} = rand(levelNum(k+1), levelNum(k)) - 2 * rand(levelNum(k+1), levelNum(k));
    threshold{k} = rand(levelNum(k+1), 1) - 2 * rand(levelNum(k+1), 1);
end
runCount = 0;
sumMSE = 1; % init MSE
minError = 1e-5;
afa = 0.1; % step size for gradient descent
% training loop
while(runCount < 100000 & sumMSE > minError)
    sumMSE = 0; % sum of MSE
    for i = 1 : sampleNum % sample loop
        netValue{1} = input(:,i);
        for k = 2 : depth
            netValue{k} = weight{k-1} * netValue{k-1} + threshold{k-1}; % calculate each layer
            netValue{k} = 1 ./ (1 + exp(-netValue{k}));                 % apply logistic function
        end
        netValue{depth+1} = weight{depth} * netValue{depth} + threshold{depth}; % output layer
        e = 1 + sin((pi / 4) * ican * netValue{1}) - netValue{depth + 1};        % calc error
        assistS{depth} = diag(ones(size(netValue{depth+1})));
        s{depth} = -2 * assistS{depth} * e;
        for k = depth - 1 : -1 : 1
            assistS{k} = diag((1 - netValue{k+1}) .* netValue{k+1});
            s{k} = assistS{k} * weight{k+1}' * s{k+1};
        end
        for k = 1 : depth
            weight{k} = weight{k} - afa * s{k} * netValue{k}';
            threshold{k} = threshold{k} - afa * s{k};
        end
        sumMSE = sumMSE + e' * e;
    end
    sumMSE = sqrt(sumMSE) / sampleNum;
    runCount = runCount + 1;
end
x = [-2 : 0.1 : 2; -2 : 0.1 : 2];
y = zeros(size(x));
z = 1 + sin((pi / 4) * ican .* x);
% test
for i = 1 : length(x)
    netValue{1} = x(:,i);
    for k = 2 : depth
        netValue{k} = weight{k-1} * netValue{k-1} + threshold{k-1};
        netValue{k} = 1 ./ (1 + exp(-netValue{k}));
    end
    y(:, i) = weight{depth} * netValue{depth} + threshold{depth};
end
plot(x(1,:), y(1,:), 'r');
hold on;
plot(x(1,:), z(1,:), 'g');
hold off;
Have you used the profiler to find out what functions are actually slowing down your code? It shows what lines take the most time to execute.
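For example (a minimal sketch; train_mlp.m is just an assumed file name for the training script above):
profile on        % start collecting timing data
train_mlp         % run the training script under the profiler
profile viewer    % open the report showing time spent per function and per line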
