In the problem I'm working on there is a part of code as shown below. The definition part is just to show you the sizes of the arrays. Below it I pasted the vectorized version, and it is more than 2x slower. Why does this happen? I know it can happen when vectorization requires large temporary variables, but (it seems) that is not the case here.
And generally, what else (other than parfor, which I already use) can I do to speed up this code?
maxN = 100;
levels = maxN+1;
xElements = 101;
umn = complex(zeros(levels, levels));
umn2 = umn;
bessels = ones(xElements, xElements, levels); % 1.09 GB
posMcontainer = ones(xElements, xElements, maxN);
tic
for j = 1 : xElements
for i = 1 : xElements
for n = 1 : 2 : maxN
nn = n + 1;
mm = 1;
for m = 1 : 2 : n
umn(nn, mm) = bessels(i, j, nn) * posMcontainer(i, j, m);
mm = mm + 1;
end
end
end
end
toc % 0.520594 seconds
tic
for j = 1 : xElements
for i = 1 : xElements
for n = 1 : 2 : maxN
nn = n + 1;
m = 1:2:n;
numOfEl = ceil(n/2);
umn2(nn, 1:numOfEl) = bessels(i, j, nn) * posMcontainer(i, j, m);
end
end
end
toc % 1.275926 seconds
sum(sum(umn-umn2)) % verifying that everything was done right
Best regards,
Alex
From the profiler:
Edit:
In reply to @Jason's answer, this alternative takes the same time:
for n = 1:2:maxN
nn(n) = n + 1;
numOfEl(n) = ceil(n/2);
end
for j = 1 : xElements
for i = 1 : xElements
for n = 1 : 2 : maxN
umn2(nn(n), 1:numOfEl(n)) = bessels(i, j, nn(n)) * posMcontainer(i, j, 1:2:n);
end
end
end
Edit2:
In reply to @EBH:
The point is to do the following:
parfor i = 1 : xElements
for j = 1 : xElements
umn = complex(zeros(levels, levels)); % cleaning
for n = 0:maxN
mm = 1;
for m = -n:2:n
nn = n + 1; % for indexing
if m < 0
umn(nn, mm) = bessels(i, j, nn) * negMcontainer(i, j, abs(m));
end
if m > 0
umn(nn, mm) = bessels(i, j, nn) * posMcontainer(i, j, m);
end
if m == 0
umn(nn, mm) = bessels(i, j, nn);
end
mm = mm + 1; % for indexing
end % m
end % n
beta1 = sum(sum(Aj1.*umn));
betaSumSq1(i, j) = abs(beta1).^2;
beta2 = sum(sum(Aj2.*umn));
betaSumSq2(i, j) = abs(beta2).^2;
end % j
end % i
I sped it up as much as I was able to. What you have written takes only the last bessels and posMcontainer values, so it does not produce the same result. In the real code, those two containers are filled not with ones but with precalculated values.
After your edit, I can see that umn is just a temporary variable for another calculation. It can still be mostly vectorized:
betaSumSq1 = zeros(xElements); % preallocating
betaSumSq2 = zeros(xElements); % preallocating
% an index matrix to fetch the right values from negMcontainer and
% posMcontainer:
indmat = tril(repmat([0 1;1 0],ceil((maxN+1)/2),floor(levels/2)));
indmat(end,:) = [];
% an index matrix to fetch the values in correct order for umn:
b_ind = repmat([1;0],ceil((maxN+1)/2),1);
b_ind(end) = [];
tempind = logical([fliplr(indmat) b_ind indmat+triu(ones(size(indmat)))]);
% permute the arrays to prevent squeeze:
PM = permute(posMcontainer,[3 1 2]);
NM = permute(negMcontainer,[3 1 2]);
B = permute(bessels,[3 1 2]);
for k = 1 : maxN+1 % third dim
for jj = 1 : xElements % columns
b = B(:,jj,k); % get one vector of B
% perform b*NM for every row of NM*indmat, than flip the result:
neg = fliplr(bsxfun(@times,bsxfun(@times,indmat,NM(:,jj,k).'),b));
% perform b*PM for every row of PM*indmat:
pos = bsxfun(@times,bsxfun(@times,indmat,PM(:,jj,k).'),b);
temp = [neg mod(1:levels,2).'.*b pos].'; % concat neg and pos
% assign them to the right place in umn:
umn = reshape(temp(tempind.'),[levels levels]).';
beta1 = Aj1.*umn;
betaSumSq1(jj,k) = abs(sum(beta1(:))).^2;
beta2 = Aj2.*umn;
betaSumSq2(jj,k) = abs(sum(beta2(:))).^2;
end
end
This reduces the running time from ~95 seconds to under 3 seconds (both without parfor), an improvement of almost 97%.
I would suspect it is memory allocation. You are re-allocating the m array inside a 3-deep loop.
Try rearranging the code:
tic
for n = 1 : 2 : maxN
nn = n + 1;
m = 1:2:n;
numOfEl = ceil(n/2);
for j = 1 : xElements
for i = 1 : xElements
umn2(nn, 1:numOfEl) = bessels(i, j, nn) * posMcontainer(i, j, m);
end
end
end
toc % 1.275926 seconds
I was trying this in Igor Pro, which is a similar language but with different optimizations, so the direct translations don't time the same way as MATLAB (the vectorized form was slightly faster in Igor). But reordering the loops did speed up the vectorized form.
In the second part of your code, the one that sets umn2, you have the following inside the loops:
nn = n + 1;
m = 1:2:n;
numOfEl = ceil(n/2);
Those 3 lines don't require any input from the i and j loops; they only use the n loop. So reordering the loops so that i and j are inside the n loop means those 3 lines are executed xElements^2 (roughly 10^4) times less often. I suspect it is the m = 1:2:n line that takes the time, since it allocates an array.
Related
I'm doing this exercise by Andrew Ng about using k-means to reduce the number of colors in an image. It works correctly, but I'm afraid it's a little slow because of all the for loops in the code, so I'd like to vectorize them. But there are some loops that I just can't seem to vectorize effectively. Please help me, thank you very much!
Also if possible please give some feedback on my coding style :)
Here is the link to the exercise, and here is the dataset.
The correct result is given at the link to the exercise.
And here is my code:
function [] = KMeans()
Image = double(imread('bird_small.tiff'));
[rows,cols, RGB] = size(Image);
Points = reshape(Image,rows * cols, RGB);
K = 16;
Centroids = zeros(K,RGB);
s = RandStream('mt19937ar','Seed',0);
% Initialization :
% Pick out K random colours and make sure they are all different
% from each other! This prevents the situation where two of the means
% are assigned to the exact same colour, therefore we don't have to
% worry about division by zero in the E-step
% However, if K = 16 for example, and there are only 15 colours in the
% image, then this while loop will never exit!!! This needs to be
% addressed in the future :(
% TODO : Vectorize this part!
done = false;
while done == false
RowIndex = randperm(s,rows);
ColIndex = randperm(s,cols);
RowIndex = RowIndex(1:K);
ColIndex = ColIndex(1:K);
for i = 1 : K
for j = 1 : RGB
Centroids(i,j) = Image(RowIndex(i),ColIndex(i),j);
end
end
Centroids = sort(Centroids,2);
Centroids = unique(Centroids,'rows');
if size(Centroids,1) == K
done = true;
end
end;
% imshow(imread('bird_small.tiff'))
%
% for i = 1 : K
% hold on;
% plot(RowIndex(i),ColIndex(i),'r+','MarkerSize',50)
% end
eps = 0.01; % Epsilon
IterNum = 0;
while 1
% E-step: Estimate membership given parameters
% Membership: The centroid that each colour is assigned to
% Parameters: Location of centroids
Dist = pdist2(Points,Centroids,'euclidean');
[~, WhichCentroid] = min(Dist,[],2);
% M-step: Estimate parameters given membership
% Membership: The centroid that each colour is assigned to
% Parameters: Location of centroids
% TODO: Vectorize this part!
OldCentroids = Centroids;
for i = 1 : K
PointsInCentroid = Points((find(WhichCentroid == i))',:);
NumOfPoints = size(PointsInCentroid,1);
% Note that NumOfPoints is never equal to 0, as a result of
% the initialization. Or .... ???????
if NumOfPoints ~= 0
Centroids(i,:) = sum(PointsInCentroid , 1) / NumOfPoints ;
end
end
% Check for convergence: Here we use the L2 distance
IterNum = IterNum + 1;
Margins = sqrt(sum((Centroids - OldCentroids).^2, 2));
if sum(Margins > eps) == 0
break;
end
end
IterNum;
Centroids ;
% Load the larger image
[LargerImage,ColorMap] = imread('bird_large.tiff');
LargerImage = double(LargerImage);
[largeRows,largeCols,NewRGB] = size(LargerImage); % RGB is always 3
% TODO: Vectorize this part!
largeRows
largeCols
NewRGB
% Replace each of the pixel with the nearest centroid
NewPoints = reshape(LargerImage,largeRows * largeCols, NewRGB);
Dist = pdist2(NewPoints,Centroids,'euclidean');
[~,WhichCentroid] = min(Dist,[],2);
NewPoints = Centroids(WhichCentroid,:);
LargerImage = reshape(NewPoints,largeRows,largeCols,NewRGB);
% for i = 1 : largeRows
% for j = 1 : largeCols
% Dist = pdist2(Centroids,reshape(LargerImage(i,j,:),1,RGB),'euclidean');
% [~,WhichCentroid] = min(Dist);
% LargerImage(i,j,:) = Centroids(WhichCentroid,:);
% end
% end
% Display new image
imshow(uint8(round(LargerImage)),ColorMap)
UPDATE: Replaced
for i = 1 : K
for j = 1 : RGB
Centroids(i,j) = Image(RowIndex(i),ColIndex(i),j);
end
end
with
for i = 1 : K
Centroids(i,:) = Image(RowIndex(i),ColIndex(i),:);
end
I think this may be vectorized further by using linear indexing, but for now I should just focus on the while loop since it takes most of the time.
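For example, a linear-indexing version might look like the following sketch (untested; it assumes Points = reshape(Image, rows * cols, RGB) from above, and that RowIndex and ColIndex have already been truncated to length K):
% sub2ind maps each (row, col) pair to the matching row of Points,
% because Points was built with reshape(Image, rows * cols, RGB):
linIdx = sub2ind([rows, cols], RowIndex, ColIndex);
Centroids = Points(linIdx, :);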
Also when I tried @Dev-iL's suggestion and replaced
for i = 1 : K
PointsInCentroid = Points((find(WhichCentroid == i))',:);
NumOfPoints = size(PointsInCentroid,1);
% Note that NumOfPoints is never equal to 0, as a result of
% the initialization. Or .... ???????
if NumOfPoints ~= 0
Centroids(i,:) = sum(PointsInCentroid , 1) / NumOfPoints ;
end
end
with
E = sparse(1:size(WhichCentroid), WhichCentroid' , 1, Num, K, Num);
Centroids = (E * spdiags(1./sum(E,1)',0,K,K))' * Points ;
the results were always worse: with K = 16, the first takes 2.414 s and the second takes 2.455 s; with K = 32, the first takes 4.529 s and the second takes 5.022 s. It seems like vectorization does not help, but maybe there's something wrong with my code :(
However, when I replaced
Dist = pdist2(Points,Centroids,'euclidean');
[~, WhichCentroid] = min(Dist,[],2);
(in the while loop) with
Dist = bsxfun(@minus,dot(Centroids',Centroids',1)' / 2 , Centroids * Points' );
[~, WhichCentroid] = min(Dist,[],1);
WhichCentroid = WhichCentroid';
the code ran much faster, especially when K is large (K=32)
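If it helps to see why this is equivalent: for the argmin over centroids, the ||x||^2 term of the squared Euclidean distance is the same for every centroid, so minimizing ||c||^2/2 - c'*x picks the same centroid as minimizing the full distance. A quick sanity check using the variables above (a sketch, not part of the timing runs):
DistFull = pdist2(Points, Centroids, 'euclidean');
[~, w1] = min(DistFull, [], 2);
DistHalf = bsxfun(@minus, dot(Centroids',Centroids',1)'/2, Centroids * Points');
[~, w2] = min(DistHalf, [], 1);
isequal(w1, w2') % should return 1, barring exact ties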
Thank you everyone!
Here is a MATLAB program for the backpropagation algorithm:
% XOR input for x1 and x2
input = [0 0; 0 1; 1 0; 1 1];
% Desired output of XOR
output = [0;1;1;0];
% Initialize the bias
bias = [-1 -1 -1];
% Learning coefficient
coeff = 0.7;
% Number of learning iterations
iterations = 10000;
% Calculate weights randomly using seed.
rand('state',sum(100.*clock));
weights = -1 +2.*rand(3,3);
for i = 1:iterations
out = zeros(4,1);
numIn = length (input(:,1));
for j = 1:numIn
% Hidden layer
H1 = bias(1,1).*weights(1,1) + input(j,1).*weights(1,2)+ input(j,2).*weights(1,3);
% Send data through sigmoid function 1/1+e^-x
% Note that sigma is a different m file
% that I created to run this operation
x2(1) = sigma(H1);
H2 = bias(1,2).*weights(2,1)+ input(j,1).*weights(2,2)+ input(j,2).*weights(2,3);
x2(2) = sigma(H2);
% Output layer
x3_1 = bias(1,3).*weights(3,1)+ x2(1).*weights(3,2)+ x2(2).*weights(3,3);
out(j) = sigma(x3_1);
% Adjust delta values of weights
% For output layer:
% delta(wi) = xi*delta,
% delta = (1-actual output)*(desired output - actual output)
delta3_1 = out(j).*(1-out(j)).*(output(j)-out(j));
% Propagate the delta backwards into hidden layers
delta2_1 = x2(1).*(1-x2(1)).*weights(3,2).*delta3_1;
delta2_2 = x2(2).*(1-x2(2)).*weights(3,3).*delta3_1;
% Add weight changes to original weights
% And use the new weights to repeat process.
% delta weight = coeff*x*delta
for k = 1:3
if k == 1 % Bias cases
weights(1,k) = weights(1,k) + coeff.*bias(1,1).*delta2_1;
weights(2,k) = weights(2,k) + coeff.*bias(1,2).*delta2_2;
weights(3,k) = weights(3,k) + coeff.*bias(1,3).*delta3_1;
else % When k=2 or 3 input cases to neurons
weights(1,k) = weights(1,k) + coeff.*input(j,1).*delta2_1;
weights(2,k) = weights(2,k) + coeff.*input(j,2).*delta2_2;
weights(3,k) = weights(3,k) + coeff.*x2(k-1).*delta3_1;
end
end
end
end
But it's showing an error like this:
??? Index exceeds matrix dimensions.
Error in ==> sigma at 95
a=varargin{1}; b=varargin{2}; c=varargin{3}; d=varargin{4};
Error in ==> back at 25
x2(1) = sigma(H1);
Please help me out; I am not able to understand the problem. Why is there an error saying the index exceeds matrix dimensions?
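(The trace shows that sigma.m reads varargin{1} through varargin{4} on its line 95, so it expects four inputs, while the call sigma(H1) passes only one; accessing varargin{2} then fails with "Index exceeds matrix dimensions.") For comparison, a minimal one-argument sigma matching the 1/(1+e^-x) comment in the code might look like the sketch below; this is an assumption about the intended function, not the original sigma.m:
function y = sigma(x)
% Logistic (sigmoid) activation, applied elementwise: 1 ./ (1 + exp(-x))
y = 1 ./ (1 + exp(-x));
end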
I am using MATLAB to find all of the possible combinations of k elements out of n possible elements. I stumbled across this question, but unfortunately it does not solve my problem. Of course, neither does nchoosek as my n is around 100.
Truth is, I don't need all of the possible combinations at the same time. I will explain what I need, as there might be an easier way to achieve the desired result. I have a matrix M of 100 rows and 25 columns.
Think of a submatrix of M as a matrix formed by ALL columns of M and only a subset of the rows. I have a function f that can be applied to any matrix which gives a result of either -1 or 1. For example, you can think of the function as sign(det(A)) where A is any matrix (the exact function is irrelevant for this part of the question).
I want to know the biggest number of rows of M for which the submatrix A formed by these rows satisfies f(A) = 1. Notice that if f(M) = 1, I am done. However, if this is not the case, then I need to start combining rows: first all combinations of 99 rows, then all combinations of 98 rows, and so on.
Up to this point, my implementation relied on nchoosek, which worked when M had only a few rows. However, now that I am working with a relatively bigger dataset, things get stuck. Can any of you think of a way to implement this without having to use that function? Any help would be gladly appreciated.
Here is my minimal working example; it works for small obs_tot but fails when I try to use bigger numbers:
value = -1; obs_tot = 100; n_rows = 25;
mat = randi(obs_tot,n_rows);
i = obs_tot; % number of rows to combine, starting from all of them
while value == -1
possibles = nchoosek(1:obs_tot,i);
[num_tries,num_obs] = size(possibles);
num_try = 1;
while value == -1 && num_try <= num_tries
check = mat(possibles(num_try,:),:);
value = sign(det(check));
num_try = num_try + 1;
end
i = i - 1;
end
obs_used = possibles(num_try-1,:)';
Preamble
As you noticed yourself in your question, it would be nice not to have nchoosek return all possible combinations at the same time, but rather to enumerate them one by one, so that memory does not blow up when n becomes large. Something like:
enumerator = CombinaisonEnumerator(k, n);
while(enumerator.MoveNext())
currentCombination = enumerator.Current;
...
end
Here is an implementation of such an enumerator as a MATLAB class. It is based on the classic IEnumerator<T> interface in C# / .NET and mimics the subfunction combs in nchoosek (the unrolled way):
%
% PURPOSE:
%
% Enumerates all combinations of length 'k' in a set of length 'n'.
%
% USAGE:
%
% enumerator = CombinaisonEnumerator(k, n);
% while(enumerator.MoveNext())
% currentCombination = enumerator.Current;
% ...
% end
%
%% ---
classdef CombinaisonEnumerator < handle
properties (Dependent) % NB: Matlab R2013b bug => Dependent must be declared before their get/set !
Current; % Gets the current element.
end
methods
function [enumerator] = CombinaisonEnumerator(k, n)
% Creates a new combinations enumerator.
if (~isscalar(n) || (n < 1) || (~isreal(n)) || (n ~= round(n))), error('`n` must be a scalar positive integer.'); end
if (~isscalar(k) || (k < 0) || (~isreal(k)) || (k ~= round(k))), error('`k` must be a scalar non-negative integer.'); end
if (k > n), error('`k` must be less than or equal to `n`.'); end
enumerator.k = k;
enumerator.n = n;
enumerator.v = 1:n;
enumerator.Reset();
end
function [b] = MoveNext(enumerator)
% Advances the enumerator to the next element of the collection.
if (~enumerator.isOkNext),
b = false; return;
end
if (enumerator.isInVoid)
if (enumerator.k == enumerator.n),
enumerator.isInVoid = false;
enumerator.current = enumerator.v;
elseif (enumerator.k == 1)
enumerator.isInVoid = false;
enumerator.index = 1;
enumerator.current = enumerator.v(enumerator.index);
else
enumerator.isInVoid = false;
enumerator.index = 1;
enumerator.recursion = CombinaisonEnumerator(enumerator.k - 1, enumerator.n - enumerator.index);
enumerator.recursion.v = enumerator.v((enumerator.index + 1):end); % adapt v (todo: should use private constructor)
enumerator.recursion.MoveNext();
enumerator.current = [enumerator.v(enumerator.index) enumerator.recursion.Current];
end
else
if (enumerator.k == enumerator.n),
enumerator.isInVoid = true;
enumerator.isOkNext = false;
elseif (enumerator.k == 1)
enumerator.index = enumerator.index + 1;
if (enumerator.index <= enumerator.n)
enumerator.current = enumerator.v(enumerator.index);
else
enumerator.isInVoid = true;
enumerator.isOkNext = false;
end
else
if (enumerator.recursion.MoveNext())
enumerator.current = [enumerator.v(enumerator.index) enumerator.recursion.Current];
else
enumerator.index = enumerator.index + 1;
if (enumerator.index <= (enumerator.n - enumerator.k + 1))
enumerator.recursion = CombinaisonEnumerator(enumerator.k - 1, enumerator.n - enumerator.index);
enumerator.recursion.v = enumerator.v((enumerator.index + 1):end); % adapt v (todo: should use private constructor)
enumerator.recursion.MoveNext();
enumerator.current = [enumerator.v(enumerator.index) enumerator.recursion.Current];
else
enumerator.isInVoid = true;
enumerator.isOkNext = false;
end
end
end
end
b = enumerator.isOkNext;
end
function [] = Reset(enumerator)
% Sets the enumerator to its initial position, which is before the first element.
enumerator.isInVoid = true;
enumerator.isOkNext = (enumerator.k > 0);
end
function [c] = get.Current(enumerator)
if (enumerator.isInVoid), error('Enumerator is positioned (before/after) the (first/last) element.'); end
c = enumerator.current;
end
end
properties (GetAccess=private, SetAccess=private)
k = [];
n = [];
v = [];
index = [];
recursion = [];
current = [];
isOkNext = false;
isInVoid = true;
end
end
We can test that the implementation is OK from the command window like this:
>> e = CombinaisonEnumerator(3, 6);
>> while(e.MoveNext()), fprintf(1, '%s\n', num2str(e.Current)); end
Which returns, as expected, the following n!/(k!*(n-k)!) = 20 combinations:
1 2 3
1 2 4
1 2 5
1 2 6
1 3 4
1 3 5
1 3 6
1 4 5
1 4 6
1 5 6
2 3 4
2 3 5
2 3 6
2 4 5
2 4 6
2 5 6
3 4 5
3 4 6
3 5 6
4 5 6
The implementation of this enumerator could be further optimized for speed, or changed to enumerate combinations in an order more appropriate for your case (e.g., testing some combinations before others)... Well, at least it works! :)
Problem solving
Now solving your problem is really easy:
n = 100;
m = 25;
matrix = rand(n, m);
k = n;
cont = true;
while(cont && (k >= 1))
e = CombinaisonEnumerator(k, n);
while(cont && e.MoveNext());
cont = f(matrix(e.Current(:), :)) ~= 1;
end
if (cont), k = k - 1; end
end
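If a matching subset is found (i.e., cont ends up false), its row indices can be read back from the enumerator right after the loop, since Current still holds the last combination returned by MoveNext. A small usage sketch (the variable name foundRows is just an example):
if (~cont)
foundRows = e.Current; % row indices of the largest submatrix with f(A) == 1
end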
Suppose that I have an N-by-K matrix A and an N-by-P matrix B. I want to do the following calculation to get my final N-by-P matrix X:
X(n,p) = B(n,p) - dot(gamma(p,:),A(n,:))
where
gamma(p,k) = dot(A(:,k),B(:,p))/sum( A(:,k).^2 )
In MATLAB, my code looks like this:
for p = 1:P
for n = 1:N
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
which is highly inefficient since it uses three for loops! Is there a good way to speed up this code?
Use bsxfun for the division and matrix multiplication for the loops:
gamma = bsxfun(@rdivide, B.'*A, sum(A.^2));
x = B - A*gamma.';
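On R2016b and later, implicit expansion gives the same result without bsxfun; an equivalent form, assuming such a release, is:
gamma = (B.'*A) ./ sum(A.^2); % sum(A.^2) is 1-by-K and expands across the P rows
x = B - A*gamma.';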
And here is a test script
N = 3;
K = 4;
P = 5;
A = rand(N, K);
B = rand(N, P);
for p = 1:P
for n = 1:N
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
gamma2 = bsxfun(@rdivide, B.'*A, sum(A.^2));
X2 = B - A*gamma2.';
isequal(x, X2)
isequal(gamma, gamma2)
which returns
ans =
1
ans =
1
It looks to me like you can hoist the gamma calculations out of the loops; at least, I don't see any dependence on the loop variable n in the gamma calculation.
So something like this:
for p = 1:P
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
end
for p = 1:P
for n = 1:N
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
I'm not familiar enough with your code (or MATLAB) to really know whether you can merge the two loops, but if you can:
for p = 1:P
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
for n = 1:N
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
bsxfun is slow...
How about something like the following (I might have a transpose wrong)
modA = A ./ (ones(size(A,1),1) * sum(A.^2,1)); % divide each column of A by its sum of squares
gamma = B' * modA;
x = B - A * gamma';
I have been running a MATLAB program for almost six hours now, and it is still not complete. It cycles through three nested while loops (the outer two loops are n = 855, the inner loop is n = 500). Is it a surprise that it is taking this long? Is there anything I can do to increase the speed? I am including the code below, as well as the variable data types underneath it.
while i < (numAtoms + 1)
pointAccessible = ones(numPoints,1);
j = 1;
while j <(numAtoms + 1)
if (i ~= j)
k=1;
while k < (numPoints + 1)
if (pointAccessible(k) == 1)
sphereCoord = [cell2mat(atomX(i)) + p + sphereX(k), cell2mat(atomY(i)) + p + sphereY(k), cell2mat(atomZ(i)) + p + sphereZ(k)];
neighborCoord = [cell2mat(atomX(j)), cell2mat(atomY(j)), cell2mat(atomZ(j))];
coords(1,:) = [sphereCoord];
coords(2,:) = [neighborCoord];
if (pdist(coords) < (atomRadius(j) + p))
pointAccessible(k)=0;
end
end
k = k + 1;
end
end
j = j+1;
end
remainingPoints(i) = sum(pointAccessible);
i = i +1;
end
Variable Data Types:
numAtoms = 855
numPoints = 500
p = 1.4
atomRadius = <855 * 1 double>
pointAccessible = <500 * 1 double>
atomX, atomY, atomZ = <1 * 855 cell>
sphereX, sphereY, sphereZ = <500 * 1 double>
remainingPoints = <855 * 1 double>
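With these sizes the innermost loop body executes on the order of 855 * 854 * 500 (roughly 3.7e8) times, each time potentially calling pdist on a 2-by-3 matrix, so run times of hours are not surprising. A partly vectorized sketch is below (untested; it assumes the variables listed above and the Statistics Toolbox for pdist2): convert the cell arrays to plain numeric vectors once, then handle all points and all neighboring atoms of a given atom with one distance matrix.
atomXYZ = [cell2mat(atomX(:)), cell2mat(atomY(:)), cell2mat(atomZ(:))]; % numAtoms-by-3
spherePts = [sphereX, sphereY, sphereZ]; % numPoints-by-3
remainingPoints = zeros(numAtoms, 1);
for i = 1:numAtoms
% test points around atom i, numPoints-by-3
testPts = spherePts + p + repmat(atomXYZ(i,:), numPoints, 1);
others = [1:i-1, i+1:numAtoms];
% distances from every test point to every other atom, numPoints-by-(numAtoms-1)
D = pdist2(testPts, atomXYZ(others,:));
% a point is blocked if any other atom is closer than atomRadius(j) + p
blocked = any(bsxfun(@lt, D, reshape(atomRadius(others) + p, 1, [])), 2);
remainingPoints(i) = sum(~blocked);
end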