Four nested for loops optimization - I promise I searched - performance

I've tried to find a good way to speed up the code for a problem I've been working on. The basic idea of the code is very simple. There are five inputs:
Four 1xm row vectors (A, B, C, D), with m < n and possibly different lengths, that are pairwise-disjoint subsets of {1,2,...,n}, and one nxn symmetric binary matrix (M). The basic idea of the code is to check an inequality for every combination of elements and, if the inequality holds, return the values that cause it to hold, i.e.:
for a = A
    for b = B
        for c = C
            for d = D
                if M(a,c) + M(b,d) < M(a,d) + M(b,c)
                    result = [a b c d];
                    return
                end
            end
        end
    end
end
I know there has to be a better way to do this. First, since M is symmetric, I can cut the number of checks in half because M(a,b) = M(b,a). I've been researching vectorization and found several MATLAB functions I'd never heard of (I'm relatively new), but I can't find anything that particularly helps with this specific problem. I've thought of other ways to approach the problem, but nothing has worked out, and I just don't know what to do at this point.
For example, I could possibly split this into two cases:
1) The right hand side is 1: then I have to check that both terms on the left side are 0.
2) The right hand side is 2: then I have to check that at least one term on the left hand side is 0.
But, again, I won't be able to avoid nesting.
I appreciate all the help you can offer. Thank you!

You're asking two questions here: (1) is there a more efficient algorithm to perform this search, and (2) how can I vectorize this in MATLAB. The first one is very interesting to think about, but may be a little beyond the scope of this forum. The second one is easier to answer.
As pointed out in the comments below your question, you can vectorize the for loop by enumerating all of the possibilities and checking them all together, and the answers from this question can help:
[a,b,c,d] = ndgrid(A,B,C,D); % Enumerate all combos
a=a(:); b=b(:); c=c(:); d=d(:); % Reshape from 4-D matrices to vectors
ac = sub2ind(size(M),a,c); % Convert subscript pairs to linear indices
bd = sub2ind(size(M),b,d);
ad = sub2ind(size(M),a,d);
bc = sub2ind(size(M),b,c);
mask = (M(ac) + M(bd) < M(ad) + M(bc)); % Test the inequality
results = [a(mask), b(mask), c(mask), d(mask)]; % Select the ones that pass
Again, this isn't an algorithmic change: it still has the same complexity as your nested for loop. The vectorization may cause it to run faster, but it also lacks early termination, so in certain cases it may be slower.
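For concreteness, here is a toy run of the snippet above; the tiny inputs are made up purely for illustration:
n = 6;
A = [1 2]; B = 3; C = [4 5]; D = 6;             % pairwise-disjoint subsets of 1..n
M = zeros(n); M(1,6) = 1; M(3,4) = 1;           % make M(a,d) and M(b,c) nonzero
M = M | M';                                     % symmetrize
[a,b,c,d] = ndgrid(A,B,C,D);
a = a(:); b = b(:); c = c(:); d = d(:);
mask = M(sub2ind(size(M),a,c)) + M(sub2ind(size(M),b,d)) < ...
       M(sub2ind(size(M),a,d)) + M(sub2ind(size(M),b,c));
results = [a(mask), b(mask), c(mask), d(mask)]  % contains the row [1 3 4 6]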

Since M is binary, we can think about this as a graph problem. i,j in {1..n} correspond to nodes, and M(i,j) indicates whether there is an undirected edge connecting them.
Since A,B,C,D are disjoint, that simplifies the problem a bit. We can approach the problem in stages:
Find all (c,d) for which there exists a such that M(a,c) < M(a,d). Let's call this set CD_lt_a (the subset of C×D for which the "less than" inequality holds for some a).
Find all (c,d) for which there exists a such that M(a,c) <= M(a,d), and call this set CD_le_a.
Repeat for b, forming CD_lt_b for M(b,d) < M(b,c) and CD_le_b for M(b,d)<=M(b,c).
One way to satisfy the overall inequality is for M(a,c) < M(a,d) and M(b,d) <= M(b,c), so we can look at the intersection of CD_lt_a and CD_le_b.
The other way is if M(a,c) <= M(a,d) and M(b,d) < M(b,c), so look at the intersection of CD_le_a and CD_lt_b.
With (c,d) known, we can go back and find the (a,b).
And so my implementation is:
% 0. Some preliminaries
% Get the size of each set
mA = numel(A); mB = numel(B); mC = numel(C); mD = numel(D);
% 1. Find all (c,d) for which there exists a such that M(a,c) < M(a,d)
CA_linked = M(C,A);
AD_linked = M(A,D);
CA_not_linked = ~CA_linked;
% Multiplying these matrices tells us, for each (c,d), how many nodes
% in A satisfy this M(a,c)<M(a,d) inequality
% Ugh, we need to cast to double to use the matrix multiplication
CD_lt_a = (CA_not_linked * double(AD_linked)) > 0;
% 2. For M(a,c) <= M(a,d), check that the converse is false for some a
AD_not_linked = ~AD_linked;
CD_le_a = (CA_linked * double(AD_not_linked)) < mA;
% 3. Repeat for b
CB_linked = M(C,B);
BD_linked = M(B,D);
CD_lt_b = (CB_linked * double(~BD_linked)) > 0;
CD_le_b = (~CB_linked * double(BD_linked)) < mB;
% 4. Find the intersection of CD_lt_a and CD_le_b - this is one way
% to satisfy the inequality M(a,c)+M(b,d) < M(a,d)+M(b,c)
CD_satisfy_ineq_1 = CD_lt_a & CD_le_b;
% 5. The other way to satisfy the inequality is CD_le_a & CD_lt_b
CD_satisfy_ineq_2 = CD_le_a & CD_lt_b;
inequality_feasible = any(CD_satisfy_ineq_1(:) | CD_satisfy_ineq_2(:));
Note that you can stop here if feasibility is your only concern. The complexity is O(|A||C||D| + |B||C||D|), which is better than the worst-case O(|A||B||C||D|) complexity of the for loops. However, early termination means your nested for loops may still be faster in certain cases.
The next block of code enumerates all the a,b,c,d that satisfy the inequality. It's not very well optimized (it appends to a matrix from within a loop), so it can be pretty slow if there are many results.
% 6. With (c,d) known, find a and b
% We can define these functions to help us search
find_a_lt = @(c,d) find(CA_not_linked(c,:)' & AD_linked(:,d));
find_a_le = @(c,d) find(CA_not_linked(c,:)' | AD_linked(:,d));
find_b_lt = @(c,d) find(CB_linked(c,:)' & ~BD_linked(:,d));
find_b_le = @(c,d) find(CB_linked(c,:)' | ~BD_linked(:,d));
% I'm gonna assume there aren't too many results, so I will be appending
% to an array inside of a for loop. Bad for performance, but maybe a bit
% more readable for a StackOverflow answer.
results = zeros(0,4);
% Find those that satisfy it the first way
[c_list,d_list] = find(CD_satisfy_ineq_1);
for ii = 1:numel(c_list)
    c = c_list(ii); d = d_list(ii);
    a = find_a_lt(c,d);
    b = find_b_le(c,d);
    % a,b might be vectors, in which case all combos are valid
    % Many ways to find all combos, gonna use ndgrid()
    [a,b] = ndgrid(a,b);
    % Append these to the growing list of results
    abcd = [a(:), b(:), repmat([c d],[numel(a),1])];
    results = [results; abcd];
end
% Repeat for the second way
[c_list,d_list] = find(CD_satisfy_ineq_2);
for ii = 1:numel(c_list)
    c = c_list(ii); d = d_list(ii);
    a = find_a_le(c,d);
    b = find_b_lt(c,d);
    % a,b might be vectors, in which case all combos are valid
    % Many ways to find all combos, gonna use ndgrid()
    [a,b] = ndgrid(a,b);
    % Append these to the growing list of results
    abcd = [a(:), b(:), repmat([c d],[numel(a),1])];
    results = [results; abcd];
end
% Remove duplicates
results = unique(results, 'rows');
% And actually these a,b,c,d will be indices into A,B,C,D because they
% were obtained from calling find() on submatrices of M.
if ~isempty(results)
    results(:,1) = A(results(:,1));
    results(:,2) = B(results(:,2));
    results(:,3) = C(results(:,3));
    results(:,4) = D(results(:,4));
end
I tested this on the following test case:
m = 1000;
A = (1:m); B = A(end)+(1:m); C = B(end)+(1:m); D = C(end)+(1:m);
M = rand(D(end),D(end)) < 1e-6; M = M | M';
I like to think that the first part (checking whether the inequality is feasible for any a,b,c,d) worked pretty well. The other vectorized answers (those that use ndgrid or combvec to enumerate all combinations of a,b,c,d) would require 8 terabytes of memory for a problem of this size!
But I would not recommend running the second part (enumerating all of the results) when there are more than a few hundred c,d that satisfy the inequality, because it will be pretty damn slow.
P.S. I know I answered already, but that answer was about vectorizing such loops in general, and is less specific to your particular problem.
P.P.S. This kinda reminds me of the stable marriage problem. Perhaps some of those references would contain algorithms relevant to your problem as well. I suspect that a true graph-based algorithm could achieve the same worst-case complexity as this one while additionally offering early termination. But I think it would be difficult to implement a graph-based algorithm efficiently in MATLAB.
P.P.P.S. If you only want one of the feasible solutions, you can simplify step 6 to only return a single value, e.g.
find_a_lt = @(c,d) find(CA_not_linked(c,:)' & AD_linked(:,d), 1, 'first');
find_a_le = @(c,d) find(CA_not_linked(c,:)' | AD_linked(:,d), 1, 'first');
find_b_lt = @(c,d) find(CB_linked(c,:)' & ~BD_linked(:,d), 1, 'first');
find_b_le = @(c,d) find(CB_linked(c,:)' | ~BD_linked(:,d), 1, 'first');
if any(CD_satisfy_ineq_1(:))
    [c,d] = find(CD_satisfy_ineq_1, 1, 'first');
    a = find_a_lt(c,d);
    b = find_b_le(c,d);
    result = [A(a), B(b), C(c), D(d)];
elseif any(CD_satisfy_ineq_2(:))
    [c,d] = find(CD_satisfy_ineq_2, 1, 'first');
    a = find_a_le(c,d);
    b = find_b_lt(c,d);
    result = [A(a), B(b), C(c), D(d)];
else
    result = zeros(0,4);
end

If you have access to the Neural Network Toolbox, combvec could be helpful here.
Running allCombs = combvec(A,B,C,D) will give you a 4-by-(m1*m2*m3*m4) matrix that looks like:
[...
a1, a2, a3, a4, a1 ... am1... a1... a1;
b1, b1, b1, b1, b2 ... b1 ... b1... b1;
c1, c1, c1, c1, c1 ... c1 ... c2... c1;
d1, d1, d1, d1, d1 ... d1 ... d1... d2]
You can then use sub2ind and matrix indexing to set up the four values you need for your inequality:
indices = [sub2ind(size(M),allCombs(1,:),allCombs(3,:));
           sub2ind(size(M),allCombs(2,:),allCombs(4,:));
           sub2ind(size(M),allCombs(1,:),allCombs(4,:));
           sub2ind(size(M),allCombs(2,:),allCombs(3,:))];
testValues = M(indices);
testValues(5,:) = (testValues(1,:) + testValues(2,:) < testValues(3,:) + testValues(4,:));
Your final a,b,c,d indices could be retrieved by saying
allCombs(:,find(testValues(5,:)))
which returns a matrix containing all columns for which the inequality was true.
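One caveat: allCombs holds every combination at once, so memory grows with m1*m2*m3*m4. A possible middle ground (my sketch, not part of the approach above) is to enumerate only A, B and C at once and sweep over D in a loop, which bounds the working set to m1*m2*m3 entries per pass:
ABC = combvec(A,B,C);                    % 3-by-(m1*m2*m3)
a = ABC(1,:); b = ABC(2,:); c = ABC(3,:);
results = zeros(0,4);
for d = D
    dd = repmat(d, size(a));             % same shape as a, b and c
    mask = M(sub2ind(size(M),a,c)) + M(sub2ind(size(M),b,dd)) < ...
           M(sub2ind(size(M),a,dd)) + M(sub2ind(size(M),b,c));
    results = [results; [a(mask); b(mask); c(mask); dd(mask)]']; %#ok<AGROW>
end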

Related

Dynamic Programming Solution for "Demerging" of two Sequences

I have a problem that I am trying to solve but am having very little luck getting started. Here is the problem:
"If two sequences a1, a2,..., am and b1, b2,..., bn are interleaved, we say that the resulting sequence c1, c2,..., cm+n is a shuffle of the first two. For example,
DCCDBDADCACDBACB
is a shuffle of DCBDAACBB and CDDCDAC since it can be obtained by interleaving those two sequences in this way:
DC BDA AC B B
CD DC D AC"
I am to find a dynamic programming solution that figures out whether the two given sequences can be merged to form the larger sequence. I have already done a problem finding the longest common subsequence, but I can't seem to figure this one out. I know how I would go about doing it without dynamic programming, but I can't see how to do it with it.
Any help would be greatly appreciated.
Thank you!
You can construct a DP algorithm for solving this by first creating a recursive solution, for example:
a = 'DCBDAACBB'
b = 'CDDCDAC'
c = 'DCCDBDADCACDBACB'

an = len(a)
bn = len(b)
cn = len(c)

# recursive solution O(2^n)
def isPossible(ai, bi, ci):
    if ai == an and bi == bn and ci == cn:
        return True
    K = False
    if ci < cn and ai < an and c[ci] == a[ai]:
        K = K or isPossible(ai+1, bi, ci+1)
    if ci < cn and bi < bn and c[ci] == b[bi]:
        K = K or isPossible(ai, bi+1, ci+1)
    return K

print(isPossible(0, 0, 0))
Here the state can be encoded as three numbers ai, bi, ci, each the starting index of a suffix of the corresponding original string: isPossible(ai, bi, ci) computes whether the suffixes a[ai:] and b[bi:] can be merged into the suffix c[ci:]. We seek isPossible(0, 0, 0).
From here we can create the following DP recurrence. First initialize:
isPossible[ai][bi][ci] = False everywhere
isPossible[an][bn][cn] = True
Then compute (with out-of-range lookups treated as False):
isPossible[ai][bi][ci] = (a[ai] == c[ci] and isPossible[ai+1][bi][ci+1]) or
                         (b[bi] == c[ci] and isPossible[ai][bi+1][ci+1])
The solution is then isPossible[0][0][0]. This runs in O(n^3) time, whereas the recursive solution was O(2^n).
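For reference, here is a bottom-up sketch of the same recurrence in MATLAB (my own translation, not code from the answer above). It also uses the observation that ci is always ai + bi, because every step consumes exactly one character of c, so a 2-D table suffices:
a = 'DCBDAACBB'; b = 'CDDCDAC'; c = 'DCCDBDADCACDBACB';
an = numel(a); bn = numel(b);
% dp(ai+1, bi+1) is true iff the suffixes a(ai+1:end) and b(bi+1:end)
% can be interleaved to form c(ai+bi+1:end)
dp = false(an+1, bn+1);
dp(an+1, bn+1) = true;                   % both suffixes empty
for ai = an:-1:0
    for bi = bn:-1:0
        ci = ai + bi;                    % position in c is determined
        if ai < an && a(ai+1) == c(ci+1)
            dp(ai+1, bi+1) = dp(ai+1, bi+1) || dp(ai+2, bi+1);
        end
        if bi < bn && b(bi+1) == c(ci+1)
            dp(ai+1, bi+1) = dp(ai+1, bi+1) || dp(ai+1, bi+2);
        end
    end
end
dp(1,1)                                  % true: c is a shuffle of a and b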

Homework: Implementing Karp-Rabin; For the hash values modulo q, explain why it is a bad idea to use q as a power of 2?

I have a two-fold homework problem, Implement Karp-Rabin and run it on a test file and the second part:
For the hash values modulo q, explain why it is a bad idea to use q as a power of 2. Can you construct a terrible example e.g. for q=64
and n=15?
This is my implementation of the algorithm:
def karp_rabin(text, pattern):
    # setup
    alphabet = 'ACGT'
    d = len(alphabet)
    n = len(pattern)
    d_n = d**n
    q = 2**32-1
    m = {char:i for i,char in enumerate(alphabet)}
    positions = []

    def kr_hash(s):
        return sum(d**(n-i-1) * m[s[i]] for i in range(n))

    def update_hash():
        return d*text_hash + m[text[i+n-1]] - d_n * m[text[i-1]]

    pattern_hash = kr_hash(pattern)
    for i in range(0, len(text) - n + 1):
        text_hash = update_hash() if i else kr_hash(text[i:n])
        if pattern_hash % q == text_hash % q and pattern == text[i:i+n]:
            positions.append(i)
    return ' '.join(map(str, positions))
...The second part of the question is referring to this part of the code/algo:
pattern_hash = kr_hash(pattern)
for i in range(0, len(text) - n + 1):
    text_hash = update_hash() if i else kr_hash(text[i:n])
    # the modulo q used to check if the hashes are congruent
    if pattern_hash % q == text_hash % q and pattern == text[i:i+n]:
        positions.append(i)
I don't understand why it would be a bad idea to use a q that is a power of 2. I've tried running the algorithm on the test file provided (which is the genome of E. coli) and there's no discernible difference.
I tried looking at the formula for how the hash is derived (I'm not good at math), trying to find some common factors that would be really bad for powers of two, but found nothing. I feel like if q is a power of 2 it should cause a lot of clashes for the hashes, so you'd need to compare strings a lot more, but I didn't find anything along those lines either.
I'd really appreciate help on this since I'm stumped. If someone wants to point out what I can do better in the first part (code efficiency, readability, correctness etc.) I'd also be thrilled to hear your input on that.
There is a problem if q divides some power of d, because then only a few characters contribute to the hash. For example, in your code d=4: if you take q=64, only the last three characters determine the hash, since d**3 = 64 means every higher-order term vanishes modulo q.
I don't really see a problem if q is a power of 2 but gcd(d,q) = 1.
Your implementation looks a bit strange because instead of
if pattern_hash % q == text_hash % q and pattern == text[i:i+n]:
you could also use
if pattern_hash == text_hash and pattern == text[i:i+n]:
which would be better because you get fewer collisions.
The Thue–Morse sequence has among its properties that its polynomial hash quickly becomes zero when the hash modulus is a power of 2, for whatever polynomial base d. So if you try to search for a short Thue–Morse sequence in a longer one, you will get a great many hash collisions.
For example, your code, slightly adapted:
def karp_rabin(text, pattern):
    # setup
    alphabet = '01'
    d = 15
    n = len(pattern)
    d_n = d**n
    q = 32
    m = {char:i for i,char in enumerate(alphabet)}
    positions = []

    def kr_hash(s):
        return sum(d**(n-i-1) * m[s[i]] for i in range(n))

    def update_hash():
        return d*text_hash + m[text[i+n-1]] - d_n * m[text[i-1]]

    pattern_hash = kr_hash(pattern)
    for i in range(0, len(text) - n + 1):
        text_hash = update_hash() if i else kr_hash(text[i:n])
        if pattern_hash % q == text_hash % q : #and pattern == text[i:i+n]:
            positions.append(i)
    return ' '.join(map(str, positions))
print(karp_rabin('0110100110010110100101100110100110010110011010010110100110010110', '0110100110010110'))
outputs a lot of positions, although only three of them are proper matches.
Note that I have dropped the and pattern == text[i:i+n] check. Obviously if you restore it, the result will be correct, but the algorithm will do much more work checking this additional condition than it would for other q. In fact, because there are so many collisions, the whole idea of the algorithm stops working: you could almost as effectively write a simple algorithm that checks every position for a match.
Also note that your implementation is quite strange. The whole idea of polynomial hashing is to take the modulo each time you update the hash. Otherwise your pattern_hash and text_hash are very big numbers. In other languages this might mean arithmetic overflow, but in Python it invokes big-integer arithmetic, which is slow and once again defeats the point of the algorithm.

How to find the inverse permutation?

Suppose I have an unknown vector v, and a permutation p.
How can I reconstruct v from v(p) and p?
An equivalent question: how do I find a permutation q such that p(q) = [1 2 ... n]?
Since this is going to run in a tight loop, I need the answer to be vectorized (and efficient).
To find the inverse permutation I usually use:
[~,q] = sort(p);
This is faster than the methods suggested by Divakar.
If you want the inverse permutation q of p, it won't get more efficient than:
q(p) = 1:numel(p);
You can thus reconstruct v from vp = v(p) and p via:
q(p) = 1:numel(p);
v = vp(q);
or even faster without explicitly constructing q:
v(p) = vp;
(You might have noticed that v = vp(q) corresponds to v == P^(-1)*vp and v(p) = vp corresponds to P*v == vp for appropriate permutation operators (matrices) P = sparse(1:numel(p),p,1) and P^(-1) == P.' == sparse(p,1:numel(p),1), thus yielding the same result.)
If you use this in a loop, do however remember to reset q or v to [] before this operation. If the length of p changes, you would otherwise get wrong results when the new p is shorter than the old p.
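A small worked example (made-up data) showing both reconstructions, including the reset mentioned above:
p  = [3 1 2];
v  = [10 20 30];
vp = v(p);                   % [30 10 20]
q  = [];                     % reset before reuse inside a loop
q(p) = 1:numel(p);           % q = [2 3 1], the inverse permutation
isequal(v, vp(q))            % true
w  = [];
w(p) = vp;                   % reconstruct v without building q
isequal(v, w)                % true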
With ismember -
[~,q] = ismember(1:numel(p),p)
With intersect -
[~,~,q] = intersect(1:numel(p),p)
With bsxfun -
[q,~] = find(bsxfun(@eq,[1:numel(p)],p(:)))

Improving performance of interpolation (Barycentric formula)

I have been given an assignment in which I am supposed to write an algorithm that performs polynomial interpolation by the barycentric formula. The formula states that:
p(x) = ( SIGMA_(j=0 to n) w(j)*f(j)/(x - x(j)) ) / ( SIGMA_(j=0 to n) w(j)/(x - x(j)) )
I have written an algorithm that works just fine, and I get the polynomial output I desire. However, this requires the use of some quite long loops, and for a large number of grid points, lots of nasty loop operations will have to be done. Thus, I would appreciate it greatly if anyone has any hints as to how I may improve this, so that I can avoid all these loops.
In the algorithm, x and f stand for the given points we are supposed to interpolate. w stands for the barycentric weights, which have been calculated before running the algorithm. And grid is the linspace over which the interpolation should take place:
function p = barycentric_formula(x,f,w,grid)
% Assert x-vectors and f-vectors have the same length.
if length(x) ~= length(f)
    fprintf('Not equal amounts of x- and y-values. Function is terminated.\n')
    p = [];
    return;
end

n = length(x);
m = length(grid);
p = zeros(1,m);

% Loops for finding polynomial values at grid points. All values are
% calculated by the barycentric formula.
for i = 1:m
    var = 0;
    sum1 = 0;
    sum2 = 0;
    for j = 1:n
        if grid(i) == x(j)
            p(i) = f(j);
            var = 1;
        else
            sum1 = sum1 + (w(j)*f(j))/(grid(i) - x(j));
            sum2 = sum2 + (w(j)/(grid(i) - x(j)));
        end
    end
    if var == 0
        p(i) = sum1/sum2;
    end
end
This is a classical case for MATLAB vectorization. I would say: just remove the loops. It is almost that simple. First, have a look at this code:
function p = bf2(x, f, w, grid)
m = length(grid);
p = zeros(1,m);
for i = 1:m
    var = grid(i)==x;
    if any(var)
        p(i) = f(var);
    else
        sum1 = sum((w.*f)./(grid(i) - x));
        sum2 = sum(w./(grid(i) - x));
        p(i) = sum1/sum2;
    end
end
end
I have removed the inner loop over j. All I did here was in fact remove the (j) indexing and change the arithmetic operators from / to ./ and from * to .*, i.e. the same operations, but with a dot in front to signify that they are performed element by element. These are called array operators, in contrast to the ordinary matrix operators. Also note that treating the special case where a grid point falls onto x is very similar to what you had in the original implementation, only using a logical vector var such that x(var)==grid(i).
Now you can also remove the outermost loop. This is a bit more tricky, and there are two major approaches to doing it in MATLAB. I will do it the simpler way, which can be less efficient but is clearer to read, using repmat:
function p = bf3(x, f, w, grid)
% Find grid points that coincide with x.
% The below compares all grid values with all x values
% and returns a matrix of 0/1. 1 is in the (row,col)
% for which grid(row)==x(col)
var = bsxfun(@eq, grid', x);
% find the logical indexes of those x entries
varx = sum(var, 1)~=0;
% and of those grid entries
varp = sum(var, 2)~=0;
% Outer-most loop removal - use repmat to
% replicate the vectors into matrices.
% Thus, instead of having a loop over j
% you have matrices of values that would be
% referenced in the loop
ww = repmat(w, numel(grid), 1);
ff = repmat(f, numel(grid), 1);
xx = repmat(x, numel(grid), 1);
gg = repmat(grid', 1, numel(x));
% perform the calculations element-wise on the matrices
sum1 = sum((ww.*ff)./(gg - xx),2);
sum2 = sum(ww./(gg - xx),2);
p = sum1./sum2;
% fix the case where grid==x and return
p(varp) = f(varx);
end
The fully vectorized version can be implemented with bsxfun rather than repmat. This can potentially be a bit faster, since the matrices are not explicitly formed. However, the speed difference may not be large for small system sizes.
Also, the first solution with one loop is not too bad performance-wise. I suggest you test both and see which is better. Maybe it is not worth it to fully vectorize? The first version looks a bit more readable.
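One way to run that test is a quick consistency check like the sketch below. The sample data are my own, and the weights use the standard barycentric weight formula w(j) = 1/prod_{k~=j}(x(j)-x(k)):
x = linspace(-1, 1, 11);
f = 1./(1 + 25*x.^2);                    % Runge function samples
w = zeros(size(x));
for j = 1:numel(x)
    w(j) = 1/prod(x(j) - x([1:j-1, j+1:end]));
end
grid = linspace(-1, 1, 1001);
p2 = bf2(x, f, w, grid);
p3 = bf3(x, f, w, grid);                 % note: bf3 returns a column vector
max(abs(p2(:) - p3(:)))                  % should be on the order of eps
% then compare timings, e.g. tic; bf2(x,f,w,grid); toc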

MATLAB loop optimization

I have a matrix, matrix_logical(50000,100000), that is a sparse logical matrix (a lot of falses, some trues). I have to produce a matrix, intersect(50000,50000), that, for each pair (i,j) of rows of matrix_logical(50000,100000), stores the number of columns in which rows i and j both have the value true.
Here is the code I wrote:
% store in advance the nonzero cols
% (variables renamed so they don't shadow the built-in functions
% nonzeros() and intersect(); the num2cell/cell2mat round trip was
% also unnecessary)
nonzero_cols = cell(1,50000);
for i = 1:50000
    nonzero_cols{i} = find(matrix_logical(i,:));
end
intersct = zeros(50000,50000);
for i = 1:49999
    a = nonzero_cols{i};
    for j = (i+1):50000
        b = nonzero_cols{j};
        intersct(i,j) = numel(intersect(a,b));
    end
end
Is it possible to further increase the performance? It takes too long to compute the matrix. I would like to avoid the double loop in the second part of the code.
matrix_logical is sparse, but it is not stored as sparse in MATLAB, because otherwise the performance becomes far worse.
Since the (i,j) entry counts the number of nonzero elements in the element-wise multiplication of rows i and j, you can compute it by multiplying matrix_logical with its transpose (you should convert to a numeric data type first, e.g. matrix_logical = single(matrix_logical)):
inter = matrix_logical * matrix_logical';
And it works for both sparse and full representations.
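A tiny illustration (made-up data): the (i,j) entry of R*R' is the dot product of rows i and j, which for 0/1 rows counts the columns where both are true:
R = single([1 1 0; 0 1 1]);
inter = R * R'               % [2 1; 1 2]: rows 1 and 2 share one true column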
EDIT
In order to calculate numel(intersect(a,b))/numel(union(a,b)) (as asked in your comment), you can use the fact that for two sets a and b, you have
length(union(a,b)) = length(a) + length(b) - length(intersect(a,b))
so, you can do the following:
unLen = sum(matrix_logical,2);
tmp = repmat(unLen, 1, length(unLen)) + repmat(unLen', length(unLen), 1);
inter = matrix_logical * matrix_logical';
inter = inter ./ (tmp-inter);
If I understood you correctly, you want a logical AND of the rows:
intersct = zeros(50000, 50000);
for ii = 1:49999
    for jj = ii:50000
        intersct(ii, jj) = sum(matrix_logical(ii, :) & matrix_logical(jj, :));
        intersct(jj, ii) = intersct(ii, jj);
    end
end
Doesn't avoid the double loop, but at least works without the first loop and the slow find command.
Elaborating on my comment, here is a distance function suitable for pdist()
function out = distfun(xi,xj)
out = zeros(size(xj,1),1);
for i = 1:size(xj,1)
    out(i) = sum(sum( xi & xj(i,:) )) / sum(sum( xi | xj(i,:) ));
end
In my experience, sum(sum()) is faster for logicals than nnz(), thus its appearance above.
You would also need to use squareform() to reshape the output of pdist() appropriately:
squareform(pdist(matrix_logical,@distfun));
Note that pdist() includes a 'jaccard' distance measure, but it is actually the Jaccard distance, not the Jaccard index or coefficient, which is the value you are apparently after.
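If you do want to use the built-in measure anyway, you can simply flip it (note that pdist expects numeric input, hence the cast):
J = 1 - squareform(pdist(double(matrix_logical), 'jaccard'));   % Jaccard index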
