Weighted Sum Scheduling in Halide

I am implementing a Radial Basis Function in Halide, and while I have it running successfully, it is quite slow. For each pixel I compute the distance to each control point, then take a weighted sum of these distances to produce the output. To loop over the weights I use an RDom (as seen below). In this implementation, every pixel computation requires reloading all of the many (3000+) weights, hence the slow speed.
My question is how to take advantage of Halide's scheduling functionality in this instance. My desire is to load some of the weights, compute partial weighted sums for a subset of the pixels, load the next set of weights, and continue to completion. This keeps locality for each smaller group of weights, and that kind of thing is exactly what Halide is built for. Unfortunately I haven't found anything for this specific problem. The RDom seems to be at a lower level of abstraction than the scheduling primitives, so it's unclear how to schedule this.
Any alternative suggestions for weighted sum implementation in Halide are welcome. No need to do this with an RDom, I'm just not aware of any other way.
Func rbf_ctrl_pts("rbf_ctrl_pts");
// Initialization with all zero
rbf_ctrl_pts(x, y, c) = cast<float>(0);
// Index to iterate with
RDom idx(0, num_ctrl_pts);
// Loop code
// Subtract the vectors
Expr red_sub   = (*in_func)(x, y, 0) - (*ctrl_pts_h)(0, idx);
Expr green_sub = (*in_func)(x, y, 1) - (*ctrl_pts_h)(1, idx);
Expr blue_sub  = (*in_func)(x, y, 2) - (*ctrl_pts_h)(2, idx);
// Take the L2 norm to get the distance
Expr dist = sqrt(red_sub*red_sub + green_sub*green_sub + blue_sub*blue_sub);
// Update persistent loop variables
rbf_ctrl_pts(x, y, c) = select(c == 0, rbf_ctrl_pts(x, y, c) + ((*weights_h)(0, idx) * dist),
                               c == 1, rbf_ctrl_pts(x, y, c) + ((*weights_h)(1, idx) * dist),
                                       rbf_ctrl_pts(x, y, c) + ((*weights_h)(2, idx) * dist));

You can use split or tile together with rfactor on the idx dimension of rbf_ctrl_pts to factor the reduction and schedule its pieces. Getting locality on the weights should be doable via these mechanisms. I'm not 100% sure the associativity prover will handle the select, so it may be necessary to unroll by channels or move to using a Tuple across the channels; although in the code above, I'm not sure the select is doing anything compared to simply passing c through.


Finding a perfect matching in graphs

I have this question:
An airline company has N different planes and T pilots. Every pilot has a list of planes he can fly. Every flight needs 2 pilots. The company wants to have as many flights running simultaneously as possible. Find an algorithm that determines whether all the flights can be run simultaneously.
The solution I thought of is to find a max flow on a graph with a source connected to the pilots, the pilots connected to the planes they can fly, and the planes connected to a sink.
I am just not sure what the capacity should be. Can you help me with that?
Great idea to find the max flow.
For each edge from source --> pilot, assign a capacity of 1. Each pilot can only fly one plane at a time since they are running simultaneously.
For each edge from pilot --> plane, assign a capacity of 1. If this edge is filled with flow of 1, it represents that the given pilot is flying that plane.
For each edge from plane --> sink, assign a capacity of 2. This represents that each plane must be supplied by exactly 2 pilots.
Now, find a maximum flow. If the resulting maximum flow is two times the number of planes, then it's possible to satisfy the constraints. In this case, the edges between planes and pilots that are at capacity represent the matching.
The other answer is fine but you don't really need to involve flow as this can be reduced just as well to ordinary maximum bipartite matching:
For each plane, add another auxiliary plane to the plane partition with edges to the same pilots as the first plane.
Find a maximum bipartite matching M.
The answer is now yes if and only if |M| = 2N.
If you like, you can think of this as saying that each plane needs a pilot and a co-pilot, and the two vertices associated to each plane now represents those two roles.
The reduction to maximum bipartite matching is linear time, so using e.g. the Hopcroft–Karp algorithm to find the matching, you can solve the problem in O(|E| √|V|), where |E| is the number of edges between the partitions and |V| = T + N.
In practice, the improvement over using a maximum flow based approach should depend on the quality of your implementations as well as the particular choice of representation of the graph, but chances are that you're better off this way.
Implementation example
To illustrate the last point, let's give an idea of how the two reductions could look in practice. One representation of a graph that's often useful due to its built-in memory locality is that of a CSR matrix, so let us assume that the input is such a matrix, whose rows correspond to the planes, and whose columns correspond to the pilots.
We will use the Python library SciPy which comes with algorithms for both maximum bipartite matching and maximum flow, and which works with CSR matrix representations for graphs under the hood.
In the algorithm given above, we will then need to construct the biadjacency matrix of the graph with the additional vertices added. This is nothing but the result of stacking the input matrix on top of itself, which is straightforward to phrase in terms of the CSR data structures: Following Wikipedia's notation, COL_INDEX should just be repeated, and ROW_INDEX should be replaced with ROW_INDEX concatenated with a copy of ROW_INDEX in which all elements are increased by the final element of ROW_INDEX.
In SciPy, a complete implementation which answers yes or no to the problem in OP would look as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def reduce_to_max_matching(a):
    i, j = a.shape
    data = np.ones(a.nnz * 2, dtype=bool)
    indices = np.concatenate([a.indices, a.indices])
    indptr = np.concatenate([a.indptr, a.indptr[1:] + a.indptr[-1]])
    graph = csr_matrix((data, indices, indptr), shape=(2*i, j))
    return (maximum_bipartite_matching(graph) != -1).sum() == 2 * i
In the maximum flow approach given by @HeatherGuarnera's answer, we will need to set up the full adjacency matrix of the new graph. This is also relatively straightforward; the input matrix will appear as a certain submatrix of the adjacency matrix, and we need to add a row for the source vertex and a column for the target. The example section of the documentation for SciPy's max flow solver actually contains an illustration of what this looks like in practice. Adopting this, a complete solution looks as follows:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

def reduce_to_max_flow(a):
    i, j = a.shape
    n = a.nnz
    data = np.concatenate([2*np.ones(i, dtype=int), np.ones(n + j, dtype=int)])
    indices = np.concatenate([np.arange(1, i + 1),
                              a.indices + i + 1,
                              np.repeat(i + j + 1, j)])
    indptr = np.concatenate([[0],
                             a.indptr + i,
                             np.arange(n + i + 1, n + i + j + 1),
                             [n + i + j]])
    graph = csr_matrix((data, indices, indptr), shape=(2+i+j, 2+i+j))
    flow = maximum_flow(graph, 0, graph.shape[0]-1)
    return flow.flow_value == 2*i
Let us compare the timings of the two approaches on a single example consisting of 40 planes and 100 pilots, on a graph whose edge density is 0.1:
from scipy.sparse import random
inp = random(40, 100, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 191 µs ± 3.57 µs per loop
%timeit reduce_to_max_flow(inp) # 1.29 ms ± 20.1 µs per loop
The matching-based approach is faster, but not by a crazy amount. On larger problems, we'll start to see the advantages of using matching instead; with 400 planes and 1000 pilots:
inp = random(400, 1000, density=.1, format='csr', dtype=bool)
%timeit reduce_to_max_matching(inp) # 473 µs ± 5.52 µs per loop
%timeit reduce_to_max_flow(inp) # 68.9 ms ± 555 µs per loop
Again, this exact comparison relies on the use of specific predefined solvers from SciPy and how those are implemented, but if nothing else, this hints that simpler is better.

Choose the best cluster partition based on a cost function

I've a string that I'd like to cluster:
s = 'AAABBCCCCC'
I don't know in advance how many clusters I'll get. All I have is a cost function that can take a clustering and give it a score.
There is also a constraint on the cluster sizes: they must be in the range [a, b].
In my example, for a=3 and b=4, all possible clusterings are:
[
['AAA', 'BBC', 'CCCC'],
['AAA', 'BBCC', 'CCC'],
['AAAB', 'BCC', 'CCC'],
]
The concatenation of each clustering must give the string s.
The cost function is something like this:
cost(clustering) = alpha*l + beta*e + gamma*d
where:
l = variance(cluster_lengths)
e = mean(clusters_entropies)
d = 1 - (nb_characters_in_b_that_are_not_in_a / size_of_b) (for b the consecutive cluster of a)
alpha, beta, gamma are weights
This cost function gives a low cost (0) for the best case:
Where all clusters have the same size.
Content inside each cluster is the same.
Consecutive clusters don't have the same content.
Theoretically, the solution is to calculate the cost of all possible compositions of this string and choose the lowest, but that would take too much time.
Is there any clustering algorithm that can find the best clustering according to this cost function in a reasonable time?
A dynamic programming approach should work here.
Imagine, first, that cost(clustering) equals the sum of cost(cluster) over all clusters that constitute the clustering.
Then, a simple DP function is defined as follows:
F[i] = minimal cost of clustering the substring s[0:i]
and calculated in the following way:
for i = 0..length(s)-1:
    for j = a..b:
        last_cluster = s[i-j..i]
        F[i] = min(F[i], F[i - j] + cost(last_cluster))
Of course, you first have to initialize the values of F to some infinite value (or null) so that the min function applies correctly.
To actually restore the answer, you can store additional values P[i], which would contain the lengths of the last cluster with optimal clustering of string s[0..i].
When you update F[i], you also update P[i].
Then, restoring the answer is little trouble:
current_pos = length(s) - 1
while (current_pos >= 0):
    current_cluster_length = P[current_pos]
    current_cluster = s[(current_pos - current_cluster_length + 1)..current_pos]
    // grab current_cluster to the answer
    current_pos -= current_cluster_length
Note that in this approach you will get the clusters in inverse order, meaning from the last cluster all the way to the first one.
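For concreteness, here is a minimal Python sketch of this simple additive version; the per-cluster cost is left as a stub, since in the original problem the cost also depends on neighbouring clusters (handled below).
def best_clustering(s, a, b, cluster_cost):
    """F[i] = minimal cost of clustering the prefix s[:i] into clusters of size a..b."""
    INF = float("inf")
    n = len(s)
    F = [INF] * (n + 1)
    P = [0] * (n + 1)              # P[i] = length of the last cluster in the optimum for s[:i]
    F[0] = 0.0
    for i in range(1, n + 1):
        for j in range(a, min(b, i) + 1):
            c = F[i - j] + cluster_cost(s[i - j:i])
            if c < F[i]:
                F[i], P[i] = c, j
    if F[n] == INF:
        return INF, []             # no clustering with sizes in [a, b] exists
    clusters, pos = [], n          # walk back through P to recover the clusters
    while pos > 0:
        clusters.append(s[pos - P[pos]:pos])
        pos -= P[pos]
    return F[n], clusters[::-1]

# e.g. best_clustering('AAABBCCCCC', 3, 4, lambda c: len(set(c)))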
Let's now apply this idea to the initial problem.
What we would like is to make cost(clustering) more or less linear, so that we can compute it cluster by cluster instead of computing it for the whole clustering.
The first parameter of our DP function F will be, as before, i, the number of chars in the substring s[0:i] we have found optimal answer to.
The meaning of the F function is, as usual, the minimal cost we can achieve with the given parameters.
The parameter e = mean(clusters_entropies) of the cost function is already linear and can be computed cluster by cluster, so this is not a problem.
The parameter l = variance(cluster_lengths) is a little bit more complex.
The variance of n values is defined as Sum[(x[i] - mean)^2] / n.
mean is the expected value, namely mean = Sum[x[i]] / n.
Note also that Sum[x[i]] is the sum of the lengths of all clusters, and in our case it is always fixed and equals length(s).
Therefore, mean = length(s) / n.
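In other words, with n fixed we have l = Sum[(sz_k - length(s)/n)^2] / n, so each cluster of size sz contributes alpha*(sz - length(s)/n)^2 / n to the cost, which is again a per-cluster (additive) quantity.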
Okay, we have more or less made our l part of cost function linear except the n parameter. We will add this parameter, namely the number of clusters in the desired clustering, as a parameter to our F function.
We will also have a parameter cur which will mean the number of clusters currently assembled in the given state.
The parameter d of the cost function also requires adding an additional parameter to our DP function F, namely sz, the size of the last cluster in our partition.
Overall, we have come up with a DP function F[i][n][cur][sz] that gives us the minimal cost function of partitioning string s[0:i] into n clusters of which cur are currently constructed with the size of the last cluster equal to sz. Of course, our responsibility is to make sure that a<=sz<=b.
The answer in terms of the minimal cost function will be the minimum among all possible n and a<=sz<=b values of DP function F[length(s)-1][n][n][sz].
Now notice that this time we do not even require the companion P function to store the length of the last cluster as we already included that information as the last sz parameter into our F function.
We will, however, store in P[i][n][cur][sz] the length of the next to last cluster in the optimal clustering with the specified parameters. We will use that value to restore our solution.
Thus, we will be able to restore an answer in the following way, assuming the minimum of F is achieved in the parameters n=n0 and sz=sz0:
current_pos = length(s) - 1
current_n = n0
current_cluster_size = sz0
while (current_n > 0):
    current_cluster = s[(current_pos - current_cluster_size + 1)..current_pos]
    next_cluster_size = P[current_pos][n0][current_n][current_cluster_size]
    current_n -= 1
    current_pos -= current_cluster_size
    current_cluster_size = next_cluster_size
Let's now get to the computation of F.
I will omit the corner cases and range checks, but it will be enough to just initialize F with some infinite values.
// initialize for the case of one cluster
// d = 0, l = 0, only have to calculate entropy
for i = 0..length(s)-1:
    for n = 1..length(s):
        F[i][n][1][i+1] = cluster_entropy(s[0..i]);
        P[i][n][1][i+1] = -1; // initialize with fake value as in this case there is no previous cluster
// general case computation
for i = 0..length(s)-1:
    for n = 1..length(s):
        for cur = 2..n:
            for sz = a..b:
                for prev_sz = a..b:
                    cur_cluster = s[i-sz+1..i]
                    prev_cluster = s[i-sz-prev_sz+1..i-sz]
                    F[i][n][cur][sz] = min(F[i][n][cur][sz],
                                           F[i-sz][n][cur - 1][prev_sz]
                                           + gamma*calc_d(prev_cluster, cur_cluster)
                                           + beta*cluster_entropy(cur_cluster)/n
                                           + alpha*(sz - length(s)/n)^2/n)

Finding parameters of exponentially decaying sinusoids (Matrix Pencil Method)

The matrix pencil method is an algorithm which can be used to find the parameters (frequency, amplitude, decay factor and initial phase) of the individual exponentially decaying sinusoids in a signal consisting of a sum of such sinusoids. I am trying to implement the algorithm. The algorithm can be found in the paper from this link:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=370583 OR
http://krein.unica.it/~cornelis/private/IEEE/IEEEAntennasPropagMag_37_48.pdf
In order to test the algorithm, I created a synthetic signal composed of four exponentially decaying sinusoids generated as follows:
fs=2205;
t=0:1/fs:249/fs;
f(1)=80;
f(2)=120;
f(3)=250;
f(4)=560;
a(1)=.4;
a(2)=1;
a(3)=0.89;
a(4)=.65;
d(1)=70;
d(2)=50;
d(3)=90;
d(4)=80;
for i=1:4
    x(i,:)=a(i)*exp(-d(i)*t).*cos(2*pi*f(i)*t);
end
y=x(1,:)+x(2,:)+x(3,:)+x(4,:);
I then feed this signal to the algorithm described in the paper as follows:
function [f d] = mpencil(y)
%construct hankel matrix
N = size(y,2);
L1 = ceil(1/3 * N);
L2 = floor(2/3 * N);
L = ceil((L1 + L2) / 2);
fs=2205;
for i=1:1:(N-L)
    Y(i,:)=y(i:(i+L));
end
Y1=Y(:,1:L);
Y2=Y(:,2:(L+1));
[U,S,V] = svd(Y);
D=diag(S);
tol=1e-3;
m=0;
l=length(D);
for i=1:l
    if( abs(D(i)/D(1)) >= tol)
        m=m+1;
    end
end
Ss=S(:,1:m);
Vnew=V(:,1:m);
a=size(Vnew,1);
Vs1=Vnew(1:(a-1),:);
Vs2=Vnew(2:end,:);
Y1=U*Ss*(Vs1');
Y2=U*Ss*(Vs2');
D_fil=(pinv(Y1))*Y2;
z = eig(D_fil);
l=length(z);
for i=1:2:l
    f((i+1)/2)= (angle(z(i))*fs)/(2*pi);
    d((i+1)/2)=-real(z(i))*fs;
end
In the output from the above code, I am correctly getting the four constituent frequency components, but I am not getting their decay factors. If anybody has prior experience with this algorithm or has some understanding of why this discrepancy might be there, I would be very grateful for your help. I have tried rewriting the code from scratch multiple times, but it has been of no help, giving the same results.
Any help would be highly appreciated.
I found the problem.
There are two small glitches in the code:
The SVD output is the complex conjugate of the right singular matrix, i.e. Vh, and according to the IEEE paper it needs to be converted to V first.
This V is then filtered to reduce the dimension.
After reducing the dimensions of V, V1 and V2 are calculated from V. (In your case, you are using Vh directly for calculating V1 and V2!)
When calculating Y1 and Y2, the complex conjugates of V1 and V2 are used.
You did not consider the absolute magnitude of the complex eigenvalues, but only the real part:
damping coefficient "zeta" = log(magnitude(z)) / Ts
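To make that last point concrete, here is a small NumPy sketch (not the answerer's code) of how frequency and decay come out of the pencil's eigenvalues: the frequency from the phase of z, and the decay from the magnitude of z rather than its real part. The eigenvalue pair below is hypothetical, built from the question's first mode just to show the formulas recover it; with the question's a*exp(-d*t) convention, the decay factor is d = -log(|z|)*fs, i.e. the negative of the zeta above.
import numpy as np

fs = 2205.0                                      # sampling rate from the question
f_true, d_true = 80.0, 70.0                      # one of the question's modes

# hypothetical eigenvalue pair of pinv(Y1)*Y2 for that single mode
z = np.exp((-d_true + 2j * np.pi * f_true * np.array([1.0, -1.0])) / fs)

f_est = np.abs(np.angle(z)) * fs / (2 * np.pi)   # frequency from the phase of z
d_est = -np.log(np.abs(z)) * fs                  # decay factor from |z|, not real(z)
print(f_est, d_est)                              # ~[80, 80] and ~[70, 70]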

How to perform operation for all matrix elements in Scilab?

I'm trying to simulate the heat distribution on an infinite plate over time. For this purpose, I've written a Scilab script. Now, the crucial point of it is the calculation of the temperature at all plate points, and it has to be done for every time instant I want to observe:
for j=2:S-1
    for i=2:S-1
        heat(i, j) = tcoeff*10000*(plate(i-1,j) + plate(i+1,j) - 4*plate(i,j) + plate(i, j-1) + plate(i, j+1)) + plate(i,j);
    end;
end
The problem is that, if I'd like to do it for a 100x100-point plate, it means that here (and this is only for the inner part, without boundary conditions) I would have to loop 98x98 = 9604 times, at every turn calculating the heat at a given i,j point. If I'd like to observe that for, say, 100 seconds with a 1 s step, I have to repeat it 100 times, giving 960,400 iterations in total. Which takes quite a long time, and I'd like to avoid it. Up to a 50x50 plate, it all happens in a reasonable 4-5 second time frame.
Now my question is: is it necessary to do all this using for loops? Is there any built-in aggregate function in Scilab that will let me do this for all elements of a matrix? The reason I haven't found a way yet is that the result for every point depends on the values of other matrix points, and that made me do it with nested loops. Any ideas on how to make it faster are appreciated.
It seems to me that you want to compute a 2D intercorrelation of your heat field and a certain diffusion pattern. This pattern can be thought of as a "filter" kernel, which is a common way to modify images with a linear filter matrix. Your "filter" is:
F=[0,1,0;1,-4,1;0,1,0];
If you install the Image Processing Toolbox (IPD) you will have a MaskFilter function to do this 2D intercorrelation.
S=500;
plate=rand(S,S);
tcoeff=1;
//your solution with nested for loops
t0=getdate();
for j=2:S-1
    for i=2:S-1
        heat(i, j) = tcoeff*10000*(plate(i-1,j)+plate(i+1,j)-..
            4*plate(i,j)+plate(i,j-1)+plate(i, j+1))+plate(i,j);
    end
end
t1=getdate();
T0=etime(t1,t0);
mprintf("\nNested for loops: %f s (100 %%)",T0);
//optimised nested for loop
F=[0,1,0;1,-4,1;0,1,0]; //"filter" matrix
F=tcoeff*10000*F;
heat2=zeros(plate);
t0=getdate();
for j=2:S-1
    for i=2:S-1
        heat2(i,j)=sum(F.*plate(i-1:i+1,j-1:j+1));
    end
end
heat2=heat2+plate;
t1=getdate();
T2=etime(t1,t0);
mprintf("\nNested for loops optimised: %f s (%.2f %%)",T2,T2/T0*100);
//MaskFilter from IPD toolbox
t0=getdate();
heat3=MaskFilter(plate,F);
heat3=heat3+plate;
t1=getdate();
T3=etime(t1,t0);
mprintf("\nWith MaskFilter: %f s (%.2f %%)",T3,T3/T0*100);
disp(heat3(1:10,1:10)-heat(1:10,1:10),"Difference of the results (heat3-heat):");
Please note, that MaskFilter pads the image (the original matrix) before applying the filter, and as far as I know it uses a "mirror" array across the border. You should check whether this behaviour is appropriate for you or not.
The speed increase is about 320x (the execution time is 0.32% of your original code). Is that fast enough?
In theory it could be done with two 2D Fourier transforms (maybe with Scilab's built-in mfft), but it might not be faster than this. See here: http://mailinglists.scilab.org/Image-processing-filter-td2618144.html#a2618168
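For what it is worth, that FFT-based route would look roughly like this in Python/SciPy (sizes are hypothetical, and Scilab's own FFT or the IPD toolbox would play the same role); fftconvolve applies the stencil as a 2D convolution computed via Fourier transforms:
import numpy as np
from scipy.signal import fftconvolve

S = 100                                       # hypothetical plate size
tcoeff = 1.0
plate = np.random.rand(S, S)

F = tcoeff * 10000 * np.array([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]])
# same-size output; note the zero-padded borders differ from the loop version,
# which only updates interior points
heat = fftconvolve(plate, F, mode="same") + plate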
Please consider that there is a big difference between vectorizing an operation and parallel computation, as I have explained here. Although vectorizing might improve performance a little bit, that's not comparable to what you can achieve through GPU computing, for example (e.g. OpenCL). I will try to explain a vectorized form of your code without going too much into the details. Consider these as given:
S = ...;
tcoeff = ...;
function Plate = plate(i, j)
...;
endfunction
function Heat = heat(i, j)
...;
endfunction
Now you could define a meshgrid:
x = 2 : S - 1;
y = 2 : S - 1;
[M, N] = meshgrid(x,y);
Result = feval(M, N, heat);
feval is the key here: it broadcasts the heat function over the M and N matrices.
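Not Scilab, but to make the vectorization idea concrete, here is the same interior update written with array slicing instead of nested loops, sketched in NumPy with hypothetical sizes; the analogous slice syntax exists in Scilab as well:
import numpy as np

S = 100                                           # hypothetical plate size
tcoeff = 1.0
plate = np.random.rand(S, S)

heat = plate.copy()                               # boundary rows/columns stay untouched
heat[1:-1, 1:-1] = tcoeff * 10000 * (
    plate[:-2, 1:-1] + plate[2:, 1:-1]            # (i-1, j) and (i+1, j) neighbours
    - 4 * plate[1:-1, 1:-1]
    + plate[1:-1, :-2] + plate[1:-1, 2:]          # (i, j-1) and (i, j+1) neighbours
) + plate[1:-1, 1:-1]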
Your scheme is a finite-difference scheme for the Laplacian operator applied to a rectangular grid. If you choose a row-wise or column-wise numbering of your degrees of freedom (here the plate(i,j)) in order to treat them as a vector, then applying your "discrete" Laplacian can be done by multiplying by a sparse matrix on the left (which is very fast). This is particularly well explained in the following document:
https://www.math.uci.edu/~chenlong/226/FDMcode.pdf
The implementation is described in Matlab but is easily translated into Scilab.
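As a rough illustration of this last suggestion (in Python/SciPy rather than Scilab, with hypothetical sizes and zero boundary values for brevity): the five-point Laplacian on the interior nodes is assembled once as a sparse matrix, after which every time step is a single sparse matrix-vector product.
import numpy as np
from scipy.sparse import diags, identity, kron

S = 98                                              # interior points per side (hypothetical)
e = np.ones(S)
T = diags([e[:-1], -2 * e, e[:-1]], [-1, 0, 1])     # 1D second-difference matrix
L = kron(identity(S), T) + kron(T, identity(S))     # 2D five-point Laplacian

tcoeff = 1.0
plate = np.random.rand(S, S)                        # interior temperatures, row-wise ordering
heat = (tcoeff * 10000 * (L @ plate.ravel()) + plate.ravel()).reshape(S, S)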

matlab: optimum amount of points for linear fit

I want to make a linear fit to a few data points, as shown on the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas on how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to enclose it in a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution of.
One approach would be to compute all the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % the points with the intercepts closest to the linear one.
This requires the statistics toolbox (for kmeans). This is the best of all methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points in the start and end range lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
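A rough Python translation of this idea, using SciPy's kmeans2 in place of MATLAB's kmeans and synthetic data shaped like the sample above (so the numbers are purely illustrative):
import numpy as np
from scipy.cluster.vq import kmeans2

known_intercept = 0.043
x = np.arange(0.01, 0.31, 0.01)
rng = np.random.default_rng(0)
y = known_intercept + 1.3 * x + 0.002 * rng.standard_normal(x.size)  # made-up linear part
y[22:] += 3 * (x[22:] - x[22]) ** 2                                  # non-linear tail

slopes = np.diff(y) / np.diff(x)
intercepts = y[:-1] - slopes * x[:-1]
_, labels = kmeans2(intercepts, 3, minit="points")

# keep the cluster whose mean intercept is closest to the known one
candidates = [k for k in range(3) if np.any(labels == k)]
best = min(candidates, key=lambda k: abs(intercepts[labels == k].mean() - known_intercept))
linear_x = x[:-1][labels == best]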
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and see if the sum of the squared error remains small. This I gave up very quickly, because defining what "small" means is very subjective and must be done in some heuristic way.
I tried a more systematic and robust approach of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
% ii represents fraction of range from center to end,
% So ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember, you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
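Sketching this last suggestion in Python/NumPy (the data and segment sizes are made up; in practice the question's own data and a polyfit0-style fit would be used): fit a fixed-intercept slope on each overlapping segment and keep the slope whose line gives the smallest MSE against the whole dataset.
import numpy as np

known_intercept = 0.043
x = np.arange(0.01, 0.31, 0.01)
y = known_intercept + 1.3 * x                            # made-up, roughly like the figure
y[22:] += 3 * (x[22:] - x[22]) ** 2                      # non-linear tail

seg_len, step = 8, 4                                     # hypothetical overlapping segments
best_slope, best_mse = None, np.inf
for start in range(0, x.size - seg_len + 1, step):
    xs, ys = x[start:start + seg_len], y[start:start + seg_len]
    slope = xs.dot(ys - known_intercept) / xs.dot(xs)         # LS slope with fixed intercept
    mse = np.mean((known_intercept + slope * x - y) ** 2)     # judged against the whole dataset
    if mse < best_mse:
        best_slope, best_mse = slope, mse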
