Hi, I'm trying to implement the gradient descent algorithm for the function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
My starting point for the algorithm is w = (u,v) = (2,2), the learning rate is eta = 0.01, and the error bound is bound = 10^-14. Here is my MATLAB code:
function [resultTable, boundIter] = gradientDescent(w, iters, bound, eta)
% FUNCTION [resultTable, boundIter] = gradientDescent(w, iters, bound, eta)
%
% DESCRIPTION:
% - This function will do gradient descent error minimization for the
% function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
%
% INPUTS:
% 'w' a 1-by-2 vector indicating initial weights w = [u,v]
% 'iters' a positive integer indicating the number of gradient descent
% iterations
% 'bound' a real number indicating an error lower bound
% 'eta' a positive real number indicating the learning rate of GD algorithm
%
% OUTPUTS:
% 'resultTable' an (iters+1)-by-6 matrix containing the error, partial
% derivatives and weights for each GD iteration
% 'boundIter' a positive integer specifying the GD iteration when the error
% function got below the given error bound 'bound'
%
% The error function
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
% Partial derivative of E with respect to u
pEpu = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(exp(v) + 2*v*exp(-u));
% Partial derivative of E with respect to v
pEpv = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(u*exp(v) - 2*exp(-u));
% Initialize boundIter
boundIter = 0;
% Create a table for holding the results
resultTable = zeros(iters+1, 6);
% Iteration number
resultTable(1, 1) = 0;
% Error at iteration i
resultTable(1, 2) = E(w(1), w(2));
% The value of pEpu at initial w = (u,v)
resultTable(1, 3) = pEpu(w(1), w(2));
% The value of pEpv at initial w = (u,v)
resultTable(1, 4) = pEpv(w(1), w(2));
% Initial u
resultTable(1, 5) = w(1);
% Initial v
resultTable(1, 6) = w(2);
% Loop all the iterations
for i = 2:iters+1
    % Save the iteration number
    resultTable(i, 1) = i-1;
    % Update the weights
    temp1 = w(1) - eta*(pEpu(w(1), w(2)));
    temp2 = w(2) - eta*(pEpv(w(1), w(2)));
    w(1) = temp1;
    w(2) = temp2;
    % Evaluate the error function at new weights
    resultTable(i, 2) = E(w(1), w(2));
    % Evaluate pEpu at the new point
    resultTable(i, 3) = pEpu(w(1), w(2));
    % Evaluate pEpv at the new point
    resultTable(i, 4) = pEpv(w(1), w(2));
    % Save the new weights
    resultTable(i, 5) = w(1);
    resultTable(i, 6) = w(2);
    % If the error function is below a specified bound, save this iteration
    % index
    if E(w(1), w(2)) < bound
        boundIter = i-1;
    end
end
This is an exercise in my machine learning course, but for some reason my results are all wrong. There must be something wrong in the code. I have tried debugging it over and over and haven't found anything wrong. Can someone identify my problem here? In other words, can you check that the code is a valid gradient descent algorithm for the given function?
Please let me know if my question is too unclear or if you need more info :)
Thank you for your effort and help! =)
Here are my results for five iterations and what other people got:
PARAMETERS: w = [2,2], eta = 0.01, bound = 10^-14, iters = 5
As discussed below the question: I would say the others are wrong; your minimization leads to smaller values of E(u,v). Check:
E(1.4,1.6) = 37.8 >> 3.6 = E(0.63, -1.67)
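You can verify those numbers directly (a quick check, using the same E as in the question):
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
E(1.4, 1.6)    % ans is approximately 37.8
E(0.63, -1.67) % ans is approximately 3.6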
Not a complete answer, but let's go for it:
I added a plotting part to your code, so you can see what's going on.
% NB: the question's E uses scalar operators; redefine it element-wise
% so it can be evaluated on vectors and meshgrid matrices:
E = @(u,v) (u.*exp(v) - 2*v.*exp(-u)).^2;
u1 = resultTable(:,5);
v1 = resultTable(:,6);
E1 = E(u1,v1);
E1(E1<bound) = NaN;
[x,y] = meshgrid(-1:0.1:5, -5:0.1:2);
Z = E(x,y);
surf(x,y,Z)
hold on
plot3(u1,v1,E1,'r')
plot3(u1,v1,E1,'r*')
The result shows that your algorithm is doing the right thing for that function. So, as others said, either all the others are wrong, or you are not using the right equation from the beginning.
(I apologize for not just commenting, but I'm new to SO and cannot comment.)
It appears that your algorithm is doing the right thing. What you want to be sure of is that at each step the energy is shrinking (which it is). There are several reasons why your data points may not agree with the others in the class: they could be wrong (you or the others), they perhaps started at a different point, or they perhaps used a different step size (what you are calling eta, I believe).
Ideally, you don't want to hard-code the number of iterations. You want to continue until you reach a local minimum (which hopefully is the global minimum). To check this, you want both partial derivatives to be zero (or very close). In addition, to make sure you're at a local min (not a local max or saddle point) you should check the sign of E_uu*E_vv - E_uv^2 and the sign of E_uu; see http://en.wikipedia.org/wiki/Second_partial_derivative_test for details (the second derivative test, at the top). If you find yourself at a local max or saddle point, your gradient will tell you not to move (since the partial derivatives are 0). Since you know this isn't optimal, you have to perturb your solution (sometimes called simulated annealing).
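For illustration, here is a minimal sketch of such a convergence-based loop, reusing w, eta, pEpu and pEpv from the question (tol and maxIters are parameters I introduce here, not from the original code):
tol = 1e-10;    % gradient-norm tolerance (assumed value)
maxIters = 1e5; % safety cap so the loop always terminates
for i = 1:maxIters
    gu = pEpu(w(1), w(2));
    gv = pEpv(w(1), w(2));
    % stop when both partial derivatives are (nearly) zero
    if sqrt(gu^2 + gv^2) < tol, break; end
    w(1) = w(1) - eta*gu;
    w(2) = w(2) - eta*gv;
end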
Hope this helps.
Related
This code is a programming assignment for Andrew Ng's machine learning course.
The function is supposed to return two values, [J grad]. The code computes J (albeit wrongly, but that's not the issue here), and I put in a dummy value for grad (because I haven't written the code to compute it yet). When I run the code, it only outputs ans as a scalar with the value of J. Where did grad go?
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTION(nn_params, hidden_layer_size, num_labels, ...
% X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be an "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
% variable J. After implementing Part 1, you can verify that your
% cost function computation is correct by verifying the cost
% computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
% Theta1_grad and Theta2_grad. You should return the partial derivatives of
% the cost function with respect to Theta1 and Theta2 in Theta1_grad and
% Theta2_grad, respectively. After implementing Part 2, you can check
% that your implementation is correct by running checkNNGradients
%
% Note: The vector y passed into the function is a vector of labels
% containing values from 1..K. You need to map this vector into a
% binary vector of 1's and 0's to be used with the neural network
% cost function.
%
% Hint: We recommend implementing backpropagation using a for-loop
% over the training examples if you are implementing it for the
% first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
% Hint: You can implement this around the code for
% backpropagation. That is, you can compute the gradients for
% the regularization separately and then add them to Theta1_grad
% and Theta2_grad from Part 2.
%
% PART 1
a1 = [ones(m,1) X]; % set a1 to equal X and add column of 1's
z2 = a1 * Theta1'; % matrix times matrix [5000*401 * 401*25 = 5000*25]
a2 = [ones(m,1),sigmoid(z2)]; % sigmoid function on matrix [5000*26]
z3 = a2 * Theta2'; % matrix times matrix [5000*26 * 26*10 = 5000 * 10]
hox = sigmoid(z3); % sigmoid function on matrix [5000*10]
for k = 1:num_labels
    yk = y == k; % using the correct column vector y each loop
    J = J + sum(-yk.*log(hox(:,k)) - (1-yk).*log(1-hox(:,k)));
end
J = 1/m * J;
% -------------------------------------------------------------
% =========================================================================
% Unroll gradients
% grad = [Theta1_grad(:) ; Theta2_grad(:)];
grad = 6.6735;
end
You have specified in your function declaration that the function can simultaneously return more than one output value:
function [J grad] = nnCostFunction(nn_params, ... % etc
You can capture both outputs if you 'request' them by assigning to a comma-separated list of output variables instead of a single variable:
[a, b] = nnCostFunction(input1, input2, etc)
If you don't do this, you're essentially 'requesting' only the first of the returned variables:
a = nnCostFunction(input1, input2, etc) % output 'b' is discarded.
If you don't specify a variable to assign to at all, Octave by default assigns the first output to the default variable ans. So it's essentially equivalent to doing
ans = nnCostFunction(input1, input2, etc) % output 'b' is discarded.
See the documentation for the find function (i.e. type help find in your Octave terminal) for an example of such a function.
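For instance, find really does behave this way (a quick illustration, with A any matrix):
idx = find(A > 0)        % one output: linear indices only
[row, col] = find(A > 0) % two outputs: row and column subscripts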
PS. If you only wanted the second output and did not want to 'waste' a variable name for the first one, you can do this by specifying ~ as the first output, e.g.:
[~, b] = nnCostFunction(input1, input2, etc) % output 'a' is discarded
I have a black box function, f(x) and a range of values for x.
I need to find the lowest value of x for which f(x) = 0.
I know that for the start of the range of x, f(x) > 0, and if I had a value for which f(x) < 0 I could use regula falsi, or similar root-finding methods, to try to determine where f(x) = 0.
I know f(x) is continuous, and should only have 0,1 or 2 roots for the range in question, but it might have a local minimum.
f(x) is somewhat computationally expensive, and I'll have to find this first root a lot.
I was thinking of some kind of hill climbing with a degree of randomness to avoid any local minima, but then how do you know if there was no minimum less than zero or if you just haven't found it yet? I think the function shouldn't have more than two minimum points, but I can't be certain enough of that to rely on it.
If it helps, x in this case represents a time, and f(x) represents the distance between a ship and a body in orbit (moon/planet) at that time. I need the first point where they are a certain distance from each other.
My method will sound pretty complicated, but in the end the computation time of the method will be far smaller than the distance calculations (the evaluations of your f(x)). Also, there are quite a few implementations of it already written up in existing libraries.
So what I would do:
approximate f(x) with a Chebychev polynomial
find the real roots of that polynomial
If any are found, use those roots as initial estimates in a more precise root finder (if needed)
Given the nature of your function (smooth, continuous, otherwise well-behaved) and the information that there's 0,1 or 2 roots, a good Chebychev polynomial can already be found with 3 evaluations of f(x).
Then find the eigenvalues of the companion matrix of the Chebychev coefficients; these correspond to the roots of the Chebychev polynomial.
If all eigenvalues are complex, there are no real roots.
If there are some real roots, check if two are equal (that "rare" case you spoke of).
Otherwise, all real eigenvalues are roots; the lowest one of which is the root you seek.
Then use Newton-Raphson to refine (if necessary, or use a better Chebyshev polynomial). Derivatives of f can be approximated using central differences:
f'(x) ≈ ( f(x+h) - f(x-h) ) / (2*h)    (for small h)
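As a sketch, such a refinement step could look like this (a hypothetical helper, not part of the routine below; f is a function handle, x0 the estimate from the Chebyshev step, h the finite-difference step):
function x = refine_root(f, x0, h, tol, maxiter)
    % Newton-Raphson using a central-difference derivative
    x = x0;
    for k = 1:maxiter
        fx = f(x);
        if abs(fx) < tol, break; end     % close enough to a root
        dfx = (f(x+h) - f(x-h)) / (2*h); % central-difference derivative
        x = x - fx/dfx;                  % Newton-Raphson update
    end
end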
I have an implementation of the Chebychev routines in Matlab/Octave (given below). Use like this:
R = FindRealRoots(@f, x_min, x_max, 5, true, true);
with [x_min,x_max] your range in x, 5 the number of points used to find the polynomial (the higher, the more accurate; this equals the number of function evaluations needed), and the last true makes a plot of the actual function and the Chebyshev approximation to it (mainly for testing purposes).
Now, the implementation:
% FINDREALROOTS Find approximations to all real roots of any function
% on an interval [a, b].
%
% USAGE:
% Roots = FindRealRoots(funfcn, a, b, n, vectorized, make_plot)
%
% FINDREALROOTS() approximates all the real roots of the function 'funfcn'
% in the interval [a,b]. It does so by finding the roots of an [n]-th degree
% Chebyshev polynomial approximation, via the eigenvalues of the associated
% companion matrix.
%
% When the argument [vectorized] is [true], FINDREALROOTS() will evaluate
% the function 'funfcn' at all [n] required points in the interval
% simultaneously. Otherwise, it will use ARRAFUN() to calculate the [n]
% function values one-by-one. [vectorized] defaults to [false].
%
% When the argument [make_plot] is true, FINDREALROOTS() plots the
% original function and the Chebyshev approximation, and shows any roots on
% the given interval. Also [make_plot] defaults to [false].
%
% All [Roots] (if any) will be sorted.
%
% First version 26th May 2007 by Stephen Morris,
% Nightingale-EOS Ltd., St. Asaph, Wales.
%
% Modified 14/Nov (Rody Oldenhuis)
%
% See also roots, eig.
function Roots = FindRealRoots(funfcn, a, b, n, vectorized, make_plot)
% parse input and initialize.
inarg = nargin;
if n <= 2, n = 3; end % Minimum [n] is 3:
if (inarg < 5), vectorized = false; end % default: function isn't vectorized
if (inarg < 6), make_plot = false; end % default: don't make plot
% some convenient variables
bma = (b-a)/2; bpa = (b+a)/2; Roots = [];
% Obtain the Chebyshev coefficients for the function
%
% Based on the routine given in Numerical Recipes (3rd) section 5.8;
% calculates the Chebyshev coefficients necessary to approximate some
% function over the interval [a,b]
% initialize
c = zeros(1,n); k=(1:n)'; y = cos(pi*((1:n)-1/2)/n);
% evaluate function on Chebychev nodes
if vectorized
f = feval(funfcn,(y*bma)+bpa);
else
f = arrayfun(@(x) feval(funfcn,x),(y*bma)+bpa);
end
% compute the coefficients
for j=1:n, c(j)=(f(:).'*(cos((pi*(j-1))*((k-0.5)/n))))*(2-(j==1))/n; end
% coefficients may be [NaN] if [inf]
% ??? TODO - it is of course possible for c(n) to be zero...
if any(~isfinite(c(:))) || (c(n) == 0), return; end
% Define [A] as the Frobenius-Chebyshev companion matrix. This is based
% on the form given by J.P. Boyd, Appl. Num. Math. 56 pp.1077-1091 (2006).
one = ones(n-3,1);
A = diag([one/2; 0],-1) + diag([1; one/2],+1);
A(end, :) = -c(1:n-1)/2/c(n);
A(end,end-1) = A(end,end-1) + 0.5;
% Now we have the companion matrix, we can find its eigenvalues using the
% MATLAB built-in function. We're only interested in the real elements of
% the matrix:
eigvals = eig(A); realvals = eigvals(imag(eigvals)==0);
% if there aren't any real roots, return
if isempty(realvals), return; end
% Of course these are the roots scaled to the canonical interval [-1,1]. We
% need to map them back onto the interval [a, b]; we widen the interval just
% a tiny bit to make sure that we don't miss any that are right on the
% boundaries.
rangevals = nonzeros(realvals(abs(realvals) <= 1+1e-5));
% also sort the roots
Roots = sort(rangevals*bma + bpa);
% As a sanity check we'll plot out the original function and its Chebyshev
% approximation: if they don't match then we know to call the routine again
% with a larger 'n'.
if make_plot
% simple grid
grid = linspace(a,b, max(25,n));
% evaluate function
if vectorized
fungrid = feval(funfcn, grid);
else
fungrid = arrayfun(@(x) feval(funfcn,x), grid);
end
% corresponding Chebychev-grid (more complicated but essentially the same)
y = (2.*grid-a-b)./(b-a); d = zeros(1,length(grid)); dd = d;
for j = length(c):-1:2, sv = d; d = (2*y.*d)-dd+c(j); dd = sv; end
chebgrid = (y.*d)-dd+c(1);
% Now make plot
figure(1), clf, hold on
plot(grid, fungrid ,'color' , 'r');
line(grid, chebgrid,'color' , 'b');
line(grid, zeros(1,length(grid)), 'linestyle','--')
legend('function', 'interpolation')
end % make plot
end % FindRealRoots
You could use the secant method, which is a discrete version of Newton's method.
The root is estimated by calculating the line between two points (= the secant) and its crossing of the x-axis.
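In code, a minimal secant iteration could look like this (a sketch; x0 and x1 are two starting guesses, tol and maxiter are assumed parameters):
function x = secant_root(f, x0, x1, tol, maxiter)
    for k = 1:maxiter
        f0 = f(x0); f1 = f(x1);
        x = x1 - f1*(x1 - x0)/(f1 - f0); % where the secant line crosses y = 0
        if abs(x - x1) < tol, break; end
        x0 = x1; x1 = x;
    end
end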
Your function has only 0, 1 or 2 roots, so it can be done with an algorithm that doesn't guarantee finding the first root directly:
Find one root using Newton's method or some other method. If no root can be found, this algorithm also gives up.
Let the found root be r and the beginning of the range of x be x0. Let d = (r - x0)/2.
While d > 0, calculate f(r-d). If f(r-d) > 0, halve d (d := d/2) and loop. If f(r-d) <= 0, exit the loop.
If the loop finished with d = 0, report r as the first root. If d > 0, find a root between x0 and r-d using any other method and report it.
I assumed two prerequisite conditions:
f(x) takes x of floating point numbers
At each root of f(x), the graph of f(x) crosses the x-axis; there are no touching roots like x = 0 in f(x) = x^2.
Using condition 2, you can prove that if there is no d such that f(r-d) < 0, then f(x) > 0 for all x with x0 < x < r.
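A rough MATLAB sketch of that search (assuming a root r has already been found, e.g. with fzero, and that f(x0) > 0):
d = (r - x0)/2;
while d > 0
    if f(r - d) > 0
        d = d/2;  % still positive: move closer to r
    else
        break;    % sign change: an earlier root lies in [x0, r-d]
    end
end
if d == 0
    first_root = r;                     % no earlier sign change found
else
    first_root = fzero(f, [x0, r - d]); % this bracket contains a root
end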
You could make a small change to the uniroot.all function from the R library rootSolve.
uniroot.all <- function (f, interval, lower= min(interval),
upper= max(interval), tol= .Machine$double.eps^0.2,
maxiter= 1000, n = 100, nroots = -1, ... ) {
## error checking as in uniroot...
if (!missing(interval) && length(interval) != 2)
stop("'interval' must be a vector of length 2")
if (!is.numeric(lower) || !is.numeric(upper) || lower >=
upper)
stop("lower < upper is not fulfilled")
## subdivide interval in n subintervals and estimate the function values
xseq <- seq(lower,upper,len=n+1)
mod <- f(xseq,...)
## some function values may already be 0
Equi <- xseq[which(mod==0)]
ss <- mod[1:n]*mod[2:(n+1)] # intervals where function values change sign
ii <- which(ss<0)
for (i in ii) {
Equi <- c(Equi, uniroot(f, lower = xseq[i], upper = xseq[i+1] ,...)$root)
if (length(Equi) == nroots) {
return(Equi)
}
}
return(Equi)
}
And run it like this:
uniroot.all(f = your_function, interval = c(start, stop), nroots = 1)
I have been given an assignment in which I am supposed to write an algorithm which performs polynomial interpolation by the barycentric formula. The formula states that:
p(x) = sum_{j=0..n} [ w(j)*f(j) / (x - x(j)) ] / sum_{j=0..n} [ w(j) / (x - x(j)) ]
I have written an algorithm which works just fine, and I get the polynomial output I desire. However, this requires some quite long loops, and for a large grid number, lots of nasty loop operations will have to be done. Thus, I would greatly appreciate any hints as to how I may improve this, so that I can avoid all these loops.
In the algorithm, x and f stand for the given points we are supposed to interpolate. w stands for the barycentric weights, which have been calculated before running the algorithm. And grid is the linspace over which the interpolation should take place:
function p = barycentric_formula(x,f,w,grid)
% Assert that the x- and f-vectors have the same length.
if length(x) ~= length(f)
    disp('Not equal amounts of x- and y-values. Function is terminated.')
    return;
end
n = length(x);
m = length(grid);
p = zeros(1,m);
% Loops for finding polynomial values at grid points. All values are
% calculated by the barycentric formula.
for i = 1:m
    var = 0;
    sum1 = 0;
    sum2 = 0;
    for j = 1:n
        if grid(i) == x(j)
            p(i) = f(j);
            var = 1;
        else
            sum1 = sum1 + (w(j)*f(j))/(grid(i) - x(j));
            sum2 = sum2 + (w(j)/(grid(i) - x(j)));
        end
    end
    if var == 0
        p(i) = sum1/sum2;
    end
end
This is a classical case for MATLAB 'vectorization'. I would say: just remove the loops. It is almost that simple. First, have a look at this code:
function p = bf2(x, f, w, grid)
m = length(grid);
p = zeros(1,m);
for i = 1:m
    var = grid(i)==x;
    if any(var)
        p(i) = f(var);
    else
        sum1 = sum((w.*f)./(grid(i) - x));
        sum2 = sum(w./(grid(i) - x));
        p(i) = sum1/sum2;
    end
end
end
I have removed the inner loop over j. All I did here was in fact removing the (j) indexing and changing the arithmetic operators from / to ./ and from * to .* - the same, but with a dot in front to signify that the operation is performed on element by element basis. This is called array operators in contrast to ordinary matrix operators. Also note that treating the special case where the grid points fall onto x is very similar to what you had in the original implementation, only using a vector var such that x(var)==grid(i).
Now, you can also remove the outermost loop. This is a bit more tricky and there are two major approaches to doing that in MATLAB. I will do it the simpler way, which can be less efficient but clearer to read - using repmat:
function p = bf3(x, f, w, grid)
% Find grid points that coincide with x.
% The below compares all grid values with all x values
% and returns a matrix of 0/1. 1 is in the (row,col)
% for which grid(row)==x(col)
var = bsxfun(@eq, grid', x);
% find the logical indexes of those x entries
varx = sum(var, 1)~=0;
% and of those grid entries
varp = sum(var, 2)~=0;
% Outer-most loop removal - use repmat to
% replicate the vectors into matrices.
% Thus, instead of having a loop over j
% you have matrices of values that would be
% referenced in the loop
ww = repmat(w, numel(grid), 1);
ff = repmat(f, numel(grid), 1);
xx = repmat(x, numel(grid), 1);
gg = repmat(grid', 1, numel(x));
% perform the calculations element-wise on the matrices
sum1 = sum((ww.*ff)./(gg - xx),2);
sum2 = sum(ww./(gg - xx),2);
p = sum1./sum2;
% fix the case where grid==x and return
p(varp) = f(varx);
end
The fully vectorized version can be implemented with bsxfun rather than repmat. This can potentially be a bit faster, since the matrices are not explicitly formed. However, the speed difference may not be large for small system sizes.
Also, the first solution with one loop is not too bad performance-wise. I suggest you test both and see what is better. Maybe it is not worth it to fully vectorize? The first code looks a bit more readable.
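For reference, a fully vectorized sketch along those lines (untested; the same logic as bf3, but without explicitly replicating the matrices):
function p = bf4(x, f, w, grid)
    var  = bsxfun(@eq, grid', x);    % 1 where grid(row)==x(col)
    varx = any(var, 1);              % x entries hit by a grid point
    varp = any(var, 2);              % grid entries that hit an x
    dgx  = bsxfun(@minus, grid', x); % grid(i) - x(j) for all i,j
    sum1 = sum(bsxfun(@rdivide, w.*f, dgx), 2);
    sum2 = sum(bsxfun(@rdivide, w,    dgx), 2);
    p = (sum1./sum2).';
    p(varp) = f(varx);               % fix the coinciding points
end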
Statement of Problem:
I have an array M with m rows and n columns. The array M is filled with non-zero elements.
I also have a vector t with n elements, and a vector omega
with m elements.
The elements of t correspond to the columns of matrix M.
The elements of omega correspond to the rows of matrix M.
Goal of Algorithm:
Define chi as the product of the vectors t and omega (so each element of chi corresponds to an element of M). I need to obtain a 1D vector a, where each element of a is a function of chi.
Each element of chi is unique (i.e. every element is different).
Using mathematical notation, this can be expressed as a(chi)
Each element of vector a corresponds to an element or elements of M.
Matlab code:
Here is a code snippet showing how the vectors t and omega are generated. The matrix M is pre-existing.
[m,n] = size(M);
t = linspace(0,5,n);
omega = linspace(0,628,m);
Conceptual Diagram:
This appears to be a type of integration (if this is the right word for it) along constant chi.
Reference:
Link to reference
The algorithm is not explicitly stated in the reference. I only wish that this algorithm was described in a manner reminiscent of computer science textbooks!
Looking at Figure 11.5, the matrix M is Figure 11.5(a). The goal is to find an algorithm to convert Figure 11.5(a) into 11.5(b).
It appears that the algorithm is a type of integration (averaging, perhaps?) along constant chi.
It appears to me that reshape is the MATLAB function you need to use. As noted in the link:
B = reshape(A,siz) returns an n-dimensional array with the same elements as A, but reshaped to siz, a vector representing the dimensions of the reshaped array.
That is, create a vector siz with the number m*n in it, and say A = reshape(P,siz), where P is the product of vectors t and ω; or perhaps say something like A = reshape(t*ω,[m*n]). (I don't have matlab here, or would run a test to see if I have the product the right way around.) Note, the link does not show an example with one number (instead of several) after the matrix parameter to reshape, but I would expect from the description that A = reshape(t*ω,m*n) might also work.
You should add pseudocode or a link to the algorithm you want to implement. From what I could understand, I have developed the following code anyway:
M = [1 2 3 4; 5 6 7 8; 9 10 11 12]' % easy test M matrix
a = reshape(M, prod(size(M)), 1) % convert M to vector 'a' with reshape command
[m,n] = size(M); % Your sample code
t = linspace(0,5,n); % Your sample code
omega = linspace(0,628,m); % Your sample code
for i = 1:length(t)
    for j = 1:length(omega) % access a(chi) in the desired order
        chi = length(omega)*(i-1)+j;
        t(i)     % related t value
        omega(j) % related omega value
        a(chi)   % related a(chi) value
    end
end
As you can see, I also think that the reshape() function is the solution to your problems. I hope that this code helps.
The basic idea is to use two separate loops. The outer loop is over the chi variable values, whereas the inner loop is over the i variable values. Referring to the above diagram in the original question, the i variable corresponds to the x-axis (time), and the j variable corresponds to the y-axis (frequency). Assuming that the chi, i, and j variables can take on any real number, bilinear interpolation is then used to find an amplitude corresponding to an element in matrix M. The integration is just an averaging over elements of M.
The following code snippet provides an overview of the basic algorithm to express elements of a matrix as a vector using the spectral collapsing from 2D to 1D. I can't find any reference for this, but it is a solution that works for me.
% Amp = amplitude vector corresponding to Figure 11.5(b) in book reference
% M = matrix corresponding to the absolute value of the complex Gabor transform
% matrix in Figure 11.5(a) in book reference
% Nchi = number of chi in chi vector
% prod = product of timestep and frequency step
% dt = time step
% domega = frequency step
% omega_max = maximum angular frequency
% i = time array element along x-axis
% j = frequency array element along y-axis
% current_i = current time array element in loop
% current_j = current frequency array element in loop
% Nchi = number of chi
% Nivar = number of i variables
% ivar = i variable vector
% calculate for chi = 0, which only occurs when
% t = 0 and omega = 0, at i = 1
av0 = mean( M(1,:) );
av1 = mean( M(2:end,1) );
av2 = mean( [av0 av1] );
Amp(1) = av2;
% av_val holds the sum of all values that have been averaged
av_val_sum = 0;
% loop for rest of chi
for ccnt = 2:Nchi
    av_val_sum = 0; % reset av_val_sum
    current_chi = chi( ccnt ); % current value of chi
    % loop over i vector
    for icnt = 1:Nivar
        current_i = ivar( icnt );
        current_j = (current_chi / (prod * (current_i - 1))) + 1;
        current_t = dt * (current_i - 1);
        current_omega = domega * (current_j - 1);
        % values out of range
        if(current_omega > omega_max)
            continue;
        end
        % use bilinear interpolation to find an amplitude
        % at current_t and current_omega from matrix M
        % f_x_y is the bilinear interpolated amplitude
        % Insert bilinear interpolation code here
        % add to running sum
        av_val_sum = av_val_sum + f_x_y;
    end % icnt loop
    % compute the average over all i
    av = av_val_sum / Nivar;
    % assign the average to Amp
    Amp(ccnt) = av;
end % ccnt loop
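For completeness, the bilinear interpolation placeholder could be filled in roughly like this (a hypothetical sketch; it assumes M is indexed as M(j,i), rows = frequency and columns = time, and that all four neighbours are in range):
i0 = floor(current_i); i1 = i0 + 1; % neighbouring time indices
j0 = floor(current_j); j1 = j0 + 1; % neighbouring frequency indices
wi = current_i - i0;                % fractional distance in i
wj = current_j - j0;                % fractional distance in j
f_x_y = (1-wi)*(1-wj)*M(j0,i0) + wi*(1-wj)*M(j0,i1) ...
      + (1-wi)*wj*M(j1,i0)     + wi*wj*M(j1,i1);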
I have 2 matrices: V which is square MxM, and K which is MxN. Calling the dimension across rows x and the dimension across columns t, I need to evaluate the integral (i.e sum) over both dimensions of K times a t-shifted version of V, the answer being a function of the shift (almost like a convolution, see below). The sum is defined by the following expression, where _{} denotes the summation indices, and a zero-padding of out-of-limits elements is assumed:
S(t) = sum_{x,tau}[V(x,t+tau) * K(x,tau)]
I managed to do it with a single loop, over the t dimension (vectorizing the x dimension):
% some toy matrices
V = rand(50,50);
K = rand(50,10);
[M N] = size(K);
S = zeros(1, M);
for t = 1 : N
    S(1,1:end-t+1) = S(1,1:end-t+1) + sum(bsxfun(@times, V(:,t:end), K(:,t)),1);
end
I have similar expressions which I managed to evaluate without a for loop, using a combination of conv2 and/or mirroring (flipping) of a single dimension. However, I can't see how to avoid a for loop in this case (despite the apparent similarity to convolution).
Steps to vectorization
1] Perform sum(bsxfun(@times, V(:,t:end), K(:,t)),1) for all columns in V against all columns in K with matrix-multiplication -
sum_mults = V.'*K
This would give us a 2D array with each column representing the sum(bsxfun(@times, ...)) operation from each iteration.
2] Step 1 gave us all possible summations, but the values to be summed are not aligned in the same row across iterations, so we need a bit more work before summing along rows. The rest of the work is about getting a shifted-up version. For this, you can use boolean indexing with upper and lower triangular boolean masks. Finally, we sum along each row for the final output. So, this part of the code would look like so -
valid_mask = tril(true(size(sum_mults)));
sum_mults_shifted = zeros(size(sum_mults));
sum_mults_shifted(flipud(valid_mask)) = sum_mults(valid_mask);
out = sum(sum_mults_shifted,2);
Runtime tests -
%// Inputs
V = rand(1000,1000);
K = rand(1000,200);
disp('--------------------- With original loopy approach')
tic
[M N] = size(K);
S = zeros(1, M);
for t = 1 : N
    S(1,1:end-t+1) = S(1,1:end-t+1) + sum(bsxfun(@times, V(:,t:end), K(:,t)),1);
end
toc
disp('--------------------- With proposed vectorized approach')
tic
sum_mults = V.'*K;
valid_mask = tril(true(size(sum_mults)));
sum_mults_shifted = zeros(size(sum_mults));
sum_mults_shifted(flipud(valid_mask)) = sum_mults(valid_mask);
out = sum(sum_mults_shifted,2);
toc
Output -
--------------------- With original loopy approach
Elapsed time is 2.696773 seconds.
--------------------- With proposed vectorized approach
Elapsed time is 0.044144 seconds.
This might be cheating (using arrayfun instead of a for loop) but I believe this expression gives you what you want:
S = arrayfun(@(t) sum(sum( V(:,(t+1):(t+N)) .* K )), 1:(M-N), 'UniformOutput', true)