This code is a programming assignment for Andrew Ng's machine learning course.
The function is expecting a row vector [J grad]. The code computes J (albeit wrongly, but that's not the issue here), and I put in a dummy value for grad (because I haven't written the code to compute it yet). When I run the code, it only outputs ans as a scalar with the value of J. Where did grad go?
function [J grad] = nnCostFunction(nn_params, ...
input_layer_size, ...
hidden_layer_size, ...
num_labels, ...
X, y, lambda)
%NNCOSTFUNCTION Implements the neural network cost function for a two layer
%neural network which performs classification
% [J grad] = NNCOSTFUNCTION(nn_params, input_layer_size, hidden_layer_size, ...
% num_labels, X, y, lambda) computes the cost and gradient of the neural network. The
% parameters for the neural network are "unrolled" into the vector
% nn_params and need to be converted back into the weight matrices.
%
% The returned parameter grad should be an "unrolled" vector of the
% partial derivatives of the neural network.
%
% Reshape nn_params back into the parameters Theta1 and Theta2, the weight matrices
% for our 2 layer neural network
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
num_labels, (hidden_layer_size + 1));
% Setup some useful variables
m = size(X, 1);
% You need to return the following variables correctly
J = 0;
Theta1_grad = zeros(size(Theta1));
Theta2_grad = zeros(size(Theta2));
% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the code by working through the
% following parts.
%
% Part 1: Feedforward the neural network and return the cost in the
% variable J. After implementing Part 1, you can verify that your
% cost function computation is correct by verifying the cost
% computed in ex4.m
%
% Part 2: Implement the backpropagation algorithm to compute the gradients
% Theta1_grad and Theta2_grad. You should return the partial derivatives of
% the cost function with respect to Theta1 and Theta2 in Theta1_grad and
% Theta2_grad, respectively. After implementing Part 2, you can check
% that your implementation is correct by running checkNNGradients
%
% Note: The vector y passed into the function is a vector of labels
% containing values from 1..K. You need to map this vector into a
% binary vector of 1's and 0's to be used with the neural network
% cost function.
%
% Hint: We recommend implementing backpropagation using a for-loop
% over the training examples if you are implementing it for the
% first time.
%
% Part 3: Implement regularization with the cost function and gradients.
%
% Hint: You can implement this around the code for
% backpropagation. That is, you can compute the gradients for
% the regularization separately and then add them to Theta1_grad
% and Theta2_grad from Part 2.
%
% PART 1
a1 = [ones(m,1) X]; % set a1 to equal X and add column of 1's
z2 = a1 * Theta1'; % matrix product [5000x401 * 401x25 = 5000x25]
a2 = [ones(m,1),sigmoid(z2)]; % sigmoid applied elementwise, bias column added [5000x26]
z3 = a2 * Theta2'; % matrix product [5000x26 * 26x10 = 5000x10]
hox = sigmoid(z3); % sigmoid applied elementwise [5000x10]
for k = 1:num_labels
yk = y == k; % binary vector: 1 where the label equals k, 0 elsewhere
J = J + sum(-yk.*log(hox(:,k)) - (1-yk).*log(1-hox(:,k)));
end
J = 1/m * J;
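% For reference, a vectorized equivalent of the loop and scaling above
% (a sketch, assuming the same hox; Y is an m-by-num_labels one-hot matrix):
%   Y = double(bsxfun(@eq, y(:), 1:num_labels));
%   J = (1/m) * sum(sum(-Y.*log(hox) - (1-Y).*log(1-hox)));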
% -------------------------------------------------------------
% =========================================================================
% Unroll gradients
% grad = [Theta1_grad(:) ; Theta2_grad(:)];
grad = 6.6735;
end
You have specified in your function declaration that the function can simultaneously return more than one output value:
function [J grad] = nnCostFunction(nn_params, ... % etc
You can capture both outputs if you 'request' them by assigning to a comma-separated list of variables in square brackets instead of a single variable:
[a, b] = nnCostFunction(input1, input2, etc)
If you don't do this, you're essentially 'requesting' only the first of the returned variables:
a = nnCostFunction(input1, input2, etc) % output 'b' is discarded.
If you don't specify a variable to assign to at all, Octave by default assigns the first output to the special variable ans. So it's essentially equivalent to doing
ans = nnCostFunction(input1, input2, etc) % output 'b' is discarded.
See the documentation for the find function (i.e. type help find in your Octave terminal) for an example of such a function.
PS. If you only wanted the second output and did not want to 'waste' a variable name for the first one, you can do this by specifying ~ as the first output, e.g.:
[~, b] = nnCostFunction(input1, input2, etc) % output 'a' is discarded
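Here is a minimal, self-contained illustration of the mechanics (minmax is a hypothetical toy function, not part of the assignment; save it as minmax.m):
function [lo, hi] = minmax(v)
lo = min(v); % first output
hi = max(v); % second output
end
Then at the prompt:
a = minmax([3 1 2]) % a = 1; the second output is silently discarded
[a, b] = minmax([3 1 2]) % a = 1, b = 3; both outputs captured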
I am using an RK4 algorithm:
function R=RK4_h(f,a,b,ya,h)
% Input
% - f: right-hand side of the ODE y'=f(t,y), passed as a function name string 'f'
% - a, b: initial and final times
% - ya: initial value y0
% - h: length of the step
% Output
% - R = [T' Y'], where T is the independent variable and Y the dependent variable
N = fix((b-a) / h);
T = zeros(1,N+1);
Y = zeros(1,N+1);
% Vector of the time values
T = a:h:b;
% Solving ordinary differential equation
Y(1) = ya;
for j = 1:N
k1 = h*feval(f,T(j),Y(j));
k2 = h*feval(f,T(j)+h/2,Y(j)+k1/2);
k3 = h*feval(f,T(j)+h/2,Y(j)+k2/2);
k4 = h*feval(f,T(j)+h,Y(j)+k3);
Y(j+1) = Y(j) + (k1+2*k2+2*k3+k4)/6;
end
R=[T' Y'];
In my main script I call it for every value as:
xlabel('x')
ylabel('y')
h=0.05;
fprintf ('\n First block \n');
xx = [0:h:1];
Nodes = length(xx);
yy = zeros(1,Nodes);
for i=1:Nodes
fp(i)=feval('edo',-1,xx(i));
end
E=RK4_h('edo',0,1,-1,h);
plot(E);
fprintf ('\n%f',E);
The problem is when I try to use RK4 algorithm with edo formula:
function edo = edo(y,t)
edo = 6*((exp(1))^(6*t))*(y-(2*t))^2+2;
The results are not logical; for example, the true values are y(0) = 8 and y(1) = 11.53, but the estimate is not close. Neither column of the E vector represents a feasible approximation for the problem, so I do not know if this is the correct implementation.
Is there a basic error in the implementation?
The function edo takes t as the first parameter, and y as the second parameter. You have the parameters reversed.
Your function should be:
function edo = edo(t,y) % NOT edo(y,t)
edo = 6*((exp(1))^(6*t))*(y-(2*t))^2+2;
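As a quick sanity check (a sketch; exp_ode is a hypothetical test file, not from your code), you can verify RK4_h against an ODE with a known solution, e.g. y' = y, y(0) = 1, whose exact solution is e^t:
% contents of exp_ode.m: function dy = exp_ode(t,y), dy = y; end
R = RK4_h('exp_ode', 0, 1, 1, 0.05);
abs(R(end,2) - exp(1)) % global error should be tiny (~1e-7 for h = 0.05)
With the parameters reversed as in your original edo, a test like this fails immediately, which makes the bug easy to spot.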
Hi, I'm trying to implement the gradient descent algorithm for the function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2 (as defined in the code below).
My starting point for the algorithm is w = (u,v) = (2,2). The learning rate is eta = 0.01 and bound = 10^-14. Here is my MATLAB code:
function [resultTable, boundIter] = gradientDescent(w, iters, bound, eta)
% FUNCTION [resultTable, boundIter] = gradientDescent(w, its, bound, eta)
%
% DESCRIPTION:
% - This function will do gradient descent error minimization for the
% function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
%
% INPUTS:
% 'w' a 1-by-2 vector indicating initial weights w = [u,v]
% 'its' a positive integer indicating the number of gradient descent
% iterations
% 'bound' a real number indicating an error lower bound
% 'eta' a positive real number indicating the learning rate of GD algorithm
%
% OUTPUTS:
% 'resultTable' a iters+1-by-6 table indicating the error, partial
% derivatives and weights for each GD iteration
% 'boundIter' a positive integer specifying the GD iteration when the error
% function got below the given error bound 'bound'
%
% The error function
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
% Partial derivative of E with respect to u
pEpu = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(exp(v) + 2*v*exp(-u));
% Partial derivative of E with respect to v
pEpv = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(u*exp(v) - 2*exp(-u));
% Initialize boundIter
boundIter = 0;
% Create a table for holding the results
resultTable = zeros(iters+1, 6);
% Iteration number
resultTable(1, 1) = 0;
% Error at iteration i
resultTable(1, 2) = E(w(1), w(2));
% The value of pEpu at initial w = (u,v)
resultTable(1, 3) = pEpu(w(1), w(2));
% The value of pEpv at initial w = (u,v)
resultTable(1, 4) = pEpv(w(1), w(2));
% Initial u
resultTable(1, 5) = w(1);
% Initial v
resultTable(1, 6) = w(2);
% Loop all the iterations
for i = 2:iters+1
% Save the iteration number
resultTable(i, 1) = i-1;
% Update the weights
temp1 = w(1) - eta*(pEpu(w(1), w(2)));
temp2 = w(2) - eta*(pEpv(w(1), w(2)));
w(1) = temp1;
w(2) = temp2;
% Evaluate the error function at new weights
resultTable(i, 2) = E(w(1), w(2));
% Evaluate pEpu at the new point
resultTable(i, 3) = pEpu(w(1), w(2));
% Evaluate pEpv at the new point
resultTable(i, 4) = pEpv(w(1), w(2));
% Save the new weights
resultTable(i, 5) = w(1);
resultTable(i, 6) = w(2);
% If the error function is below a specified bound save this iteration
% index
if E(w(1), w(2)) < bound
boundIter = i-1;
end
end
This is an exercise in my machine learning course, but for some reason my results are all wrong. There must be something wrong in the code. I have tried debugging it over and over and haven't found anything wrong. Can someone identify what my problem is here? In other words, can you check that the code is a valid gradient descent algorithm for the given function?
Please let me know if my question is too unclear or if you need more info :)
Thank you for your effort and help! =)
Here are my results for five iterations and what other people got:
PARAMETERS: w = [2,2], eta = 0.01, bound = 10^-14, iters = 5
As discussed below the question: I would say the others are wrong... your minimization leads to smaller values of E(u,v), check:
E(1.4,1.6) = 37.8 >> 3.6 = E(0.63, -1.67)
Not a complete answer, but let's go for it:
I added a plotting part to your code, so you can see what's going on.
u1 = resultTable(:,5);
v1 = resultTable(:,6);
% elementwise version of E (the original uses scalar * and ^, which fail for vector inputs)
Ev = @(u,v) (u.*exp(v) - 2*v.*exp(-u)).^2;
E1 = Ev(u1,v1);
E1(E1<bound) = NaN;
[x,y] = meshgrid(-1:0.1:5, -5:0.1:2); Z = Ev(x,y);
surf(x,y,Z)
hold on
plot3(u1,v1,E1,'r')
plot3(u1,v1,E1,'r*')
The result shows that your algorithm is doing the right thing for that function. So, as others said, either all the others are wrong, or you are not using the right equation from the beginning.
(I apologize for not just commenting, but I'm new to SO and cannot comment.)
It appears that your algorithm is doing the right thing. What you want to be sure of is that at each step the energy is shrinking (which it is). There are several reasons why your data points may not agree with the others in the class: they could be wrong (yours or the others'), they perhaps started at a different point, or they perhaps used a different step size (what you are calling eta, I believe).
Ideally, you don't want to hard-code the number of iterations. You want to continue until you reach a local minimum (which hopefully is the global minimum). To check this, you want both partial derivatives to be zero (or very close). In addition, to make sure you're at a local min (not a local max or saddle point), you should check the sign of E_uu*E_vv - E_uv^2 and the sign of E_uu; see http://en.wikipedia.org/wiki/Second_partial_derivative_test for details (the second derivative test, at the top). If you find yourself at a local max or saddle point, your gradient will tell you not to move (since the partial derivatives are 0). Since you know this isn't optimal, you have to just perturb your solution (sometimes called simulated annealing).
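For example, a sketch of such a gradient-norm stopping rule, reusing your pEpu/pEpv handles (gradTol and maxIters are illustrative names, not from your code):
gradTol = 1e-10; maxIters = 100000;
for i = 1:maxIters
gu = pEpu(w(1), w(2));
gv = pEpv(w(1), w(2));
if sqrt(gu^2 + gv^2) < gradTol, break; end % close enough to a stationary point
w = w - eta*[gu, gv]; % simultaneous update of both weights
end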
Hope this helps.
I'm quite new to Matlab and I need help in speeding up some part of my code. I am writing a Matlab application that performs 3D matrix convolution, but unlike standard convolution, the kernel is not constant: it needs to be calculated for each pixel of an image.
So far, I have ended up with a working code, but incredibly slow:
function result = calculateFilteredImages(images, T)
% images - matrix [480,360,10] of 10 grayscale images of height=480 and width=360
% represented as a value in a range [0..1]
% i.e. images(10,20,5) = 0.1231;
% T - some matrix [480,360,10, 3,3] of double values, calculated earlier
kerN = 5; %kernel size
mid=floor(kerN/2); %half the kernel size
offset=mid+1; %kernel offset
[h,w,n] = size(images);
%add padding so as not to get IndexOutOfBoundsEx during summation:
%[i.e. changes [1 2 3...10] to [0 0 1 2 ... 10 0 0]]
images = padarray(images,[mid, mid, mid]);
result(h,w,n)=0; %preallocate, faster than zeros(h,w,n)
kernel(kerN,kerN,kerN)=0; %preallocate
% the three parameters below are not important in this problem
% (are used to calculate sigma in x,y,z direction inside the loop)
sigMin=0.5;
sigMax=3;
d = 3;
for a=1:n;
tic;
for b=1:w;
for c=1:h;
M(:,:)=T(c,b,a,:,:); % M is now a 3x3 matrix
[R D] = eig(M); %get eigenvectors and eigenvalues - R and D are now 3x3 matrices
% eigenvalues
l1 = D(1,1);
l2 = D(2,2);
l3 = D(3,3);
sig1=sig( l1 , sigMin, sigMax, d);
sig2=sig( l2 , sigMin, sigMax, d);
sig3=sig( l3 , sigMin, sigMax, d);
% calculate kernel
for i=-mid:mid
for j=-mid:mid
for k=-mid:mid
x_new = [i,j,k] * R; %calculate new [i,j,k]
kernel(offset+i, offset+j, offset+k) = exp(- (((x_new(1))^2 )/(sig1^2) + ((x_new(2))^2)/(sig2^2) + ((x_new(3))^2)/(sig3^2)) /2);
end
end
end
% normalize
kernel=kernel/sum(kernel(:));
%perform summation
xm_sum=0;
for i=-mid:mid
for j=-mid:mid
for k=-mid:mid
xm_sum = xm_sum + kernel(offset+i, offset+j, offset+k) * images(c+mid+i, b+mid+j, a+mid+k);
end
end
end
result(c,b,a)=xm_sum;
end
end
toc;
end
end
I tried replacing the "calculating kernel" part with
sigma=[sig1 sig2 sig3]
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
k2 = arrayfun(@(x, y, z) exp(-(norm([x,y,z]*R./sigma)^2)/2), x,y,z);
but it turned out to be even slower than the loop. I went through several articles and tutorials on vectorization but I'm quite stuck with this one.
Can it be vectorized or somehow sped up using something else?
I'm new to Matlab; maybe there are some built-in functions that could help in this case?
Update
The profiling result:
Sample data which was used during profiling:
T.mat
grayImages.mat
As Dennis noted, this is a lot of code; cutting it down to the minimal slow part identified by the profiler will help. I'm not sure if my code is equivalent to yours; can you try it and profile it? The 'trick' to Matlab vectorization is using .* and .^, which operate element-by-element instead of requiring loops. http://www.mathworks.com/help/matlab/ref/power.html
Take your rewritten part:
sigma=[sig1 sig2 sig3]
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
k2 = arrayfun(@(x, y, z) exp(-(norm([x,y,z]*R./sigma)^2)/2), x,y,z);
And just pick one sigma for now. Looping over 3 different sigmas isn't a performance problem if you can vectorize the underlying k2 formula.
EDIT: Changed the matrix_to_norm code to be x(:), and no commas. See Generate all possible combinations of the elements of some vectors (Cartesian product)
Then try:
% R & mid my test variables
R = [1 2 3; 4 5 6; 7 8 9];
mid = 5;
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
% meshgrid is also a possibility, check that you are getting the order you want
% Going to break the equation apart for now for clarity
% Matrix operation, should already be fast.
matrix_to_norm = [x(:) y(:) z(:)]*R/sig1
% Row-wise 2-norms: norm() alone would return a single matrix norm,
% not one norm per row, so sum the squares along dimension 2 instead
matrix_normed = sqrt(sum(matrix_to_norm.^2, 2))
% Note the .^ - I believe you want element-by-element exponentiation, this will
% vectorize it.
k2 = exp(-0.5*(matrix_normed.^2))
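Extending that idea to all three sigmas at once (a sketch, assuming R, sig1..sig3, mid and kerN from the original loop): compute the rotated coordinates for every kernel element in a single matrix product, take the squared row norms with a sum along dimension 2, and reshape back into a cube:
sigma = [sig1 sig2 sig3];
[x,y,z] = ndgrid(-mid:mid, -mid:mid, -mid:mid);
x_new = [x(:) y(:) z(:)] * R; % one (kerN^3)-by-3 product replaces the triple loop
k2 = exp(-0.5 * sum(bsxfun(@rdivide, x_new, sigma).^2, 2)); % row-wise weighted squared norms
kernel = reshape(k2, kerN, kerN, kerN);
kernel = kernel / sum(kernel(:)); % normalize as in the original code
Since ndgrid varies its first output fastest along the first dimension, this reshape reproduces the same kernel(offset+i, offset+j, offset+k) layout as the loops.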
I have a black box function, f(x) and a range of values for x.
I need to find the lowest value of x for which f(x) = 0.
I know that for the start of the range of x, f(x) > 0, and if I had a value for which f(x) < 0 I could use regula falsi, or similar root finding methods, to try determine f(x)=0.
I know f(x) is continuous, and should only have 0,1 or 2 roots for the range in question, but it might have a local minimum.
f(x) is somewhat computationally expensive, and I'll have to find this first root a lot.
I was thinking of some kind of hill climbing with a degree of randomness to avoid any local minima, but then how do you know whether there was no minimum less than zero or whether you just haven't found it yet? I think the function shouldn't have more than two minimum points, but I can't be certain enough of that to rely on it.
If it helps, x in this case represents a time, and f(x) represents the distance between a ship and a body in orbit (moon/planet) at that time. I need the first point where they are a certain distance from each other.
My method will sound pretty complicated, but in the end its computation time will be far smaller than that of the distance calculations (the evaluations of your f(x)). Also, there are quite a few implementations of it already written up in existing libraries.
So what I would do:
approximate f(x) with a Chebychev polynomial
find the real roots of that polynomial
If any are found, use those roots as initial estimates in a more precise rootfinder (if needed)
Given the nature of your function (smooth, continuous, otherwise well-behaved) and the information that there are 0, 1 or 2 roots, a good Chebychev polynomial can already be found with 3 evaluations of f(x).
Then find the eigenvalues of the companion matrix of the Chebychev coefficients; these correspond to the roots of the Chebychev polynomial.
If all are imaginary, there's 0 roots.
If there are some real roots, check if two are equal (that "rare" case you spoke of).
Otherwise, all real eigenvalues are roots; the lowest one of which is the root you seek.
Then use Newton-Raphson to refine (if necessary, or use a better Chebychev polynomial). Derivatives of f can be approximated using central differences
f'(x) ≈ ( f(x+h) - f(x-h) ) / (2h)    (for small h)
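A sketch of that refinement step (assuming f is a function handle and x0 the root estimate from the Chebychev step; h, the tolerance and the iteration cap are illustrative choices):
x = x0; h = 1e-6;
for iter = 1:20
fx = f(x);
if abs(fx) < 1e-12, break; end % close enough to a root
dfx = (f(x+h) - f(x-h)) / (2*h); % central-difference derivative
x = x - fx/dfx; % Newton-Raphson update
end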
I have an implementation of the Chebychev routines in Matlab/Octave (given below). Use like this:
R = FindRealRoots(@f, x_min, x_max, 5, true, true);
with [x_min,x_max] your range in x, 5 the number of points to use for finding the polynomial (the higher, the more accurate; this equals the number of function evaluations needed), the first true indicating that f is vectorized, and the last true making a plot of the actual function and the Chebychev approximation to it (mainly for testing purposes).
Now, the implementation:
% FINDREALROOTS Find approximations to all real roots of any function
% on an interval [a, b].
%
% USAGE:
% Roots = FindRealRoots(funfcn, a, b, n, vectorized, make_plot)
%
% FINDREALROOTS() approximates all the real roots of the function 'funfcn'
% in the interval [a,b]. It does so by finding the roots of an [n]-th degree
% Chebyshev polynomial approximation, via the eigenvalues of the associated
% companion matrix.
%
% When the argument [vectorized] is [true], FINDREALROOTS() will evaluate
% the function 'funfcn' at all [n] required points in the interval
% simultaneously. Otherwise, it will use ARRAYFUN() to calculate the [n]
% function values one-by-one. [vectorized] defaults to [false].
%
% When the argument [make_plot] is true, FINDREALROOTS() plots the
% original function and the Chebyshev approximation, and shows any roots on
% the given interval. Also [make_plot] defaults to [false].
%
% All [Roots] (if any) will be sorted.
%
% First version 26th May 2007 by Stephen Morris,
% Nightingale-EOS Ltd., St. Asaph, Wales.
%
% Modified 14/Nov (Rody Oldenhuis)
%
% See also roots, eig.
function Roots = FindRealRoots(funfcn, a, b, n, vectorized, make_plot)
% parse input and initialize.
inarg = nargin;
if n <= 2, n = 3; end % Minimum [n] is 3:
if (inarg < 5), vectorized = false; end % default: function isn't vectorized
if (inarg < 6), make_plot = false; end % default: don't make plot
% some convenient variables
bma = (b-a)/2; bpa = (b+a)/2; Roots = [];
% Obtain the Chebyshev coefficients for the function
%
% Based on the routine given in Numerical Recipes (3rd) section 5.8;
% calculates the Chebyshev coefficients necessary to approximate some
% function over the interval [a,b]
% initialize
c = zeros(1,n); k=(1:n)'; y = cos(pi*((1:n)-1/2)/n);
% evaluate function on Chebychev nodes
if vectorized
f = feval(funfcn,(y*bma)+bpa);
else
f = arrayfun(@(x) feval(funfcn,x),(y*bma)+bpa);
end
% compute the coefficients
for j=1:n, c(j)=(f(:).'*(cos((pi*(j-1))*((k-0.5)/n))))*(2-(j==1))/n; end
% coefficients may be [NaN] if [inf]
% ??? TODO - it is of course possible for c(n) to be zero...
if any(~isfinite(c(:))) || (c(n) == 0), return; end
% Define [A] as the Frobenius-Chebyshev companion matrix. This is based
% on the form given by J.P. Boyd, Appl. Num. Math. 56 pp.1077-1091 (2006).
one = ones(n-3,1);
A = diag([one/2; 0],-1) + diag([1; one/2],+1);
A(end, :) = -c(1:n-1)/2/c(n);
A(end,end-1) = A(end,end-1) + 0.5;
% Now we have the companion matrix, we can find its eigenvalues using the
% MATLAB built-in function. We're only interested in the real elements of
% the matrix:
eigvals = eig(A); realvals = eigvals(imag(eigvals)==0);
% if there aren't any real roots, return
if isempty(realvals), return; end
% Of course these are the roots scaled to the canonical interval [-1,1]. We
% need to map them back onto the interval [a, b]; we widen the interval just
% a tiny bit to make sure that we don't miss any that are right on the
% boundaries.
rangevals = nonzeros(realvals(abs(realvals) <= 1+1e-5));
% also sort the roots
Roots = sort(rangevals*bma + bpa);
% As a sanity check we'll plot out the original function and its Chebyshev
% approximation: if they don't match then we know to call the routine again
% with a larger 'n'.
if make_plot
% simple grid
grid = linspace(a,b, max(25,n));
% evaluate function
if vectorized
fungrid = feval(funfcn, grid);
else
fungrid = arrayfun(@(x) feval(funfcn,x), grid);
end
% corresponding Chebychev-grid (more complicated but essentially the same)
y = (2.*grid-a-b)./(b-a); d = zeros(1,length(grid)); dd = d;
for j = length(c):-1:2, sv=d; d=(2*y.*d)-dd+c(j); dd=sv; end, chebgrid=(y.*d)-dd+c(1);
% Now make plot
figure(1), clf, hold on
plot(grid, fungrid ,'color' , 'r');
line(grid, chebgrid,'color' , 'b');
line(grid, zeros(1,length(grid)), 'linestyle','--')
legend('function', 'interpolation')
end % make plot
end % FindRealRoots
You could use the secant method which is a discrete version of Newton's method.
The root is estimated by calculating the line between two points (= the secant) and its crossing of the X axis.
Your function has only 0, 1 or 2 roots, so you can start with an algorithm that doesn't guarantee finding the first root:
Find one root using Newton's method or some other method. If no root can be found, this algorithm also gives up.
Let the found root be r and the beginning of the range of x be x0. Let d = (r-x0)/2.
While d > 0, calculate f(r-d). If f(r-d) > 0, halve d (d := d/2) and loop. If f(r-d) <= 0, exit the loop.
If the loop finished with d = 0, report r as the first root. If d > 0, find a root between x0 and r-d using any other method and report it.
I assumed two prerequisite conditions:
f(x) takes floating point numbers x.
At each root of f(x), the graph of f(x) crosses the x-axis; there are no tangent roots such as x = 0 for f(x) = x^2.
Using condition 2, you can prove that if there is no point where f(r-d) < 0, then f(x) > 0 for all x with x0 < x < r.
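A sketch of this procedure in MATLAB (using fzero as the 'other method'; f is a function handle, [x0, x1] the search range; error handling for the zero-root case is omitted):
r = fzero(f, (x0 + x1)/2); % step 1: find some root (errors out if none is found)
d = (r - x0)/2;
while d > eps(r) && f(r - d) > 0
d = d/2; % no sign change found yet; look closer to r
end
if d <= eps(r)
firstRoot = r; % no earlier sign change: r is already the first root
else
firstRoot = fzero(f, [x0, r - d]); % sign change: the first root lies in [x0, r-d]
end
Note that d > 0 is replaced by d > eps(r) because d never reaches exactly zero in floating point.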
You could make a small change to the uniroot.all function from the R library rootSolve.
uniroot.all <- function (f, interval, lower= min(interval),
upper= max(interval), tol= .Machine$double.eps^0.2,
maxiter= 1000, n = 100, nroots = -1, ... ) {
## error checking as in uniroot...
if (!missing(interval) && length(interval) != 2)
stop("'interval' must be a vector of length 2")
if (!is.numeric(lower) || !is.numeric(upper) || lower >=
upper)
stop("lower < upper is not fulfilled")
## subdivide interval in n subintervals and estimate the function values
xseq <- seq(lower,upper,len=n+1)
mod <- f(xseq,...)
## some function values may already be 0
Equi <- xseq[which(mod==0)]
ss <- mod[1:n]*mod[2:(n+1)] # interval where functionvalues change sign
ii <- which(ss<0)
for (i in ii) {
Equi <- c(Equi, uniroot(f, lower = xseq[i], upper = xseq[i+1] ,...)$root)
if (length(Equi) == nroots) {
return(Equi)
}
}
return(Equi)
}
And run it like this:
uniroot.all(f = your_function, interval = c(start, stop), nroots = 1)
Statement of Problem:
I have an array M with m rows and n columns. The array M is filled with non-zero elements.
I also have a vector t with n elements, and a vector omega
with m elements.
The elements of t correspond to the columns of matrix M.
The elements of omega correspond to the rows of matrix M.
Goal of Algorithm:
Define chi as the product of the vectors t and omega, so that each element of chi is a pairwise product omega(i)*t(j). I need to obtain a 1D vector a, where each element of a is a function of chi.
Each element of chi is unique (i.e. every element is different).
Using mathematics notation, this can be expressed as a(chi)
Each element of vector a corresponds to an element or elements of M.
Matlab code:
Here is a code snippet showing how the vectors t and omega are generated. The matrix M is pre-existing.
[m,n] = size(M);
t = linspace(0,5,n);
omega = linspace(0,628,m);
Conceptual Diagram:
This appears to be a type of integration (if this is the right word for it) along constant chi.
Reference:
Link to reference
The algorithm is not explicitly stated in the reference. I only wish that this algorithm was described in a manner reminiscent of computer science textbooks!
Looking at Figure 11.5, the matrix M is Figure 11.5(a). The goal is to find an algorithm to convert Figure 11.5(a) into 11.5(b).
It appears that the algorithm is a type of integration (averaging, perhaps?) along constant chi.
It appears to me that reshape is the matlab function you need to use. As noted in the link:
B = reshape(A,siz) returns an n-dimensional array with the same elements as A, but reshaped to siz, a vector representing the dimensions of the reshaped array.
That is, create a vector siz with the number m*n in it, and say A = reshape(P,siz), where P is the product of vectors t and ω; or perhaps say something like A = reshape(t*ω,[m*n]). (I don't have matlab here, or would run a test to see if I have the product the right way around.) Note, the link does not show an example with one number (instead of several) after the matrix parameter to reshape, but I would expect from the description that A = reshape(t*ω,m*n) might also work.
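A sketch of that in Matlab syntax (assuming chi is built from the pairwise products, so chi(i,j) = omega(i)*t(j), matching the layout of M):
chi = omega(:) * t(:).'; % m-by-n outer product, same shape as M
a = chi(:); % flatten column-wise into an (m*n)-by-1 vector, same as reshape(chi, m*n, 1)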
You should add pseudocode or a link to the algorithm you want to implement. From what I could understand, I have developed the following code anyway:
M = [1 2 3 4; 5 6 7 8; 9 10 11 12]' % easy test M matrix
a = reshape(M, prod(size(M)), 1) % convert M to vector 'a' with reshape command
[m,n] = size(M); % Your sample code
t = linspace(0,5,n); % Your sample code
omega = linspace(0,628,m); % Your sample code
for i=1:length(t)
for j=1:length(omega) % access a(chi) in the desired order
chi = length(omega)*(i-1)+j;
t(i) % related t value
omega(j) % related omega value
a(chi) % related a(chi) value
end
end
As you can see, I also think that the reshape() function is the solution to your problems. I hope that this code helps.
The basic idea is to use two separate loops. The outer loop is over the chi variable values, whereas the inner loop is over the i variable values. Referring to the above diagram in the original question, the i variable corresponds to the x-axis (time), and the j variable corresponds to the y-axis (frequency). Assuming that the chi, i, and j variables can take on any real number, bilinear interpolation is then used to find an amplitude corresponding to an element in matrix M. The integration is just an averaging over elements of M.
The following code snippet provides an overview of the basic algorithm to express elements of a matrix as a vector using the spectral collapsing from 2D to 1D. I can't find any reference for this, but it is a solution that works for me.
% Amp = amplitude vector corresponding to Figure 11.5(b) in book reference
% M = matrix corresponding to the absolute value of the complex Gabor transform
% matrix in Figure 11.5(a) in book reference
% Nchi = number of chi in chi vector
% prod = product of timestep and frequency step
% dt = time step
% domega = frequency step
% omega_max = maximum angular frequency
% i = time array element along x-axis
% j = frequency array element along y-axis
% current_i = current time array element in loop
% current_j = current frequency array element in loop
% Nivar = number of i variables
% ivar = i variable vector
% calculate for chi = 0, which only occurs when
% t = 0 and omega = 0, at i = 1
av0 = mean( M(1,:) );
av1 = mean( M(2:end,1) );
av2 = mean( [av0 av1] );
Amp(1) = av2;
% av_val_sum holds the sum of all values that have been averaged
av_val_sum = 0;
% loop for rest of chi
for ccnt = 2:Nchi % 2:Nchi
av_val_sum = 0; % reset av_val_sum
current_chi = chi( ccnt ); % current value of chi
% loop over i vector
for icnt = 1:Nivar % 1:Nivar
current_i = ivar( icnt );
current_j = (current_chi / (prod * (current_i - 1))) + 1; % Inf when current_i == 1; caught by the omega_max check below
current_t = dt * (current_i - 1);
current_omega = domega * (current_j - 1);
% values out of range
if(current_omega > omega_max)
continue;
end
% use bilinear interpolation to find an amplitude
% at current_t and current_omega from matrix M
% f_x_y is the bilinear interpolated amplitude
% Insert bilinear interpolation code here
% add to running sum
av_val_sum = av_val_sum + f_x_y;
end % icnt loop
% compute the average over all i
av = av_val_sum / Nivar;
% assign the average to Amp
Amp(ccnt) = av;
end % ccnt loop
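For the "insert bilinear interpolation code here" placeholder, one possible sketch using interp2 (assuming t and omega are the coordinate vectors for the columns and rows of M, as in the question; these lines would sit inside the icnt loop):
% bilinear interpolation of M at (current_t, current_omega);
% interp2 expects the column coordinates (t) first, then the row coordinates (omega)
f_x_y = interp2(t, omega, M, current_t, current_omega, 'linear');
if isnan(f_x_y), continue; end % query point fell outside the grid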