Implementing Eligibility Traces in SARSA

I am writing a MATLAB implementation of the SARSA algorithm and have successfully written a one-step implementation.
I am now trying to extend it to use eligibility traces, but the results I obtain are worse than with one-step (i.e., the algorithm converges at a slower rate and the final path followed by the agent is longer).
e_trace(action_old, state_old) = e_trace(action_old, state_old) + 1;
% Update weights, but only if we are past the first step
if (step > 1)
    delta = (reward + discount*qval_new - qval_old);
    % SARSA-lambda (eligibility traces): use this definition of dw ...
    dw = e_trace.*delta;
    % ... or one-step SARSA: use these two lines instead
    dw = zeros(actions, states);
    dw(action_old, state_old) = delta;
    weights = weights + learning_rate*dw;
end
e_trace = discount*decay*e_trace;
Essentially, my Q-values are stored in an n-by-m weights matrix, where n = number of actions and m = number of states. Eligibility trace values are stored in the e_trace matrix. Depending on whether I want to use one-step SARSA or eligibility traces, I use one or the other of the two definitions of dw. I am not sure where I am going wrong. The algorithm is implemented as shown here: http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node77.html
The line
dw = e_trace .* delta
defines the weight change for all weights in the network (i.e., the change in value for all Q(s,a) pairs), which is then applied to the network, scaled by the learning rate.
I should add that initially my weights and e-values are set to 0.
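For reference, here is roughly how I read the linked pseudocode, rewritten with my variable names; this is just a sketch of my understanding of the tabular Sarsa(lambda) update, not my actual code:
% after taking action_old in state_old, observing reward, and choosing action_new in state_new:
delta = reward + discount*weights(action_new, state_new) - weights(action_old, state_old);
e_trace(action_old, state_old) = e_trace(action_old, state_old) + 1;  % accumulating trace
weights = weights + learning_rate*delta*e_trace;                      % update all Q(s,a)
e_trace = discount*decay*e_trace;                                     % decay all traces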
Any advice?

Related

Understanding a FastICA implementation

I'm trying to implement FastICA (independent component analysis) for blind signal separation of images, but first I thought I'd take a look at some examples from GitHub that produce good results. I'm trying to compare the main loop against the algorithm's steps in Wikipedia's FastICA article, and I'm having quite a bit of difficulty seeing how they're actually the same.
They look very similar, but there are a few differences that I don't understand. It looks like this implementation is similar to (or the same as) the "Multiple component extraction" version from the Wiki.
Would someone please help me understand what's going on in the four or so lines having to do with the nonlinearity function with its first and second derivatives, and the first line of updating the weight vector? Any help is greatly appreciated!
Here's the implementation with the variables changed to mirror the Wiki more closely:
% X is an (NxM, 3x50K) mixed image data matrix (one row for each mixed image)
C = 3; % number of components to separate
W = zeros(numofIC, VariableNum); % weights matrix
for p = 1:C
    % initialize random weight vector of length N
    wp = rand(C,1);
    wp = wp / norm(wp);
    % like do:
    i = 1;
    maxIterations = 100;
    while i <= maxIterations+1
        % until max iterations
        if i == maxIterations
            fprintf('No convergence: ', p,maxIterations);
            break;
        end
        wp_old = wp;
        % this is the main part of the algorithm and where
        % I'm confused about the particular implementation
        u = 1;
        t = X'*b;
        g = t.^3;
        dg = 3*t.^2;
        wp = ((1-u)*t'*g*wp+u*X*g)/M-mean(dg)*wp;
        % 2nd and 3rd wp update steps make sense to me
        wp = wp-W*W'*wp;
        wp = wp / norm(wp);
        % or until w_p converges
        if abs(abs(b'*bOld)-1)<1e-10
            W(:,p)=b;
            break;
        end
        i=i+1;
    end
end
And the Wiki algorithms for quick reference (images not reproduced here):
First, I don't understand why the term that is always zero remains in the code:
wp = ((1-u)*t'*g*wp+u*X*g)/M-mean(dg)*wp;
The above can be simplified into:
wp = X*g/M-mean(dg)*wp;
We can also remove u, since it is always 1.
Second, I believe the following line is wrong:
t = X'*b;
The correct expression is:
t = X'*wp;
Now let's go through each variable here. Let's refer to
w = E{X g(w^T X)^T} - E{g'(w^T X)} w
as the iteration equation.
X is your input data, i.e. X in the iteration equation.
wp is the weight vector, i.e. w in the iteration equation. Its initial value is randomised.
g is the first derivative of a nonquadratic nonlinear function, i.e. g(w^T X) in the iteration equation.
dg is the first derivative of g, i.e. g'(w^T X) in the iteration equation.
M: although its definition is not shown in the code you provided, I think it should be the number of samples in X (the number of columns, i.e. 50K here).
Knowing the meaning of all the variables, we can now try to understand the code.
t = X'*wp;
The above line (with the correction from above) computes w^T X.
g = t.^3;
The above line computes g(w^T X) = (w^T X)^3. Note that g(u) can be any function as long as f(u), where g(u) = df(u)/du, is nonlinear and nonquadratic.
dg = 3*t.^2;
The above line computes the derivative of g.
wp = X*g/M-mean(dg)*wp;
X*g obviously calculates Xg(w^T X). X*g/M calculates the average of Xg, which is equivalent to E{X g(w^T X)^T}.
mean(dg) is E{g'(w^T X)}, and it multiplies wp, i.e. w in the equation.
Now you have everything you need for the Newton-Raphson method.
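Putting the corrections together, a minimal sketch of the one-unit loop might look like the following. This is only an illustration under the assumption that X is N-by-M (M samples) and that W collects one extracted component per column; it is not the original author's code. FastICA also assumes the data have been centred and whitened beforehand, which is not shown here:
% sketch of the corrected one-unit FastICA update with deflation
N = size(X,1); M = size(X,2);
C = 3;                             % number of components to separate
W = zeros(N, C);
maxIterations = 100;
for p = 1:C
    wp = rand(N,1); wp = wp/norm(wp);
    for it = 1:maxIterations
        wp_old = wp;
        t  = X'*wp;                % w^T X as an M-by-1 column
        g  = t.^3;                 % g(w^T X)
        dg = 3*t.^2;               % g'(w^T X)
        wp = X*g/M - mean(dg)*wp;  % E{X g(w^T X)^T} - E{g'(w^T X)} w
        wp = wp - W(:,1:p-1)*(W(:,1:p-1)'*wp);  % deflation: remove already-found components
        wp = wp/norm(wp);
        if abs(abs(wp'*wp_old) - 1) < 1e-10
            break;                 % converged
        end
    end
    W(:,p) = wp;
end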

Choose the best cluster partition based on a cost function

I have a string that I'd like to cluster:
s = 'AAABBCCCCC'
I don't know in advance how many clusters I'll get. All I have is a cost function that can take a clustering and give it a score.
There is also a constraint on the cluster sizes: they must be in a range [a, b]
In my example, for a=3 and b=4, all possible clusterings are:
[
['AAA', 'BBC', 'CCCC'],
['AAA', 'BBCC', 'CCC'],
['AAAB', 'BCC', 'CCC'],
]
The concatenation of the clusters in each clustering must give back the string s.
The cost function is something like this:
cost(clustering) = alpha*l + beta*e + gamma*d
where:
l = variance(cluster_lengths)
e = mean(clusters_entropies)
d = 1 - nb_characters_in_b_that_are_not_in_a/size_of_b (for b the cluster immediately following a)
alpha, beta, gamma are weights
This cost function gives a low cost (0) in the best case, where:
all clusters have the same size,
the content inside each cluster is the same,
and consecutive clusters don't have the same content.
Theoretically, the solution is to calculate the cost of all possible compositions of this string and choose the lowest, but that would take too much time.
Is there any clustering algorithm that can find the best clustering according to this cost function in a reasonable time?
A dynamic programming approach should work here.
Imagine, first, that cost(clustering) equals the sum of cost(cluster) over all clusters that constitute the clustering.
Then, a simple DP function is defined as follows:
F[i] = minimal cost of clustering the substring s[0:i]
and calculated in the following way:
for i = 0..length(s)-1:
    for j = a..b:
        last_cluster = s[i-j+1..i]
        F[i] = min(F[i], F[i-j] + cost(last_cluster))
Of course, first you have to initialize the values of F to some infinite values (or nulls) so that the min function is applied correctly.
To actually restore the answer, you can store additional values P[i], which would contain the lengths of the last cluster with optimal clustering of string s[0..i].
When you update F[i], you also update P[i].
Then, restoring the answer is little trouble:
current_pos = length(s) - 1
while (current_pos >= 0):
    current_cluster_length = P[current_pos]
    current_cluster = s[(current_pos - current_cluster_length + 1)..current_pos]
    // grab current_cluster to the answer
    current_pos -= current_cluster_length
Note that with this approach you will get the clusters in reverse order, meaning from the last cluster all the way back to the first one.
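As a concrete illustration, a minimal MATLAB sketch of this additive-cost DP might look like the following (cost_cluster is a placeholder for whatever per-cluster cost you use; it is not defined in the question):
function [bestCost, clusters] = simple_dp(s, a, b, cost_cluster)
% F(i+1) = minimal cost of clustering the prefix s(1:i); F(1) is the empty prefix
n = numel(s);
F = inf(1, n+1); F(1) = 0;
P = zeros(1, n+1);                % length of the last cluster in the optimal split
for i = 1:n
    for j = a:min(b, i)
        c = F(i-j+1) + cost_cluster(s(i-j+1:i));
        if c < F(i+1)
            F(i+1) = c;
            P(i+1) = j;
        end
    end
end
bestCost = F(n+1);
% restore the clusters from back to front
clusters = {};
pos = n;
while pos > 0
    j = P(pos+1);
    clusters = [{s(pos-j+1:pos)}, clusters]; %#ok<AGROW>
    pos = pos - j;
end
end
For s = 'AAABBCCCCC', a = 3 and b = 4, this considers exactly the three partitions listed in the question, but without enumerating them explicitly.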
Let's now apply this idea to the initial problem.
What we would like is to make cost(clustering) more or less linear, so that we can compute it cluster by cluster instead of computing it for the whole clustering.
The first parameter of our DP function F will be, as before, i, the length of the prefix s[0:i] for which we have found the optimal answer.
The meaning of the F function is, as usual, the minimal cost we can achieve with the given parameters.
The parameter e = mean(clusters_entropies) of the cost function is already linear and can be computed cluster by cluster, so this is not a problem.
The parameter l = variance(cluster_lengths) is a little bit more complex.
The variance of n values is defined as Sum[(x[i] - mean)^2] / n.
mean is the expected value, namely mean = Sum[x[i]] / n.
Note also that Sum[x[i]] is the sum of the lengths of all clusters, which in our case is always fixed and equals length(s).
Therefore, mean = length(s) / n.
Okay, we have more or less made the l part of the cost function linear, except for the n parameter. We will add this parameter, namely the number of clusters in the desired clustering, as a parameter to our F function.
We will also have a parameter cur which will mean the number of clusters currently assembled in the given state.
The parameter d of the cost function also requires adding an additional parameter to our DP function F, namely sz, the size of the last cluster in our partition.
Overall, we have come up with a DP function F[i][n][cur][sz] that gives us the minimal cost function of partitioning string s[0:i] into n clusters of which cur are currently constructed with the size of the last cluster equal to sz. Of course, our responsibility is to make sure that a<=sz<=b.
The answer in terms of the minimal cost function will be the minimum among all possible n and a<=sz<=b values of DP function F[length(s)-1][n][n][sz].
Now notice that this time we do not even require the companion P function to store the length of the last cluster as we already included that information as the last sz parameter into our F function.
We will, however, store in P[i][n][cur][sz] the length of the next to last cluster in the optimal clustering with the specified parameters. We will use that value to restore our solution.
Thus, we will be able to restore an answer in the following way, assuming the minimum of F is achieved in the parameters n=n0 and sz=sz0:
current_pos = length(s) - 1
current_n = n0
current_cluster_size = sz0
while (current_n > 0):
    current_cluster = s[(current_pos - current_cluster_size + 1)..current_pos]
    next_cluster_size = P[current_pos][n0][current_n][current_cluster_size]
    current_n--;
    current_pos -= current_cluster_size;
    current_cluster_size = next_cluster_size
Let's now get to the computation of F.
I will omit the corner cases and range checks, but it will be enough to just initialize F with some infinite values.
// initialize for the case of one cluster
// d = 0, l = 0, only have to calculate entropy
for i = 0..length(s)-1:
    for n = 1..length(s):
        F[i][n][1][i+1] = cluster_entropy(s[0..i]);
        P[i][n][1][i+1] = -1; // initialize with a fake value, as in this case there is no previous cluster
// general case computation
for i = 0..length(s)-1:
    for n = 1..length(s):
        for cur = 2..n:
            for sz = a..b:
                for prev_sz = a..b:
                    cur_cluster = s[i-sz+1..i]
                    prev_cluster = s[i-sz-prev_sz+1..i-sz]
                    F[i][n][cur][sz] = min(F[i][n][cur][sz],
                        F[i-sz][n][cur - 1][prev_sz]
                        + gamma*calc_d(prev_cluster, cur_cluster)
                        + beta*cluster_entropy(cur_cluster)/n
                        + alpha*(sz - length(s)/n)^2)

Gaussian Mixture Model - Matlab training for parameters

I am running a speech enhancement algorithm based on a Gaussian Mixture Model. The problem is that the estimation algorithm underflows during training.
I am trying to calculate the PDF of a log-spectrum frame X given a Gaussian cluster, which is a product of the PDFs of each frequency component X_k (the FFT is done for k = 1..256).
What I get is a product of 256 terms exp(-v(k)) with v(k) >= 0.
Here is a snippet of the MATLAB calculation:
N - number of frames; M - number of mixtures; c_i - weight of each mixture;
gamma(n,i) = c_i * f(X_n | I = i)
for i=1 : N
    rep_DataMat(:,:,i) = repmat(DataMat(:,i),1,M);
    gamma_exp(:,:) = (1./sqrt((2*pi*sigmaSqr_curr))).*exp(((-1)*((rep_DataMat(:,:,i) - mue_curr).^2)./(2*sigmaSqr_curr)));
    gamma_curr(i,:) = c_curr.*(prod(10*gamma_exp(:,:),1));
    alpha_curr(i,:) = gamma_curr(i,:)./sum(gamma_curr(i,:));
end
The product quickly goes to zero because K = 256 and the factors are smaller than one. Is there a way I can calculate this without causing an underflow (using logsum or similar)?
You can perform the computations in the log domain.
The conversion of products into sums is straightforward.
Sums on the other hand can be converted with something such as logsumexp.
This works using the formula:
log(a + b) = log(exp(log(a)) + exp(log(b)))
= log(exp(loga) + exp(logb))
Where loga and logb are the respective representation of a and b in the log domain.
The basic idea is then to factor out the exponential with the largest argument (e.g. loga for the sake of illustration):
log(exp(loga)+exp(logb)) = log(exp(loga)*(1+exp(logb-loga)))
= loga + log(1+exp(logb-loga))
Note that the same idea applies if you have more than 2 terms to add.
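Applied to the code in the question, a minimal sketch of the log-domain version could look like this. It reuses the variable names from the question and assumes c_curr is a 1-by-M row of mixture weights; logsumexp is a small helper (e.g. in its own file), not a built-in:
for i = 1:N
    rep_DataMat(:,:,i) = repmat(DataMat(:,i),1,M);
    % log of each Gaussian factor; the product over k becomes a sum over dim 1
    log_gauss = -0.5*log(2*pi*sigmaSqr_curr) ...
                - ((rep_DataMat(:,:,i) - mue_curr).^2)./(2*sigmaSqr_curr);
    log_gamma = log(c_curr) + sum(log_gauss, 1);
    % normalize in the log domain (the factor of 10 in the original product is no longer needed)
    alpha_curr(i,:) = exp(log_gamma - logsumexp(log_gamma, 2));
end

function s = logsumexp(logv, dim)
% log(sum(exp(logv), dim)) computed stably by factoring out the maximum
m = max(logv, [], dim);
s = m + log(sum(exp(bsxfun(@minus, logv, m)), dim));
end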

matlab code optimization - clustering algorithm KFCG

Background
I have a large set of vectors (orientation data in an axis-angle representation... the axis is the vector) that I want to apply a clustering algorithm to. I tried kmeans, but the computational time was too long (it never finished). So instead I am trying to implement the KFCG algorithm, which is faster (Kirke 2010):
Initially we have one cluster containing the entire set of training vectors, and the codevector C1, which is its centroid. In the first iteration of the algorithm, the clusters are formed by comparing the first element of each training vector Xi with the first element of the codevector C1. The vector Xi is grouped into cluster 1 if xi1 < c11; otherwise Xi is grouped into cluster 2, as shown in Figure 2(a), where the codevector dimension space is 2. In the second iteration, cluster 1 is split into two by comparing the second element Xi2 of each vector Xi belonging to cluster 1 with the second element of its codevector. Cluster 2 is split into two by comparing the second element Xi2 of each vector Xi belonging to cluster 2 with the second element of its codevector, as shown in Figure 2(b). This procedure is repeated until the codebook reaches the size specified by the user.
I'm unsure what ratio is appropriate for the codebook, but it shouldn't matter for the code optimization. Also note mine is 3-D so the same process is done for the 3rd dimension.
My code attempts
I've tried implementing the above algorithm in Matlab 2013 (Student Version). Here are some different structures I've tried, but they take way too long (I have never seen them complete):
%training vectors:
Atgood = Nx4 vector (see test data below if want to test);
vecA = Atgood(:,1:3);
roA = size(vecA,1);
%Codebook size, Nsel, is ratio of data
remainFrac2 = 0.5;
Nseltemp = remainFrac2*roA; %codebook size
%Ensure selected size after nearest power of 2 is NOT greater than roA
if 2^round(log2(Nseltemp)) < roA
    NselIter = round(log2(Nseltemp));
else
    NselIter = ceil(log2(Nseltemp)-1);
end
Nsel = 2^NselIter; %power of 2 - for LGB and other algorithms
MAIN BLOCK TO OPTIMIZE:
%KFCG:
%%cluster = cell(1,Nsel); %Unsure #rows - Don't know how to initialize if need mean...
codevec(1,1:3) = mean(vecA,1);
count1 = 1;
count2 = 1;
ind = 1;
for kk = 1:NselIter
    hh2 = 1:2:size(codevec,1)*2;
    for hh1 = 1:length(hh2)
        hh = hh2(hh1);
        % for ii = 1:roA
        %     if vecA(ii,ind) < codevec(hh1,ind)
        %         cluster{1,hh}(count1,1:4) = Atgood(ii,:); %want all 4 elements
        %         count1 = count1+1;
        %     else
        %         cluster{1,hh+1}(count2,1:4) = Atgood(ii,:); %want all 4
        %         count2 = count2+1;
        %     end
        % end
        %EDIT: My ATTEMPT at optimizing the above for loop:
        repcv = repmat(codevec(hh1,ind),[size(vecA,1),1]);
        splitind = vecA(:,ind) >= repcv;
        splitind2 = vecA(:,ind) < repcv;
        cluster{1,hh} = vecA(splitind,:);
        cluster{1,hh+1} = vecA(splitind2,:);
    end
    clear codevec
    %Only mean the 1x3 vector portion of the cluster - for centroid
    codevec = cell2mat((cellfun(@(x) mean(x(:,1:3),1),cluster,'UniformOutput',false))');
    if ind < 3
        ind = ind+1;
    else
        ind = 1;
    end
end
if length(codevec) ~= Nsel
    warning('codevec ~= Nsel');
end
Alternatively, instead of cells I thought 3D matrices would be faster. However, when I tried it, it was slower with my method of appending the next row each iteration (temp=[]; for ... temp=[temp;new];).
Also, I wasn't sure what was best to loop with, for or while:
%If initialize cell to full length
while length(find(~cellfun('isempty',cluster))) < Nsel
Well, anyways, the first method was fastest for me.
Questions
Is the logic standard? Not in the sense that it matches the algorithm described, but from a coding perspective: are there any weird methods I employed (especially with those multiple inner loops) that slow it down? Where can I speed it up (you can just point me to resources or previous questions)?
My array, Atgood, is 1,000,000x4, which makes NselIter = 19 - do I just need to find a way to decrease this size, or can the code be optimized?
Should this be asked on CodeReview? If so, I'll move it.
Testing Data
Here's some random vectors you can use to test:
for ii=1:1000 %My size is ~ 1,000,000
    omega = 2*rand(3,1)-1;
    omega = (omega/norm(omega))';
    Atgood(ii,1:4) = [omega,57];
end
Your biggest issue is re-iterating through all of vecA FOR EACH CODEVECTOR, rather than just the samples that are part of the corresponding cluster. You're supposed to split each cluster on its own codevector. As it is, your cluster structure grows and grows, and each iteration processes more and more samples.
Your second issue is the loop around the comparisons, and the appending of samples to build up the clusters. Both of those can be solved by vectorizing the comparison operation. Oh, I just saw your edit, where this was optimized. Much better. But codevec(hh1,ind) is just a scalar, so you don't even need the repmat.
Try this version:
% (preallocs added in edit)
cluster = cell(1,Nsel);
codevec = zeros(Nsel, 3);
codevec(1,:) = mean(Atgood(:,1:3),1);
cluster{1} = Atgood;
nClusters = 1;
ind = 1;
while nClusters < Nsel
    for c = 1:nClusters
        lower_cluster_logical = cluster{c}(:,ind) < codevec(c,ind);
        cluster{nClusters+c} = cluster{c}(~lower_cluster_logical,:);
        cluster{c} = cluster{c}(lower_cluster_logical,:);
        codevec(c,:) = mean(cluster{c}(:,1:3), 1);
        codevec(nClusters+c,:) = mean(cluster{nClusters+c}(:,1:3), 1);
    end
    ind = rem(ind,3) + 1;
    nClusters = nClusters*2;
end

matlab: optimum amount of points for linear fit

I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say 0.05), I want to fit only the points that are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1:30, then 2:30, then 3:30, and so on, but I don't really know how to wrap that up in a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution of.
One approach would be to compute the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts(:), 3);
x([idx; 3] == 2) % the points with the intercepts closest to the linear one
This requires the Statistics Toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes between two points in the start or end range lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both endpoints, and checking whether the sum of squared errors remains small. I gave this up very quickly, because defining what "small" means is very subjective and must be done in some heuristic way.
I tried a more systematic and robust version of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end

function [C, inds] = P(ii, x,y,intercept)
% ii represents the fraction of the range from the center to each end,
% so ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve the linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it becomes more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient that yields the smallest MSE.
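A minimal sketch of that idea, using the sample data from the question (the window length and step below are arbitrary choices, and the slope is fitted with the known, fixed intercept):
% fit a fixed-intercept slope on overlapping windows and keep the slope
% whose line best matches the entire dataset
intercept = 0.043;
win = 8; step = 4;                            % assumed window length and overlap
bestMSE = inf; bestSlope = NaN;
for start = 1:step:numel(x)-win+1
    idx = start:start+win-1;
    slope = x(idx)' \ (y(idx)' - intercept);  % least-squares slope with fixed intercept
    mse = mean((slope*x + intercept - y).^2); % error of this line against all points
    if mse < bestMSE
        bestMSE = mse;
        bestSlope = slope;
    end
end
The points whose residuals from the best line are small could then be kept for the final fit.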
