NLMS algorithm is not converging, multiple implementations give the same result - algorithm

The Normalized Least Mean Squares (NLMS) algorithm is used in adaptive digital filtering: it tries to imitate an "unknown" filter so that their difference (the error) tends to zero. The sign of convergence is that the error starts very high and, as the algorithm keeps running, gets smaller and smaller.
The only difference between NLMS and LMS (its predecessor) is that NLMS normalizes the update by the input power, so it is not sensitive to high input power.
There are equations for both algorithms on the Wikipedia page https://en.wikipedia.org/wiki/Least_mean_squares_filter that are similar to my implementation.
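For reference, here is a minimal sketch of one NLMS update step in MATLAB; the names x, d, mu and delta are only illustrative and do not appear in the code below:
% x: column vector with the last L input samples, d: desired output sample,
% mu: step size, delta: small constant that prevents division by zero
y = W' * x;                              % adaptive filter output
e = d - y;                               % error against the unknown filter
W = W + mu * e * x / (x' * x + delta);   % update normalized by the input power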
I'm currently using an adaptive plant setup: I filter white noise through a lowpass filter and try to adapt my algorithm's coefficients to imitate that lowpass. It is implemented in MATLAB:
clear all; close all; clc;

% read the white-noise input samples
fid = fopen('ruidoBranco.pcm', 'rb');
s = fread(fid, 'int16');
fclose(fid);

itera = length(s);         % number of iterations (one per sample)
L = 50;                    % filter length
passo = 0.00000000001;     % step size
H = passaBaixa(L,1000,2);  % "unknown" lowpass plant to be imitated
W = zeros(L,1);            % adaptive filter coefficients
y = zeros(itera,1);
sav_erro = zeros(itera,1);

for i=(L+1):itera,
    D = 0;
    Y = 0;
    for j=1:L,
        Y = Y + W(j,1)*s(i-j,1);   % adaptive filter output
        D = D + H(j,1)*s(i-j,1);   % desired output of the "unknown" plant
    end
    erro = D - Y;
    k = passo*erro;
    for j=1:L,
        W(j,1) = W(j,1)+(k*s(i-j,1)/(s(i-j,1)^2+0.000001));
    end
    sav_erro(i,1) = erro;
end

subplot(2,1,1);
plot(sav_erro);
subplot(2,1,2);
plot(W);
hold on;
plot(H,'r');

fid = fopen('saidaFIR.pcm', 'wb');
fwrite(fid, sav_erro, 'int16');
fclose(fid);
The "passaBaixa" function is the lowpass filter that I was saying before:
function H = passaBaixa(M,FC,op)
% Windowed-sinc lowpass filter of length M with cutoff FC (Hz) at FS = 8000 Hz.
% op = 1: rectangular window, op = 2: Blackman window, otherwise Hamming window.
iteracoes = M;
FS = 8000;
FC = FC/FS;
M = iteracoes;
H = zeros(iteracoes,1);
for i=1:iteracoes,
    if (i-M/2 == 0)
        H(i,1) = 2*pi*FC;
    else
        H(i,1) = sin(2*pi*FC*(i-M/2))/(i-M/2);
    end
    if (op == 1)
        H(i,1) = H(i,1);                                               % rectangular window
    elseif (op == 2)
        H(i,1) = H(i,1)*(0.42-0.5*cos(2*pi*i/M)+0.08*cos(4*pi*i/M));   % Blackman window
    else
        H(i,1) = H(i,1)*(0.54-0.46*cos(2*pi*i/M));                     % Hamming window
    end
end
soma = sum(H);
for i=1:iteracoes,
    H(i,1) = H(i,1)/soma;   % normalize so the coefficients sum to 1 (unit DC gain)
end
end
The file ruidoBranco.pcm is simply white noise with 80,000 samples.
The obtained result is the following:
In it, the top plot is the error and the bottom plot shows the impulse response of the lowpass filter (red) and of the "adapted" filter (blue).
It is not converging; it should look something like this:
As you can see, the top plot converges to almost zero error and the bottom plot shows no blue anymore because it lies behind the red curve (the coefficients adjusted almost perfectly to the filter's).
I would like to know if there are any visible mistakes in my implementation; perhaps this can serve as a future reference for people with similar mistakes.

The fix was that I wasn't using the correct normalization in the weight update: it has to be the dot product of the input vector over the filter window (its energy) instead of simply the square of a single sample.
So the fix is in the update loop:
dotprod = dot(s(i-L:i-1,1), s(i-L:i-1,1));   % energy of the last L input samples
for j=1:L,
    W(j,1) = W(j,1)+(k*s(i-j,1)/dotprod);
end

Related

Parallel loops and image processing in matlab

I am going to implement a salient object detection method based on a simple linear feedback control system (LFCS). The control system model is represented by the following equation:
I've come up with the following code, but the result is not what it should be. Specifically, the output should be something like the following image:
But the code produces this output:
The code is as follows.
% Read the input image and over-segment it into superpixels
A = imread('aa.jpg');
[rows, columns, cnumberOfColorChannels] = size(A);
[L,N] = superpixels(A,400);
%% Determination of adjacent superpixels
glcms = graycomatrix(L,'NumLevels',N,'GrayLimits',[1,N],'Offset',[0,1;1,0]); %Create gray-level co-occurrence matrix from image
glcms = sum(glcms,3); % add together the two matrices
glcms = glcms + glcms.'; % add upper and lower triangles together, make it symmetric
glcms(1:N+1:end) = 0; % set the diagonal to zero, we don't want to see "1 is neighbor of 1"
idx = label2idx(L); % Convert label matrix to cell array of linear indices
numRows = size(A,1);
numCols = size(A,2);
%%Mean color of each superpixel in each color channel
data = zeros(N,3);
for labelVal = 1:N
redIdx = idx{labelVal};
greenIdx = idx{labelVal}+numRows*numCols;
blueIdx = idx{labelVal}+2*numRows*numCols;
data(labelVal,1) = mean(A(redIdx));
data(labelVal,2) = mean(A(greenIdx));
data(labelVal,3) = mean(A(blueIdx));
end
Euc=zeros(N);
%%Calculation of Euclidean distance between adjacent superpixels, stored in Euc
for a=1:N
for b=1:N
if glcms(a,b)~=0
Euc(a,b)=sqrt(((data(a,1)-data(b,1))^2)+((data(a,2)-data(b,2))^2)+((data(a,3)-data(b,3))^2));
end
end
end
%%Creation of Connectivity matrix "W" between adjacent superpixels
W=zeros(N);
W_num=zeros(N);
W_den=zeros(N);
OMG1=0.1;
for c=1:N
for d=1:N
if(Euc(c,d)~=0)
W_num(c,d)=exp(-OMG1*(Euc(c,d)));
W_den(c,c)=W_num(c,d)+W_den(c,c); %
end
end
end
%Connectivity matrix W between adjacent superpixels
for e=1:N
for f=1:N
if(Euc(e,f)~=0)
W(e,f)=(W_num(e,f))/(W_den(e,e));
end
end
end
%%calculation of geodesic distance between nonadjacent superpixels stores in variable "s_star_temp"
s_star_temp=zeros(N); %temporary variable for geodesic distance measurement
W_sparse=zeros(N);
W_sparse=sparse(W);
for g=1:N
for h=1:N
if W(g,h)==0 && g~=h
s_star_temp(g,h)=graphshortestpath(W_sparse,g,h,'directed',false);
end
end
end
%%Calculation of connectivity matrix for nonadjacent superpixels stores in "S_star" variable"
S_star=zeros(N);
OMG2=8;
for i=1:N
for j=1:N
if s_star_temp(i,j)~=0
S_star(i,j)=exp(-OMG2*s_star_temp(i,j));
end
end
end
%%Calculation of connectivity matrix "S" for measuring connectivity between all superpixels
S=zeros(N);
S=S_star+W;
%% Defining non-isolation level for connectivity matrix "W"
g_star=zeros(N);
for k=1:N
g_star(k,k)=max(W(k,:));
end
%%Limiting the range of g_star and calculation of isolation cue matrix "G"
alpha1=0.15;
alpha2=0.85;
G=zeros(N);
for l=1:N
G(l,l)=alpha1*(g_star(l,l)- min(g_star(:)))/(max(g_star(:))- min(g_star(:)))+(alpha2 - alpha1);
end
%%Determining the superpixels surrounding the image boundary
lr = L([1,end],:);
tb = L(:,[1,end]);
labels = unique([lr(:);tb(:)]);
%% Calculation of background likelihood for each superpixels stores in"BgLike"
sum_temp=0;
temp=zeros(1,N);
BgLike=zeros(N,1);
BgLike_num=zeros(N);
BgLike_den=zeros(N);
for m=1:N
for n=1:N
if ismember(n,labels)==1
BgLike_num(m,m)=S(m,n)+ BgLike_num(m,m);
end
end
end
for o=1:N
for p=1:N
for q=1:N
if W(p,q)~=0
temp(q)=S(o,p)-S(o,q);
end
end
sum_temp=max(temp)+sum_temp;
temp=0;
end
BgLike_den(o,o)=sum_temp;
sum_temp=0;
end
for r=1:N
BgLike(r,1)= BgLike_num(r,r)/BgLike_den(r,r);
end
%%%%Calculation of Foreground likelihood for each superpixels stores in "FgLike"
FgLike=zeros(N,1);
for s=1:N
for t=1:N
FgLike(s,1)=(exp(-BgLike(t,1))) * Euc(s,t)+ FgLike(s,1);
end
end
The above code is a prerequisite for the following section (it produces the data and matrices needed there; it is included to make the whole process reproducible).
Specifically, I think this next section does not give the desired results. I'm afraid I did not properly simulate the parallelism using for loops. Moreover, the terminating conditions (implemented with for and if statements to simulate a do-while loop) are never satisfied, so the loops run until the last iteration instead of terminating when the specified condition occurs. A major concern is whether the terminating conditions are properly implemented.
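For context, MATLAB has no do-while construct, so the usual way to emulate one is a while true loop that breaks as soon as the condition holds, roughly like this (maxIter and converged are placeholders, not variables from the code below):
t = 0;
while true
    t = t + 1;
    % ... update the state for iteration t ...
    if t >= maxIter || converged    % hypothetical termination test
        break;                      % leaves the loop immediately
    end
end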
The pseudo-algorithm for the following code is shown in the image below:
%%parallel operations for background and foreground implemented here
T0 = 0 ;
Tf = 20 ;
Ts = 0.1 ;
Ti = T0:Ts:Tf ;
Nt=numel(Ti);
Y_Bg=zeros(N,Nt);
Y_Fg=zeros(N,Nt);
P_Back_Bg=zeros(N,N);
P_Back_Fg=zeros(N,N);
u_Bg=zeros(N,Nt);
u_Fg=zeros(N,Nt);
u_Bg_Star=zeros(N,Nt);
u_Fg_Star=zeros(N,Nt);
u_Bg_Normalized=zeros(N,Nt);
u_Fg_Normalized=zeros(N,Nt);
tau=0.1;
sigma_Bg=zeros(Nt,N);
Temp_Bg=0;
Temp_Fg=0;
C_Bg=zeros(Nt,N);
C_Fg=zeros(Nt,N);
%%System Initialization
for u=1:N
u_Bg(u,1)=(BgLike(u,1)- min(BgLike(:)))/(max(BgLike(:))- min(BgLike(:)));
u_Fg(u,1)=(FgLike(u,1)- min(FgLike(:)))/(max(FgLike(:))- min(FgLike(:)));
end
%% P_state and P_input
P_state=G*W;
P_input=eye(N)-G;
% State Initialization
X_Bg=zeros(N,Nt);
X_Fg=zeros(N,Nt);
for v=1:20 % v starts from 1 because we have no matrices with 0th column number
%The first column of X_Bg and X_Fg is 0 for system initialization
X_Bg(:,v+1)=P_state*X_Bg(:,v) + P_input*u_Bg(:,v);
X_Fg(:,v+1)=P_state*X_Fg(:,v) + P_input*u_Fg(:,v);
v=v+1;
if v==2
C_Bg(1,:)=1;
C_Fg(1,:)=1;
else
for w=1:N
for x=1:N
Temp_Fg=S(w,x)*X_Fg(x,v-1)+Temp_Fg;
Temp_Bg=S(w,x)*X_Bg(x,v-1)+Temp_Bg;
end
C_Fg(v-1,w)=inv(X_Fg(w,v-1)+((Temp_Bg)/(Temp_Fg)*(1-X_Fg(w,v-1))));
C_Bg(v-1,w)=inv(X_Bg(w,v-1)+((Temp_Fg)/(Temp_Bg))*(1-X_Bg(w,v-1)));
Temp_Bg=0;
Temp_Fg=0;
end
end
P_Bg=diag(C_Bg(v-1,:));
P_Fg=diag(C_Fg(v-1,:));
Y_Bg(:,v)= P_Bg*X_Bg(:,v);
Y_Fg(:,v)= P_Fg*X_Fg(:,v);
for y=1:N
Temp_sig_Bg=0;
Temp_sig_Fg=0;
for z=1:N
Temp_sig_Bg = Temp_sig_Bg +S(y,z)*abs(Y_Bg(y,v)- Y_Bg(z,v));
Temp_sig_Fg = Temp_sig_Fg +S(y,z)*abs(Y_Fg(y,v)- Y_Fg(z,v));
end
if Y_Bg(y,v)>= Y_Bg(y,v-1)
sign_Bg=1;
else
sign_Bg=-1;
end
if Y_Fg(y,v)>= Y_Fg(y,v-1)
sign_Fg=1;
else
sign_Fg=-1;
end
sigma_Bg(v-1,y)=sign_Bg*Temp_sig_Bg;
sigma_Fg(v-1,y)=sign_Fg*Temp_sig_Fg;
end
%Calculation of P_Back for background and foreground
P_Back_Bg=tau*diag(sigma_Bg(v-1,:));
P_Back_Fg=tau*diag(sigma_Fg(v-1,:));
u_Bg_Star(:,v)=u_Bg(:,v-1)+P_Back_Bg*Y_Bg(:,v);
u_Fg_Star(:,v)=u_Fg(:,v-1)+P_Back_Fg*Y_Fg(:,v);
for aa=1:N %Normalization of u_Bg and u_Fg
u_Bg(aa,v)=(u_Bg_Star(aa,v)- min(u_Bg_Star(:,v)))/(max(u_Bg_Star(:,v))-min(u_Bg_Star(:,v)));
u_Fg(aa,v)=(u_Fg_Star(aa,v)- min(u_Fg_Star(:,v)))/(max(u_Fg_Star(:,v))-min(u_Fg_Star(:,v)));
end
if (max(abs(Y_Fg(:,v)-Y_Fg(:,v-1)))<=0.0118) &&(max(abs(Y_Bg(:,v)-Y_Bg(:,v-1)))<=0.0118) %% epsilon= 0.0118
break;
end
end
Finally, the saliency map is generated using the following code.
K=4;
T=0.4;
phi_1=(2-(1-T)^(K-1))/((1-T)^(K-2));
phi_2=(1-T)^(K-1);
phi_3=1-phi_1;
for bb=1:N
Y_Output_Preliminary(bb,1)=Y_Fg(bb,v)/((Y_Fg(bb,v)+Y_Bg(bb,v)));
end
for hh=1:N
Y_Output(hh,1)=(phi_1*(T^K))/(phi_2*(1-Y_Output_Preliminary(hh,1))^K+(T^K))+phi_3;
end
V_rs=zeros(N);
V_Final=zeros(rows,columns);
for cc=1:rows
for dd=1:columns
V_rs(cc,dd)=Y_Output(L(cc,dd),1);
end
end
maxDist = 10; % Maximum chessboard distance from the current pixel
wSF=zeros(rows,columns);
wSB=zeros(rows,columns);
% Get the range of x and y indices whose chessboard distance from pixel (0,0) is less than 'maxDist'
xRange = (-(maxDist-1)):(maxDist-1);
yRange = (-(maxDist-1)):(maxDist-1);
% Create a meshgrid to get the (x,y) pairs of the pixels
[pointsX, pointsY] = meshgrid(xRange, yRange);
pointsX = pointsX(:);
pointsY = pointsY(:);
% Remove pixel (0,0)
pixIndToRemove = (pointsX == 0 & pointsY == 0);
pointsX(pixIndToRemove) = [];
pointsY(pixIndToRemove) = [];
for ee=1:rows
for ff=1:columns
% Get a shifted copy of 'pointsX' and 'pointsY' that is centered
% around (x, y)
pointsX1 = pointsX + ee;
pointsY1 = pointsY + ff;
% Remove the pixels that are out of the image bounds
inBounds =...
pointsX1 >= 1 & pointsX1 <= rows &...
pointsY1 >= 1 & pointsY1 <= columns;
pointsX1 = pointsX1(inBounds);
pointsY1 = pointsY1(inBounds);
% Do stuff with 'pointsX1' and 'pointsY1'
wSF_temp=0;
wSB_temp=0;
for gg=1:size(pointsX1)
Temp=exp(-OMG1*(sqrt(double(A(pointsX1(gg),pointsY1(gg),1))-double(A(ee,ff,1)))^2+(double(A(pointsX1(gg),pointsY1(gg),2))-double(A(ee,ff,2)))^2 + (double(A(pointsX1(gg),pointsY1(gg),3))-double(A(ee,ff,3)))^2));
wSF_temp=wSF_temp+(Temp*V_rs(pointsX1(gg),pointsY1(gg)));
wSB_temp=wSB_temp+(Temp*(1-V_rs(pointsX1(gg),pointsY1(gg))));
end
wSF(ee,ff)= wSF_temp;
wSB(ee,ff)= wSB_temp;
V_Final(ee,ff)=V_rs(ee,ff)/(V_rs(ee,ff)+(wSB(ee,ff)/wSF(ee,ff))*(1-V_rs(ee,ff)));
end
end
imshow(V_Final,[]); %%Saliency map of the image
Part of your terminating criterion is this:
max(abs(Y_a(:,t)-Y_a(:,t-1)))<=eps
Say Y_a tends to 2. You are really close... In fact, the closest you can get without subsequent values being identical is Y_a(t)-Y_a(t-1) == 4.4409e-16. If the two values were any closer, their difference would be 0, because this is the precision with which floating-point values can be represented. So you have reached this fantastic level of closeness to your goal. Subsequent iterations are changing the target value by the smallest possible amount, 4.4409e-16. But your test is returning false! Why? Because eps == 2.2204e-16!
eps is shorthand for eps(1), the difference between 1 and the next larger representable value. Because of how floating-point values are represented, this difference is half the difference between 2 and the next larger representable value (which is given by eps(2)).
However, if Y_a tends to 1e-16, subsequent iterations could double or halve the value of Y_a and you'd still meet the stopping criterion!
Thus, what you need is to come up with a reasonable stopping criterion that is a fraction of the target value, something like this:
max(abs(Y_a(:,t)-Y_a(:,t-1))) <= 1e6 * eps(max(abs(Y_a(:,t))))
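For reference, a quick check of the eps values discussed above:
eps(1)               % 2.2204e-16, the default "eps"
eps(2)               % 4.4409e-16, the spacing of doubles around 2
eps(2) == 2*eps(1)   % true: the spacing doubles with each power of two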
Unsolicited advice
You should really look into vectorized operations in MATLAB. For example,
for y=1:N
Temp_sig_a=0;
for z=1:N
Temp_sig_a = Temp_sig_a + abs(Y_a(y,t)- Y_a(z,t));
end
sigma_a(t-1,y)= Temp_sig_a;
end
can be written as
for y=1:N
sigma_a(t-1,y) = sum(abs(Y_a(y,t) - Y_a(:,t)));
end
which in turn can be written as
sigma_a(t-1,:) = sum(abs(Y_a(:,t).' - Y_a(:,t)));
Avoiding loops is not only usually more efficient, but it also leads to shorter code that is easier to read.
Also, this:
P_FB_a = diag(sigma_a(t-1,:));
u_a(:,t) = u_a(:,t-1) + P_FB_a * Y_a(:,t);
is the same as
u_a(:,t) = u_a(:,t-1) + sigma_a(t-1,:).' .* Y_a(:,t);
but of course creating a diagonal matrix and doing a matrix multiplication with so many zeros is much more expensive than directly computing an element-wise multiplication.

How to implement Roulette Wheel Selection and Rank Selection in MATLAB code for the Traveling Salesman Problem?

I have an assignment coding a genetic algorithm for the traveling salesman problem. I've written some code giving correct results using Tournament Selection.
The problem is, I have to implement Wheel and Rank selection as well, and the results I get are incorrect.
Here is my code using Tournament Selection:
clc;
clear all;
close all;
nofCities = 30;
initialPopulationSize = nofCities*nofCities;
generations = nofCities*ceil(nofCities/10);
cities = floor(rand([nofCities 2])*100+1);
figure;
hold on;
scatter(cities(:,1), cities(:,2), 5, 'b','fill');
line(cities(:,1), cities(:,2));
line(cities([1 end],1), cities([1 end],2));
axis([0 110 0 110]);
population = zeros(initialPopulationSize ,nofCities);
for i=1:initialPopulationSize
population(i,:) = randperm(nofCities);
end
distanceMatrix = zeros(nofCities);
for i=1:nofCities
for j=1:nofCities
if (i==j)
distanceMatrix(i,j)=0;
else
distanceMatrix(i,j) = sqrt((cities(i,1)-cities(j,1))^2+(cities(i,2)-cities(j,2))^2);
end
end
end
for u=1:generations
tourDistance = zeros(initialPopulationSize ,1);
for i=1:initialPopulationSize
for j=1:length(cities)-1
tourDistance(i) = tourDistance(i) + distanceMatrix(population(i,j),population(i,j+1));
end
end
for i=1:initialPopulationSize
tourDistance(i) = tourDistance(i) + distanceMatrix(population(i,end),population(i,1));
end
min(tourDistance)
newPopulation = zeros(initialPopulationSize,nofCities);
for k=1:initialPopulationSize
child = zeros(1,nofCities);
%tournament start
for i=1:5
tournamentParent1(i) = ceil(rand()*initialPopulationSize);
end
p1 = find(tourDistance == min(tourDistance([tournamentParent1])));
parent1 = population(p1(1), :);
for i=1:5
tournamentParent2(i) = ceil(rand()*initialPopulationSize);
end
p2 = find(tourDistance == min(tourDistance([tournamentParent2])));
parent2 = population(p2(1), :);
%tournament end
%crossover
startPos = ceil(rand()*(nofCities/2));
endPos = ceil(rand()*(nofCities/2)+10);
for i=1:nofCities
if (i>startPos && i<endPos)
child(i) = parent1(i);
end
end
for i=1:nofCities
if (isempty(find(child==parent2(i))))
for j=1:nofCities
if (child(j) == 0)
child(j) = parent2(i);
break;
end
end
end
end
newPopulation(k,:) = child;
end
%mutation
mutationRate = 0.015;
for i=1:initialPopulationSize
if (rand() < mutationRate)
pos1 = ceil(rand()*nofCities);
pos2 = ceil(rand()*nofCities);
mutation1 = newPopulation(i,pos1);
mutation2 = newPopulation(i,pos2);
newPopulation(i,pos1) = mutation2;
newPopulation(i,pos2) = mutation1;
end
end
population = newPopulation;
u
end
figure;
hold on;
scatter(cities(:,1), cities(:,2), 5, 'b','fill');
line(cities(population(i,:),1), cities(population(i,:),2));
line(cities([population(i,1) population(i,end)],1), cities([population(i,1) population(i,end)],2));
axis([0 110 0 110]);
%close all;
What I want is to replace the tournament code with wheel and rank code.
Here is what I wrote for the Wheel Selection:
fitness = tourDistance./sum(tourDistance);
wheel = cumsum(fitness);
parent1 = population(find(wheel >= rand(),1),:);
parent2 = population(find(wheel >= rand(),1),:);
Here is a vectorized implementation of a roulette wheel selection in Matlab:
[~,W] = min(ones(popSize,1)*rand(1,2*popSize) > ((cumsum(fitness)*ones(1,2*popSize)/sum(fitness))),[],1);
This assumes that the fitness input into the selection scheme is a matrix of size (popSize x 1) (or a column vector of the same size as the number of population members).
popSize is the number of members in your population, and W holds the winners, i.e. the population members that are selected to become parents for the crossover.
The output of the selection, W, is a row vector of length 2*popSize containing the indices of all the population members that will be used in the crossover stage; a hypothetical usage example is sketched below.
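As a hypothetical usage example with the question's variables (assuming fitness is already a "bigger is better" column vector of length popSize):
popSize = initialPopulationSize;
[~,W] = min(ones(popSize,1)*rand(1,2*popSize) > ((cumsum(fitness)*ones(1,2*popSize)/sum(fitness))),[],1);
% inside the k-th crossover iteration the two parents would then be
% parent1 = population(W(2*k-1), :);
% parent2 = population(W(2*k), :);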
This row vector can then be input into a vectorized crossover scheme that could look something like this:
%% Single-Point Preservation Crossover
Pop2 = Pop(W(1:2:end),:); % Pop2 Winners 1
P2A = Pop(W(2:2:end),:); % Pop2 Winners 2
Lidx = sub2ind(size(Pop),[1:popSize]',round(rand(popSize,1)*(genome-1)+1));
vLidx = P2A(Lidx)*ones(1,genome);
[r,c]=find(Pop2==vLidx);
[~,Ord]=sort(r);
r = r(Ord); c = c(Ord);
Lidx2 = sub2ind(size(Pop),r,c);
Pop2(Lidx2) = Pop2(Lidx);
Pop2(Lidx) = P2A(Lidx);
This crossover assumes an input of the W variable from the selection scheme. It also uses Pop, which is the population stored in a popSize-by-genome matrix (genome is the number of cities in one tour and thus also the length of the genome). The genome is stored as an array of integers, with each integer being a city, and the tour is defined by the order of the values from the array's first index to its last.
While we are at it, we may as well include a nice vectorized mutation scheme for a permutation genetic algorithm (which this is).
%% Mutation (Permutation)
idx = rand(popSize,1)<mutRate;
Loc1 = sub2ind(size(Pop2),1:popSize,round(rand(1,popSize)*(genome-1)+1));
Loc2 = sub2ind(size(Pop2),1:popSize,round(rand(1,popSize)*(genome-1)+1));
Loc2(idx == 0) = Loc1(idx == 0);
[Pop2(Loc1), Pop2(Loc2)] = deal(Pop2(Loc2), Pop2(Loc1));
This mutation randomly flips the order of 2 cities in our tour (genome).
Finally make sure to update your population after all of that work we did!
%% Update Population!
Pop = Pop2; % updates the population to include crossovers and mutation.
So I know this reply is probably way too late for your assignment, but hopefully it will help someone else with a similar problem.
I REALLY REALLY recommend anyone interested in vectorized genetic algorithms in MATLAB to read this paper: UCL: Efficiently Vectorized Code for Population Based Optimization Algorithms
It is what I based all of the example code on, and it will teach you why you write the code that way. It's a great resource and it is what got me started with GAs.
For wheel selection to work, you should start by designing a fitness measure where fitter individuals have a bigger value, in contrast to the tour distance, where better individuals have a smaller value. Then your approach with the cumsum should work; see the sketch below.
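A minimal sketch, reusing the wheel code from the question and only changing the fitness definition (the inversion 1./tourDistance is one possible choice, not the only one):
fitness = 1 ./ tourDistance;            % shorter tour -> larger fitness
fitness = fitness ./ sum(fitness);      % normalize to selection probabilities
wheel   = cumsum(fitness);
parent1 = population(find(wheel >= rand(), 1), :);
parent2 = population(find(wheel >= rand(), 1), :);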
Where is the issue with ranking selection?

How to display error map of two binary images by matlab

I have two binary images: a ground truth image A and a test image B. I want to calculate the Dice similarity coefficient, defined here.
Calculating it is very easy. Here is some sample code:
function dist = dist_Dice(A,B)
% Calculation of the Dice Coefficient
idx_img = find(B== 1);
idx_ref = find(A== 1);
idx_inter = find((B== 1) & (A== 1));
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img));
The result is a single number, but my task is to show this result visually as an image whose values range from 0 to 1. I have no idea how to do this. I think it is similar to overlapping the two images, where overlapping regions get pixel value 0 and everything else gets value 1. Could you help me implement this in MATLAB?
I don't know if something like this is close to what you have in mind in terms of visualising the differences. As you pointed out, the quantity you are interested in is a scalar, so there aren't too many options.
RandStream.setDefaultStream(RandStream('mt19937ar','seed',0)); % For reproducibility of results
a = rand(10);
b = rand(10);
A = im2bw(a, graythresh(a));
subplot(2,2,1)
imshow(A, 'InitialMagnification', 'fit')
title('A (ground truth image)')
B = im2bw(b, graythresh(b));
subplot(2,2,2)
imshow(B, 'InitialMagnification', 'fit')
title('B (test image)')
idx_img = find(B);
idx_ref = find(A);
idx_inter = find(A&B);
common_white = zeros(size(A));
common_white(idx_inter) = 1;
subplot(2,2,3)
imshow(common_white, 'InitialMagnification', 'fit')
title('White in both pictures')
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img))
idx_img = find(~B);
idx_ref = find(~A);
idx_inter = find(~A&~B);
common_black = ones(size(A));
common_black(idx_inter) = 0;
subplot(2,2,4)
imshow(common_black, 'InitialMagnification', 'fit')
title('Black in both pictures')
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img))
I think you are looking for this:
AB = false(size(A));
AB(idx_inter) = true;
figure, imshow(AB)
Generally, with binary images, note that you don't have to do the ==1 part. Also, if you just need to know how many ones there are in an image, you don't need to use find and then length; you can simply sum over the binary image:
AB = A&B
imshow(AB);
dist = 2*sum(AB(:))/(sum(A(:))+sum(B(:)));
I find that (sorry, couldn't resist) m = find(A) is, for a 256 x 256 image, about twice as quick as the ==1 equivalent.
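If what you want to display is where the two masks disagree rather than where they agree, one possible sketch (not taken from the answers above) is to show the exclusive-or of the two images:
errorMap = xor(A, B);     % 1 where A and B disagree, 0 where they agree
figure, imshow(errorMap)
title(sprintf('Disagreement map, Dice = %.3f', 2*sum(A(:)&B(:))/(sum(A(:))+sum(B(:)))))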

DFT Matlab function

I've written a function that calculates the DFT of an image. My purpose is to show the amplitude spectrum without using the fftshift command. DFT_img.m looks like this:
function f = DFT_img(a);
[n m]=size(a);
for i =1:n
k=1;
for j =1:n
l=1;
F(i,j)=(1/n*n)*a(i,j)*exp(-i*2*pi*(k*i+l*j)/n);
l=l+1;
end
k=k+1;
end
f=F;
And when I run the following commands in the Command Window
A = imread("lena.tiff");
ans = DFT_img(A);
spectrum = log(abs(ans));
mesh(spectrum)
I am not getting the same result that the
fftshift MATLAB function
gives.
Is there a mistake in my function, or where is the problem?
Your code is not a 2D DFT, at all.
The easiest way to write a 2D DFT is to perform a 1D DFT on each of the columns, and then perform a 1D DFT on each of the rows of the results.
In pseudocode:
[n, m] = size(a);
temp = zeros(size(a));
f = zeros(size(a));
for i = 1:m
    temp(:,i) = dft(a(:,i));   % 1D DFT of each column
end
for j = 1:n
    f(j,:) = dft(temp(j,:));   % 1D DFT of each row of the intermediate result
end
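The dft() call above is left undefined in the pseudocode; a minimal (slow) 1-D DFT that could be dropped into a dft.m file for testing might look like this, although in practice MATLAB's built-in fft does the same job much faster:
function X = dft(x)
% Naive O(N^2) 1-D DFT, returned with the same orientation as the input vector
N = numel(x);
n = 0:N-1;
X = reshape(exp(-1i*2*pi/N * (n.' * n)) * x(:), size(x));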
Your code does not look like a DFT and has some issues with the index numbers: note that the DFT is defined as a sum of complex exponentials over 0:N-1, not 1:N as in your code. Long story short: try this function (paste it into a kmv_dft2.m file):
function F = kmv_dft2(f)
[M, N] = size(f);
F = zeros(M,N);
for k = 0:M-1
    for l = 0:N-1
        F(k+1,l+1) = dft_atomic_sum(f,k,l,M,N);
    end
end

%% computing the core sum of the 2D DFT
function F_kl = dft_atomic_sum(f,k,l,M,N)
F_kl = 0;
for m=0:M-1
    for n=0:N-1
        F_kl = F_kl + f(m+1,n+1)*exp(-1i*2*pi*( k*m/M + l*n/N) );
    end
end
Now check this and compare it with MATLAB's FFT implementation:
%%% input 2D signal
f = magic(5)
%% Fast Fourier Transform
F_fft = fftshift(fft2(f))
%% Slow Discrete Fourier Transform
F = fftshift(kmv_dft2(f))
You may want to consult the MATLAB help for the fftshift function and what exactly it does. This fftshift function is a source of constant confusion among novices, and I even wrote a short explanation about it for my own students.
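As a small illustration of what fftshift does (it swaps the quadrants of the spectrum so that the zero-frequency term moves to the centre):
X = fft2(magic(4));
X(1,1)     % the DC term sits in the top-left corner of the fft2 output
Y = fftshift(X);
Y(3,3)     % after fftshift the same DC term is at the centre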

How to speed this kind of for-loop?

I would like to compute the maximum of translated images along the direction of a given axis. I know about ordfilt2; however, I would like to avoid using the Image Processing Toolbox.
So here is the code I have so far:
imInput = imread('tire.tif');
n = 10;
imMax = imInput(:, n:end);
for i = 1:(n-1)
imMax = max(imMax, imInput(:, i:end-(n-i)));
end
Is it possible to avoid using a for-loop in order to speed the computation up, and, if so, how?
First edit: Using Octave's code for im2col is actually 50% slower.
Second edit: Pre-allocating did not appear to improve the result enough.
sz = [size(imInput,1), size(imInput,2)-n+1];
range_j = 1:size(imInput, 2)-sz(2)+1;
range_i = 1:size(imInput, 1)-sz(1)+1;
B = zeros(prod(sz), length(range_j)*length(range_i));
counter = 0;
for j = range_j % left to right
for i = range_i % up to bottom
counter = counter + 1;
v = imInput(i:i+sz(1)-1, j:j+sz(2)-1);
B(:, counter) = v(:);
end
end
imMax = reshape(max(B, [], 2), sz);
Third edit: I shall show the timings.
For what it's worth, here's a vectorized solution using IM2COL function from the Image Processing Toolbox:
imInput = imread('tire.tif');
n = 10;
sz = [size(imInput,1) size(imInput,2)-n+1];
imMax = reshape(max(im2col(imInput, sz, 'sliding'),[],2), sz);
imshow(imMax)
You could perhaps write your own version of IM2COL, as it simply consists of well-crafted indexing, or even look at how Octave implements it.
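One toolbox-free alternative (a sketch, not taken from the answers above) is to gather the n shifted copies as pages of a 3-D array and take the maximum across the pages; bsxfun is used so it also runs on older MATLAB versions:
imInput = imread('tire.tif');
n = 10;
cols  = size(imInput,2) - n + 1;
idx   = bsxfun(@plus, 1:cols, (0:n-1)');           % n-by-cols matrix of column indices
stack = reshape(imInput(:, idx'), size(imInput,1), cols, n);
imMax = max(stack, [], 3);                          % maximum over the n shifted copies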
Check out the answer to this question about doing a rolling median in C. I've successfully made it into a MEX function and it is way faster than even ordfilt2. It will take some work to adapt it to a max, but I'm sure it's possible.
Rolling median in C - Turlach implementation
