How to display an error map of two binary images in MATLAB

I have two binary images, referred to as the ground truth image A and the test image B. I want to calculate the Dice coefficient similarity defined here.
Calculating it is easy. Here is some sample code:
function dist = dist_Dice(A,B)
% Calculation of the Dice Coefficient
idx_img   = find(B == 1);
idx_ref   = find(A == 1);
idx_inter = find((B == 1) & (A == 1));
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img));
The result is a single number, but my task is to show this result visually as an image, with values in the range 0 to 1. I have no idea how to do this. I think it is similar to an overlap of the two images, where overlapping regions have pixel value 0 and everything else has value 1. Could you help me implement this in MATLAB?
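One way to express that idea directly (a sketch, assuming A and B are logical images of the same size):
% 0 where both images are white (the overlap), 1 everywhere else
overlap_map = ~(A & B);
% or a per-pixel error map: 1 where A and B disagree, 0 where they agree
error_map = xor(A, B);
figure, imshow(error_map)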

I don't know if something like that is close to what you have in mind in terms of visualising the differences. As you pointed out, the quantity in which you are interested is a scalar, so there aren't too many options.
RandStream.setDefaultStream(RandStream('mt19937ar','seed',0)); % For reproducibility of results
a = rand(10);
b = rand(10);
A = im2bw(a, graythresh(a));
subplot(2,2,1)
imshow(A, 'InitialMagnification', 'fit')
title('A (ground truth image)')
B = im2bw(b, graythresh(b));
subplot(2,2,2)
imshow(B, 'InitialMagnification', 'fit')
title('B (test image)')
idx_img = find(B);
idx_ref = find(A);
idx_inter = find(A&B);
common_white = zeros(size(A));
common_white(idx_inter) = 1;
subplot(2,2,3)
imshow(common_white, 'InitialMagnification', 'fit')
title('White in both pictures')
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img))
idx_img = find(~B);
idx_ref = find(~A);
idx_inter = find(~A&~B);
common_black = ones(size(A));
common_black(idx_inter) = 0;
subplot(2,2,4)
imshow(common_black, 'InitialMagnification', 'fit')
title('Black in both pictures')
dist = 2*length(idx_inter)/(length(idx_ref)+length(idx_img))

I think you are looking for this -
AB = false(size(A));
AB(idx_inter) = true;
figure, imshow(AB)

Generally, with binary images, note that you don't have to do the ==1 part. Also, if you just need to know how many ones there are in an image, you don't need to use find and then length, you can just sum over a binary image:
AB = A & B;
imshow(AB);
dist = 2*sum(AB(:))/(sum(A(:))+sum(B(:)));
I find that (sorry, couldn't resist) m = find(A) is, for a 256 x 256 image, about twice as quick as the == 1 equivalent.
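If you want to check that timing yourself, a quick sketch with timeit (using a random 256 x 256 logical image as an example) could look like this:
A = rand(256) > 0.5;              % example 256 x 256 binary image
t1 = timeit(@() find(A));         % logical input directly
t2 = timeit(@() find(A == 1));    % with the redundant == 1 comparison
fprintf('find(A): %.3g s, find(A==1): %.3g s\n', t1, t2);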

Related

NLMS algorithm is not converging; multiple implementations give the same result

The Normalized Least Mean Squares (NLMS) algorithm is used in digital filtering. It basically tries to imitate an "unknown" filter so that their difference (which is considered the error) tends to zero. The sign of convergence is that the error starts very high and keeps getting smaller as the algorithm runs.
The only difference between NLMS and LMS (its predecessor) is that NLMS normalizes by the input of the filter, so it is not as sensitive to high input power.
There are equations for both algorithms on the wiki page https://en.wikipedia.org/wiki/Least_mean_squares_filter, which my implementation follows.
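For reference, the two update rules are roughly (with step size mu, input vector x(n), and error e(n) = d(n) - w(n)'*x(n)):
LMS:  w(n+1) = w(n) + mu*e(n)*x(n)
NLMS: w(n+1) = w(n) + mu*e(n)*x(n) / (x(n)'*x(n))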
I'm currently using an adaptive plant so I can filter a white-noise input with a lowpass filter and try to adapt my algorithm's coefficients to imitate that lowpass. It is implemented in MATLAB:
clear all; close all; clc;

% Read the white-noise input signal
fid = fopen('ruidoBranco.pcm', 'rb');
s = fread(fid, 'int16');
fclose(fid);

itera = length(s);            % number of iterations (samples)
L = 50;                       % filter length
passo = 0.00000000001;        % step size
H = passaBaixa(L,1000,2);     % "unknown" plant: lowpass filter to be imitated
W = zeros(L,1);               % adaptive filter coefficients
y = zeros(itera,1);
sav_erro = zeros(itera,1);

for i = (L+1):itera
    D = 0;
    Y = 0;
    for j = 1:L
        Y = Y + W(j,1)*s(i-j,1);    % adaptive filter output
        D = D + H(j,1)*s(i-j,1);    % desired (plant) output
    end
    erro = D - Y;
    k = passo*erro;
    for j = 1:L
        W(j,1) = W(j,1) + (k*s(i-j,1)/(s(i-j,1)^2 + 0.000001));
    end
    sav_erro(i,1) = erro;
end

subplot(2,1,1);
plot(sav_erro);
subplot(2,1,2);
plot(W);
hold on;
plot(H,'r');

% Save the error signal
fid = fopen('saidaFIR.pcm', 'wb');
fwrite(fid, sav_erro, 'int16');
fclose(fid);
The "passaBaixa" function is the lowpass filter that I was saying before:
function H = passaBaixa(M,FC,op)
    % Windowed-sinc lowpass filter of length M, cutoff FC in Hz (FS = 8000 Hz)
    iteracoes = M;
    FS = 8000;
    FC = FC/FS;
    M = iteracoes;
    H = zeros(iteracoes,1);
    for i = 1:iteracoes
        if (i - M/2 == 0)
            H(i,1) = 2*pi*FC;
        else
            H(i,1) = sin(2*pi*FC*(i-M/2))/(i-M/2);
        end
        if (op == 1)
            H(i,1) = H(i,1);                                               % no window (rectangular)
        elseif (op == 2)
            H(i,1) = H(i,1)*(0.42-0.5*cos(2*pi*i/M)+0.08*cos(4*pi*i/M));   % Blackman window
        else
            H(i,1) = H(i,1)*(0.54-0.46*cos(2*pi*i/M));                     % Hamming window
        end
    end
    % Normalize so the coefficients sum to 1 (unit DC gain)
    soma = sum(H);
    for i = 1:iteracoes
        H(i,1) = H(i,1)/soma;
    end
end
The file ruidoBranco.pcm is simply a white-noise signal with 80,000 samples.
The obtained result is the following: the top plot is the error and the bottom plot shows the impulse response of the lowpass filter (red) and the "adapted" filter of the algorithm (blue).
It is not converging. It should instead look like this: the top plot converging to almost zero error, and the bottom plot showing no blue at all because it lies behind the red curve (the coefficients almost perfectly adjusted to the filter).
I would like to know if there are any visible mistakes in my implementation; perhaps this can be a future reference for people with similar mistakes.
The fix was that I was not using the correct normalization for the algorithm: it needs the dot product of the input window with itself, not simply the square of a single sample.
So the fix is in the inner for loop:
% Energy (dot product) of the current input window, used to normalize the update
dotprod = dot(s(i-L:i-1,1), s(i-L:i-1,1));
for j = 1:L
    W(j,1) = W(j,1) + (k*s(i-j,1)/dotprod);
end
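For reference, a sketch of the same update without the inner j loops, using the same variables (s, W, H, L, passo, itera, sav_erro) as in the code above; the small constant in the denominator guards against division by zero:
for i = (L+1):itera
    x = s(i-1:-1:i-L, 1);                         % current input window, x(j) = s(i-j)
    D = H.' * x;                                  % desired (plant) output
    Y = W.' * x;                                  % adaptive filter output
    erro = D - Y;
    W = W + passo * erro * x / (x.' * x + 1e-6);  % normalized (NLMS) weight update
    sav_erro(i,1) = erro;
end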

MATLAB vectorization: computing a neighborhood matrix

Given two vectors X and Y of length n, representing points on the plane, and a neighborhood radius rad, is there a vectorized way to compute the neighborhood matrix of the points?
In other words, can the following (painfully slow for large n) loop be vectorized:
neighborhood_mat = zeros(n, n);
for i = 1 : n
    for j = 1 : i - 1
        dist = norm([X(j) - X(i), Y(j) - Y(i)]);
        if (dist < radius)
            neighborhood_mat(i, j) = 1;
            neighborhood_mat(j, i) = 1;
        end
    end
end
Approach #1
bsxfun based approach -
out = bsxfun(@minus,X,X').^2 + bsxfun(@minus,Y,Y').^2 < radius^2
out(1:n+1:end) = 0
Approach #2
Distance matrix calculation using matrix-multiplication based approach (possibly faster) -
A = [X(:) Y(:)]
A_t = A.';
out = [-2*A A.^2 ones(n,3)]*[A_t ; ones(3,n) ; A_t.^2] < radius^2
out(1:n+1:end) = 0
Approach #3
With pdist and squareform -
A = [X(:) Y(:)]
out = squareform(pdist(A))<radius
out(1:n+1:end)= 0
Approach #4
You can use pdist as with the previous approach, but avoid squareform with some logical indexing to get the final output of neighbourhood matrix as shown below -
A = [X(:) Y(:)]
dists = pdist(A) < radius
mask_lower = bsxfun(@gt,[1:n]',1:n)
% OR mask_lower = tril(true(n),-1)
mask_upper = bsxfun(@lt,[1:n]',1:n)
% OR mask_upper = triu(true(n),1)
% OR mask_upper = ~mask_lower; mask_upper(1:n+1:end) = false;
out = zeros(n)
out(mask_lower) = dists
out_t = out'
out(mask_upper) = out_t(mask_upper)
Note: As one can see, all of the above approaches pre-allocate the output. A fast way to pre-allocate is out(n,n) = 0, based on this wonderful blog post on undocumented MATLAB. This should really speed up those approaches!
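As a small illustration of that trick (a sketch; the variable must not already exist with a larger size):
clear out
out(n,n) = 0;   % grows out to an n-by-n matrix of zeros in one step, without calling zeros(n)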
The following approach is great if the number of points in your neighborhoods is small or you run low on memory using the brute-force approach:
If you have the statistics toolbox installed, you can have a look at the rangesearch method.
(Free alternatives include the k-d tree implementations of a range search on the File Exchange.)
The usage of rangesearch is straightforward:
P = [X,Y];
[idx,D] = rangesearch(P, P, rad);
It returns a cell-array idx of the indices of nodes within reach and their distances D.
Depending on the size of your data, this could be beneficial in terms of speed and memory.
Instead of computing all pairwise distances and then filtering out those that are large, this algorithm builds a data structure called a k-d tree to more efficiently search close points.
You can then use this to build a sparse matrix:
I = cell2mat(idx.').';
J = runLengthDecode(cellfun(@numel,idx));
n = size(P,1);
S = sparse(I,J,1,n,n)-speye(n);
(This uses the runLengthDecode function from this answer.)
You can also have a look at the KDTreeSearcher class if your data points don't change and you want to query your data lots of times.
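A minimal sketch of that reuse pattern (assuming the Statistics Toolbox is available):
P = [X(:) Y(:)];
tree = KDTreeSearcher(P);            % build the k-d tree once
idx1 = rangesearch(tree, P, rad);    % first query
idx2 = rangesearch(tree, P, 2*rad);  % later queries reuse the same tree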

Vectorizing distance calculation between vectors

I have a 3 x 1000 (and later 3 x 10000) matrix cord, which contains the three-dimensional coordinates of my pixels.
My intention is to calculate the distance between all the pixels, and I currently do it with a for loop (see below). I will soon have to do this for huge matrices, so I am wondering whether the code can be vectorized to make it faster.
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
    for j = 1:size(cord,2)
        dist(i,j) = norm(cord(:,i)-cord(:,j));
        dist(j,i) = dist(i,j);
    end
end
pdist does exactly that. squareform is needed to get the result in the form of a square, symmetric matrix:
dist = squareform(pdist(cord.'));
Approach 1 (vectorized approach with bsxfun) -
squeeze(sqrt(sum(bsxfun(@minus,cord,permute(cord,[1 3 2])).^2)))
Not sure if this will be faster though.
Approach 2 -
Inspired by this very smart approach and all credits to the poster. The code posted here is just slightly customized for your case and hopefully slightly better in terms of runtime. Here it is -
A = cord';
numA = size(cord,2);
helpA = ones(numA,9);
helpB = ones(numA,9);
for idx = 1:3
    sqA_idx = A(:,idx).^2;
    helpA(:,3*idx-1:3*idx) = [-2*A(:,idx), sqA_idx];
    helpB(:,3*idx-2:3*idx-1) = [sqA_idx, A(:,idx)];
end
dist1 = sqrt(helpA * helpB'); % desired output
From your code, you have recognized that the dist matrix is symmetric
dist(i,j) = norm(cord(:,i)-cord(:,j));
dist(j,i) = dist(i,j);
You could change the inner loop to account for this and reduce by roughly one half the number of calculations needed
for j = i:size(cord,2)
Further, we can avoid the dist(j,i) = dist(i,j); assignment at each iteration and just do it at the end, by extracting the upper triangular part of dist and adding its transpose to dist to account for the symmetry:
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
    for j = i:size(cord,2)
        dist(i,j) = norm(cord(:,i)-cord(:,j));
    end
end
dist = dist + triu(dist)';
The above addition is fine since the main diagonal is all zeros.
It still performs poorly though, so we should take advantage of vectorization. We can do that for the inner loop as follows:
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
    dist(i,i+1:end) = sum((repmat(cord(:,i),1,size(cord,2)-i)-cord(:,i+1:end)).^2);
end
dist = dist + triu(dist)';
dist = sqrt(dist);
For every element in cord we need to calculate its distance with all other elements that follow it. We reproduce the element with repmat so that we can subtract it from every element that follows without the need for the loop. The differences are squared and summed and assigned to the dist matrix. We take care of the symmetry and then take the square root of the matrix to complete the norm operation.
With tic and toc, the original distance calculation with a random cord (cord = rand(3,num);) took ~93 seconds. This version took ~2.8 seconds.
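If the Statistics Toolbox is available, pdist2 gives the full square distance matrix directly (a sketch, not timed here):
dist = pdist2(cord.', cord.');   % N-by-N matrix of Euclidean distances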

How to overlay several images in Matlab?

I have the images A, B and C. How can I overlay these images in MATLAB to produce D? I have at least 50 images to process. Thanks.
Please see the images here.
Download images:
A: https://docs.google.com/open?id=0B5AOSYBy_josQ3R3Y29VVFJVUHc
B: https://docs.google.com/open?id=0B5AOSYBy_josTVIwWUN1a085T0U
C: https://docs.google.com/open?id=0B5AOSYBy_josLVRwQ3JNYmJUUFk
D: https://docs.google.com/open?id=0B5AOSYBy_josd09TTFE2VDJIMzQ
To fade the images together:
Well, since images in MATLAB are just matrices, you can add them together:
D = A + B + C
Of course, if the images don't have the same dimensions, you will have to crop all of them to the dimensions of the smallest one.
The more images you add this way, the larger the pixel values get. It might be beneficial to display the result with imshow(D, []), where the empty matrix argument tells imshow to scale the display to the actual minimum and maximum values in D.
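A minimal sketch of that fade approach (assuming A, B and C are grayscale images of the same size):
D = im2double(A) + im2double(B) + im2double(C);
imshow(D, []);    % [] rescales the display to the range of values in D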
To replace changed parts of original image:
Create a function combine(a,b).
Pseudocode:
# create empty answer matrix
c = zeros(width(a), height(a))
# compare each pixel in a to each pixel in b
for x in 1..width
    for y in 1..height
        p1 = a(x,y)
        p2 = b(x,y)
        if (p1 != p2)
            c(x,y) = p2
        else
            c(x,y) = p1
        end
    end
end
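A minimal MATLAB sketch of that pseudocode (assuming a and b are matrices of the same size), without the explicit loops:
function c = combine(a, b)
% Keep the original pixel where a and b agree, take the pixel from b where they differ
c = a;
changed = (a ~= b);
c(changed) = b(changed);
end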
Use this combine(a,b) function like so:
D = combine(combine(A,B),C)
or in a loop:
D = combine(images(1), images(2));
for i = 3:numImages
    D = combine(D, images(i));
end
Judging from the example, it seems to me that the operation requested is a trivial case of "alpha compositing" in the specified order.
Something like this should work - don't have matlab handy right now, so this is untested, but it should be correct or almost so.
function abc = composite(a, b, c)
m = size(a,1); n = size(a,2);
abc = zeros(m, n, 3);
for i = 1:3
    % Vectorize the i-th channel of a, add it to the accumulator.
    ai = a(:,:,i);
    acc = ai(:);
    % Vectorize the i-th channel of b, replace its nonzero pixels in the accumulator.
    bi = b(:,:,i);
    bi = bi(:);
    z = (bi ~= 0);
    acc(z) = bi(z);
    % Likewise for c.
    ci = c(:,:,i);
    ci = ci(:);
    z = (ci ~= 0);
    acc(z) = ci(z);
    % Place the result in the i-th channel of abc.
    abc(:,:,i) = reshape(acc, m, n);
end

MATLAB loop optimization

I have a matrix, matrix_logical(50000,100000), that is a sparse logical matrix (a lot of false values, some true). I have to produce a matrix, intersect(50000,50000), that, for each pair (i,j) of rows of matrix_logical(50000,100000), stores the number of columns in which rows i and j both have the value true.
Here is the code I wrote:
% store in advance the nonzeros cols
for i = 1:50000
    nonzeros{i} = num2cell(find(matrix_logical(i,:)));
end
intersect = zeros(50000,50000);
for i = 1:49999
    a = cell2mat(nonzeros{i});
    for j = (i+1):50000
        b = cell2mat(nonzeros{j});
        intersect(i,j) = numel(intersect(a,b));
    end
end
Is it possible to further increase the performance? It takes too long to compute the matrix, and I would like to avoid the double loop in the second part of the code.
matrix_logical is sparse, but it is not stored as a sparse matrix in MATLAB, because that makes the performance much worse.
Since the (i,j) entry counts the number of nonzero elements in the element-wise multiplication of rows i and j, you can compute it by multiplying matrix_logical with its transpose (you should convert to a numeric data type first, e.g. matrix_logical = single(matrix_logical)):
inter = matrix_logical * matrix_logical';
And it works for both sparse and full representations.
EDIT
In order to calculate numel(intersect(a,b))/numel(union(a,b)); (as asked in your comment), you can use the fact that for two sets a and b, you have
length(union(a,b)) = length(a) + length(b) - length(intersect(a,b))
so, you can do the following:
unLen = sum(matrix_logical,2);
tmp = repmat(unLen, 1, length(unLen)) + repmat(unLen', length(unLen), 1);
inter = matrix_logical * matrix_logical';
inter = inter ./ (tmp-inter);
If I understood you correctly, you want a logical AND of the rows:
intersct = zeros(50000, 50000);
for ii = 1:49999
    for jj = ii:50000
        intersct(ii, jj) = sum(matrix_logical(ii, :) & matrix_logical(jj, :));
        intersct(jj, ii) = intersct(ii, jj);
    end
end
Doesn't avoid the double loop, but at least works without the first loop and the slow find command.
Elaborating on my comment, here is a distance function suitable for pdist()
function out = distfun(xi,xj)
out = zeros(size(xj,1),1);
for i = 1:size(xj,1)
    out(i) = sum(sum( xi & xj(i,:) )) / sum(sum( xi | xj(i,:) ));
end
In my experience, sum(sum()) is faster for logicals than nnz(), thus its appearance above.
You would also need to use squareform() to reshape the output of pdist() appropriately:
squareform(pdist(matrix_logical, @distfun));
Note that pdist() includes a 'jaccard' distance measure, but it is actually the Jaccard distance and not the Jaccard index or coefficient, which is the value you are apparently after.
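If you do want the built-in measure, one option (a sketch) is to convert the distance back to the index:
% pdist's 'jaccard' returns one minus the Jaccard coefficient of the nonzero coordinates
J = 1 - squareform(pdist(double(matrix_logical), 'jaccard'));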
