Failed to convert structure to matrix with regionprops in MATLAB

I am working on particle tracking in images in MATLAB, using the regionprops function. The documentation gives an example with circles:
stats = regionprops('table', bw, 'Centroid', ...
    'MajorAxisLength', 'MinorAxisLength')
centers = stats.Centroid;
diameters = mean([stats.MajorAxisLength stats.MinorAxisLength],2);
radii = diameters/2;
In my MATLAB R2014b, the line centers = stats.Centroid; produces an undesired result: my stats structure has 20 elements, and each element's Centroid is a pair of numbers (the coordinates of the centre of that region). However, after the command above, my variable centers is only a 1x2 matrix instead of the desired 20x2.
I tried to go around this with different methods. The only solution I found is to do:
t = zeros(20,2);
for i = 1:20
    t(i,:) = stats(i).Centroid;
end
However, as we all know, loops are slow in MATLAB. Is there another method that takes advantage of MATLAB's matrix operations?

Doing stats.Centroid would in fact give you a comma-separated list of centroids, so centers = stats.Centroid assigns only the first centroid to centers. What you must do is encapsulate the centroids in an array (i.e. [stats.Centroid]), then reshape when you're done.
Something like this should work for you:
centers = reshape([stats.Centroid], 2, []).';
What this does is read the centroids into a 1 x 2*M array, where M is the total number of blobs. Because MATLAB reshapes in column-major order, you should specify the number of rows as 2 and let MATLAB figure out the number of columns by itself. You then transpose the result to get the M x 2 matrix you want.
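As an aside (my own variation, not from the original answer), vertical concatenation of the comma-separated list achieves the same M x 2 result in one step, since each Centroid is a 1 x 2 row:
centers = vertcat(stats.Centroid);   % stacks each 1 x 2 centroid row into an M x 2 matrix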
Minor Note
If you look at the Tips section of the regionprops documentation page - http://www.mathworks.com/help/images/ref/regionprops.html#buorh6l-1 - you will see that they surround stats.Area (the area of each blob) with [] brackets to ensure that the comma-separated list of values is encapsulated in an array. That is not an accident; the purpose is exactly what was described above.
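For instance, assuming regionprops was also asked for 'Area' (a small illustration of the same idiom, not from the original):
areas = [stats.Area].';   % encapsulate the comma-separated list, then transpose to M x 1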

Related

MATLAB: Set points within plot polygon equal to zero

I am currently doing some seismic modelling and processing in MATLAB, and would like to come up with an easy way of muting parts of various datasets. If I plot the frequency-wavenumber spectrum of some of my data, for instance, I obtain the following result:
Now, say that I want to mute some of the data present here. I could of course attempt to run through the entire matrix represented here and specify a threshold value where everything above said value should be set equal to zero, but this will be very difficult and time-consuming when I later work with more complicated f-k spectra. I recently learned that MATLAB has an inbuilt function called impoly, which allows me to interactively draw a polygon in plots. So say, for instance, I draw the following polygon in my plot with the impoly function:
Is there anything I can do now to set all points within this polygon equal to zero? After defining the polygon as illustrated above, I haven't found out how to proceed in order to mute the information contained in the polygon, so if anybody can give me some help here, I would greatly appreciate it!
Yes, you can use the createMask function that's part of the impoly interface once you delineate the polygon in your figure. Once you create this mask, you can use it to index into your data and set the right regions to zero.
Here's a quick example using the pout.tif image in MATLAB:
im = imread('pout.tif');
figure; imshow(im);
h = impoly;
I get this figure and I draw a polygon inside this image:
Now, use the createMask function with the handle to the impoly call to create a binary mask that encapsulates this polygon:
mask = createMask(h);
I get this mask:
imshow(mask);
You can then use this mask to index into your data and set the right regions to 0. First make a copy of the original data then set the data accordingly.
im_zero = im;
im_zero(mask) = 0;
I now get this:
imshow(im_zero);
Note that this only applies to single-channel (2D) data. If you want to apply this to multi-channel (3D) data, then a channel-wise multiplication with the complement of the mask may be more prudent.
Something like this:
im_zero = bsxfun(@times, im, cast(~mask, class(im)));
The above code takes the complement of the polygon mask, converts it into the same class as the original input im, then performs an element-wise multiplication of this mask with each channel of the input separately. The result zeroes each spatial location defined by the mask over all channels.
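For example, on a colour image (a quick sketch; peppers.png is one of MATLAB's built-in demo images):
rgb = imread('peppers.png');
figure; imshow(rgb);
h = impoly;                                           % draw the polygon interactively
mask = createMask(h);                                 % 2D logical mask the size of the image
rgb_zero = bsxfun(@times, rgb, cast(~mask, class(rgb)));
imshow(rgb_zero);                                     % polygon region zeroed in all channels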

Sort labels of segmented image in kmeans based on cluster mean

I have a simple but, I think, interesting question. As you know, k-means can give a different result after each run due to the random initial cluster centres. However, assume I know that cluster 1 has a smaller mean value than cluster 2, cluster 2 has a smaller mean value than cluster 3, and so on. I want an algorithm that ensures the cluster with the smaller mean value is assigned the smaller cluster index.
This is my MATLAB code. If you have a shorter or clearer way, please suggest it.
%% K-means
num_cluster = 2;
nrows = size(Img_original,1);
ncols = size(Img_original,2);
I_1D = reshape(Img_original, nrows*ncols, 1);
[cluster_idx, mu] = kmeans(double(I_1D), num_cluster, 'distance', 'sqEuclidean', 'Replicates', 3);
cluster_label = reshape(cluster_idx, nrows, ncols);
%% Sort based on mu
[mu_sort, id_sort] = sort(mu);
idx = cell(1, num_cluster);
%% Save index order of mu
for i = 1:num_cluster
    idx{i} = find(cluster_label == id_sort(i));
end
%% Sort cluster labels based on mu
for i = 1:num_cluster
    cluster_label(idx{i}) = i;
end
It's unclear to me why you'd want to relabel the clusters based on the ordering of each centroid. You can simply use the labelling vector output by k-means to reference which cluster / centroid each point belongs to.
Nevertheless, the initial idea that you had to sort the centroids is a good one. The last part of your code seems rather inefficient because you're looping over each label and doing the reassignment. One thing I could perhaps suggest is to have a lookup table where the input is the original label and the output is the reordered labels based on the sorted centroids.
If you want to pursue this route, you can use a containers.Map where the keys are the labels in the sort order output by sort, and the values are the reordered labels, namely a vector that goes from 1 up to as many classes as you have. You need this because the second output of sort tells you where each value in the original array appears in the sorted result, so you must use this ordering to perform the relabelling properly.
In addition, I would use the sortrows function in MATLAB, not raw sort. Raw sort sorts each column / variable independently, which will give you the wrong centroids. It happens to work for grayscale images, where you only have one feature to consider (the grayscale value), but if you go beyond grayscale into RGB or whatever colour space you desire, raw sort will give incorrect results. You need to treat each row as a single point and sort the rows jointly.
Given your code, you'd do something like this:
%% K-means
num_cluster = 2;
nrows = size(Img_original,1);
ncols = size(Img_original,2);
I_1D = reshape(Img_original, nrows*ncols, 1);
[cluster_idx, mu] = kmeans(double(I_1D), num_cluster, 'distance', 'sqEuclidean', 'Replicates', 3);
%% Sort based on mu
[mu_sort, id_sort] = sortrows(mu);
%// New - Create lookup
lookup = containers.Map(id_sort, 1:size(mu_sort,1));
%// Relabel the vector
cluster_idx_sort = lookup.values(num2cell(cluster_idx));
cluster_idx_sort = [cluster_idx_sort{:}];
%// Reshape back to original image dimensions
cluster_label = reshape(cluster_idx_sort, nrows, ncols);
This should hopefully give you some speedup in your code.
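As a side note (my own variation, not in the original answer), you can avoid containers.Map entirely with a plain permutation vector, which does the relabelling in a single indexing operation:
%// Hypothetical alternative: invert the sort order to get a relabelling map
newLabel = zeros(num_cluster, 1);
newLabel(id_sort) = 1:num_cluster;            %// inverse permutation of id_sort
cluster_idx_sort = newLabel(cluster_idx);     %// vectorised relabelling
cluster_label = reshape(cluster_idx_sort, nrows, ncols);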
To double-check, I tried this on the cameraman.tif image that's part of the Image Processing Toolbox. Running the code gives me these cluster centres:
>> mu
mu =
153.3484
23.7291
Once I sort the clusters in ascending order, this is what I get for the ordering and for the centroids:
>> mu_sort
mu_sort =
23.7291
153.3484
>> id_sort
id_sort =
2
1
So that works as we expected... now if we display the original cluster label map before sorting on the centroids with:
cluster_label = reshape(cluster_idx, nrows, ncols);
imshow(cluster_label,[]);
... we get this image:
Now, if we run through the sorting logic and display the relabelled map:
imshow(cluster_label, []);
... we get this image:
This works as I expected: because the order of the centroids flipped, so did the colouring.

Matlab - use of principal components in finding longest axis of shape

I'm trying to use the pca function to find the longest axis of shapes in binary images. These are 2D images, so I'm expecting just two principal components. If I apply pca to the image itself I get many components.
My thoughts on this are that the matrix that pca acts on is treated such that rows are observations and columns are variables, so I need to convert my image into a list of the x,y coordinates of non-zero pixels. What function does this? Trying with find, this is what I have so far:
for k = 1:cellnum                               % for each cell...
    [nucleus, nucnum] = bwlabel(B5.*(cell==k)); % label nuclei in cell (Thanks @CapeCode)
    if nucnum == 1
        % other methods
        [row, col] = find(nucleus);
        [coeff, ~, eigen] = pca([row, col]);
        disp(coeff);
    end
end
I get two pairs of coefficients for each nucleus, as follows:
0.8327 0.5537
-0.5537 0.8327
0.9791 0.2036
-0.2036 0.9791
0.8546 0.5193
-0.5193 0.8546
so... am I actually doing what I think I'm doing?
Thanks,
Olly
Edit: Link to my earlier question regarding identification of overlapping objects, and Cape Code's elegant single-line solution - Matlab - Identifying objects in one image that overlap objects in another
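One way to sanity-check this approach (a quick sketch, not part of the original post) is to run the same steps on a synthetic blob whose long axis is known, and confirm that the first column of coeff points along it. This assumes the Statistics Toolbox pca is available:
% Hypothetical sanity check: an ellipse 40 px along columns, 10 px along rows
[X, Y] = meshgrid(1:101, 1:101);
blob = ((X-51)/40).^2 + ((Y-51)/10).^2 <= 1;
[row, col] = find(blob);
[coeff, ~, eigen] = pca([row, col]);
disp(coeff);        % first column should be close to [0; 1], i.e. the column direction
disp(sqrt(eigen));  % spread along each axis; the first should be about 4x the second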

How to average multiple images using Octave and matrix manipulation to reduce noise?

UPDATE
Here is my code, which is meant to add up the two matrices using element-by-element addition and then divide by two.
function [finish] = stackAndMeanImage (initFrame, finalFrame)
  cd 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
  pkg load image;
  i = initFrame;
  f = finalFrame;
  astr = num2str(i);
  tmp = imread(astr, 'jpg');
  d = f - i
  for a = 1:d
    a
    astr = num2str(i + 1);
    read_tmp = imread(astr, 'jpg');
    read_tmp = rgb2gray(read_tmp);
    tmp = tmp + read_tmp;
    tmp = tmp / 2;
  end
  imwrite(tmp, 'meanimage.JPG');
  finish = 'done';
end
Here are two example input images
http://imgur.com/5DR1ccS,AWBEI0d#1
And here is one output image
http://imgur.com/aX6b0kj
I am really confused as to what is happening. I have not implemented what the other answers have said yet though.
OLD
I am working on an image processing project where I am manually choosing images that are 'empty', i.e. contain only the background, so that my algorithm can compute differences and then do some more analysis. I have a simple piece of code that computes the mean of two images, which I have converted to grayscale matrices, but this only works for two images: when I take the mean of two, then take the mean of that result and the next image, and so on repeatedly, I end up with a washed-out white image that is absolutely useless. You can't even see anything.
I found that there is a function in MATLAB called imfuse that is able to combine images. I was wondering if anyone knew the process that imfuse uses to combine images; I am happy to implement it in Octave. Or does anyone know of, or has already written, a piece of code that achieves something similar to this? Again, I am not asking anyone to write code for me, just wondering what the process for this is and whether there are already pre-existing functions out there, which I have not found in my research.
Thanks,
AeroVTP
You should not end up with a washed-out image. Instead, you should end up with an image which is, technically speaking, temporally low-pass filtered. What this means is that half of the information content is from the last image, one quarter from the second-to-last image, one eighth from the third-to-last image, etc.
Actually, the effect in a moving image is similar to a display with slow response time.
If you are ending up with a white image, you are doing something wrong. nkjt's guess about data type problems is a good one. Another possibility is that you have forgotten to divide by two after summing the two images.
One more thing... If you are doing linear operations (such as averaging) on images, your image intensity scale should be linear. If you just use the RGB values or some grayscale values simply calculated from them, you may get bitten by the nonlinearity of the image. This property is called the gamma correction. (Admittedly, most image processing programs just ignore the problem, as it is not always a big challenge.)
As your project calculates differences of images, you should take this into account. I suggest using linearised floating point values. Unfortunately, the linearisation depends on the source of your image data.
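For instance, if the source is standard sRGB, the usual linearisation looks like this (a sketch; img is a hypothetical 8-bit input image):
% Hypothetical sRGB linearisation of an 8-bit image to linear floating point
srgb = double(img) / 255;                        % scale to [0, 1]
lin = srgb / 12.92;                              % linear segment near black
hi = srgb > 0.04045;
lin(hi) = ((srgb(hi) + 0.055) / 1.055) .^ 2.4;   % power-law segment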
On the other hand, averaging is often the most efficient way of reducing noise. So there you are on the right track, assuming the images are similar enough.
However, after having a look at your images, it seems that you may actually want to do something other than average them. If I understand your intention correctly, you would like to get rid of the cars in your road cam to obtain just the carless background, which you could then subtract from each image to get the cars.
If that is what you want to do, you should consider using a median filter instead of averaging. What this means is that you take for example 11 consecutive frames. Then for each pixel you have 11 different values. Now you order (sort) these values and take the middle (6th) one as the background pixel value.
If your road is empty most of the time (at least 6 frames of 11), then the 6th sample will represent the road regardless of the colour of the cars passing your camera.
If you have an empty road, the result from the median filtering is close to averaging. (Averaging is better with Gaussian white noise, but the difference is not very big.) But your averaging will be affected by white or black cars, whereas median filtering is not.
The problem with median filtering is that it is computationally intensive. I am very sorry, I speak very broken and ancient Octave, so I cannot give you any useful code. In MATLAB or PyLab you would stack, say, 11 images into an M x N x 11 array, and then use a single median command along the depth axis. (When I say intensive, I do not mean it couldn't be done in real time with your data. It can, but it is much more complicated than averaging.)
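In MATLAB/Octave terms, the stacking might look like this (a sketch; the frame file names are hypothetical, and the frames are assumed to be equally sized RGB images):
% Hypothetical sketch: per-pixel median background from 11 stacked frames
nFrames = 11;
first = double(rgb2gray(imread('frame01.jpg')));
stack = zeros([size(first), nFrames]);
stack(:,:,1) = first;
for k = 2:nFrames
  stack(:,:,k) = double(rgb2gray(imread(sprintf('frame%02d.jpg', k))));
end
background = median(stack, 3);   % median along the depth axis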
If you have really a lot of traffic, the road is visible behind the cars less than half of the time. Then the median trick will fail. You will need to take more samples and then find the most typical value, because it is likely to be the road (unless all cars have similar colours). There it will help a lot to use the colour image, as cars look more different from each other in RGB or HSV than in grayscale.
Unfortunately, if you need to resort to this type of processing, the path is slightly slippery and rocky. Average is very easy and fast, median is easy (but not that fast), but then things tend to get rather complicated.
Another BTW that came to mind: if you want a rolling average, there is a very simple and effective way to calculate it with an arbitrary length (an arbitrary number of frames to average):
# N is the number of images to average
# P[i] are the input frames
# S is a sum accumulator (sum of N frames)

# calculate the sum of the first N frames
S <- 0
I <- 0
while I < N
    S <- S + P[I]
    I <- I + 1

# save_img() saves an averaged image
while there are images to process
    save_img(S / N)
    S <- -P[I-N] + S + P[I]
    I <- I + 1
Of course, you'll probably want to use for-loops, and += and -= operators, but still the idea is there. For each frame you only need one subtraction, one addition, and one division by a constant (which can be modified into a multiplication or even a bitwise shift in some cases if you are in a hurry).
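In Octave-ish code, the same idea might look like this (a sketch; P is assumed to be a cell array of frames already loaded as doubles, and save_img is a placeholder for whatever output routine you use):
N = 11;                          % number of frames in the rolling window
S = zeros(size(P{1}));           % sum accumulator
for I = 1:N                      % sum of the first N frames
  S = S + P{I};
end
for I = N+1:numel(P)
  save_img(S / N);               % average of the current window
  S = S - P{I-N} + P{I};         % slide the window by one frame
end
save_img(S / N);                 % average of the final window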
I may have misunderstood your problem, but I think what you're trying to do is the following. Basically, read all images into a matrix and then use mean(). This is provided you are able to fit them all in memory.
function [finish] = stackAndMeanImage (ini_frame, final_frame)
  pkg load image;
  dir_path = 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
  n_frames = final_frame - ini_frame + 1;
  imgs = cell (1, 1, n_frames);
  ## read all images into a cell array
  for n = 1:n_frames
    fname = fullfile (dir_path, sprintf ("%i", ini_frame + n - 1));
    imgs{n} = rgb2gray (imread (fname, "jpg"));
  endfor
  ## create 3D matrix out of all frames and calculate mean across 3rd dimension
  imgs = cell2mat (imgs);
  avg = mean (imgs, 3);
  ## mean returns double precision so we cast it back to uint8 after
  ## rescaling it to the range [0 1]. This assumes the images were all
  ## originally uint8, but since they are jpgs, that's a safe assumption
  avg = im2uint8 (avg ./ 255);
  imwrite (avg, fullfile (dir_path, "meanimage.jpg"));
  finish = "done";
endfunction

How to find the centroid of different sections of an image?

I have an image that I want to divide into three parts, find the centroid of each part separately, and display the centroids on the original image. I used blkproc to divide the image into a [1 3] grid, but I can't display the centroids. Here is the code I wrote:
i = imread('F:\line3.jpg');
i2 = rgb2gray(i);
bw = im2bw(i2);
imshow(bw)
fun = @(x) regionprops(x, 'centroid');
b = blkproc(bw, [1 3], fun);
But I can't get the centroids to display, or even get their values. Any help will be much appreciated.
You can just use the plot command to plot over the top of the image.
Whatever your [X,Y] centroid coordinates are - say cx(1:3) and cy(1:3) - you can plot them over the image, where numCentroids is the number of centroids you are plotting:
hold on;
for ii = 1:numCentroids
    plot(cx(ii), cy(ii), 'Marker', 's', 'MarkerSize', 10, 'MarkerFaceColor', 'r', 'MarkerEdgeColor', 'k')
end
If you wanted to write more elegant code, you could run the plot command once across all your centroids and make the line style invisible, as in the sketch below. The answer I supplied should work though.
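That single-call version might look like this (a sketch):
hold on;
plot(cx, cy, 'LineStyle', 'none', 'Marker', 's', 'MarkerSize', 10, ...
     'MarkerFaceColor', 'r', 'MarkerEdgeColor', 'k');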
Here's an example image with made up centroids.
Strong recommendation - use blockproc instead of blkproc. It is better designed and easier to use.
Now, first of all, the second input to blockproc is the block size, not the grid size. So if you want to divide your image into a [1 3] grid - which I understand as a single row of three blocks - then you should set your block size as:
blocksize = [size(i,1) ceil(size(i,2)/3)];
The second thing is to turn off the 'TrimBorder' parameter in blockproc. Note also that, unlike blkproc, blockproc passes a block struct to the function rather than a plain matrix, so the pixel data is accessed as block_struct.data. The code would look something like:
fun = @(block_struct) regionprops(block_struct.data, 'centroid');
blocksize = [size(i,1) ceil(size(i,2)/3)];
b = blockproc(bw, blocksize, fun, 'TrimBorder', false);
One minor thing - I would recommend not using the variable name i. By default it represents the imaginary unit sqrt(-1) in MATLAB.
