I am processing image files with measured intensity, essentially extracting voxels of size 1x1x1 pixels. Together the image files form a volume. To avoid peak intensities, I would like to find a way to average over 3x3x3 voxels.
My problem is getting my head around the task, because the shape within the image is separated by zeros and other values. My first thought was a for loop with an if statement. These are the considerations I have made so far: MATLAB stores the volume as one long array, so with a simple for loop it should be easy to find a non-zero value and its adjacent values, and take the average over those values. The problem comes when I have to take the z dimension into account.
This is clearly not optimal, and I find it hard to account for the boundary effects.
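For concreteness, this is roughly the loop I had in mind (a sketch only, assuming the volume is stored in a variable vol and skipping the border voxels):
% naive 3 x 3 x 3 average with nested loops (slow; interior voxels only)
[nx, ny, nz] = size(vol);
out = zeros(nx, ny, nz);
for k = 2:nz-1
    for j = 2:ny-1
        for i = 2:nx-1
            block = vol(i-1:i+1, j-1:j+1, k-1:k+1);
            out(i,j,k) = mean(block(:));
        end
    end
end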
I hope I'm interpreting your question right: you want to find the average over a 3 x 3 x 3 voxel volume for each voxel in the input image, where each input voxel acts as the centre of the 3 x 3 x 3 volume to be averaged. If you have the option of using MATLAB's built-in functions, consider N-D convolution with convn. Don't use loops here; they will be notoriously slow. For convn, the first parameter is the 3D image and the second is a 3 x 3 x 3 kernel with all values equal to 1/27. You also have the option of specifying what happens along the border should your convolution kernel go beyond the limits of the input image. Usually you want an output image that's the same size as the input, so you may want to specify the 'same' flag as the third optional parameter. This averaging mechanism also assumes that the outer edges are zero-padded.
Therefore, supposing your image is stored in im, do something like this:
%// Create kernel of all 1/27 in a 3 x 3 x 3 matrix
kernel = ones(3,3,3);
kernel = kernel / numel(kernel);
%// Perform N-D convolution
out = convn(double(im), kernel, 'same'); %// Cast to double for precision
out = cast(out, class(im)); %// Recast back to original data type
Alternatively, if you have access to the Image Processing Toolbox, use imfilter instead. The difference between this and convn is that imfilter was written using the Intel Integrated Performance Primitives (IPP), so it will definitely be faster:
%// Create kernel of all 1/27 in a 3 x 3 x 3 matrix
kernel = ones(3,3,3);
kernel = kernel / numel(kernel);
%// Perform N-D convolution
out = imfilter(im, kernel);
The added bonus is that you aren't required to change the input type. imfilter automatically infers the type, does the processing while respecting the input image's original type, and returns an output of that same type. With convn, you must ensure that your data is floating point before using it.
Related
I am working with particle tracking in images in MATLAB, using the regionprops function. On the provided resource there is an example with circles:
stats = regionprops('table',bw,'Centroid',...
'MajorAxisLength','MinorAxisLength')
centers = stats.Centroid;
diameters = mean([stats.MajorAxisLength stats.MinorAxisLength],2);
radii = diameters/2;
In my MATLAB R2014b, the line centers = stats.Centroid; produces an undesired result: my stats.Centroid structure has 20 elements (each element is two numbers, the coordinates of the centre of a region). However, after this command, my variable centers is only a 1x2 matrix instead of the desired 20x2.
Screenshot attached.
I tried to work around this with different methods. The only solution I found is to do:
t = zeros(20,2);
for i = 1:20
    t(i,:) = stats(i).Centroid;
end
However, as we all know, loops are slow in MATLAB. Is there another method that takes advantage of MATLAB's matrix operations?
Doing stats.Centroid in fact gives you a comma-separated list of centroids, so MATLAB only assigns the first centroid if you write centers = stats.Centroid. What you must do is collect the centres into an array (i.e. [stats.Centroid]), then reshape when you're done.
Something like this should work for you:
centers = reshape([stats.Centroid], 2, []).';
What this does is read the centroids in as a 1 x 2M array, where M is the total number of blobs. Because MATLAB reshapes in column-major order, you specify the number of rows to be 2 and let MATLAB figure out the number of columns by itself. You then transpose the result to get the desired M x 2 arrangement.
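To see the column-major behaviour concretely, here is a tiny illustration with three hypothetical centroids:
c = [10 20, 30 40, 50 60];        %// comma-separated list collected: 1 x 6
centers = reshape(c, 2, []).';    %// -> [10 20; 30 40; 50 60]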
Minor Note
If you look at the Tips section of the regionprops documentation page - http://www.mathworks.com/help/images/ref/regionprops.html#buorh6l-1 - you will see that they surround stats.Area, the area of each blob, with [] brackets to ensure that the comma-separated list of values is collected into an array. This is not an accident: those brackets are there for exactly the reason described above.
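For example, assuming stats is a struct array from a regionprops call that requested 'Area', all blob areas can be gathered in one shot:
areas = [stats.Area];   %// comma-separated list collected into a 1 x M vector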
I have a greyscale image similar to the one below, obtained after some post-processing steps (image 0001). I would like a vector corresponding to the bottom of the lower bright strip (as depicted in image 0001b). I can use im2bw with various thresholds to achieve the vectors in image 0002 (the higher the threshold, the more the vector line tends to blip upwards; the lower the threshold, the more it tends to blip downwards). My idea was then to go through each vector, measure the arclength over some increment (maybe 100 pixels or so), choose the vector with the lowest arclength over that stretch, and add that 100-pixel stretch to the final vector, creating a Frankenstein-like vector from the straightest segments of the thresholded vectors. I should also mention that when there are multiple straightish/parallel vectors, the top one is the best fit.
First off, is there some better strategy I should be employing here to find that line in image 0001? (This needs to be fast, so a long-running fitting routine wouldn't work.) If my current Frankenstein's-monster solution is viable, any suggestions as to how best to go about it?
Thanks in advance
img = im2bw(img, 0.95); % or 0.85, 0.75, 0.65, 0.55 (renamed from "image" to avoid shadowing the built-in)
[rows, cols] = size(img);
vec = zeros(1, cols);
for c = 1:cols
    for r = 1:rows
        if img(r, c) == 1
            vec(c) = r; % keep the last (lowest) white pixel in each column
        end
    end
end
vec = fastsmooth(vec, 60, 20, 1); % fastsmooth is a File Exchange function
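In case it helps make the stitching idea concrete, here is a rough sketch of it (hypothetical names; vecs is assumed to be a cell array holding one smoothed vector per threshold):
seg = 100;                                   % segment length in pixels
final = zeros(1, numel(vecs{1}));
for s = 1:seg:numel(final)-seg+1
    idx = s:s+seg-1;
    % arclength of each candidate vector over this stretch
    alen = cellfun(@(v) sum(hypot(1, diff(v(idx)))), vecs);
    [~, best] = min(alen);                   % straightest = shortest arclength
    final(idx) = vecs{best}(idx);
end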
Here is a modified version of what I originally did. It works well on your images. If you want subpixel resolution, you can implement an active contour model with some fitting function.
files = dir('*.png');
filenames = {files.name};
for ifile=1:length(filenames)
%%
% read image
im0 = double(imread(filenames{ifile}));
%%
% remove background by subtracting a convolution with a mask
lobj=100;
convmask = ones(lobj,1)/lobj;
im=im0-conv2(im0,convmask,'same');
im(im<0)=0;
imagesc(im);colormap gray;axis image;
%%
% use Canny edge filter, allowing extremely weak edges to exist
bw=edge(im,'canny',[0.01,0.3]);
% use close operation on image to close gaps between lines
% the kernel is a flat rectangular so that it helps to connect horizontal
% gaps
se=strel('rectangle',[10,30]);
bw=imdilate(bw,se);
% thin the lines to be single pixel line
bw=bwmorph(bw,'thin',inf);
% connect H bridge
bw=bwmorph(bw,'bridge');
imagesc(bw);colormap gray;axis image;
%% smooth the image, find the decreasing region, and apply the mask
imtmp = imgaussfilt(im0,3);
imtmp = diff(imtmp);
imtmp = [imtmp(1,:);imtmp];
intensity_decrease_mask = imtmp < 0;
bw = bw & intensity_decrease_mask;
imagesc(bw);colormap gray;axis image;
%%
% find properties of the lines, and find the longest lines
cc=regionprops(bw,'Area','PixelList','Centroid','MajorAxisLength','PixelIdxList');
% now select any line that is longer than an eighth of the image width
cc=cc([cc.MajorAxisLength]>size(bw,2)/8);
%%
% select lines whose average intensity is above the gray level (150)
for i = 1:length(cc)
    cc(i).meanIntensity = mean(im0(sub2ind(size(im0), cc(i).PixelList(:,2), ...
        cc(i).PixelList(:,1))));
end
cc=cc([cc.meanIntensity]>150);
cnts=reshape([cc.Centroid],2,length(cc))';
%%
% calculate the minimum distance to the bottom right of each edge
for i = 1:length(cc)
    cc(i).distance2bottomright = sqrt(min((cc(i).PixelList(:,2)-size(im,1)).^2 ...
        + (cc(i).PixelList(:,1)-size(im,2)).^2));
end
% select the bottom edge
[~,minindex]=min([cc.distance2bottomright]);
bottomedge = cc(minindex);
%% clean up the lines a little bit
bwtmp = false(size(bw));
bwtmp(bottomedge.PixelIdxList)=1;
% find the end points to the most left and right
endpoints = bwmorph(bwtmp, 'endpoints');
[endy,endx] = find(endpoints);
[~,minind]=min(endx);
[~,maxind]=max(endx);
pos_most_left = [endx(minind),endy(minind)];
pos_most_right = [endx(maxind),endy(maxind)];
% select the shortest path between left and right
dists = bwdistgeodesic(bwtmp,pos_most_left(1),pos_most_left(2)) + ...
bwdistgeodesic(bwtmp,pos_most_right(1),pos_most_right(2));
dists(isnan(dists))=inf;
bwtmp = imregionalmin(dists);
bottomedge=regionprops(bwtmp,'PixelList');
%% plot the lines
imagesc(im0);colormap gray;axis image;hold on;axis off;
for i = 1:length(cc)
    plot(cc(i).PixelList(:,1), cc(i).PixelList(:,2), 'b', 'linewidth', 2); hold on;
end
plot(bottomedge.PixelList(:,1),bottomedge.PixelList(:,2),'r','linewidth',2);hold on;
print(gcf,num2str(ifile),'-djpeg');
% pause
end
I am not sure this answers your question directly, but I have a lot of experience fitting arrays (or matrices in my case) to 3D raster images. We were using relatively low-power machines (standard i7 processors, 32 GB RAM) and had to perform the fitting very quickly (under 30 seconds). We also had to validate the fit with a variety of parameters (and again, these were 3D rasters fit to a point-cloud matrix).
Anyway, the process we used was built around MATLAB's fminsearch function. Documentation can be found here: http://www.mathworks.com/help/optim/functionlist.html
We would start with a plain point cloud and perform successive manipulations on a per-pixel basis to adjust the point cloud to the raster, essentially walking through each pixel in the raster to produce the lowest offset between the point cloud and the raster.
I will try to search for some code this afternoon and update my answer, but I might explore this option for your case. I would imagine you could fit a curve to certain pixels (e.g. white pixels) both rapidly and accurately by setting up an optimization function.
I also could help more if I understood your objective better. Are you just trying to fit a line to the high-albedo/white areas?
By way of example: I can fit a 3D point cloud to the following image by starting with a standard point cloud, the 3D raster, and a minimization function (in this case just the RMS error of each individual point along the z axis). Throw an fmin function on there, and in a few seconds you get a modified point cloud that fits much better than the standard one.
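As a minimal sketch of this kind of setup (hypothetical: it assumes a binary image bw and fits a straight line y = p(1)*x + p(2) to its white pixels by minimising RMS error):
[y, x] = find(bw);                            % coordinates of the white pixels
rmsErr = @(p) sqrt(mean((y - (p(1)*x + p(2))).^2));
p0 = [0, mean(y)];                            % initial guess: a horizontal line
pBest = fminsearch(rmsErr, p0);               % simplex search over [slope, offset]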
I have to remove Gaussian noise from this image (before that, I had to filter the image and add the noise). Then I have to apply the function o, and my grade is based on how low the result of this function is. I keep trying different things, but I can't remove the noise well enough to get a good grade. Any help, please?
img = imread('liftingbody.png');
img = double(img)/255;                          % normalise to [0, 1]
maska1 = [1 1 1; 1 5 1; 1 1 1]/13;              % smoothing mask ("maska" = mask)
odfiltrowany = imfilter(img, maska1);           % filtered image
zaszumiony = imnoise(odfiltrowany, 'gaussian'); % noisy image
nowy = wiener2(zaszumiony);                     % Wiener denoising
nowy4 = medfilt2(nowy);                         % median filtering
o = 1/512*sqrt(sum(sum((img-nowy4).^2)));       % RMS-style score: sqrt of summed
                                                % squared pixel differences / 512
subplot(311); imshow(img);
subplot(312); imshow(zaszumiony);
subplot(313); imshow(nowy);
Try convolving a Gaussian filter with your noisy image to remove the Gaussian noise, like below:
g = fspecial('gaussian', [3 3], 1.5);
nowx = conv2(zaszumiony, g, 'same') / sum(g(:));
It should reduce your o function somewhat.
Try playing around with the strength of the filter (i.e. the 1.5 value) and the size of the kernel (i.e. [3 3] value) to reduce the noise to a minimum.
Adding to @ALM865's answer, you can also use imfilter. In fact, this is the recommended function for images, as imfilter has optimizations in place specifically for images; conv2 is the more general function for any 2D signal.
I have also answered how to choose the standard deviation, and ultimately the size, of your Gaussian filter/kernel here: By which measures should I set the size of my Gaussian filter in MATLAB?
In essence, once you choose the standard deviation you want, you build a (floor(6*sigma) + 1) x (floor(6*sigma) + 1) Gaussian kernel to use in your filtering operation. Assuming sigma = 2, you would get a 13 x 13 kernel. As ALM865 has said, you can create a Gaussian kernel using fspecial: you specify the 'gaussian' flag, followed by the size of the kernel and then the standard deviation. As such:
sigma = 2;
width = floor(6*sigma) + 1;
kernel = fspecial('gaussian', [width width], sigma);
out = imfilter(zaszumiony, kernel, 'replicate');
imfilter takes in the image you want to filter, the convolution kernel you want to use to filter the image, and an optional flag that specifies what happens along the image pixel borders when the kernel doesn't fit completely inside the image. 'replicate' means that it simply copies the pixels along the borders, thus replicating them. There are other options, such as padding with a value (usually zero), circular padding and symmetric padding.
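For reference, those border options look like this (a quick sketch reusing the kernel from above):
outZero = imfilter(zaszumiony, kernel, 0);           % pad with zeros (the default)
outRep  = imfilter(zaszumiony, kernel, 'replicate'); % copy the border pixels outward
outCirc = imfilter(zaszumiony, kernel, 'circular');  % wrap around the image
outSym  = imfilter(zaszumiony, kernel, 'symmetric'); % mirror across the border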
Play around with the standard deviation until you get what you believe is a good result.
I am using Laplacian of Gaussian for edge detection using a combination of what is described in http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm and http://wwwmath.tau.ac.il/~turkel/notes/Maini.pdf
Simply put, I'm using this equation:
double[][] kernel = new double[kernelSize][kernelSize];
for (int i = -(kernelSize/2); i <= kernelSize/2; i++)
{
    for (int j = -(kernelSize/2); j <= kernelSize/2; j++)
    {
        double r2 = i*i + j*j;                    // squared distance from the centre
        double L_xy = -1/(Math.PI * Math.pow(sigma, 4))
                * (1 - r2/(2*sigma*sigma))
                * Math.exp(-r2/(2*sigma*sigma));
        L_xy *= 426.3;                            // scaling constant from the referenced notes
        kernel[i + kernelSize/2][j + kernelSize/2] = L_xy;
    }
}
storing each L_xy value to build up the LoG kernel.
The problem is, when the image size is larger, application of the same kernel is making the filter more sensitive to noise. The edge sharpness is also not the same.
Let me put an example here...
Suppose we've got this image:
Using a value of sigma = 0.9 and a kernel size of 5 x 5 matrix on a 480 × 264 pixel version of this image, we get the following output:
However, if we use the same values on a 1920 × 1080 pixels version of this image (same sigma value and kernel size), we get something like this:
[Both the images are scaled down version of an even larger image. The scaling down was done using a photo editor, which means the data contained in the images are not exactly similar. But, at least, they should be very near.]
Given that the larger image is roughly 4 times the size of the smaller one, I also tried scaling sigma by a factor of 4 (sigma *= 4), and the output was... you guessed it, a black canvas.
Could you please help me understand how to implement a LoG edge detector that finds the same features in an input signal even if the incoming signal is scaled up or down (the scaling factor will be given)?
Looking at your images, I suppose you are working in 24-bit RGB. When you increase your sigma, the response of your filter weakens accordingly, so what you get in the larger image with a larger kernel are values close to zero, which are either truncated or so close to zero that your display cannot distinguish them.
To make differentials across different scales comparable, you should use the scale-space differential operator (Lindeberg et al.):

\mathrm{LoG}(x, y; \sigma) = \sigma^{\gamma} \, \nabla^{2} \left( G_{\sigma} * L \right)(x, y)

Essentially, the differential operator is applied to the Gaussian kernel function G_{\sigma} (or alternatively to the convolution result; it is just a scalar multiplier either way), and the outcome is scaled by \sigma^{\gamma}. Here L is the input image and LoG is the Laplacian-of-Gaussian image. When the order of the differential is 2, \gamma is typically set to 2.
Then you should get quite similar magnitudes in both images.
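As a quick illustration of the normalisation (a MATLAB sketch rather than Java, just to keep it short; it assumes a grayscale double image im and the Image Processing Toolbox):
sigma = 3.6;                                  % e.g. the original 0.9 scaled by 4
hsize = 2*ceil(3*sigma) + 1;                  % kernel size large enough for this sigma
logKernel = fspecial('log', hsize, sigma);    % plain LoG kernel
normKernel = sigma^2 * logKernel;             % sigma^gamma normalisation with gamma = 2
response = imfilter(im, normKernel, 'replicate');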
Sources:
[1] Lindeberg, "Scale-Space Theory in Computer Vision", 1993
[2] Frangi et al., "Multiscale Vessel Enhancement Filtering", 1998
This may or may not be a very stupid question, so I do apologise, but I haven't come across this in any books or tutorials yet. Also, I guess it can apply to any language...
Assume you create a window of size: 640x480 and an object/shape inside it of size 32x32 and you're able to move the shape around the window with keyboard inputs.
Does it matter what type (int, float, ...) you use to control the movement of the shape? Obviously you cannot draw halfway through a pixel, but if you move the shape by 0.1f (for example with a glTranslate call), as opposed to moving it by an int of 1, what happens? Does it move the rendered shape by 1/10 of a pixel?
I hope I've explained that well enough not to be laughed at.
I only ask this because it can affect the precision of collision detection and other functions of a program or potential game.
glTranslate produces a translation by (x, y, z). The current matrix (see glMatrixMode) is multiplied by this translation matrix, with the product replacing the current matrix, as if glMultMatrix were called with the following matrix as its argument:

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
If the matrix mode is either GL_MODELVIEW or GL_PROJECTION, all objects drawn after a call to glTranslate are translated.
Use glPushMatrix and glPopMatrix to save and restore the untranslated coordinate system.
This means that glTranslate applies a translation to the current matrix; on screen you cannot use half a pixel. glTranslate accepts either doubles or floats, so if you want to move the shape by 1 in x, y, or z, just give the function a float 1 or a double 1 as the argument.
http://www.opengl.org/sdk/docs/man2/xhtml/glTranslate.xml
The most important reason for using floats or doubles to represent position is the background calculation. If you keep calculating your position with ints, not only do you probably need conversion steps to get back to ints, you will also lose data every so many steps.
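A tiny sketch of that data loss (a runnable MATLAB snippet for brevity, since this thread is language-agnostic):
posF = 0; posI = int32(0);
for step = 1:10
    posF = posF + 0.1;                % float position reaches 1.0 after 10 steps
    posI = posI + int32(round(0.1));  % 0.1 rounds to 0, so the int never moves
end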
If you want to animate your sprite with anything less than 1 pixel of movement per update then yes, you need to use floating point, otherwise you will get no movement. Your drawing function will most likely round to the nearest integer, so it's probably not relevant for drawing itself; however, you can of course draw to sub-pixel accuracy!