I'm trying to segment part of an image in MATLAB. I'm working with CT images and I would like to segment the teeth that contain metal, because these metal artifacts compromise the image quality. Can someone help me?
1. What I want to segment
2. Original image
As a starting point, you can use a threshold that you set manually based on the histogram.
imhist(I);                    % inspect the histogram to pick a threshold
                              % (I is the input slice; avoid naming it 'image',
                              % which shadows a MATLAB built-in)
threshold = 120;              % set manually from the histogram
binaryImage = I > threshold;  % metal appears much brighter than tissue
imshow(binaryImage)
Next, find the boundary of the binary image using a function such as bwtraceboundary. Finally, combine the original image and the boundary to produce the final overlay.
I think the hardest part will be choosing the threshold.
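Putting those pieces together, here is a minimal sketch of the whole idea; the filename ct.png and the way the starting pixel is found are my assumptions, and the threshold should be tuned from the histogram:

I = imread('ct.png');                 % grayscale CT slice (assumed filename)
threshold = 120;                      % picked manually from imhist(I)
binaryImage = I > threshold;          % metal appears much brighter than tissue
[row, col] = find(binaryImage, 1);    % first white pixel; it lies on a boundary
boundary = bwtraceboundary(binaryImage, [row, col], 'N');
imshow(I);
hold on;
plot(boundary(:,2), boundary(:,1), 'r', 'LineWidth', 2);  % overlay the contour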
I am currently trying to remove the noise in this image.
This image was obtained using cv2 HSV thresholding. Unfortunately, there are a lot of stray pixels and fragments that need to be filtered out. I've already tried OpenCV's fastNlMeansDenoisingColored function, but it did not work. Is there anything else I could try?
You can try multiple things; I would try them in this order:
Blur the image before calculating the threshold
Change the threshold value
Erode & dilate the image before calculating the threshold
Eroding & dilating afterwards is not as good:
Or go all in: use connectedComponentsWithStats and remove all components with a small area (a MATLAB sketch of the same idea follows below).
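This question is about OpenCV, but since the rest of this page uses MATLAB, here is a rough MATLAB sketch of that last idea; bwareaopen is the closest analogue to filtering connectedComponentsWithStats output by area. The filename mask.png and the 50-pixel cutoff are assumptions:

bw = imread('mask.png') > 0;       % binary mask from the thresholding step (assumed file)
clean = bwareaopen(bw, 50);        % remove connected components smaller than 50 px
imshowpair(bw, clean, 'montage');  % compare before and after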
I am new to image processing in MATLAB. My first aim is to apply the article's method and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: in Figure 7, masks are defined, but I couldn't get hold of the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to Lena, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
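For completeness, here is a minimal sketch of that fix; resizing the mask to Lena's dimensions is my assumption of what the easy fix looks like:

Mask = imresize(Mask, size(LenaGray), 'nearest');  % 'nearest' keeps the mask binary
LenaAndMask = LenaGray .* Mask;                    % apply the resized mask
imshow(LenaAndMask);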
First of all, you should know that this paper was published on arXiv. When papers are published on arXiv, it is always a good idea to learn more about the author and/or the university that published the paper.
TRUST me on that: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask from a print screen. The pixel values captured by a print screen may not be the same as the original values, and the zoom may change the size, so you need to be sure the sizes are the same.
That said, if you do it anyway:
1. Take the print screen and paste the image.
2. Crop the mask.
3. Convert RGB to grayscale.
4. Threshold the grayscale image to get the binary mask.
Note that if you save the image as JPEG, distortions around the high-frequency edges will change the edge shape.
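A quick sketch of those steps, assuming the pasted screenshot was saved (as PNG) to screenshot.png and that 0.5 is a workable threshold (both assumptions):

S = imread('screenshot.png');   % the pasted print screen (assumed filename)
S = imcrop(S);                  % crop out the mask region interactively
G = rgb2gray(S);                % convert RGB to grayscale
Mask = im2bw(G, 0.5);           % threshold to get the binary mask
imshow(Mask);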
I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle rather than equate one image to the other, though that would be okay if it makes things easier). Would anyone have suggestions on how to go about this? I'm not really familiar with image analysis in MATLAB, so please bear with my follow-up questions should they arise. There is already a question about equalizing luminance, brightness and contrast for a set of images on here, but the code doesn't make much sense to me (due to my lack of experience working with images in MATLAB).
Currently I use GIMP to manipulate the images, but that is time-consuming with 160 of them, and going by subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm tries its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read up about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better option to use. You call it almost the same way as histeq, except you don't have to compute the histogram manually; you just specify the actual image to match:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from Stack Overflow, perform histogram matching so that the first image matches the intensity distribution of the second, and show the results in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');  % image to adjust
im2 = imread('http://i.stack.imgur.com/4fQPq.png');  % reference image
out = imhistmatch(im1, im2);  % match im1's intensity distribution to im2's
figure;
subplot(1,3,1); imshow(im1);  % original
subplot(1,3,2); imshow(im2);  % reference
subplot(1,3,3); imshow(out);  % matched result
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source, trying to match the second image to the first. Just swap the two parameters of imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this image as the reference, so that the other images try to match their distribution of intensities to it. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
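In the meantime, here is a hedged sketch assuming the images are PNG files in a folder called images, with ref.png as the chosen reference (all of these names are assumptions):

ref = imread('ref.png');                       % representative image (assumed name)
files = dir(fullfile('images', '*.png'));      % the other images (assumed folder)
if ~exist('matched', 'dir'), mkdir('matched'); end
for k = 1:numel(files)
    im = imread(fullfile('images', files(k).name));
    out = imhistmatch(im, ref);                % match intensities to the reference
    imwrite(out, fullfile('matched', files(k).name));
end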
Let's say I have an image like this one:
After some quick messing around, I got a binary image of the axe, like this:
What is the easiest/fastest way to get the contour of that image using GNU/Octave?
In Octave you can use bwboundaries (but I will welcome patches that implement bwtraceboundary).
octave:1> pkg load image;
octave:2> bw = logical (imread ("http://i.stack.imgur.com/BoQPe.jpg"));
octave:3> boundaries = bwboundaries (bw);
octave:4> boundaries = cell2mat (boundaries);
octave:5> imshow (bw);
octave:6> hold on
octave:7> plot (boundaries(:,2), boundaries(:,1), '.g');
There are a couple of differences here from @Benoit_11's answer:
Here we get the boundaries of all the objects in the image. bwboundaries will also accept coordinates as input argument to pick only a single object, but I believe that work should be done by further processing your mask (a sketch follows after this list), since the extra objects may be due to the JPEG artifacts.
Because we get boundaries for all objects, you get a cell array with the coordinates. This is why we plot the boundaries with dots (the default is lines, which would go all over the image as the plot jumps from one object to another). Also, it is not documented whether the coordinates given trace a continuous boundary, so you should not assume it (again, why we plot dots).
The image that is read seems to have some artifacts; I would guess they come from saving as JPEG.
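As a concrete example of that further processing (my suggestion; the 100-pixel cutoff is an assumption), you can drop the small speckle objects with bwareaopen before tracing:

octave:8> bw = bwareaopen (bw, 100);       % drop components smaller than 100 px
octave:9> boundaries = bwboundaries (bw);  % now mostly just the axe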
You can use bwtraceboundary from the Image package. Here is the MATLAB implementation, but it should be pretty similar in Octave:
First estimate a starting pixel at which to look for the boundary, then trace and plot it (BW is the binary image):
dim = size(BW);
col = round(dim(2)/2) - 90;                % a column known to cross the object
row = min(find(BW(:,col)));                % topmost white pixel in that column
boundary = bwtraceboundary(BW, [row, col], 'N');  % trace, searching north first
imshow(BW)
hold on;
plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 3);
Output:
I'm currently working with MRI images, and each dataset consists of a series of images. All I need to do is segment part of the moving image(s), based on details from a provided fixed image, strictly by using an image registration method.
I have tried some of the available code and done some tweaking, but all I got was a moving image warped to match features of the fixed image, which was correct but not what I expected.
To help with the idea, here are some of those MRI images1:
Fixed image:
Moving image:
The plan is to segment only the total area (quadriceps, inner and outer bone sections) of the moving image as per details from the fixed image, i.e. morphologically warp the boundary of the moving image according to the fixed image boundary.
Any idea/suggestions as to how this could be done?
1. As a new user I'm unable to post/attach more than 2 links/images but do let me know should you need further images.
'All I need to do is to segment part of the moving image(s)': this is certainly not a trivial thing to do. It is called segmentation by deformable models, and there is a lot of literature on the subject. Also, your fixed image is very different from your moving image, which doesn't help.
Here are a couple of ideas to start, but you will probably need to go into more details for your application.
I1 = imread('fixed.png');
I2 = imread('moving.png');
model = im2bw(I1, 0.54);  % manual threshold, found by fiddling
imshowpair(I1, model);    % overlay the mask on the fixed image
This is a simple thresholding segmentation to isolate that blob in the middle of the image. The value 0.54 was obtained by fiddling; you can certainly do a better job at segmenting your fixed image.
Here is the segmented fixed image, purple is inside, green is outside.
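If you'd rather not hand-tune the 0.54, one alternative (my suggestion, not from the original answer) is to let Otsu's method pick the threshold:

model = im2bw(I1, graythresh(I1));  % automatic threshold via Otsu's method
imshowpair(I1, model);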
Now, let's deform this mask to fit the moving image:
masked = activecontour(I2, model, 20, 'Chan-Vese');  % deform the mask onto I2
imshowpair(I2, masked);
Result:
You can automate this in a loop over all your images, deforming each mask and carrying it to the next frame; a sketch follows below. Try different parameters of activecontour as well.
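A sketch of that loop, assuming the series is loaded into a cell array frames (an assumption about how you store the images) and model is the initial mask from above:

mask = model;                          % start from the fixed-image mask
masks = cell(1, numel(frames));
for k = 1:numel(frames)
    mask = activecontour(frames{k}, mask, 20, 'Chan-Vese');
    masks{k} = mask;                   % the deformed mask seeds the next frame
end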
Edit: here is another way I can think of.
In the following code, Istart is the original fixed image, Mask is the segmented region on that image (the one you called 'fixed' in your question), and Istep is the moving image.
I first turned the segmented region into a binary mask; this is not strictly necessary:
t=graythresh(Mask);
BWmask=im2bw(Mask, t);
Let's display the masked original image:
imshowpair(BWmask, Istart)
The next step was to compute an intensity-based registration between the start and step images:
[optimizer, metric] = imregconfig('monomodal');
optimizer.MaximumIterations = 300;
Tform=imregtform(Istart, Istep, 'affine', optimizer, metric);
And warp the mask according to this transformation:
WarpedMask = imwarp(BWmask, Tform, 'cubic', 'OutputView', imref2d(size(Istart)));
Now let's have a look at the result:
imshowpair(WarpedMask, Istep);
It's not perfect, but it is a start. I think your main issue is that your mask contains elements that are different from each other (that middle blob vs. the darker soft tissue in the middle). If I were you, I would try to segment these structures separately.
Good luck!