TYPE argument in Mat - image

I have a color image and I want to copy only the gray value of every pixel (the V channel in the HSV system) into another matrix, to create a grayscale image.
So I create a matrix with V.create(image.rows,image.cols,CV_8UC1) and I get a grayscale image. Then I wondered what would happen if I replaced that with V.create(image.rows,image.cols,CV_8UC3). I expected the same result, because I assign values into the third channel only, even though the matrix is 8UC3. But what I got is a grayscale image with the full height and only 1/3 of the width; the remaining 2/3 are blank. I am curious why?

You can investigate how matrices and images are stored in memory on this documentation page. It explains pretty well how pixels are laid out under the hood and how they can be read.
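In short, a Mat with n channels stores its pixels interleaved, so each row of a CV_8UC3 image holds cols * 3 bytes; writing only cols single-channel values into it fills just the first third of every row, which would explain the "full height, 1/3 width" result you describe. A quick way to poke at the layout (a Python/NumPy sketch, since cv2 exposes a Mat as a NumPy array; the file name is a placeholder):
import cv2
import numpy as np
img = cv2.imread("input.png")               # placeholder file name, loads as 8UC3 (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # still three interleaved channels per row
print(hsv.shape)                            # (rows, cols, 3): each row is cols * 3 bytes
v = hsv[:, :, 2].copy()                     # the V channel on its own is a proper 8UC1 image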


Applying an image as a mask in matlab

I am a new user of image processing with Matlab. My first aim is to apply the method from the article below and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: the masks are defined in Figure 7, but I couldn't get hold of the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to do it in Paint =) Is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
First of all, you should know that this paper was published on arXiv. When papers are only published on arXiv, it is always a good idea to find out more about the author and/or the university behind the paper.
TRUST me on that: you do not need to waste your time on this paper.
I understand what you want, but getting the mask by taking a screenshot is not a good idea. The pixel values you obtain from a screenshot may not be the same as the original values, and the zoom may change the size, so you need to make sure the sizes are the same.
If you still want to try: take the screenshot and paste the image.
Crop the mask.
Convert RGB to grayscale.
Threshold the grayscale image to get the binary mask.
If you save the image as JPEG, distortions around the high-frequency edges will change the edge shapes.

How to find yellow objects on given picture?

I have this picture:
(This is just a subimage of a bigger image, but only this part matters to me.) I need an algorithm for finding all the yellow objects in the image and, among them, the object that contains the most yellow points. This is just one picture out of thousands of similar pictures with more or less the same yellow objects. What is the way to do this? I found that a scanline algorithm is good for this, but I haven't found an example that would help me. If you have some ideas or even an algorithm, that would be perfect. The colored lines are not important; I just drew them as the border within which I need to find the yellow objects.
Thanks a lot for any answers.
There are two basic steps:
Thresholding: Generate an array of yellow and not-yellow pixels. If the images you're working with are all like the example you provided, this should be very easy, but try adaptive thresholding if you have to deal with varying shades and hues. Store, e.g., a value of -1 for pixels that are yellow, and 0 everywhere else.
Segmentation: Initialize an ID value to 1. Scan every pixel of the thresholded image. When you encounter a pixel with a value of -1 (i.e., a yellow pixel), use a flood fill routine to write the ID value into this pixel and all the yellow pixels connected to it. Before the flood fill routine exits, you can store information such as the number of pixels it found and the average X and Y coordinates in an array indexed by the ID value. Then increment the ID value and resume scanning until you've covered the entire image.
Then search the data generated by the flood fill routine to find which yellow areas were the largest, and where they were located.
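A rough OpenCV/Python sketch of both steps (the file name and the yellow HSV range are assumptions you would tune for your own images); cv2.connectedComponentsWithStats does the same job as the scan-and-flood-fill loop described above:
import cv2
import numpy as np
img = cv2.imread("picture.png")                            # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# 1. Thresholding: keep only the "yellow" pixels (range is a rough assumption)
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
# 2. Segmentation: label connected yellow regions and collect area/centroid stats
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
# label 0 is the background; pick the component with the largest pixel count
# (assumes at least one yellow object was found)
areas = stats[1:, cv2.CC_STAT_AREA]
biggest = 1 + int(np.argmax(areas))
print("largest yellow object:", areas[biggest - 1], "pixels, centred at", centroids[biggest])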
Here's a program that does something quite similar with red objects instead of yellow ones, and then draws circles around them.
It looks like OpenCV has blob detection options. I found this article showing how to detect blobs using the greyscale value, which you should be able to change to use your target color instead. It also mentions using the blob area as a threshold, so you should be able to use that to find the largest object in the image.
http://www.learnopencv.com/blob-detection-using-opencv-python-c/
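If you go the blob-detector route, a minimal sketch could look like this (the HSV range and area limits are assumptions to tune; the detector is run on the binary yellow mask so the blobs are bright on a dark background):
import cv2
img = cv2.imread("picture.png")                            # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))    # assumed yellow range
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255              # look for bright blobs in the binary mask
params.filterByArea = True
params.minArea = 50                 # assumed lower bound, tune for your images
params.filterByInertia = False      # don't reject elongated blobs
params.filterByConvexity = False    # don't reject blobs with ragged outlines
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(mask)
# keypoint.size is roughly the blob diameter, so the biggest blob is
largest = max(keypoints, key=lambda kp: kp.size) if keypoints else None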
One approach would be to generate a quad-tree of the image. Using this quad-tree it is fairly simple to find the connected pieces that form a blob (even one with holes) and to calculate their sizes.

Converting to 8-bit image causes white spots where black was. Why is this?

Img is a NumPy array with dtype float64. When I run this code:
Img2 = np.array(Img, np.uint8)
the background of my images turns white. How can I avoid this and still get an 8-bit image?
Edit:
Sure, I can give more info. The single image is compiled from a stack of 400 images. They are each coming from an .avi video file, and each image is converted into a NumPy array like this:
gray_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
A more complicated operation is performed on this whole stack, but does not involve creating new images. It's simply performing calculations on each 1D array to yield a single pixel.
The interpolation is most likely linear (the default when plotting images with matplotlib). The images were saved as .PNGs.
You are probably seeing overflow. If you cast 257 to np.uint8, you get 1. According to a quick search, .avi files contain images with a color depth of 15-24 bit. When you cast such values to np.uint8, white regions get darkened and (if a normalization takes place somewhere) dark regions can turn white (-5 -> 251). For the regions that become bright, you could check whether you have negative pixel values in the original image Img.
The docs say that sometimes you have to do some scaling to get a proper cast, and that you should rather work at a higher depth whenever possible to avoid artefacts.
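You can verify the wrap-around behaviour in a Python shell (just a demonstration):
import numpy as np
# 257 wraps to 1 and -5 wraps to 251 when reduced to 8 bit
print(np.array([257, -5]).astype(np.uint8))   # -> [  1 251]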
The solution is either to work at a higher depth, i.e. cast to np.uint16 or np.uint32, or to scale the pixel values into the 8-bit range before reducing the depth, i.e. with Img2 being a float copy of Img:
# make sure that values are between 0 and 255, i.e. within the 8-bit range
Img2 *= 255.0 / Img2.max()
# cast to 8 bit
Img2 = np.array(Img2, np.uint8)
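Since the frames already go through OpenCV, another option (just a sketch, assuming Img holds the float64 result) is to let cv2.normalize do the min-max scaling before the cast:
import cv2
import numpy as np
# scale Img into [0, 255] first, then reduce the depth to 8 bit
Img8 = cv2.normalize(Img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)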

resample an image from pixels to millimeters

I have an image (logical values), like this
I need to get this image resampled from pixel to mm or cm; this is the code I use to get the resampling:
function [ Ires ] = imresample3( I, pixDim )
[r,c]=size(I);
x=1:1:c;
y=1:1:r;
[X,Y]=meshgrid(x,y);
rn=r*pixDim;
cn=c*pixDim;
xNew=1:pixDim:cn;
yNew=1:pixDim:rn;
[Xnew,Ynew]=meshgrid(xNew,yNew);
Id=double(I);
Ires=interp2(X,Y,Id,Xnew,Ynew);
end
What I get is a black image. I suspect that this code does something that is not what I have in mind: it seems to take only the upper-left part of the image.
What I want, instead, is to have the same image on a mm/cm scale: I expect every white pixel to be mapped from its original position to the new position (in mm/cm); what happens is certainly not what I expect.
I'm not sure that interp2 is the right command to use.
I don't want to resize the image, I just want to go from pixel world to mm/cm world.
pixDim is of course the size of an image pixel, obtained by dividing the height of the ear in cm by the height of the ear in pixels (on average it is 0.019 cm).
Any ideas?
EDIT: I was quite sure the code made no sense, but someone told me to do it that way... Anyway, if I have two edge images of ears, I first need to scale both to their real dimensions and then perform some operations on them. What I mean by "real dimensions" is that if one ear measures 6.5x3.5 cm and the other 6x3.2 cm, I need to perform the operations at these dimensions.
I don't get how I can move from pixel dimensions to cm dimensions BEFORE doing the operations.
I want to move from one world to the other because I want to get rid of the capturing distance (I suppose that if one picture of an ear is taken from close up and the other from far away, they will have different sizes in pixels).
Am I correct? Is there a way to do it? I thought I could plot the ear with scaled axes, but then I suppose I cannot subtract one image from the other, right?
Matlab does not attach units to data. To apply your factor of 0.019 cm/pixel you would have to scale by a factor of 0.019 to get a 1 cm grid, but this would cause any detail smaller than 1 cm to be lost.
Best practice is to display the data using multiple axes, one for cm and one for pixels. It's explained here: http://www.mathworks.de/de/help/matlab/creating_plots/using-multiple-x-and-y-axes.html
Any function processing the data should either be independent of the scale or take the scale factor as an input argument; anything else is a sign of serious algorithmic issues.

Transforming raw pixel values using rescale slope and rescale intercept in DICOM

I used the solution in this post, Window width and center calculation of Dicom Image, to transform the raw pixel values. It works well for most images, but I faced a problem with some of them. Those images have a pixel value of "24", a rescale slope of "1.0" and a rescale intercept of "-1024".
When I apply the solution mentioned above, I get a negative new pixel value (-1000).
I can't find this new pixel value in the lookup table created from the window level and window width, because the lookup table has only positive values (0 to 65536). Please help me solve this problem.
You are probably dealing with CT images. The RescaleIntercept tag for CTs is usually set to -1024. The value of -1000 you obtain makes perfect sense: it corresponds to air in Hounsfield units (as Anders said). Now if you want to visualize the image, you have to apply a transfer function that maps the HU scale to, for instance, RGB.
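A small sketch of the full chain (stored value -> Hounsfield units -> display value); the function name and the window center/width defaults are placeholders you would replace with the values from the dataset or the window you want to look at:
import numpy as np
def raw_to_display(raw, slope=1.0, intercept=-1024.0, center=40.0, width=400.0):
    # modality transform: stored value -> Hounsfield units (can be negative)
    hu = raw * slope + intercept
    # windowing: map the HU range of interest linearly onto 0..255
    low = center - width / 2.0
    disp = (hu - low) / width * 255.0
    return np.clip(disp, 0, 255).astype(np.uint8)
# a stored value of 24 with slope 1 and intercept -1024 gives -1000 HU (air),
# which simply maps to 0 (black) under a soft-tissue window
print(raw_to_display(np.array([24.0])))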
