Let me explain.
My program takes an x-ray in the x-ray detector's ".his" format, whose values go from 0 to 65535, and from those values it can tell you how much of a certain material is in each pixel: "4 cm of aluminum", for example.
It does that for every pixel, so you end up with a matrix that tells you how much of a given material is present, and you can visualize that matrix and see, say, only fat tissue in an image without the bones blocking your view. It's very cool, I know.
What I want to do now is save that matrix as an image so that I can analyse and modify it with programs like ImageJ, but I also want the pixel values to be the original values: I want to see "4" and know that pixel shows 4 cm of lung or whatever material I'm working with.
Is that possible? My professor seems to think it is, but he's not sure how to do it, and figuring that out is my job anyway.
It should be possible, since ImageJ can open the ".his" format and do exactly that: I can see the values from 0 to 65535, provided I tell ImageJ that the image is 16-bit unsigned and set the other properties of that kind of file. But I wouldn't know how to do that for a Matlab variable.
Thanks a lot.
So if I understand correctly, you want to save an image that also contains arbitrary metadata on every pixel (in this case an integer).
If you use an image format like PNG you could encode that extra data into the alpha channel (which would be nearly imperceptible with a value like 4/255 away from fully opaque), but you'd have to be careful when editing the image that you don't change the alpha channel by mistake.
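For illustration, a sketch of that alpha-channel idea, assuming the per-pixel integers fit in 0..255 and that your matrix is a double array called thicknessMap (a hypothetical name):

% Sketch: stash a small integer per pixel in the alpha channel of a PNG.
% Assumes values fit in 0..255; thicknessMap is a hypothetical variable.
view   = uint8(255 * mat2gray(thicknessMap));  % the visible grayscale image
alphaC = (255 - round(thicknessMap)) / 255;    % e.g. a value of 4 -> alpha 251/255
imwrite(view, 'encoded.png', 'Alpha', alphaC);
% Recover later with [img, ~, a] = imread('encoded.png') and invert the mapping.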
However, this is rather finicky and would be cumbersome to implement in Matlab.
Instead I would suggest simply saving a standard image and a text file (or binary file) alongside it with the data you want.
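A minimal sketch of that approach, again assuming your material matrix is a double array called thicknessMap (hypothetical name), with values in cm:

imwrite(mat2gray(thicknessMap), 'thickness_view.png');  % scaled copy, for viewing only
save('thickness_values.mat', 'thicknessMap');           % exact values, Matlab-native
dlmwrite('thickness_values.txt', thicknessMap);         % exact values, plain text
% If the values happen to be integers in 0..65535, you can instead write
% them losslessly into a 16-bit PNG that ImageJ will read back verbatim:
% imwrite(uint16(thicknessMap), 'thickness_16bit.png');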
Related
I have a .png image that has been created from some grayscale numbers with Matlab's imagesc tool, using the standard colormap.
For some reason, I am unable to recover the raw data. Is there a way of recovering the raw data from the image? I tried rgb2gray, which more or less worked, but if I plug the resulting image back into imagesc, it gives me a slightly different result. Also, the pixel with the highest intensity differs between the two images.
So, to clarify: I would love to know how Matlab applies the RGB colormap to the grayscale values when using the standard colormap.
This is the image we are talking about:
http://imgur.com/qFsGrWw.png
Thank you!
No, you will not get the right data if you are using the standard colormap, or jet.
Generally, it's a very bad idea to try to reverse engineer plots, as they never contain the entirety of the information. This is true in general, but even more so if you use colormaps that do not change uniformly with the data. The range of data values mapped to blue in jet is massively bigger than the range mapped to orange or any other color. The color changes are non-linear in the data changes, and this will make you lose a lot of resolution. You may know what value orange corresponds to, but blue will cover a very wide range of possible values.
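To make the mapping concrete, here is roughly what imagesc does when applying a colormap (a conceptual sketch, not Matlab's actual internals):

% Conceptual sketch of how a colormap is applied: data is scaled linearly
% onto colormap indices, and each index selects an RGB row.
data = peaks(100);                         % example grayscale data
cmap = jet(256);                           % 256-by-3 matrix of RGB rows
idx  = 1 + round((size(cmap,1) - 1) * ...
       (data - min(data(:))) / (max(data(:)) - min(data(:))));
rgb  = ind2rgb(idx, cmap);                 % the RGB image that gets shown
% Inverting this means finding, for each RGB pixel, the nearest row of
% cmap. Because jet is non-linear in the data, many distinct data values
% map to nearly identical colors, so resolution is irretrievably lost.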
In short:
Trying to get data from a representation of data (i.e. plots) is a terrible idea
jet is a terrible idea
I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle rather than equating one image to another, though that would be okay if it makes things easier). Would anyone have suggestions on how to go about this? I'm not really familiar with image analysis in Matlab, so please bear with my follow-up questions should they arise. There is already a question about equalizing luminance, brightness and contrast for a set of images on here, but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently I use Gimp to manipulate images, but it's time-consuming with 160 images, and going by subjective eye judgment alone isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read up about it in my previous answer.
In effect, the distributions of intensities of the two images should end up roughly the same. To take advantage of this with histeq, you can pass an additional parameter that specifies the target histogram; the input image will then be matched to that target histogram. Something like this would work, assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better version to use. You call it almost the same way you'd call histeq, except you don't have to manually compute the histogram. You just specify the actual image to match against:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from StackOverflow, perform histogram matching so that the first image matches the intensity distribution of the second image, and show the result all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around: make the first image the source, and try to match the second image to the first. Just swap the two parameters in the call to imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time with this representative image as the reference, so that every other image matches its distribution of intensities to it. I can't really write complete code for this because I don't know how you are storing these images in MATLAB; if you share some of that code, I'd love to write more, but here's a rough sketch in the meantime.
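This assumes the 160 images are individual PNG files in one folder; all folder and file names below are placeholders:

% Rough sketch: match every image to one representative reference image.
ref   = imread('reference.png');              % the representative image
files = dir(fullfile('images', '*.png'));     % the other ~160 images
if ~exist('matched', 'dir'), mkdir('matched'); end
for k = 1:numel(files)
    im  = imread(fullfile('images', files(k).name));
    out = imhistmatch(im, ref);               % match intensities to ref
    imwrite(out, fullfile('matched', files(k).name));
end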
I have a 16-bit voxel data set from which I need to extract the integer values for each voxel. The data set can be downloaded from here; it is the 'Head Aneuyrism 16Bits' data set (you need to click on the blood-vessels image to download the 16-bit version). Its size is 512x512x512, but I don't know whether it is greyscale or color, nor whether that matters. Looking at the image on the website I'd guess that it is color, but I am not sure whether that image should be taken literally.
A related question on SO is the following: How can I read in a RAW image in MATLAB?
and the following on mathworks: http://www.mathworks.com/matlabcentral/answers/63311-how-to-read-an-n-dimensioned-matrix-from-a-binary-file
Thanks to the information in the answers to those questions, I managed to extract some information from the file with Matlab as follows:
fileID=fopen('vertebra16.raw','r');
A=fread(fileID,512*512*512,'int16');
fclose(fileID);
B=reshape(A,[512 512 512]);
I don't need to visualise the image, I only need to have the integer values for each voxel, but I am not sure whether I am reading the information in the correct way with my script.
The only way I found to try and check whether I have the correct voxel values is to visualise B using the following:
implay(B)
Now, with the code above and then using implay(B), I get a black-and-white movie with a white disc in the center, a black background, and some black pixels moving within the disc (I tried to upload a frame of the movie, but it didn't work). Looking at the image on the website from which I downloaded the file, the movie frames I get seem quite different from that image, so I'd conclude that I do not have the correct voxel values.
Here are some questions related to my problem:
Do I need to know whether the image is in grey scale or color to read the voxel values correctly?
The data set website only says that the data is in 16-bit format, so how do I know whether I am dealing with signed or unsigned integers?
In the SO question linked above they use 'uint8=>uint8'. I could not find this in the Matlab manual, so I wonder whether 'uint8=>uint8' is an obsolete Matlab notation for 'uint8' or whether it does something different. I suspect it does something different, since if I use 'int16=>int16' instead of 'int16' in my code above, I get a completely black movie with implay.
It looks like you read the data correctly.
The problem when displaying it is the scale of the values. implay seems to assume the values are in [0,1] and therefore clamps everything to that range, whereas your data range is [0, 3000].
Simply doing
B = B / max(B(:))
will rescale your data to [0,1] and looking at the data again with
implay(B)
shows you something much more sensible.
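As for your question about 'uint8=>uint8': that is not obsolete notation. fread precision strings have the form 'source=>output', and a plain 'int16' is shorthand for 'int16=>double'. A short sketch:

% fread precision is 'source=>output'. Plain 'int16' means 'int16=>double':
% read 16-bit integers from the file, return them as doubles.
fid = fopen('vertebra16.raw', 'r');
A1 = fread(fid, 10, 'int16');          % class(A1) is 'double'
frewind(fid);                          % rewind to re-read the same bytes
A2 = fread(fid, 10, 'int16=>int16');   % class(A2) is 'int16'
fclose(fid);
% Same stored values, different MATLAB class; display tools scale integer
% classes and doubles differently, which changes what you see on screen.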
I remember a story about someone filtering images with a spam filter that he fed with some training data.
I have now come to the point where I need exactly something like this.
I have a lot of different types of images (mainly of people, e.g. selfies, group pictures, portraits, ...) but I only want a certain type of them (e.g. only male).
With the right algorithm and training data, I think it's possible to get it to the point where I can pass an image to it and get true or false depending on whether it matches my type or not.
I had a look at a few Face/Gender Detection APIs, but none of them worked for me; that's why I want to try the spam-filter approach. It seems like a funny idea.
Here's what I need:
a trainable spam-filter algorithm/code sample/API
has to work offline
preferably for C# or Java
I already spent a few hours trying different things and googling; now I'm here and I'd like to get your opinion on this problem and on the solution you think is appropriate.
Buddha
There is a simple image comparison algorithm that you can read about here: compareImages php class.
Basically the way it works is this:
it takes an image (a cropped image would be best), scales it down to an 8x8 pixel image, converts it to a BW/greyscale image, and then calculates the mean (average) value of the pixels.
Then it goes over all the pixels of the scaled image (64 pixels), and for every pixel whose value is >= the mean value it puts "1", and for every pixel whose value is < the mean value it puts "0", resulting in a 64-bit "signature" of 0s and 1s.
This signature is what identifies the image; you can save it in some kind of database as your "learned" filter.
Then if an email arrives with some images, you can just crop them, scan them, produce a signature, and see if it matches any known signature in your database.
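A minimal sketch of that signature computation, written in Matlab since that's what the rest of this page uses (the function name is ours; it translates directly to C# or Java; save it as averageHash.m):

function sig = averageHash(img)
    % Average hash: the 64-bit signature described above.
    if size(img, 3) == 3
        img = rgb2gray(img);          % convert to greyscale
    end
    small = imresize(img, [8 8]);     % scale down to 8x8 pixels
    m     = mean(small(:));           % mean pixel value
    sig   = small(:)' >= m;           % 64 logical values: the signature
end

Two images can then be compared by the Hamming distance between their signatures, e.g. sum(averageHash(a) ~= averageHash(b)); a small distance suggests a match.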
The good things about this algorithm are:
It is very fast and scalable (scaling an image down to 8x8 is fast, and scanning the pixels as described is fast too).
Because it converts the image to greyscale and resizes it down, it can recognise the same image across color variations and different sizes.
Because you use 64-bit signatures, it doesn't take a lot of space in your database either.
Hope this helps.
I would like to know why we need to decode, let's say, a PNG to a bitmap in order to show the image.
Why not just show the PNG as it is (encoded)?
I'm asking a moron-type question here on purpose. It's clear to me that it's impossible to show an encoded image just like that, but I want to know why, and how an image is shown on a screen, because it's easy to just do:
canvas.drawBitmap(((AndroidImage)Image).bitmap, x, y, null);
I want to understand the whole of it. I'm guessing we need to show every pixel one by one, but I want more details.
It's easy to know how to do it; it's a bit harder to understand why.
If someone has a course/tutorial/article/explanation that explains it, I would appreciate it.
Thanks in advance
PS: Please don't respond "you need to decode/convert the PNG to a bitmap". I know that... and that's not my question.
There are lots of reasons. There is not really a direct relation between 'a value in a file' and 'a pixel on a screen'.
You need to know the width and height of the bitmap. You cannot infer these from the size of the image data -- they have to be stored somewhere inside the image file. (Or anywhere else. Point is, you have to know the size.)
You need to know the bit depth and color model of the bitmap. You cannot meaningfully copy an 8-bit indexed image directly onto a screen that accepts 32-bit BGR ordering with an unused byte, for example.
Your example, the PNG file format, specifies that all image data is compressed. This is for a sane reason: the PNG format was designed for use on web pages, in a time when every byte still counted. But even the lowly BMP file format uses a very specific form of 'encoding': in its 24-bit format, every line consists of BGR values for each pixel and is padded at the end with enough bytes to make its total length evenly divisible by 4.
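As a tiny worked example of that BMP padding rule (in Matlab, with a made-up width):

% Worked example of 24-bit BMP row padding (numbers are made up).
width    = 101;                      % hypothetical width in pixels
rawBytes = width * 3;                % 3 bytes (B, G, R) per pixel = 303
stride   = ceil(rawBytes / 4) * 4;   % rounded up to a multiple of 4 = 304
padBytes = stride - rawBytes;        % so each row ends with 1 padding byte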
JPEG uses an even more advanced encoding scheme (too difficult to explain in a few short words) so it can compress images further. That scheme allows far more compression than regular methods, which in turn means there is only the most tenuous relation between 'values in the file' and 'pixels on the screen'.