inaccurate colormap [FLIR TOOLS] in .jpeg thermal image - colorbar

I am currently working on a project with a thermal dataset I found online. Basically I just need to extract a .csv from the thermal image. Because the dataset only provides the .jpg format, I tried to use FLIR TOOLS to get the temperature values out of the image.
As seen, FLIR TOOLS reports the highest temperature on the face as 74.7, which makes no sense. Is it possible to get at least a 'closest' value, or is that impossible because of the .jpg format? Can anyone recommend how to get the temperature from a thermal image? Thanks a lot!

In general, you can't get temperatures from a plain grayscale JPEG: it carries no calibration data relating gray levels to real temperatures. Real (radiometric) FLIR images contain embedded metadata for that.
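If your JPEGs happen to be radiometric FLIR files, you can check for and extract that metadata with exiftool. A minimal sketch, assuming exiftool is installed and using a placeholder filename; if no FLIR tags come back, the file is just a colour-mapped picture and no temperatures can be recovered:

import subprocess

# List any FLIR metadata embedded in the JPEG (radiometric files carry
# Planck calibration constants, emissivity, and so on).
tags = subprocess.run(
    ["exiftool", "-FLIR:all", "thermal.jpg"],        # placeholder filename
    capture_output=True, text=True,
).stdout
print(tags or "No FLIR tags found - temperatures cannot be recovered.")

# If the tags are present, dump the embedded 16-bit sensor counts; turning
# counts into degrees still requires the Planck constants listed above.
with open("raw_thermal.png", "wb") as out:
    subprocess.run(["exiftool", "-b", "-RawThermalImage", "thermal.jpg"],
                   stdout=out)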

Related

Why does everybody convert images to grayscale before performing operations in OpenCV?

I've been trying to find out why everybody converts an image to grayscale before processing it.
For example, this website with instructions teaching people how to build a simple scanning program converts the photo to grayscale first before passing commands to manipulate the image itself.
In the second example, this thread on Stack Overflow shows a person who also converts the image to grayscale before extracting text from his image.
Does this process make the image easier to manipulate? Or does it give better results when extracting text? If so, shouldn't a binary image give the best result in the case of extracting text?
More often than not, grayscale has all the relevant information to complete a particular task. So reducing the image to grayscale greatly simplifies calculations and removes redundancies.
A binary image is great too, but it sacrifices too much information to be useful in many cases. And most libraries support a minimum of 8-bit image processing anyway, so a true binary data structure is rarely useful.
Imagine having to create a program to recognize text on paper. Having a color image doesn't help you read the text any better. The text can be in various colors, but you can read it even if it's in black and white. You could argue that a binary image should give the same performance, and that is true IF there is no noise such as shadows on the paper.
Once noise elements exist in the image, you need more information to separate the text from the noise, and that is when grayscale is useful.
Moreover, the most used and reliable information in advanced image processing is edges and textures, both of which can be obtained from a grayscale image.
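As a rough illustration of that pipeline in OpenCV's Python bindings (filenames and thresholds are placeholders, not taken from the question):

import cv2

# Colour -> grayscale: one channel keeps the information most tasks need.
img = cv2.imread("page.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Edges and texture, the workhorse features, come straight from grayscale.
edges = cv2.Canny(gray, 50, 150)

# Binarising is a further, lossy step; Otsu's threshold works well only once
# shadows and other noise have been dealt with.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)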

imshow shows image as noisy even though it's not in Windows image viewer

I'm trying to train a segmentation network (a CNN), and it is producing very poor results. I looked at the images and I'm wondering if this is the reason. My input images are a stack of images in a .tif file. In the Windows image viewer, below is what I get:
I am trying to detect and segment the bright spots shown above
But when I open the same file in MATLAB using imshow() I get
All the information is basically lost. However, when I use imagesc() I get the following:
This is much better, but why are my images not working with my network? I am getting very unpredictable losses and accuracies even with tried-and-tested networks.
Is it because my images are being read in the form shown by imshow()?
In MATLAB, the following convention is used for images:
uint8: pixels are in the range [0,255].
double: pixels are in the range [0,1].
When using imshow on a double image, values between 0 and 1 are mapped to a color scale (typically black to white). Any value above 1 is also mapped to white. This is what is happening to you: most of your pixels are shown as white.
It is likely that the CNN you are using makes the same assumptions and therefore clips your data.
The solution is to properly scale your images when you read them in. See for example im2double.
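If the training pipeline is in Python rather than MATLAB, the analogous fix is to divide by the maximum of the integer type; a minimal sketch (im2double does the equivalent in MATLAB):

import numpy as np

def to_double(frame):
    # Map integer pixel values (e.g. uint16 counts) into [0, 1], the range
    # that display routines and most CNNs expect.
    info = np.iinfo(frame.dtype)
    return frame.astype(np.float64) / info.max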
Neither imshow nor imagesc is designed to handle TIFF stacks. They are for viewing, not reading, image data. You may also see a warning along the lines of the following:
"Can only display one frame from this multiframe file"
You can use imread to read in each of the frames in the file separately, as per this answer, or the Tiff class, which is a MATLAB gateway to the LibTiff library routines and gives more detailed control over how you read in your images if imread doesn't hack it.
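Outside MATLAB, a similar frame-by-frame read can be done with the tifffile package (my assumption, not part of the original answers; the filename is a placeholder):

import tifffile

# Read each page of the multi-frame .tif separately rather than letting a
# viewer collapse the whole stack into one clipped image.
with tifffile.TiffFile("stack.tif") as tif:
    frames = [page.asarray() for page in tif.pages]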

Speeding up PostScript image printing

I am developing an application that prints an image by generating PostScript output and sending it to the printer. So I convert my image to JPEG, then to an ASCII85 string, append this data to the PostScript file, and send it to the printer.
The output looks like:
%!
{/DeviceRGB setcolorspace
/T currentfile/ASCII85Decode filter def
/F T/DCTDecode filter def
<</ImageType 1/Width 3600/Height 2400/BitsPerComponent
8/ImageMatrix[7.809 0 0 -8.053 0 2400]/Decode
[0 1 0 1 0 1]/DataSource F>> image F closefile T closefile}
exec
s4IA0!"_al8O`[\!<E1.!+5d,s5<tI7<iNY!!#_f!%IsK!!iQ0!?(qA!!!!"!!!".!?2"B!!!!"!!!
---------------------------------------------------------------
ASCII85 data
---------------------------------------------------------------
bSKs4I~>
showpage
My goal now is to speed this up. Currently it takes about 14 seconds from sending the .ps file to the printer to the moment the printer actually starts printing the page (for a 2 MB file).
Why is it so slow?
Maybe I can reformat the image so the printer doesn't need to perform an affine transform of the image?
Maybe I can use a better image encoding?
Any tutorials, clues or advice would be valuable.
One reason it's slow is that JPEG (the DCTDecode filter) is an expensive compression filter. Try using Flate instead. Don't ASCII85-encode the image; send it as binary. That reduces transmission time and removes another filter. Note that JPEG is a lossy compression, so by 'converting to jpg' you are also sacrificing quality.
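A hedged sketch of what that could look like: the same header structure as above, but with FlateDecode and raw binary samples instead of DCTDecode plus ASCII85 (assumes Pillow; filenames are placeholders):

import zlib
from PIL import Image

img = Image.open("photo.png").convert("RGB")       # placeholder input file
w, h = img.size
data = zlib.compress(img.tobytes(), 9)             # Flate-compressed RGB samples

header = ("%%!\n"
          "{/DeviceRGB setcolorspace\n"
          "/F currentfile/FlateDecode filter def\n"
          "<</ImageType 1/Width %d/Height %d/BitsPerComponent 8\n"
          "/ImageMatrix[%d 0 0 -%d 0 %d]/Decode[0 1 0 1 0 1]\n"
          "/DataSource F>> image F closefile}\n"
          "exec\n" % (w, h, w, h, h))

with open("out.ps", "wb") as ps:
    ps.write(header.encode("ascii"))
    ps.write(data)                                  # binary, no ASCII85 step
    ps.write(b"\nshowpage\n")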
You can reduce the amount of effort the printer goes to by creating/scaling the image (before creating the PostScript) so that each image sample matches one pixel in device space. On the other hand, if you are scaling an image up, this means you will need to send more image data to the printer. But usually these days the data connection is fast.
However, this is usually hard to do, and it is often defeated by the fact that the printer may not be able to print to the edge of the media, and so may scale the marking operations by a small amount so that the content fits on the printable area. It's usually pretty hard to figure out if that's going on.
Your ImageMatrix is, well, odd. It isn't a 1:1 scaling, and floating-point scale factors are really going to slow down the mapping from user space to device space. And you have a lot of samples to map.
You could also map the image samples into PostScript device space (so that bottom left is at 0,0 instead of top left), which would mean you wouldn't have to flip the CTM in the y axis.
But in short, trying to play with the scale factors is probably not worth it, and most printers optimise these transformations anyway.
The colour model of the printer is usually CMYK, so by sending an RGB image you are forcing the printer to do a colour conversion on every sample in the image. For your 3600 x 2400 image that's 8,640,000 conversions.

Imagesc conversion formula

I have a .png image that has been created from some grayscale numbers with MATLAB's imagesc tool, using the standard colormap.
For some reason, I am unable to recover the raw data. Is there a way of recovering the raw data from the image? I tried rgb2gray, which more or less worked, but if I feed the new image back into imagesc, it gives me a slightly different result. Also, the pixel with the highest intensity differs between the two images.
So, to clarify: I would love to know how MATLAB applies the RGB colormap to the grayscale values when using the standard colormap.
This is the image we are talking about:
http://imgur.com/qFsGrWw.png
Thank you!
No, you will not get the right data if you are using the standard colormap, or jet.
Generally, it's a very bad idea to try to reverse engineer plots, as they never contain the entirety of the information. This is true in general, but even more so if you use colormaps that do not change linearly with the data. The range of data covered by blue in jet is massively bigger than the range covered by orange or any other colour. The colour changes are non-linear with respect to the data changes, and this will make you miss a lot of resolution. You may know what value orange corresponds to, but blue will correspond to a very wide range of possible values.
In short:
Trying to get data from a representation of data (i.e. plots) is a terrible idea
jet is a terrible idea
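That said, an approximate inversion is possible by matching each pixel against the colormap's entries; the result is quantised to at most the colormap's length and only recovers relative values. A sketch assuming the figure really used jet and relying on matplotlib's copy of it (the filename is a placeholder):

import numpy as np
import matplotlib.pyplot as plt

rgb = plt.imread("qFsGrWw.png")[..., :3]            # drop any alpha channel

# 256-entry lookup table of the colormap assumed to have produced the figure.
lut = plt.get_cmap("jet")(np.linspace(0, 1, 256))[:, :3]

# Nearest-neighbour inversion: index of the closest colormap entry per pixel.
dist = np.linalg.norm(rgb[..., None, :] - lut[None, None, :, :], axis=-1)
recovered = dist.argmin(axis=-1) / 255.0            # relative values in [0, 1]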

Caffe - training autoencoder with image data/image label pairs

I am very unfamiliar with Caffe. My task is to train an autoencoder net on image pairs, given in .tif format, where one is a grayscale image of nerves and the other is the corresponding binary mask, which shows whether a certain structure is present in the image or not. I have these in the same "train" folder. What I would like to accomplish is a meaningful experiment with these images (segmentation, classification; it is not specified). My first problem is that I do not know how to feed the images into the net without an existing train.txt. Can I use the images directly, or is another format like LMDB or HDF5 needed? Any suggestion is appreciated.
You can accomplish it with simple classification (using existing networks like AlexNet, GoogLeNet or LeNet). You can use just the binary mask or the grayscale image together with the class name to do this. NVIDIA DIGITS is a good graphical tool for building the paired dataset and for training.
Please see this link:
https://developer.nvidia.com/digits
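If you do go the classification route, Caffe's ImageData layer only needs a plain text file with one "path label" pair per line, which is easy to generate yourself. A sketch under two assumptions of my own: everything sits in a single train folder, and the class can be derived from the filename:

import os

# Hypothetical rule: files whose name contains "mask" belong to class 1,
# everything else to class 0. Adjust to however your dataset encodes labels.
train_dir = "train"
with open("train.txt", "w") as listing:
    for name in sorted(os.listdir(train_dir)):
        if not name.lower().endswith(".tif"):
            continue
        label = 1 if "mask" in name.lower() else 0
        listing.write("%s %d\n" % (os.path.join(train_dir, name), label))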
