Reading data from colour terrain map - image

I have a question about converting a height-map that is in colour into a matrix - look here to see examples of such maps. If I had a terrain matrix and plotted it using imagesc, then I would see it as a colour map. I was wondering how I could convert an image that looks like this back into its corresponding matrix.
This seems like it should be a pretty basic procedure, but I can neither work out how to do it myself nor find out how to do it online (including looking on SO).
To put it another way, the image in question is a JPEG; what I'd like is to convert the .jpg file into a matrix, M say, so that imagesc(M), or surf(M) with the camera looking down at the (x,y)-plane, gives the same picture as viewing the image directly, e.g. imshow(imread('Picture.jpg')).

You can use Matlab's rgb2ind function for this. All you need to choose is the "resolution" of the output colormap that you want, i.e. the second parameter n. If you specify n as 8, for example, then your colormap will only have 8 values and your output indexed image will only have 8 values as well.
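For example, a minimal sketch (assuming the image file is called Picture.jpg and that 8 height levels are enough):
RGB = imread('Picture.jpg');   % m-by-n-by-3 colour image
[M, map] = rgb2ind(RGB, 8);    % M is the m-by-n index matrix, map its 8-colour colormap
imagesc(M); colormap(map);     % should look similar to imshow(RGB)
Note that rgb2ind orders the colormap by its own quantisation, not by height, so the indices in M may still need to be re-ordered before they can be read as heights.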

Depending on the color coding scheme used, you might try first converting the RGB values to HSL or HSV and using the hue values for the terrain heights.
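A rough sketch of that idea (assuming, which is not guaranteed, that the map's colour coding runs monotonically around the hue circle):
HSV = rgb2hsv(im2double(imread('Picture.jpg')));
H = HSV(:,:,1);        % hue channel in [0,1], used as a proxy for height
imagesc(H); colorbar;  % check visually whether hue actually tracks the terrain height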

Related

Imagesc conversion formula

I have a .png image that was created from some grayscale numbers with Matlab's imagesc tool, using the standard color map.
For some reason, I am unable to recover the raw data. Is there a way of recovering the raw data from the image? I tried rgb2gray, which more or less worked, but if I feed the recovered data back into imagesc, it gives me a slightly different result. Also, the pixel with the highest intensity differs between the two images.
So, to clarify: I would love to know how Matlab applies the RGB colormap to the grayscale values when using the standard colormap.
This is the image we are talking about:
http://imgur.com/qFsGrWw.png
Thank you!
No, you will not get the right data back if you are using the standard colormap, jet.
Generally, it's a very bad idea to try to reverse engineer plots, as they never contain the entirety of the information. This is true in general, but even more so if you use colormaps that do not change uniformly with the data. The range of data mapped to blue in jet is massively bigger than the range mapped to orange or any other color. The color changes are non-linear with respect to the data changes, and this will make you lose a lot of resolution. You may know what value orange corresponds to, but blue will cover a very wide range of possible values.
In short:
Trying to get data back from a representation of the data (i.e. a plot) is a terrible idea
jet is a terrible idea
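That said, if an approximate reconstruction is better than nothing, one hedged sketch is to invert the mapping imagesc uses: imagesc scales the data linearly onto the rows of the current colormap, so matching each pixel's RGB value to its nearest colormap entry recovers the data up to that unknown linear scaling (and with all the resolution loss described above). Assuming the file is a bare jet-colored image saved as plot.png (a placeholder name):
RGB = im2double(imread('plot.png'));          % m-by-n-by-3 image of the plot
cmap = jet(256);                              % assumption: the default jet colormap was used
pix = reshape(RGB, [], 3);                    % one RGB triplet per row
d = zeros(size(pix,1), size(cmap,1));
for k = 1:size(cmap,1)
    d(:,k) = sum(bsxfun(@minus, pix, cmap(k,:)).^2, 2);   % squared distance to colormap entry k
end
[~, idx] = min(d, [], 2);                     % nearest colormap row per pixel
M = reshape(idx, size(RGB,1), size(RGB,2));   % recovered data, up to an unknown linear rescaling
imagesc(M)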

Reading voxel values from binary file into matlab

I have a 16bit voxel data set from which I need to extract the integer values for each voxel. The data set can be downloaded from here, it is the 'Head Aneuyrism 16Bits' data set (You need to click on the blood vessels image to download the 16bit version). Its size is 512x512x512, but I don't know whether it is greyscale or color, nor if that matters. Looking at the image on the website I'd guess that it is color, but I am not sure whether the image should be taken literally.
A related question on SO is the following: How can I read in a RAW image in MATLAB?
and the following on mathworks: http://www.mathworks.com/matlabcentral/answers/63311-how-to-read-an-n-dimensioned-matrix-from-a-binary-file
Thanks to the information in the answers to these questions I managed to extract some information from the file with matlab as follows:
fileID=fopen('vertebra16.raw','r');
A=fread(fileID,512*512*512,'int16');
B=reshape(A,[512 512 512]);
I don't need to visualise the image, I only need to have the integer values for each voxel, but I am not sure whether I am reading the information in the correct way with my script.
The only way I found to try and check whether I have the correct voxel values is to visualise B using the following:
implay(B)
Now, with the code above and then implay(B), I get a black-and-white movie with a white disc in the center on a black background, with some black pixels moving around inside the disc (I tried to upload a frame of the movie, but it didn't work). Looking at the image on the website from which I downloaded the file, the movie frames I get look quite different from that image, so I conclude that I do not have the correct voxel values.
Here are some questions related to my problem:
Do I need to know whether the image is in grey scale or color to read the voxel values correctly?
On the data set website there is only written that the data set is in 16bit format, so how do I know whether I am dealing with signed or unsigned integers?
In the SO question linked above they use 'uint8=>uint8'. I could not find this in the Matlab manual, so I wonder whether 'uint8=>uint8' is an obsolete Matlab notation for 'uint8' or whether it does something different. I suspect that it does something different, since if I use 'int16=>int16' instead of 'int16' in my code above, I get a completely black movie with implay.
It looks like you read the data correctly.
The problem when displaying it is the scale of the values. implay seems to assume the values are in [0,1] and therefore clamps all values to that range, whereas your data range is [0, 3000].
Simply doing
B = B / max(B(:))
will rescale your data to [0,1] and looking at the data again with
implay(B)
shows you something much more sensible.
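Putting the question's code and the fix together (a consolidated sketch; whether the raw values are really signed 16-bit is the asker's assumption and may need to be flipped to 'uint16'):
fileID = fopen('vertebra16.raw', 'r');
A = fread(fileID, 512*512*512, 'int16');   % read 16-bit integers, returned as double
fclose(fileID);
B = reshape(A, [512 512 512]);             % the 512x512x512 voxel volume
B = B / max(B(:));                         % rescale to [0,1] so implay does not clamp everything
implay(B)
As an aside on the 'uint8=>uint8' notation asked about above: the 'input=>output' form of fread's precision string sets the class of the array fread returns, whereas a bare 'int16' reads 16-bit integers but returns them as double; that difference is why the two variants interact differently with implay's display scaling.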

Matlab rgb2hsv changes image

I am trying to convert an array of RGB values into an array of HSV values in Matlab. I am running the following code:
pic = imread('ColoradoMountains.jpg');
pic = rgb2hsv(pic);
imwrite(pic,'pic.jpg')
But the image that gets written has completely different colors. I've been trying to set the color map to hsv, but I'm not sure if that even makes sense. Nothing on the internet comes up for my keywords; can someone please point me in the right direction?
You can specify the colormap that imwrite is supposed to use. Try this:
imwrite(pic,colormap('HSV'),'pic.png');
Here's the documentation for imwrite: http://www.mathworks.com/help/matlab/ref/imwrite.html
In Matlab you have to distinguish between an indexed image and a 3-channel image. An indexed image is an n*m*1 image where each value in the [0,1] range is associated with a colour. This association is called a colour map, which could be, for example, a unit circle in HSV or RGB. Such an image can be written using imwrite with the map parameter, but this is not what you want.
What you apparently want is an HSV-encoded image, which means the three RGB channels are converted into three HSV channels. This is (as far as I know) not possible. If you take a look at the documentation of imwrite, you can see how CMYK-encoded images are written: this is done implicitly by passing an n*m*4 image. Is there any standard file format that supports HSV? If so, I'll take a closer look at that format.
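If the goal is only to do some processing in HSV space and still end up with a file that displays normally, one sketch (an assumption about the intent, not a way to store HSV itself) is to convert back before writing:
pic = imread('ColoradoMountains.jpg');
hsvPic = rgb2hsv(pic);                 % double array, channels are H, S, V in [0,1]
% ... manipulate hsvPic here ...
imwrite(hsv2rgb(hsvPic), 'pic.jpg');   % convert back to RGB so the file displays as expected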

Dealing with filters and colour's

I want to make filters like the ones shown here.
These are my target filters; can you please guide me on how to achieve them?
How can I make filters like these?
Which algorithms do I need to follow, and which steps should I take as a beginner?
What is the best and easiest way to get the RGB values and shades of the filters?
Copy of the image from the link above, added by spektre: the source image is the first one after the camera in the first line.
It is very hard to say from a single image that is not a test screen.
the black and white filter
is easy: just convert RGB to intensity i and then write the color (i,i,i) instead of (R,G,B). The simplest (but not precise) conversion is
i=(R+G+B)/3
but a better way is to use weights
i=w0*R+w1*G+w2*B
where w0+w1+w2=1; suitable values can be found with a little Google searching (a common choice is the luma weights w0=0.299, w1=0.587, w2=0.114).
the rest
Some filters look like over-amplified (overexposed) colors or re-weighted colors, like this:
r=w0*r; if (r>255) r=255;
g=w1*g; if (g>255) g=255;
b=w2*b; if (b>255) b=255;
Write an app with 3 scrollbars for w0, w1, w2 in the range <0-10> and redraw the image with the formula above. After a little experimenting you should find w0, w1, w2 for most of the filters ... The rest can be a mix of the colors, like this:
r=w00*r+w01*g+w02*b; if (r>255) r=255;
g=w10*r+w11*g+w12*b; if (g>255) g=255;
b=w20*r+w21*g+w22*b; if (b>255) b=255;
or:
i=(r+g+b)/3
r=w0*r+w3*i; if (r>255) r=255;
g=w1*g+w3*i; if (g>255) g=255;
b=w2*b+w3*i; if (b>255) b=255;
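As a concrete illustration of the mixing formulas above, a rough Matlab sketch (the file name and the 3x3 weight matrix W are placeholders you would tune by hand, as described):
img = double(imread('input.jpg'));     % hypothetical input, values 0-255
W = [1.2 0.1 0.0;                      % example weights only: each row mixes R,G,B into one output channel
     0.0 1.0 0.1;
     0.1 0.0 0.9];
pix = reshape(img, [], 3) * W';        % r' = w00*r + w01*g + w02*b, and likewise for g', b'
pix = min(max(pix, 0), 255);           % clamp to the 0-255 range, as in the if-statements above
out = uint8(reshape(pix, size(img)));
imshow(out)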
btw if you want the closest similarity you can:
find test colors in the input image
like R shades, G shades, B shades, RG, RB, BG, RGB shades from 0-255. Then get the colors from the filtered image at the same positions and draw dependency graphs: for each shade, plot the R, G, B intensities.
One axis is the input image color intensity and the other is the R, G, B intensity of the filtered color. From such a graph you should see directly which formula is used, and you can also compute the weights from it. This is how the over-amplification looks for the red color.
if the lines are not lines but curves
then some kind of gamma correction is used, so the formulas use a polynomial of higher order (power of 2, 3, 4, ...); mostly a power of 2 suffices. In that case the weights can also be negative!
some filters could use different color spaces
for example, transform RGB to HSV, shift the hue, and convert back to RGB. That will shift the colors a little.
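A small Matlab sketch of that last variant (the file name and the 0.1 hue shift are arbitrary example values):
img = im2double(imread('input.jpg'));  % hypothetical input image
HSV = rgb2hsv(img);
HSV(:,:,1) = mod(HSV(:,:,1) + 0.1, 1); % rotate the hue by a tenth of the hue circle
shifted = hsv2rgb(HSV);
imshow(shifted)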

Image Compression Algorithm - Breaking an Image Into Squares By Color

I'm trying to develop a mobile application, and I'm wondering about the easiest way to convert an image into a text file and then recreate the image in memory later from that text. The image(s) in question will contain no more than 16 or so colors, so it should work out fine.
Basically, brute-forcing this would mean saving each individual pixel's color data to the file. However, this would result in a HUGE file. I know there's a better way: for example, if a large portion of the image consists of the same color, break that area up into smaller squares and rectangles and save their coordinates and sizes to the file.
Here's an example. The image is supposed to be just black/white. The big color boxes represent theoretical 'data points' in the outputted text file. These boxes would really state their origin, size, and what color they should be.
E.g., the top box has an origin of 0,0, a size of 359,48, and it represents the color black.
Saved in a text file, the data would be 0,0,359,48,0.
What kind of algorithm would this be?
NOTE: The SDK that I am using cannot return a pixel's color from an X,Y coordinate. However, I can load external information into the program from a text file and manipulate it that way. This data that I need to export to a text file will be from a different utility that will have the capability to get a pixel's color from X,Y coordinates.
EDIT: Added a picture
EDIT2: Added constraints
Could you elaborate on why you want to save an image (or its parts) as plain text? Can't you use a binary representation instead? Also, if the images typically have lots of contiguous runs of pixels of the same color, you may want to use so-called run-length encoding (RLE). Alternatively, one of the Lempel-Ziv family of compression algorithms could be used (LZ77, LZ78, LZW).
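A tiny sketch of the run-length idea (shown in Matlab purely for illustration; the pixel values here are hypothetical palette indices for one scanline):
row = [0 0 0 0 1 1 0 0 0 2 2 2];            % one row of palette indices
change = [true, diff(row) ~= 0];            % marks the start of each run
vals = row(change);                         % the value of each run
runs = diff([find(change), numel(row)+1]);  % the length of each run
encoded = [vals; runs]                      % pairs: 0 x4, 1 x2, 0 x3, 2 x3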
Compress the image into a standard compressed format (e.g. JPEG, PNG, GIF) and then save it as a .txt file or whatever. To recreate the image, just read the file back into your program using whatever library function suits your particular needs.
If it's necessary that the .txt file have some textual meaning, then you may be in some trouble.
In CS there is an approach, a kind of spatial index, that recursively subdivides a plane into 4 tiles. If the cells have the same size it looks like a quadtree. If you want to subdivide a plane into patterns (of colors), you can use this tiling idea and dynamically change the size of the cells. A good starting point is the z-curve or the Hilbert curve.
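If you want to play with the quadtree idea before implementing it yourself, Matlab's Image Processing Toolbox has a ready-made decomposition that produces a related kind of block list (a sketch only; the file name is a placeholder, and qtdecomp needs a single-channel, square, power-of-2-sized image):
I = imread('blackwhite.png');              % hypothetical black/white input
if size(I,3) > 1, I = rgb2gray(I); end     % qtdecomp wants a single-channel image
S = qtdecomp(I, 0);                        % split every block that is not perfectly uniform
% S is sparse: S(r,c) == k means a uniform k-by-k block whose top-left corner is at (r,c).
% Emitting one record (r, c, k, I(r,c)) per nonzero entry gives text output close to the
% origin/size/color boxes described in the question.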
