
If we shift the hue by 2*pi/3, how will the R, G, B histograms change?
How can I test this? I have access to Photoshop, so is there a way to test this and find the answer?

According to the HSV to RGB conversion formula (part of it):
Shifting the hue by 120° will swap the channel histograms:
+120° : R-->G-->B-->R
-120° : B<--R<--G<--B
To test this in GIMP, open the image histogram via Colors \ Info \ Histogram.
Choose the Red, Green or Blue channel to see its histogram, then open the dialog
Colors \ Hue-Saturation, adjust Hue by ±120 degrees, and watch the live effect in the Histogram window.
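The same swap can be checked without GIMP. Here is a minimal Python sketch using the standard colorsys module (the shift_hue helper is written just for this example, it is not a library function):

```python
import colorsys

def shift_hue(rgb, degrees):
    """Shift the hue of an (r, g, b) triple (components in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

# A +120 degree shift sends R -> G -> B -> R: the old R value
# shows up in G, the old G in B, and the old B in R.
r, g, b = shift_hue((0.8, 0.2, 0.4), 120)
print(round(r, 3), round(g, 3), round(b, 3))  # -> 0.4 0.8 0.2
```

Since every pixel's channel values are permuted the same way, the per-channel histograms are permuted too.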

I do not think there is a generic answer to this, as the result depends on the image colors present, not just on the R, G, B histograms. You need to:
1. compute histograms
2. convert RGB to HSV
3. add to the hue and wrap it within the angular interval
4. convert back to RGB
5. compute histograms again
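The five steps above can be sketched as follows (a minimal Python illustration using the standard colorsys module; a real image would go through an imaging library):

```python
import colorsys
from collections import Counter

def channel_hist(pixels, channel, bins=8):
    """Histogram of one RGB channel (components in [0, 1])."""
    return Counter(min(int(p[channel] * bins), bins - 1) for p in pixels)

pixels = [(0.9, 0.1, 0.2), (0.3, 0.6, 0.1), (0.2, 0.2, 0.8)]

before = [channel_hist(pixels, c) for c in range(3)]      # step 1

shifted = []
for r, g, b in pixels:
    h, s, v = colorsys.rgb_to_hsv(r, g, b)                # step 2
    h = (h + 120 / 360.0) % 1.0                           # step 3: wrap, do not clamp
    shifted.append(colorsys.hsv_to_rgb(h, s, v))          # step 4

after = [channel_hist(shifted, c) for c in range(3)]      # step 5

# With a +120 degree shift the channel histograms are permuted R->G->B->R:
print(before[0] == after[1], before[1] == after[2], before[2] == after[0])
```

The key point is step 3: the hue must wrap around as a periodic value, not be clamped at the end of the interval.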
I do not use Photoshop, but I think steps #1, #2, #4, #5 should be present there. Step #3 should be there too (in some filter that manipulates brightness, gamma, etc.), but it is hard to say whether adding to the hue will be clamped by simply limiting the angle or handled as a periodic value. In the first case you need to correct the results like this:
1. compute histograms
2. convert to HSV
3. clone result A to a second image B
4. add A.hue += 2*pi/3 and B.hue -= 4*pi/3 (i.e. shifted by 2*pi/3 - 2*pi)
   A holds the un-clamped colors and B the colors that were clamped in A, shifted to their correct hue position.
5. in A recolor all pixels with hue == pi2 with some specified marker color
   Here pi2 is the value your tool clamps hues to; it can be zero, 2*pi, or one step less than 2*pi. This will allow us to ignore the clamped values later.
6. in B recolor all pixels with hue == 0 with the same marker color
7. convert A, B back to RGB
8. compute histograms, ignoring the marker color
9. merge the A and B histograms
   simply add the graph values together.
And now you can compare the histograms to evaluate the change on some sample images.
Anyway, you can do all this in any programming language. Most of the operations needed are present in common image processing and computer vision libraries like OpenCV, and adding to the hue is just two nested for loops and a single if statement, like:
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
    {
    pixel[y][x].h+=pi2/3.0; // pi2 = 2*pi, so this adds 120 degrees
    if (pixel[y][x].h>=pi2)
        pixel[y][x].h-=pi2;  // wrap around the full circle
    }
Of course, most HSV pixel formats I have used do not use floating-point values, so the hue could be represented, for example, by an 8-bit unsigned integer, in which case the code would look like:
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel[y][x].h=(pixel[y][x].h+(256/3))&255; // 256/3 ~ 120 degrees; &255 wraps the 8-bit hue
If you need to implement the RGB/HSV conversions look here:
RGB value base color name
I think this might interest you:
HSV histogram

Looking at it from a mathematical point of view: in 2*pi/3, with pi ≈ 3.14, the 2*pi part is the full angle of a circle.
Divided by 3, that means you have a third of a circle, or simply 120°.
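The arithmetic can be sanity-checked in one line if you have Python at hand:

```python
import math

# 2*pi radians is a full circle; a third of it is 120 degrees.
print(round(math.degrees(2 * math.pi / 3)))  # -> 120
```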

Related

How can I change the overall hue of an image, akin to Photoshop's hue adjustment?

Within Photoshop, I can adjust the overall hue of an image so that the black/white/grey levels remain unaffected while the colour hues change respectively to around -100 (picture of scale attached). Is there a MATLAB or PsychToolbox function that could accomplish this?
For example, some function like newImage = hueadjust(Image, -100) or something simple like this.
I have many images, and I'm sure there's a way to batch all my images through Photoshop so the same hue scale is applied to them all, but wondering if this can be accomplished relatively easily through code given my lack of coding experience.
Yeah that's pretty simple. Simply convert the image to HSV, take the hue channel and add or subtract modulo 360 degrees, then convert the image back. Note that manipulating the hue does not affect the black/gray/white colours seen in the image which is actually what you desire. The saturation and value components in the HSV colour space do that for us. This addition and/or subtraction of each pixel's hue component is what Photoshop (and other similar graphical editors like GIMP) performs under the hood.
Use the function rgb2hsv and its equivalent counterpart hsv2rgb. However, the dynamic range of each converted channel is from [0,1] but usually the hue is represented between 0 and 360 degrees. Therefore, multiply the hue channel by 360, do the modulo, and then divide by 360 when you're done.
If you wanted to create a function called hueadjust as you have in your post, simply do this:
function newImage = hueadjust(img, hAdjust)
hsv = rgb2hsv(img);
hue = 360*hsv(:,:,1);
hsv(:,:,1) = (mod(hue + hAdjust, 360)) / 360;
newImage = hsv2rgb(hsv);
end
The first line of code converts the image into HSV. The next line copies the hue channel into a variable called hue and multiplies each value by 360. We then write back into the hue channel of the original image: each hue value (stored in hue) is shifted by hAdjust, the desired shift in degrees, wrapped back around past 360 degrees (that's what mod does), and finally divided by 360 to return to the expected [0,1] dynamic range. The modified image is then converted back into RGB and sent to the output.
Take note that the output image will be of a double type due to the call to hsv2rgb. The dynamic range for each channel is still within the range of [0,1] so you can easily write these images to file without a problem. If you for some reason want to go back to the original type of the image itself, you can use functions such as im2uint8 from the Image Processing Toolbox to convert an input back into its unsigned 8-bit integer form. Most images seen in practice adopt this standard.
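For readers without MATLAB, the same logic can be sketched per pixel with Python's standard colorsys module (hue_adjust here is a hypothetical helper mirroring the MATLAB function above, operating on a list of pixels rather than an image array):

```python
import colorsys

def hue_adjust(pixels, degrees):
    """Shift the hue of a list of (r, g, b) triples in [0, 1] by `degrees`."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + degrees / 360.0) % 1.0   # mod keeps the hue on the circle
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# Greys are untouched because their saturation is zero:
print(hue_adjust([(0.5, 0.5, 0.5)], -45))  # -> [(0.5, 0.5, 0.5)]
```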
Here's a quick example seeing how this function works. Also let's take a look at what the hue circle looks like just for completeness:
Source: Lode's Computer Graphics Tutorial
We can see that red has a hue of around 0 degrees while magenta has a hue of around 315 degrees, or -45 degrees. Let's also use the onions.png image that ships with the Image Processing Toolbox in MATLAB. It looks like this:
Let's also read in this image first. The above image can be read in directly from the Internet using the URL that the image refers to:
im = imread('http://i.stack.imgur.com/BULoQ.png');
A nice thing about the imread function is that you can read images directly from the Internet. I'd like to shift the hues so that the red hues become magenta, which means shifting all of the hues in the image by -45 degrees. So run the function like so:
out = hueadjust(im, -45);
Showing the image with imshow(out), we get:
Note that the rest of the hues make sense when transformed. The greens are a rather dull green, hovering at probably around a 60 degree hue. Subtracting 45 pushes this toward a red hue, and you can see that in the resulting image. The same can be said about the yellow colours, which hover at around a 30-40 degree hue. Also note that the white colours are slightly affected, most likely because the colours in those white areas are not purely white (their saturation is not exactly zero), which is to be expected.

what is the difference between image vs imagesc in matlab

I want to know the difference between imagesc and image in MATLAB.
I used this example to try to figure out the difference between the two, but I couldn't explain the difference in the output images by myself; could you help me with that?
I = rand(256,256);
for i=1:256
    for j=1:256
        I(i,j) = j;
    end
end
figure('Name','Comparison between image and imagesc')
subplot(2,1,1);image(I);title('using image(I)');
subplot(2,1,2);imagesc(I);title('using imagesc(I)');
figure('Name','gray level of image');
image(I);colormap('gray');
figure('Name','gray level of imagesc');
imagesc(I);colormap('gray');
image displays the input array as an image. When that input is a matrix, by default image has the CDataMapping property set to 'direct'. This means that each value of the input is interpreted directly as an index to a color in the colormap, and out of range values are clipped:
image(C) [...] When C is a 2-dimensional MxN matrix, the elements of C are used as indices into the current colormap to determine the color. The
value of the image object's CDataMapping property determines the
method used to select a colormap entry. For 'direct' CDataMapping (the default), values in C are treated as colormap indices (1-based if double, 0-based if uint8 or uint16).
Since Matlab colormaps have 64 colors by default, in your case this has the effect that values above 64 are clipped. This is what you see in your image graphs.
Specifically, in the first figure the colormap is the default parula with 64 colors; and in the second figure colormap('gray') applies a gray colormap of 64 gray levels. If you try for example colormap(gray(256)) in this figure the image range will match the number of colors, and you'll get the same result as with imagesc.
imagesc is like image but applying automatic scaling, so that the image range spans the full colormap:
imagesc(...) is the same as image(...) except the data is scaled to use the full colormap.
Specifically, imagesc corresponds to image with the CDataMapping property set to 'scaled':
image(C) [...] For 'scaled' CDataMapping, values in C are first scaled according to the axes CLim and then the result is treated as a colormap index.
That's why you don't see any clipping with imagesc.
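To see the difference numerically, here is a small Python sketch of the two mappings (this only mimics the behaviour described above; it is not MATLAB's actual implementation):

```python
def direct_index(v, ncolors=64):
    # 'direct' CDataMapping: the value is used as a 1-based index,
    # clipped to the colormap range.
    return min(max(int(v), 1), ncolors)

def scaled_index(v, vmin, vmax, ncolors=64):
    # 'scaled' CDataMapping: the data range [vmin, vmax] is stretched
    # over the whole colormap first.
    t = (v - vmin) / (vmax - vmin)
    return min(int(t * ncolors) + 1, ncolors)

# A column value of 200 (from the 1..256 ramp) against a 64-color map:
print(direct_index(200))          # -> 64  (clipped: everything above 64 saturates)
print(scaled_index(200, 1, 256))  # -> 50  (scaled into the full colormap)
```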

Color quantization of an image using K-means clustering (using RGB features)

Is it possible to do clustering on RGB + spatial features of images with MATLAB?
NOTE: I want to use kmeans for clustering.
In fact, basically I want to do one thing: I want to get this image
from this
I think you are looking for color quantization.
[imgQ,map]= rgb2ind(img,4,'nodither'); %change this 4 to the number of desired colors
%in quantized image
imshow(imgQ,map);
Result:
Using kmeans :
%img is the original image
imgVec=[reshape(img(:,:,1),[],1) reshape(img(:,:,2),[],1) reshape(img(:,:,3),[],1)];
[imgVecQ,imgVecC]=kmeans(double(imgVec),4); %4 colors
imgVecQK=pdist2(double(imgVec),imgVecC); %choosing the closest centroid to each pixel,
[~,indMin]=min(imgVecQK,[],2); %avoiding double for loop
imgVecNewQ=imgVecC(indMin,:); %quantizing
imgNewQ=img;
imgNewQ(:,:,1)=reshape(imgVecNewQ(:,1),size(img(:,:,1))); %arranging back into image
imgNewQ(:,:,2)=reshape(imgVecNewQ(:,2),size(img(:,:,1)));
imgNewQ(:,:,3)=reshape(imgVecNewQ(:,3),size(img(:,:,1)));
imshow(img)
figure,imshow(imgNewQ,[]);
Result of kmeans :
If you want to add a distance constraint to kmeans, the code is slightly different. Basically, you need to concatenate the pixel coordinates of each pixel too. But remember, when assigning the nearest centroid to each pixel, use only the color, i.e. the first 3 dimensions, not the last 2; including the coordinates there would obviously make no sense. The code is very similar to the previous one; please note the changes and understand them.
[col,row]=meshgrid(1:size(img,2),1:size(img,1));
imgVec=double([reshape(img(:,:,1),[],1) reshape(img(:,:,2),[],1) reshape(img(:,:,3),[],1)]); %cast to double so the coordinates below are not clamped to the uint8 range
imgVec=[imgVec row(:) col(:)];
[imgVecQ,imgVecC]=kmeans(double(imgVec),4); %4 colors
imgVecQK=pdist2(double(imgVec(:,1:3)),imgVecC(:,1:3));
[~,indMin]=min(imgVecQK,[],2);
imgVecNewQ=imgVecC(indMin,1:3); %quantizing
imgNewQ=img;
imgNewQ(:,:,1)=reshape(imgVecNewQ(:,1),size(img(:,:,1))); %arranging back into image
imgNewQ(:,:,2)=reshape(imgVecNewQ(:,2),size(img(:,:,1)));
imgNewQ(:,:,3)=reshape(imgVecNewQ(:,3),size(img(:,:,1)));
imshow(img)
figure,imshow(imgNewQ,[]);
Result of kmeans with distance constraint:
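The nearest-centroid assignment that pdist2 and min perform above can be sketched in plain Python (quantize is a hypothetical helper for illustration; a real implementation would vectorize this):

```python
def quantize(pixels, centroids):
    """Map each RGB pixel to its nearest centroid (squared Euclidean distance)."""
    def nearest(p):
        return min(centroids,
                   key=lambda c: sum((pi - ci) ** 2 for pi, ci in zip(p, c)))
    return [nearest(p) for p in pixels]

# Made-up centroids and pixels, just to show the assignment step:
centroids = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
pixels = [(250, 10, 10), (5, 200, 30), (240, 250, 245)]
print(quantize(pixels, centroids))
# -> [(255, 0, 0), (0, 255, 0), (255, 255, 255)]
```

In the spatially constrained variant, the centroids would carry two extra coordinate dimensions, but the distance in `nearest` would still be computed over the first three (color) components only.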

Dealing with filters and colours

I want to make filters like the ones shown here.
These are my target filters, but can you please guide me on how to achieve them?
How can I make filters like these?
Which algorithms do I need to follow, and which steps should I take as a beginner?
What is the better and easiest way to get the RGB values and shades of the filters?
copy of image from link above by spektre:
the source image is the first one after the camera in the first line.
It is very hard to say from a single image that is not a test screen.
the black and white filter
is easy: just convert RGB to intensity i and then write the color i,i,i instead of RGB. The simplest (not precise) conversion is
i=(R+G+B)/3
but a better way is to use weights
i=w0*R+w1*G+w2*B
where w0+w1+w2=1; the values can be found with a little google search effort
the rest
some filters look like overexposed or weighted colors, like this:
r=w0*r; if (r>255) r=255;
g=w1*g; if (g>255) g=255;
b=w2*b; if (b>255) b=255;
Write an app with 3 scrollbars for w0,w1,w2 in the range <0-10> and redraw the image with the above formula. After a little experimenting you should find w0,w1,w2 for most of the filters ... The rest can be a mix of colors, like this:
r=w00*r+w01*g+w02*b; if (r>255) r=255;
g=w10*r+w11*g+w12*b; if (g>255) g=255;
b=w20*r+w21*g+w22*b; if (b>255) b=255;
or:
i=(r+g+b)/3
r=w0*r+w3*i; if (r>255) r=255;
g=w1*g+w3*i; if (g>255) g=255;
b=w2*b+w3*i; if (b>255) b=255;
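The weighted-mix formulas above amount to multiplying each pixel by a 3x3 weight matrix and clamping. A Python sketch with illustrative (made-up) weights:

```python
def apply_color_matrix(rgb, m):
    """Apply a 3x3 weight matrix to an (r, g, b) pixel and clamp to 0..255."""
    r, g, b = rgb
    mixed = (m[0][0]*r + m[0][1]*g + m[0][2]*b,
             m[1][0]*r + m[1][1]*g + m[1][2]*b,
             m[2][0]*r + m[2][1]*g + m[2][2]*b)
    return tuple(min(max(int(c), 0), 255) for c in mixed)

# "Warming" matrix: boost red, dampen blue (weights are purely illustrative).
warm = [[1.2, 0.1, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.8]]
print(apply_color_matrix((200, 100, 50), warm))  # -> (250, 100, 40)
```

Hooking the matrix entries up to scrollbars, as suggested above, lets you search for the weights interactively.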
btw if you want the closest similarity you can:
find test colors in input image
like R shades, G shades, B shades, RG, RB, BG, RGB shades from 0-255. Then get the colors from the filtered image at the same positions and, for each shade, draw dependency graphs of the R,G,B intensities.
One axis is the input image color intensity and the other one is the R,G,B intensity of the filtered color. Then you should see which formula is used directly and can also compute the weights from it. This is how overexposure works for the Red color
if the lines are not lines but curves
then some kind of gamma correction is used, so the formulas use a polynomial of higher order (powers of 2,3,4...); mostly a power of 2 suffices. In that case the weights can also be negative !!!
some filters could use different color spaces
for example, transform RGB to HSV, shift the hue, and convert back to RGB. That will shift the colors a little.

Value as colour representation

Converting a value to a colour is well known, I do understand the following two approaches (very well described in changing rgb color values to represent a value)
Value as shades of grey
Value as brightness of a base colour (e.g. brightness of blue)
But what is the best algorithm when I want to use the full colour range ("all colours")? When I use greys with 8-bit RGB values, I actually have a representation of 256 shades (white to black). But if I used the whole range, I could have more shades. Something like this. It would also be easier to recognize.
Basically I need the algorithm in Javascript, but I guess all code such as C#, Java, pseudo code would do as well. The legend at the bottom shows the encoding, and I am looking for the algorithm for this.
So, having a range of values (e.g. 1-1000), I could represent 1 as white and 1000 as black, but I could also represent 1 as yellow and 1000 as blue. But is there a standard algorithm for this? Looking at the example here, they use colour intervals. I do not only want to use greys or change the brightness, but to use all colours.
This is a visual demonstration (Flash required). Given values represented in a color scheme, my goal is to calculate the colours.
I do have a linear colour range, e.g. from 1-30000
-- Update --
Here I found that there is something called Lab space:
Lab space is a way of representing colours where points that are close to each other are those that look similar to each other to humans.
So what I would need is an algorithm to represent the linear values in this lab space.
There are two basic ways to specify colors. One is a pre-defined list of colors (a palette) and then your color value is an index into this list. This is how old 8-bit color systems worked, and how GIF images still work. There are lists of web-safe colors, eg http://en.wikipedia.org/wiki/Web_colors, that typically fit into an 8-bit value. Often similar colors are adjacent, but sometimes not.
A palette has the advantage of requiring a small amount of data per pixel, but the disadvantage that you're limited in the number of different colors that can be on the screen at the same time.
The other basic way is to specify the coordinates of a color. One way is RGB, with a separate value for each primary color. Another is Hue/Saturation/Luminance. CMYK (Cyan, Magenta, Yellow and sometimes blacK) is used for print. This is what's typically referred to as true color and when you use a phrase like "all colors" it sounds like you're looking for a solution like this. For gradients and such HSL might be a perfect fit for you. For example, a gradient from a color to grey simply reduces the saturation value. If all you want are "pure" colors, then fix the saturation and luminance values and vary the hue.
Nearly all drawing systems require RGB, but the conversion from HSL to RGB is straightforward. http://en.wikipedia.org/wiki/HSL_and_HSV
If you can't spare the full 24 bits per color (8 bits per channel; 32-bit color is the same but adds a transparency channel), you can use 15- or 16-bit color. It's the same thing, but instead of 8 bits per channel you get 5 each (15 bit) or 5-6-5 (16 bit; green gets the extra bit because our eyes are more sensitive to shades of green). That fits into a short integer.
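The 5-6-5 packing mentioned above can be sketched like this (Python; the helper names are made up for the example):

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit R, G, B into one 16-bit 5-6-5 value (green keeps an extra bit)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Unpack back to 8-bit components (the low bits are lost to quantization)."""
    return ((v >> 11) << 3, ((v >> 5) & 0x3F) << 2, (v & 0x1F) << 3)

v = pack_rgb565(255, 128, 64)
print(hex(v), unpack_rgb565(v))  # -> 0xfc08 (248, 128, 64)
```

Note how 255 comes back as 248: the three dropped red bits are exactly the precision sacrificed to fit the pixel into a short integer.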
It depends on the purposes of your datasets.
For example, you can assign a color to each range of values (0-100 - red, 100-200 - green, 200-300 - blue) by changing the brightness within the range.
Horst,
The example you gave does not create gradients. Instead, they use N preset colors from an array and pick the next color as umbr points out. Something like this:
a = { "#ffffff", "#ff00ff", "#ff0000", "#888888", ... };
c = a[pos / 1000];
where pos is your value from 1 to 30,000 and c is the color you want to use. (You'd need to define the index more robustly than pos / 1000 for this to work right in all situations.)
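A Python version of the same palette lookup, with the index clamped so the endpoints behave (the palette colors are placeholders):

```python
palette = ["#ffffff", "#ff00ff", "#ff0000", "#888888"]  # placeholder colors

def color_for(pos, lo=1, hi=30000):
    """Map pos in [lo, hi] to one of len(palette) preset colors."""
    t = (pos - lo) / (hi - lo)                          # normalize to [0, 1]
    return palette[min(int(t * len(palette)), len(palette) - 1)]

print(color_for(1), color_for(30000))  # -> #ffffff #888888
```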
If you want a gradient effect, you can just use the simple math shown on the other answer you pointed out, although if you want to do that with any number of points, it has to be done with triangles. You'll have a lot of work to determine the triangles and properly define every point.
In JavaScript, it will be dog slow. (with OpenGL it would be instantaneous and you would not even have to compute the gradients, and that would be "faster than realtime.")
What you need is a transfer function.
Given a float number, a transfer function can generate a color.
see this:
http://http.developer.nvidia.com/GPUGems/gpugems_ch39.html
and this:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-102-transfer-functions.html
The second article says that the isovalue is between [0,255], but it doesn't have to be in that range.
Normally, we scale any float number to the [0,1] range and apply the transfer function to get the color value.
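A minimal transfer function can be a piecewise-linear interpolation between a few control colors (a sketch; the control points here are made up):

```python
def transfer(t, stops):
    """Map t in [0, 1] to a color by linear interpolation between `stops`,
    a sorted list of (position, (r, g, b)) control points."""
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if p0 <= t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))
    return stops[-1][1]

# Blue -> white -> red, a common "cold to hot" scheme:
stops = [(0.0, (0, 0, 255)), (0.5, (255, 255, 255)), (1.0, (255, 0, 0))]
print(transfer(0.25, stops))  # -> (128, 128, 255)
```

Richer transfer functions (as in the linked articles) just use more control points, or add an opacity component per stop.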
