How can I change the overall hue of an image, akin to Photoshop's hue adjustment?

Within Photoshop, I can adjust the overall hue of an image so that the black/white/grey levels remain unaffected while the colour hues all shift correspondingly, e.g. to a hue setting of around -100 (picture of the scale attached). Is there a MATLAB or PsychToolbox function that could accomplish this?
For example, a function like newImage = hueadjust(Image, -100), or something similarly simple.
I have many images, and I'm sure there's a way to batch them all through Photoshop so the same hue shift is applied to each, but I'm wondering if this can be accomplished relatively easily through code, given my lack of coding experience.

Yeah, that's pretty simple. Convert the image to HSV, take the hue channel and add or subtract the desired shift modulo 360 degrees, then convert the image back. Note that manipulating the hue does not affect the black/grey/white colours in the image, which is exactly what you want: those achromatic colours are determined by the saturation and value components of the HSV colour space, not by the hue. This per-pixel addition or subtraction of the hue component is what Photoshop (and similar graphical editors such as GIMP) performs under the hood.
Use the function rgb2hsv and its inverse counterpart hsv2rgb. Note, however, that each converted channel has a dynamic range of [0,1], whereas hue is usually expressed between 0 and 360 degrees. Therefore, multiply the hue channel by 360, do the modulo arithmetic, and then divide by 360 when you're done.
If you wanted to create a function called hueadjust as you have in your post, simply do this:
function newImage = hueadjust(img, hAdjust)
    hsv = rgb2hsv(img);                            % convert RGB image to HSV
    hue = 360*hsv(:,:,1);                          % hue channel scaled to degrees
    hsv(:,:,1) = (mod(hue + hAdjust, 360)) / 360;  % shift, wrap around 360, back to [0,1]
    newImage = hsv2rgb(hsv);                       % convert back to RGB
end
The first line of code converts the image into HSV. The next line copies the hue channel into a variable called hue and multiplies each value by 360 to get degrees. We then write back into the hue channel of the original image: each hue value is shifted by hAdjust (the desired shift in degrees), wrapped back around past 360 degrees via mod, and divided by 360 to return to the [0,1] dynamic range that hsv2rgb expects. The modified image is then converted back into RGB and returned as the output.
Take note that the output image will be of type double due to the call to hsv2rgb. The dynamic range of each channel is still within [0,1], so you can write these images to file without a problem. If for some reason you want to go back to the original type of the image, you can use functions such as im2uint8 from the Image Processing Toolbox to convert the output back into unsigned 8-bit integer form, which is the type most images in practice use.
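For example, a quick round trip might look like this (my own illustration: peppers.png is simply a sample image that ships with MATLAB, and shifted.png is a made-up output filename):
newImage = hueadjust(imread('peppers.png'), -45);   % hueadjust from above; output is double in [0,1]
imwrite(im2uint8(newImage), 'shifted.png');         % back to uint8 and written to disk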
Here's a quick example showing how this function works. First, for completeness, let's take a look at what the hue circle looks like:
Source: Lode's Computer Graphics Tutorial
We can see that red has a hue of around 0 degrees while magenta has a hue of around 315 degrees, or equivalently -45 degrees. Let's also use the onion.png image that is included with the Image Processing Toolbox in MATLAB. It looks like this:
Let's read this image in first. It can be read directly from the Internet using the URL where it is hosted:
im = imread('http://i.stack.imgur.com/BULoQ.png');
The nice thing about imread is that you can read images directly from the Internet via a URL. I'd like to shift the hues so that the red hues become magenta, which means shifting all of the hues in the image by -45 degrees. So run the function like so:
out = hueadjust(im, -45);
Showing the image with imshow(out), we get:
Note that the rest of the hues make sense when transformed. The greens are a rather dull green, hovering at around a 60-degree hue or so; subtracting 45 degrees pushes them towards a red hue, and you can see that in the resulting image. The same can be said about the yellow colours, which hover at around a 30-40 degree hue. Also note that the white colours are affected slightly, most likely because the colours in those white areas are not purely white (i.e. their saturation is not exactly 0), which is to be expected.
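To reproduce the -100 setting from the question (assuming Photoshop's Hue slider value corresponds directly to degrees), you would simply call:
out = hueadjust(im, -100);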

Related

If we shift the hue by 2*pi/3, how will the R, G, B histograms change?

How can I test this? I have access to Photoshop, so is there a way to test this and find the answer?
According to the HSV-to-RGB conversion formula (the relevant part of it), shifting the hue by 120° cycles the channel histograms:
+120° : R --> G --> B --> R
-120° : R --> B --> G --> R
To test this in GIMP, open the image histogram in Colors \ Info \ Histogram. Choose the Red, Green or Blue channel to see its histogram, then open the Colors \ Hue-Saturation dialog, adjust Hue by +/- 120 degrees, and watch the live effect in the Histogram window.
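If you'd rather verify this numerically than in GIMP, here is a minimal MATLAB sketch (my own illustration, not part of either answer) that shifts the hue of three pure-colour pixels by +120° and shows the channels cycling:
img = zeros(1, 3, 3);
img(1,1,1) = 1;                              % pure red pixel
img(1,2,2) = 1;                              % pure green pixel
img(1,3,3) = 1;                              % pure blue pixel
hsv = rgb2hsv(img);
hsv(:,:,1) = mod(hsv(:,:,1) + 120/360, 1);   % +120 degrees on the [0,1] hue scale
shifted = hsv2rgb(hsv);
squeeze(shifted)                             % red -> green, green -> blue, blue -> red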
I do not think there is a generic answer to this, as the result depends on the image colors present, not just on the R, G, B histograms. You need to:
1. compute histograms
2. convert RGB to HSV
3. add hue and clamp it to the angular interval
4. convert back to RGB
5. compute histograms
I do not use Photoshop, but I think #1, #2, #4, #5 should be present there. #3 should be there too (in some filter that manipulates brightness, gamma, etc.), but it is hard to say whether adding to the hue will be clamped by simply limiting the angle, or handled as a periodic value. In the first case you need to correct the results by:
1. compute histograms
2. convert to HSV
3. clone result A to a second image B
4. add A.hue += 2*pi/3 and B.hue -= 4*pi/3
   A then holds the un-clamped colors and B holds the colors that were clamped in A, shifted to their correct hue position.
5. in A, recolor all pixels with hue == pi2 with some specified color
   Here pi2 stands for 2*pi; it should be whatever value your tool clamps out-of-range hues to, so it can be zero, 2*pi, or one step less than 2*pi. This allows us to ignore the clamped values later.
6. in B, recolor all pixels with hue == 0 with some specified color
7. convert A, B to RGB
8. compute histograms, ignoring the specified color
9. merge the A and B histograms (simply add the graph values together)
And now you can compare the histograms to evaluate the change on some sample images.
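For the simple wrap-around case (no clamping), a rough MATLAB sketch of steps 1-5 above might look like the following (my own illustration, not the answerer's code; imhist is from the Image Processing Toolbox and peppers.png is just a sample image shipped with MATLAB):
img = im2double(imread('peppers.png'));
histBefore = [imhist(img(:,:,1)), imhist(img(:,:,2)), imhist(img(:,:,3))];
hsv = rgb2hsv(img);
hsv(:,:,1) = mod(hsv(:,:,1) + 1/3, 1);       % +2*pi/3 expressed on the [0,1] hue scale
shifted = hsv2rgb(hsv);
histAfter = [imhist(shifted(:,:,1)), imhist(shifted(:,:,2)), imhist(shifted(:,:,3))];
% compare histBefore and histAfter column by column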
Anyway, you can do all this in any programming language. Most of the operations needed are present in most image processing and computer vision libraries such as OpenCV, and adding to the hue is just two nested for loops with an addition and a single if statement, like:
// pi2 = 2*pi; ys,xs = image height and width
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
    {
    pixel[y][x].h+=pi2/3.0;     // shift hue by 2*pi/3 radians
    if (pixel[y][x].h>=pi2)     // wrap around the full circle
     pixel[y][x].h-=pi2;
    }
Of course, most HSV pixel formats I have used do not use floating-point values, so the hue could be represented for example by an 8-bit unsigned integer, in which case the code would look like:
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
  pixel[y][x].h=(pixel[y][x].h+(256/3))&255;   // +85 is ~120 degrees on a 0..255 hue scale; &255 wraps
If you need to implement the RGB/HSV conversions yourself, look here:
RGB value base color name
I think this might also interest you:
HSV histogram
Looking at it from a mathematical point of view: 2*pi radians (with pi ≈ 3.14) is the full angle of a circle.
Divided by 3, that means 2*pi/3 is a third of a circle, or simply 120°.
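As a quick sanity check of the arithmetic (my own addition):
(2*pi/3) * (180/pi)   % ans = 120 degrees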

Matlab imshow doesn't plot correctly but imshowpair does

I have imported an image, converted it to double precision, and performed some filtering on it.
When I plot the result with imshow, the double image is too dark. But when I use imshowpair to plot the original and the final image, both images are correctly displayed.
I have tried to use uint8, im2uint8, multiply by 255 and then use those functions, but the only way to obtain the correct image is using imshowpair.
What can I do?
It sounds like the majority of your intensities / colour data fall outside the dynamic range that imshow accepts when showing double data.
I also see that you're using im2double, but im2double simply converts the image to double, and if the image is already double nothing happens. The problem is probably the way you are filtering the images. Are you doing some sort of edge detection? The reason you're getting dark images is probably that the majority of your intensities are negative or hovering around 0. When displaying double-type images, imshow assumes that the dynamic range of intensities is [0,1].
Therefore, one way to resolve your problem is to do:
imshow(im,[]);
This rescales the display range so that the smallest value in the image is mapped to 0 and the largest to 1.
If you'd like a more permanent solution, consider creating a new output variable that does this for you:
out = (im - min(im(:))) / (max(im(:)) - min(im(:)));   % rescale intensities to [0,1]
This performs the same rescaling that imshow does at display time when called with the empty range. You can now just do:
imshow(out);
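For what it's worth, mat2gray from the Image Processing Toolbox performs the same min/max rescaling in a single call (my suggestion, not part of the original answer):
out = mat2gray(im);   % equivalent to the manual rescaling above
imshow(out);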

Converting to 8-bit image causes white spots where black was. Why is this?

Img is a NumPy array with dtype=float64. When I run this code:
Img2 = np.array(Img, np.uint8)
the background of my images turns white. How can I avoid this and still get an 8-bit image?
Edit:
Sure, I can give more info. The single image is compiled from a stack of 400 images. They are each coming from an .avi video file, and each image is converted into a NumPy array like this:
gray_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
A more complicated operation is performed on this whole stack, but does not involve creating new images. It's simply performing calculations on each 1D array to yield a single pixel.
The interpolation is most likely linear (the default when plotting images with matplotlib). The images were saved as .PNGs.
You are probably seeing overflow. If you cast 257 to np.uint8, you will get 1. According to a quick search, .avi files contain images with a color depth of 15-24 bits. When you cast this down to np.uint8, you will see white regions getting darkened and (if a normalization takes place somewhere) dark regions becoming white (-5 -> 251). For the regions that become bright, you could check whether you have negative pixel values in the original image Img.
The docs say that you sometimes have to do some scaling to get a proper cast, and that it is better to use a higher depth whenever possible to avoid artefacts.
The solution seems to be either working at a higher depth, i.e. casting to np.uint16 or np.uint32, or scaling the pixel values before reducing the depth, i.e. with Img2 already being a NumPy array:
# make sure that values are between 0 and 255, i.e. within 8-bit range
Img2 *= 255 / Img2.max()
# cast to 8 bit
Img2 = np.array(Img2, np.uint8)

Blue patch appear after subtracting image background in MATLAB

I have tried image subtraction in MATLAB, but realised that there is a big blue patch on the image. Please see the images below for more details.
Another image showing approximately how far the blue patch extends.
The picture on the left in the top 2 images shows the picture after subtraction. You can ignore the picture on the right of the top 2 images. This is one of the original images:
and this is the background I am subtracting.
The purpose is to get the foreground image and blob it, followed by counting the number of blobs to see how many books are stacked vertically on their sides. I am experimenting with how the blob method works in MATLAB.
Does anybody have any idea? Below is the code showing how I carry out my background subtraction as well as display the result. Thanks.
[filename, user_canceled] = imgetfile;
fullFileName=filename;
rgbImage = imread(fullFileName);
folder = fullfile('C:\Users\Aaron\Desktop\OPENCV\Book Detection\Sample books');
baseFileName = 'background.jpg';
fullFileName = fullfile(folder, baseFileName);
backgroundImage =imread(fullFileName);
rgbImage = rgbImage - backgroundImage;
% display foreground image after background subtraction
subplot( 1,2,1);
imshow(rgbImage, []);
Because the foreground objects (i.e. the books) are opaque, the background does not affect those pixels at all. In other words, you are subtracting out something that is not there. What you need is a method of detecting which pixels in your image correspond to foreground, and which correspond to background. Unfortunately, solving this problem might be at least as difficult as the problem you set out to solve in the first place.
If you just want a pixel-by-pixel comparison with the background you could try something like this:
thresh = 250;                                      % tune this value for your images
imdiff = sum(((rgbImage-backgroundImage).^2),3);   % squared difference summed over R,G,B
mask = uint8(imdiff > thresh);                     % 1 where the image differs enough from the background
maskedImage = rgbImage.*cat(3,mask,mask,mask);     % zero out everything that matches the background
imshow(maskedImage, []);
You will have to play around with the threshold value until you get the desired masking. The problem you are going to have is that the background is poorly suited for the task. If you had the books in front of a green screen for example, you could probably do a much better job.
You are getting blue patches because you are subtracting two color RGB images. Ideally, in the difference image you expect zeros for the background pixels and non-zeros for the foreground pixels. Since you are in RGB, the foreground pixels may end up having some weird color, which does not really matter; all you care about is that the absolute value of the difference is greater than 0.
By the way, your images are probably uint8, which is unsigned, so negative differences get clipped to zero. You may want to convert them to double using im2double before you do the subtraction.
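Putting both answers together, a hedged sketch of the comparison done in double precision might look like this (my own illustration; the threshold value is a placeholder you would tune):
fg = im2double(rgbImage);                    % convert to double so negative differences survive
bg = im2double(backgroundImage);
imdiff = sum((fg - bg).^2, 3);               % per-pixel squared colour difference
mask = imdiff > 0.05;                        % hypothetical threshold on the [0,1] scale
maskedImage = fg .* repmat(mask, [1 1 3]);   % keep only the foreground pixels
imshow(maskedImage);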

How to reconstruct Bayer to RGB from Canon RAW data?

I'm trying to reconstruct RGB from RAW Bayer data from a Canon DSLR but am having no luck. I've taken a peek at the dcraw.c source, but its lack of comments makes it a bit tough to get through. Anyway, I have debayering working but I need to then take this debayered data and get something that looks correct. My current code does something like this, in order:
Demosaic/debayer
Apply white balance multipliers (I'm using the following ones: 1.0, 2.045, 1.350. These work perfectly in Adobe Camera Raw as 5500K, 0 Tint.)
Multiply the result by the inverse of the camera's color matrix
Multiply the result by an XYZ-to-sRGB matrix from Bruce Lindbloom's site (the D50 sRGB one)
Set white/black point, I am using an input levels control for this
Adjust gamma
Some of what I've read says to apply the white balance and black point correction before the debayer. I've tried, but it's still broken.
Do these steps look correct? I'm trying to determine if the problem is 1.) my sequence of operations, or 2.) the actual math being used.
The first step should be setting the black and saturation points, because you need to apply white balance while keeping track of saturated (clipped) pixels in order to avoid magenta highlights.
And apply white balancing before demosaicing. See here (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm) for how the interaction of white balance and demosaicing can introduce artifacts.
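A very rough MATLAB sketch of that pre-demosaic ordering might look like the following (my own illustration under heavy assumptions: rawBayer, the black/saturation levels, the multipliers and the RGGB pattern are all placeholders for your camera's actual values):
raw   = double(rawBayer);                    % rawBayer: the undemosaiced sensor data
black = 2048;  sat = 15000;                  % hypothetical black / saturation points
raw   = min(max(raw - black, 0), sat - black) / (sat - black);   % set black point, clip highlights, normalize
wb    = [2.0, 1.0, 1.5];                     % hypothetical R,G,B white-balance multipliers
raw(1:2:end,1:2:end) = raw(1:2:end,1:2:end) * wb(1);   % red sites of an RGGB mosaic
raw(2:2:end,2:2:end) = raw(2:2:end,2:2:end) * wb(3);   % blue sites
rgb = demosaic(uint16(min(raw,1) * 65535), 'rggb');    % demosaic from the Image Processing Toolbox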
After the first step (debayer) you should have a proper RGB image with the right colors; the remaining steps are just cosmetics. So I'm guessing there's something wrong at step one.
One problem could be that the Bayer pattern you're using to generate the RGB image is different from the CFA pattern of the camera. Match the sensor alignment in your code to that of the camera!
