In a program I am working on, I am trying to classify colors as red, orange, yellow, green, blue, or white based on their RGB values. I classify them by comparing the given RGB values to a constant "ideal" value for each color and finding the minimum Euclidean distance in three-dimensional space. However, I am having trouble when the color I am analyzing comes from a dark image, as the current program has difficulty differentiating between orange, yellow, and red in dark images. How should I fix or work around this issue?
The environment is always going to cause problems: if the only light source is red, for example, you won't be able to tell gray, green, and blue apart.
If your situation is not that complex, you could try to adjust the image; almost all image-processing software and libraries have functions for that. Probably the best solution would be to put calibration objects in the scene, such as a white ball whose true color you know, which would help you adjust the image and make the color identification easier.
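If adjusting the whole image is not an option, one common workaround is to normalize brightness before measuring Euclidean distance, so that dark oranges and reds keep their channel ratios. A minimal sketch, with illustrative (not calibrated) reference colors:

```javascript
// Illustrative reference colors; tune these for your application.
const REFERENCE = {
  red:    [255, 0, 0],
  orange: [255, 165, 0],
  yellow: [255, 255, 0],
  green:  [0, 255, 0],
  blue:   [0, 0, 255],
  white:  [255, 255, 255],
};

// Scale a color so its brightest channel becomes 255.
// Near-black pixels are left alone: there is too little signal to normalize.
function normalizeBrightness([r, g, b]) {
  const max = Math.max(r, g, b);
  if (max < 10) return [r, g, b];
  const k = 255 / max;
  return [r * k, g * k, b * k];
}

function distanceSq([r1, g1, b1], [r2, g2, b2]) {
  return (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2;
}

// Nearest reference color after brightness normalization.
function classify(rgb) {
  const n = normalizeBrightness(rgb);
  let best = null, bestD = Infinity;
  for (const [name, ref] of Object.entries(REFERENCE)) {
    const d = distanceSq(n, ref);
    if (d < bestD) { bestD = d; best = name; }
  }
  return best;
}
```

With this, a dark orange like `[80, 52, 0]` normalizes to roughly `[255, 166, 0]` and lands on orange instead of drifting toward red or black.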
Let's suppose we have a regular RGB image. We want to approximate the color of each individual pixel of our source image with a color from a small set of colors.
For example, all tones of red should be converted to that specific red out of my set of colors, same goes for green, blue, etc.
Is there any elegant way/algorithm to achieve this?
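The usual approach is exactly the nearest-color search described in the previous question: for each pixel, pick the palette entry with the smallest Euclidean distance. A minimal sketch (palette and pixel values are illustrative):

```javascript
// Illustrative three-color palette.
const palette = [
  [255, 0, 0],   // red
  [0, 255, 0],   // green
  [0, 0, 255],   // blue
];

// Return the palette entry closest to the pixel in RGB space.
function nearest(pixel, palette) {
  let best = palette[0], bestD = Infinity;
  for (const ref of palette) {
    const d = pixel.reduce((s, c, i) => s + (c - ref[i]) ** 2, 0);
    if (d < bestD) { bestD = d; best = ref; }
  }
  return best;
}

// Map a flat array of [r, g, b] pixels onto the palette.
function quantize(pixels, palette) {
  return pixels.map(p => nearest(p, palette));
}
```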
I am developing an application for the treatment of children. It must show different images to the left and right eyes. I decided to use cheap red-cyan glasses to separate the eyes' fields of view: the first eye will see only red images, the second only cyan.
The problem is that the colors on a monitor are not truly red and cyan, and the glasses are not ideal either. I need to implement a calibration procedure that searches for the best red and cyan colors for the current monitor and glasses. That is, I need to adjust the white (the background color), the red, and the cyan to more suitable colors so that the red and cyan are each visible to only one eye.
Does anybody know any algorithms for calibrating anaglyph colors? I think I need to implement a special UI for calibrating them. I am developing the application for iOS and Android.
You seem to be missing some background knowledge.
Monitors
Most displays nowadays are LCDs, which emit three basic wavelength bands (R, G, B). Red and green have fairly sharp spectra, but blue is relatively wide. LCDs also emit cyan and orange wavelength bands (not as sharp as R and G, but sharper than B).
I suspect these two come from the back-light (and they are present on all devices I measured, even phones).
Anaglyph glasses
these are band filters, so they block all wavelengths outside their range, up to a point
spectra
This is what it looks like (white on my LCD):
and how I see/interpret it:
The bands are approximate (I only have a homemade spectroscope with a nonlinear scale, not a spectrograph), and I am unable to take a clear image of the spectra (I only have automatic cameras). The back-light residue is blocked completely by my glasses' red filter; the cyan filter passes it, but lowers its brightness to the point that it is invisible to me at my current LCD brightness settings.
calibration
The only wavelengths you can use are R, G, and B (no matter what color you display).
Color is not the same as wavelength; color is subjective human perception, not a physical quantity!
So the exact color is irrelevant: just filter the image for one eye by keeping only the R channel, filter the other by keeping only the G and B channels (cyan), and merge them together.
The only thing to calibrate is brightness. The filters in the glasses should have the same blocking properties, but cheap ones usually do not. That means one eye receives a different brightness than the other, which can cause discomfort, so multiply the pixels by a brightness factor (a separate value for each eye). This is the only thing to calibrate: the lower the quality of the filters, the darker the image you need.
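A minimal sketch of the merge and brightness calibration described above, assuming images are flat arrays of [r, g, b] pixels and that leftGain/rightGain are the per-eye multipliers your calibration UI would produce (both names are hypothetical):

```javascript
// Build one anaglyph pixel: red channel from the left image,
// green and blue (cyan) from the right image, each scaled by its eye's gain.
function anaglyphPixel(leftPx, rightPx, leftGain, rightGain) {
  const clamp = v => Math.max(0, Math.min(255, Math.round(v)));
  return [
    clamp(leftPx[0] * leftGain),    // red  -> left eye
    clamp(rightPx[1] * rightGain),  // green -> right eye
    clamp(rightPx[2] * rightGain),  // blue  -> right eye
  ];
}

// Merge two equally sized images (flat pixel arrays) into one anaglyph.
function mergeAnaglyph(leftImg, rightImg, leftGain = 1, rightGain = 1) {
  return leftImg.map((px, i) =>
    anaglyphPixel(px, rightImg[i], leftGain, rightGain));
}
```

During calibration you would only vary `leftGain` and `rightGain` until both eyes perceive equal brightness.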
anaglyph colors
You can use B/W images; these are the most comfortable to look at. You can use color images too, but for some colors (like blue water) this is uncomfortable, because one eye sees the area and the other does not. The brain fills in the rest, but the feeling becomes unpleasant over time, similar to hearing off-key music.
This can be helped by adding a whitish component to such colors, but that sacrifices the color correctness of the image; it depends on what you need to do ...
anaglyph eye distance
I am from central Europe, so all the data below are from that region!
The average distance between human eyes' viewing axes is 6.5 cm.
The male horizontal FOV angle is 90 degrees (including peripheral vision).
The male horizontal FOV angle is 60 degrees (excluding peripheral vision).
So if your anaglyph render uses real-world sizes, set the FOV and camera distance accordingly. If not, you should also add the horizontal camera distance to the calibration, because depth perception is also affected by:
the distance of the viewer from the monitor
their subjective depth perception
the scale at which the objects are rendered (and the monitor/image size)
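As a sketch, the figures above translate into stereo camera parameters like this; `worldUnitsPerCm` is an assumed scene-scale parameter you must supply for your own renderer:

```javascript
// Figures from the text above (central-European averages).
const EYE_DISTANCE_CM = 6.5;  // distance between the eyes' viewing axes
const FOV_DEGREES = 60;       // horizontal FOV excluding peripheral vision

// Horizontal camera separation for a scene measured in arbitrary world units.
function cameraSeparation(worldUnitsPerCm) {
  return EYE_DISTANCE_CM * worldUnitsPerCm;
}
```

If the render is not true to real-world scale, this separation becomes one more value to expose in the calibration UI.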
I recently asked a question about using MATLAB to reduce the number of colors in an image. However, when I attempted this, I was only able to get color approximations that matched each pixel to the nearest color in the color map.
For example, using a color map with only three colors [red, green, blue], it would scan each pixel and then map it to either red, green, or blue. However, this process did not vary the RGB densities to create realistic-looking color.
I'm curious whether there is any built-in function that would use these three colors and vary their density to achieve the average color of a certain "pixel field".
I realize this would lose resolution, but I'm essentially trying to make realistic-looking images using only three colors, by varying the amounts of RGB within a certain region.
You are looking for the function rgb2ind and its 'dither' option.
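The 'dither' option works by error diffusion: each pixel's quantization error is pushed onto its neighbors, so the palette colors mix spatially into intermediate tones. As a rough illustration of the idea (a Floyd-Steinberg sketch over a flat RGB buffer, not MATLAB's exact algorithm; the palette is illustrative):

```javascript
// Floyd-Steinberg dithering of a flat array of [r, g, b] pixels (row-major).
function ditherFS(pixels, width, height, palette) {
  const buf = pixels.map(p => p.slice()); // float working copy
  const out = [];
  const nearest = px => {
    let best = palette[0], bestD = Infinity;
    for (const ref of palette) {
      const d = px.reduce((s, c, i) => s + (c - ref[i]) ** 2, 0);
      if (d < bestD) { bestD = d; best = ref; }
    }
    return best;
  };
  // Push a weighted share of the quantization error onto a neighbor.
  const spread = (x, y, err, weight) => {
    if (x < 0 || x >= width || y >= height) return;
    const q = buf[y * width + x];
    for (let i = 0; i < 3; i++) q[i] += err[i] * weight;
  };
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const old = buf[y * width + x];
      const chosen = nearest(old);
      const err = old.map((c, i) => c - chosen[i]);
      out.push(chosen);
      spread(x + 1, y,     err, 7 / 16);
      spread(x - 1, y + 1, err, 3 / 16);
      spread(x,     y + 1, err, 5 / 16);
      spread(x + 1, y + 1, err, 1 / 16);
    }
  }
  return out;
}
```

Over a larger region, the diffused errors make the three palette colors average out to the region's original color, which is exactly the "pixel field" effect asked about.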
I want to color a sprite/icon that has a transparent background and shadows. I tried shifting the hue of all pixels, but it does not look natural, and I have problems with the black and white colors in the image: if the image tends toward black, shifting the hue does not change the black to red or any other color, even when shifting by 360 degrees.
I also tried additive and subtractive coloring, but even then the black and white either get tinted or disappear entirely.
Maybe I should overlay an image on the icon to achieve the coloring effect?
Any suggestions on how to proceed?
I'm lost.
You've been asking a lot about this hue shifting thing, so I figured I'd try to work out an example: http://jsfiddle.net/EMujN/3/
Here's another that uses an actual icon: http://jsfiddle.net/EMujN/4/
There's a lot in there, including a huge data URL which you can ignore unless you want to replace it. Here's the relevant part, where we modify HSL:
// SHIFT H HERE
var hMod = 0.3;
hsl[0] = (hsl[0] + hMod) % 1;
// MODIFY S HERE
var sMod = 0.6;
hsl[1] = Math.max(0, Math.min(1, hsl[1] + sMod));
// MODIFY L HERE
var lMod = 0;
hsl[2] = Math.max(0, Math.min(1, hsl[2] + lMod));
I've converted to HSL because it's a lot easier to accomplish what you want in that color space than RGB.
Without getting any more complex, you have three variables to tune: how much to add to Hue, Saturation, or Lightness. I have the lightness variable set to 0, because any higher and you will see some nasty JPEG artifacts (if you can find a decent .png, that would be better, but I went with the first CC night image I could find).
I think the hue shift (yellow to green) looks pretty good, though, and I have maxed out the saturation, so even a normally white light appears bright purple. Like I said in my comment, you will need to increase the lightness and saturation if you want to colorize patches of black and white. Hopefully you can figure out what you need from this example.
image used: http://commons.wikimedia.org/wiki/File:Amman_(Jordan)_at_night.jpg
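For reference, here is a self-contained version of the same manipulation, using the standard RGB-to-HSL conversion formulas; the three shift amounts are the same tunable variables as in the fiddle:

```javascript
// Convert 0-255 RGB to [h, s, l], each in 0..1.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  const l = (max + min) / 2;
  if (max === min) return [0, 0, l]; // achromatic: no hue, no saturation
  const d = max - min;
  const s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
  let h;
  if (max === r) h = ((g - b) / d + (g < b ? 6 : 0)) / 6;
  else if (max === g) h = ((b - r) / d + 2) / 6;
  else h = ((r - g) / d + 4) / 6;
  return [h, s, l];
}

// Convert [h, s, l] in 0..1 back to 0-255 RGB.
function hslToRgb(h, s, l) {
  if (s === 0) { const v = Math.round(l * 255); return [v, v, v]; }
  const hue2rgb = (p, q, t) => {
    if (t < 0) t += 1;
    if (t > 1) t -= 1;
    if (t < 1 / 6) return p + (q - p) * 6 * t;
    if (t < 1 / 2) return q;
    if (t < 2 / 3) return p + (q - p) * (2 / 3 - t) * 6;
    return p;
  };
  const q = l < 0.5 ? l * (1 + s) : l + s - l * s;
  const p = 2 * l - q;
  return [hue2rgb(p, q, h + 1 / 3), hue2rgb(p, q, h), hue2rgb(p, q, h - 1 / 3)]
    .map(v => Math.round(v * 255));
}

// Same tuning variables as the fiddle: wrap hue, clamp saturation/lightness.
function shiftHue([r, g, b], hMod, sMod, lMod) {
  const [h, s, l] = rgbToHsl(r, g, b);
  return hslToRgb(
    (h + hMod) % 1,
    Math.max(0, Math.min(1, s + sMod)),
    Math.max(0, Math.min(1, l + lMod))
  );
}
```

Note that pure black has zero saturation, so a pure hue shift leaves it unchanged, which is exactly the black-pixel problem described in the question above.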
I found a better solution myself, which solves the problem with black and white.
The solution has multiple steps. I will define the steps here and provide some working code later:
Get the image
Calculate the predominant color, either by averaging the image pixels or simply by providing an input RGB value for the predominant color your eye picks out.
If the predominant color tends toward black or white (or both), the image has to be recolored with an additive or subtractive method: additive if black, subtractive if white. Basically, all RGB pixels should be attenuated or sharpened until the predominant color is red. I think red is the best target because red comes first on the hue scale, which helps when we hue-shift the pixels.
To have a single algorithm that works with different kinds of images (not only those with a black or white predominant color), images whose predominant color is neither black nor white should ideally be pre-hue-shifted manually, using Photoshop or another algorithm, so that their new predominant color is red as well.
After that, the hue-shift coloring is straightforward. Since the predominant color is red for all images, we shift the hue values by the difference between the HSV value of the desired color and the HSV value of the predominant color (red).
Game over. We have a fairly universal way to color different images with hue shifting in a natural way.
Another question would be how to automatically pre-shift input images whose predominant color is neither black nor white.
But that is another question.
Why can this coloring method be considered natural? Consider one thing: generally, the non-dominant black or white areas are part of the shadows and highlights that give a 3D feel to the image. On the other hand, if my shoes are 100% black and I tint them with some color, they are no longer black. Coloring a dominant black cannot be achieved simply by shifting the HSV parameters; the other steps described above must be performed as well.
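The hue-retargeting steps above (estimate the predominant color, then shift every hue by the difference to the target) can be sketched as follows, assuming pixels are [r, g, b] arrays:

```javascript
// Hue of an RGB color, in 0..1 (red = 0).
function hue([r, g, b]) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  if (d === 0) return 0; // achromatic: treat as red, per the pre-shift step
  if (max === r) return (((g - b) / d + 6) % 6) / 6;
  if (max === g) return ((b - r) / d + 2) / 6;
  return ((r - g) / d + 4) / 6;
}

// Step: estimate the predominant color by averaging the pixels.
function predominant(pixels) {
  const sum = [0, 0, 0];
  for (const p of pixels) for (let i = 0; i < 3; i++) sum[i] += p[i];
  return sum.map(c => c / pixels.length);
}

// Step: the hue delta to apply to every pixel so the predominant hue
// lands on the desired target hue.
function hueShiftFor(pixels, targetHue) {
  return (targetHue - hue(predominant(pixels)) + 1) % 1;
}
```

With the image pre-shifted so its predominant color is red (hue 0), the delta returned here is simply the target hue itself, which is what makes red a convenient common starting point.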
Does it mean controlling how an image combines with a color overlay applied to it, depending on the color space used (RGB, RGBA, CMYK, Lab, Grayscale, HSL, HSLA)? Or does it mean changing the color layer used in combination with other layers to form the final image? (If so, what could be changed, and in what regard?)
RGB is an abbreviation for three color channels (red, green, and blue), which correspond to specific frequencies of light; each channel holds an intensity value. This model is commonly taught in school and is how most people understand colors and mixing them.
A different way to represent colors is HSL, which stands for Hue, Saturation, and Lightness. Here the Hue is the frequency of the color, the Saturation is something like the contrast level, and the Lightness runs from black (0) through the pure color to white (1). HSLA (the A stands for Alpha, or transparency) is actually a much more programmer-centric way of working with color (although most programmers seem to learn the RGB hex values for colors). There is a great website called Mothereffing HSL which lets you play with HSL values to better understand them.
CMYK is for pigments (which mix differently than light) and is found on printers. It is the same basic idea as RGB, just with Cyan, Magenta, Yellow, and Black. Because light and pigments don't mix the same way, a lot of work goes into converting one color system to another (so you can see on your screen what will eventually come out of your printer). These systems are not perfectly aligned, however, so the goal is to get acceptably close.
The full range of colors a model can represent, presented on a graph, is called its color space.
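As an example of why the models are related but not interchangeable, here is the naive RGB-to-CMYK conversion. It ignores ICC profiles and real ink behavior, so it only illustrates the relationship, not print-accurate conversion:

```javascript
// Naive RGB (0-255) to CMYK (each 0..1): K is the shared black component,
// and C/M/Y are whatever each channel is missing after K is removed.
function rgbToCmyk(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const k = 1 - Math.max(r, g, b);
  if (k === 1) return [0, 0, 0, 1]; // pure black: only the K plate
  return [
    (1 - r - k) / (1 - k),
    (1 - g - k) / (1 - k),
    (1 - b - k) / (1 - k),
    k,
  ];
}
```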