What algorithm does Photoshop use to desaturate an image?

I have been trying to figure out what kind of mathematical algorithm programs like Photoshop use when they desaturate each pixel of an image. By desaturate, I mean turning a colored image into a greyscale image while keeping it in the same colorspace: the result is still an RGB image, just one whose color has been desaturated so that it now looks black and white.
Does anyone know what kind of algorithm is used?

Desaturating is pretty simple. The usual formula is a weighted sum like 0.30*R + 0.59*G + 0.11*B (roughly the Rec. 601 luma weights).
Photoshop also has a Black & White conversion tool that (basically) lets you pick the weight for each channel. For example, you can get the effect of a red filter by increasing the percentage of red and decreasing green and blue to match.
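As a rough illustration, here is a minimal sketch in C of that weighted desaturation applied over an 8-bit image buffer; the interleaved-RGB layout and the function name are assumptions for the example, not anything Photoshop-specific:

#include <stddef.h>
#include <stdint.h>

/* Weighted (luma) desaturation over an interleaved 8-bit RGB buffer. */
void desaturate_luma(uint8_t *rgb, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t *p = rgb + 3 * i;
        /* 0.299 R + 0.587 G + 0.114 B, rounded to the nearest integer */
        uint8_t y = (uint8_t)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2] + 0.5f);
        p[0] = p[1] = p[2] = y;   /* grey result, still stored as RGB */
    }
}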

As noted in the comments, the accepted answer is not the formula used by Photoshop. The real Photoshop Desaturate formula is the average of the minimum and maximum RGB components, which is exactly the lightness (L) of HSL.
/* HSL lightness: midpoint of the smallest and largest channel values */
float bw = (fminf(r, fminf(g, b)) + fmaxf(r, fmaxf(g, b))) * 0.5f;
I believe HSL operations in Photoshop are run in min-max-hue space, so this formula is chosen for speed.
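For context, here is a minimal sketch of applying this formula to a whole float RGB image in [0, 1]; the buffer layout and function name are illustrative assumptions:

#include <math.h>
#include <stddef.h>

/* Desaturate by replacing each pixel with its HSL lightness. */
void desaturate_hsl_lightness(float *rgb, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        float *p = rgb + 3 * i;
        float lo = fminf(p[0], fminf(p[1], p[2]));
        float hi = fmaxf(p[0], fmaxf(p[1], p[2]));
        p[0] = p[1] = p[2] = (lo + hi) * 0.5f;   /* (min + max) / 2 */
    }
}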

Related

Calculate contrast of a color (RGB) image

In a black-and-white image, we can easily calculate the contrast as (total no. of white pixels - total no. of black pixels).
How can I calculate this for a color (RGB) image?
Any ideas will be appreciated.
You may use the standard deviation of the grayscale image as a measure of contrast. This is called "RMS contrast". See https://en.wikipedia.org/wiki/Contrast_(vision)#RMS_contrast for details.
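A minimal sketch of RMS contrast in C, assuming an 8-bit single-channel (grayscale) image; the names are illustrative:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* RMS contrast: the standard deviation of the intensity values. */
double rms_contrast(const uint8_t *grey, size_t n)
{
    double mean = 0.0, var = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += grey[i];
    mean /= (double)n;
    for (size_t i = 0; i < n; i++) {
        double d = grey[i] - mean;
        var += d * d;
    }
    return sqrt(var / (double)n);   /* population standard deviation */
}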
Contrast is defined as the difference between the highest and lowest intensity value of the image. So you can easily calculate it from the respective histogram.
Example: If you have a plain white image, the lowest and highest value are both 255, thus the contrast is 255-255=0. If you have an image with only black (0) and white (255) you have a contrast of 255, the highest possible value.
The same method can be applied to color images if you calculate the luminance of each pixel (and thus convert the image to greyscale). There are several different methods to convert images to greyscale; you can choose the one you like.
To make the approach more sophisticated, it is advisable to ignore a certain percentage of pixels to account for outliers (otherwise a single white and a single black pixel would lead to "full contrast", regardless of all other pixels), as sketched below. Another approach would be to take the number of dark and light pixels into account, as described by @Yves Daoust. That approach has the flaw that one has to set an arbitrary threshold to determine which pixels count as dark/light (usually 127).
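A sketch of that outlier-tolerant min-max contrast, assuming an 8-bit grayscale image; the 1% clip fraction is an arbitrary choice:

#include <stddef.h>
#include <stdint.h>

/* Min-max contrast, ignoring the darkest and brightest 1% of pixels. */
int clipped_contrast(const uint8_t *grey, size_t n)
{
    size_t hist[256] = {0};
    for (size_t i = 0; i < n; i++)
        hist[grey[i]]++;

    size_t clip = n / 100;   /* number of pixels to ignore at each end */
    size_t acc = 0;
    int lo = 0, hi = 255;
    for (int v = 0; v < 256; v++) {         /* lowest surviving value */
        acc += hist[v];
        if (acc > clip) { lo = v; break; }
    }
    acc = 0;
    for (int v = 255; v >= 0; v--) {        /* highest surviving value */
        acc += hist[v];
        if (acc > clip) { hi = v; break; }
    }
    return hi - lo;   /* 0 for a flat image, up to 255 */
}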
This does not have a single answer. One idea I can think of is to operate on each of the three channels separately (red, green and blue): compute the histogram of each channel and work on that.
A simple Google search turns up many relevant algorithms. One that I have used is root mean square (RMS) contrast, i.e. the standard deviation of the pixel intensities.

Processing image to get the accent color

I want to get the most used color from an image.
By most used color I don't mean a specific pixel value, I mean the most used color RANGE.
For example, if there is a 2x3-pixel image where two pixels are f00 (red) and the rest are 0b0, 0c0, 0d0, 0e0, 0f0 (shades of green), I should get 0d0 (the average of the greens) and not f00 (red), even though exactly two pixels share that one value.
I want to handle that kind of case.
How am I supposed to do it?
Or where can I find material on how it can be done?
Thanks.
Search for color histograms, e.g. in Matlab.
There are a lot of resources on this topic.
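In the absence of a fuller answer, here is one plausible approach as a sketch in C: bucket pixels into a coarse 16x16x16 histogram so that nearby shades fall together, then average the pixels in the most populated bucket. The bucket granularity is an arbitrary choice and controls how wide a "range" is:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Most-used colour range: coarse 3-D histogram plus in-bucket averaging. */
void accent_color(const uint8_t *rgb, size_t pixel_count, uint8_t out[3])
{
    static size_t   count[16][16][16];
    static uint64_t sum[16][16][16][3];
    memset(count, 0, sizeof count);
    memset(sum, 0, sizeof sum);

    for (size_t i = 0; i < pixel_count; i++) {
        const uint8_t *p = rgb + 3 * i;
        int r = p[0] >> 4, g = p[1] >> 4, b = p[2] >> 4;   /* 16 levels/channel */
        count[r][g][b]++;
        for (int c = 0; c < 3; c++)
            sum[r][g][b][c] += p[c];
    }

    size_t best = 0;
    int br = 0, bg = 0, bb = 0;
    for (int r = 0; r < 16; r++)            /* find the fullest bucket */
        for (int g = 0; g < 16; g++)
            for (int b = 0; b < 16; b++)
                if (count[r][g][b] > best) {
                    best = count[r][g][b];
                    br = r; bg = g; bb = b;
                }

    for (int c = 0; c < 3; c++)             /* average colour of that bucket */
        out[c] = best ? (uint8_t)(sum[br][bg][bb][c] / best) : 0;
}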

how to perform color quantization in matlab or otherwise

I am implementing a machine learning algorithm in Matlab, and was doing some reading on the color range of the human eye. I was informed that the human eye can perceive only about 17,000 colors, whereas my images have about 256^3 colors. What is the best way to quantize my images, in Matlab or otherwise?
Also, as a side question in terms of machine learning, which is better to use: bitmap or JPEG?
JPEG is a lossy format. You should not use it if your input data is not already JPEG; even if it is, you should not re-compress your data, to avoid introducing further artifacts.
A very simple, yet popular method for color quantization is the k-means algorithm. You can find it in Matlab. This is a good starting point. However, a broad range of paradigms and methods exists in recent research.
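To make the idea concrete, here is a minimal k-means colour quantization sketch in C (the fixed iteration count and the naive seeding are simplifications for illustration; Matlab's built-in clustering handles these better):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Quantize an interleaved 8-bit RGB buffer down to k colours with k-means. */
void kmeans_quantize(uint8_t *rgb, size_t n, int k, int iters)
{
    float  *cent  = malloc(3 * k * sizeof *cent);
    float  *sum   = malloc(3 * k * sizeof *sum);
    size_t *cnt   = malloc(k * sizeof *cnt);
    int    *label = malloc(n * sizeof *label);

    for (int j = 0; j < k; j++)            /* seed from evenly spaced pixels */
        for (int c = 0; c < 3; c++)
            cent[3*j + c] = rgb[3 * ((size_t)j * (n - 1) / (k > 1 ? k - 1 : 1)) + c];

    for (int it = 0; it < iters; it++) {
        memset(sum, 0, 3 * k * sizeof *sum);
        memset(cnt, 0, k * sizeof *cnt);
        for (size_t i = 0; i < n; i++) {   /* assign each pixel to its nearest centroid */
            float bestd = 1e30f;
            int bestj = 0;
            for (int j = 0; j < k; j++) {
                float d = 0;
                for (int c = 0; c < 3; c++) {
                    float t = rgb[3*i + c] - cent[3*j + c];
                    d += t * t;
                }
                if (d < bestd) { bestd = d; bestj = j; }
            }
            label[i] = bestj;
            cnt[bestj]++;
            for (int c = 0; c < 3; c++)
                sum[3*bestj + c] += rgb[3*i + c];
        }
        for (int j = 0; j < k; j++)        /* move centroids to cluster means */
            if (cnt[j])
                for (int c = 0; c < 3; c++)
                    cent[3*j + c] = sum[3*j + c] / cnt[j];
    }

    for (size_t i = 0; i < n; i++)         /* snap pixels to their centroids */
        for (int c = 0; c < 3; c++)
            rgb[3*i + c] = (uint8_t)(cent[3*label[i] + c] + 0.5f);

    free(cent); free(sum); free(cnt); free(label);
}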
If your colour quantisation aims to somehow mimic human perception, I recommend moving from the sRGB space to something more bio-inspired like LAB, where L stands for overall luminance, A for the red-green colour pair and B for yellow-blue. Using LAB will allow you a first stab at "illumination invariant" colour quantisation. There are a number of RGB-to-Lab conversion codes on the web (one sketch follows below). Then I would discard the L channel completely unless you also want to encode black and white.
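For reference, a sketch of an 8-bit sRGB to CIE LAB conversion under a D65 white point; the constants are the standard sRGB/LAB ones, and the function names are illustrative:

#include <math.h>
#include <stdint.h>

static float srgb_to_linear(float c)   /* undo the sRGB gamma curve */
{
    return c <= 0.04045f ? c / 12.92f : powf((c + 0.055f) / 1.055f, 2.4f);
}

static float lab_f(float t)            /* CIE LAB transfer function */
{
    return t > 0.008856f ? cbrtf(t) : 7.787f * t + 16.0f / 116.0f;
}

void rgb_to_lab(uint8_t r8, uint8_t g8, uint8_t b8,
                float *L, float *a, float *b)
{
    float r  = srgb_to_linear(r8 / 255.0f);
    float g  = srgb_to_linear(g8 / 255.0f);
    float bl = srgb_to_linear(b8 / 255.0f);

    /* linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point */
    float x = (0.4124f * r + 0.3576f * g + 0.1805f * bl) / 0.95047f;
    float y =  0.2126f * r + 0.7152f * g + 0.0722f * bl;             /* Yn = 1 */
    float z = (0.0193f * r + 0.1192f * g + 0.9505f * bl) / 1.08883f;

    *L = 116.0f * lab_f(y) - 16.0f;
    *a = 500.0f * (lab_f(x) - lab_f(y));
    *b = 200.0f * (lab_f(y) - lab_f(z));
}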
Finally, the 17,000-colours claim is meaningless: people commonly name just seven colours, such as red, purple, pink, orange, yellow, green and blue.
Color Reduction Using K-Means Clustering (Matlab).

Generic algorithm to get and set the brightness of a pixel?

I've been looking around for a simple algorithm to get and set the brightness of a pixel, but can't find anything - only research papers and complex libraries.
So does anyone know the formula to calculate the brightness of a pixel? And which formula should I use to change the brightness?
Edit: to clarify the question. I'm using Qt with C++ but I'm mainly looking for a generic math formula - I will adapt it to the language. I'm talking about RGB pixels of an image in memory. By "brightness", I mean the same as in Photoshop - changing the brightness makes the image more "white" (a brightness value of 1.0 is completely white), decreasing it makes it more "black" (value of 0.0).
Change the color representation to HSV. The V component stands for value and represents the brightness!
Here is the algorithm implemented in PHP.
Here is a description of how to do it in C.
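A generic sketch in C of getting and setting brightness via the HSV value: V is simply the largest channel, and scaling all channels by newV/oldV changes the value while preserving hue and saturation. (Note this keeps the colour rather than washing it toward white at 1.0; function names are illustrative.)

#include <stdint.h>

static uint8_t max3(uint8_t a, uint8_t b, uint8_t c)
{
    uint8_t m = a > b ? a : b;
    return m > c ? m : c;
}

/* HSV value of a pixel, in [0, 1]. */
float get_brightness(uint8_t r, uint8_t g, uint8_t b)
{
    return max3(r, g, b) / 255.0f;
}

/* Rescale a pixel so its HSV value becomes v, keeping hue and saturation. */
void set_brightness(uint8_t *r, uint8_t *g, uint8_t *b, float v)
{
    uint8_t old = max3(*r, *g, *b);
    if (old == 0) {                        /* black carries no hue */
        *r = *g = *b = (uint8_t)(v * 255.0f + 0.5f);
        return;
    }
    float s  = (v * 255.0f) / old;
    float rr = *r * s, gg = *g * s, bb = *b * s;
    *r = (uint8_t)(rr > 255.0f ? 255.0f : rr + 0.5f);
    *g = (uint8_t)(gg > 255.0f ? 255.0f : gg + 0.5f);
    *b = (uint8_t)(bb > 255.0f ? 255.0f : bb + 0.5f);
}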
What do you mean by a pixel?
You can set the brightness of a pixel in an image with '='; you just need to know the memory layout of the image.
Setting a pixel on the screen is a little more complicated.

Get dominant colors from image discarding the background

What is the best (in terms of result, not performance) algorithm to fetch the dominant colors from an image? The algorithm should discard the background of the image.
I know I can build an array of colors and count how often each appears in the image, but I need a way to determine what is the background and what is the foreground, and keep only the foreground in mind while reading the dominant colors.
The problem is very hard, especially for gradient backgrounds or backgrounds with patterns (i.e. not plain).
Isolating the foreground from the background is beyond the scope of this particular answer, but...
I've found that applying a pixelation filter to an image will draw out a really good set of 'average' colours.
(Before/after example images not reproduced here.)
I sometimes use this approach to derive a palette of colours with a particular mood: I first find a photograph with the general tones I'm after, pixelate it, and then sample from the resulting image.
(Thanks to Pietro De Grandi for the image, found on unsplash.com)
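A minimal sketch in C of such a pixelation (block-average) filter; the tile size and buffer layout are illustrative assumptions:

#include <stddef.h>
#include <stdint.h>

/* Replace each block x block tile of an interleaved RGB image by its mean. */
void pixelate(uint8_t *rgb, int w, int h, int block)
{
    for (int by = 0; by < h; by += block)
        for (int bx = 0; bx < w; bx += block) {
            unsigned long sum[3] = {0, 0, 0};
            int count = 0;
            int y1 = by + block < h ? by + block : h;
            int x1 = bx + block < w ? bx + block : w;
            for (int y = by; y < y1; y++)          /* accumulate the tile */
                for (int x = bx; x < x1; x++) {
                    uint8_t *p = rgb + 3 * (y * w + x);
                    for (int c = 0; c < 3; c++)
                        sum[c] += p[c];
                    count++;
                }
            for (int y = by; y < y1; y++)          /* write the mean back */
                for (int x = bx; x < x1; x++) {
                    uint8_t *p = rgb + 3 * (y * w + x);
                    for (int c = 0; c < 3; c++)
                        p[c] = (uint8_t)(sum[c] / count);
                }
        }
}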
The Colour Summarizer is a pretty sweet spot for info on this subject, not to mention its seemingly free XML web API that will produce descriptive colour statistics for an image of your choosing, reporting back the following, formatted with swatches in HTML or as XML:
what is the average color hue, saturation and value in my image?
what is the RGB colour that is most representative of the image?
what do the RGB and HSV histograms look like?
what is the image's human readable colour description (e.g. dark pure blue)?
The purpose of this utility is to generate metadata that summarizes an image's colour characteristics for inclusion in an image database, such as Flickr. In particular this tool is being used to generate metadata for Flickr's Color Fields group.
In my experience, though, this tool still misses the "human-readable" / obvious "main" colour a lot of the time. Silly machines!
I would say this problem is closer to "impossible" than "very hard". The only approach to it that I can think of would be to make the assumption that the background of an image is likely to consist of solid blocks of similar colors, while the foreground is likely to consist of smaller blocks of dissimilar colors.
If this assumption is generally true, then you could scan through the whole image and weight pixels according to how similar or dissimilar they are to neighboring pixels. In other words, if a pixel's neighbors (within some arbitrary radius, perhaps) were all similar colors, you would not incorporate that pixel into the overall estimate. If the neighbors tend to be very different colors, you would weight the pixel heavily, perhaps in proportion to the degree of difference.
This may not work perfectly, but it would definitely at least tend to exclude large swaths of similar colors.
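A sketch in C of that weighting scheme: each pixel is weighted by how much it differs from its 4-neighbours, so flat background regions contribute little to the weighted average colour. The radius-1 neighbourhood and the absolute-difference metric are arbitrary choices for illustration:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Weighted dominant colour: weight = dissimilarity to the 4-neighbours. */
void weighted_dominant(const uint8_t *rgb, int w, int h, uint8_t out[3])
{
    double sum[3] = {0, 0, 0}, wsum = 0;
    for (int y = 1; y < h - 1; y++)         /* skip the 1-pixel border */
        for (int x = 1; x < w - 1; x++) {
            const uint8_t *p = rgb + 3 * (y * w + x);
            int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            double diff = 0;
            for (int k = 0; k < 4; k++) {
                const uint8_t *q = rgb + 3 * ((y + dy[k]) * w + (x + dx[k]));
                for (int c = 0; c < 3; c++)
                    diff += fabs((double)p[c] - (double)q[c]);
            }
            for (int c = 0; c < 3; c++)     /* weight the pixel by its dissimilarity */
                sum[c] += diff * p[c];
            wsum += diff;
        }
    for (int c = 0; c < 3; c++)
        out[c] = wsum > 0 ? (uint8_t)(sum[c] / wsum) : 0;
}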
As far as my knowledge of image processing algorithms extends, there is no certain way to get the "foreground"; it is only possible to find the borders between objects. You'll probably have to make do with an average, or with your proposed array-count method. In the latter, you'll want to give colours with higher saturation a higher "score", as they're much more prominent.
