Is there any theory or image filter for whitening skin color in an image on iPhone?
I have the RGB data on hand, but whatever I change, it doesn't achieve what I need. I alter the RGB as follows:
New R/G/B = (int)roundf((Old R/G/B - 128) * contrast_value + 128 + brightness_value);
where contrast_value ranges from 1 to 1.3 and brightness_value from 0 to 50.
But I found that the result comes out a pale yellow...
Convert your image to YUV colorspace first. This gives you two advantages:
easily detect skin areas using this approach (see my answer to that question)
easily increase the brightness (just increment the Y value)
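As a minimal sketch of the brightness part (in Python with NumPy purely for illustration; the BT.601 coefficients, the y_boost value, and the function name are my own assumptions, not from the answer):

import numpy as np

def brighten_via_yuv(rgb, y_boost=20):
    # rgb: uint8 array of shape (H, W, 3). Convert to YUV, raise Y, convert back.
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # RGB -> YUV (BT.601)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b

    # Brightening is just an increment of the Y channel.
    y = np.clip(y + y_boost, 0, 255)

    # YUV -> RGB
    r2 = y + 1.140 * v
    g2 = y - 0.395 * u - 0.581 * v
    b2 = y + 2.032 * u
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255).astype(np.uint8)

In a real app you would apply the Y increment only to the pixels detected as skin.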
One thing I have seen work is to add a constant value to all three components of the color, like R + 0x33, G + 0x33, B + 0x33.
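A small sketch of that idea (Python for illustration; clamping to 255 is my addition so the channels do not overflow):

def lighten(r, g, b, offset=0x33):
    # Add a constant offset to each channel, clamping at 255.
    return tuple(min(c + offset, 255) for c in (r, g, b))

print(lighten(180, 140, 120))  # -> (231, 191, 171)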
EDITED: I just found another thread that has additional information: Programmatically Lighten a Color
I have just started learning image processing by myself, and I am using MATLAB. I have been familiarizing myself with basic image operations. When I read the image below (resolution 225x300), which is supposed to be an 8-bit color image, I expected the resulting matrix to have 3 dimensions, one each for R, G, and B. A simple web search about 8-bit color images led me to Wikipedia, with the following information:
The simplest form of quantization, frequently called 8-bit truecolor, is to simply assign 3 bits to red, 3 to green and 2 to blue (the human eye is less sensitive to blue light) to create a 3-3-2 palette.
Therefore, I expected the image matrix to have dimensions 225x300x3 with the above distribution of bits between R, G, and B. But when I checked the dimensions of the image matrix, I found it to be 225x300 uint8, which is what one expects from an 8-bit grayscale image. Yet the image is a color image, as seen in any image viewer. So what am I missing, or what am I doing wrong? Is the problem with how I read the image?
Also, it occurred to me that uint8 is the smallest unsigned integer class. So how can color images of 4, 8, 10, etc. bits be represented and created?
My code:
>> I_8bit = imread('input_images\8_bit.png');
>> size(I_8bit)
ans =
225 300
>> class(I_8bit)
ans =
'uint8'
>> I_24bit = imread('input_images\24_bit.png');
>> size(I_24bit)
ans =
225 300 3
>> class(I_24bit)
ans =
'uint8'
(source: https://en.wikipedia.org/wiki/Color_depth#/media/File:8_bit.png)
MATLAB supports several types of images, including:
RGB images, which allow arbitrary colors, stored in terms of R, G, B components. The image is defined by a 3D m×n×3 array.
Indexed images, in which each pixel is defined by an index into a colormap. The image is defined by a 2D m×n array and a c×3 colormap, where c is the number of colors.
It looks like the image you are loading is indexed, so you need the two-output version of imread to get the 2D array and the colormap:
[I_8bit, cmap] = imread('input_images\8_bit.png');
To display the image you need to specify the 2D array and the colormap:
imshow(I_8bit, cmap)
You can see the effect of changing the colormap, for example
cmap_wrong = copper(size(cmap, 1)); % different colormap with the same size
imshow(I_8bit, cmap_wrong)
To convert to an RGB image, use ind2rgb:
I_8bit_RGB = ind2rgb(I_8bit, cmap);
which you can then display with
imshow(I_8bit_RGB)
I am trying to come up with an algorithm to determine the dominant color in an image (either taken from a devices camera or by selecting an existing photo in the photo library). I have written an iOS 8 application in Swift that can grab the RGB value of each pixel in the image, but I don't really know what to do from there.
For pixels that have a distinct dominant color, say RGB(230, 15, 30), it's pretty easy to determine the dominant color. However, I don't really know what to do for pixels that have RGB values where 2 of the 3 values are similar, say RGB(200, 215, 30).
My original thought was to keep 3 counters (one for each color channel) and add each pixel's corresponding RGB values to those counters. At the end I would divide each counter by the total number of pixels, and the max of the 3 values would be the dominant color. However, like I mentioned before, when the results are close to each other I can't say that one color necessarily dominates the others.
Just looking for some thoughts and suggestions.
I ran into this problem a few weeks ago, and having read many posts about it, I found the best method to be the hierarchical quantization presented in this post: http://aishack.in/tutorials/dominant-color/. I have also implemented it in Python: https://github.com/wenmin-wu/dominant-colors-py . You can install it with pip (pip install dominantcolors) and use it as follows:
from dominantcolors import get_image_dominant_colors
dominant_colors = get_image_dominant_colors(image_path='/path/to/image_path',num_colors=3)
An idea:
The first step is to reduce the number of colors, for example with "Color Quantization using K-Means". In the example from the link, the number of colors was reduced from 96K to 64.
The second step is to calculate the ratio of each remaining color and pick the one with the largest share (a sketch of both steps follows below).
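A minimal sketch of those two steps in Python, assuming Pillow and scikit-learn are available (the cluster count of 8 and the function name are arbitrary choices, not from the answer):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def dominant_color(path, n_colors=8):
    # Flatten the image into an (N, 3) array of RGB pixels.
    pixels = np.asarray(Image.open(path).convert('RGB')).reshape(-1, 3)

    # Step 1: quantize the image down to n_colors clusters.
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)

    # Step 2: the cluster holding the largest share of pixels wins.
    counts = np.bincount(km.labels_)
    return km.cluster_centers_[counts.argmax()].astype(int)  # e.g. array([203, 41, 37])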
You can check my hobby project to find the dominant color in a UIImage: https://github.com/ruuki/ColorFinder
What it basically does is create clusters of the image's colors and return the most dominant one in a completion block. You can tweak the threshold parameters within the source code. Hope it helps.
I had a similar task to do; here is my Python code:
import picamera
import picamera.array
import numpy as np
from math import sqrt, atan2, degrees

DEBUG = False  # set to True to print the intermediate values

def get_colour_name(rgb):
    rgb = rgb / 255
    # Opponent-colour axes; the hue is the angle between them.
    alpha = (2 * rgb[0] - rgb[1] - rgb[2]) / 2
    beta = sqrt(3) / 2 * (rgb[1] - rgb[2])
    hue = int(degrees(atan2(beta, alpha)))
    std = np.std(rgb)
    mean = np.mean(rgb)
    if hue < 0:
        hue = hue + 360
    if std < 0.055:          # channels nearly equal -> greyscale
        if mean > 0.85:
            colour = "white"
        elif mean < 0.15:
            colour = "black"
        else:
            colour = "grey"
    elif (hue > 50) and (hue <= 160):
        colour = "green"
    elif (hue > 160) and (hue <= 250):
        colour = "blue"
    else:
        colour = "red"
    if DEBUG:
        print(rgb, hue, std, mean, colour)
    return str(int(hue)) + ": " + colour

def scan_colour():
    with picamera.PiCamera() as camera:
        with picamera.array.PiRGBArray(camera) as stream:
            camera.start_preview()
            camera.resolution = (100, 100)
            for foo in camera.capture_continuous(stream, 'rgb', use_video_port=False, resize=None, splitter_port=0, burst=True):
                stream.truncate()
                stream.seek(0)
                RGBavg = stream.array.mean(axis=0).mean(axis=0)
                colour = get_colour_name(RGBavg)
                print(colour)

scan_colour()
The idea is to build the mean color of all pixels and determine the color name from the hue angle. To handle grayscale answers, I check whether the color lies close to the achromatic axis (the grey line) of the color space.
I have a custom color map cmap that I use to display a matrix X that contains negative values. I display it using
image(X, 'CDataMapping', 'scaled');
colormap(cmap);
axis normal;
It works great, but now I would like to save the matrix as an image with that same color map.
When I try the following:
imwrite(X, cmap, 'test.tif');
I get an all-black image. I understand that TIFF wants the data mapped into the 0-1 (or 0-2^16) range, so I tried:
X = X - min(X(:));
X = (X/max(X(:)))*(2^16);
X = uint16(X);
But when I then tried to save X with the cmap, the file was corrupted and wouldn't open. I tried regenerating the custom color map on the new scale of 0 to 2^16-1, but the image created from that was also unreadable.
Any ideas on how this might be accomplished?
It seems there is no image file format that supports saving 16-bit indexed color images:
TIFF does not support indexed colors, and PNG does not support 16-bit indices.
Try converting to full RGB and saving that as TIFF:
rgb = ind2rgb( uint16(X), cmap ); % with X scaled as in your question
imwrite( rgb, 'myTiffImage.tif' ); % write the RGB image
Here's how I finally resolved it:
In addition to modifying the matrix, I had to normalize the color map into the 0 to 1 range. I also had to multiply by the length of the color map instead of by 2^16.
Here's what that ended up looking like:
cmap = cmap - min(cmap(:));
cmap = cmap/max(cmap(:));
N = size(cmap, 1);
X = X - min(X(:));
X = (X/max(X(:)))*N;
X = uint16(X);
imwrite(X, cmap, 'test.tif');
I hope this saves some of you the trouble it caused me!
I need to generate new dimensions for an image to match the ratio of a given width and height, but without increasing the size of the original.
The concept seems oh so simple yet I can't seem to join the dots.
Also, for code samples the language is PHP.
Update:
This is what I have so far:
http://codepad.org/fTdCNhQf
This is the output I need:
Example Image • (can't embed yet)
Since enlarging is not an option, your only options are cropping and extending.
Try this: let's say your image is W*H, and the desired aspect ratio of width to height is R.
Using the width and the aspect ratio, calculate the target height TH = W/R
Using the height and the aspect ratio, calculate the target width TW = H*R
Calculate area changes aH = ABS(TH-H)*W and aW = ABS(TW-W)*H
If aH is less than aW, use the target height; pad or crop the image vertically based on the sign of TH-H
Otherwise, use the target width; pad or crop the image horizontally based on the sign of TW-W
Here is a quick example:
Target R: 5/6
Image: W=200, H=300;
TH = 200 / (5/6) = 240
TW = 300 * (5/6) = 250
aH = 60*200=12000
aW = 50*300=15000
Resulting action: since aH is less than aW, crop image vertically to 240
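As a minimal sketch of the calculation (in Python rather than the PHP the question asks for, purely to illustrate the arithmetic; the function name is my own):

def fit_to_ratio(w, h, ratio):
    # Return (new_w, new_h) matching the target ratio by changing only one dimension.
    target_h = w / ratio                  # height that fits the ratio at the current width
    target_w = h * ratio                  # width that fits the ratio at the current height
    area_h = abs(target_h - h) * w        # area changed by adjusting the height
    area_w = abs(target_w - w) * h        # area changed by adjusting the width
    if area_h < area_w:
        return w, round(target_h)         # crop/pad vertically
    return round(target_w), h             # crop/pad horizontally

print(fit_to_ratio(200, 300, 5 / 6))      # -> (200, 240), matching the example above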
Are you using something like ImageMagick libraries to generate an image or do you just need to generate the new dimensions based on a known ratio? Also, do you need to discover the ratio from the existing image?
This may be useful then:
http://www.zedwood.com/article/119/php-resize-an-image-with-gd
I know this is possibly a duplicate of this question:
Ruby, Generate a random hex color
My question is slightly different: I need to know how to generate random light hex colors only, not dark ones.
In this thread, colour luminance is described with the formula
(0.2126*r) + (0.7152*g) + (0.0722*b)
The same formula for luminance is given on Wikipedia (and it is taken from this publication). It reflects human perception, with green being perceived as the most intense and blue the least.
Therefore, you can keep picking random r, g, b values until the luminance rises above your chosen dividing line between light and dark (somewhere between 0 and 255). For example:
lum, ary = 0, []
while lum < 128
  ary = (1..3).collect { rand(256) }
  lum = ary[0] * 0.2126 + ary[1] * 0.7152 + ary[2] * 0.0722
end
Another article refers to brightness, defined as the arithmetic mean of r, g, and b. Note that brightness is even more subjective, as a given target luminance can elicit different perceptions of brightness in different contexts (in particular, the surrounding colours can affect your perception).
All in all, it depends on which colours you consider "light".
Just some pointers:
Use HSL and generate the individual values randomly, keeping L in the interval of your choosing. Then convert to RGB if needed.
It's a bit harder than generating RGB with all components over a certain value (say 0x7f), but this is the way to go if you want the colors distributed evenly.
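A minimal sketch of that approach in Python using the standard colorsys module (the lightness range 0.7-0.9 and the saturation range are arbitrary choices for what counts as "light"):

import colorsys
import random

def random_light_hex():
    h = random.random()              # hue: anywhere on the wheel
    l = random.uniform(0.7, 0.9)     # lightness: kept high so the color stays light
    s = random.uniform(0.4, 1.0)     # saturation: anything reasonably colorful
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return '#%02x%02x%02x' % tuple(int(round(c * 255)) for c in (r, g, b))

print(random_light_hex())  # e.g. '#e7c3f0'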
I found that 128 to 255 gives the lighter colors:
Dim rand As New Random
Dim col As Color
col = Color.FromArgb(rand.Next(128, 256), rand.Next(128, 256), rand.Next(128, 256))
All colors where each of r, g, b is greater than 0x7f:
color = (0..2).map{"%0x" % (rand * 0x80 + 0x80)}.join
I modified one of the answers from the linked question (Daniel Spiewak's answer) to come up with something that is pretty flexible in terms of excluding darker colors:
floor = 22 # darkest possible channel value is 22 (0x16), i.e. #161616
r = (rand(256-floor) + floor).to_s 16
g = (rand(256-floor) + floor).to_s 16
b = (rand(256-floor) + floor).to_s 16
[r,g,b].map {|h| h.rjust 2, '0'}.join
You can change the floor value to suit your needs. A higher value will limit the output to lighter colors, and a lower value will allow darker colors.
A really nice solution is provided by the color-generator gem, where you can call:
ColorGenerator.new(saturation: 0.75, lightness: 0.5).create_hex