Determine Dominant Color of Picture

I am trying to come up with an algorithm to determine the dominant color in an image (either taken from a device's camera or by selecting an existing photo in the photo library). I have written an iOS 8 application in Swift that can grab the RGB value of each pixel in the image, but I don't really know what to do from there.
For pixels that have a distinct dominant color, say RGB(230, 15, 30), it's pretty easy to determine the dominant color. However, I don't really know what to do for pixels where two of the three RGB values are similar, say RGB(200, 215, 30).
My original thought was to keep 3 counters (one for each channel) and add each pixel's corresponding RGB values to those counters. At the end I would divide each counter by the total number of pixels, and the max of the 3 values would be the dominant color. However, as I mentioned before, when the results are close to each other I can't say that one color necessarily dominates the other.
Just looking for some thoughts and suggestions
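For reference, a minimal sketch of the per-channel counter idea described above (Python/NumPy purely for illustration; the original app is in Swift, and photo.jpg is a placeholder):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64)

# Sum every pixel's R, G and B into three counters, then divide by the pixel count.
channel_means = img.reshape(-1, 3).mean(axis=0)          # [mean_R, mean_G, mean_B]
dominant_channel = ["red", "green", "blue"][int(np.argmax(channel_means))]
print(channel_means, dominant_channel)

As noted in the question, this only says which channel is largest on average; it cannot distinguish, say, a yellow-dominated image where red and green are both high.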

I came across this problem a few weeks ago, and having read many posts discussing it, I found that the best method is Hierarchical Quantization, presented in this post: http://aishack.in/tutorials/dominant-color/. I have also implemented it in Python: https://github.com/wenmin-wu/dominant-colors-py . You can install it with pip (pip install dominantcolors) and use it as follows:
from dominantcolors import get_image_dominant_colors
dominant_colors = get_image_dominant_colors(image_path='/path/to/image_path',num_colors=3)

An idea:
The first step is to reduce the number of colors, for example with "Color Quantization using K-Means". In the linked example, the number of colors was reduced from 96K to 64.
The second step is to calculate each color's share of the pixels and pick the color with the biggest share (a rough sketch follows).
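A sketch of that two-step idea in Python (Pillow and scikit-learn are assumed; the cluster count and file name are placeholders):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

pixels = np.asarray(Image.open("photo.jpg").convert("RGB")).reshape(-1, 3)

# Step 1: quantize the image to a small palette with k-means.
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pixels)

# Step 2: the cluster containing the most pixels is the dominant color.
counts = np.bincount(km.labels_)
dominant = km.cluster_centers_[counts.argmax()].astype(int)
print(dominant)   # e.g. [r g b]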

You can check my hobby project to find the dominant color in a UIImage: https://github.com/ruuki/ColorFinder
Basically, it creates clusters of the colors in the image and returns the most dominant one in a completion block. You can tweak the threshold parameters in the source code. Hope it helps.

I had a similar task to do; here is my Python code:
import picamera
import picamera.array
import numpy as np
from math import sqrt, atan2, degrees

DEBUG = False  # set True to print debug values (flag added so the code runs as-is)

def get_colour_name(rgb):
    rgb = rgb / 255
    # hue angle from the RGB components (projection onto the colour wheel)
    alpha = (2 * rgb[0] - rgb[1] - rgb[2]) / 2
    beta = sqrt(3) / 2 * (rgb[1] - rgb[2])
    hue = int(degrees(atan2(beta, alpha)))
    std = np.std(rgb)
    mean = np.mean(rgb)
    if hue < 0:
        hue = hue + 360
    if std < 0.055:      # channels nearly equal -> greyscale
        if mean > 0.85:
            colour = "white"
        elif mean < 0.15:
            colour = "black"
        else:
            colour = "grey"
    elif (hue > 50) and (hue <= 160):
        colour = "green"
    elif (hue > 160) and (hue <= 250):
        colour = "blue"
    else:
        colour = "red"
    if DEBUG:
        print(rgb, hue, std, mean, colour)
    return str(int(hue)) + ": " + colour

def scan_colour():
    with picamera.PiCamera() as camera:
        with picamera.array.PiRGBArray(camera) as stream:
            camera.start_preview()
            camera.resolution = (100, 100)
            for foo in camera.capture_continuous(stream, 'rgb', use_video_port=False, resize=None, splitter_port=0, burst=True):
                stream.truncate()
                stream.seek(0)
                # mean colour of the whole frame
                RGBavg = stream.array.mean(axis=0).mean(axis=0)
                colour = get_colour_name(RGBavg)
                print(colour)

scan_colour()
My idea was to compute the mean color of all pixels and determine the color name from the hue angle. To detect grayscale answers, I check whether the color lies near the gray diagonal of the color cube (low standard deviation across the channels).


Generate RGB color set with highest diversity

I'm trying to create an algorithm that will output a set of RGB color values that are as distinct as possible.
For Example:
the following set of 3 colors:
(255, 0, 0) [Red]
(0, 255, 0) [Green]
(0, 0, 255) [Blue]
the next 3 colors would be:
(255, 255, 0) [Yellow]
(0, 255, 255) [Cyan]
(255, 0, 255) [Purple]
The next colors should lie in between at the new intervals. Basically, my idea is to traverse the whole color spectrum in systematic intervals, similar to this:
A set of 13 colors should include the color in between 1 and 7, and that pattern should continue indefinitely.
I'm currently struggling to turn this pattern into an algorithm over RGB values, as it does not seem trivial to me. I'm thankful for any hints that can point me to a solution.
The Wikipedia article on color difference is worth reading, and so is the article on a “low-cost approximation” by CompuPhase linked therein. I will base my attempt on the latter.
You didn't specify a language, so I'll write it in non-optimized Python (except for the integer optimizations already present in the reference article), in order for it to be readily translatable into other languages.
n_colors = 25
n_global_moves = 32

class Color:
    max_weighted_square_distance = (((512 + 127) * 65025) >> 8) + 4 * 65025 + (((767 - 127) * 65025) >> 8)

    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

    def weighted_square_distance(self, other):
        rm = (self.r + other.r) // 2  # integer division
        dr = self.r - other.r
        dg = self.g - other.g
        db = self.b - other.b
        return (((512 + rm) * dr*dr) >> 8) + 4 * dg*dg + (((767 - rm) * db*db) >> 8)

    def min_weighted_square_distance(self, index, others):
        min_wsd = self.max_weighted_square_distance
        for i in range(0, len(others)):
            if i != index:
                wsd = self.weighted_square_distance(others[i])
                if min_wsd > wsd:
                    min_wsd = wsd
        return min_wsd

    def is_valid(self):
        return 0 <= self.r <= 255 and 0 <= self.g <= 255 and 0 <= self.b <= 255

    def add(self, other):
        return Color(self.r + other.r, self.g + other.g, self.b + other.b)

    def __repr__(self):
        return f"({self.r}, {self.g}, {self.b})"

colors = [Color(127, 127, 127) for i in range(0, n_colors)]
steps = [Color(dr, dg, db) for dr in [-1, 0, 1]
                           for dg in [-1, 0, 1]
                           for db in [-1, 0, 1] if dr or dg or db]  # i.e., except 0,0,0
moved = True
global_move_phase = False
global_move_count = 0
while moved or global_move_phase:
    moved = False
    for index in range(0, len(colors)):
        color = colors[index]
        if global_move_phase:
            best_min_wsd = -1
        else:
            best_min_wsd = color.min_weighted_square_distance(index, colors)
        for step in steps:
            new_color = color.add(step)
            if new_color.is_valid():
                new_min_wsd = new_color.min_weighted_square_distance(index, colors)
                if best_min_wsd < new_min_wsd:
                    best_min_wsd = new_min_wsd
                    colors[index] = new_color
                    moved = True
    if not moved:
        if global_move_count < n_global_moves:
            global_move_count += 1
            global_move_phase = True
    else:
        global_move_phase = False

print(f"n_colors: {n_colors}")
print(f"n_global_moves: {n_global_moves}")
print(colors)
The colors are first set all to grey, i.e., put in the center of the RGB color cube, and then moved in the color cube in such a way as to hopefully maximize the minimum distance between colors.
To save CPU time the square of the distance is used instead of the distance itself, which would require the calculation of a square root.
Colors are moved one at a time, by a maximum of 1 in each of the 3 directions, to one of the adjacent colors that maximizes the minimum distance from the other colors. By so doing, the global minimum distance is (approximately) maximized.
The “global move” phases are needed in order to overcome situations where no color would move, but forcing all colors to move to a position which is not much worse than their current one lets the whole set find a better configuration with the subsequent regular moves. This can best be seen with 3 colors and no global moves, modifying the weighted square distance to be simply dr*dr + dg*dg + db*db: the configuration becomes
[(2, 0, 0), (0, 253, 255), (255, 255, 2)]
while, by adding 2 global moves, the configuration becomes the expected one
[(0, 0, 0), (0, 255, 255), (255, 255, 0)]
The algorithm produces just one of several possible solutions. Unfortunately, since the metric used is not Euclidean, it's not possible to simply flip the 3 dimensions in the 8 possible combinations (i.e., replace r→255-r and/or the same for g and/or b) to get equivalent solutions. It's probably best to introduce randomness in the order in which the color moving steps are tried out, and vary the random seed.
I have not corrected for the monitor's gamma, because its purpose is precisely that of altering the spacing of the brightness in order to compensate for the eyes' different sensitivity at high and low brightness. Of course, the screen gamma curve deviates from the ideal, and a (system dependent!) modification of the gamma would yield better results, but the standard gamma is a good starting point.
This is the output of the algorithm for 25 colors:
Note that the first 8 colors (the bottom row and the first 3 colors of the row above) are close to the corners of the RGB cube (they are not at the corners because of the non-Euclidean metric).
First let me ask, do you want to remain in sRGB, and go through each RGB combination?
OR (and this is my assumption) do you actually want colors that are "farthest" from each other? Since you used the term "distinct" I'm going to cover finding color differences.
Model Your Perceptions
sRGB is a colorspace that refers to your display/output. And while the gamma curve is "sorta" perceptually uniform, the overall sRGB colorspace is not; it is intended more to model the display than human perception.
To determine "maximum distance" between colors in terms of perception, you want a model of perception, either using a colorspace that is perceptually uniform or using a color appearance model (CAM).
Since you just want sRGB values as a result, using a uniform colorspace such as CIELAB or CIELUV is probably sufficient. As these use Cartesian coordinates, the difference between two colors in (L*a*b*) is simply the Euclidean distance.
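For reference, that distance is simply delta E*ab = sqrt((L2 - L1)^2 + (a2 - a1)^2 + (b2 - b1)^2).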
If you want to work with polar coordinates (i.e. hue angle) then you can go one step past CIELAB, into CIELCh.
How To Do It
I suggest Bruce Lindbloom's site for the relevant math.
The simplified steps (a rough code sketch follows the list):
Linearize the sRGB by removing the gamma curve from each of the three color channels.
Convert the linearized values into CIE XYZ (use D65, no adaptation)
Convert XYZ into L* a* b*
Find the Opposite:
a. Now find the "opposite" color by plotting a line through 0, making the line equal in distance on both sides of zero. OR
b. ALT: Do one more transform from LAB into CIELCh, then find the opposite by rotating hue 180 degrees. Then convert back to LAB.
Convert LAB to XYZ.
Convert XYZ to sRGB.
Add the sRGB gamma curve back to each channel.
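A rough Python/NumPy sketch of these steps, taking the 4b route (rotate hue 180 degrees in LCh and leave L* alone). The matrices and constants are the standard sRGB/D65 values; treat this as an illustration, not production color management:

import numpy as np

M_RGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                         [0.2126729, 0.7151522, 0.0721750],
                         [0.0193339, 0.1191920, 0.9503041]])
M_XYZ_TO_RGB = np.linalg.inv(M_RGB_TO_XYZ)
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])        # reference white

def srgb_to_linear(c):                               # step 1: remove the gamma curve
    c = np.asarray(c, float) / 255.0
    return np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)

def linear_to_srgb(c):                               # step 7: re-apply the gamma curve
    c = np.where(c > 0.0031308,
                 1.055 * np.clip(c, 0, None) ** (1 / 2.4) - 0.055,
                 12.92 * c)
    return np.clip(np.round(c * 255), 0, 255).astype(int)

def lab_f(t):                                        # CIE Lab companding function
    eps, kappa = 216 / 24389, 24389 / 27
    return np.where(t > eps, np.cbrt(t), (kappa * t + 16) / 116)

def lab_f_inv(ft):
    eps = 216 / 24389
    t = ft ** 3
    return np.where(t > eps, t, (116 * ft - 16) * 27 / 24389)

def rgb_to_lab(rgb):                                 # steps 1-3
    xyz = M_RGB_TO_XYZ @ srgb_to_linear(rgb)
    fx, fy, fz = lab_f(xyz / WHITE_D65)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def lab_to_rgb(lab):                                 # steps 5-7
    L, a, b = lab
    fy = (L + 16) / 116
    xyz = lab_f_inv(np.array([fy + a / 500, fy, fy - b / 200])) * WHITE_D65
    return linear_to_srgb(M_XYZ_TO_RGB @ xyz)

def opposite_color(rgb):                             # step 4b: rotate hue 180 degrees in LCh
    L, a, b = rgb_to_lab(rgb)
    C, h = np.hypot(a, b), np.arctan2(b, a) + np.pi
    return lab_to_rgb([L, C * np.cos(h), C * np.sin(h)])

print(opposite_color([12, 84, 144]))                 # e.g. the #0C5490 example below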
Staying in sRGB?
If you are less concerned about perceptual uniformity, then you could just stay in sRGB, though the results will be less accurate. In this case all you need to do is take the difference of each channel relative to 255 (i.e. invert each channel):
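For example (Python for illustration; #0C5490 is the example color used just below):

import numpy as np

rgb = np.array([12, 84, 144])       # #0C5490
opposite_srgb = 255 - rgb           # difference of each channel relative to 255
print(opposite_srgb)                # [243 171 111]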
What's The Difference of the Differences?
Here are some comparisons of the two methods discussed above:
For starting color #0C5490 sRGB Difference Method:
Same starting color, but using CIELAB L* C* h* (and just rotating hue 180 degrees, no adjustment to L*).
Starting color #DFE217, sRGB difference method:
Here in CIELAB LCh, just rotating hue 180:
And again in LCh, but this time also adjusting L* as (100 - L*firstcolor)
Now you'll notice a lot of change in hue angle on these; the truth is that while LAB is "somewhat uniform," it's pretty squirrelly in the area of blue.
Take a look at the numbers:
They seem substantially different for hue, chroma, a, b... yet they create the same HEX color value! So yes, even CIELAB has inaccuracies (especially blue).
If you want more accuracy, try CIECAM02

Remove optical disk image of the retina, setting the color of the optical disk as background color

I've been working with a retina image; currently I am applying a wavelet transform to it, but I have noticed two problems:
The optical disk, which introduces noise into the image
The circle delimiting the retina
The original image is the following:
My plan is to set the background to the tone of the optical disk so as not to lose any detail of the retinal blood vessels (I am posting code I have experimented with, but I still do not understand how to find the tone of the optical disk and how to apply it to the image without altering the blood vessels).
And with respect to the outer circle of the retina, I don't know what you would recommend (I don't know about masks; I would appreciate any literature you can point me to).
c = [242 134 72];% Background to change
thresh = 50;
A = imread('E:\Prueba.jpg');
B = zeros(size(A));
Ar = A(:,:,1);
Ag = A(:,:,2);
Ab = A(:,:,3);
Br = B(:,:,1);
Bg = B(:,:,2);
Bb = B(:,:,3);
logmap = (Ar > (c(1) - thresh)).*(Ar < (c(1) + thresh)).*...
(Ag > (c(2) - thresh)).*(Ag < (c(2) + thresh)).*...
(Ab > (c(3) - thresh)).*(Ab < (c(3) + thresh));
Ar(logmap == 1) = Br(logmap == 1);
Ag(logmap == 1) = Bg(logmap == 1);
Ab(logmap == 1) = Bb(logmap == 1);
A = cat(3 ,Ar,Ag,Ab);
imshow(A);
courtesy of the question How can I change the background color of the image?
The image I get is the following
I need a picture like this, where the optical disk does not introduce noise when segmenting the blood vessels of the retina.
I want the background to be uniform ... with only the veins visible.
I continued working and obtained the following image. As you can see, the optical disk removes some parts of the blood vessels (veins) that lie over it, so I need to eliminate it or make the entire background of the image uniform.
As Wouter said, you should first correct the inhomogeneity of the image. I would do it in my own way:
First, the parameters you can adjust to optimize the output:
gfilt = 3;
thresh = 0.4;
erode = 3;
brighten = 20;
You will see how they are used in the code.
This is the main step: apply a Gaussian filter to the image to make it smooth and then subtract the result from the original image. This way you end up with the sharp changes in your data, which happen to be the vessels:
A = imread('Prueba.jpg');
B = imgaussfilt(A, gfilt) - A; % Gaussian filter and subtraction
% figure; imshow(B)
Then I create a binary mask to remove the unwanted area of the image:
% the 'imadjust' makes sure that you get the same result even if you ...
% change the intensity of illumination. "thresh" is the threshold of ...
% conversion to black and white:
circ = im2bw(imadjust(A(:,:,1)), thresh);
% here I am shrinking the "circ" for "erode" pixels:
circ = imerode(circ, strel('disk', erode));
circ3 = repmat(circ, 1, 1, 3); % and here I extended it to 3D.
% figure; imshow(circ)
And finally, I remove everything on the surrounding dark area and show the result:
B(~circ3) = 0; % ignore the surrounding area
figure; imshow(B * brighten) % brighten and show the output
Notes:
I do not see the last image as a final result, but probably you could apply some thresholds to it and separate the vessels from the rest.
The quality of the image you provided is quite low. I would expect good results with better data.
Although the intensity of the blue channel is lower than the rest, the vessels show up better there than in the other channels, because blood is red!
If you are acquiring this data or have access to the person, I suggest using blue light for illumination, since it gives higher contrast for the vessels.
Morphological operations are good for working with spaghetti images.
Original image:
Convert to grayscale:
original = rgb2gray(gavrF);
Estimate the background via morphological closing:
se = strel('disk', 3);
background = imclose(original, se);
Estimate of the background:
You could then for example subtract this background from the original grayscale image. You can do this straight by doing a bottom hat transform on the grayscale image:
flatImage = imbothat(original, strel('disk', 4));
With an output:
Noisy, but now you got access to global thresholding methods. Remember to change the datatypes to double if you wish to do some subtraction or division manually.

how to detect edges in an image having only red object

I have an image, and in that image all red objects are detected.
Here's an example with two images:
http://img.weiku.com/waterpicture/2011/10/30/18/road_Traffic_signs_634577283637977297_4.jpg
But when I run that image through an edge detection method, I get only black as output. However, I want to detect the edges of the red objects.
r=im(:,:,1); g=im(:,:,2); b=im(:,:,3);
diff=imsubtract(r,rgb2gray(im));
bw=im2bw(diff,0.18);
area=bwareaopen(bw,300);
rm=immultiply(area,r); gm=g.*0; bm=b.*0;
image=cat(3,rm,gm,bm);
axes(handles.Image);
imshow(image);
I=image;
Thresholding=im2bw(I);
axes(handles.Image);
imshow(Thresholding)
fontSize=20;
edgeimage=Thresholding;
BW = edge(edgeimage,'canny');
axes(handles.Image);
imshow(BW);
When you apply im2bw you want to use only the red channel of I (i.e., the first channel). Therefore use this command:
Thresholding =im2bw(I(:,:,1));
for example yields this output:
Just FYI for anyone else that manages to stumble here: the HSV colorspace is better suited for detecting colors than the RGB colorspace. A good example is in gnovice's answer. The main reason for this is that there are colors which can contain a full 255 red value but aren't actually red (yellow can be formed from (255,255,0), white from (255,255,255), magenta from (255,0,255), etc.).
I modified his code for your purpose below:
cdata = imread('roadsign.jpg');
hsvImage = rgb2hsv(cdata); %# Convert the image to HSV space
hPlane = 360.*hsvImage(:,:,1); %# Get the hue plane scaled from 0 to 360
sPlane = hsvImage(:,:,2); %# Get the saturation plane
bPlane = hsvImage(:,:,3); %# Get the brightness plane
% Must get colors with high brightness and saturation of the red color
redIndex = ((hPlane <= 20) | (hPlane >= 340)) & sPlane >= 0.7 & bPlane >= 0.7;
% Show edges
imshow(edge(redIndex));
Output:

To convert only black color to white in Matlab

I know about this thread on converting black to white and white to black simultaneously.
I would like to convert only black to white.
I also know this thread about doing what I am asking, but I do not understand what goes wrong.
Picture
Code
rgbImage = imread('ecg.png');
grayImage = rgb2gray(rgbImage); % for non-indexed images
level = graythresh(grayImage); % threshold for converting image to binary,
binaryImage = im2bw(grayImage, level);
% Extract the individual red, green, and blue color channels.
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
% Make the black parts pure red.
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 0;
blueChannel(~binaryImage) = 0;
% Now recombine to form the output image.
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
Which gives
There seems to be something wrong in the red color channel.
Black is just (0,0,0) in RGB, so removing it should mean turning every (0,0,0) pixel to white (255,255,255).
Trying this idea with
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
Gives
So I must have misunderstood something in Matlab. The blue color should not have any black. So this last image is strange.
How can you turn only black color to white?
I want to keep the blue color of the ECG.
If I understand you properly, you want to extract out the blue ECG plot while removing the text and axes. The best way to do that would be to examine the HSV colour space of the image. The HSV colour space is great for discerning colours just like the way humans do. We can clearly see that there are two distinct colours in the image.
We can convert the image to HSV using rgb2hsv and we can examine the components separately. The hue component represents the dominant colour of the pixel, the saturation denotes the purity or how much white light there is in the pixel and the value represents the intensity or strength of the pixel.
Try visualizing each channel doing:
im = imread('http://i.stack.imgur.com/cFOSp.png'); %// Read in your image
hsv = rgb2hsv(im);
figure;
subplot(1,3,1); imshow(hsv(:,:,1)); title('Hue');
subplot(1,3,2); imshow(hsv(:,:,2)); title('Saturation');
subplot(1,3,3); imshow(hsv(:,:,3)); title('Value');
Hmm... well the hue and saturation don't help us at all. It's telling us the dominant colour and saturation are the same... but what sets them apart is the value. If you take a look at the image on the right, we can tell them apart by the strength of the colour itself. So what it's telling us is that the "black" pixels are actually blue but with almost no strength associated to it.
We can actually use this to our advantage. Any pixels whose values are above a certain value are the values we want to keep.
Try setting a threshold... something like 0.75. MATLAB's dynamic range of the HSV values are from [0-1], so:
mask = hsv(:,:,3) > 0.75;
When we threshold the value component, this is what we get:
There's obviously a bit of quantization noise... especially around the axes and font. What I'm going to do next is perform a morphological erosion so that I can eliminate the quantization noise that's around each of the numbers and the axes. I'm going to make the structuring element a bit large to ensure that I remove this noise. Using the image processing toolbox:
se = strel('square', 5);
mask_erode = imerode(mask, se);
We get this:
Great, so what I'm going to do now is make a copy of your original image, then set any pixel that is black from the mask I derived (above) to white in the final image. All of the other pixels should remain intact. This way, we can remove any text and the axes seen in your image:
im_final = im;
mask_final = repmat(mask_erode, [1 1 3]);
im_final(~mask_final) = 255;
I need to replicate the mask in the third dimension because this is a colour image and I need to set each channel to 255 simultaneously in the same spatial locations.
When I do that, this is what I get:
Now you'll notice that there are gaps in the graph... which is to be expected due to quantization noise. We can do something further by converting this image to grayscale and thresholding it, then joining the edges together with a morphological dilation. This is safe because we have already eliminated the axes and text. We can then use this as a mask to index into the original image to obtain our final graph.
Something like this:
im2 = rgb2gray(im_final);
thresh = im2 < 200;
se = strel('line', 10, 90);
im_dilate = imdilate(thresh, se);
mask2 = repmat(im_dilate, [1 1 3]);
im_final_final = 255*ones(size(im), class(im));
im_final_final(mask2) = im(mask2);
I threshold the previous image that we got without the text and axes after I convert it to grayscale, and then I perform dilation with a line structuring element that is 90 degrees in order to connect those lines that were originally disconnected. This thresholded image will contain the pixels that we ultimately need to sample from the original image so that we can get the graph data we need.
I then take this mask, replicate it, make a completely white image and then sample from the original image and place the locations we want from the original image in the white image.
This is our final image:
Very nice! I had to do all of that image processing because your image basically has quantization noise to begin with, so it's going to be a bit harder to get the graph entirely. Ander Biguri in his answer explained in more detail about colour quantization noise so certainly check out his post for more details.
However, as a qualitative measure, we can subtract this image from the original image and see what is remaining:
imshow(rgb2gray(abs(double(im) - double(im_final_final))));
We get:
So it looks like the axes and text are removed fine, but there are some traces in the graph that we didn't capture from the original image and that makes sense. It all has to do with the proper thresholds you want to select in order to get the graph data. There are some trouble spots near the beginning of the graph, and that's probably due to the morphological processing that I did. This image you provided is quite tricky with the quantization noise, so it's going to be very difficult to get a perfect result. Also, these thresholds unfortunately are all heuristic, so play around with the thresholds until you get something that agrees with you.
Good luck!
What's the problem?
You want to detect all black parts of the image, but they are not really black
Example:
Your idea (or your code):
You first binarize the image, selecting the pixels that ARE something against the pixels that are not. In short, you do: if pixel>level; pixel is something
Therefore there is a small misconception you have here! When you write
% Make the black parts pure red.
it should read
% Make every pixel that is something (not background) pure red.
Therefore, when you do
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
You are doing
% Make every pixel that is something (not background) white
% (or what it is the same in this case, delete them).
Therefore what you should get is a completely white image. The image is not completely white because some pixels were labelled as "not something, part of the background" by the value of level, which in the case of your image is around 0.6.
A solution one could think of is manually setting the level to 0.05 or similar, so only black pixels will be selected in the gray-to-binary thresholding. But this will not work 100%; as you can see, the numbers have some very "non-black" values.
How would I try to solve the problem:
I would try to find the colour you want, extract just that colour from the image, and then delete outliers.
Extract blue using HSV (I believe I answered you somewhere else how to use HSV).
rgbImage = imread('ecg.png');
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
th=0.1;
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
imshow(I2)
Once here, we would like to get the biggest blue part, which would be our signal. Therefore the best approach seems to be to first binarize the image, taking all the blue pixels:
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
And then using bwlabel, label all the independent pixel "islands".
% Label each "blob"
lbl=bwlabel(~bw);
The label most repeated will be the signal. So we find it and separate the background from the signal using that label.
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
Here is a plot with the signal, background and difference (using the same equation as #rayryeng used).
Here is a variation on #rayryeng's solution to extract the blue signal:
%// retrieve picture
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
To get a better view, here is the detected mask overlaid on top of the original image (I'm using the imoverlay function from the File Exchange):
figure
imshow(imoverlay(imgRGB, BW, uint8([255,0,0])))
Here is a code for this:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = ~redChannel&~greenChannel&~blueChannel;
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
black is the area containing the black pixels. These pixels are set to white in each color channel.
In your code you use a threshold and a grayscale image, so of course a much bigger area of pixels gets set to white (or red, respectively). In the code above, only pixels that contain absolutely no red, green, or blue are set to white.
The following code does the same with a threshold for each color channel:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = (redChannel<150)&(greenChannel<150)&(blueChannel<150);
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);

Simplifying an image in matlab

I have a picture of a handwritten letter (say the letter "y"). Keeping only the first of the three color values (since it is a grayscale image), I get a 111x81 matrix which I call aLetter. I can see this image (please ignore the title) using:
colormap gray; image(aLetter,'CDataMapping','scaled')
What I want is to remove the white space around this letter and somehow average the remaining pixels so that I have an 8x8 matrix (let's call it simpleALetter). Now if I use:
colormap gray; image(simpleALetter,'CDataMapping','scaled')
I should see a pixellated version of the letter:
Any advice on how to do this would be greatly appreciated!
You need several steps to achieve what you want (updated in the light of #rwong's observation that I had white and black flipped…):
Find the approximate 'bounding box' of the letter:
make sure that "text" is the highest value in the image
set things that are "not text" to zero - anything below a threshold
sum along row and column, find non-zero pixels
upsample the image in the bounding box to a multiple of 8
downsample to 8x8
Here is how you might do that in your situation:
aLetter = max(aLetter(:)) - aLetter; % invert image: now white = close to zero
aLetter = aLetter - min(aLetter(:)); % make the smallest value zero
maxA = max(aLetter(:));
aLetter(aLetter < 0.1 * maxA) = 0; % thresholding; play with this to set "white" to zero
% find the bounding box:
rowsum = sum(aLetter, 1);
colsum = sum(aLetter, 2);
nonzeroH = find(rowsum);
nonzeroV = find(colsum);
smallerLetter = aLetter(nonzeroV(1):nonzeroV(end), nonzeroH(1):nonzeroH(end));
% now we have the box, but it's not 8x8 yet. Resampling:
sz = size(smallerLetter);
% first upsample in both X and Y by a factor 8:
bigLetter = repmat(reshape(smallerLetter, [1 sz(1) 1 sz(2)]), [8 1 8 1]);
% then reshape and sum so you end up with 8x8 in the final matrix:
letter8 = squeeze(sum(sum(reshape(bigLetter, [sz(1) 8 sz(2) 8]), 3), 1));
% finally, flip it back "the right way" black is black and white is white:
letter8 = 255 - (letter8 * 255 / max(letter8(:)));
You can do this with explicit for loops but it would be much slower.
You can also use some of the blockproc functions in Matlab but I am using Freemat tonight and it doesn't have those… Neither does it have any image processing toolbox functions, so this is "hard core".
As for picking a good threshold: if you know that > 90% of your image is "white", you could determine the correct threshold by sorting the pixels and finding the threshold dynamically. As I mentioned in the comment in the code, "play with it" until you find something that works in your situation.
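A rough sketch of that dynamic-threshold idea (Python/NumPy purely for illustration; the 90%-white figure is the assumption stated above, and a_letter stands for the inverted matrix from the listing):

import numpy as np

def dynamic_threshold(a_letter, background_fraction=0.90):
    # After inversion the background ("white") pixels are the smallest values,
    # so the threshold sits roughly at that percentile of the sorted pixels.
    return np.percentile(a_letter, background_fraction * 100)

# usage: zero out everything at or below the dynamically chosen threshold
# a_letter[a_letter <= dynamic_threshold(a_letter)] = 0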
