Dealing with filters and colours in an image

I want to make filters like the ones shown here (image in the original post).
These are my target filters; can you please guide me on how to go about them?
How can I make filters like these?
Which algorithms do I need to follow, and which steps should I take as a beginner?
What is the best and easiest way to get the RGB values and shades of the filters?
Copy of the image from the link above, added by spektre:
the source image is the first one after the camera in the first line.

It is very hard to say from a single non-test-screen image.
the black and white filter
This one is easy: just convert RGB to intensity i and then write the color i,i,i instead of R,G,B. The simplest (not precise) conversion is
i=(R+G+B)/3
but the better way is to use weights
i=w0*R+w1*G+w2*B
where w0+w1+w2=1; commonly used values are the Rec. 601 luma weights w0=0.299, w1=0.587, w2=0.114.
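As a minimal Matlab sketch of this conversion (the file name is a placeholder; the weights are the Rec. 601 ones mentioned above):
% weighted grayscale conversion
img = im2double(imread('input.jpg'));  % placeholder file name
w = [0.299 0.587 0.114];               % w0+w1+w2 = 1
i = w(1)*img(:,:,1) + w(2)*img(:,:,2) + w(3)*img(:,:,3);
imshow(cat(3, i, i, i));               % write i,i,i instead of R,G,B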
the rest
Some filters look like over-exposed or weighted colors, like this:
r=w0*r; if (r>255) r=255;
g=w1*g; if (g>255) g=255;
b=w2*b; if (b>255) b=255;
Write an app with 3 scrollbars for w0,w1,w2 in the range <0-10> and redraw the image with the formula above. After a little experimenting you should find w0,w1,w2 for most of the filters. The rest can be a mix of colors like this:
rr=w00*r+w01*g+w02*b; if (rr>255) rr=255;
gg=w10*r+w11*g+w12*b; if (gg>255) gg=255;
bb=w20*r+w21*g+w22*b; if (bb>255) bb=255;
r=rr; g=gg; b=bb;
(the temporaries rr,gg,bb make sure each output channel is computed from the original r,g,b, not from already overwritten values)
or:
i=(r+g+b)/3
r=w0*r+w3*i; if (r>255) r=255;
g=w1*g+w3*i; if (g>255) g=255;
b=w2*b+w3*i; if (b>255) b=255;
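In Matlab the whole mix can be done vectorized on the image at once; here is a minimal sketch (the file name and the example matrix W are my own placeholders, not values from any real filter):
% apply a 3x3 channel-mixing matrix W to an RGB image and clamp
img = im2double(imread('input.jpg'));   % placeholder file name
W = [1.2 0.1 0.0;                       % output R as a mix of input R,G,B
     0.0 1.0 0.1;                       % output G
     0.1 0.0 0.8];                      % output B
[h, w, ~] = size(img);
pix = reshape(img, [], 3);              % N x 3 list of pixels
out = reshape(min(max(pix * W.', 0), 1), h, w, 3); % mix, clamp to [0,1]
imshow(out);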
By the way, if you want the closest similarity, you can:
find test colors in the input image
Look for R shades, G shades, B shades, RG, RB, GB and RGB shades from 0-255. Then read the colors from the filtered image at the same positions and draw dependency graphs of the R,G,B intensities for each shade.
One axis is the input image color intensity and the other is the R,G,B intensity of the filtered color. From the graphs you should see which formula is used directly, and you can also compute the weights from them. This is how the over-exposure looks for the red channel (graph in the original answer).
if the lines are not lines but curves
then some kind of gamma correction is used, so the formulas use a polynomial of higher order (powers of 2, 3, 4, ...); mostly a power of 2 suffices. In that case the weights can also be negative!
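As a sketch of that curve-fitting step in Matlab (all numbers below are made up purely for illustration):
% fit a per-channel response curve from measured test shades
x = (0:15)'/15;                 % hypothetical input test shades (0..1)
y = min(1.4*x.^2, 1);           % hypothetical intensities read from the filtered image
p = polyfit(x, y, 2);           % degree 2 is usually enough
plot(x, y, 'o', x, polyval(p, x), '-');
xlabel('input intensity'); ylabel('filtered intensity');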
some filters could use different color spaces
For example, transform RGB to HSV, shift the hue, and convert back to RGB. That will shift the colors a little.
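A minimal Matlab sketch of such a hue-shift filter (the file name and the shift amount 0.05 are arbitrary examples):
img = im2double(imread('input.jpg'));    % placeholder file name
hsv = rgb2hsv(img);
hsv(:,:,1) = mod(hsv(:,:,1) + 0.05, 1);  % hue wraps around in [0,1]
imshow(hsv2rgb(hsv));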

How to separate a picture to color groups?

Let's say I have an image of a ball like this one (image in the original post):
I want to separate the colors of the ball into color groups. In this case I should get 2 main color groups, "brown" and "white": the "brown" group holds all the brown pixels and the "white" group holds all the white pixels.
I'm using Matlab for this task. The approaches I have tried:
Look at the RGB channels. I used scatter plots to see whether I could clearly see some groups, but I couldn't.
Look at the Bayer values. I couldn't see any groups there either.
Run an edge detector, then take the mean of the pixels in each enclosed area. Areas with similar mean values (within a certain threshold) would belong to the same group. This seemed to sort of work, but in many cases it didn't.
Any other ideas?
This task is called segmentation; in your case each color is a segment, and segments are not always contiguous.
Searching for Matlab segmentation examples should yield a lot of code and theory.
Note one thing: there is no ground-truth solution. You can't say how many segments there are in an image, since it is a subjective question. In the general case you can run a clustering algorithm on the color values, which will break the image into color segments; there are algorithms that find the number of groups automatically - this can be a good start for the number of color groups in your image.
A quick search yielded these works; they can get you started with ideas:
Image segmentation with matlab
Using EM for image segmentation
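As a starting sketch (assuming k = 2 color groups for the ball, and that the Statistics Toolbox kmeans is available):
I = im2double(imread('ball.jpg'));           % placeholder file name
pix = reshape(I, [], 3);                     % N x 3 RGB samples
idx = kmeans(pix, 2, 'Replicates', 3);       % cluster colors into 2 groups
labels = reshape(idx, size(I,1), size(I,2)); % per-pixel group label
figure; imagesc(labels); axis off;           % visualize the groups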
While image segmentation would be the correct way to treat color separation, if your image is simple you can try to do it brute force.
Here, converting to HSV makes the image easier to handle.
For the white parts of the image:
I=imread('ball.jpg');
H=rgb2hsv(I);
% separate dimensions
h=H(:,:,1);
s=H(:,:,2);
v=H(:,:,3);
% color conditions
v(v<0.8 | s>0.7 | h>0.7 )=NaN;
h(isnan(v))=NaN;
s(isnan(v))=NaN;
% convert image back
W=cat(3,h,s,v);
White_image=hsv2rgb(W);
figure; imagesc(White_image);
And for the brown parts:
% separate dimensions
h=H(:,:,1);
s=H(:,:,2);
v=H(:,:,3);
% color conditions
v(s<0.6 | v>0.8 )=NaN;
h(isnan(v))=NaN;
s(isnan(v))=NaN;
% convert image back
B=cat(3,h,s,v);
Brown_image=hsv2rgb(B);
figure; imagesc(Brown_image); axis off

Processing an image to get the accent color

I want to get the most used color from an image.
By "most used color" I don't mean a specific pixel value, I mean the most used color RANGE.
For example, if there is a 2x3 pixel image where two pixels are f00 (red) and the rest are 0b0, 0c0, 0d0, 0e0, 0f0 (shades of green), I should get 0d0 (the average of the greens) and not f00, even though f00 is the single most frequent exact color with its 2 pixels.
I want to handle that kind of case.
How am I supposed to do it?
Or where can I find material on how it can be done?
Thanks.
Search for color histograms in Matlab.
There are a lot of resources on this topic.
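A brute-force Matlab sketch of the histogram idea (the file name and the bin width of 32 are arbitrary choices you should tune):
img = imread('photo.jpg');              % placeholder file name
pix = double(reshape(img, [], 3));
bins = floor(pix/32);                   % 8 levels per channel -> 512 coarse bins
key = bins(:,1)*64 + bins(:,2)*8 + bins(:,3) + 1;  % linear bin index
counts = accumarray(key, 1, [512 1]);
[~, best] = max(counts);                % most populated color range
accent = mean(pix(key == best, :), 1)   % average color of that range
Whether all the greens of the example above land in one bin depends on the bin width, which is why it needs tuning: a coarse width such as 128 groups all the listed greens together, and that group then outweighs the two red pixels.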

Reading data from a colour terrain map

I have a question about converting a height-map that is in colour into a matrix - look here to see examples of such maps. If I were to have a terrain plot and plot it using imagesc, then I would see it as a colour map. I was wondering how I could convert an image that looks like this into its corresponding matrix.
This seems like it should be a pretty basic procedure, but I can neither work out how to do it myself nor find out how to do it online (including looking on SO).
To put it another way, the image in question is a JPEG; what I'd like is to convert the .jpg file into a matrix M, say, so that imagesc(M), or surf(M) with the camera looking down at the (x,y)-plane from above, gives the same view as the image itself, e.g. imshow(imread('Picture.jpg')).
You can use Matlab's rgb2ind function for this. All you need to choose is the "resolution" of the output colormap that you want, i.e. the second parameter n. If you specify n as 8, for example, then your colormap will have only 8 values and your output indexed image will only contain 8 values as well.
Depending on the color coding scheme used, you might try first converting the RGB values to HSL or HSV and using the hue values for the terrain heights.
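A rough sketch of the hue approach (it assumes the map's colors are ordered by hue, e.g. a jet-like colormap; the hue-to-height scaling is a guess you must calibrate against the map's legend):
I = im2double(imread('Picture.jpg'));
hsv = rgb2hsv(I);
M = 1 - hsv(:,:,1);             % invert so red (low hue) maps to high ground
figure; imagesc(M); axis image; % should resemble the original map
figure; surf(M, 'EdgeColor', 'none'); view(2); % top-down surf view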

Color quantization of an image using K-means clustering (using RGB features)

Is it possible to do clustering on RGB + spatial features of images with Matlab?
NOTE: I want to use kmeans for clustering.
In fact, basically I want to do one thing: get the first image from the second (both images appear in the original post).
I think you are looking for color quantization.
[imgQ,map]= rgb2ind(img,4,'nodither'); %change this 4 to the number of desired colors in the quantized image
imshow(imgQ,map);
Result: (image in the original answer)
Using kmeans:
%img is the original image (uint8)
imgVec=[reshape(double(img(:,:,1)),[],1) reshape(double(img(:,:,2)),[],1) reshape(double(img(:,:,3)),[],1)]; %double() because kmeans and pdist2 need floating-point input
[imgVecQ,imgVecC]=kmeans(imgVec,4); %4 colors
imgVecQK=pdist2(imgVec,imgVecC); %choosing the closest centroid to each pixel,
[~,indMin]=min(imgVecQK,[],2); %avoiding double for loop
imgVecNewQ=imgVecC(indMin,:); %quantizing
imgNewQ=img;
imgNewQ(:,:,1)=reshape(imgVecNewQ(:,1),size(img(:,:,1))); %arranging back into image
imgNewQ(:,:,2)=reshape(imgVecNewQ(:,2),size(img(:,:,1)));
imgNewQ(:,:,3)=reshape(imgVecNewQ(:,3),size(img(:,:,1)));
imshow(img)
figure,imshow(imgNewQ);
Result of kmeans: (image in the original answer)
If you want to add a distance constraint to kmeans, the code will be slightly different. Basically, you need to concatenate the pixel coordinates of each pixel to its color values too. But remember, while assigning the nearest centroid to each pixel, compare only the color, i.e. the first 3 dimensions, not the last 2 - matching on the stored coordinates would make no sense there, obviously. The code is very similar to the previous; please note the changes and understand them.
[col,row]=meshgrid(1:size(img,2),1:size(img,1));
imgVec=[reshape(double(img(:,:,1)),[],1) reshape(double(img(:,:,2)),[],1) reshape(double(img(:,:,3)),[],1) row(:) col(:)]; %double() so concatenating with the coordinates does not clip them to the uint8 range
[imgVecQ,imgVecC]=kmeans(double(imgVec),4); %4 colors
imgVecQK=pdist2(imgVec(:,1:3),imgVecC(:,1:3));
[~,indMin]=min(imgVecQK,[],2);
imgVecNewQ=imgVecC(indMin,1:3); %quantizing
imgNewQ=img;
imgNewQ(:,:,1)=reshape(imgVecNewQ(:,1),size(img(:,:,1))); %arranging back into image
imgNewQ(:,:,2)=reshape(imgVecNewQ(:,2),size(img(:,:,1)));
imgNewQ(:,:,3)=reshape(imgVecNewQ(:,3),size(img(:,:,1)));
imshow(img)
figure,imshow(imgNewQ);
Result of kmeans with distance constraint: (image in the original answer)

How to reconstruct Bayer to RGB from Canon RAW data?

I'm trying to reconstruct RGB from RAW Bayer data from a Canon DSLR, but am having no luck. I've taken a peek at the dcraw.c source, but its lack of comments makes it a bit tough to get through. Anyway, I have debayering working; I now need to take this debayered data and turn it into something that looks correct. My current code does something like this, in order:
Demosaic/debayer
Apply white balance multipliers (I'm using the following ones: 1.0, 2.045, 1.350. These work perfectly in Adobe Camera Raw as 5500K, 0 Tint.)
Multiply the result by the inverse of the camera's color matrix
Multiply the result by an XYZ-to-sRGB matrix from Bruce Lindbloom's site (the D50 sRGB one)
Set the white/black point; I am using an input levels control for this
Adjust gamma
Some of what I've read says to apply the white balance and black point correction before the debayer. I've tried, but it's still broken.
Do these steps look correct? I'm trying to determine if the problem is 1.) my sequence of operations, or 2.) the actual math being used.
The first step should be setting the black and saturation points, because white balance has to watch out for saturated pixels in order to avoid magenta highlights.
Then apply white balance before demosaicing. See here (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm) how demosaicing data that has not been white balanced first introduces artifacts.
After the first step (debayer) you should have a proper RGB image with the right colors. The remaining steps are just cosmetics, so I'm guessing there's something wrong at step one.
One problem could be that the Bayer pattern you're using to generate the RGB image is different from the CFA pattern of the camera. Match the sensor alignment in your code to that of the camera!
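A quick Matlab sketch for checking the alignment (the RGGB layout, file name, black level and white-balance multipliers below are all assumptions to replace with your camera's real values; even image dimensions are assumed):
raw = double(imread('raw_mosaic.pgm'));   % placeholder: mosaic as a single channel
raw = max(raw - 2048, 0);                 % hypothetical black level
R  = raw(1:2:end, 1:2:end) * 2.0;         % red sites, hypothetical WB multiplier
G1 = raw(1:2:end, 2:2:end);               % green sites on red rows
G2 = raw(2:2:end, 1:2:end);               % green sites on blue rows
B  = raw(2:2:end, 2:2:end) * 1.4;         % blue sites, hypothetical WB multiplier
rgb = cat(3, R, (G1+G2)/2, B);            % half-resolution preview, no demosaic
imshow(rgb / max(rgb(:)));
% Wrong colors in this preview => try the other offsets (GRBG, GBRG, BGGR).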
