How can I detect a known image or pattern within an image so that I can replace it with my own text?
Input option 1 (detect aaa and bbb separately):
Input option 2 (detect red value and blue value separately):
Output:
Running code
I'll show you the code and results running in Mathematica using your option 2.
First we read the image
m = Import@"C:\\imagereplace.png"
Then we separate the RGB channels
m1 = ColorSeparate[m]
Obtaining
Next we correlate the red-channel image (the one on the right) with a box matrix, retaining only the rectangular shape, and transform the result into a black-and-white image.
Binarize@ImageCorrelate[m1[[3]], BoxMatrix[3]];
Obtaining a full size image but containing only the black rectangle.
Now we find the edges of the rectangle (just a loop).
Having the size and coordinates of the rectangles, we create a raster image of the text, corresponding to the detected size, getting:
r1 = Binarize@Rasterize["My Text", RasterSize -> {jmax - jmin + 1, imax - imin + 1},
  ImageSize -> {jmax - jmin + 1, imax - imin + 1}]
Now we replace the data block with the new one, obtaining:
I won't do the blue channel, as it is the same procedure.
HTH!!
Note: the image correlation is the only trick used; the rest is ordinary code. Here you can find the basics about correlation.
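A rough MATLAB sketch of the same trick, for comparison (the file name, colour thresholds, kernel size and the use of insertText from the Computer Vision Toolbox are all assumptions, not part of the answer above):

img = imread('imagereplace.png');                            % assumed file name
red = img(:,:,1) > 200 & img(:,:,2) < 60 & img(:,:,3) < 60;  % pixels that look red (assumed thresholds)
score = conv2(double(red), ones(7), 'same');                 % correlate with a 7x7 box kernel
solid = score == 49;              % keep pixels whose whole 7x7 neighbourhood is red, so the
                                  % filled rectangle survives while thin red lines do not
[r, c] = find(solid);
top = min(r) - 3; bottom = max(r) + 3;     % grow back the 3-pixel border eaten by the kernel
left = min(c) - 3; right = max(c) + 3;
box = [left top right-left+1 bottom-top+1];                  % detected block, if you need its size
out = insertText(img, [left top], 'My Text', 'BoxColor', 'white', 'BoxOpacity', 1);
figure, imshow(out);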
I have an image.
I have obtained its phase-only reconstructed image using the fftn function.
My aim is:
Using phase-only reconstruction of the given image, I will get only edges and lines.
Then I want to color these lines and edges, say with red or blue, in the phase-only reconstructed image.
Then I want to put this "colored" image on the original image, so that the edges and lines from the original image are highlighted in the respective red or blue color.
But when I run the code, I get the following error:
Subscript indices must either be real positive integers or logicals.

Error in sagar_image (line 17)
superimposing(ph) = 255;
So what should I do?
clc;
close all;
clear all;
img=imread('D:\baby2.jpg');
figure,imshow(img);
img=rgb2gray(img);
fourier_transform=fftn(img);%take fourier transform of gray scale image
phase=exp(1j*angle(fourier_transform));
phase_only=ifftn(phase);%compute phase only reconstruction
figure,imshow(phase_only,[]);
ph=im2uint8(phase_only);%convert image from double to uint8
superimposing = img;
superimposing(ph) = 255;
figure, imshow(superimposing, []);
superimposing(ph) = 255 could mean one of two things:
1. ph contains indices of superimposing that you wish to paint white (255).
2. ph is a 'logical' image of the same size as superimposing; every pixel that evaluates to 'true' in ph would be painted white in superimposing.
What you meant was probably:
threshold = 0.2;
superimposing(real(phase_only) > threshold) = 255;
If you want to fuse the two images and see them one on top of the other, use imfuse:
imshow(imfuse(real(phase_only), img, 'blend'))
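Putting it together, a possible end-to-end version of what the question is after (a sketch, not a definitive implementation; the 0.2 threshold is the value suggested above and may need tuning for your image):

img = rgb2gray(imread('D:\baby2.jpg'));
fourier_transform = fftn(double(img));                  % Fourier transform of the grayscale image
phase_only = ifftn(exp(1j*angle(fourier_transform)));   % phase-only reconstruction
mask = real(phase_only) > 0.2;                          % edge/line mask (assumed threshold)
R = img; R(mask) = 255;                                 % paint the masked pixels red...
G = img; G(mask) = 0;                                   % ...by zeroing green and blue there
B = img; B(mask) = 0;
superimposed = cat(3, R, G, B);
figure, imshow(superimposed);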
How can I convert an RGB or indexed image into grayscale without using B = ind2gray(A,map) in MATLAB?
I don't understand why you just can't use ind2gray.... but if you really have to implement this from first principles, that's actually not too bad. What ind2gray does (IIRC) is take an indexed image and, using the colour map, convert it into a colour image. Once you've done that, you convert the resulting colour image to grayscale.

The indexed image is actually a grid of lookup values that span [1,N], and the colour map is an N x 3 array where each row is an RGB tuple / colour. It should be noted that the colour map is double precision, with each component spanning [0,1]. Therefore, for each location in the indexed image, it tells you which tuple from the lookup table is mapped to that location. For example, if we had an indexed image such that:
X =
[1 2
3 4]
... and we had a colour map that was 4 x 3, this means that the top left corner gets the first colour denoted by the first row of the map, the top right corner gets the second colour, bottom left corner gets the third colour and finally the bottom right corner gets the fourth colour.
The easiest way to do this would be to use X to index into each column of the input map, then concatenate all of the results together into a single 3D matrix. Once you're done, you can convert the image into its luminance / grayscale counterpart. Given that you have an index image called X and its corresponding colour map, do this:
colour_image = cat(3, map(X), map(X + size(map,1)), map(X + 2*size(map,1)));
gray = sum(bsxfun(@times, colour_image, permute([0.2126 0.7152 0.0722], [3 1 2])), 3);
The first statement is very simple. Take note that map is N x 3 and X can range between [1,N]. If we use X to index directly into map, we would only be grabbing the first column of values, i.e. the first component of the colours / red. We need to access the same rows for the second column, and because MATLAB accesses elements in column-major order, we simply offset all of the indices by N so that we pick up the values in the second column and get the second component of the colours / green. Finally, you'd offset by 2N to get the third component of the colours / blue. We take the red, green and blue channels and concatenate them together to get a 3D image.
Once we get this 3D image, it's a matter of converting the colour image into luminance. I am using the SMPTE Rec. 709 standard to convert from a colour pixel to luminance. That relationship is:
Y = 0.2126 R + 0.7152 G + 0.0722 B
That's the purpose of the second statement. We will take each component, multiply them by their respective weight and sum all of the values together. You should have your luminance image as a result.
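As an aside, an equivalent way to build the same colour image (a sketch, assuming X and map are given as described above) is to index map with X(:) and reshape the result:

rgb  = reshape(map(X(:), :), [size(X) 3]);                          % same M x N x 3 colour image as colour_image above
gray = 0.2126*rgb(:,:,1) + 0.7152*rgb(:,:,2) + 0.0722*rgb(:,:,3);   % Rec. 709 luminance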
To check to see if this works, we can use the trees dataset from the image processing toolbox. This comes with an index image X, followed by a colour map map:
load trees;
%// Previous code
colour_image = cat(3, map(X), map(X + size(map,1)), map(X + 2*size(map,1)));
gray = sum(bsxfun(@times, colour_image, permute([0.2126 0.7152 0.0722], [3 1 2])), 3);
%// Show colour image as well as resulting gray image
figure; subplot(1,2,1);
imshow(colour_image);
subplot(1,2,2);
imshow(gray);
We get:
We can actually show that this is indeed the right output by converting the image to grayscale using ind2gray, then showing the difference between the two images. If the images are equal, the resulting image should be black, which means that the outputs produced by the above procedure and by ind2gray are identical.
Therefore:
gray2 = ind2gray(X, map);
figure;
imshow(abs(gray-gray2));
We get:
... yup... zilch, nothing, zero, notta.... so what I implemented in comparison to ind2gray is basically the same thing.
I'm looking for an algorithm.
I want to draw an image (a 2D array of pixels) with the lowest number of rectangles. It's possible to overwrite already-drawn areas with new rectangles.
In the first step I convert every pixel of the picture to a 1x1 quad with a color. Then I want to reduce the number of objects by creating bigger rectangles.
In the end I want an array of rectangles. When I iterate over it and draw it on the pane, I want to get the original picture.
Is there an algorithm for this?
The runtime doesn't matter.
Example1:
|.bl.|.bl.|.bl.|      |.bl...........|
|.bl.|.gr.|.bl.|  ->  |..............|  +  |.gr.|
|.bl.|.bl.|.bl.|      |..............|
bl = black, gr = green
Example2:
|....|....|.bl.|
|.bl.|.bl.|.bl.|  ->  |.bl.|.bl.|.bl.|  +  |.bl.|
|.bl.|.bl.|.bl.|      |.bl.|.bl.|.bl.|
I was looking for Quad Tree Compression :)
http://www.gitta.info/DataCompress/en/html/rastercomp_chain.html
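For reference, a rough sketch of that idea using the Image Processing Toolbox's qtdecomp (which needs a square image whose side length is a power of two; the checkerboard test image is just an assumption for illustration):

I = double(checkerboard(32) > 0.5);   % 256x256 test image made of uniform blocks
S = qtdecomp(I, 0);                   % split every block that is not uniform
rects = [];                           % each row: [x y width height value]
for dim = 2.^(0:8)                    % possible block sizes 1, 2, ..., 256
    [vals, r, c] = qtgetblk(I, S, dim);
    for k = 1:numel(r)
        rects(end+1, :) = [c(k) r(k) dim dim vals(1, 1, k)]; %#ok<AGROW>
    end
end
% Drawing every rectangle in rects reproduces the original image; the blocks
% produced by the quadtree never overlap.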
I've got two images, one 100x100 that I want to plot in grayscale and one 20x20 that I want to plot using another colormap. The latter should be superimposed on the former.
This is my current attempt:
A = randn(100);
B = ones(20);
imagesc(A);
colormap(gray);
hold on;
imagesc(B);
colormap(jet);
There are a couple of problems with this:
I can't change the offset of the smaller image. (They always share the upper-left pixel.)
They have the same colormap. (The second colormap changes the color of all pixels.)
The pixel values are normalised over the composite image, so that the first image changes if the second image introduces new extreme values. The scalings for the two images should be separate.
How can I fix this?
I want an effect similar to this, except that my coloured overlay is rectangular and not wibbly:
Just change it so that you pass in a full and proper color matrix for A (i.e. a 100x100x3 matrix), rather than letting imagesc map it through the colormap:
A = rand(100); % Using rand not randn because image doesn't like numbers > 1
A = repmat(A, [1, 1, 3]);
B = rand(20); % Changed to rand to illustrate effect of colormap
imagesc(A);
hold on;
Bimg = imagesc(B);
colormap jet;
To set the position of B's image within its parent axes, you can use its XData and YData properties, which are both set to [1 20] when this code has completed. The first number specifies the coordinate of the leftmost/uppermost point in the image, and the second number the coordinate of the rightmost/lowest point in the image. It will stretch the image if it doesn't match the original size.
Example:
xpos = get(Bimg, 'XData');
xpos = xpos + 20; % shift right a bit
set(Bimg, 'XData', xpos);
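Alternatively (a small sketch, not part of the answer above), you can position the overlay when it is created by using the imagesc(x,y,C) form, which sets XData and YData directly:

A = repmat(rand(100), [1, 1, 3]);   % grayscale background as an RGB matrix, as above
B = rand(20);
imagesc(A); hold on;
imagesc([41 60], [41 60], B);       % place the 20x20 overlay at rows/columns 41..60
colormap jet;
axis image;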
I'm developing a handwriting recognition project. One of the requirements of this project is taking an image as input; this image only contains some character objects in random locations, and first I must extract these characters to process in the next step.
Now I'm stuck on a hard problem: how do I extract one character from a black-and-white (binary) image, or how do I draw a bounding rectangle around a character in a black-and-white (binary) image?
Thanks very much!
If you are using MATLAB (which I hope you are, since it is awesome for tasks like these), I suggest you look into the built-in functions bwlabel() and regionprops(). These should be enough to segment out all the characters and get their bounding box information.
Some sample code is given below:
%Read image
Im = imread('im1.jpg');
%Make binary
Im(Im < 128) = 1;
Im(Im >= 128) = 0;
%Segment out all connected regions
ImL = bwlabel(Im);
%Get labels for all distinct regions
labels = unique(ImL);
%Remove label 0, corresponding to background
labels(labels==0) = [];
%Get bounding box for each segmentation
Character = struct('BoundingBox',zeros(1,4));
nrValidDetections = 0;
for i=1:length(labels)
    D = regionprops(ImL==labels(i));
    if D.Area > 10
        nrValidDetections = nrValidDetections + 1;
        Character(nrValidDetections).BoundingBox = D.BoundingBox;
    end
end
%Visualize results
figure(1);
imagesc(ImL);
xlim([0 200]);
for i=1:nrValidDetections
    rectangle('Position',[Character(i).BoundingBox(1) ...
                          Character(i).BoundingBox(2) ...
                          Character(i).BoundingBox(3) ...
                          Character(i).BoundingBox(4)]);
end
The image I read in here has values from 0-255, so I have to threshold it to make it binary. As the dots above i and j can be a problem, I also threshold on the number of pixels that make up each distinct region.
The result can be seen here:
https://www.sugarsync.com/pf/D775999_6750989_128710
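As a side note, regionprops can also take the binary image directly and return all bounding boxes at once, so the loop over labels is not strictly needed. A shorter sketch along those lines (same assumed file name and thresholds as above):

Im = imread('im1.jpg');
if size(Im, 3) == 3, Im = rgb2gray(Im); end
bw = Im < 128;                                   % characters assumed darker than background
stats = regionprops(bw, 'BoundingBox', 'Area');  % one struct per connected region
stats = stats([stats.Area] > 10);                % same area filter as above
figure, imshow(bw); hold on;
for k = 1:numel(stats)
    rectangle('Position', stats(k).BoundingBox, 'EdgeColor', 'r');
end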
In my case, a better way to extract the characters was histogram-based segmentation. I can only share some papers with you:
http://cut.by/j7LE8
http://cut.by/PWJf1
Maybe these can help you.
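A rough sketch of what histogram (projection-profile) segmentation usually looks like in MATLAB, in case it helps; the file name and threshold are assumptions:

line_img = imread('textline.png');           % hypothetical scanned line of text
if size(line_img, 3) == 3, line_img = rgb2gray(line_img); end
bw = line_img < 128;                         % black text becomes true
profile = sum(bw, 1);                        % number of ink pixels in each column
ink = profile > 0;
starts = find(diff([0, ink]) == 1);          % columns where a character begins
stops  = find(diff([ink, 0]) == -1);         % columns where it ends
glyphs = cell(1, numel(starts));
for k = 1:numel(starts)
    glyphs{k} = bw(:, starts(k):stops(k));   % one character, ready for the recogniser
end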
One simple option is to use an exhaustive search, like (assuming text is black and background is white):
Starting from the leftmost column, step through all the rows checking for a black pixel.
When you encounter your first black pixel, save your current column index as left.
Continue traversing the columns until you encounter a column with no black pixels in it, save this column index as right.
Now traverse the rows in a similar fashion, starting from the topmost row and stepping through each column in that row.
When you encounter your first black pixel, save your current row index as top.
Continue traversing the rows until you find one with no black pixels in it, and save this row index as bottom.
Your character will be contained within the box defined by (left - 1, top - 1) as the top-left corner and (right, bottom) as the bottom-right corner.
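A compact MATLAB sketch of this scan, using any/find instead of explicit loops (it assumes a grayscale image of a single character with dark text on a light background; here left/right and top/bottom are the first and last ink columns/rows, so the box bounds the character exactly):

I = imread('character.png');                 % hypothetical input image
bw = I < 128;                                % true = black text pixel
cols = any(bw, 1);                           % columns containing at least one black pixel
rows = any(bw, 2);                           % rows containing at least one black pixel
left   = find(cols, 1, 'first');
right  = find(cols, 1, 'last');
top    = find(rows, 1, 'first');
bottom = find(rows, 1, 'last');
box = [left, top, right - left + 1, bottom - top + 1];   % [x y width height]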