Extract image 2 from image 1

I want to draw a bounding box around the text, as shown in Image 2, starting from Image 1. Can anyone suggest a good way to do this, or an algorithm or tutorial?

As you haven't suggested a tool, I will use ImageMagick straight at the command line as it is installed on most Linux distros and is available for OSX and Windows. It also has PHP, Perl, Python and .Net bindings.
So, as your background is uniform (ish) you can just use trim to trim it off:
convert image.jpg -fuzz 20% -trim result.jpg
Now you can add a border like this:
convert result.jpg -bordercolor black -border 5 result.jpg
Except you want the other grey background to be retained so that doesn't work for you. So, instead of actually trimming, we can ask ImageMagick where it "would" trim but to not actually do it like this:
convert image.jpg -fuzz 20% -format %@ info:
81x22+1+14
So, we know it would make an 81x22px box starting 1 pixel in from the left and 14 pixels down from the top, so we'll just draw a rectangle there instead of trimming:
convert image.jpg -fill none -stroke black -draw "rectangle 1,14 82,36" result.jpg
Or, if you want the outline fatter:
convert image.jpg -fill none -stroke black -strokewidth 5 -draw "rectangle 1,14 82,36" result.jpg
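If you want to script this, the geometry string ImageMagick reports can be turned into the two rectangle corners mechanically. A minimal Python sketch (the helper name geometry_to_rectangle is my own):

```python
import re

def geometry_to_rectangle(geometry):
    """Parse an ImageMagick geometry string like '81x22+1+14' into
    the (x1, y1, x2, y2) corners expected by -draw "rectangle ..."."""
    m = re.match(r"(\d+)x(\d+)\+(\d+)\+(\d+)$", geometry)
    if not m:
        raise ValueError("unexpected geometry: %r" % geometry)
    w, h, x, y = map(int, m.groups())
    return x, y, x + w, y + h

x1, y1, x2, y2 = geometry_to_rectangle("81x22+1+14")
print('-draw "rectangle %d,%d %d,%d"' % (x1, y1, x2, y2))
```

For the example above this prints the same corners used in the -draw command.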

For a uniform background, a simple solution would be to identify all of the pixels that do not match the background color and then find the minimum and maximum indices in each axis of those pixels to define a rectangle.
For instance, if you were using Matlab, this might resemble:
Use 'find' to identify non-background pixels (e.g. linearIndices = find(~(image1 == background)), where background is either a hard-coded set of RGB values corresponding to the background pixels or a set of RGB values identified by the mode of the image).
'find' will return linear indices rather than subscripts (i.e. the bottom-right corner of a 3x3 matrix is 9, not [3,3]), so use 'ind2sub' to convert to subscripts (e.g. [I,J] = ind2sub(imageSize, linearIndices)).
Use 'max' and 'min' to find the range in each axis (e.g. rangeX = [min(I) max(I)]; rangeY = [min(J) max(J)]).
Change the pixels along the min and max indices to the border colour. For instance, image1( rangeX(1), rangeY(1):rangeY(2) ) = boxColour (where boxColour holds the RGB values of the colour you want the box to be) would draw the top border of the box. Repeat this process for the three other borders and you're done.
Of course this approach only works if the background is completely uniform. It also assumes you only want to draw a border that is one pixel thick.
While the function recommendations correspond specifically to Matlab functions, the thought process behind those functions could likely be ported elsewhere.
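To illustrate porting it elsewhere, here is a minimal pure-Python sketch of the same steps, assuming the image is a list of rows of pixel values and the border is one pixel thick (function names are my own):

```python
def bounding_box(image, background):
    """Return (min_row, max_row, min_col, max_col) of all pixels
    that differ from the background colour."""
    rows = [i for i, row in enumerate(image)
            for p in row if p != background]
    cols = [j for row in image
            for j, p in enumerate(row) if p != background]
    return min(rows), max(rows), min(cols), max(cols)

def draw_box(image, border, background=0):
    """Draw a one-pixel border around the non-background region,
    modifying the image in place."""
    r0, r1, c0, c1 = bounding_box(image, background)
    for c in range(c0, c1 + 1):
        image[r0][c] = border       # top edge
        image[r1][c] = border       # bottom edge
    for r in range(r0, r1 + 1):
        image[r][c0] = border       # left edge
        image[r][c1] = border       # right edge
    return image
```

As in the Matlab version, this only works when the background is completely uniform.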

Related

Imagemagick - How to create an annotation that would fit its size into any image?

I need to write a bash script that would annotate a given image using imagemagick. The problem is, that image could be any size and its annotation should look nearly same on any image.
Output result should look like this on any image:
So there are some points I'd like to figure out:
How to draw a transparent rectangle with a text, which will adjust its size depending on image size?
How to place this rectangle at the bottom of an image like in the example?
Using ImageMagick version 6 or 7 you can make a label sized to fit any input image, with a semi-transparent background, and composite it at the bottom of the input image to get the result you describe. Here is a command with IM 6 that does it...
convert input.png -set option:size %[w]x \
-fill white -background "#00000080" \
\( label:"This is my text." \
-virtual-pixel background -distort SRT "0.8 0" \
-virtual-pixel none -distort SRT "0.8 0" \) \
-gravity south -composite result.png
That uses the width of the input image "%[w]" to set the width of the label. It sets the text color to white and the background to semi-transparent black, "#00000080".
Inside the parentheses it creates your label. It uses "distort SRT" to scale the label down a bit to pull it in from the edges. Then it scales the label down a bit more to add some transparent space around it.
After the label is created it sets the gravity to "south" and composites the label onto the input image. It finishes by writing the output file.
Using IM 7 you'll need to change "convert" to "magick". For Windows change the continued line backslashes "\" to carets "^" and get rid of the backslashes that escape the parentheses.
Edited to add: Normally you'd use "-size WxH" ahead of making a "label:" to constrain it within particular dimensions. I used "-set option:size" instead because it allows for using percent escapes like "%[w]" with IM 6. That way the label dimensions are relative to any input image width.

How to convert coloured Captchas to Grey Scale?

I'm trying to make a captcha solver, but I have run into some trouble. The captcha that I am trying to solve has different coloured backgrounds.
I need to convert it to black text on white background so that it could easily be recognised by tesseract-ocr
I have tried
convert *.png -threshold 50% *.png which only shows some of the digits.
The problem with simple 50% thresholding is that both colours may be lighter than 50% grey and will therefore both come out as white. Or, conversely, both colours may be darker than mid-grey and therefore both come out as black.
You need to do a 2-colour quantisation to get just 2 colours, then go to greyscale and normalize so the lighter colour goes white and the darker one goes black. I am not near a computer to test, but that should be:
convert input.png -colors 2 -colorspace gray -normalize result.png
Now, you will find some images come out inverted (white text on black instead of black on white), so you need a polarity check. You can either test the top-left corner pixel and invert the image depending on its colour, or take the mean of the image: the mean tells you whether the image is largely white or largely black, and therefore whether it needs inverting.
Invert with:
convert input.png -negate output.png
Get top-left pixel with:
convert image.png -format '%[pixel:p{0,0}]' info:-
Get mean value with:
convert image.png -format "%[mean]" info:-

How to trim/crop an image uniformly in ImageMagick?

Assume I have an original image (gray background) with a black circle a bit down and to the right (not centered), and the minimum space from any edge of the circle to the image edge is, let's say, 75px. I would like to trim the same amount of space on all sides, and the space should be the maximum possible without cropping the actual object in the image (area in magenta in the image). Would love to hear how this could be solved.
Thanks in advance!
If I understand the question correctly, you want to trim an image not based on minimum bounding rectangle, but outer bounding rectangle.
I would do something like this.
Given that I create an image with:
convert -size 200x200 xc:gray75 -fill black -draw 'circle 125 125 150 125' base.png
I would drop the image to a binary edge & trim everything down to the minimum bounding rectangle.
convert base.png -canny 1x1 -trim mbr.png
This will generate mbr.png image which will also have the original page information. The page information can be extracted with identify utility to calculate the outer bounding rectangle.
sX=$(identify -format '%W-(0 %X)-%w\n' mbr.png | bc)
sY=$(identify -format '%H-(0 %Y)-%h\n' mbr.png | bc)
Finally apply the calculated result(s) with -shave back on the original image.
convert base.png -shave "${sX}x${sY}" out.png
I assume that you want to trim your image (or shave in ImageMagick terms) by minimal horizontal or vertical distance to edge. If so this can be done with this one liner:
convert circle.png -trim -set page "%[fx:page.width-min(page.width-page.x-w,page.height-page.y-h)*2]x%[fx:page.height-min(page.width-page.x-w,page.height-page.y-h)*2]+%[fx:page.x-min(page.width-page.x-w,page.height-page.y-h)]+%[fx:page.y-min(page.width-page.x-w,page.height-page.y-h)]" -background none -flatten output.png
This may look complicated, but in reality it isn't. First trim the image. The result will still carry the stored page geometry, including the original width, height and actual offsets. With this info I can set the page geometry (correct width and height and new offsets) using ImageMagick FX expressions. Finally, flattening the image produces the desired output.
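The arithmetic behind both answers boils down to a few lines: given the page size and the trimmed object's bounding box, the amount to shave equally from every side is the smallest margin between the object and any image edge. A minimal Python sketch (the function name is my own):

```python
def uniform_shave(page_w, page_h, w, h, x, y):
    """Given the original page size and the trimmed object's
    bounding box (width, height, x offset, y offset), return the
    margin to shave equally from all four sides: the smallest
    distance from the object to any image edge."""
    margins = (x,                    # left margin
               y,                    # top margin
               page_w - x - w,       # right margin
               page_h - y - h)       # bottom margin
    return min(margins)
```

The result corresponds to the value fed to -shave in the first answer and to the min(...) terms in the FX one-liner.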

How to replace a color by another?

This is a general question (between programming and math):
How to replace a color by another in a bitmap?
I assume the bitmap is a 2D-array
Example : Let's replace RGB color [234,211,23] by RGB color [234,205,10].
How do I do this color replacement such that the neighbouring colors are replaced as well, i.e. a smooth color replacement?
I assume there exist methods like linear interpolation for neighbouring colors, etc.
What are the classical ways to do this?
Here is an example of how to detect the color RGB 234,211,23 and its neighbouring colors in a 500x500px image bitmap array x:

for i in range(500):
    for j in range(500):
        if (abs(x[i, j][0] - 234) < THRESH and
                abs(x[i, j][1] - 211) < THRESH and
                abs(x[i, j][2] - 23) < THRESH):
            x[i, j] = ...  # how to set the new color in a smooth way?
I think a good approach is to convert the whole image to a different color space; I'd use the HSV color space rather than RGB. You can find some info here: HSV color space.
When you want to search for a specific color, the RGB model is not the best option, principally because the same color varies so much between its brighter and darker shades. Thresholds in RGB space are not reliable in this case.
In HSV space you have one channel that selects the color of interest, while the other two channels carry the saturation and brightness of the color. You can get accurate results using only the Hue channel (the first). The one thing you need to take care of is that this channel must be treated as circular, because its maximum and minimum values are very similar in color: both are red.
Once you have detected the color you can set the new one while keeping the saturation and brightness of the old color; doing this makes the color change look smoother.
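A minimal Python sketch of this idea using the standard-library colorsys module; the function name and the default hue tolerance are my own choices:

```python
import colorsys

def replace_hue(image, target_rgb, new_hue, hue_tol=0.05):
    """Replace pixels whose hue is close to the target colour's hue,
    keeping each pixel's own saturation and value so shading is
    preserved. Colours are (r, g, b) tuples in 0..255; hues are in
    0..1 and compared on a circle."""
    target_h = colorsys.rgb_to_hsv(*(c / 255 for c in target_rgb))[0]
    out = []
    for row in image:
        new_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            d = abs(h - target_h)
            d = min(d, 1 - d)                 # circular hue distance
            if d < hue_tol:
                r2, g2, b2 = colorsys.hsv_to_rgb(new_hue, s, v)
                new_row.append((round(r2 * 255), round(g2 * 255),
                                round(b2 * 255)))
            else:
                new_row.append((r, g, b))
        out.append(new_row)
    return out
```

Because only the hue is swapped, lighter and darker shades of the target colour map to correspondingly lighter and darker shades of the replacement.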
If you wish to replace the colour of a pixel in a 2D array, you do so as follows:
Array[x][y] = new_value
where x and y stand for the location of the pixel. Keep in mind that mathematical plots use a right-handed system, where y values grow from bottom to top, while image coordinates use a left-handed system, where y values grow from top to bottom. The exact syntax for assigning the new colour depends on the programming language you are using (the example above works in Ruby). Also, some programming languages already offer built-in image manipulation functions, so make sure to read the documentation to avoid reimplementing something that already exists.
New Answer
New answer coming - now I understand that you mean the neighbours in the colour sense rather than the geometric sense...
You could calculate the vector colour distance from each pixel of your image to the colour you want to change and use that as a mask. So, if we create the same image as below... say we have a red-yellow gradient as background with a blue square on it and we wish to replace the central orange colour across the middle.
# Make red-yellow gradient with blue square within
convert -size 500x500 gradient:red-yellow -fill none -stroke blue -strokewidth 10 -draw "rectangle 100,100 400,400" image.png
Now clone that image, and fill the clone with the orange tone we want to replace, then calculate the vector colour distance from each pixel to that orange tone:
convert image.png \( +clone -fill "rgb(255,128,0)" -colorize 100% \) \
-compose difference -composite \
-evaluate Pow 2 -separate \
-evaluate-sequence Add -evaluate pow 0.5 \
-negate \
colour_distance.png
You can then use this colour_distance.png as a mask for alpha-compositing, so if we decide to replace that orangey tone with pink, we can do this:
convert image.png \
\( +clone -fill fuchsia -colorize 100% \) \
\( colour_distance.png -sigmoidal-contrast 20 \) \
-composite z.png
Note that I changed the pow 0.5 to pow 0.3 to roll off the mask more sharply.
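The same mask-and-composite idea can be sketched in plain Python; the function names are my own, and 441.673 is sqrt(3 * 255^2), the largest possible RGB distance:

```python
def colour_distance_mask(image, target, max_dist=441.673):
    """Per-pixel Euclidean distance from `target` (r, g, b), scaled
    to 0..1 and inverted so pixels matching the target are near 1.
    Mirrors the difference / Pow 2 / Add / pow 0.5 / negate pipeline."""
    out = []
    for row in image:
        out.append([1 - (sum((c - t) ** 2
                             for c, t in zip(p, target)) ** 0.5) / max_dist
                    for p in row])
    return out

def blend(image, replacement, mask):
    """Alpha-composite `replacement` over `image` using `mask`
    (per-pixel weights in 0..1) as the opacity of the replacement."""
    return [[tuple(round(m * r + (1 - m) * o) for o, r in zip(op, rp))
             for op, rp, m in zip(orow, rrow, mrow)]
            for orow, rrow, mrow in zip(image, replacement, mask)]
```

Sharpening the mask (as -sigmoidal-contrast does above) narrows the band of colours that get fully replaced while still blending at the edges.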
Original Answer
Here's one way to do it. Say we have a red-yellow gradient as background with a blue square on it and we wish to replace the blue with green...
First, extract all the blue pixels onto a transparent background, then change them to green and blur them so they spread into the neighbouring pixels. Then overlay the blurred green pixels onto the original image.
I choose to do it with ImageMagick but you seem happy to adapt to other languages and libraries...
#!/bin/bash
# Make red-yellow gradient with blue square within
convert -size 500x500 gradient:red-yellow -fill none -stroke blue -strokewidth 10 -draw "rectangle 100,100 400,400" image.png
# Turn blue pixels green, make everything else transparent, then blur the lot
convert image.png -fill green -opaque blue -fill white +opaque green -transparent white -blur x6 x.png
# Now overlay the blurred greeness onto the original after replacing blues with green
convert x.png \( image.png -fill green -opaque blue \) -compose overlay -composite result.png
Image.png
x.png (the blurred, colour-replaced image)
result.png

To remove background greyed pixels from image

I want to remove the unnecessary greyed background pixels in the above image.
May I know how to do that?
Quick and dirty with ImageMagick:
convert 9AWLa.png -blur x1 -threshold 50% out.png
Somewhat better, still with ImageMagick:
convert 9AWLa.png -morphology thicken '1x3>:1,0,1' out.png
Updated Answer
It is rather harder to keep the noisy border intact while removing noise elsewhere in the image. Is it acceptable to trim 3 pixels off all the way around the edge, then add a 3 pixel wide black border back on?
convert 9AWLa.png -morphology thicken '1x3>:1,0,1' \
-shave 3x3 -bordercolor black -border 3 out.png
Some methods that come to mind are:
If the background is grey rather than sparse black dots, you can convert the image to binary by thresholding it at a proper greyscale value, i.e. all pixels above a particular value become white and all below become black.
Another thing you can do is first smooth the picture with a filter such as a mean or median filter, and then convert to binary as described in the previous point.
I think the unnecessary background can be removed this way.
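Both suggestions can be sketched in plain Python on a greyscale image stored as a list of rows (function names are my own):

```python
def median_filter_3(gray):
    """3x3 median filter on a greyscale image, leaving the
    one-pixel border unchanged. Isolated specks vanish because
    the median of their neighbourhood ignores single outliers."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(gray[a][b]
                            for a in (i - 1, i, i + 1)
                            for b in (j - 1, j, j + 1))
            out[i][j] = window[4]        # median of 9 values
    return out

def binarize(gray, thresh=128):
    """Threshold to a binary image: 255 above `thresh`, else 0."""
    return [[255 if p > thresh else 0 for p in row] for row in gray]
```

Running the median filter before thresholding removes the sparse grey dots while keeping the solid strokes intact.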
