In my application users are able to upload photos. Sometimes I want them to hide some information in the picture, for instance the registration plate of a vehicle or the personal address on an invoice.
To meet that need I plan to pixelate a portion of the image. How can I pixelate an image, given the coordinates of the area to hide and the size of that area?
I found out how to pixelate (by scaling the image down and back up), but how can I target only one area of the image?
The area is specified by two pairs of coordinates (x1, y1, x2, y2), or a pair of coordinates and dimensions (x, y, width, height).
I am at work at the moment so I cannot test any code, but I would see if you could work with -region, or else use a mask:
Copy the image and pixelate the whole copy, create a mask of the required area, cut a hole in the original image with the mask, and overlay the result over the pixelated image.
You could modify this code (quite old, and it could probably be improved on):
// Get the image size to create the mask
// This can be done within ImageMagick, but as we are using PHP this is simple.
$size = getimagesize("$input14");
// Create a mask with a round hole
$cmd = " -size {$size[0]}x{$size[1]} xc:none -fill black ".
" -draw \"circle 120,220 120,140\" ";
exec("convert $cmd mask.png");
// Cut out the hole in the top image
$cmd = " -compose Dst_Out mask.png -gravity south $input14 -matte ";
exec("composite $cmd result_dst_out1.png");
// Composite the top image over the bottom image
$cmd = " $input20 result_dst_out1.png -geometry +60-20 -composite ";
exec("convert $cmd output_temp.jpg");
// Crop excess from the image where the bottom image is larger than the top
$cmd = " output_temp.jpg -crop 400x280+60+0 ";
exec("convert $cmd composite_sunflower_hard.jpg ");
// Delete temporary images
unlink("mask.png");
unlink("result_dst_out1.png");
unlink("output_temp.jpg");
Thanks for your answer, Bonzo.
I found a way to achieve what I want with the ImageMagick convert command. It's a three-step process:
I create a pixelated version of the whole source image.
I then build a mask from the original image (to keep the same size), fill it with black (via gamma 0), then draw white rectangles where I want the unreadable areas.
Then I merge the three images (original, pixelated and mask) in a composite operation.
Here is an example with two pixelated areas (a and b).
convert original.png -scale 10% -scale 1000% pixelated.png
convert original.png -gamma 0 -fill white -draw "rectangle X1a, Y1a, X2a, Y2a" -draw "rectangle X1b, Y1b, X2b, Y2b" mask.png
convert original.png pixelated.png mask.png -composite result.png
It works like a charm. Now I will do it with RMagick.
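For anyone who, like me, ends up scripting it: here is a rough sketch of the same pixelate-one-region idea in Python with Pillow rather than RMagick (the file names and coordinates are placeholder values):
from PIL import Image

img = Image.open('original.png')
x, y, w, h = 100, 50, 200, 40                # area to hide (placeholder values)
region = img.crop((x, y, x + w, y + h))
# Average down to ~10%, like -scale 10%, then blocky upscale, like -scale 1000%
small = region.resize((max(1, w // 10), max(1, h // 10)), Image.BILINEAR)
img.paste(small.resize((w, h), Image.NEAREST), (x, y))
img.save('result.png')
This pixelates just the region in place, so no mask or composite step is needed.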
How can I crop the image (or use some other method) so that the summed pixel values on the boundary are lower than at the hole's x and y positions? I mean, in the images below,
the x and y pixel sums should be highest at the white hole, but as the chart shows, the surroundings of the image, especially the x sums, are higher.
In other words, I want to find the pixel sums with the highest values and use them to locate the hole's coordinates. Along the y-axis the summation gives a better result than along x for this image, but not for some others. I crop all 1000 images with the code below, then compute the pixel sums, take the index of the maximum values, and that locates the hole.
import cv2
import numpy as np
from PIL import Image, ImageCms

img = Image.open(r'J:\py.pro\path\picture_1.png')
if img.mode == "CMYK":
    # color profiles can be found at C:\Program Files (x86)\Common Files\Adobe\Color\Profiles\Recommended
    img = ImageCms.profileToProfile(img, "USWebCoatedSWOP.icc", "sRGB_Color_Space_Profile.icm", outputMode="RGB")
img = img.convert('L') # convert image to 8-bit grayscale
# PIL image -> OpenCV image; see https://stackoverflow.com/q/14134892/2202732
img = cv2.cvtColor(np.array(img), cv2.COLOR_GRAY2BGR)
## (1) Convert to gray, and threshold
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th, threshed = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)
## (2) Morph-op to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10,30))
morphed = cv2.morphologyEx(threshed, cv2.MORPH_CLOSE, kernel)
## (3) Find the max-area contour
cnts = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = sorted(cnts, key=cv2.contourArea)[-1]
## (4) Crop and save it
x,y,w,h = cv2.boundingRect(cnt)
dst = img[y:y+h, x:x+w]
# add border/padding around the cropped image
# dst = cv2.copyMakeBorder(dst, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=[255,255,255])
#cv2.imshow("J:\\py.pro\\path\\pic_1.png", dst)
cv2.imwrite("J:\\py.pro\\path\\pic_1.png", dst)
#cv2.waitKey(0)           # only needed if the imshow above is enabled
#cv2.destroyAllWindows()
How about adding a 1-pixel wide white border on all sides and flood-filling white-coloured pixels with black starting from (0,0) so that the flood-fill flows around the edges filling all white areas at image edges?
I'll demonstrate with magenta for the flood-fill so you can see which pixels are affected:
I actually did that with ImageMagick in Terminal, but you can do just the same with OpenCV:
magick lattice.png -bordercolor white -border 1 -fill magenta -draw 'color 0,0 floodfill' result.png
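For the OpenCV route, here is a rough Python sketch of the same border-plus-floodfill idea (untested; 'lattice.png' as in the command above):
import cv2
import numpy as np

img = cv2.imread('lattice.png')
# Add a 1-pixel white border so the fill can flow all the way around the edges
img = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=(255, 255, 255))
# Flood-fill from the top-left corner; magenta is (255, 0, 255) in BGR
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
cv2.floodFill(img, mask, (0, 0), (255, 0, 255))
cv2.imwrite('result.png', img[1:-1, 1:-1])  # strip the border again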
I have an image:
In this image, the OpenCV Hough transform can't detect the big -45 degree line using
minLineLength = 150
maxLineGap = 5
line_thr = 150
linesP = cv.HoughLinesP(dst, 1, np.pi / 180, line_thr, None, minLineLength, maxLineGap)
The only lines found are:
I tried playing with various thresholds also but I can't find the line here.
If I manually crop the image like this:
then I can clearly see the OpenCV Hough transform finding the right line:
I want to find the same line in the non-cropped version. Any suggestions for finding it there?
Also, there can be cases where there is no line at all, or the line doesn't span the full X-axis length.
Examples
I implemented a slightly simpler algorithm than my other answer but in Python with OpenCV this time.
Basically, rather than taking the mean of vertical columns of pixels, it sums the pixels in the columns and chooses the column that is brightest. If I show the padded, rotated image with another image below representing the sums of the columns, you should see how it works:
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image as greyscale
im = cv2.imread('45.jpg',cv2.IMREAD_GRAYSCALE)
# Pad with border so it isn't cropped when rotated
bw=300
bordered = cv2.copyMakeBorder(im, top=bw, bottom=bw, left=bw, right=bw, borderType= cv2.BORDER_CONSTANT)
# Rotate -45 degrees
h, w = bordered.shape   # rows, cols
M = cv2.getRotationMatrix2D((w/2,h/2),-45,1)
paddedrotated = cv2.warpAffine(bordered,M,(w,h))
# DEBUG cv2.imwrite('1.tif',paddedrotated)
# Sum the elements of each column and find column with most white pixels
colsum = np.sum(paddedrotated,axis=0,dtype=float)  # np.float was removed in NumPy 1.24
col = np.argmax(colsum)
# DEBUG cv2.imwrite('2.tif',colsum)
# Fill with black except for the line we have located which we make white
paddedrotated[:,:] = 0
paddedrotated[:,col] = 255
# Rotate back to straight
h, w = paddedrotated.shape
M = cv2.getRotationMatrix2D((w/2,h/2),45,1)
straight = cv2.warpAffine(paddedrotated,M,(w,h))
# Remove padding and save to disk
straight = straight[bw:-bw,bw:-bw]
cv2.imwrite('result.png',straight)
Note that you don't actually have to rotate the image back to straight and crop it to its original size. You could stop after the line that says:
col = np.argmax(colsum)
and use some elementary trigonometry to work out what that means in your original image.
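If you do stop there, rather than doing the trigonometry by hand you can map two points of the located column back through the inverse of the rotation. A sketch, reusing M, col, bordered and bw from the script above:
# Map the bright column back into the original image's coordinates
Minv = cv2.invertAffineTransform(M)   # inverse of the -45 degree rotation
pts = np.array([[col, 0], [col, bordered.shape[0]]], dtype=np.float32)
pts = cv2.transform(pts.reshape(-1, 1, 2), Minv).reshape(-1, 2)
pts -= bw                             # undo the padding offset
print('Line endpoints in the original image:', pts)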
Here is the output:
Keywords: line detection, detect line, rotate, pad, border, projection, project, image, image processing, Python, OpenCV, affine, Hough
I did this on the command-line in Terminal with ImageMagick but you can apply exactly the same technique with OpenCV.
Step 1
Take the image and rotate it 45 degrees introducing black pixels as background where required:
convert 45.jpg -background black -rotate 45 result.png
Step 2
Now, building on the previous command, set every pixel to the median of the box 1px wide and 250px tall centred on it:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 result.png
Step 3
Now, again building on the previous command, rotate it back 45 degrees:
convert 45.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
So, in summary, the entire processing is:
convert input.jpg -background black -rotate 45 -statistic median 1x250 -rotate -45 result.png
Obviously then crop it back to the original size and append side-by-side with the original for checking:
convert 45.jpg -background black -rotate 45 -statistic median 5x250 -rotate -45 +repage -gravity center -crop 184x866+0+0 result.png
convert 45.jpg result.png +append result.png
You can also use the mean statistic plus thresholding rather than the median, since computing a mean is quicker than sorting to find a median; however, it tends to lead to smearing:
convert 45.jpg -background black -rotate 45 -statistic mean 1x250 result.png
Your newly-added image gets processed to this result:
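As noted above, the same technique works in OpenCV. A rough Python translation (a sketch; scipy.ndimage supplies the tall 1x250 median box, since cv2.medianBlur only accepts square kernels, and the rotation sign may need flipping depending on which way your line runs):
import cv2
import numpy as np
from scipy.ndimage import median_filter

im = cv2.imread('45.jpg', cv2.IMREAD_GRAYSCALE)
# Pad first so the rotation doesn't crop the corners (ImageMagick -rotate grows the canvas)
bw = 300
im = cv2.copyMakeBorder(im, bw, bw, bw, bw, borderType=cv2.BORDER_CONSTANT)
h, w = im.shape
# Rotate 45 degrees about the centre
M = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1)
rot = cv2.warpAffine(im, M, (w, h))
# Median of a 1px wide, 250px tall box centred on each pixel
rot = median_filter(rot, size=(250, 1))
# Rotate back and strip the padding
M = cv2.getRotationMatrix2D((w / 2, h / 2), -45, 1)
out = cv2.warpAffine(rot, M, (w, h))[bw:-bw, bw:-bw]
cv2.imwrite('result.png', out)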
The problem is clearly that the line you are searching for is not a line; it actually looks like a train of connected circles and boxes. Therefore, I recommend that you do the following:
Find all the contours in the image using findContours:
import cv2
import numpy as np

img = cv2.imread('image.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
This will return many, many contours, so use a loop to keep only sufficiently long ones. Since the image size is 814x1041 pixels, I consider a contour long if its perimeter is at least 10% of the image width, which is roughly 100 pixels (you will apparently need to tune this value):
long_contours = []
for contour in contours:
    perimeter = cv2.arcLength(contour, True)
    if perimeter > 0.1 * 1018:  # 10% of the image width
        long_contours.append(contour)
Now draw a rotated bounding rectangle around each long contour that might be a line. A long contour is considered a line if its width is much greater than its height, i.e. its aspect ratio is large (such as 8; you will need to tune this value too):
for long_contour in long_contours:
    rect = cv2.minAreaRect(long_contour)             # ((cx, cy), (w, h), angle)
    w, h = rect[1]
    aspect_ratio = max(w, h) / max(min(w, h), 1e-6)  # guard against zero height
    if aspect_ratio > 8:
        box = cv2.boxPoints(rect)
        box = np.intp(box)
        cv2.drawContours(img, [box], 0, (255, 255, 255), cv2.FILLED)
Finally you should get something like that. Please note the code here is for guidance only.
Your original code works fine. The only problem is that your image contains too much information, which messes up the accumulator scores. Everything will work out if you increase the line threshold to 255.
minLineLength = 150
maxLineGap = 5
line_thr = 255
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180.0, line_thr, None, minLineLength, maxLineGap)
Here are the results using that value.
Three lines are detected here because the white strokes are so thick.
[ 1 41 286 326]
[ 0 42 208 250]
[ 1 42 286 327]
Five lines are detected around the same area for the same reason as above. Reducing the stroke thickness using a morphological operation or a distance transform should fix this, as sketched after the examples.
[110 392 121 598]
[112 393 119 544]
[141 567 147 416]
[ 29 263 29 112]
[ 0 93 179 272]
No line found here.
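For the morphological reduction suggested above, a minimal sketch (the file name and kernel size are placeholders to tune):
import cv2
import numpy as np

dst = cv2.imread('edges.png', cv2.IMREAD_GRAYSCALE)  # the binary image fed to HoughLinesP
# Erode to thin the fat white strokes before running the Hough transform
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thinned = cv2.erode(dst, kernel, iterations=2)
linesP = cv2.HoughLinesP(thinned, 1, np.pi / 180.0, 255, None, 150, 5)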
I have a black and white image which has to be rendered on screen as a grayscale image of precise colours. Black should be displayed as rgb(40,40,40) and white as rgb(128,128,128).
The problem is that the software rendering this image does not allow colours to be specified directly; the only parameters I can vary are brightness, contrast and gamma (converting the image to the desired colours beforehand is not an option).
Is there a formula to calculate specific values for those parameters to adjust the colours as described?
Without knowing how they compute brightness and contrast, it is hard to tell you how to do your computation.
Perhaps I still misunderstand, but you can find the min and max values in your image using ImageMagick:
convert image -format %[fx:255*minima] info:
convert image -format %[fx:255*maxima] info:
Those will be in the range of 0 to 255.
As Mark showed above, the transformation is linear, so it obeys the equation
Y = a*X + b
where a is a measure of contrast and b is a measure of brightness; X is your input value and Y is your desired output value.
Thus
Ymax = a*Xmax + b
and
Ymin = a*Xmin + b
Subtracting and solving for a, we get
a = (Ymax-Ymin)/(Xmax-Xmin)
and substituting that into the equation for Ymax and solving for b, we get
b = Ymax - a*Xmax = Ymax - ( (Ymax-Ymin)/(Xmax-Xmin) )*Xmax
Then you can use ImageMagick's -function polynomial to process your image.
In Unix, I would do it as follows:
Xmin=$(convert image -format %[fx:255*minima] info:)
Xmax=$(convert image -format %[fx:255*maxima] info:)
If your image is pure black and pure white, then you can skip the above and just use
Xmin=0
Xmax=255
And your desired values are
Ymin=40
Ymax=128
These are now variables and I can use fx to do the calculations for a and b:
a=$(convert xc: -format "%[fx:($Ymax-$Ymin)/($Xmax-$Xmin)]" info:)
b=$(convert xc: -format "%[fx:($Ymax - $a*$Xmax)/255]" info:)
Note that -function polynomial operates on pixel values normalised to the range 0 to 1, so b has to be divided by 255 (a is a ratio and needs no scaling).
And to convert your image:
convert image -function polynomial "$a,$b" result.png
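As a quick sanity check with the pure black/white case (Xmin=0, Xmax=255, Ymin=40, Ymax=128):
a = (128 - 40) / (255 - 0) = 88/255 ≈ 0.345
b = (128 - 0.345*255) / 255 = 40/255 ≈ 0.157
which agrees with the +level 15.69%,50.2% command in the answer below (40/255 = 15.69% and 128/255 = 50.2%).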
In general, there are several ways to alter an image's contrast, gamma and brightness, and it is difficult to know which method your chosen tool uses and therefore to provide the correct answer.
What you are trying to do is move the blue line (no contrast or brightness changes) in the image below to where the red line (decreased contrast) is:
In general, decreasing the contrast will rotate the blue line clockwise whereas increasing it will rotate it anti-clockwise. In general, increasing the brightness will shift the blue line to the right whereas decreasing the brightness will shift it left. Changing the gamma will likely make the line into a curve.
Can you use ImageMagick at the command line instead?
convert input.png +level 15.69%,50.2% -depth 8 result.png
The percentages are just the target values expressed as fractions of 255: 15.69% = 40/255 and 50.2% = 128/255. If you have v7+, use magick in place of convert.
I made a little gradient for you with:
convert -size 60x255 gradient: -rotate 90 gradient.png
And if you apply the suggested command:
convert gradient.png +level 15.69%,50.2% -depth 8 result.png
You will get this:
And you can check the statistics (min and max) with:
identify -verbose result.png | more
Image: result.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: PseudoClass
Geometry: 255x60+0+0
Units: Undefined
Type: Grayscale
Base type: Palette
Endianess: Undefined
Colorspace: Gray
Depth: 8-bit
Channel depth:
Gray: 8-bit
Channel statistics:
Pixels: 15300
Gray:
min: 40 (0.156863) <--- MIN looks good
max: 128 (0.501961) <--- MAX looks good
mean: 84.0078 (0.329443)
standard deviation: 25.5119 (0.100047)
kurtosis: -1.19879
skewness: 0.000197702
I have an image in which all red objects are detected.
Here's an example:
http://img.weiku.com/waterpicture/2011/10/30/18/road_Traffic_signs_634577283637977297_4.jpg
But when I run an edge detection method on that image, the output is only black. I want to detect the edges of the red objects.
r = im(:,:,1);
g = im(:,:,2);
b = im(:,:,3);
diff = imsubtract(r, rgb2gray(im));   % red channel minus luminance
bw = im2bw(diff, 0.18);
area = bwareaopen(bw, 300);           % remove blobs smaller than 300 px
rm = immultiply(area, r);
gm = g .* 0;
bm = b .* 0;
image = cat(3, rm, gm, bm);           % red-only image
axes(handles.Image);
imshow(image);
I = image;
Thresholding = im2bw(I);
axes(handles.Image);
imshow(Thresholding)
fontSize=20;
edgeimage=Thresholding;
BW = edge(edgeimage,'canny');
axes(handles.Image);
imshow(BW);
When you apply im2bw you want to use only the red channel of I (i.e. the 1st channel). Therefore use this command:
Thresholding =im2bw(I(:,:,1));
for example yields this output:
Just FYI for anyone else who manages to stumble here: the HSV colorspace is better suited for detecting colours than the RGB colorspace. A good example is in gnovice's answer. The main reason for this is that there are colours with a full 255 red value which aren't actually red (yellow can be formed from (255,255,0), white from (255,255,255), magenta from (255,0,255), etc.).
I modified his code for your purpose below:
cdata = imread('roadsign.jpg');
hsvImage = rgb2hsv(cdata); %# Convert the image to HSV space
hPlane = 360.*hsvImage(:,:,1); %# Get the hue plane scaled from 0 to 360
sPlane = hsvImage(:,:,2); %# Get the saturation plane
bPlane = hsvImage(:,:,3); %# Get the brightness plane
% Must get colors with high brightness and saturation of the red color
redIndex = ((hPlane <= 20) | (hPlane >= 340)) & sPlane >= 0.7 & bPlane >= 0.7;
% Show edges
imshow(edge(redIndex));
Output:
Is there a CLI way of getting the average value of all the pixels in an image?
For example, if I have an all-black image, I would want to type:
*cmd* black-img.jpg
and the output in the shell would be 0.
Oh, this is simple:
convert image.jpg -scale 1x1\! txt:-
The command uses ImageMagick's convert to force scaling of the input image down to a size of 1x1 pixel. Output will be something like this for an 8-bit RGBA image:
# ImageMagick pixel enumeration: 1,1,255,srgba
0,0: (151,161,212, 92) #97A1D45C srgba(151,161,212,0.361928)
or this for an 8-bit sRGB image:
# ImageMagick pixel enumeration: 1,1,255,srgb
0,0: (229,226,229) #E5E2E5 srgb(229,226,229)
It represents the color value of the single-pixel image that was produced:
0,0: are the coordinates of this pixel: 1st row in 1st column.
(151,161,212, 92) represents the Red, Green, Blue and Alpha values of the RGBA pixel.
(229,226,229) represents the Red, Green and Blue values of the sRGB pixel.
#97A1D45C and #E5E2E5 are the respective hex values.
Now it's your own job to turn this output into a '0' in the shell if it is a black pixel. :-)
OK, so the original poster does not want the 3 or 4 colour channel values when he asks about 'getting the average value of all the pixels in an image'... he wants one single value, which can be derived by converting the image to grayscale first:
convert image.jpg -colorspace gray -scale 1x1\! txt:-
Sample output for the same 8-bit RGBA (with Alpha channel) image as in the other answer:
# ImageMagick pixel enumeration: 1,1,255,graya
0,0: (119,119,119, 92) #7777775C graya(119,119,119,0.361928)
Output for the 8-bit sRGB image (without Alpha channel) from before:
# ImageMagick pixel enumeration: 1,1,255,gray
0,0: (221,221,221) #DDDDDD gray(221,221,221)
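If all you want is the bare number in the shell, ImageMagick's fx escapes can also print the mean directly, with no parsing needed (a variation on the commands above, not from the original answers):
convert black-img.jpg -colorspace gray -format "%[fx:255*mean]" info:
This prints 0 for an all-black image.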