I am currently applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm together with a denoising algorithm to my photos.
My problem is that I am working with 360° photos. Because the contrast enhancement produces different values at the two edges, the seam is highly noticeable once the photo is joined. How can I mitigate that line? What changes should I make so that it is not visible and the algorithm is applied consistently across the seam?
Original Photo:
Code for Contrast Limited Adaptive Histogram Equalization:
import cv2

# `image` is the 360-degree photo loaded beforehand with cv2.imread
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB) # convert from BGR to LAB color space
l, a, b = cv2.split(lab) # split on 3 different channels
l2 = clahe.apply(l) # apply CLAHE to the L-channel
lab = cv2.merge((l2, a, b)) # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
Result:
After joining into the 360° panorama:
The seam line is highly noticeable because CLAHE is applied without taking into account that the photo will later be joined into a 360° panorama. What can I do?
Here's an answer in C++; you can probably convert it easily to Python/NumPy.
The idea is to use a border region before performing CLAHE and crop the image afterwards.
These are the subimage regions in the original image:
and they will be copied to the left/right of the image like this:
You can probably reduce the border size considerably:
#include <opencv2/opencv.hpp>

int main()
{
cv::Mat img = cv::imread("C:/data/SO_360.jpg");
int borderSize = img.cols / 4;
// make image that can have some border region
cv::Mat borderImage = cv::Mat(cv::Size(img.cols + 2 * borderSize, img.rows), img.type());
// posX, posY, width, height of the subimages
cv::Rect leftBorderRegion = cv::Rect(0, 0, borderSize, borderImage.rows);
cv::Rect rightBorderRegion = cv::Rect(borderImage.cols - borderSize, 0, borderSize, borderImage.rows);
cv::Rect imgRegion = cv::Rect(borderSize, 0, img.cols, borderImage.rows);
// original image regions to copy:
cv::Rect left = cv::Rect(0, 0, borderSize, borderImage.rows);
cv::Rect right = cv::Rect(img.cols - borderSize, 0, borderSize, img.rows);
cv::Rect full = cv::Rect(0, 0, img.cols, img.rows);
// perform copying to subimage (left part of the img goes to right part of the border image):
img(left).copyTo(borderImage(rightBorderRegion));
img(right).copyTo(borderImage(leftBorderRegion));
img.copyTo(borderImage(imgRegion));
cv::imwrite("SO_360_border.jpg", borderImage);
//# CLAHE(Contrast Limited Adaptive Histogram Equalization)
//clahe = cv2.createCLAHE(clipLimit = 1., tileGridSize = (6, 6))
// apply the CLAHE algorithm to the L channel
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
clahe->setClipLimit(1);
clahe->setTilesGridSize(cv::Size(6, 6));
cv::Mat lab;
cv::cvtColor(borderImage, lab, cv::COLOR_BGR2Lab); // # convert from BGR to LAB color space
std::vector<cv::Mat> labChannels; //l, a, b = cv2.split(lab) # split on 3 different channels
cv::split(lab, labChannels);
//l2 = clahe.apply(l) # apply CLAHE to the L - channel
cv::Mat dst;
clahe->apply(labChannels[0], dst);
labChannels[0] = dst;
//lab = cv2.merge((l2, a, b)) # merge channels
cv::merge(labChannels, lab);
//img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
cv::cvtColor(lab, dst, cv::COLOR_Lab2BGR);
cv::imwrite("SO_360_border_clahe.jpg", dst);
// to crop the image after performing clahe:
cv::Mat cropped = dst(imgRegion).clone();
cv::imwrite("SO_360_clahe.jpg", cropped);
}
Images:
input as in your original post.
After creating the border:
After performing CLAHE (with border):
After cropping the CLAHE-border-image:
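For reference, here is a rough Python/NumPy port of the same idea (my own sketch, not part of the original answer), assuming `image` is the equirectangular BGR input; the border fraction and CLAHE parameters simply mirror the C++ code above and are not tuned:
import cv2
import numpy as np

def clahe_wraparound(image, border_frac=0.25, clip_limit=1.0, grid=(6, 6)):
    """Apply CLAHE to a 360-degree image by padding it with wrap-around
    borders, equalizing, and cropping back to the original width."""
    h, w = image.shape[:2]
    border = int(w * border_frac)

    # copy the right strip to the left side and the left strip to the right side
    padded = np.hstack([image[:, w - border:], image, image[:, :border]])

    lab = cv2.cvtColor(padded, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid)
    lab = cv2.merge((clahe.apply(l), a, b))
    padded = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # crop away the wrap-around borders
    return padded[:, border:border + w]
The CLAHE tiles may still not line up exactly at the seam, but with a generous wrap-around border the discontinuity should be much less visible.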
Related
How can I make the cv2.imshow output the same as the plt.imshow output?
import cv2
import matplotlib.pyplot as plt

# loading image
img0 = cv2.imread("image.png")
# converting to gray scale
gray = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
# remove noise
img = cv2.GaussianBlur(gray, (3, 3), 0)
# convolute with proper kernels
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5) # x
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5) # y
imgboth = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
plt.imshow(imgboth, cmap='gray')
plt.show()
cv2.imshow("img", cv2.resize(imgboth, (960, 540)))
cv2.waitKey(0)
cv2.destroyAllWindows()
original image
plt.output
cv2.imshow
import numpy as np
import cv2 as cv  # this answer uses the `cv` alias for the cv2 module

# ...
canvas = imgboth.astype(np.float32)
canvas /= np.abs(imgboth).max()
canvas += 0.5
cv.namedWindow("canvas", cv.WINDOW_NORMAL)
cv.imshow("canvas", canvas)
cv.waitKey()
cv.destroyWindow("canvas")
It only looks different because you posted thumbnails, not the original-size image.
When you give imshow floating-point values, which you do here because laplacian and sobelx are floating point, it assumes a range of 0.0 .. 1.0 as black .. white.
matplotlib automatically scales the data; OpenCV's imshow doesn't. Both behaviors have pros and cons.
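If you want to keep using the `cv2` name from the question, an alternative sketch (my addition, not part of the original answer, reusing `imgboth` from the question) rescales the signed result into an 8-bit range before display, which is roughly what matplotlib does automatically:
import cv2
import numpy as np

# map the signed floating-point gradient image into 0..255 and convert to uint8
display = cv2.normalize(imgboth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("img", cv2.resize(display, (960, 540)))
cv2.waitKey(0)
cv2.destroyAllWindows()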
I have lots of images of planets in differing sizes, like these:
They are all positioned exactly in the middle of the square images, but with different heights.
Now I want to crop them and make the black border transparent. I tried with convert (ImageMagick 6.9.10-23) like this:
for i in planet_*.jpg; do
nr=$(echo ${i/planet_/}|sed s/.jpg//g|xargs)
convert $i -fuzz 1% -transparent black trans/planet_${nr}.png
done
But this leaves some artifacts like:
Is there a command to crop all images in a circle, so the planet is untouched? (It doesn't have to be ImageMagick.)
I could also imagine a solution where I would use a larger -fuzz value and then fill all transparent pixels in the inner planet circle with black.
These are all the planets I want to convert: download zip
Here is one way using Python/OpenCV, based on cv2.minEnclosingCircle.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('planet.jpg')
h, w, c = img.shape
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# get contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# get enclosing circle
center, radius = cv2.minEnclosingCircle(big_contour)
cx = int(round(center[0]))
cy = int(round(center[1]))
rr = int(round(radius))
# draw outline circle over input
circle = img.copy()
cv2.circle(circle, (cx,cy), rr, (0, 0, 255), 1)
# draw white filled circle on black background as mask
mask = np.full((h,w), 0, dtype=np.uint8)
cv2.circle(mask, (cx,cy), rr, 255, -1)
# antialias
blur = cv2.GaussianBlur(mask, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)
mask = skimage.exposure.rescale_intensity(blur, in_range=(127,255), out_range=(0,255))
# put mask into alpha channel to make outside transparent
imgT = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
imgT[:,:,3] = mask
# crop the image
ulx = int(cx-rr+0.5)
uly = int(cy-rr+0.5)
brx = int(cx+rr+0.5)
bry = int(cy+rr+0.5)
print(ulx,brx,uly,bry)
crop = imgT[uly:bry+1, ulx:brx+1]
# write result to disk
cv2.imwrite("planet_thresh.jpg", thresh)
cv2.imwrite("planet_circle.jpg", circle)
cv2.imwrite("planet_mask.jpg", mask)
cv2.imwrite("planet_transparent.png", imgT)
cv2.imwrite("planet_crop.png", crop)
# display it
cv2.imshow("thresh", thresh)
cv2.imshow("circle", circle)
cv2.imshow("mask", mask)
cv2.waitKey(0)
Threshold image:
Circle on input:
Mask image:
Transparent image:
Cropped transparent image:
packages to install
sudo apt install python3-opencv python3-numpy python3-skimage
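Since the question mentions a whole batch of planet_*.jpg files, here is a sketch (my addition, not part of the original answer) that wraps the steps above into a function and runs it over every file, writing PNGs into a trans/ folder as in the question's shell loop:
import glob
import os
import cv2
import numpy as np
import skimage.exposure

def crop_planet(path, out_dir="trans"):
    """Crop a centred planet to its enclosing circle and make the rest transparent."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    (cx, cy), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    cx, cy, r = int(round(cx)), int(round(cy)), int(round(r))

    # white filled circle on black background, antialiased, used as alpha channel
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (cx, cy), r, 255, -1)
    blur = cv2.GaussianBlur(mask, (0, 0), sigmaX=2, sigmaY=2)
    mask = skimage.exposure.rescale_intensity(blur, in_range=(127, 255), out_range=(0, 255)).astype(np.uint8)

    out = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    out[:, :, 3] = mask

    # crop to the enclosing circle, clamped to the image bounds
    y0, y1 = max(cy - r, 0), min(cy + r + 1, img.shape[0])
    x0, x1 = max(cx - r, 0), min(cx + r + 1, img.shape[1])
    crop = out[y0:y1, x0:x1]

    os.makedirs(out_dir, exist_ok=True)
    name = os.path.splitext(os.path.basename(path))[0] + ".png"
    cv2.imwrite(os.path.join(out_dir, name), crop)

for path in glob.glob("planet_*.jpg"):
    crop_planet(path)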
I'm trying to locate an object (here a PWB) in a picture.
First I do this by finding the largest contour. Then I want to write only this object to a new picture so that in the future I can work with smaller pictures.
The problem, however, is that when I write out this ROI, the picture ends up lighter in colour than the original one.
CODE:
Original = cv2.imread(picture_location)
image = cv2.imread(mask_location)
img = cv2.medianBlur(image,29)
imgray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
dst = cv2.bitwise_and(Original, image)
roi = cv2.add(dst, Original)
ret,thresh = cv2.threshold(imgray,127,255,0)
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
area = 0
max_x = 0
max_y = 0
min_x = Original.shape[1]
min_y = Original.shape[0]
for i in contours:
    new_area = cv2.contourArea(i)
    if new_area > area:
        area = new_area
        cnt = i

x,y,w,h = cv2.boundingRect(cnt)
min_x = min(x, min_x)
min_y = min(y, min_y)
max_x = max(x+w, max_x)
max_y = max(y+h, max_y)
roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
Original = cv2.rectangle(Original,(x-10,y-10),(x+w+10,y+h+10),(0,255,0),2)
#Writing down the images
cv2.imwrite('Pictures/PCB1/LocatedPCB.jpg', roi)
cv2.imwrite('Pictures/PCB1/LocatedPCBContour.jpg',Original)
Since I don't have 10 reputation yet I cannot post the pictures. I can however provide the links:
Original
Region of Interest
The main question is how do I get the software to write down the ROI in the exact same colour as the original picture?
I'm an electromechanical engineer, however, so I'm fairly new to this; remarks on the way I wrote my code would also be appreciated if possible.
The problem is that you first set roi = cv2.add(dst, Original)
and then crop from that brightened picture here:
roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
If you want to crop the original image, you should do:
roi = Original[min_y-10:max_y+10, min_x-10:max_x+10]
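In context, the end of the script would then look something like this (a sketch reusing the question's variables; the output path is the one from the question):
# crop the located board from the untouched original instead of the brightened roi
roi = Original[min_y-10:max_y+10, min_x-10:max_x+10]
cv2.imwrite('Pictures/PCB1/LocatedPCB.jpg', roi)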
You can perhaps perform an edge detection after blurring your image.
How to select best parameters for Canny edge? SEE HERE
import cv2
import numpy as np

# img here is the blurred image from your code above
lower = 46
upper = 93
edged = cv2.Canny(img, lower, upper) #--- Perform canny edge on the blurred image
kernel = np.ones((5,5),np.uint8)
dilate = cv2.morphologyEx(edged, cv2.MORPH_DILATE, kernel, iterations=3) #---Morphological dilation (3 iterations)
_, contours , _= cv2.findContours(dilate, cv2.RETR_EXTERNAL, 1) #---Finds all parent contours, does not find child contours(i.e; does not consider contours within another contour)
max = 0
cc = 0
for i in range(len(contours)): #---For loop for finding contour with maximum area
    if (cv2.contourArea(contours[i]) > max):
        max = cv2.contourArea(contours[i])
        cc = i
cv2.drawContours(img, contours[cc], -1, (0,255,0), 2) #---Draw contour having the maximum area
cv2.imshow('Contour of PCB', img)
x,y,w,h = cv2.boundingRect(contours[cc]) #---Calibrates a straight rectangle for the contour of max. area
crop_img = img1[y:y+h, x:x+w] #--- Cropping the ROI from img1, a copy of the original image, using the bounding rectangle
cv2.imshow('Cropped PCB', crop_img)
cv2.waitKey(0)
I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here the images:
input source image:
input map for desired output image
output image
You can use a technique called "Histogram matching" (another description)
Basically, you use the histogram for your source image as a goal and transform the values for each input map pixel to get the output histogram as close to source as possible. You do it for each rgb channel of the image.
Here is my python code for that:
from scipy.misc import imsave, imread
import numpy as np
imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,normed=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,normed=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize

    im2 = np.interp(imsrc[:,:,d].flatten(),bins[:-1],cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1]) # map the equalized values through the tint CDF
    imres[:,:,d] = im3.reshape((imsrc.shape[0],imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
The output for your samples will look like this:
You could also try doing the same in HSV color space; it might give better results.
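A minimal sketch of that HSV variant (my addition, reusing imsrc and imtint from the code above; not part of the original answer):
import cv2
import numpy as np

def match_channel(src, ref, nbins=255):
    """Match the histogram of channel `src` to that of channel `ref`."""
    src_hist, bins = np.histogram(src.flatten(), nbins, density=True)
    ref_hist, _ = np.histogram(ref.flatten(), nbins, density=True)
    cdf_src = src_hist.cumsum()
    cdf_src = 255 * cdf_src / cdf_src[-1]
    cdf_ref = ref_hist.cumsum()
    cdf_ref = 255 * cdf_ref / cdf_ref[-1]
    mapped = np.interp(src.flatten(), bins[:-1], cdf_src)   # pixel value -> CDF level
    mapped = np.interp(mapped, cdf_ref, bins[:-1])          # CDF level -> reference value
    return mapped.reshape(src.shape)

# convert both images to HSV, match each channel, convert back
src_hsv = cv2.cvtColor(imsrc, cv2.COLOR_RGB2HSV)
tint_hsv = cv2.cvtColor(imtint, cv2.COLOR_RGB2HSV)
res_hsv = np.dstack([match_channel(src_hsv[:, :, d], tint_hsv[:, :, d]) for d in range(3)])
imres_hsv = cv2.cvtColor(res_hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
Note that the hue channel is circular, so matching it like a linear channel can introduce odd hue shifts; matching only the S and V channels is sometimes more robust.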
I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import Image, ImageFilter

# im1 is the source image as a PIL Image, e.g. im1 = Image.open("source.jpg")
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
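One hedged way to do that last step (my addition, not the original author's code; "target.jpg" and "recolored.jpg" are placeholder names) is to keep each target pixel's brightness and impose the dominant hue and saturation found above:
import colorsys
from PIL import Image

# max_hsv is the dominant colour found above
target = Image.open("target.jpg").convert("RGB")
px = target.load()
for y in range(target.size[1]):
    for x in range(target.size[0]):
        r, g, b = (c / 255. for c in px[x, y])
        _, _, v = colorsys.rgb_to_hsv(r, g, b)
        # keep each pixel's brightness (v), impose the dominant hue and saturation
        nr, ng, nb = colorsys.hsv_to_rgb(max_hsv[0], max_hsv[1], v)
        px[x, y] = (int(nr * 255), int(ng * 255), int(nb * 255))
target.save("recolored.jpg")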
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)

    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans).astype('uint8')
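For completeness, a usage sketch (the file names here are placeholders, not from the original answer):
# match the colours of one image to the palette of a reference image
image = cv2.imread("target.jpg")
reference = cv2.imread("sample.jpg")
matched = match_image_histograms(image, reference)
cv2.imwrite("matched.jpg", matched)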
obtain average color from color map
ignore saturated white/black colors
convert light map to grayscale
change dynamic range of lightmap to match your desired output
I use max dynamic range. You could compute the range of color map and set it for light map
multiply the light map by avg color
This is how it looks:
And this is the C++ source code
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;
// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
i=(i-i0)*767/(i1-i0);
pic2.p[y][x].dd=i;
}
// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
for (x=0;x<pic0.xs;x++)
{
rr=BYTE(pic0.p[y][x].db[picture::_r]);
gg=BYTE(pic0.p[y][x].db[picture::_g]);
bb=BYTE(pic0.p[y][x].db[picture::_b]);
i=rr+gg+bb;
if (i<400) // ignore saturated colors (whiteish) 3*255=white
if (i>16) // ignore too dark colors (blackish) 0=black
{
r+=rr;
g+=gg;
b+=bb;
}
}
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
a=DWORD(pic2.p[y][x].dd);
rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
}
I am using my own picture class, so here are some of its members:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
p[y][x].db[4] is pixel access by color bands (r,g,b,a)
[notes]
If this does not meet your needs then please specify more and add more images, because your current example is really not self-explanatory.
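For readers working in Python, here is a rough OpenCV/NumPy equivalent of the steps above (my own port of the idea, not the original author's code; the file names are placeholders):
import cv2
import numpy as np

color_map = cv2.imread("color_map.jpg")    # source color
light_map = cv2.imread("light_map.jpg")    # source light map

# light map as grayscale, stretched to full dynamic range
gray = cv2.cvtColor(light_map, cv2.COLOR_BGR2GRAY).astype(np.float64)
gray = (gray - gray.min()) / max(gray.max() - gray.min(), 1e-9)

# average color of the color map, ignoring near-white and near-black pixels
intensity = color_map.sum(axis=2)
valid = (intensity > 16) & (intensity < 400)
avg = color_map[valid].mean(axis=0) if valid.any() else color_map.reshape(-1, 3).mean(axis=0)
avg /= np.linalg.norm(avg)              # normalize to a unit vector, as in the C++ code

# recolor: multiply the stretched light map by the average color
out = gray[:, :, None] * avg[None, None, :] * 767.0   # 767 matches the C++ dynamic range
out = np.clip(out, 0, 255).astype(np.uint8)
cv2.imwrite("recolored.jpg", out)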
Regarding the previous answer, one thing to be careful with:
once the CDF reaches its maximum (=1), the interpolation gets misled and will match your values wrongly. To avoid this, you should give the interpolation function only the meaningful part of the CDF (up to where it first reaches 1) and the corresponding bins. Here is the adapted answer:
from scipy.misc import imsave, imread
import numpy as np
imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins=255
imres = imsrc.copy()
for d in range(3):
    imhist,bins = np.histogram(imsrc[:,:,d].flatten(),nbr_bins,normed=True)
    tinthist,bins = np.histogram(imtint[:,:,d].flatten(),nbr_bins,normed=True)

    cdfsrc = imhist.cumsum() #cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) #normalize

    cdftint = tinthist.cumsum() #cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) #normalize

    im2 = np.interp(imsrc[:,:,d].flatten(),bins[:-1],cdfsrc)

    if (cdftint==1).sum()>0:
        idx_max = np.where(cdftint==1)[0][0]
        im3 = np.interp(im2,cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2,cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0],imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
Enjoy!
Is it possible to only blur a subregion of an image, instead of the whole image with OpenCV, to save some computational cost?
EDIT: One important point is that when blurring the boundary of the subregion, one should use the existing image content as much as possible; only where the convolution exceeds the boundary of the original image should extrapolation or other artificial border conditions be used.
To blur the whole image, assuming you want to overwrite the original (In-place filtering is supported by cv::GaussianBlur), you will have something like
cv::GaussianBlur(image, image, Size(0, 0), 4);
To blur just a region use Mat::operator()(const Rect& roi) to extract the region:
cv::Rect region(x, y, w, h);
cv::GaussianBlur(image(region), image(region), Size(0, 0), 4);
Or if you want the blurred output in a separate image:
cv::Rect region(x, y, w, h);
cv::Mat blurred_region;
cv::GaussianBlur(image(region), blurred_region, Size(0, 0), 4);
The above uses the default BORDER_CONSTANT option that just assumes everything outside the image is 0 when doing the blurring.
I am not sure what it does with pixels at the edge of a region. You can force it to ignore pixels outside the region (BORDER_CONSTANT | BORDER_ISOLATED), so I think by default it probably does use the pixels outside the region. You should compare the results from above with:
const int bsize = 10;
cv::Rect region(x, y, w, h);
cv::Rect padded_region(x - bsize, y - bsize, w + 2 * bsize, h + 2 * bsize);
cv::Mat blurred_padded_region;
cv::GaussianBlur(image(padded_region), blurred_padded_region, Size(0, 0), 4);
cv::Mat blurred_region = blurred_padded_region(cv::Rect(bsize, bsize, w, h));
// and you can then copy that back into the original image if you want:
blurred_region.copyTo(image(region));
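For Python users, here is a rough NumPy/OpenCV port of that padded-region idea (my sketch, not part of the original answer, with clamping added so the padded rectangle stays inside the image):
import cv2

def blur_region_with_context(image, x, y, w, h, pad=10, sigma=4):
    """Blur only (x, y, w, h), but let the filter see `pad` pixels of real
    image content around the region instead of artificial border values."""
    H, W = image.shape[:2]
    # clamp the padded rectangle to the image bounds
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1, y1 = min(x + w + pad, W), min(y + h + pad, H)

    padded = image[y0:y1, x0:x1]
    blurred = cv2.GaussianBlur(padded, (0, 0), sigma)

    # copy back only the inner (unpadded) part of the blurred patch
    image[y:y+h, x:x+w] = blurred[y - y0:y - y0 + h, x - x0:x - x0 + w]
    return image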
Here's how to do it in Python. The idea is to select a ROI, blur it, then insert it back into the image
import cv2
# Read in image
image = cv2.imread('1.png')
# Create ROI coordinates
topLeft = (60, 140)
bottomRight = (340, 250)
x, y = topLeft[0], topLeft[1]
w, h = bottomRight[0] - topLeft[0], bottomRight[1] - topLeft[1]
# Grab ROI with Numpy slicing and blur
ROI = image[y:y+h, x:x+w]
blur = cv2.GaussianBlur(ROI, (51,51), 0)
# Insert ROI back into image
image[y:y+h, x:x+w] = blur
cv2.imshow('blur', blur)
cv2.imshow('image', image)
cv2.waitKey()
Before -> After
Yes, it is possible to blur a Region Of Interest in OpenCV:
size( 120, 160 );
OpenCV opencv = new OpenCV(this);
opencv.loadImage("myPicture.jpg");
opencv.ROI( 60, 0, 60, 160 );
opencv.blur( OpenCV.BLUR, 13 );
image( opencv.image(), 0, 0 );
For more information, check out this link.
Good luck,
If you are using JavaCV (provided by Bytedeco),
then you can do it this way. It will only blur the particular ROI.
Mat src = imread("xyz.jpg",IMREAD_COLOR);
Rect rect = new Rect(50,50,src.size().width()/3,100);
GaussianBlur(new Mat(src, rect), new Mat(src, rect), new Size(23,23), 30);