How can I make the `cv2.imshow` output the same as the `plt.imshow` output?

import cv2
import numpy as np
import matplotlib.pyplot as plt

# loading image
img0 = cv2.imread("image.png")
# converting to gray scale
gray = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
# remove noise
img = cv2.GaussianBlur(gray, (3, 3), 0)
# convolve with the proper kernels
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)  # x
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)  # y
imgboth = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
plt.imshow(imgboth, cmap='gray')
plt.show()
cv2.imshow("img", cv2.resize(imgboth, (960, 540)))
cv2.waitKey(0)
cv2.destroyAllWindows()
(Images: the original image, the plt.imshow output, and the cv2.imshow output.)

# ...
import cv2 as cv
import numpy as np

canvas = imgboth.astype(np.float32)
canvas /= np.abs(imgboth).max()
canvas += 0.5
cv.namedWindow("canvas", cv.WINDOW_NORMAL)
cv.imshow("canvas", canvas)
cv.waitKey()
cv.destroyWindow("canvas")
It only looks different because you posted thumbnails, not the original-size images.
When you give imshow floating-point values, which you do here because laplacian and sobelx are floating point, it assumes a range of 0.0 .. 1.0 as black .. white.
matplotlib automatically scales the data; OpenCV's imshow doesn't. Both behaviors have pros and cons.
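If you want OpenCV's window to match matplotlib's, you can also reproduce matplotlib's autoscaling yourself before displaying. A minimal sketch using cv2.normalize, assuming imgboth and the imports from the question:
# stretch min..max to 0..255 (matplotlib's default autoscale), then display as uint8
display = cv2.normalize(imgboth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow("img", cv2.resize(display, (960, 540)))
cv2.waitKey(0)
cv2.destroyAllWindows()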

Related

Contrast Limited Adaptive Histogram Equalization in 360° images

I am currently applying the Contrast Limited Adaptive Histogram Equalization algorithm together with a denoising algorithm.
My problem is that I am working with 360° photos. Since the contrast produces different values at the edges, the seam line is highly noticeable when the photo is stitched back together. How can I mitigate that line? What changes should I make so that it is not noticeable and the algorithm is applied consistently?
Original Photo:
Code for Contrast Limited Adaptive Histogram Equalization:
import cv2

image = cv2.imread("photo360.jpg")  # placeholder filename for the 360° photo
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)  # convert from BGR to LAB color space
l, a, b = cv2.split(lab)  # split into 3 channels
l2 = clahe.apply(l)  # apply CLAHE to the L channel
lab = cv2.merge((l2, a, b))  # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)  # convert from LAB back to BGR
Result:
Rendered in 360°:
The seam line is highly noticeable because the algorithm does not take into account that the photo is stitched together afterwards. What can I do?
Here's an answer in C++; you can probably convert it easily to Python/NumPy.
The idea is to add a wrap-around border region before performing CLAHE and to crop the image afterwards.
These are the subimage regions in the original image:
They will be copied to the left/right of the image like this:
You may be able to reduce the border size considerably:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("C:/data/SO_360.jpg");
    int borderSize = img.cols / 4;
    // make an image that can hold the border regions
    cv::Mat borderImage = cv::Mat(cv::Size(img.cols + 2 * borderSize, img.rows), img.type());
    // posX, posY, width, height of the subimages
    cv::Rect leftBorderRegion = cv::Rect(0, 0, borderSize, borderImage.rows);
    cv::Rect rightBorderRegion = cv::Rect(borderImage.cols - borderSize, 0, borderSize, borderImage.rows);
    cv::Rect imgRegion = cv::Rect(borderSize, 0, img.cols, borderImage.rows);
    // original image regions to copy:
    cv::Rect left = cv::Rect(0, 0, borderSize, img.rows);
    cv::Rect right = cv::Rect(img.cols - borderSize, 0, borderSize, img.rows);
    cv::Rect full = cv::Rect(0, 0, img.cols, img.rows);
    // perform copying to subimage (left part of img goes to the right part of the border image):
    img(left).copyTo(borderImage(rightBorderRegion));
    img(right).copyTo(borderImage(leftBorderRegion));
    img.copyTo(borderImage(imgRegion));
    cv::imwrite("SO_360_border.jpg", borderImage);

    //# CLAHE (Contrast Limited Adaptive Histogram Equalization)
    //clahe = cv2.createCLAHE(clipLimit = 1., tileGridSize = (6, 6))
    // apply the CLAHE algorithm to the L channel
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
    clahe->setClipLimit(1);
    clahe->setTilesGridSize(cv::Size(6, 6));

    cv::Mat lab;
    cv::cvtColor(borderImage, lab, cv::COLOR_BGR2Lab); // convert from BGR to Lab color space
    std::vector<cv::Mat> labChannels; // l, a, b = cv2.split(lab): split into 3 channels
    cv::split(lab, labChannels);
    // l2 = clahe.apply(l): apply CLAHE to the L channel
    cv::Mat dst;
    clahe->apply(labChannels[0], dst);
    labChannels[0] = dst;
    // lab = cv2.merge((l2, a, b)): merge channels
    cv::merge(labChannels, lab);
    // img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR): convert from Lab back to BGR
    cv::cvtColor(lab, dst, cv::COLOR_Lab2BGR);
    cv::imwrite("SO_360_border_clahe.jpg", dst);

    // crop the image after performing CLAHE:
    cv::Mat cropped = dst(imgRegion).clone();
    cv::imwrite("SO_360_clahe.jpg", cropped);
}
Images:
input as in your original post.
After creating the border:
After performing CLAHE (with border):
After cropping the CLAHE-border-image:
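For reference, here is a minimal Python/NumPy sketch of the same wrap-and-crop idea; the filename is a placeholder and the CLAHE parameters mirror the question:
import cv2
import numpy as np

img = cv2.imread("SO_360.jpg")  # placeholder filename
border = img.shape[1] // 4
# wrap around: the right strip goes to the left edge, the left strip to the right edge
wrapped = np.hstack([img[:, -border:], img, img[:, :border]])
# CLAHE on the L channel, as in the question
lab = cv2.cvtColor(wrapped, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
result = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
# crop the wrap-around border back off
cv2.imwrite("SO_360_clahe.jpg", result[:, border:-border])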

batch crop square images of different sizes to a circle

I have lots of images of planets in differing sizes, like:
They are all positioned exactly in the middle of the square images, but with different heights.
Now I want to crop them and make the black border transparent. I tried with convert (ImageMagick 6.9.10-23) like this:
for i in planet_*.jpg; do
nr=$(echo ${i/planet_/}|sed s/.jpg//g|xargs)
convert $i -fuzz 1% -transparent black trans/planet_${nr}.png
done
But this leaves some artifacts, like:
Is there a command to crop all images to a circle, so the planet is untouched? (It doesn't have to be ImageMagick.)
I could also imagine a solution where I use a larger -fuzz value and then fill all transparent pixels inside the planet circle with black.
Here are all the planets I want to convert: download zip
Here is one way using Python/OpenCV, based on cv2.minEnclosingCircle.
Input:
import cv2
import numpy as np
import skimage.exposure
# read image
img = cv2.imread('planet.jpg')
h, w, c = img.shape
# convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
# get contour
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# get enclosing circle
center, radius = cv2.minEnclosingCircle(big_contour)
cx = int(round(center[0]))
cy = int(round(center[1]))
rr = int(round(radius))
# draw outline circle over input
circle = img.copy()
cv2.circle(circle, (cx,cy), rr, (0, 0, 255), 1)
# draw white filled circle on black background as mask
mask = np.full((h,w), 0, dtype=np.uint8)
cv2.circle(mask, (cx,cy), rr, 255, -1)
# antialias
blur = cv2.GaussianBlur(mask, (0,0), sigmaX=2, sigmaY=2, borderType = cv2.BORDER_DEFAULT)
mask = skimage.exposure.rescale_intensity(blur, in_range=(127,255), out_range=(0,255))
# put mask into alpha channel to make outside transparent
imgT = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
imgT[:,:,3] = mask
# crop the image
ulx = int(cx-rr+0.5)
uly = int(cy-rr+0.5)
brx = int(cx+rr+0.5)
bry = int(cy+rr+0.5)
print(ulx,brx,uly,bry)
crop = imgT[uly:bry+1, ulx:brx+1]
# write result to disk
cv2.imwrite("planet_thresh.jpg", thresh)
cv2.imwrite("planet_circle.jpg", circle)
cv2.imwrite("planet_mask.jpg", mask)
cv2.imwrite("planet_transparent.png", imgT)
cv2.imwrite("planet_crop.png", crop)
# display it
cv2.imshow("thresh", thresh)
cv2.imshow("circle", circle)
cv2.imshow("mask", mask)
cv2.waitKey(0)
Threshold image:
Circle on input:
Mask image:
Transparent image:
Cropped transparent image:
Packages to install:
sudo apt install python3-opencv python3-numpy python3-skimage

How can I keep only text with a specific color from an image via OpenCV and Python?

I have an invoice image with some overlapping text, which causes trouble for later processing; all I want is the text in black, so I want to remove the text in the other colors.
Is there any way to achieve this?
The image is attached as an example.
I have tried to solve it with OpenCV, but I still can't:
import numpy as np
import cv2
img = cv2.imread('11.png')
lower = np.array([150,150,150])
upper = np.array([200,200,200])
mask = cv2.inRange(img, lower, upper)
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('22.png',res)
Example image: https://i.stack.imgur.com/nWQrV.png
The text is darker and less saturated. And, as @J.D. suggested, the HSV color space is a good choice, but his range is off.
In OpenCV, H ranges over [0, 180], while S and V range over [0, 255].
Here is a colormap I made last year; I think it's helpful.
(1) Use cv2.inRange
(2) Just threshold the V(HSV) channel:
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
(3) Just threshold the S(HSV) channel:
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
The result:
The demo code:
# 2018/12/30 22:21
# 2018/12/30 23:25
import cv2
img = cv2.imread("test.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
mask = cv2.inRange(hsv, (0,0,0), (180, 50, 130))
dst1 = cv2.bitwise_and(img, img, mask=mask)
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
dst2 = cv2.bitwise_and(img, img, mask=threshed)
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
dst3 = cv2.bitwise_and(img, img, mask=threshed2)
cv2.imwrite("dst1.png", dst1)
cv2.imwrite("dst2.png", dst2)
cv2.imwrite("dst3.png", dst3)
Related:
How to detect colored patches in an image using OpenCV?
How to define a threshold value to detect only green colour objects in an image: OpenCV
Converting to the HSV colorspace makes selecting colors easier.
The code below does what you want.
Result:
import numpy as np
import cv2
# load image
img = cv2.imread("image.png")
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,100,130])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img,img, mask= mask)
# invert the mask to get black letters on white background
res2 = cv2.bitwise_not(mask)
# display image
cv2.imshow("img", res)
cv2.imshow("img2", res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
To change the level of black selected, tweak upper_val. The third number, currently 130, is the Value (V): higher allows lighter shades. The second number, currently 100, is the Saturation (S): lower allows less color.
Read more about the HSV colorspace here.
I always find the image below very helpful. The bottom 'disc' is all black. As you move up in Value, lighter pixels are also selected. Pixels with low saturation stay shades of gray all the way to white (the center); pixels with high saturation become colored (the edge). That's why you tweak those two values.
Edit: As @Silencer pointed out, my range was off. Fixed it.

Overlaying MATLAB Scaled Image to Grayscale Image for selected pixels

I am new to MATLAB image processing. Currently I have two images: one is a grayscale image of my object, and the second is the scaled image generated from MATLAB using the imagesc function. I am trying to overlay this scaled image on top of my grayscale image to provide spatial context for easier observation. Attached are the two images:
A) Grayscale Image:
B) Scaled Image:
There were a few difficulties that I encountered. Firstly, the scaled image is not saved with the same pixel dimensions, but I can get around that using the imwrite function:
im = imagesc(ScaledDiff);
imwrite(get(im,'cdata'),'scaleddiff.tif')
However, doing so results in the loss of the colorbar and the colormap. Secondly, even if I manage to shrink the scaled image to the size of the grayscale image, overlaying it is still a challenge. Ideally, I would like to set the transparency (or 'alpha') to 0 for those pixels whose scaled-image value is < 0.02.
Any idea on how to do this will be greatly appreciated! Sorry if I was unclear!
UPDATE:
Thanks to Rotem, I have managed to overlay the grayscale image and a particular region of my heatmap:
However, I need to display the colorbar corresponding to the heatmap values, because otherwise the information is lost and the overlay is useless. How should I do this? Below is a snippet of my code, where ScaledIntDiff contains the values from 0 to 0.25 that are displayed on the heatmap:
Brightfield = imread('gray.jpg'); % read background image
I1 = ind2rgb(gray2ind(Brightfield, 256), gray(256)); % convert to true-color RGB
scale = 1000;
ScaledIntDiff2 = round(ScaledIntDiff.*scale);
I2 = ind2rgb(ScaledIntDiff2, jet(256)); % this is for the heatmap
threshold = 0.02;
I2R = I2(:,:,1); I2G = I2(:,:,2); I2B = I2(:,:,3);
I1R = I1(:,:,1); I1G = I1(:,:,2); I1B = I1(:,:,3);
% Replace pixels in I2 with pixels in I1 where ScaledIntDiff is below the threshold
I2R(ScaledIntDiff<threshold) = I1R(ScaledIntDiff<threshold);
I2G(ScaledIntDiff<threshold) = I1G(ScaledIntDiff<threshold);
I2B(ScaledIntDiff<threshold) = I1B(ScaledIntDiff<threshold);
I2(:,:,1) = I2R; I2(:,:,2) = I2G; I2(:,:,3) = I2B;
figure
imshow(I2)
I know that the code above is highly inefficient, so suggestions on how to improve it will be very welcomed. Thank you!
Check the following:
I = imread('CKbi2Ll.jpg'); %Load input.
%Convert I to true color RGB image (all pixels R=G=B are gray color).
I1 = ind2rgb(I, gray(256));
%Convert I to true color RGB image with color map parula (instead of using imagesc)
I2 = ind2rgb(I, parula(256));
%Set the transparency (or 'alpha') to 0 for those pixels with < 0.02 in scaled image value.
%Instead of setting transparency, replace pixels with values from I1.
threshold = 0.02; %Set threshold to 0.02
I2(I1 < threshold) = I1(I1 < threshold);
%Blend I1 and I2 into J.
alpha = 0.3; %Take 0.3 from I2 and 0.7 from I1.
J = I2*alpha + I1*(1-alpha);
%Display output
imshow(J);
Adding a colorbar with tick labels from 0 to 0.25:
Set the colormap to parula; it doesn't affect the displayed image, because the image format is true-color RGB.
Create an array of 6 ticks from 0 to 250.
Create a cell array of 6 TickLabels from 0 to 0.25.
Add a colorbar with the Ticks and TickLabels properties created above.
Add the following code after imshow(J);:
colormap(parula(256));
TLabels = cellstr(num2str((linspace(0, 0.25, 6))'));
T = linspace(1, 250, 6);
colorbar('Ticks', T', 'TickLabels', TLabels);
imshow('ClmypzU.jpg');
colormap(jet(256));
TLabels = cellstr(num2str((linspace(0, 0.25, 6))'));
T = linspace(1, 250, 6);
colorbar('Ticks', T', 'TickLabels', TLabels);

How to extract color shade from a given sample image to convert another image using color of sample image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here the images:
input source image:
input map for desired output image
output image
You can use a technique called "histogram matching".
Basically, you use the histogram of your source image as a goal and transform the values of each input-map pixel so that the output histogram gets as close to the source histogram as possible. You do this for each RGB channel of the image.
Here is my python code for that:
import numpy as np
import imageio.v2 as imageio  # scipy.misc.imread/imsave were removed from SciPy

imsrc = imageio.imread("source.jpg")
imtint = imageio.imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, density=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, density=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1])
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))
imageio.imwrite("histnormresult.jpg", imres)
The output for your samples will look like this:
You could also try doing the same in the HSV colorspace; it might give better results.
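For instance, here is a hedged sketch of that idea that matches only the V channel in HSV and keeps the hue/saturation of the input map. imsrc and imtint are the arrays loaded above; match_channel is a helper introduced here, not part of the original answer:
import cv2
import numpy as np

def match_channel(src, ref, nbins=256):
    # build a lookup table that maps src's CDF onto ref's CDF
    s_hist, bins = np.histogram(src.ravel(), nbins, [0, 256])
    r_hist, _ = np.histogram(ref.ravel(), nbins, [0, 256])
    s_cdf = s_hist.cumsum() / s_hist.sum()
    r_cdf = r_hist.cumsum() / r_hist.sum()
    lut = np.interp(s_cdf, r_cdf, bins[:-1])
    return lut[src].astype(np.uint8)

hsv_src = cv2.cvtColor(imsrc, cv2.COLOR_RGB2HSV)
hsv_tint = cv2.cvtColor(imtint, cv2.COLOR_RGB2HSV)
hsv_src[:, :, 2] = match_channel(hsv_src[:, :, 2], hsv_tint[:, :, 2])
imres_hsv = cv2.cvtColor(hsv_src, cv2.COLOR_HSV2RGB)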
I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import Image, ImageFilter

im1 = Image.open("sample.jpg").convert("RGB")  # placeholder filename
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)
    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        # look up each input level's CDF in the reference CDF
        # (the reference CDF is the one searched, hence hist2 comes first)
        lut = np.searchsorted(hist2.cumsum(), hist1.cumsum())
        new_chans.append(cv2.LUT(ch1, lut.clip(0, 255).astype(np.uint8)))
    return cv2.merge(new_chans)
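A hypothetical usage sketch (the filenames are placeholders):
image = cv2.imread("target.jpg")      # image to transform
reference = cv2.imread("sample.jpg")  # image whose histogram we want to match
matched = match_image_histograms(image, reference)
cv2.imwrite("matched.jpg", matched)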
obtain the average color from the color map
ignore saturated white/black colors
convert the light map to grayscale
change the dynamic range of the light map to match your desired output
I use the maximum dynamic range here. You could instead compute the range of the color map and set it for the light map.
multiply the light map by the average color
This is how it looks:
And this is the C++ source code
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;
// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
i=(i-i0)*767/(i1-i0);
pic2.p[y][x].dd=i;
}
// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
for (x=0;x<pic0.xs;x++)
{
rr=BYTE(pic0.p[y][x].db[picture::_r]);
gg=BYTE(pic0.p[y][x].db[picture::_g]);
bb=BYTE(pic0.p[y][x].db[picture::_b]);
i=rr+gg+bb;
if (i<400) // ignore saturated colors (whiteish), 3*255 = white
if (i>16) // ignore too dark colors (blackish), 0 = black
{
r+=rr;
g+=gg;
b+=bb;
}
}
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
a=DWORD(pic2.p[y][x].dd);
rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
}
I am using my own picture class, so here are some of its members:
xs, ys: size of the image in pixels
p[y][x].dd: pixel at position (x, y) as a 32-bit integer type
p[y][x].db[4]: pixel access by color bands (r, g, b, a)
[notes]
If this does not meet your needs, then please specify more and add more images, because your current example is really not self-explanatory.
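For Python readers, a minimal NumPy sketch of the same recipe; this port is my own assumption, not the original author's code, with placeholder filenames and the 16/400 thresholds mirroring the C++ constants:
import cv2
import numpy as np

color_map = cv2.imread("color_map.jpg").astype(np.float64)  # placeholder
light_map = cv2.imread("light_map.jpg").astype(np.float64)  # placeholder

# light map -> grayscale intensity i = r + g + b, stretched to the full 0..767 range
i = light_map.sum(axis=2)
i = (i - i.min()) * 767.0 / (i.max() - i.min())

# average color of the map, ignoring near-black and near-white pixels
s = color_map.sum(axis=2)
picked = color_map[(s > 16) & (s < 400)]
avg = picked.sum(axis=0)
avg /= np.sqrt((avg * avg).sum())  # normalize to a unit vector

# recolor: intensity times average color, clipped to 8 bits
out = np.clip(i[:, :, None] * avg[None, None, :], 0, 255).astype(np.uint8)
cv2.imwrite("output.jpg", out)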
Regarding the previous answer, one thing to be careful with: once the CDF reaches its maximum (= 1), the interpolation gets misled and will match your values wrongly. To avoid this, you should give the interpolation function only the meaningful part of the CDF (before it reaches its maximum) and the corresponding bins. Here is the answer, adapted:
import numpy as np
import imageio.v2 as imageio  # scipy.misc.imread/imsave were removed from SciPy

imsrc = imageio.imread("source.jpg")
imtint = imageio.imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, density=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, density=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    # truncate the CDF where it first reaches its maximum, so np.interp is not misled
    idx_max = np.where(cdftint == cdftint.max())[0][0]
    im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))
imageio.imwrite("histnormresult.jpg", imres)
Enjoy!
