Remove ink mark from paper using OpenCV

I have an image like this,
As you can see, there is a pen mark on the image. I want to remove that mark. How can I do it in OpenCV?
I tried converting the image to HSV, creating a mask for the blue range, and removing it with this code:
import cv2
import numpy as np

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res = cv2.bitwise_and(img, img, mask=mask)
It does not work as needed: all the text gets removed as well. How can I fix this?

You can threshold the first channel (blue, since OpenCV stores images in BGR order) of the image. It looks like this:
Here the difference in pixel values between the ink mark and the letters is clearly visible. After thresholding it looks like:
The ink mark can now be removed via morphological closing. However, closing reduces the size of the letters as well, so an erosion is performed afterwards, followed by a bitwise OR with the thresholded image to obtain our mask without the ink mark.
If you want the letters to look like the original image, you can store the mask in a numpy array of 255s and perform a bitwise OR with the original image.
The full code I have used is:
import cv2
import numpy as np

img = cv2.imread('ink_mark.png')
wimg = img[:, :, 0]   # first (blue) channel
ret, thresh = cv2.threshold(wimg, 100, 255, cv2.THRESH_BINARY)
kernel = np.ones((7, 7), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)   # removes the thin ink stroke
erosion = cv2.erode(closing, kernel, iterations=1)            # restores the letters' thickness
mask = cv2.bitwise_or(erosion, thresh)
white = np.ones(img.shape, np.uint8) * 255
white[:, :, 0] = mask
white[:, :, 1] = mask
white[:, :, 2] = mask
result = cv2.bitwise_or(img, white)   # keep the original letters, everything else goes white
cv2.imshow('image', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
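As a side note, building white channel by channel can be done in one call with cv2.merge, which stacks single-channel images into one multi-channel image; the result is identical:

white = cv2.merge([mask, mask, mask])
result = cv2.bitwise_or(img, white)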

Try using inpaint. First create a mask of the ink:
import cv2
import numpy as np

img = cv2.imread('ink_mark.png')   # input image (same file name as in the other answer)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([100, 50, 50])
upper_blue = np.array([150, 255, 255])
kernel = np.ones((5, 5), np.uint8)
mask = cv2.inRange(hsv, lower_blue, upper_blue)
mask = cv2.dilate(mask, kernel, iterations=4)   # grow the mask so it fully covers the stroke
Use the inpaint function to paint in the areas where the mask is white. OpenCV throws away the original pixels there and guesses which pixels should replace them.
dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
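OpenCV ships two inpainting algorithms behind this function. If the Telea result looks smeared, the Navier-Stokes variant is worth a try; it is the same call with a different flag:

dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)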

Related

How Can I only keep text with specific color from image via opencv and python?

I have an invoice image with some overlapping text, which makes trouble for later processing, and all I want is the text in black. So I want to remove the text that is in other colors.
Is there any way to achieve this?
The image is attached as an example.
I have tried to solve it with OpenCV, but I still can't get it right:
import numpy as np
import cv2
img = cv2.imread('11.png')
lower = np.array([150,150,150])
upper = np.array([200,200,200])
mask = cv2.inRange(img, lower, upper)
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('22.png',res)
Example image with multiple colors: https://i.stack.imgur.com/nWQrV.png
The text is darker and less saturated. And, as @J.D. suggested, the HSV color space is a good choice; but his range is wrong.
In OpenCV, H ranges over [0, 180], while S and V range over [0, 255].
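A quick way to check where a given BGR color lands in OpenCV's HSV space is to convert a single pixel; pure blue, for example, maps to H = 120:

import numpy as np
import cv2

blue = np.uint8([[[255, 0, 0]]])               # one BGR pixel
print(cv2.cvtColor(blue, cv2.COLOR_BGR2HSV))   # [[[120 255 255]]]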
Here is a colormap I made last year; I think it's helpful.
(1) Use cv2.inRange:
(2) Just threshold the V (HSV) channel:
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
(3) Just threshold the S (HSV) channel:
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
The result:
The demo code:
# 2018/12/30 22:21
# 2018/12/30 23:25
import cv2

img = cv2.imread("test.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# (1) inRange on the full HSV image
mask = cv2.inRange(hsv, (0, 0, 0), (180, 50, 130))
dst1 = cv2.bitwise_and(img, img, mask=mask)

# (2) threshold the V channel
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
dst2 = cv2.bitwise_and(img, img, mask=threshed)

# (3) threshold the S channel
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
dst3 = cv2.bitwise_and(img, img, mask=threshed2)

cv2.imwrite("dst1.png", dst1)
cv2.imwrite("dst2.png", dst2)
cv2.imwrite("dst3.png", dst3)
Converting to the HSV colorspace makes selecting colors easier.
The code below does what you want.
Result:
import numpy as np
import cv2
kernel = np.ones((2,2),np.uint8)
# load image
img = cv2.imread("image.png")
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,100,130])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(img, img, mask=mask)
# invert the mask to get black letters on white background
res2 = cv2.bitwise_not(mask)
# display image
cv2.imshow("img", res)
cv2.imshow("img2", res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
To change the level of black selected, tweak upper_val. The third value (currently 130) is the Value: higher allows lighter shades. The second value (currently 100) is the Saturation: lower allows less color.
Read more about the HSV colorspace here.
I always find the image below very helpful. The bottom 'disc' is all black. As you move up in Value, lighter pixels are also selected. Pixels with low Saturation stay shades of gray up to white (the center); pixels with high Saturation get colored (the edge). That's why you tweak those values.
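For example, to also admit darker gray text you could raise the Value bound, while lowering the Saturation bound rejects faintly colored pixels (these numbers are illustrative; tune them for your image):

lower_val = np.array([0, 0, 0])
upper_val = np.array([179, 50, 160])   # stricter on Saturation, looser on Value
mask = cv2.inRange(hsv, lower_val, upper_val)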
Edit: As #Silencer pointed out, my range was off. Fixed it.

Overlapping grayscale and RGB Images

I would like to overlap two images, one grayscale and one RGB image. I would like to impose the RGB image on top of the grayscale image, but ONLY for pixels greater than a certain value. I tried using the double function in MATLAB, but this seems to change the color scheme and I cannot recover the original RGB colors. What should I do in order to retain the original RGB image instead of mapping it to one of the MATLAB colormaps? Below is my attempt at superimposing:
pixelvalues = double(imread('hello.png'));
PixelInt = mean(pixelvalues,3);
I1 = ind2rgb(Brightfield(:,:,1), gray(256)); %Brightfield
I2 = ind2rgb(PixelInt, jet(256)); %RGB Image
imshow(I2,[])
[r,c,d] = size(I2);
I1 = I1(1:r,1:c,1:d);
% Replacing those pixels below threshold with Brightfield Image
threshold = 70;
I2R = I2(:,:,1); I2G = I2(:,:,2); I2B = I2(:,:,3);
I1R = I1(:,:,1); I1G = I1(:,:,2); I1B = I1(:,:,3);
I2R(PixelInt<threshold) = I1R(PixelInt<threshold);
I2G(PixelInt<threshold) = I1G(PixelInt<threshold);
I2B(PixelInt<threshold) = I1B(PixelInt<threshold);
I2(:,:,1) = I2R; I2(:,:,2) = I2G; I2(:,:,3) = I2B;
h = figure;
imshow(I2,[])
Original RGB Image:
Brightfield:
Overlay:
Is the content of pixelvalues what you show in your first image? If so, that image does not use a jet colormap. It has pink and white values above the red values, whereas jet stops at dark red at the upper limits. When you take the mean of those values and then generate a new RGB image with ind2rgb using the jet colormap, you're creating an inherently different image. You probably want to use pixelvalues directly in generating your overlay, like so:
% Load/create your starting images:
pixelvalues = imread('hello.png'); % Color overlay
I1 = repmat(Brightfield(:, :, 1), [1 1 3]); % Grayscale underlay
[r, c, d] = size(pixelvalues);
I1 = I1(1:r, 1:c, 1:d);
% Create image mask:
PixelInt = mean(double(pixelvalues), 3);
threshold = 70;
mask = repmat((PixelInt > threshold), [1 1 3]);
% Combine images:
I1(mask) = pixelvalues(mask);
imshow(I1);
Note that you may need to do some type conversions when loading/creating the starting images. I'm assuming 'hello.png' is a uint8 RGB image and Brightfield is of type uint8. If I load your first image as pixelvalues and your second image as I1, I get the following when running the above code:
Create a mask and use it to combine the images:
onionOrig = imread('onion.png');
onionGray = rgb2gray(onionOrig);
onionMask = ~(onionOrig(:,:,1)<100 & onionOrig(:,:,2)<100 & onionOrig(:,:,3)<100);
onionMasked(:,:,1) = double(onionOrig(:,:,1)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,2) = double(onionOrig(:,:,2)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,3) = double(onionOrig(:,:,3)) .* onionMask + double(onionGray) .* ~onionMask;
onionFinal = uint8(onionMasked);
imshow(onionFinal)

ROI is written in lighter colors than original picture

I'm trying to locate an object (here a PWB) on a picture.
First I do this by finding the largest contour. Then I want to write only this object to a new picture, so that in the future I can work with smaller pictures.
The problem, however, is that when I write out this ROI, the picture comes out lighter in color than the original one.
CODE:
Original = cv2.imread(picture_location)
image = cv2.imread(mask_location)
img = cv2.medianBlur(image, 29)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

dst = cv2.bitwise_and(Original, image)
roi = cv2.add(dst, Original)

ret, thresh = cv2.threshold(imgray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

area = 0
max_x = 0
max_y = 0
min_x = Original.shape[1]
min_y = Original.shape[0]

for i in contours:
    new_area = cv2.contourArea(i)
    if new_area > area:
        area = new_area
        cnt = i

x, y, w, h = cv2.boundingRect(cnt)
min_x = min(x, min_x)
min_y = min(y, min_y)
max_x = max(x + w, max_x)
max_y = max(y + h, max_y)

roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
Original = cv2.rectangle(Original, (x-10, y-10), (x+w+10, y+h+10), (0, 255, 0), 2)

# Writing down the images
cv2.imwrite('Pictures/PCB1/LocatedPCB.jpg', roi)
cv2.imwrite('Pictures/PCB1/LocatedPCBContour.jpg', Original)
Since I don't have 10 reputation yet I cannot post the pictures. I can however provide the links:
Original
Region of Interest
The main question is how do I get the software to write down the ROI in the exact same colour as the original picture?
I'm an electromechanical engineer, however, so I'm fairly new to this; remarks on the way I wrote my code would also be appreciated, if possible.
The problem is that you first compute roi = cv2.add(dst, Original), which lightens the picture,
and then crop from that lightened picture here:
roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
If you want to crop the original image, you should do:
roi = Original[min_y-10:max_y+10, min_x-10:max_x+10]
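One caveat with the ±10-pixel margin: if the detected object touches the image border, min_y - 10 or min_x - 10 can go negative, and negative indices silently wrap around in NumPy. A small guard (a sketch, using the variable names from the question) avoids that:

y0, y1 = max(min_y - 10, 0), min(max_y + 10, Original.shape[0])
x0, x1 = max(min_x - 10, 0), min(max_x + 10, Original.shape[1])
roi = Original[y0:y1, x0:x1]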
You can perhaps perform edge detection after blurring your image.
How to select the best parameters for Canny edge detection? SEE HERE.
lower = 46
upper = 93
edged = cv2.Canny(img, lower, upper)   # perform Canny edge detection on the blurred image

kernel = np.ones((5, 5), np.uint8)
dilate = cv2.morphologyEx(edged, cv2.MORPH_DILATE, kernel, 3)   # morphological dilation

# find external (parent) contours only; contours nested within another contour are ignored
_, contours, _ = cv2.findContours(dilate, cv2.RETR_EXTERNAL, 1)

max_area = 0
cc = 0
for i in range(len(contours)):   # find the contour with the maximum area
    if cv2.contourArea(contours[i]) > max_area:
        max_area = cv2.contourArea(contours[i])
        cc = i

cv2.drawContours(img, contours, cc, (0, 255, 0), 2)   # draw the contour having the maximum area
cv2.imshow('Contour of PCB', img)

x, y, w, h = cv2.boundingRect(contours[cc])   # straight bounding rectangle of the largest contour
crop_img = img1[y:y+h, x:x+w]   # crop the ROI (img1 is the original, unblurred image)
cv2.imshow('cropped PCB', crop_img)

Color only a segment of an image in Matlab

I'm trying to color only a segment of an image in Matlab. For example, I load an RGB image, then I obtain a mask with Otsu's method (graythresh). I want to keep the color only in the pixels that have value of 1 after applying im2bw with graythresh as the threshold. For example:
image = imread('peppers.png');
thr = graythresh(image);
bw = im2bw(image, thr);
With this code I obtain the following binary image:
My goal is to keep the color in the white pixels.
Thanks!
I have another suggestion on how to replace the pixels we don't care about. This works by creating linear indices for each of the slices where black pixels exist in the bw image. The summation with the result of find is done because bw is the size of just one "slice" of image, and this is how we get the indices for the other two slices.
Starting with MATLAB R2016b (implicit expansion):
image(find(~bw)+[0 numel(bw)*[1 2]]) = NaN;
In older versions:
image(bsxfun(@plus,find(~bw),[0 numel(bw)*[1 2]])) = NaN;
Then imshow(image) gives:
Note that NaN gets converted to 0 for integer classes.
Following the clarification that the other pixels should be kept in their gray version, see the code below:
% Load image:
img = imread('peppers.png');
% Create a grayscale version:
grayimg = rgb2gray(img);
% Segment image:
if ~verLessThan('matlab','9.0') && exist('imbinarize.m','file') == 2
    % R2016a onward:
    bw = imbinarize(grayimg);
    % Alternatively, work on just one of the color channels, e.g. red:
    % bw = imbinarize(img(:,:,1));
else
    % Before R2016a:
    thr = graythresh(grayimg);
    bw = im2bw(grayimg, thr);
end
output_img = repmat(grayimg,[1 1 3]);
colorpix = bsxfun(@plus,find(bw),[0 numel(bw)*[1 2]]);
output_img(colorpix) = img(colorpix);
figure; imshow(output_img);
The result when binarizing using only the red channel:
Your question is missing "and replace the rest with black". Here are two ways:
A compact solution: use bsxfun:
newImage = bsxfun(@times, Image, cast(bw, 'like', Image));
Although I am happy with the previous one, you can also take a look at this step-by-step approach:
% separate the RGB layers:
R = image(:,:,1);
G = image(:,:,2);
B = image(:,:,3);
% change the values to zero or your desired color wherever bw is false:
R(~bw) = 0;
G(~bw) = 0;
B(~bw) = 0;
% concatenate the results:
newImage = cat(3, R, G, B);
Which can give you different replacements for the black region:
UPDATE:
According to the comments, the false area of bw should be replaced with a grayscale version of the same input. This is how to achieve it:
image = imread('peppers.png');
thr = graythresh(image);
bw = im2bw(image, thr);
gr = rgb2gray(image); % generate grayscale image from RGB
newImage = image;     % initialize the output with the RGB image
newImage(repmat(~bw, 1, 1, 3)) = repmat(gr(~bw), 1, 1, 3); % substitute values
% figure; imshow(newImage)
With this result:

How to extract color shade from a given sample image to convert another image using color of sample image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here are the images:
input source image:
input map for desired output image
output image
You can use a technique called "histogram matching".
Basically, you use the histogram of your source image as a goal and transform the values of each input-map pixel to get the output histogram as close to the source as possible. You do this for each RGB channel of the image.
Here is my python code for that:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, normed=True)

    cdfsrc = imhist.cumsum()    # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)    # normalize

    cdftint = tinthist.cumsum()    # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)    # normalize

    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))

imsave("histnormresult.jpg", imres)
The output for you samples will look like that:
You could also try making the same in HSV colorspace - it might give better results.
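A minimal sketch of that HSV idea, assuming OpenCV is installed: wrap the per-channel mapping above in a helper, run it on the HSV planes instead of RGB, and convert back (imsrc and imtint are the arrays loaded above):

import cv2
import numpy as np

def match_channel(src, ref, nbr_bins=255):
    # same CDF-based mapping as above, for a single channel
    imhist, bins = np.histogram(src.flatten(), nbr_bins, density=True)
    refhist, _ = np.histogram(ref.flatten(), nbr_bins, density=True)
    cdfsrc = imhist.cumsum()
    cdfsrc = 255 * cdfsrc / cdfsrc[-1]
    cdfref = refhist.cumsum()
    cdfref = 255 * cdfref / cdfref[-1]
    mapped = np.interp(src.flatten(), bins[:-1], cdfsrc)
    return np.interp(mapped, cdfref, bins[:-1]).reshape(src.shape)

imsrc_hsv = cv2.cvtColor(imsrc, cv2.COLOR_RGB2HSV)
imtint_hsv = cv2.cvtColor(imtint, cv2.COLOR_RGB2HSV)
res_hsv = np.dstack([match_channel(imsrc_hsv[:, :, d], imtint_hsv[:, :, d])
                     for d in range(3)]).astype(np.uint8)
imres = cv2.cvtColor(res_hsv, cv2.COLOR_HSV2RGB)

Note that matching the H channel this way ignores that hue wraps around, so results can vary; matching only S and V while keeping H is a reasonable variation.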
I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import ImageFilter

# im1 is the sample image, already opened with PIL
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
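A minimal sketch of that last step with PIL and colorsys, assuming im2 is the target image (a PIL RGB image; the name is hypothetical) and max_hsv is the dominant color found above. It keeps each pixel's brightness but imposes the extracted hue and saturation:

import colorsys

h, s, _ = max_hsv
out = im2.convert('RGB')   # im2: target image, assumed opened with PIL
px = out.load()
for y in range(out.size[1]):
    for x in range(out.size[0]):
        r, g, b = tuple(c / 255. for c in px[x, y][:3])
        _, _, v = colorsys.rgb_to_hsv(r, g, b)        # keep the pixel's brightness
        nr, ng, nb = colorsys.hsv_to_rgb(h, s, v)     # impose extracted hue/saturation
        px[x, y] = tuple(int(c * 255) for c in (nr, ng, nb))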
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)

    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        lut = np.clip(lut, 0, 255).astype('uint8')   # cv2.LUT needs a 256-entry uint8 table
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans).astype('uint8')
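Usage would look something like this (file names are placeholders); the first argument is the image to modify, the second is the image whose colors serve as the reference:

import cv2

target = cv2.imread('target.jpg')
sample = cv2.imread('sample.jpg')
result = match_image_histograms(target, sample)
cv2.imwrite('result.jpg', result)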
obtain the average color from the color map
ignore saturated white/black colors
convert the light map to grayscale
change the dynamic range of the light map to match your desired output (I use the maximum dynamic range; you could instead compute the range of the color map and set it for the light map)
multiply the light map by the average color
This is how it looks:
And this is the C++ source code:
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;
// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    i=pic2.p[y][x].dd;
    if (i0>i) i0=i;
    if (i1<i) i1=i;
    }
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    i=pic2.p[y][x].dd;
    i=(i-i0)*767/(i1-i0);
    pic2.p[y][x].dd=i;
    }
// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
 for (x=0;x<pic0.xs;x++)
    {
    rr=BYTE(pic0.p[y][x].db[picture::_r]);
    gg=BYTE(pic0.p[y][x].db[picture::_g]);
    bb=BYTE(pic0.p[y][x].db[picture::_b]);
    i=rr+gg+bb;
    if (i<400) // ignore saturated colors (whiteish) 3*255=white
     if (i>16) // ignore too dark colors (blackish) 0=black
        {
        r+=rr;
        g+=gg;
        b+=bb;
        }
    }
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
 for (x=0;x<pic2.xs;x++)
    {
    a=DWORD(pic2.p[y][x].dd);
    rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
    gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
    bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
    }
I am using my own picture class, so here are some of its members:
xs,ys - size of the image in pixels
p[y][x].dd - pixel at (x,y) position as a 32-bit integer type
p[y][x].db[4] - pixel access by color bands (r,g,b,a)
[notes]
If this does not meet your needs, then please be more specific and add more images, because your current example is really not self-explanatory.
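For readers following along in Python, here is a rough NumPy/OpenCV translation of the same pipeline (my own sketch, not the author's code; file names are placeholders):

import cv2
import numpy as np

color_map = cv2.imread('color_map.png')   # source color
light_map = cv2.imread('light_map.png')   # source light map

# light map as intensity i = r+g+b, stretched to full dynamic range
i = light_map.sum(axis=2).astype(np.float64)
i = (i - i.min()) / (i.max() - i.min())   # now in 0..1

# average color of the color map, ignoring near-black and near-white pixels
s = color_map.sum(axis=2)
sel = (s > 16) & (s < 400)
avg = color_map[sel].sum(axis=0).astype(np.float64)
avg /= np.linalg.norm(avg)                # normalize to a unit vector

# recolor: intensity times the average color direction (767 = max of the r+g+b scale)
out = np.clip(i[..., None] * avg * 767.0, 0, 255).astype(np.uint8)
cv2.imwrite('output.png', out)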
Regarding the histogram-matching answer above, one thing to be careful with:
once the CDF reaches its maximum, the interpolation gets misled and will match your values incorrectly. To avoid this, you should provide the interpolation function with only the meaningful part of the CDF (before it reaches its maximum) and the corresponding bins. Here is the answer, adapted:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, normed=True)

    cdfsrc = imhist.cumsum()    # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)    # normalize

    cdftint = tinthist.cumsum()    # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)    # normalize

    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)

    if (cdftint == 255).sum() > 0:
        # cdftint was scaled to 255 above; interpolate only up to where it first saturates
        idx_max = np.where(cdftint == 255)[0][0]
        im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2, cdftint, bins[:-1])

    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))

imsave("histnormresult.jpg", imres)
Enjoy!
