OpenCV MOG2 background subtraction result of stable video has a lot of noise - OpenCV 3.0

I want to use this code:
import cv2

fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
cap = cv2.VideoCapture('drunker-1.mp4')
while True:
    grabbed, img = cap.read()
    if not grabbed:
        break
    ori = img.copy()
    gray = cv2.cvtColor(ori, cv2.COLOR_BGR2GRAY)
    img = fgbg.apply(gray)
    ret, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
to extract the human body from this video:
https://www.youtube.com/watch?v=Xvj4Ud-RKrM
but I got a result like this:
This is a complete mess, and I think it is caused by changing light and shadows, so how can I reduce this noise? Thanks in advance!

You can try blurring the image using GaussianBlur before background subtraction (the example below is OpenCV's Java API):
Imgproc.GaussianBlur(resize_blur_Img, resize_blur_Img, new Size(9, 9), 2, 2);
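A minimal sketch of the same idea in Python, dropped into the loop from the question (the (9, 9) kernel and sigma of 2 mirror the Java call above; they are starting values you may need to tune):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (9, 9), 2)  # smooth out sensor noise before subtraction
fgmask = fgbg.apply(gray)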

Related

How can I make the `cv2.imshow` output the same as the `plt.imshow` output?

import cv2
import matplotlib.pyplot as plt

# loading image
img0 = cv2.imread("image.png")
# converting to gray scale
gray = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
# remove noise
img = cv2.GaussianBlur(gray, (3, 3), 0)
# convolve with the derivative kernels
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)  # x
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)  # y
imgboth = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
plt.imshow(imgboth, cmap='gray')
plt.show()
cv2.imshow("img", cv2.resize(imgboth, (960, 540)))
cv2.waitKey(0)
cv2.destroyAllWindows()
[images: original image, plt.imshow output, cv2.imshow output]
import numpy as np

# ... (continuing from the code in the question)
canvas = imgboth.astype(np.float32)
canvas /= np.abs(imgboth).max()   # scale the data into [-1, 1]
canvas += 0.5                     # shift so zero maps to mid-gray
cv2.namedWindow("canvas", cv2.WINDOW_NORMAL)
cv2.imshow("canvas", canvas)
cv2.waitKey()
cv2.destroyWindow("canvas")
They only look different because you posted thumbnails, not the original-size image.
When you give imshow floating-point values, which you do because laplacian and sobelx are floating point, it assumes a range of 0.0 .. 1.0 as black .. white.
matplotlib automatically scales data; OpenCV's imshow doesn't. Both behaviors have pros and cons.
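If you want cv2.imshow to match matplotlib, a minimal sketch is to replicate matplotlib's autoscaling yourself before displaying (assuming imgboth from the code above):
lo, hi = imgboth.min(), imgboth.max()
scaled = (imgboth - lo) / (hi - lo)   # map the full data range onto 0.0 .. 1.0
cv2.imshow("scaled", scaled)
cv2.waitKey(0)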

Contrast Limited Adaptive Histogram Equalization in 360 images

I am currently applying the Contrast Limited Adaptive Histogram Equalization algorithm, together with a denoising algorithm, to my photos.
My problem is that I am working with 360 photos. As the contrast enhancement produces different values at the two edges, the seam line is highly noticeable when I join the photo. How can I mitigate that line? What changes should I make so that it is not noticeable and the algorithm is applied consistently?
Original Photo:
Code for the Contrast Limited Adaptive Histogram Equalization:
import cv2

# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)  # convert from BGR to LAB color space (`image` is the loaded BGR photo)
l, a, b = cv2.split(lab)                      # split into 3 channels
l2 = clahe.apply(l)                           # apply CLAHE to the L channel
lab = cv2.merge((l2, a, b))                   # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)   # convert from LAB back to BGR
Result:
360 performed:
The line of separation is highly noticeable because the algorithm does not take into account that the photo will be joined into a panorama later. What can I do?
Here's an answer in C++; you can probably convert it easily to Python/NumPy.
The idea is to use a border region before performing CLAHE and crop the image afterwards.
These are the subimage regions in the original image:
and they will be copied to the left/right of the image like this:
You can probably reduce the size of the border a lot:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("C:/data/SO_360.jpg");
    int borderSize = img.cols / 4;
    // make an image that can hold some border region
    cv::Mat borderImage = cv::Mat(cv::Size(img.cols + 2 * borderSize, img.rows), img.type());
    // posX, posY, width, height of the subimages
    cv::Rect leftBorderRegion = cv::Rect(0, 0, borderSize, borderImage.rows);
    cv::Rect rightBorderRegion = cv::Rect(borderImage.cols - borderSize, 0, borderSize, borderImage.rows);
    cv::Rect imgRegion = cv::Rect(borderSize, 0, img.cols, borderImage.rows);
    // original image regions to copy:
    cv::Rect left = cv::Rect(0, 0, borderSize, borderImage.rows);
    cv::Rect right = cv::Rect(img.cols - borderSize, 0, borderSize, img.rows);
    cv::Rect full = cv::Rect(0, 0, img.cols, img.rows);
    // perform the copying to the subimages (the left part of img goes to the right part of the border image):
    img(left).copyTo(borderImage(rightBorderRegion));
    img(right).copyTo(borderImage(leftBorderRegion));
    img.copyTo(borderImage(imgRegion));
    cv::imwrite("SO_360_border.jpg", borderImage);

    // CLAHE (Contrast Limited Adaptive Histogram Equalization), same parameters as the Python code in the question
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
    clahe->setClipLimit(1);
    clahe->setTilesGridSize(cv::Size(6, 6));
    cv::Mat lab;
    cv::cvtColor(borderImage, lab, cv::COLOR_BGR2Lab); // convert from BGR to LAB color space
    std::vector<cv::Mat> labChannels;                  // split into 3 channels
    cv::split(lab, labChannels);
    cv::Mat dst;
    clahe->apply(labChannels[0], dst);                 // apply CLAHE to the L channel
    labChannels[0] = dst;
    cv::merge(labChannels, lab);                       // merge channels
    cv::cvtColor(lab, dst, cv::COLOR_Lab2BGR);         // convert from LAB back to BGR
    cv::imwrite("SO_360_border_clahe.jpg", dst);

    // crop the image after performing CLAHE:
    cv::Mat cropped = dst(imgRegion).clone();
    cv::imwrite("SO_360_clahe.jpg", cropped);
}
Images:
Input as in your original post.
After creating the border:
After performing CLAHE (with border):
After cropping the CLAHE border image:
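For reference, a minimal Python/NumPy sketch of the same wrap-pad, CLAHE, crop idea (file names are placeholders; the clip limit and tile grid follow the question's values):
import cv2
import numpy as np

img = cv2.imread("SO_360.jpg")
border = img.shape[1] // 4
# wrap the panorama horizontally so each seam column sees its true neighbours
padded = np.hstack([img[:, -border:], img, img[:, :border]])
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
lab = cv2.cvtColor(padded, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
lab = cv2.merge((clahe.apply(l), a, b))
out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
# crop back to the original width
cropped = out[:, border:border + img.shape[1]]
cv2.imwrite("SO_360_clahe.jpg", cropped)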

Remove ink mark from paper using OpenCV

I have an image like this,
As you can see, there is a pen mark in the image. I want to remove that mark. How can I do it in OpenCV?
I tried converting the image to HSV, creating a mask for the blue range, and removing the mark using this code:
import cv2
import numpy as np

img = cv2.imread('image.png')  # the scanned page (file name is a placeholder)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)
res = cv2.bitwise_and(img, img, mask=mask)
It is not working as needed: all the text gets removed as well. How can I fix this?
You can threshold the first channel (blue) of the image. It looks like this:
Here the difference in pixel values between the ink mark and the letters is clearly visible. After thresholding it looks like this:
The ink mark can now be removed via closing. However, closing will reduce the size of the letters as well, so erosion is performed, followed by a bitwise OR, to obtain our mask without the ink mark.
If, however, you want the letters to look like the original image, you can store the mask in a numpy array of 255s and perform a bitwise OR with the original image.
The full code I have used is:
import cv2
import numpy as np

img = cv2.imread('ink_mark.png')
wimg = img[:, :, 0]  # blue channel
ret, thresh = cv2.threshold(wimg, 100, 255, cv2.THRESH_BINARY)
kernel = np.ones((7, 7), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
erosion = cv2.erode(closing, kernel, iterations=1)
mask = cv2.bitwise_or(erosion, thresh)
white = np.ones(img.shape, np.uint8) * 255
white[:, :, 0] = mask
white[:, :, 1] = mask
white[:, :, 2] = mask
result = cv2.bitwise_or(img, white)
cv2.imshow('image', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Try using inpaint. First create a mask of the ink:
import cv2
import numpy as np

img = cv2.imread('ink_mark.png')  # the scanned page (file name is a placeholder)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_blue = np.array([100, 50, 50])
upper_blue = np.array([150, 255, 255])
kernel = np.ones((5, 5), np.uint8)
mask = cv2.inRange(hsv, lower_blue, upper_blue)
mask = cv2.dilate(mask, kernel, iterations=4)  # grow the mask so it fully covers the ink
Use the inpaint function to paint in the areas where the mask is white. OpenCV will throw away the original pixels there and guess which pixels should go in their place.
dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
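OpenCV also ships a Navier-Stokes based variant; a quick sketch for comparing the two (assuming img and mask from above):
dst_telea = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)  # Telea's fast marching method
dst_ns = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)        # Navier-Stokes based method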

How Can I only keep text with specific color from image via opencv and python?

I have an invoice image with some overlapping text, which makes trouble for later processing, and all I need is the text in black, so I want to remove the text in other colors.
Is there any way to achieve this?
The image is attached as an example.
I have tried to solve it with OpenCV, but I still can't solve it:
import numpy as np
import cv2

img = cv2.imread('11.png')
lower = np.array([150, 150, 150])
upper = np.array([200, 200, 200])
mask = cv2.inRange(img, lower, upper)
res = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('22.png', res)
image with multiple colors: https://i.stack.imgur.com/nWQrV.png
The text is darker and less saturated. As suggested by @J.D., the HSV color space is good, but his range is wrong.
In OpenCV, H ranges over [0, 180], while S and V range over [0, 255].
Here is a colormap I made last year; I think it's helpful.
(1) Use cv2.inRange
(2) Just threshold the V(HSV) channel:
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
(3) Just threshold the S(HSV) channel:
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
The result:
The demo code:
# 2018/12/30 22:21
# 2018/12/30 23:25
import cv2

img = cv2.imread("test.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# (1) use cv2.inRange on the full HSV image
mask = cv2.inRange(hsv, (0, 0, 0), (180, 50, 130))
dst1 = cv2.bitwise_and(img, img, mask=mask)

# (2) threshold the V (value) channel
th, threshed = cv2.threshold(v, 150, 255, cv2.THRESH_BINARY_INV)
dst2 = cv2.bitwise_and(img, img, mask=threshed)

# (3) threshold the S (saturation) channel
th, threshed2 = cv2.threshold(s, 30, 255, cv2.THRESH_BINARY_INV)
dst3 = cv2.bitwise_and(img, img, mask=threshed2)

cv2.imwrite("dst1.png", dst1)
cv2.imwrite("dst2.png", dst2)
cv2.imwrite("dst3.png", dst3)
Converting to the HSV colorspace makes selecting colors easier.
The code below does what you want.
Result:
import numpy as np
import cv2

# load image
img = cv2.imread("image.png")
# convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0, 0, 0])
upper_val = np.array([179, 100, 130])
# threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# bitwise-AND mask and original image
res = cv2.bitwise_and(img, img, mask=mask)
# invert the mask to get black letters on a white background
res2 = cv2.bitwise_not(mask)
# display images
cv2.imshow("img", res)
cv2.imshow("img2", res2)
cv2.waitKey(0)
cv2.destroyAllWindows()
To change the level of black selected, tweak upper_val. The third number (currently 130) is the Value: higher allows lighter shades. The second number (currently 100) is the Saturation: lower allows less color.
Read more about the HSV colorspace here.
I always find the image below very helpful. The bottom 'disc' is all black. As you move up in Value, lighter pixels are also selected. The pixels with low saturation stay shades of gray up to white (the center); the pixels with high saturation get colored (the edge). That's why you tweak those values.
Edit: As @Silencer pointed out, my range was off. Fixed it.
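If you're unsure where a given color lands in OpenCV's HSV space while tuning lower_val and upper_val, a quick probe (the BGR pixel value here is just an example):
import numpy as np
import cv2

px = np.uint8([[[40, 40, 40]]])             # one dark-gray pixel in BGR
print(cv2.cvtColor(px, cv2.COLOR_BGR2HSV))  # its H, S, V under OpenCV's ranges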

Playing image sequences in MATLAB

Greetings to all of you.
I have this somewhat frustrating problem, and I hope that you can kindly help me solve it.
I am developing a human tracking system in MATLAB and would like to show the results in an appealing GUI (also in MATLAB, using GUIDE).
There is a main window where an image sequence of about 2500 grayscale images of size 320x240 would be played like a video, with the humans nicely outlined in them.
The challenge is that these images need a bit of processing (detecting and outlining the humans) before being shown in the window.
Now, is it possible to display one set of images while at the same time processing another set to be shown afterwards?
I would very much prefer it to play like a normal video, but I guess that would be somewhat ambitious.
Here is an example showing a scenario similar to what you described. This was adapted from the demo I mentioned in the comments.
function ImgSeqDemo()
    figure()
    for i = 1:10
        %# read image
        img = imread( sprintf('AT3_1m4_%02d.tif', i) );
        %# process image to extract some object of interest
        [BW, rect] = detectLargestCell(img);
        %# show image
        imshow(img), hold on
        %# overlay mask in red color showing the object
        RGB = cat(3, BW.*255, zeros(size(BW),'uint8'), zeros(size(BW),'uint8'));
        hImg = imshow(RGB); set(hImg, 'AlphaData', 0.5);
        %# show bounding rectangle
        rectangle('Position', rect, 'EdgeColor', 'g');
        hold off
        drawnow
    end
end
Here is the processing function used above. In your case, you would insert your algorithm instead:
function [BW, rect] = detectLargestCell(I)
    %# OUTPUT
    %#   BW    binary mask of largest detected cell
    %#   rect  bounding box of largest detected cell

    %# find components
    [~, threshold] = edge(I, 'sobel');
    BW = edge(I, 'sobel', threshold*0.5);
    se90 = strel('line', 3, 90);
    se0 = strel('line', 3, 0);
    BW = imdilate(BW, [se90 se0]);
    BW = imclearborder(BW, 4);
    BW = bwareaopen(BW, 200);
    BW = bwmorph(BW, 'close');
    BW = imfill(BW, 'holes');

    %# keep largest component
    CC = bwconncomp(BW);
    stats = regionprops(CC, {'Area','BoundingBox'});
    [~, idx] = max([stats.Area]);
    rect = stats(idx).BoundingBox;
    BW(:) = 0;
    BW(CC.PixelIdxList{idx}) = 1;
end
