Gaussian blurring with OpenCV: only blurring a subregion of an image?

Is it possible to only blur a subregion of an image, instead of the whole image with OpenCV, to save some computational cost?
EDIT: One important point is that when blurring the boundary of the subregion, one should use the existing image content as much as possible; only when the convolution exceeds the boundary of the original image should extrapolation or another artificial border condition be used.

To blur the whole image, assuming you want to overwrite the original (in-place filtering is supported by cv::GaussianBlur), you would do something like:
cv::GaussianBlur(image, image, Size(0, 0), 4);
To blur just a region, use Mat::operator()(const Rect& roi) to extract it:
cv::Rect region(x, y, w, h);
cv::GaussianBlur(image(region), image(region), Size(0, 0), 4);
Or if you want the blurred output in a separate image:
cv::Rect region(x, y, w, h);
cv::Mat blurred_region;
cv::GaussianBlur(image(region), blurred_region, Size(0, 0), 4);
The calls above use the default border handling (BORDER_DEFAULT, i.e. BORDER_REFLECT_101), which mirrors pixels at the image border rather than assuming zeros.
For a submatrix created with image(region), the filter will by default also read the parent-image pixels that lie just outside the region; you can force it to ignore them by adding BORDER_ISOLATED to the border type.
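If you want the filter to treat the ROI as a standalone image, you can pass the flag explicitly; a minimal sketch (same image and region as above):
cv::GaussianBlur(image(region), image(region), Size(0, 0), 4, 0, BORDER_DEFAULT | BORDER_ISOLATED);
To see which behaviour you actually get, compare the result from above with an explicitly padded version: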
const int bsize = 10;
cv::Rect region(x, y, w, h);
cv::Rect padded_region(x - bsize, y - bsize, w + 2 * bsize, h + 2 * bsize);
cv::Mat blurred_padded_region;
cv::GaussianBlur(image(padded_region), blurred_padded_region, Size(0, 0), 4);
cv::Mat blurred_region = blurred_padded_region(cv::Rect(bsize, bsize, w, h));
// and you can then copy that back into the original image if you want:
blurred_region.copyTo(image(region));
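One caveat with the padded version (a small sketch, reusing the names from above): the padded rectangle must stay inside the image, so it is worth clamping it with cv::Rect's intersection operator; if it does get clipped, the offsets used to crop blurred_region back out need adjusting accordingly.
// Clamp the padded rectangle to the image bounds before taking the submatrix
padded_region &= cv::Rect(0, 0, image.cols, image.rows);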

Here's how to do it in Python. The idea is to select an ROI, blur it, then insert it back into the image:
import cv2
# Read in image
image = cv2.imread('1.png')
# Create ROI coordinates
topLeft = (60, 140)
bottomRight = (340, 250)
x, y = topLeft[0], topLeft[1]
w, h = bottomRight[0] - topLeft[0], bottomRight[1] - topLeft[1]
# Grab ROI with Numpy slicing and blur
ROI = image[y:y+h, x:x+w]
blur = cv2.GaussianBlur(ROI, (51,51), 0)
# Insert ROI back into image
image[y:y+h, x:x+w] = blur
cv2.imshow('blur', blur)
cv2.imshow('image', image)
cv2.waitKey()
Before -> After

Yes, it is possible to blur a Region Of Interest in OpenCV. This example uses the OpenCV library for Processing:
size( 120, 160 );
OpenCV opencv = new OpenCV(this);
opencv.loadImage("myPicture.jpg");
opencv.ROI( 60, 0, 60, 160 );
opencv.blur( OpenCV.BLUR, 13 );
image( opencv.image(), 0, 0 );
For more information, check out this link.
Good luck,

If you are using JavaCV (provided by Bytedeco),
you can do it this way. It will only blur the particular ROI.
Mat src = imread("xyz.jpg",IMREAD_COLOR);
Rect rect = new Rect(50,50,src.size().width()/3,100);
GaussianBlur(new Mat(src, rect), new Mat(src, rect), new Size(23,23), 30);

Related

Contrast Limited Adaptive Histogram Equalization in 360 images

I am currently applying the Contrast Limited Adaptive Histogram Equalization algorithm together with a denoising algorithm.
My problem is that I am working with 360° photos. Because the contrast enhancement produces different values at the two edges, the seam line is highly noticeable when I stitch the photo. How can I mitigate that line? What changes should I make so that it is not noticeable and the algorithm is applied consistently?
Original Photo:
Code for Contrast Limited Adaptive Histogram Equalization:
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=1., tileGridSize=(6, 6))
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB) # convert from BGR to LAB color space
l, a, b = cv2.split(lab) # split on 3 different channels
l2 = clahe.apply(l) # apply CLAHE to the L-channel
lab = cv2.merge((l2, a, b)) # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
Result:
Stitched 360° result:
The seam line is highly noticeable because the processing does not take into account that the photo is later joined into a 360° panorama. What can I do?
Here's an answer in C++; you can probably convert it easily to Python/numpy.
The idea is to use a border region before performing CLAHE and crop the image afterwards.
These are the subimage regions in the original image:
and they will be copied to the left/right of the border image like this:
You can probably make the border much smaller:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("C:/data/SO_360.jpg");
    int borderSize = img.cols / 4;
    // make an image that has room for a border region on each side
    cv::Mat borderImage = cv::Mat(cv::Size(img.cols + 2 * borderSize, img.rows), img.type());
    // posX, posY, width, height of the subimages
    cv::Rect leftBorderRegion = cv::Rect(0, 0, borderSize, borderImage.rows);
    cv::Rect rightBorderRegion = cv::Rect(borderImage.cols - borderSize, 0, borderSize, borderImage.rows);
    cv::Rect imgRegion = cv::Rect(borderSize, 0, img.cols, borderImage.rows);
    // original image regions to copy:
    cv::Rect left = cv::Rect(0, 0, borderSize, borderImage.rows);
    cv::Rect right = cv::Rect(img.cols - borderSize, 0, borderSize, img.rows);
    cv::Rect full = cv::Rect(0, 0, img.cols, img.rows);
    // perform copying to the subimages (the left part of img goes to the right part of the border image):
    img(left).copyTo(borderImage(rightBorderRegion));
    img(right).copyTo(borderImage(leftBorderRegion));
    img.copyTo(borderImage(imgRegion));
    cv::imwrite("SO_360_border.jpg", borderImage);
    // # CLAHE (Contrast Limited Adaptive Histogram Equalization)
    // clahe = cv2.createCLAHE(clipLimit = 1., tileGridSize = (6, 6))
    // apply the CLAHE algorithm to the L channel
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
    clahe->setClipLimit(1);
    clahe->setTilesGridSize(cv::Size(6, 6));
    cv::Mat lab;
    cv::cvtColor(borderImage, lab, cv::COLOR_BGR2Lab); // # convert from BGR to LAB color space
    std::vector<cv::Mat> labChannels; // l, a, b = cv2.split(lab) # split on 3 different channels
    cv::split(lab, labChannels);
    // l2 = clahe.apply(l) # apply CLAHE to the L-channel
    cv::Mat dst;
    clahe->apply(labChannels[0], dst);
    labChannels[0] = dst;
    // lab = cv2.merge((l2, a, b)) # merge channels
    cv::merge(labChannels, lab);
    // img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
    cv::cvtColor(lab, dst, cv::COLOR_Lab2BGR);
    cv::imwrite("SO_360_border_clahe.jpg", dst);
    // crop the image after performing CLAHE:
    cv::Mat cropped = dst(imgRegion).clone();
    cv::imwrite("SO_360_clahe.jpg", cropped);
    return 0;
}
Images:
input as in your original post.
After creating the border:
After performing CLAHE (with border):
After cropping the CLAHE-border-image:

Rotate image in Matlab about arbitrary points

I am trying to rotate an image in Matlab about an arbitrary set of points.
So far, I have used imrotate, but it looks like imrotate only rotates about the center.
Is there a nice way of doing this without first padding the image and then using imrotate?
Thank you
The "nice way" is using imwarp
Building the transformation matrix is a little tricky.
I figured out how to build it from the following question: Matlab image rotation
The transformation supports rotation, translation and zoom.
Parameters:
(x0, y0) is the center point that you rotate around.
phi is the rotation angle.
sx, sy are the horizontal and vertical zoom factors (set both to 1 for no zoom).
W and H are the width and height of the input (and output) images.
Building 3x3 transformation matrix:
T = [sx*cos(phi), -sx*sin(phi), 0
sy*sin(phi), sy*cos(phi), 0
(W+1)/2-((sx*x0*cos(phi))+(sy*y0*sin(phi))), (H+1)/2+((sx*x0*sin(phi))-(sy*y0*cos(phi))), 1];
Using imwarp:
tform = affine2d(T);
J = imwarp(I, tform, 'OutputView', imref2d([H, W]), 'Interp', 'cubic');
Here is a complete executable code sample:
I = imresize(imread('peppers.png'), 0.5); %I is the input image
[H, W, ~] = size(I); %Height and Width of I
phi = 120*pi/180; %Rotate 120 degrees
%Zoom coefficients
sx = 1;
sy = 1;
%Center point (the point that the image is rotated around).
x0 = (W+1)/2 + 50;
y0 = (H+1)/2 + 20;
%Draw a white cross at the center point of the input image.
I(y0-0.5:y0+0.5, x0-19.5:x0+19.5, :) = 255;
I(y0-19.5:y0+19.5, x0-0.5:x0+0.5, :) = 255;
%Build transformation matrix.
T = [sx*cos(phi), -sx*sin(phi), 0
sy*sin(phi), sy*cos(phi), 0
(W+1)/2-((sx*x0*cos(phi))+(sy*y0*sin(phi))), (H+1)/2+((sx*x0*sin(phi))-(sy*y0*cos(phi))), 1];
tform = affine2d(T);
J = imwarp(I, tform, 'OutputView', imref2d([H, W]), 'Interp', 'cubic');
%Draw black cross at the center of the output image:
J(end/2:end/2+1, end/2-15:end/2+15, :) = 0;
J(end/2-15:end/2+15, end/2:end/2+1, :) = 0;
%Shows that the center of the output image is the point that the image was rotated around.
figure;imshow(J)
Input image:
Output image:
Note:
An important advantage over other methods (like imrotate after padding) is that the center coordinates don't have to be integer values.
You can rotate around (100.4, 80.7), for example.

Colorbar resizes the subplots

I'm trying to plot 3 images side by side in MATLAB using subplot:
maxValue = 9;
minValue = 5;
figure(1)
subplot(1,3,1);
imshow(im1);
axis equal;
subplot(1,3,2);
imagesc(im2);colorbar;
caxis([minValue maxValue])
axis equal;
subplot(1,3,3);
imagesc(im3);colorbar;
caxis([minValue maxValue])
axis equal;
but the result looks like this:
Apparently the colorbar is resizing the image. How can I make all 3 images the same size and the colorbar fit the size of the image?
Your image is resized to maintain its aspect ratio according to the available space.
Use axis normal; for subplot(1,3,1) instead of axis equal.
You might need to maximise the figure window as well.
For im1 = imread('peppers.png');, the result is:
Here is what I ended up doing:
fig = figure(1);
set(fig, 'Position', [52 529 1869 445]); % Resize the image
subplot(1,3,1); % Add a subplot
subaxis(1,3,1, 'Spacing', 0.03, 'Padding', 0, 'Margin', 0); % Remove whitespace from subplot
imshow(im);
axis equal; % Use undistorted images
subplot(1,3,2);
subaxis(1,3,2, 'Spacing', 0.03, 'Padding', 0, 'Margin', 0);
imagesc(depth_gt);colorbar;
caxis([minValue maxValue])
axis equal;
subplot(1,3,3);
subaxis(1,3,3, 'Spacing', 0.03, 'Padding', 0, 'Margin', 0);
imagesc(depth_pred);colorbar;
caxis([minValue maxValue])
axis equal;
You can get this 'Position' by manually resizing the figure window and then printing the output of fig in the MATLAB command window.

Align 2 images based on Hough Lines with openCV

I have 2 (aerial) images, taken from a slightly different angle:
image 1:
image2:
I need to rescale image1 in the horizontal direction to align it to image2. Without any modification, both images placed next to each other look like this:
This is the desired result: (made with photoshop)
In Photoshop, I took the right half of image1 and scaled it down horizontally a little bit. I did the same for the left half of image1, where I had to scale slightly more.
I would like to know how I can accomplish this using OpenCV, by using the Hough Line Transform. I have already started drawing Hough lines, but I have no idea how to do the transform to make the Hough lines match:
Here's my C++ code (called from Objective-C):
cv::Mat image1 = [im1 CVMat3];
cv::Mat gray_image1;
// Convert to Grayscale
cvtColor( image1, gray_image1, CV_RGB2GRAY );
cv::Mat dst1, cdst;
Canny(image1, dst1, 40, 90, 3);
double minLineLength = 0;
double maxLineGap = 10;
std::vector<cv::Vec2f> lines;
// detect lines
cv::HoughLines(dst1, lines, 1, CV_PI/180, 90, minLineLength,maxLineGap );
for( size_t i = 0; i < lines.size(); i++ ) {
    float rho = lines[i][0], theta = lines[i][1];
    if( theta > CV_PI/180*70 && theta < CV_PI/180*110 ) {
        cv::Point pt1, pt2;
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        pt1.x = cvRound(x0 + 1000*(-b));
        pt1.y = cvRound(y0 + 1000*(a));
        pt2.x = cvRound(x0 - 1000*(-b));
        pt2.y = cvRound(y0 - 1000*(a));
        line( image1, pt1, pt2, cvScalar(10,100,255), 3, CV_AA);
    }
}
Some help would be really appreciated :-). Thanks in advance.

pupil detection using opencv, with infrared image

I am trying to detect the pupil in an infrared image and calculate the center of the pupil.
In my setup, I used a camera sensitive to infrared light, added a visible-light filter to the lens, and placed two infrared LEDs around the camera.
However, the image I got is blurry and not very clear; maybe this is caused by the low resolution of the camera, whose maximum is about 700x500.
In the processing, the first thing I did was convert this RGB image to a grayscale image; however, the result is terrible and the detection finds nothing.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // load image
    cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
    cvNamedWindow("original");
    cv::imshow("original", src);
    cv::waitKey(10);
    if (src.empty())
    {
        std::cout << "failed to find the image";
        return -1;
    }
    // Invert the source image and convert to grayscale
    cv::Mat gray;
    cv::cvtColor(~src, gray, CV_BGR2GRAY);
    cv::imshow("image1", gray);
    cv::waitKey(10);
    // Convert to binary image by thresholding it
    cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);
    cv::imshow("image2", gray);
    cv::waitKey(10);
    // Find all contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    // Fill holes in each contour
    cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
    cv::imshow("image3", gray);
    cv::waitKey(10);
    for (int i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect rect = cv::boundingRect(contours[i]);
        int radius = rect.width / 2;
        // If the contour is big enough and has a round shape,
        // then it is the pupil
        if (area >= 800 &&
            std::abs(1 - ((double)rect.width / (double)rect.height)) <= 0.3 &&
            std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.3)
        {
            cv::circle(src, cv::Point(rect.x + radius, rect.y + radius), radius, CV_RGB(255, 0, 0), 2);
        }
    }
    cv::imshow("image", src);
    cvWaitKey(0);
    return 0;
}
When the original image was converted, the resulting gray image was terrible; does anyone know a better solution? I am completely new to this. If you have any comments on the rest of the code for finding the circle, please tell me. I also need to extract the positions of the two glints (the light points) in the original image; does anyone have an idea?
Thanks.
Try equalizing and filtering your source image before thresholding it ;)
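A minimal sketch of that idea in C++ (the filename and kernel size are placeholders; the threshold of 220 just mirrors the value in the question and will likely need retuning after equalization):
#include <opencv2/opencv.hpp>

int main()
{
    // Invert and convert to grayscale as before, but equalize and smooth
    // the image before thresholding so the pupil stands out more clearly.
    cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
    cv::Mat gray, binary;
    cv::cvtColor(~src, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                     // spread out the intensity histogram
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);  // suppress sensor noise before thresholding
    cv::threshold(gray, binary, 220, 255, cv::THRESH_BINARY);
    cv::imshow("binary", binary);
    cv::waitKey(0);
    return 0;
}
cv::THRESH_OTSU could also be worth a try instead of the fixed threshold.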
