ROI is written in lighter colors than the original picture

I'm trying to locate an object (here a PWB, a printed wiring board) in a picture.
First I do this by finding the largest contour. Then I want to write just this object out to a new picture so that I can work on smaller pictures in the future.
The problem, however, is that when I write out this ROI, the picture comes out lighter in color than the original.
CODE:
import cv2

Original = cv2.imread(picture_location)
image = cv2.imread(mask_location)
img = cv2.medianBlur(image, 29)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dst = cv2.bitwise_and(Original, image)
roi = cv2.add(dst, Original)
ret, thresh = cv2.threshold(imgray, 127, 255, 0)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
area = 0
max_x = 0
max_y = 0
min_x = Original.shape[1]
min_y = Original.shape[0]
for i in contours:
    new_area = cv2.contourArea(i)
    if new_area > area:
        area = new_area
        cnt = i
x, y, w, h = cv2.boundingRect(cnt)
min_x = min(x, min_x)
min_y = min(y, min_y)
max_x = max(x+w, max_x)
max_y = max(y+h, max_y)
roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
Original = cv2.rectangle(Original, (x-10, y-10), (x+w+10, y+h+10), (0, 255, 0), 2)
# Write out the images
cv2.imwrite('Pictures/PCB1/LocatedPCB.jpg', roi)
cv2.imwrite('Pictures/PCB1/LocatedPCBContour.jpg', Original)
Since I don't have 10 reputation yet, I cannot post the pictures, but I can provide the links:
Original
Region of Interest
The main question is: how do I get the software to write the ROI in exactly the same colors as the original picture?
I'm an electromechanical engineer, though, so I'm fairly new to this; remarks on the way I wrote my code would also be appreciated if possible.

The problem is that you first brighten the picture with roi = cv2.add(dst, Original) and then, at the end, crop from that brightened picture here:
roi = roi[min_y-10:max_y+10, min_x-10:max_x+10]
If you want to crop the original image, you should instead do:
roi = Original[min_y-10:max_y+10, min_x-10:max_x+10]
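Note that the -10/+10 padding can run past the image border for objects near the edge. A minimal sketch that clamps the crop window (the variable names are taken from your code; the clamping itself is my addition, not part of the original answer):
h, w = Original.shape[:2]
y0 = max(min_y - 10, 0)
y1 = min(max_y + 10, h)
x0 = max(min_x - 10, 0)
x1 = min(max_x + 10, w)
roi = Original[y0:y1, x0:x1]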

You could also perform edge detection after blurring your image.
(For how to select the best parameters for Canny edge detection, SEE HERE.)
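If you would rather not hand-tune the two thresholds (the code below hardcodes lower = 46 and upper = 93), a common median-based heuristic gives a reasonable starting point. This is a general rule of thumb, not part of the original answer:
import cv2
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    # Pick Canny thresholds in a band around the median intensity of the image.
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return lower, upper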
lower = 46
upper = 93
edged = cv2.Canny(img, lower, upper)  # Canny edge detection on the blurred image
kernel = np.ones((5, 5), np.uint8)
dilate = cv2.morphologyEx(edged, cv2.MORPH_DILATE, kernel, iterations=3)  # morphological dilation
_, contours, _ = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # RETR_EXTERNAL finds only the outermost (parent) contours, ignoring contours nested within another contour
max_area = 0
cc = 0
for i in range(len(contours)):  # find the contour with the maximum area
    if cv2.contourArea(contours[i]) > max_area:
        max_area = cv2.contourArea(contours[i])
        cc = i
cv2.drawContours(img, [contours[cc]], -1, (0, 255, 0), 2)  # draw the contour having the maximum area
cv2.imshow('Contour of PCB', img)
x, y, w, h = cv2.boundingRect(contours[cc])  # straight bounding rectangle around the max-area contour
crop_img = img[y:y+h, x:x+w]  # crop the ROI using the coordinates of the bounding rectangle
cv2.imshow('Cropped PCB', crop_img)
cv2.waitKey(0)

Related

How to blur an image in one specific direction in Matlab?

I have an image and I would like to blur it in one specific direction and distance using Matlab.
I found out there is a filter called fspecial('motion',len,theta).
Here is an example:
I = imread('cameraman.tif');
imshow(I);
H = fspecial('motion',20,45);
MotionBlur = imfilter(I,H,'replicate');
imshow(MotionBlur);
However, the blurred picture is blurred in two directions! In this case, 225 and 45 degrees.
What should I do in order to blur it in just one specific direction (e.g. 45 degrees) and not both?
I think you want what's called a "comet" kernel. I'm not sure what kernel is used for the "motion" blur, but I'd guess that it's symmetrical based on the image you provided.
Here is some code to play with that applies the comet kernel in one direction. You'll have to change things around if you want an arbitrary angle. You can see from the output that it's smearing in one direction, since there is a black band on only one side (due to the lack of pixels there).
L = 5; % kernel width
sigma=0.2; % kernel smoothness
I = imread('cameraman.tif');
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
H1 = exp((-sigma.*X.^2)+(-sigma.*Y.^2));
kernel = H1/sum((H1(:)));
Hflag = double((X>0));
comet_kernel = Hflag.*H1;
comet_kernel=comet_kernel/sum(comet_kernel(:));
smearedImage = conv2(double(I),comet_kernel,'same');
imshow(smearedImage,[]);
Updated code: This will apply an arbitrary rotation to the comet kernel. Note also the difference between sigma in the previous example and sx and sy here, which control the length and width parameters of the kernel, as suggested by Andras in the comments.
L = 5; % kernel width
sx=3;
sy=10;
theta=0;
I = imread('cameraman.tif');
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
rX = X.*cos(theta)-Y.*sin(theta);
rY = X.*sin(theta)+Y.*cos(theta);
H1 = exp(-((rX./sx).^2)-((rY./sy).^2));
Hflag = double((0.*rX+rY)>0);
H1 = H1.*Hflag;
comet_kernel = H1/sum((H1(:)));
smearedImage = conv2(double(I),comet_kernel,'same');
imshow(smearedImage,[]);
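In equation form, the kernel the updated code builds is a rotated anisotropic Gaussian kept only on one side of the rotation axis, then normalized to unit sum (this is just a restatement of the code above):
$$r_x = x\cos\theta - y\sin\theta, \qquad r_y = x\sin\theta + y\cos\theta$$
$$H(x,y) \propto \exp\!\left(-\left(\frac{r_x}{s_x}\right)^{2} - \left(\frac{r_y}{s_y}\right)^{2}\right)\cdot\mathbf{1}[r_y > 0]$$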
Based on Anger Density's answer I wrote this code that solves my problem completely:
L = 10; % kernel width
sx=0.1;
sy=100;
THETA = ([0,45,90,135,180,225,270,320,360])*pi/180;
for i = 1:length(THETA)
    theta = (THETA(i)+pi)*-1;
    I = imread('cameraman.tif');
    x = -L:1.0:L;
    [X,Y] = meshgrid(x,x);
    rX = X.*cos(theta)-Y.*sin(theta);
    rY = X.*sin(theta)+Y.*cos(theta);
    H1 = exp(-((rX./sx).^2)-((rY./sy).^2));
    Hflag = double((0.*rX+rY)>0);
    H1 = H1.*Hflag;
    comet_kernel = H1/sum((H1(:)));
    smearedImage = conv2(double(I),comet_kernel,'same');
    % Fix edges
    smearedImage(:,[1:L, end-L:end]) = I(:,[1:L, end-L:end]); % left/right edges
    smearedImage([1:L, end-L:end], :) = I([1:L, end-L:end], :); % top/bottom edges
    % Keep only the inner blur
    smearedImage(L:end-L,L:end-L) = min(smearedImage(L:end-L,L:end-L),double(I(L:end-L,L:end-L)));
    figure
    imshow(smearedImage,[]);
    title(num2str(THETA(i)*180/pi))
    set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
end

Fast Radial Symmetry Transform (FRST) implementation (python) results in unusual cross-hair looking artifacts

I am trying to implement the FRST in Python to detect the centroids of elliptical objects (e.g. cells in microscopy images), but my implementation does not find the seed points (roughly the center points) of the elliptical objects. This is an attempt to reproduce the FRST used in "Segmentation of Overlapping Elliptical Objects in Silhouette Images" (https://ieeexplore.ieee.org/document/7300433). I don't know why I get these artifacts. Interestingly, the cross patterns all point in the same direction for each object. Any pointer in the right direction toward reproducing the paper's result (just finding the seed points) would be most welcome.
Original Paper: A Fast Radial Symmetry Transform for Detecting Points of Interest by Loy and Zelinsky (ECCV 2002)
I have also tried the pre-existing Python package for FRST: https://pypi.org/project/frst/. It somehow produces the same artifacts, which is odd.
First image: Original Image
Second image: Sobel-operated Image
Third image: Magnitude Projection Image
Fourth image: Magnitude Projection Image with positively affected pixels only
Fifth image: FRST'd image: end-product with original image overlaid (shadowed)
Sixth image: FRST'd image by the pre-existing python package with original image overlaid (shadowed).
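For reference, the quantity the code below computes for a single radius n follows the paper's definition (my paraphrase, writing O_n and M_n for the orientation and magnitude projection images):
$$F_n(p) = \frac{M_n(p)}{k_n}\left(\frac{|\tilde O_n(p)|}{k_n}\right)^{\alpha}, \qquad S_n = F_n * A_n$$
where $\tilde O_n$ is $O_n$ capped at $k_n$ and $A_n$ is a Gaussian with standard deviation $0.25\,n$ (the stdfactor parameter listed after the code).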
from scipy.ndimage import gaussian_filter
import numpy as np
from scipy.signal import convolve

# Get orientation and magnitude projection images
def get_proj_img(image, radius):
    workingDims = tuple((e + 2*radius) for e in image.shape)
    h, w = image.shape
    ori_img = np.zeros(workingDims)  # Orientation Projection Image
    mag_img = np.zeros(workingDims)  # Magnitude Projection Image

    # Kernels for the Sobel operator
    a1 = np.matrix([1, 2, 1])
    a2 = np.matrix([-1, 0, 1])
    Kx = a1.T * a2
    Ky = a2.T * a1

    # Apply the Sobel operator
    sobel_x = convolve(image, Kx)
    sobel_y = convolve(image, Ky)
    sobel_norms = np.hypot(sobel_x, sobel_y)

    # Distances to afpx, afpy (affected pixels)
    dist_afpx = np.multiply(np.divide(sobel_x, sobel_norms, out=np.zeros(sobel_x.shape), where=sobel_norms != 0), radius)
    dist_afpx = np.round(dist_afpx).astype(int)
    dist_afpy = np.multiply(np.divide(sobel_y, sobel_norms, out=np.zeros(sobel_y.shape), where=sobel_norms != 0), radius)
    dist_afpy = np.round(dist_afpy).astype(int)

    for cords, sobel_norm in np.ndenumerate(sobel_norms):
        i, j = cords
        pos_aff_pix = (i + dist_afpx[i, j], j + dist_afpy[i, j])
        neg_aff_pix = (i - dist_afpx[i, j], j - dist_afpy[i, j])
        ori_img[pos_aff_pix] += 1
        ori_img[neg_aff_pix] -= 1
        mag_img[pos_aff_pix] += sobel_norm
        mag_img[neg_aff_pix] -= sobel_norm

    ori_img = ori_img[:h, :w]
    mag_img = mag_img[:h, :w]
    print("Did it go back to the original image size?")
    print(ori_img.shape == image.shape)
    # try normalizing ori and mag img
    return ori_img, mag_img

def get_sn(ori_img, mag_img, radius, kn, alpha):
    ori_img_limited = np.minimum(ori_img, kn)
    fn = np.multiply(np.divide(mag_img, kn), np.power((np.absolute(ori_img_limited)/kn), alpha))
    # convolve fn with a Gaussian filter
    sn = gaussian_filter(fn, 0.25*radius)
    return sn

def do_frst(image, radius, kn, alpha, ksize=3):
    ori_img, mag_img = get_proj_img(image, radius)
    sn = get_sn(ori_img, mag_img, radius, kn, alpha)
    return sn
Parameters:
radius = 50
kn = 10
alpha = 2
beta = 0
stdfactor = 0.25
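A minimal usage sketch with these parameters (the input filename, the peak-picking step, and the window size are my assumptions for illustration, not from the original post):
import cv2
from scipy.ndimage import maximum_filter

img = cv2.imread('cells.png', cv2.IMREAD_GRAYSCALE).astype(float)  # hypothetical input file
sn = do_frst(img, radius=50, kn=10, alpha=2)
# Seed points should show up as local maxima of the symmetry map sn.
peaks = (sn == maximum_filter(sn, size=100)) & (sn > 0)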

Plot over an image background in MATLAB

I'd like to plot a graph over an image. I followed this tutorial to Plot over an image background in MATLAB and it works fine:
% replace with an image of your choice
img = imread('myimage.png');
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
% make data to plot - just a line.
x = min_x:max_x;
y = (6/8)*x;
imagesc([min_x max_x], [min_y max_y], img);
hold on;
plot(x,y,'b-*','linewidth',1.5);
But when I apply the procedure to my study case, it doesn't work. I'd like to do something like:
I = imread('img_png.png'); % here I load the image
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000);
% with the last 3 lines I calculated the stream network on a geographic area using the TopoToolBox
imagesc(I);
hold on
plot(S)
The aim is to plot the stream network over the satellite image of the same area.
The only difference between the two examples that keeps the code from working is the plot line: in the first case plot(x,y) works, in the second plot(S) doesn't.
Thanks guys.
This is the satellite image, imagesc(I)
It is possible that the plot method of the STREAMobj performs its own custom plotting, including creating new figures and axes and toggling hold states. Because you can't easily control what that plot routine does, it's likely easier to flip the order of your plotting so that you plot your own data after the toolbox plots the STREAMobj. This way you have complete control over how your image is added.
% Plot the STREAMobj
hlines = plot(S);
% Make sure we plot on the same axes
hax = ancestor(hlines, 'axes');
% Make sure that we can add more plot objects
hold(hax, 'on')
% Plot your image data on the same axes
imagesc(I, 'Parent', hax)
Maybe I am preaching to the choir or overlooking something here, but the example you used actually mapped the image to the data range of the plot, hence the lines:
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
imagesc([min_x max_x], [min_y max_y], img);
whereas you plot your image directly:
imagesc(I);
If your data coordinates and your image coordinates are vastly different, you will only ever see one or the other.
Thanks guys, I solved in this way:
I = imread('orto.png'); % satellite image loading
DEM = GRIDobj('demF1.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000); % Stream network extraction
x = S.x; % [node attribute] x-coordinate vector
y = S.y; % [node attribute] y-coordinate vector
min_x = min(x);
max_x = max(x);
min_y = min(y);
max_y = max(y);
imagesc([min_x max_x], [min_y max_y], I);
hold on
plot(S);
Here's the resulting image: stream network over the satellite image
The stream network doesn't match the satellite image only because I'm temporarily using a different image and DEM.

Connect disjoint edges in binary image

I performed some operations on an image of a cube and obtained a binary image of the cube's edges, which are disconnected in some places. The image I obtained is shown below:
I want to join the sides to make it a closed figure. I have tried the following:
BW = im2bw(image,0.5);
BW = imdilate(BW,strel('square',5));
figure,imshow(BW);
But this only thickens the edges; it does not connect them. I have also tried bwmorph() and various other functions, but nothing works. Can anyone please suggest a function or steps to connect the edges? Thank you.
This could be one approach -
%// Read in the input image
img = im2bw(imread('http://i.imgur.com/Bl7zhcn.jpg'));
%// There seems to be white border, which seems to be non-intended and
%// therefore could be removed
img = img(5:end-4,5:end-4);
%// Thin input binary image, find the endpoints in it and connect them
im1 = bwmorph(img,'thin',Inf);
[x,y] = find(bwmorph(im1,'endpoints'));
shapeInserter = vision.ShapeInserter('Shape', 'Lines', 'BorderColor', 'White');
for iter = 1:numel(x)-1
    two_pts = int32([y(iter) x(iter) y(iter+1) x(iter+1)]);
    im1 = step(shapeInserter, im1, two_pts);
end
figure,imshow(im1),title('Thinned endpoints connected image')
%// Dilate the output image a bit
se = strel('diamond', 1);
im2 = imdilate(im1,se);
figure,imshow(im2),title('Dilated Thinned endpoints connected image')
%// Get a convex shaped blob from the endpoints connected and dilate image
im3 = bwconvhull(im2,'objects',4);
figure,imshow(im3),title('Convex blob corresponding to previous output')
%// Detect the boundary of the convex shaped blob and
%// "attach" to the input image to get the final output
im4 = bwboundaries(im3);
idx = im4{:};
im5 = false(size(im3));
im5(sub2ind(size(im5),idx(:,1),idx(:,2))) = 1;
img_out = img;
img_out(im5==1 & img==0)=1;
figure,imshow(img_out),title('Final output')
Debug images -
I used the above code as the basis for the following. I haven't tested it on many images, and it may not be as robust as the one above, but it executes faster. So I thought I would post it as a solution.
I = imread('image.jpg'); % your original image
I=im2bw(I);
figure,imshow(I)
I= I(5:end-4,5:end-4);
im1 = bwmorph(I,'thin',Inf);
[x,y] = find(bwmorph(im1,'endpoints'));
for iter = 1:numel(x)-1
    im1 = linept(im1, x(iter), y(iter), x(iter+1), y(iter+1));
end
im2=imfill(im1,'holes');
figure,imshow(im2);
BW = edge(im2);
figure,imshow(BW);
se = strel('diamond', 1);
im3 = imdilate(BW,se);
figure,imshow(im3);
The final result is this:
I got the "linept" function from here:http://in.mathworks.com/matlabcentral/fileexchange/4177-connect-two-pixels

Separating Background and Foreground

I am new to Matlab and to image processing as well. I am working on separating the background and foreground in images like this.
I have hundreds of images like this, found here. By trial and error I found a threshold (in RGB space): where the background is, the red layer is always less than 150 and the green and blue layers are greater than 150.
So if my RGB image is I, and my r, g and b layers are
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
then by finding the coordinates where the values are less than or greater than 150 in the red, green and blue layers respectively, I can get the coordinates of the background, like
[r1 c1] = find(redMatrix < 150);
[r2 c2] = find(greenMatrix > 150);
[r3 c3] = find(blueMatrix > 150);
Now I get the coordinates of thousands of pixels in r1, c1, r2, c2, r3 and c3.
My questions:
How do I find the common values, i.e. the coordinates of the pixels where red is less than 150 and green and blue are both greater than 150?
I would have to iterate over every coordinate in r1, c1 and check whether it also occurs in r2, c2 and r3, c3 to confirm it is a common point, but that would be very expensive.
Can this be achieved without a loop?
If I somehow came up with common points [commonR commonC], with commonR and commonC both of size 5000 x 1, then to access a background pixel of image I, I would have to index first commonR and then commonC, like
I(commonR(i,1),commonC(i,1))
which is expensive too. So again my question is: can this be done without a loop?
Any help would be appreciated.
I got the solution with @Science_Fiction's answer. Just elaborating on his/her answer, I used:
mask = I(:,:,1) < 150 & I(:,:,2) > 150 & I(:,:,3) > 150;
No loop is needed. You could do it like this:
I = imread('image.jpg');
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
J(:,:,1) = redMatrix < 150;
J(:,:,2) = greenMatrix > 150;
J(:,:,3) = blueMatrix > 150;
J = 255 * uint8(J);
imshow(J);
A greyscale image would also suffice to separate the background.
K = ((redMatrix < 150) + (greenMatrix > 150) + (blueMatrix > 150))/3;
imshow(K);
EDIT
I had another look, also using the other images you linked to.
Given the variance in background colors, I thought you would get better results by deriving a threshold value from the image histogram instead of hardcoding it.
Occasionally this algorithm is a little too aggressive, e.g. erasing part of the clothes together with the background. But I think over 90% of the images are separated pretty well, which is more robust than what you could hope to achieve with a fixed threshold.
close all;
path = 'C:\path\to\CUHK_training_cropped_photos\photos';
files = dir(path);
bins = 16;
for f = 3:numel(files)
    fprintf('%i/%i\n', f, numel(files));
    file = files(f);
    if isempty(strfind(file.name, 'jpg'))
        continue
    end
    I = imread([path filesep file.name]);
    % Take the histogram of the blue channel
    B = I(:,:,3);
    h = imhist(B, bins);
    h2 = h(bins/2:end);
    % Find the most common bin in the *upper half* of the histogram
    m = bins/2 + find(h2 == max(h2));
    % Set the threshold value somewhat below the value corresponding to that bin
    thr = m/bins - .25;
    BW = im2bw(B, thr);
    % Pad with ones to ensure background connectivity
    BW = padarray(BW, [1 1], 1);
    % Find connected regions in the BW image
    CC = bwconncomp(BW);
    L = labelmatrix(CC);
    % Crop back again
    L = L(2:end-1,2:end-1);
    % Set the largest region in the original image to white
    for c = 1:3
        channel = I(:,:,c);
        channel(L==1) = 255;
        I(:,:,c) = channel;
    end
    % Show the results with a pause every 16 images
    subplot(4,4,mod(f-3,16)+1);
    imshow(I);
    title(sprintf('Img %i, thr %.3f', f, thr));
    if mod(f-3,16)+1 == 16
        pause
        clf
    end
end
pause
close all;
Results:
Your approach seems basic but decent. Since for this particular image the background is composed mainly of blue, you can be crude and do:
mask = img(:,:,3) > 150;
This evaluates to true (1) where the blue channel exceeds 150, i.e. in the background, and false (0) elsewhere. You will have a black-and-white image, though.
imshow(mask);
To add colour back
mask3d(:,:,1) = mask;
mask3d(:,:,2) = mask;
mask3d(:,:,3) = mask;
img(mask3d) = 255;
imshow(img);
This should hopefully give you the colour image of the face with a pure white background. All this requires some trial and error.