MATLAB, algorithm for free surface detection in bubbly flow

I am trying to figure out an algorithm for detecting the free surface from a PIV image (see attached). The major problem is that in the flow under consideration gas bubbles are injected into the fluid; these rise due to buoyancy and tend to sit on top of the surface. I don't want these to be mistaken for the free surface (I actually want the 'second' edge underneath them), and I'm struggling to figure out how to include that in the algorithm.
Ideally, I want an array of x and y values representing coordinates of the free surface (like a continuous, smooth curve).
My initial approach was to scan the picture left to right, one column at a time: find an edge, move to the next column, and so on. That works reasonably well, but fails as soon as the bubbles appear and my 'edge' splits in two. So I am wondering if there is a more sophisticated way of going about it.
If anybody has any expertise in the area of image processing/edge detection, any advice would be greatly appreciated.
Typical PIV image
Desired outcome

I think you can actually solve the problem by using morphological methods.
A = imread('./MATLAB/ZBhAM.jpg');
figure;
subplot 131;
imshow(A)
subplot 132;
% use the red channel, scale it to [0,1], and binarize it
B = double(A(:,:,1));
B = B/255;
B = im2bw(B, 0.1);
imshow(B);
subplot 133;
% morphological opening (erosion followed by dilation) removes small speckle
st = strel('diamond', 5);
B = imerode(B, st);
B = imdilate(B, st);
imshow(B);
This gives the following result:
As you can see, this approach is not perfect, mostly because I picked an arbitrary value for the threshold in im2bw; if you use an adaptive threshold for each column of your image you should get something better (a per-column sketch follows below).
Otherwise, try to improve your lighting.
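A minimal sketch of the per-column idea, reusing the same frame as above; graythresh (Otsu) is used here as the per-column threshold estimator, which is my assumption rather than something from the original answer:
A = imread('./MATLAB/ZBhAM.jpg');
B = double(A(:,:,1))/255;
BW = false(size(B));
for c = 1:size(B,2)
    col = B(:,c);
    t = graythresh(col);              % Otsu threshold estimated from this column only
    BW(:,c) = col > t;
end
st = strel('diamond', 5);
BW = imdilate(imerode(BW, st), st);   % same opening as in the snippet above
figure; imshow(BW);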

Related

Draw a curve in the segmented image using matlab

I have a segmented image as shown here.
I want to fit a curve along the top pixels of the segmented image (shown as the red curve) and find the topmost point on that curve (shown in blue). I have already tried the basic idea of traversing each column from top to bottom and collecting the top point of each column. I want to know whether there is an easier solution, such as extracting the boundary pixels directly and finding the top point from them. I am using MATLAB for this problem.
%download the image
img = logical(imread('http://i.stack.imgur.com/or2iX.png'));
%for some reason it appeared RGB with big solid borders.
%to monochrome
img = img(:,:,1);
%remove borders
img = img(~all(img,2), ~all(img,1));
%split into columns
cimg = num2cell(img,1);
%find first nonzero element per column
ridx = cellfun(@(x) find(x,1,'first'), cimg);
figure, imshow(img)
hold on
%image dim1 is Y, dim2 is X
plot(1:size(img,2),ridx-1,'r','linewidth',2)
%find top point
[yval, xval] = min(ridx);
If you want a smoother curve, try polyfit/polyval.
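A minimal sketch, reusing img and ridx from the snippet above; the polynomial degree of 5 is an arbitrary choice you would need to tune:
x = 1:size(img,2);
p = polyfit(x, double(ridx), 5);                 % low-order polynomial fit of the top edge
hold on
plot(x, polyval(p, x), 'g', 'linewidth', 2)      % smoothed curve drawn over the image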
EDIT:
If we want the line to break at gaps between connected components, we can change the code to something like:
% linear indices of the top-edge pixels found above
bord_idx = sub2ind(size(img), ridx, 1:size(img,2));
% pixel lists of each connected component
regs = regionprops(bwlabel(img), 'PixelIdxList');
regs_idx = struct2cell(regs);
% number of top-edge pixels that fall in each component
split_step = cellfun(@(x) sum(ismember(bord_idx, x)), regs_idx);
split_step = split_step(split_step > 0);
% split the curve into one piece per component
split_yvals = mat2cell(ridx', split_step);
split_xvals = mat2cell((1:size(img,2))', split_step);
figure, imshow(img)
hold on
for k = 1:length(split_step)
    plot(split_xvals{k}, split_yvals{k}, 'r', 'linewidth', 2)
end
However, the result is not ideal if one region is positioned over the other. If the "shadowed" points are needed, you should try bwtraceboundary or convhull and find where the border turns downward.
As far as the "simplest MATLAB solution" goes, by which I think you mean built-in MATLAB functions: imclose() -> edge() -> bwboundaries() -> findpeaks() on each boundary -> filter the results based on the width and magnitude of the peaks. You will need to tune all the parameters of these functions; I am just listing what would get you there if appropriately applied.
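A rough sketch of that chain, using the same binarization as the first answer; the disk radius and the peak prominence are placeholders you would have to tune, and I let bwboundaries provide the boundary pixels instead of calling edge() separately:
A  = imread('./MATLAB/ZBhAM.jpg');
bw = im2bw(double(A(:,:,1))/255, 0.1);
bw = imclose(bw, strel('disk', 15));            % close gaps so bubbles merge with the bulk
bnd = bwboundaries(bw);                         % trace each connected boundary
figure; imshow(A); hold on
for k = 1:numel(bnd)
    y = bnd{k}(:,1);                            % row (vertical) coordinates of this boundary
    x = bnd{k}(:,2);
    if numel(y) < 3, continue; end
    % peaks of -y are upward bumps of the boundary; filter them by prominence
    [~, locs] = findpeaks(-y, 'MinPeakProminence', 20);
    plot(x(locs), y(locs), 'r*')                % candidate free-surface peaks
end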
As far as processing speed is concerned, I think I would have done exactly what you did: collect the top edge with a top-down column search and then look for the point of highest inflection. As soon as you start doing processing of any kind, you perform several operations per pixel, which quickly becomes more expensive than the initial search (this just requires that your image and target are simple enough).
That being said, here are some ideas that may help:
1: If you run a sufficiently heavy closing (dilate -> erode), that should fill in all that garbage at the bottom (a sketch follows after this list).
2: If you know that your point of interest is not at the left or right edge of the picture, you could take the left and right edge points and calculate a slope to apply as an offset that flattens the whole image.
3: If your image always has the large dark linear region below the peak as seen here, you could locate those edges with houghlines looking for verticals and then search only the columns between them.
4: If speed is a concern, you could use a more sophisticated search pattern than plain left to right, since your peak has a fairly good distribution around it, which could help with faster localization of the maximum.
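For idea 1, a minimal sketch combining a heavy closing with the column search described above; the binarization and the disk radius of 25 are guesses you would tune for your images:
A  = imread('./MATLAB/ZBhAM.jpg');
bw = im2bw(double(A(:,:,1))/255, 0.1);
bw = imclose(bw, strel('disk', 25));     % heavy closing fills in the garbage at the bottom
surf_y = nan(1, size(bw,2));
for c = 1:size(bw,2)
    r = find(bw(:,c), 1, 'first');       % top-down search on the cleaned mask
    if ~isempty(r), surf_y(c) = r; end
end
figure; imshow(A); hold on
plot(1:size(bw,2), surf_y, 'r', 'linewidth', 2)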

MATLAB image processing technique

I have this 3D array in MATLAB (V: vertical, H: horizontal, t: time frame).
The figures below show images obtained with the imagesc function after slicing the array along the t axis.
The black area represents the damaged region; the other area is intact.
Each frame looks similar but has a different amplitude.
I am trying to visualize only the defect area and get rid of the intact area.
I tried a thresholding method to remove the intact area, as below:
NewSet = zeros(450, 450, 200);
for kk = 1:200
    frame = uwpi(:,:,kk);
    STD = std(frame(:));
    Mean = mean(frame(:));
    for ii = 1:450
        for jj = 1:450
            if frame(ii, jj) > 2*STD + Mean
                NewSet(ii, jj, kk) = frame(ii, jj);
            else
                NewSet(ii, jj, kk) = NaN;
            end
        end
    end
end
However, since each frame has a different amplitude, the result becomes:
Is there any image processing method to get rid of intact area in this case?
Thanks in advance
You're thresholding based on mean and standard deviation, basically assuming your data is normally distributed and looking for outliers. But your model should try to distinguish values around zero (noise) from higher values. Your data is not normally distributed, so mean and standard deviation are not meaningful here.
Look up Otsu thresholding (the MATLAB Image Processing Toolbox has it). Its model does not perfectly match your data, but it might give reasonable results. Like most threshold estimation algorithms, it uses the image's histogram to determine the optimal threshold given some model.
Ideally you'd model the background peak in the histogram. You can find the mode, fit a Gaussian around it, and then cut off at 2 sigma. Or you can use the "triangle method", which finds the point along the histogram that is furthest from the line between the top of the background peak and the upper end of the histogram. It is a little more complex to explain, but trivial to implement. We have this implemented in DIPimage (http://www.diplib.org); the M-file code is visible, so you can see how it works (look for the function threshold).
Additionally, I'd suggest getting rid of the loops over x and y: you can write frame(frame < threshold) = NaN and then copy the whole frame back into NewSet in one operation.
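A sketch combining both suggestions for the loop in the question, with graythresh as the Otsu estimator; normalizing each frame to [0,1] first is my assumption, since graythresh expects values in that range:
NewSet = nan(450, 450, 200);
for kk = 1:200
    frame = uwpi(:,:,kk);
    fn = (frame - min(frame(:))) / (max(frame(:)) - min(frame(:)));   % scale this frame to [0,1]
    t = graythresh(fn);                 % Otsu threshold, estimated per frame
    frame(fn < t) = NaN;                % vectorized: no loops over x and y
    NewSet(:,:,kk) = frame;
end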
Do I understand the question correctly that the ROI is the dark border and everything it surrounds? If so, I'd recommend processing in 3D using some kind of region-growing technique such as watershed or active contours (snakes), with markers from imregionalmin. These methods should produce a segmentation even if the border has small holes. Then just copy the segmented object to a new 3D array via logical indexing.

High RMS error while "online" cv:stereoCalibration

I have two cameras set up horizontally (close to each other): a left camera cam1 and a right camera cam2.
First I calibrate the cameras (I want to use 50 pairs of images):
I calibrate both cameras separately using cv::calibrateCamera()
I calibrate stereo using cv::stereoCalibrate()
My questions:
In stereoCalibrate: I assume the order of the camera data is important. Should the data from the left camera be imagePoints1 and from the right camera imagePoints2, or vice versa, or does it not matter as long as the camera order is the same at every point of the program?
In stereoCalibrate: I get an RMS error around 15.9319 and an average reprojection error around 8.4536 when I use all images from the cameras. In the other case, where I first save the images and select only the pairs where the whole chessboard is visible (all of the chessboard's squares are in the camera view and every square is entirely visible), I get an RMS around 0.7. Does that mean that only offline calibration works well, and that if I want to calibrate the cameras I have to select good images manually? Or is there some way to do the calibration online? By online I mean that I start capturing views from the camera, find the chessboard corners in every view, and after stopping the capture I calibrate the camera.
I need only four distortion coefficients but I get five of them (including k3). In the old API version, cvStereoCalibrate2, I got only four values, but in cv::stereoCalibrate I don't know how to do this. Is it even possible, or is the only way to get five values and use only four of them later?
My code:
Mat cameraMatrix[2], distCoeffs[2];
distCoeffs[0] = Mat(4, 1, CV_64F);
distCoeffs[1] = Mat(4, 1, CV_64F);
vector<Mat> rvec1, rvec2, tvec1, tvec2;
double rms1 = cv::calibrateCamera(objectPoints, imagePoints[0], imageSize, cameraMatrix[0], distCoeffs[0],rvec1, tvec1, CALIB_FIX_K3, TermCriteria(
TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON));
double rms2 = cv::calibrateCamera(objectPoints, imagePoints[1], imageSize, cameraMatrix[1], distCoeffs[1],rvec2, tvec2, CALIB_FIX_K3, TermCriteria(
TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON));
qDebug()<<"Rms1: "<<rms1;
qDebug()<<"Rms2: "<<rms2;
Mat R, T, E, F;
double rms = cv::stereoCalibrate(objectPoints, imagePoints[0], imagePoints[1],
cameraMatrix[0], distCoeffs[0],
cameraMatrix[1], distCoeffs[1],
imageSize, R, T, E, F,
TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5),
CV_CALIB_FIX_INTRINSIC+
CV_CALIB_SAME_FOCAL_LENGTH);
I had a similar problem. Mine was that I was reading the left images and the right images while assuming that both lists were already sorted. Here is part of the code in Python; I fixed it by using sorted() in the second line.
images = glob.glob(path_left)
for fname in sorted(images):
    img = cv2.imread(fname)
    gray1 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the chess board corners
    ret, corners1 = cv2.findChessboardCorners(gray1, (n, m), None)
    # If found, add object points, image points (after refining them)
    if ret == True:
        i = i + 1
        print("Cam1. Chess pattern was detected")
        objpoints1.append(objp)
        cv2.cornerSubPix(gray1, corners1, (5, 5), (-1, -1), criteria)
        imgpoints1.append(corners1)
        cv2.drawChessboardCorners(img, (n, m), corners1, ret)
        cv2.imshow('img', img)
        cv2.waitKey(100)
The only reason why the order of the cameras/image sets is important is the rotation and translation you get from the stereoCalibrate function. The image set you pass to the function first is taken as the base, so the rotation and translation you get describe how the second camera is translated and rotated with respect to the first. Of course you can just reverse the result, which is the same as switching the image sets. This holds only if the images in both sets correspond to each other (their order).
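Concretely, if (R, T) maps points from the first camera's coordinate frame into the second's, then switching the sets corresponds to the inverse rigid transform: R' = R^T and T' = -R^T * T. (This is just the standard inverse of a rigid transform, not anything OpenCV-specific.)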
This is a bit tricky, but there are a few reasons why you are getting this big RMS error.
First, I'm not sure how you detect your chessboard corners, but if the whole chessboard is not visible and you provide a valid chessboard model, findChessboardCorners should return false because it does not detect the chessboard. So you are able to omit these "chessless" images automatically (i.e. online). Of course you also have to throw away the corresponding image from the second camera, even if that one is valid, to preserve the correct order in both sets.
The second option is to back-project all corners for each image and calculate the reprojection error for each image separately (not only for the whole calibration). Then you can select, for example, the best 3/4 of the images by this error and recalculate the calibration without the outliers.
Another reason could be the time synchronization between snapping images from the two cameras. If the delay is big and you move the chessboard continuously, you are actually trying to match projections of a slightly translated chessboard.
If you want a robust online version, I'm afraid you will end up with the second option, as it also helps you get rid of blurred images, wrong detections due to lighting conditions, and so on. You just need to set the threshold (how many images you cut off as outliers) carefully so as not to throw away valid data.
I'm not that sure in this field, but I would say you can calculate all five and use only four, because it looks like you are just cutting off higher-order terms of the Taylor series. But I cannot guarantee that's true.

Detect black dots from color background

My short question
How can I detect the black dots in the following images? (I have pasted only one test image to keep the question compact. More images can be found →here←.)
My long question
As shown above, the background color is roughly blue and the dots are "black". If I pick one black pixel and measure its color in RGB, the value can be (0, 44, 65) or (14, 69, 89), and so on. Therefore, we cannot set a fixed range to tell whether a pixel is part of a black dot or of the background.
I have tested 10 images with different colors, but I hope to find a method that detects the black dots against a more complicated background which may be made up of three or more colors, as long as human eyes can identify the black dots easily. Some extremely small or blurred dots can be omitted.
Previous work
Last month I asked a similar question on Stack Overflow, but did not get a perfect solution, though there were some excellent answers. You can find more details about my work there if you are interested.
Here are the methods I have tried:
Converting to grayscale or to the brightness of the image. The difficulty is that I cannot find an adaptive threshold for binarization. Obviously, turning a color image into grayscale or using the brightness (the V channel of HSV) loses much useful information. Otsu's algorithm, which calculates an adaptive threshold, does not work either.
Calculating an RGB histogram. In my last question, natan's method estimates the black color from the histogram. It is time-saving, but finding an adaptive threshold is again a problem.
Clustering. I have tried k-means clustering and found it quite effective for a background with only one color. The shortcoming (see my own answer) is that I need to set the number of cluster centers in advance, but I don't know what the background will look like. What's more, it is too slow! My application does real-time capture on an iPhone, and it can currently process only 7~8 frames per second using k-means (20 FPS would be good, I think).
Summary
I think not only similar colors but also adjacent pixels should be "clustered" or "merged" in order to extract the black dots. Please guide me to a proper way of solving my problem. Any advice or algorithm will be appreciated. There is no free lunch, but I hope for a better trade-off between cost and accuracy.
I was able to get some pretty nice first pass results by converting to HSV color space with rgb2hsv, then using the Image Processing Toolbox functions imopen and imregionalmin on the value channel:
rgb = imread('6abIc.jpg');
hsv = rgb2hsv(rgb);
openimg = imopen(hsv(:, :, 3), strel('disk', 11));
mask = imregionalmin(openimg);
imshow(rgb);
hold on;
[r, c] = find(mask);
plot(c, r, 'r.');
And the resulting images (for the image in the question and one chosen from your link):
You can see a few false positives and missed dots, as well as some dots that are labeled with multiple points, but a few refinements (such as modifying the structure element used in the opening step) could clean these up some.
I was curious to test my old 2D peak finder code on the images, without any threshold or any color considerations; really crude, don't you think?
im0 = imread('Snap10.jpg');
im = abs(255 - im0);                       % invert so the dark dots become bright peaks
d = rgb2gray(im);
filter = fspecial('gaussian', 16, 3.5);    % smoothing kernel roughly the size of a dot
p = FastPeakFind(d, 0, filter);            % the author's own peak finder, not a built-in
imagesc(im0); hold on
plot(p(1:2:end), p(2:2:end), 'r.')         % p holds interleaved x,y peak coordinates
The code I'm using is a simple 2D local-maxima finder. There are some false positives, but all in all this captures most of the points with no duplication. The filter I was using was a 2D Gaussian with a width and std similar to a typical blob (the best would have been to use a matched filter for your problem).
A more sophisticated version that does treat the colors (rgb2hsv?) could improve this further...
Here is an extraordinarily simplified version that can be extended to full RGB, and it also does not use the Image Processing Toolbox. Basically, you can do a 2-D convolution with a filter image (an example of the dot you are looking for); the points where the convolution returns the highest values are the best matches for the dots. You can then of course threshold that. Here is a simple binary-image example of just that.
% creating a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;
% randomly choose the locations to put those crosses
points = randperm(numel(im));
% keep only a certain number of points
points = points(1:numPoints);
% get the rows and columns (x,y)
[xVals,yVals] = ind2sub(size(im),points);
for ii = 1:numel(points)
    x = xVals(ii);
    y = yVals(ii);
    try
        % create the crosses; the try statement is here to prevent index out of bounds
        % not necessarily the best practice but whatever, it is only for demonstration
        im(x,y) = 1;
        im(x+1,y) = 1;
        im(x-1,y) = 1;
        im(x,y+1) = 1;
        im(x,y-1) = 1;
    catch err
    end
end
% display the randomly generated image
imshow(im)
% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)
% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');
% get the number of white pixels in the filter
filSum = sum(filter(:));
% look for all points in the convolution result that matched the filter exactly
matches = find(result == filSum);
% validate that all the points were found
sort(matches(:)) == sort(points(:))
% get x and y coordinates of the matches
[xMatch,yMatch] = ind2sub(size(im),matches);
I would highly suggest looking at the conv2 documentation on MATLAB's website.

Measure real object from image

I am preparing my assignment. It's a bit freaky, as is our teacher :D. Okay, the job is simple. There will be a white cloth hung vertically, and a person will stand in front of it. The distance of the man from the cloth is 3 feet. The shadow of the person will be captured with a mid-resolution (say 1600 x 1200) camera. The image (img01.jpg) from this camera is my input. I have to measure the man's body from the image, I mean parts of the body, with 80 to 90 percent accuracy. The desired output is a set of lengths (in centimeters):
A = ?
B = ?
C = ?
D = ?
E = ?
Just as in the picture:
I don't know what type of algorithm is needed here, and I don't want to ask my freaky Sir. Great hearts here are requested to help me. Do not ask me for my code, as I don't have any yet. I don't need code; rather, I need an algorithm to do the job.
Thanks in advance.
Find the number of pixels between the points and multiply by the number of centimeters per pixel, based on how far the subject is from the camera.
A possible algorithm would be, given a vertical offset y, to find the distance (in pixels) between the first colored (or in your case, black) pixel and the last one on the same line y. Then you can apply whatever unit conversion you deem convenient, once you fix the scale between pixels and your real-world measure. This works assuming, as in your example, that the distance is measured horizontally and not diagonally on the figure (a sketch follows below).
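A minimal MATLAB sketch of that idea, assuming the shadow has already been segmented into a binary mask bw and that the cm-per-pixel factor has been calibrated separately (for example from an object of known size photographed at the same distance); both the factor and the row index below are placeholders:
cm_per_pixel = 0.1;                      % placeholder: calibrate this for your setup
y = 600;                                 % row (vertical offset) at which to measure
row = bw(y, :);                          % one horizontal line of the silhouette mask
left  = find(row, 1, 'first');           % first shadow pixel on this line
right = find(row, 1, 'last');            % last shadow pixel on this line
width_cm = (right - left) * cm_per_pixel % horizontal extent of the shadow at row y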
