I have to develop an Intensity Meter, which will basically display the average intensity level of a uniformly exposed image. As the focus of the camera changes, the pointer moves to the correct value on the following display:
Along with that, a text field will display the exact intensity value in digits as well, as can be seen in the figure.
So far I have been able to capture the image and store its intensity values in a text file. Now I have to develop this animated display. I have never done any animation with OpenCV, so I am looking for ideas on how to approach this kind of animation using OpenCV.
Any pointers here?
My complete application is based on Windows Forms (C++), and I am using OpenCV for other image processing tasks (not listed here).
Note: The meter will basically show the average intensity level of a uniformly exposed area.
Update
I have made some progress here. With some research, I now know the algorithm:
Theta = A * I
Where,
Theta = angle at which the pointer is rotated
A = proportionality factor
I = intensity level
The angle of rotation is directly proportional to the intensity level. For example, if the dial spans 120 degrees and the intensity runs from 0 to 255, then A = 120 / 255 ≈ 0.47 degrees per intensity unit.
So here is how I plan to proceed:
1- Create a new window using cvNamedWindow
2- Display the static part of the image, i.e. the dial in the figure, which never changes
3- Display a vertical pointer pointing at the middle
4- Get the value of the intensity level I, and apply the formula above to calculate Theta.
5- Based on the angle calculated above, rotate the pointer using an OpenCV function.
Can someone verify my understanding? In particular, what is the fastest function for rotating the pointer? Please let me know if the approach can be improved further.
I'll suggest two options.
Use a library: something like WxIndustrialControls which I linked in the comments.
If you really want to draw things in OpenCV, please post the code that you have so far, so that answers can focus on the part that has you stuck.
I haven't done drawing in OpenCV, but I think I found the parts you would need:
simple image load (e.g. imread or cvLoadImage)
drawing functions (e.g. fillPoly to draw your needle)
seems like you can update images quickly using a simple cvShowImage loop, as this person did with a camera feed
an equation that converts your input intensity value to a set of points for fillPoly
Please update the question with more details about which parts you are stuck on. Sorry if I linked to multiple versions of the OpenCV documentation; it shouldn't matter for these fundamental functions.
update for the equation:
I haven't tested this at all, but it seemed like something fun to come up with:
Assumptions:
left of center ==> 180 deg
right of center ==> 0 deg
image origin is top left
pixel order is [column, row] (i.e. [x, y])
Configuration Variables:
min_v ==> the value to go on the left edge of the dial
max_v ==> the value to go on the right edge of the dial
arc_size ==> the number of degrees at the top of a circle that you want to use for your intensity range
radius ==> the distance from the top of your dial to the center of a virtual circle (larger radius ==> flatter dial)
top_padding ==> the distance from the top of the static image to the needle when pointing straight up
needle_l ==> the length of the needle / polygon you want drawn
Input Variables:
needle_v ==> value for the needle to represent
Output Variables:
needle_end ==> the outer point of the needle in terms of your image coordinates
needle_start ==> the inner point of the needle in terms of your image coordinates
Calculation:
for readability:
left_a = 90 + (arc_size / 2) (angle for min_v)
right_a = 90 - (arc_size / 2) (angle for max_v)
value_range = (max_v - min_v)
get the angle for the needle
value_proportion = (needle_v - min_v) / value_range
needle_a = left_a - (value_proportion * arc_size)
get the virtual center of the circle (e.g. the center for the one in your example image would be outside the image)
dial_origin = [int(image_width / 2) , top_padding + radius]
get the needle start and end points
needle_end[x] = dial_origin[x] + radius * cos(needle_a)
needle_end[y] = dial_origin[y] - radius * sin(needle_a)
needle_start[x] = dial_origin[x] + (radius - needle_l) * cos(needle_a)
needle_start[y] = dial_origin[y] - (radius - needle_l) * sin(needle_a)
(with image y growing downward and angles measured from the positive x-axis, these signs put min_v on the left edge and max_v on the right)
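Putting the above together, here is an untested sketch of the needle-endpoint calculation, written in MATLAB for concreteness since the rest of this page leans on MATLAB; every numeric value below is a made-up placeholder, and a C++/OpenCV port just needs cos/sin in radians:
% all values below are placeholders, not from the question
min_v = 0;  max_v = 255;           % dial value range
arc_size = 120;                    % degrees of dial arc in use
radius = 400;  top_padding = 20;   % virtual-circle geometry
needle_l = 80;                     % needle length in pixels
image_width = 640;
needle_v = 128;                    % input intensity to display

left_a = 90 + arc_size / 2;                              % angle for min_v
value_proportion = (needle_v - min_v) / (max_v - min_v);
needle_a = left_a - value_proportion * arc_size;         % needle angle, degrees

dial_origin = [round(image_width / 2), top_padding + radius];
% cosd/sind take degrees; minus on y because image y grows downward
needle_end   = [dial_origin(1) + radius * cosd(needle_a), ...
                dial_origin(2) - radius * sind(needle_a)];
needle_start = [dial_origin(1) + (radius - needle_l) * cosd(needle_a), ...
                dial_origin(2) - (radius - needle_l) * sind(needle_a)];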
Related
I have a segmented image as shown here
I want to fit a curve along the top pixels of the segmented image (shown as the red curve) and find the top point along that curve (shown in blue). I have already worked on the basic idea of traversing from top to bottom and collecting the top point along each column. I want to know whether there is an easier solution, like directly extracting the boundary pixels and finding the top point. I am using MATLAB for this problem.
%download the image
img = logical(imread('http://i.stack.imgur.com/or2iX.png'));
%for some reason it appeared RGB with big solid borders.
%to monochrome
img = img(:,:,1);
%remove borders
img = img(~all(img,2), ~all(img,1));
%split into columns
cimg = num2cell(img,1);
%find first nonzero element per column
ridx = cellfun(@(x) find(x,1,'first'), cimg);
figure, imshow(img)
hold on
%image dim1 is Y, dim2 is X
plot(1:size(img,2),ridx-1,'r','linewidth',2)
%find top point
[yval, xval] = min(ridx);
If you want a smoother curve, try polyfit/polyval
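For instance, an untested sketch (the polynomial order is an arbitrary choice):
x = 1:size(img,2);
c = polyfit(x, double(ridx), 5);   % 5th-order fit; tune the order to taste
plot(x, polyval(c, x), 'g', 'linewidth', 2)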
EDIT
If you want the line to break at the gaps between connected components, change the code to something like:
bord_idx = sub2ind(size(img), ridx, 1:size(img,2));   % linear indices of the top-edge pixels
regs = regionprops(bwlabel(img), 'PixelIdxList');
regs_idx = struct2cell(regs);
split_step = cellfun(@(x) sum(ismember(bord_idx, x)), regs_idx);  % edge pixels per region
split_step = split_step(split_step > 0);
split_yvals = mat2cell(ridx', split_step);            % split the edge at region boundaries
split_xvals = mat2cell((1:size(img,2))', split_step);
figure, imshow(img)
hold on
for k = 1:length(split_step)
    plot(split_xvals{k}, split_yvals{k}, 'r', 'linewidth', 2)
end
However, the result is not ideal if one region is positioned over the other. If the "shadowed" points are needed, you should try bwtraceboundary or convhull and find where the border turns down.
As far as "simplest matlab solution" by which I think you mean built in matlab functions: imclose()->edge()->bwboundaries()->findpeaks()'on each boundary'->'filter results based on width and magnitude of peaks'. *you will need to tune all the parameters in these functions, I am just listing what would get you there if appropriately applied.
As far as processing speed is concerned, I think I would have done exactly what you did: collect the top edge with a top-down column search and then look for the point of highest inflection. As soon as you start doing processing of any kind, you start doing several operations per pixel, which quickly becomes more expensive than your initial search (provided your image and target are simple enough).
That being said, here are some ideas that may help:
1: If you run a sufficiently heavy closing (dilate -> erode), it should fill in all that garbage at the bottom.
2: If you know that your point of interest is not at the left or right boundary of the picture, you could take the left and right edge points and calculate a slope to apply as an offset, flattening the whole image.
3: If your image always has the large dark linear region below the peak, as seen here, you could locate those edges with houghlines looking for verticals and then search only the columns between them.
4: If speed is a concern, you could use a more sophisticated search pattern than left-to-right, since your peak has a fairly good distribution around it, which could help localize the maximum faster.
I'm working on a PIV workflow and I'm currently pre-processing the images. I need to get rid of the perspective distortion in the images. I have the Image Processing Toolbox and the Camera Calibrator. I already got rid of the lens distortion with undistortImage() and the cameraParams object, which is inferred from a chessboard pattern.
First Question: Is it possible to use the cameraParams object to warp the image perspectively, so that my chessboard is rectified in the image?
Second Question: Since I was not able to use the cameraParams object, I tried to use the transformation functions manually. I used pairs of control points (picked with the cpselect tool from the original image and a generated chessboard image) and the fitgeotrans(movingPoints, fixedPoints, 'projective') function to get my tform object. However, I always get the error message:
Error using fitgeotrans>findProjectiveTransform (line 189)
At least 4 non-collinear points needed to infer projective transform.
Error in fitgeotrans (line 102)
tform = findProjectiveTransform(movingPoints,fixedPoints);
I have tried many different sets of control points (4 pairs or more), but I still get this error. I believe I must be overlooking something here.
Any help is appreciated, thank you.
Stephan
If you are using one of the calibration images, then all the information you need is in the cameraParams object.
Let's say you are using calibration image 1, and let's call it I.
First, undistort the image:
I = undistortImage(I, cameraParams);
Get the extrinsics (rotation and translation) for your image:
R = cameraParams.RotationMatrices(:,:,1);
t = cameraParams.TranslationVectors(1, :);
Then combine rotation and translation into one matrix:
R(3, :) = t;
Now compute the homography between the checkerboard and the image plane:
H = R * cameraParams.IntrinsicMatrix;
Transform the image using the inverse of the homography:
J = imwarp(I, projective2d(inv(H)));
imshow(J);
You should see a "bird's eye" view of the checkerboard. If you are not using one of the calibration images, then you can compute R and t using the extrinsics function.
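For that case, a minimal untested sketch (squareSize is a placeholder you would take from your calibration target):
[imagePoints, boardSize] = detectCheckerboardPoints(I);          % board corners in the image
worldPoints = generateCheckerboardPoints(boardSize, squareSize); % same corners in world units
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams);     % rotation and translation for this view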
Another way to do this is to use detectCheckerboardPoints and generateCheckerboardPoints, and then compute the homography using fitgeotrans.
My short question
How can I detect the black dots in the following images? (I paste only one test image to keep the question compact. More images can be found →here←.)
My long question
As shown above, the background color is roughly blue, and the dots' color is "black". If we pick one black pixel and measure its color in RGB, the value can be (0, 44, 65) or (14, 69, 89)... Therefore, we cannot set a fixed range to tell whether a pixel is part of a black dot or of the background.
I tested 10 images of different colors, and I hope to find a method that can detect the black dots against more complicated backgrounds made up of three or more colors, as long as human eyes can identify the black dots easily. Some extremely small or blurry dots can be omitted.
Previous work
Last month I asked a similar question on Stack Overflow but did not get a perfect solution, some excellent answers though. You can find more details about my work there if you are interested.
Here are the methods I have tried:
Converting to grayscale or to the brightness of the image. The difficulty is that I cannot find an adaptive threshold for binarization. Obviously, converting a color image to grayscale or using the brightness (the V of HSV) loses much useful information. Otsu's algorithm, which computes an adaptive threshold, does not work either.
Calculating the RGB histogram. In my last question, natan's method estimates the black color from the histogram. It is time-saving, but the adaptive threshold is again a problem.
Clustering. I have tried k-means clustering and found it quite effective for backgrounds with only one color. The shortcoming (see my own answer) is that I need to set the number of cluster centers in advance, but I don't know what the background will be like. What's more, it is too slow! My application captures in real time on an iPhone, and it currently processes 7~8 frames per second using k-means (20 FPS would be good, I think).
Summary
I think not only similar colors but also adjacent pixels should be "clustered" or "merged" in order to extract the black dots. Please point me to a proper way to solve this problem. Any advice or algorithm will be appreciated. There is no free lunch, but I hope for a better trade-off between cost and accuracy.
I was able to get some pretty nice first pass results by converting to HSV color space with rgb2hsv, then using the Image Processing Toolbox functions imopen and imregionalmin on the value channel:
rgb = imread('6abIc.jpg');
hsv = rgb2hsv(rgb);
% grayscale opening flattens bright detail; the dark dots survive as
% regional minima of the opened value channel
openimg = imopen(hsv(:, :, 3), strel('disk', 11));
mask = imregionalmin(openimg);
imshow(rgb);
hold on;
[r, c] = find(mask);
plot(c, r, 'r.');
And the resulting images (for the image in the question and one chosen from your link):
You can see a few false positives and missed dots, as well as some dots that are labeled with multiple points, but a few refinements (such as modifying the structure element used in the opening step) could clean these up some.
I was curious to test my old 2D peak finder code on the images, without any threshold or any color considerations. Really crude, don't you think?
im0 = imread('Snap10.jpg');
im = abs(255 - im0);                      % invert so dark dots become bright peaks
d = rgb2gray(im);
filter = fspecial('gaussian', 16, 3.5);   % smoothing kernel sized like a typical dot
p = FastPeakFind(d, 0, filter);           % second argument is the intensity threshold
imagesc(im0); hold on
plot(p(1:2:end), p(2:2:end), 'r.')        % p interleaves x and y coordinates
The code I'm using is a simple 2D local-maxima finder. There are some false positives, but all in all it captures most of the points with no duplication. The filter I used was a 2D Gaussian with width and std similar to a typical blob (the best would have been a matched filter for your problem).
A more sophisticated version that treats the colors (rgb2hsv?) could improve this further...
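For instance, an untested variant of the same call, run on the inverted HSV value channel instead of the inverted grayscale:
hsv = rgb2hsv(im0);
d = uint8(255 * (1 - hsv(:,:,3)));   % dark dots become bright peaks in the value channel
p = FastPeakFind(d, 0, filter);      % same call pattern as above
imagesc(im0); hold on
plot(p(1:2:end), p(2:2:end), 'g.')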
Here is an extraordinarily simplified version that can be extended to full RGB, and it does not use the Image Processing Toolbox. Basically, you can do 2-D convolution with a filter image (an example of the dot you are looking for), and the points where the convolution returns the highest values are the best matches for the dots. You can then, of course, threshold that. Here is a simple binary image example of just that.
%creating a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;
% randomly chose the location to put those crosses
points = randperm(numel(im));
% keep only certain number of points
points = points(1:numPoints);
% get the row and column subscripts (here called x and y)
[xVals,yVals] = ind2sub(size(im),points);
for ii = 1:numel(points)
x = xVals(ii);
y = yVals(ii);
try
% create the crosses, try statement is here to prevent index out of bounds
% not necessarily the best practice but whatever, it is only for demonstration
im(x,y) = 1;
im(x+1,y) = 1;
im(x-1,y) = 1;
im(x,y+1) = 1;
im(x,y-1) = 1;
catch err
end
end
% display the randomly generated image
imshow(im)
% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)
% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');
% get the number of white pixels in filter
filSum = sum(filter(:));
% look for all points in the convolution results that matched identically to the filter
matches = find(result == filSum);
%validate all points found
sort(matches(:)) == sort(points(:))
% get x and y coordinate matches
[xMatch,yMatch] = ind2sub(size(im),matches);
I would highly suggest looking at the conv2 documentation on MATLAB's website.
I want to process an image in MATLAB.
The image consists of a solid background and two specimens (top and bottom side). I already have code that separates the top and bottom and makes them two images. The part I can't get working is cropping the image to the glued area only (the red box in the image; I've only marked the top one). The cropped image should be a rectangle just like the red box (the yellow background inside it can be discarded afterwards).
I know this can be done with imcrop, but that requires manual input from the user. The code needs to be automated so that more images can be processed without user input. All images will have the same colors (red for glue, black for material).
Can someone help me with this?
edit: Thanks for the help. I used the following code to solve the problem. However, I couldn't get rid of the black part to the right of the red box. This can be fixed by taping that part off before taking the pictures. The code I used looks a bit weird, but it succeeds in counting the black region in the picture and getting a percentage.
a = imread('testim0.png');
level = graythresh(a);
bw2 = im2bw(a, level);
rgb2 = bw2rgb(bw2);              % bw2rgb is a custom helper, not a built-in
IM2 = imclearborder(rgb2, 4);
pic_negative = ait_imgneg(IM2);  % ait_imgneg is a custom helper, not a built-in
%% figures
% figure()
% image(rgb2)
%
% figure()
% imshow(pic_negative)
%% Counting percentage
g=0;
for j=1:size(rgb2,2)
for i=1:size(rgb2,1)
if rgb2(i,j,1) <= 0 ...
& rgb2(i,j,2) <= 0 ...
& rgb2(i,j,3) <= 0
g=g+1;
end
end
end
h=0;
for j=1:size(pic_negative,2)
for i=1:size(pic_negative,1)
if pic_negative(i,j)== 0
h=h+1;
end
end
end
per=g/(g+h)
If anyone has suggestions to improve the code, I'm happy to hear them.
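For what it's worth, the two counting loops above can be collapsed into vectorized one-liners; an untested equivalent:
g = nnz(all(rgb2 <= 0, 3));     % pixels that are black in all three channels
h = nnz(pic_negative == 0);     % zero pixels in the negative image
per = g / (g + h)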
For a straightforward segmentation of the image into 2 regions (background, foreground) based on color (yellow and black are prominent in your case), one option is clustering the image color values using the k-means algorithm. For additional robustness you can transform the image from RGB to the L*a*b* color space.
The example for your case follows the MATLAB Image Processing Toolbox example here.
% read and transform to L*a*b* space
im_rgb = double(imread('testim0.png')) ./ 255;
im_lab = applycform(im_rgb, makecform('srgb2lab'));
% keep only the a,b channels and form the feature vector
ab = double(im_lab(:,:,2:3));
[nRows, nCols, ~] = size(ab);
ab = reshape(ab, nRows * nCols, 2);
% apply k-means for 2 regions, repeat c times, e.g. c = 5
nRegions = 2;
[cluster_idx, cluster_center] = kmeans(ab, nRegions, 'Replicates', 5);
% get the foreground-background mask
im_regions = reshape(cluster_idx, nRows, nCols);
You can use the resulting binary image to index the regions of interest (or find the boundary) in the original reference image.
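For example, an untested follow-up that crops the reference image to the bounding box of one cluster (which label is foreground varies between k-means runs, so check the mask first):
mask = (im_regions == 1);                 % assume cluster 1 is the region of interest
stats = regionprops(mask, 'BoundingBox');
bb = round(stats(1).BoundingBox);         % [x y width height]; take the largest region if several
cropped = im_rgb(bb(2):bb(2)+bb(4)-1, bb(1):bb(1)+bb(3)-1, :);
imshow(cropped)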
Images are stored as matrices. If you know the bounds in pixels of the box you want to crop, you can perform the crop using indexing.
M = rand(100); % create a 100x100 matrix or load it from an image
left = 50;
right = 75;
top = 80;
bottom = 10;
croppedM = M(bottom:top, left:right); % rows index y (downward), columns index x
% save croppedM
You can easily get the unknown crop bounds by
1) Contour plotting the image,
2) find() on the result for the max/min X/ys,
3) use #slayton's method to perform the actual crop.
EDIT: Just looked at your actual image; it won't be so easy. But color-enhance/threshold your image first, and the contours should work with reasonable accuracy. Needless to say, this requires tweaking to your specific situation.
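A rough, untested sketch of that threshold-then-find idea (the channel cutoffs are placeholders to tune):
rgb = imread('testim0.png');     % file name from the question
% red glue: strong R channel, weaker G and B
redMask = rgb(:,:,1) > 150 & rgb(:,:,2) < 100 & rgb(:,:,3) < 100;
[rows, cols] = find(redMask);
cropped = rgb(min(rows):max(rows), min(cols):max(cols), :);
imshow(cropped)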
Since you've already been able to separate the top and bottom, and also to segment the region you want (including the small part on the right side that you don't want), I propose you just add a fix at the end of your code, as follows.
After segmentation, sum the blue intensity values in each column, compressing the image from 2-D to 1-D. So if the original region is width = 683, height = 59, the new matrix/image will simply be width = 683, height = 1.
Now you can apply a small threshold to determine where the edge should lie, and crop the image at that location. Then you get your stats.
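An untested sketch of this column-sum trick (the 0.5 threshold fraction is a placeholder, and 'top_region.png' is a hypothetical name for the segmented top strip):
region = imread('top_region.png');
colSum = sum(double(region(:,:,3)), 1);      % 1 x width profile of the blue channel
% swap in whichever channel best separates glue from the unwanted black part
glueCols = find(colSum > 0.5 * max(colSum));
cropped = region(:, glueCols(1):glueCols(end), :);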
I am trying to generate the following "effect" from a basic shape in MATLAB:
But I don't even know what this process is called. Let's say I have an image containing the brown shape; what I want is to generate the contours outside of it, which get smoother as they get bigger.
Is there a name for this effect, a function that does this in MATLAB, or an algorithm that does it from scratch?
thanks
I think you are looking for bwdist.
The image you are displaying looks like the positive part of a distance function from the boundary of your shape. You can compute this easily in MATLAB using the examples on the aforementioned manual page.
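A minimal untested sketch, assuming the shape is nonzero in a binary mask ('brown_image.png' is a hypothetical file name):
I = imread('brown_image.png');
bw = rgb2gray(I) > 0;                    % 1 inside the shape
D = bwdist(bw);                          % distance of each outside pixel to the shape
imagesc(D); axis image; colormap(jet)    % smooth halo growing from the boundary
% for discrete contour bands instead of a smooth ramp:
% imagesc(mod(floor(D / 10), 2)); axis image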
Try this:
I = imread('brown_image.png');
I_bw = (rgb2gray(I) > 0); % or whatever, just so I_bw is 1 in the 'brown' region
r = 10;
se1 = strel('disk', r);
se2 = strel('disk', r-1);
imshow(imdilate(I_bw, se1) - imdilate(I_bw, se2))
Requires the Image Processing Toolbox, but the basic idea is to dilate the image twice with structuring elements whose radii differ by 1 (or by however thick you want the contours to be) and subtract the smaller result from the bigger one. You could then color the rings however you want.
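To get several rings at once, a hedged extension of the same idea that loops over radii (step and range are arbitrary):
bands = zeros(size(I_bw));
for r = 5:5:40
    ring = imdilate(I_bw, strel('disk', r)) - imdilate(I_bw, strel('disk', r - 1));
    bands = bands + r * ring;   % weight by radius so outer rings get distinct values
end
imagesc(bands); axis image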