How do I normalize the signal generated from wfbm in MATLAB?

I am using MATLAB's wavelet-based fractional Brownian motion function (wfbm) to generate 1D point-like trajectories of a diffusive particle in three physical regimes: sub-diffusion, super-diffusion, and normal diffusion.
The problem I encounter is that the time normalization/variance is off.
For example, for a Hurst parameter equal to 0.5 (regular Brownian motion) I get a standard deviation of the increments which isn't unity:
>> std(diff(wfbm(0.5,1e6)))
ans =
0.3955
Because of this, I am not sure how to re-normalize the three trajectories I create for the three diffusion cases (sub, super, normal).
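For reference (an editorial note, not part of the original post): exact fBm with Hurst exponent H satisfies Var[B_H(t)] = sigma^2 * |t|^(2H), so the increments over a unit time step have standard deviation sigma for every H. The 0.3955 above therefore indicates that wfbm uses its own internal scale, and that scale is what needs to be undone.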
I generated trajectories for N pointlike particles of length M:
M=500;
N=200;
nd = zeros(M,N);
sub = zeros(M,N);
sup = zeros(M,N);
Hsub = 0.25;
Hsup = 0.75;
for j = 1:N
    nd(:,j)  = wfbm(0.5,  M, 15, 'db10');  % normal diffusion  (H = 0.50)
    sub(:,j) = wfbm(Hsub, M, 10, 'db10');  % sub-diffusion     (H = 0.25)
    sup(:,j) = wfbm(Hsup, M, 10, 'db10');  % super-diffusion   (H = 0.75)
end
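One pragmatic workaround (my suggestion, not from the original post) is to rescale each path so that its unit-lag increments have unit sample variance. Since exact fBm increments over a unit lag have variance sigma^2 regardless of H, the same empirical rescaling applies to all three regimes; a minimal sketch:
% empirical renormalization: unit-lag increments get unit sample variance
renorm = @(x) cumsum([0; diff(x(:))/std(diff(x(:)))]);
for j = 1:N
    nd(:,j)  = renorm(nd(:,j));
    sub(:,j) = renorm(sub(:,j));
    sup(:,j) = renorm(sup(:,j));
end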
Here is how the function is implemented in MATLAB to generate the signal; however, I am not sure how to modify it to obtain a properly normalized Brownian motion:
tmp = conv(randn(1,len+nbmax),ckbeta);  % fractionally filtered white noise
tmp = cumsum(tmp);
CA = wkeep(tmp,len,'c');                % coarsest approximation coefficients
for j = 0:nblev-1
    CD = 2^(j/2)*4^(-s)*2^(-j*s)*randn(1,len);  % detail coefficients; the scale s is derived from H
    len = 2*len-nbmax;
    CA = idwt(CA,CD,fs1,gs1,len);       % one inverse-DWT refinement level
end
fBm = wkeep(CA,L,'c');
fBm = fBm-fBm(1);                       % force the path to start at zero
I was trying to understand it from the paper cited in the documentation (citation 7 in the snapshot above), which says it is possible to control the variance of the fBm.

Related

How to rotate a 2D theoretical path to fit or be overlaid on a path from video processing

I track the motion of an object from a video file in MATLAB and save the locations from each frame in a numberOfFrames x 2 array.
I know the theoretical (intended) path. When recording the movie, the camera sits at some unknown angle in space, so the recorded path is skewed. The only extra information I have is the scaling between pixels and millimeters, obtained from the object's diameter.
Now I would like to rotate the intended path and move it around until it is overlaid on the tracked motion path.
I start with my theoretical path (Pth), rotate it in 3 dimensions to get "Pthr", and then loop over each point in "M". For each point in "M" I look for the closest point in "Pthr", then repeat for the next point in "M". This probably has the problem that the same point in "Pthr" can be chosen for multiple points in "M".
I noticed this is sensitive to my initial guess and it gives terrible results.
Also, M is not a perfect path; since it comes from experimental measurements, it is nowhere near perfect. (Figure: measured vs. theoretical unrotated path.)
% M = [Mx,My], is the location in x and y from motion tracking.
% scale = 20; % pixels/mm, using the size of the object
% Build the theoretical path (Pth): it goes from (0,0,0) to (0,3,0) to
% (3,3,0) to (3,0,0), so it is approximately the same length as M.
% num: number of points per path segment (defined elsewhere)
Pthup = linspace(0,3,num)';
Pthdwn = linspace(3,0,num)';
Pth0 = zeros(size(Pthup));
Pth3 = 3*ones(size(Pthup));
% Pth is approximately the same length as M
Pth = scale*[Pth0 Pthup Pth0;Pthup Pth3 Pth0;Pth3 Pthdwn Pth0];
% use fmincon to minimize the sum of squares
lb = [145 0 -45 min(min(M)) min(min(M))]; % lower bound
ub = [180 90 45 max(max(M)) max(max(M))]; % upper bound
coro = [180 0 0 mean(Mx) mean(My)]; % initial guess
% initial guess: (theta(x), theta(y), theta(z), shift in x, shift in y)
cnt = 0; er = 1;
while (abs(er) > 0.1)
    [const,fval] = fmincon(@(cor) findOrientation(cor,Pth,M),coro,[],[],[],[],lb,ub);
    er = sum(const - coro);
    coro = const;
    cnt = cnt + 1;
    if (cnt > 50)
        break
    end
end
%% function findOrientation keeps rotating Pth until it is closest to M
function [Eo] = findOrientation(cor,Pth,M)
% cor = [angles of rotation, center coordinates]; (degrees, non-dimensionalized in pixels)
% M   = measured coordinates from the movie, in pixels
% Eo  = sum of the relative radial errors (the quantity fmincon minimizes)
%% Rotation of the theoretical path about z, y, x, then shifting it in xy
thx = cor(1);
thy = cor(2);
thz = cor(3);
Tz = [cosd(thz) -sind(thz) 0;
      sind(thz)  cosd(thz) 0; 0 0 1]; % rotation matrix about z
Ty = [cosd(thy) 0 -sind(thy); 0 1 0;
      sind(thy) 0  cosd(thy)];        % rotation matrix about y
Tx = [1 0 0; 0 cosd(thx) -sind(thx);
      0 sind(thx)  cosd(thx)];        % rotation matrix about x
Pthr = zeros(size(Pth));
for i = 1:size(Pth,1)
    Pthr(i,:) = (Tx*Ty*Tz*Pth(i,:)').';  % apply Rz, then Ry, then Rx
end
Pthr = Pthr(:,[1,2]);      % omit the third coordinate because the data are 2D
Pthr = Pthr + cor([4:5]);  % shift in xy
rin = sqrt(Pthr(:,1).^2 + Pthr(:,2).^2);  % theoretical radii
Centern = sqrt(M(:,1).^2 + M(:,2).^2);    % measured radii
coor = zeros(size(M,1),1);
for i = 1:size(M,1)  % loop over each point of the tracked motion
    sub = Pthr - M(i,:);          % offsets from M(i,:) to all points of Pthr
    dist = sqrt(sum(sub.^2,2));   % distances from M(i,:) to all points of Pthr
    % the index is based on the minimum absolute distance between Pthr and
    % M(i,:); it picks the Pthr point closest to this particular M(i,:)
    [~, index] = min(dist);
    coor(i) = abs(rin(index) - Centern(i))./rin(index);
end
Eo = sum(coor);
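As an aside (not part of the original question): if the camera distortion reduces to a similarity transform in the image plane, MATLAB's procrustes function (Statistics and Machine Learning Toolbox) solves the rotation + translation + scaling alignment in closed form, which sidesteps the sensitivity to the initial guess; when point correspondences are unknown, an ICP-style loop around it is the usual remedy. A minimal sketch, assuming M and the xy part of Pth are both n-by-2 and roughly in point-to-point correspondence:
Pth2 = Pth(:,1:2);                 % drop the z column of the theoretical path
% d: residual, Z: transformed path, tr: struct with rotation tr.T, scale
% tr.b, and translation tr.c such that Z = tr.b*Pth2*tr.T + tr.c
[d, Z, tr] = procrustes(M, Pth2);  % least-squares similarity alignment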

Speeding up vectorized eye-tracking algorithm in numpy

I'm trying to implement Fabian Timm's eye-tracking algorithm [http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf] (found here: [http://thume.ca/projects/2012/11/04/simple-accurate-eye-center-tracking-in-opencv/]) in numpy and OpenCV and I've hit a snag. I think I've vectorized my implementation decently enough, but it's still not fast enough to run in real time and it doesn't detect pupils with as much accuracy as I had hoped. This is my first time using numpy, so I'm not sure what I've done wrong.
def find_pupil(eye):
    eye_len = np.arange(eye.shape[0])
    xx,yy = np.meshgrid(eye_len,eye_len) #coordinates
    XX,YY = np.meshgrid(xx.ravel(),yy.ravel()) #all distance vectors
    Dx,Dy = [YY-XX, YY-XX] #y2-y1, x2-x1 -- simpler this way because YY = XX.T
    Dlen = np.sqrt(Dx**2+Dy**2)
    Dx,Dy = [Dx/Dlen, Dy/Dlen] #normalized
    Gx,Gy = np.gradient(eye)
    Gmagn = np.sqrt(Gx**2+Gy**2)
    Gx,Gy = [Gx/Gmagn,Gy/Gmagn] #normalized
    GX,GY = np.meshgrid(Gx.ravel(),Gy.ravel())
    X = (GX*Dx+GY*Dy)**2
    eye = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1])) #inverting and blurring eye for use as w
    eyem = np.repeat(eye.ravel()[np.newaxis,:],eye.size,0)
    C = (np.nansum(eyem*X, axis=0)/eye.size).reshape(eye.shape)
    return np.unravel_index(C.argmax(), C.shape)
and the rest of the code:
# imports needed to run the snippet (not shown in the original post)
from math import floor
import cv2
import numpy as np

def find_eyes(face):
    left_x, left_y = [int(floor(0.5 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    right_x, right_y = [int(floor(0.1 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    area = int(floor(0.2 * face.shape[0]))
    left_eye = (left_x, left_y, area, area)
    right_eye = (right_x, right_y, area, area)
    return [left_eye,right_eye]

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = find_eyes(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            eye_gray = roi_gray[ey:ey+eh,ex:ex+ew]
            eye_color = roi_color[ey:ey+eh,ex:ex+ew]
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(255,0,0),2)
            px,py = find_pupil(eye_gray)
            cv2.rectangle(eye_color,(px,py),(px+1,py+1),(255,0,0),2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
You can avoid many of those operations that replicate elements by performing the mathematical operations directly, after creating singleton dimensions that allow NumPy broadcasting. This brings two benefits: on-the-fly operations save workspace memory, and there is a performance boost. At the end, we can also replace the nansum calculation with a simplified version. With all of that philosophy in mind, here's one modified approach -
def find_pupil_v2(face, x, y, w, h):
    eye = face[x:x+w,y:y+h]
    eye_len = np.arange(eye.shape[0])
    N = eye_len.size**2
    eye_len_diff = eye_len[:,None] - eye_len
    Dlen = np.sqrt(2*((eye_len_diff)**2))
    Dxy0 = eye_len_diff/Dlen
    Gx0,Gy0 = np.gradient(eye)
    Gmagn = np.sqrt(Gx0**2+Gy0**2)
    Gx,Gy = [Gx0/Gmagn,Gy0/Gmagn] #normalized
    B0 = Gy[:,:,None]*Dxy0[:,None,:]
    C0 = Gx[:,None,:]*Dxy0
    X = ((C0.transpose(1,0,2)[:,None,:,:]+B0[:,:,None,:]).reshape(N,N))**2
    eye1 = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1]))
    C = (np.nansum(X,0)*eye1.ravel()/eye1.size).reshape(eye1.shape)
    return np.unravel_index(C.argmax(), C.shape)
There's one repeat still left, at Dxy. It might be possible to avoid that step by feeding Dxy0 directly into the step that uses Dxy to give us X, but I haven't worked it through. Everything else has been converted to broadcasting!
Runtime test and output verification -
In [539]: # Inputs with random elements
...: face = np.random.randint(0,10,(256,256)).astype('uint8')
...: x = 40
...: y = 60
...: w = 64
...: h = 64
...:
In [540]: find_pupil(face,x,y,w,h)
Out[540]: (32, 63)
In [541]: find_pupil_v2(face,x,y,w,h)
Out[541]: (32, 63)
In [542]: %timeit find_pupil(face,x,y,w,h)
1 loops, best of 3: 4.15 s per loop
In [543]: %timeit find_pupil_v2(face,x,y,w,h)
1 loops, best of 3: 529 ms per loop
It seems we are getting close to 8x speedup!

Image Based Visual Servoing algorithm in MATLAB

I was trying to implement the IBVS algorithm (the one explained in the Introduction here) in MATLAB myself, but I am facing the following problem: the algorithm seems to work only for cases where the camera does not have to change its orientation with respect to the world frame. For example, if I just try to make one vertex of the initial (almost) square move closer to its opposite vertex, the algorithm does not work, as can be seen in the following image.
The red x's are the desired projections, the blue circles are the initial ones, and the green ones are those produced by my algorithm.
Also, the errors are not decreasing exponentially, as they should.
What am I doing wrong? I am attaching my MATLAB code, which is fully runnable; I took out the code that was performing the plotting, so I hope it is more readable now. If anyone could take a look, I would be really grateful. Visual servoing has to be performed with at least 4 target points, because otherwise the problem has no unique solution. If you are willing to help, I would suggest you take a look at the calc_Rotation_matrix() function to check that the rotation matrix is properly calculated, then verify that the line ds = vc; in euler_ode is correct. The camera orientation is expressed in Euler angles according to this convention. Finally, one could check whether the interaction matrix L is properly calculated.
function VisualServo()
global A3D B3D C3D D3D A B C D Ad Bd Cd Dd
%coordinates of the 4 points wrt camera frame
A3D = [-0.2633;0.27547;0.8956];
B3D = [0.2863;-0.2749;0.8937];
C3D = [-0.2637;-0.2746;0.8977];
D3D = [0.2866;0.2751;0.8916];
%initial projections (computed here only to show their relation with the desired ones)
A=A3D(1:2)/A3D(3);
B=B3D(1:2)/B3D(3);
C=C3D(1:2)/C3D(3);
D=D3D(1:2)/D3D(3);
%initial camera position and orientation
%orientation is expressed in Euler angles (X-Y-Z around the inertial frame
%of reference)
cam=[0;0;0;0;0;0];
%desired projections
Ad=A+[0.1;0];
Bd=B;
Cd=C+[0.1;0];
Dd=D;
t0 = 0;
tf = 50;
s0 = cam;
%time step
dt=0.01;
t = euler_ode(t0, tf, dt, s0);
end
function ts = euler_ode(t0,tf,dt,s0)
global A3D B3D C3D D3D Ad Bd Cd Dd
s = s0;
ts = [];
for t = t0:dt:tf
    ts(end+1) = t;
    cam = s;
    % rotation matrix R_WCS_CCS
    R = calc_Rotation_matrix(cam(4),cam(5),cam(6));
    r = cam(1:3);
    % 3D coordinates of the 4 points wrt the NEW camera frame
    A3D_cam = R'*(A3D-r);
    B3D_cam = R'*(B3D-r);
    C3D_cam = R'*(C3D-r);
    D3D_cam = R'*(D3D-r);
    % NEW projections
    A = A3D_cam(1:2)/A3D_cam(3);
    B = B3D_cam(1:2)/B3D_cam(3);
    C = C3D_cam(1:2)/C3D_cam(3);
    D = D3D_cam(1:2)/D3D_cam(3);
    % computing the L matrices
    L1 = L_matrix(A(1),A(2),A3D_cam(3));
    L2 = L_matrix(B(1),B(2),B3D_cam(3));
    L3 = L_matrix(C(1),C(2),C3D_cam(3));
    L4 = L_matrix(D(1),D(2),D3D_cam(3));
    L = [L1;L2;L3;L4];
    % updating the projection errors
    e = [A-Ad;B-Bd;C-Cd;D-Dd];
    % compute camera velocity
    vc = -0.5*pinv(L)*e;
    % change of the camera position and orientation
    ds = vc;
    % update camera position and orientation
    s = s + ds*dt;
end
ts(end+1) = tf+dt;
end
function R = calc_Rotation_matrix(theta_x, theta_y, theta_z)
Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)];
Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)];
Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1];
R = Rx*Ry*Rz;
end
function L = L_matrix(x,y,z)
L = [-1/z,0,x/z,x*y,-(1+x^2),y;
0,-1/z,y/z,1+y^2,-x*y,-x];
end
Cases that work:
Ad=2*A;
Bd=2*B;
Cd=2*C;
Dd=2*D;
Ad=A+1;
Bd=B+1;
Cd=C+1;
Dd=D+1;
Ad=2*A+1;
Bd=2*B+1;
Cd=2*C+1;
Dd=2*D+1;
Cases that do NOT work:
Rotation by 90 degrees and zoom out (zoom out alone works, but I am doing it here for better visualization)
Ad=2*D;
Bd=2*C;
Cd=2*A;
Dd=2*B;
Your problem comes from the way you update the camera pose from the resulting visual servoing velocity. Rather than
cam = cam + vc*dt;
you should compute the new camera position using the exponential map
cam = cam*expm(vc*dt)
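Note that this last expression is schematic: cam is a 6-vector, so it cannot literally be multiplied by a matrix exponential. One concrete reading (my sketch, not spelled out in the original answer) keeps the pose as a 4-by-4 homogeneous transform T and applies the SE(3) exponential of the twist vc = [v; w]:
w  = vc(4:6);
wx = [   0  -w(3)  w(2);
       w(3)    0  -w(1);
      -w(2)  w(1)    0 ];     % skew-symmetric matrix of the angular velocity
xi = [wx, vc(1:3); 0 0 0 0];  % 4x4 twist matrix
T  = T * expm(xi * dt);       % exponential-map pose update (replaces s = s + ds*dt)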

MATLAB: efficient computation of local histograms within circular neighborhoods

I have an image over which I would like to compute a local histogram within a circular neighborhood whose size is given by a radius. Although the code below does the job, it is computationally expensive: I ran the profiler, and the way I access the pixels within the circular neighborhoods is already expensive.
Is there any sort of improvement/optimization based maybe on vectorization? Or for instance, storing the neighborhoods as columns?
I found a similar question in this post, and the proposed solution is quite in the spirit of the code below; however, that solution is still not appropriate for my case. Any ideas are really welcome :-) Imagine for the moment that the image is binary, but the method should ideally also work with gray-level images :-)
[rows,cols] = size(img);
hist_img = zeros(rows, cols, 2);
[XX, YY] = meshgrid(1:cols, 1:rows);
for rr = 1:rows
    for cc = 1:cols
        distance = sqrt( (YY-rr).^2 + (XX-cc).^2 );
        mask_radii = (distance <= radius);
        bwresponses = img(mask_radii);
        [nelems, ~] = histc(double(bwresponses),0:255);
        % do some processing over the histogram
        ...
    end
end
EDIT 1: Given the feedback received, I tried to update the solution; however, it is not yet correct:
radius = sqrt(2.0);
disk = diskfilter(radius);
fun = @(x) histc( x(disk>0), min(x(:)):max(x(:)) );
output = im2col(im, size(disk), fun);

function disk = diskfilter(radius)
height = 2*ceil(radius)+1;
width = 2*ceil(radius)+1;
[XX,YY] = meshgrid(1:width,1:height);
dist = sqrt((XX-ceil(width/2)).^2+(YY-ceil(height/2)).^2);
disk = (dist <= radius);
end
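As an editorial aside (not in the original post): im2col only rearranges image blocks into columns and does not accept a function handle, which is one reason the attempt above fails; nlfilter does take one, at the price of returning a single scalar per pixel. A minimal sketch under that constraint, reusing the disk mask from diskfilter above:
% one scalar statistic per pixel, e.g. the count of nonzero pixels inside
% the disk; a full per-pixel histogram needs one such pass per bin
counts = nlfilter(im, size(disk), @(block) sum(block(disk)));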
Following on from the technique I described in my answer to a similar question, you could try the following:
- compute the index offsets from a particular voxel that get you to all the neighbors within the radius,
- determine which voxels have all their neighbors at least the radius away from the edge,
- compute the neighbors for all these voxels,
- generate your histograms for each neighborhood.
It is not hard to vectorize this, but note that:
- it will be slow when the neighborhood is large,
- it involves generating an intermediate matrix that is NxM (N = number of voxels in the image, M = number of voxels in the neighborhood), which could get very large.
Here is the code:
% generate histograms for neighborhood within radius r
A = rand(200,200,200);
radius = 2.5;
tic
sz=size(A);
[xx yy zz] = meshgrid(1:sz(2), 1:sz(1), 1:sz(3));
center = round(sz/2);
centerPoints = find((xx - center(1)).^2 + (yy - center(2)).^2 + (zz - center(3)).^2 < radius.^2);
centerIndex = sub2ind(sz, center(1), center(2), center(3));
% limit to just the points that are "far enough on the inside":
inside = find(xx > radius+1 & xx < sz(2) - radius & ...
yy > radius + 1 & yy < sz(1) - radius & ...
zz > radius + 1 & zz < sz(3) - radius);
offsets = centerPoints - centerIndex;
allPoints = 1:prod(sz);
insidePoints = allPoints(inside);
indices = bsxfun(@plus, offsets, insidePoints);
hh = histc(A(indices), 0:0.1:1); % <<<< modify to give you the histogram you want
toc
A 2D version of the same code (which might be all you need, and is considerably faster):
% generate histograms for neighborhood within radius r
A = rand(200,200);
radius = 2.5;
tic
sz=size(A);
[xx yy] = meshgrid(1:sz(2), 1:sz(1));
center = round(sz/2);
centerPoints = find((xx - center(1)).^2 + (yy - center(2)).^2 < radius.^2);
centerIndex = sub2ind(sz, center(1), center(2));
% limit to just the points that are "far enough on the inside":
inside = find(xx > radius+1 & xx < sz(2) - radius & ...
yy > radius + 1 & yy < sz(1) - radius);
offsets = centerPoints - centerIndex;
allPoints = 1:prod(sz);
insidePoints = allPoints(inside);
indices = bsxfun(@plus, offsets, insidePoints);
hh = histc(A(indices), 0:0.1:1); % <<<< modify to give you the histogram you want
toc
You're right, I don't think colfilt can be used here, as you're not applying a filter. You'll have to check the correctness, but here's my attempt using im2col and your diskfilter function (I removed the conversion to double, so it now outputs logicals):
function circhist
% Example data
im = randi(256,20)-1;
% Ranges - I do this globally for the whole image rather than for each neighborhood
mini = min(im(:));
maxi = max(im(:));
edges = linspace(mini,maxi,20);
% Disk filter
radius = sqrt(2.0);
disk = diskfilter(radius); % Returns logical matrix
% Pad array with -1
im_pad = padarray(im, (size(disk)-1)/2, -1);
% Convert sliding neighborhoods to columns
B = im2col(im_pad, size(disk), 'sliding');
% Get elements from each column that correspond to disk (logical indexing)
C = B(disk(:), :);
% Apply histogram across columns to count number of elements
out = histc(C, edges)
% Display output
figure
imagesc(out)
h = colorbar;
ylabel(h,'Counts');
xlabel('Neighborhood #')
ylabel('Bins')
axis xy
function disk = diskfilter(radius)
height = 2*ceil(radius)+1;
width = 2*ceil(radius)+1;
[XX,YY] = meshgrid(1:width,1:height);
dist = sqrt((XX-ceil(width/2)).^2+(YY-ceil(height/2)).^2);
disk = (dist <= radius);
If you want to set your ranges (edges) based on each neighborhood, then you'll need to make sure the vector is always the same length if you want to build a big matrix (otherwise the rows of that matrix won't correspond to each other).
You should also note that the shape of the disk returned by fspecial is not as circular as the one you were using. It's meant to be used as a smoothing/averaging filter, so the edges are fuzzy (anti-aliased); thus, when you use ~=0 it will grab more pixels. I'd stick with your own function, which is faster anyway.
You could try processing with the opposite logic (as briefly explained in the comment): for each gray level q, add the shifted indicator image (image == q) into the accumulator at every offset inside the disk.
hist = zeros(W+2*R, H+2*R, Q);  % note: this shadows MATLAB's built-in hist
for i = 1:2*R+1                 % loop over all offsets in the disk's bounding box
    for j = 1:2*R+1
        if ((i-R-1)^2+(j-R-1)^2 < R*R)
            for q = 0:Q-1
                % += is Octave-only; in MATLAB the update must be written out
                hist(i:i+W-1,j:j+H-1,q+1) = hist(i:i+W-1,j:j+H-1,q+1) + (image == q);
            end
        end
    end
end
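A closely related formulation (my addition, not from the original answer): each gray level's slice of that histogram volume is just the indicator image for that level filtered with the binary disk mask, so with the Image Processing Toolbox the whole computation can be written as:
% per-bin local histogram via linear filtering; disk is a logical mask
% (e.g. from the diskfilter function above) and img has levels 0..Q-1
localhist = zeros([size(img), Q]);
for q = 0:Q-1
    localhist(:,:,q+1) = imfilter(double(img == q), double(disk));
end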

Ellipse Detection using Hough Transform

Using the Hough transform, how can I detect and get the coordinates of the center (x0,y0) and the semi-axes "a" and "b" of an ellipse in 2D space?
This is ellipse01.bmp:
I = imread('ellipse01.bmp');
[m n] = size(I);
c = 0;
for i = 1:m
    for j = 1:n
        if I(i,j) == 1
            c = c+1;
            p(c,1) = i;
            p(c,2) = j;
        end
    end
end
Edges=transpose(p);
Size_Ellipse = size(Edges);
B = 1:ceil(Size_Ellipse(1)/2);
Acc = zeros(length(B),1);
a1=0;a2=0;b1=0;b2=0;
Ellipse_Minor=[];Ellipse_Major=[];Ellipse_X0 = [];Ellipse_Y0 = [];
Global_Threshold = ceil(Size_Ellipse(2)/6);%Used for Major Axis Comparison
Local_Threshold = ceil(Size_Ellipse(1)/25);%Used for Minor Axis Comparison
[Y,X]=find(Edges);
Limit=numel(Y);
Thresh = 150;
Para=[];
for Count_01 = 1:(Limit-1)
    for Count_02 = (Count_01+1):Limit
        a1 = Y(Count_01); b1 = X(Count_01);
        a2 = Y(Count_02); b2 = X(Count_02);
        Dist_01 = sqrt((a1-a2)^2 + (b1-b2)^2);
        if (Dist_01 > Global_Threshold)
            Center_X0 = (b1+b2)/2; Center_Y0 = (a1+a2)/2;
            Major = Dist_01/2.0; Alpha = atan((a2-a1)/(b2-b1));
            if (Alpha == 0)
                for Count_03 = 1:Limit
                    % the original || test was always true; the third point
                    % must differ from BOTH members of the pair
                    if ((Count_03 ~= Count_01) && (Count_03 ~= Count_02))
                        a3 = Y(Count_03); b3 = X(Count_03);
                        Dist_02 = sqrt((a3 - Center_Y0)^2 + (b3 - Center_X0)^2);
                        if (Dist_02 > Local_Threshold)
                            Cos_Tau = ((Major)^2 + (Dist_02)^2 - (a3-a2)^2 - (b3-b2)^2)/(2*Major*Dist_02);
                            Sin_Tau_Sq = 1 - (Cos_Tau)^2;  % this is sin(tau)^2, not sin(tau)
                            % per Xie & Ji (2002): b^2 = a^2 d^2 sin^2(tau) / (a^2 - d^2 cos^2(tau))
                            Minor_Temp = sqrt(((Major*Dist_02)^2*Sin_Tau_Sq)/(Major^2 - (Dist_02*Cos_Tau)^2));
                            if ((Minor_Temp > 1) && (Minor_Temp < B(end)))
                                Acc(round(Minor_Temp)) = Acc(round(Minor_Temp)) + 1;
                            end
                        end
                    end
                end
            end
            Minor = find(Acc == max(Acc(:)));
            if (Acc(Minor) > Thresh)
                Ellipse_Minor(end+1) = Minor(1); Ellipse_Major(end+1) = Major;
                Ellipse_X0(end+1) = Center_X0; Ellipse_Y0(end+1) = Center_Y0;
                for Count = 1:numel(X)
                    Para_X = ((X(Count)-Ellipse_X0(end))^2)/(Ellipse_Major(end)^2);
                    Para_Y = ((Y(Count)-Ellipse_Y0(end))^2)/(Ellipse_Minor(end)^2);
                    if ((Para_X + Para_Y) <= 2)  % the >= -2 half of the original test was vacuous
                        Edges(X(Count),Y(Count)) = 0;
                    end
                end
            end
            Acc = zeros(size(Acc));
        end
    end
end
Although this is an old question, perhaps what I found can help someone.
The main problem with using the normal Hough transform to detect ellipses is the dimension of the accumulator, since we would need to vote for 5 variables: the two center coordinates, the two semi-axes, and the orientation (the equation is explained here).
There is a very nice algorithm where the accumulator can be a simple 1D array, for example, and that runs efficiently. If you want to see code, you can look here (the image used for testing was the one posted above).
If you use the Hough transform for a line, it is given in normal form as rho = x*cos(theta) + y*sin(theta).
For an ellipse, since its equation is (x-x0)^2/a^2 + (y-y0)^2/b^2 = 1, you could transform the parameterization to
rho = a*x*cos(theta) + b*y*sin(theta)
Although I am not sure whether you are using the standard Hough transform, for ellipse-like transforms you could manipulate the first given function.
If your ellipse is as provided, being a true ellipse and not a noisy sample of points, then (see the sketch after this list):
- the search for the two furthest points gives the ends of the major axis,
- the search for the two nearest points (through the centre) gives the ends of the minor axis,
- the intersection of these lines (you can check it's a right angle) occurs at the centre.
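A minimal sketch of that recipe (my illustration, not part of the original answer), assuming an n-by-2 matrix P of points sampled densely from a clean ellipse, and pdist from the Statistics Toolbox:
D = squareform(pdist(P));          % all pairwise distances between edge points
[~, k] = max(D(:));                % the two furthest points...
[i, j] = ind2sub(size(D), k);      % ...are the ends of the major axis
x0y0 = (P(i,:) + P(j,:)) / 2;      % centre = midpoint of the major axis
a = D(i,j) / 2;                    % semi-major half-length
r = sqrt(sum((P - x0y0).^2, 2));   % distance of every point from the centre
b = min(r);                        % semi-minor half-length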
If you know the 'a' and 'b' of an ellipse, then you can rescale the image by a factor of a/b in one direction and look for a circle. I am still thinking about what to do when a and b are unknown.
If you know that it is a circle, then use the Hough transform for circles. Here is some sample code:
int accomulatorResolution = 1; // for each pixel
int minDistBetweenCircles = 10; // In pixels
int cannyThresh = 20;
int accomulatorThresh = 5*_accT+1;
int minCircleRadius = 0;
int maxCircleRadius = _maxR*10;
cvClearMemStorage(storage);
circles = cvHoughCircles( gryImage, storage,
CV_HOUGH_GRADIENT, accomulatorResolution,
minDistBetweenCircles,
cannyThresh , accomulatorThresh,
minCircleRadius,maxCircleRadius );
// Draw circles
for (int i = 0; i < circles->total; i++) {
    float* p = (float*)cvGetSeqElem(circles, i);
    // Draw center
    cvCircle(dstImage, cvPoint(cvRound(p[0]), cvRound(p[1])),
             1, CV_RGB(0,255,0), -1, 8, 0);
    // Draw circle
    cvCircle(dstImage, cvPoint(cvRound(p[0]), cvRound(p[1])),
             cvRound(p[2]), CV_RGB(255,0,0), 1, 8, 0);
}
