Filter the noise from an EEG signal

I recently recorded an EEG signal with a sampling rate of 256 Hz. The signal will be band-passed at 4-64 Hz. I need code to filter the EEG data. Which type of filter in MATLAB is most suitable for removing artifacts or noise from the signal?

You can apply a notch filter at 50 or 60 Hz to remove power-line interference.
Artifacts from eye movements generally lie in the 2-5 Hz range, so you can apply a high-pass filter in that region; a sketch of a complete filter chain follows.
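For the 4-64 Hz band mentioned in the question (Fs = 256 Hz), a minimal sketch (my addition) might look like this, assuming the Signal Processing Toolbox and the recording in a vector eeg:
Fs = 256; % sampling rate [Hz]
[b,a] = butter(4, [4 64]/(Fs/2), 'bandpass'); % 4th-order Butterworth band-pass, 4-64 Hz
eeg_bp = filtfilt(b, a, eeg); % zero-phase filtering avoids phase distortion
[bs,as] = butter(2, [48 52]/(Fs/2), 'stop'); % band-stop notch around 50 Hz mains
eeg_filt = filtfilt(bs, as, eeg_bp);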
Wavelets also provide a thresholding mechanism to filter out noise (hard and soft thresholding), for example via wavelet or wavelet packet decomposition.
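A minimal sketch of the wavelet route (my addition; it uses a plain 8-level wavelet decomposition rather than packets, with the common universal soft threshold as one choice among several):
[C,L] = wavedec(eeg, 8, 'db8'); % 8-level wavelet decomposition
d1 = detcoef(C, L, 1); % finest-scale details, mostly noise
sigma = median(abs(d1))/0.6745; % robust noise level estimate
thr = sigma*sqrt(2*log(numel(eeg))); % universal threshold
C(L(1)+1:end) = wthresh(C(L(1)+1:end), 's', thr); % soft-threshold the detail coefficients only
eeg_denoised = waverec(C, L, 'db8'); % reconstruct the denoised signal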

In case your Fs = 1000 Hz, use this code to filter the signal and extract the frequency bands (alpha, beta, ...). As a sanity check, note that the detail at level j spans roughly Fs/2^(j+1) to Fs/2^j Hz, so verify that each level really matches your band definitions.
S = your_eeg_signal; % replace with your EEG data (a row vector)
waveletFunction = 'db8'; % or 'sym8'
[C,L] = wavedec(S,8,waveletFunction);
%% Calculate the coefficient vectors
cD1 = detcoef(C,L,1); %NOISY
cD2 = detcoef(C,L,2); %NOISY
cD3 = detcoef(C,L,3); %NOISY
cD4 = detcoef(C,L,4); %NOISY
cD5 = detcoef(C,L,5); %GAMMA
cD6 = detcoef(C,L,6); %BETA
cD7 = detcoef(C,L,7); %ALPHA
cD8 = detcoef(C,L,8); %THETA
cA8 = appcoef(C,L,waveletFunction,8); %DELTA
%% Reconstruct the detail signals at each level
D1 = wrcoef('d',C,L,waveletFunction,1); %NOISY
D2 = wrcoef('d',C,L,waveletFunction,2); %NOISY
D3 = wrcoef('d',C,L,waveletFunction,3); %NOISY
D4 = wrcoef('d',C,L,waveletFunction,4); %NOISY
D5 = wrcoef('d',C,L,waveletFunction,5); %GAMMA
D6 = wrcoef('d',C,L,waveletFunction,6); %BETA
D7 = wrcoef('d',C,L,waveletFunction,7); %ALPHA
D8 = wrcoef('d',C,L,waveletFunction,8); %THETA
A8 = wrcoef('a',C,L,waveletFunction,8); %DELTA
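If you then want band features, one simple choice (my addition, one option among many) is the mean power of each reconstructed component:
alphaPower = mean(D7.^2); % mean power of the alpha component
betaPower = mean(D6.^2); % mean power of the beta component
thetaPower = mean(D8.^2); % mean power of the theta component
deltaPower = mean(A8.^2); % mean power of the delta component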
Hope this will help.

Related

LMS beamforming in Julia

I have been trying to implement a simple LMS adaptive beamforming code. Since I don't have a MATLAB license, I decided to use Julia, since the two are quite similar. To get a basic version working, I implemented the MVDR beamforming example found on MATLAB's website (I can't seem to find the link right now). Then I used the link https://teaandtechtime.com/adaptive-beamforming-with-lms/ to get LMS going.
My code at the moment is
using Plots
using LinearAlgebra
# Source: https://teaandtechtime.com/adaptive-beamforming-with-lms/
M = 20; # Number of Array Elements.
N = 200; # Number of Signal Samples.
n = 1:N; # Time Sample Index Vector.
c = 3*10^8; # Speed of light
f = 2.4*10^9; # Frequency [Hz]
lambda = c/f; # Incoming Signal Wavelength in [m].
d = lambda/2; # Interelement Distance in [m].
SNR = 20; # Target SNR in dBs.
phi_s = 0; # Target azimuth angle in degrees.
phi_i1 = 20; # Interference angle in degrees.
phi_i2 = -30; # Interference angle in degrees.
phi_i3 = 50; # Interference angle in degrees.
INR1 = 35; # Interference #1 INR in dBs.
INR2 = 70; # Interference #2 INR in dBs.
INR3 = 50; # Interference #3 INR in dBs.
u_s = (d/lambda)*sin(phi_s*pi/180); # Normalized Spatial Frequency of the Target signal.
u_int1 = (d/lambda)*sin(phi_i1*pi/180); # Normalized Spatial Frequency of the Interferer #1.
u_int2 = (d/lambda)*sin(phi_i2*pi/180); # Normalized Spatial Frequency of the Interferer #2.
u_int3 = (d/lambda)*sin(phi_i3*pi/180); # Normalized Spatial Frequency of the Interferer #3.
tau_s = (d/c)*sin(phi_s*pi/180); # Time delay of the Target signal.
tau1 = (d/c)*sin(phi_i1*pi/180); # Time delay of the Interferer #1.
tau2 = (d/c)*sin(phi_i2*pi/180); # Time delay of the Interferer #2.
tau3 = (d/c)*sin(phi_i3*pi/180); # Time delay of the Interferer #3.
# Target Signal definition.
s = zeros(ComplexF64,M,N)
v_s = exp.(-1im*2*pi*u_s*collect(0:M-1))/sqrt(M); # Target Steering Vector.
for k=1:N
    s[:,k] = 10^(SNR/20)*v_s; # Amplitude of Target Signal Generation.
end
# The uncorrelated unit power thermal noise samples with a Gaussian
# distribution are generated by:
w = (randn(M,N)+1im*randn(M,N))/sqrt(2)
# The interference [jammer] vectors are generated by:
v_i1 = exp.(-1im*2*pi*u_int1*collect(0:M-1))/sqrt(M)
i_x1 = 10^(INR1/20)*v_i1*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i2 = exp.(-1im*2*pi*u_int2*collect(0:M-1))/sqrt(M)
i_x2 = 10^(INR2/20)*v_i2*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i3 = exp.(-1im*2*pi*u_int3*collect(0:M-1))/sqrt(M)
i_x3 = 10^(INR3/20)*v_i3*(randn(1,N)+1im*randn(1,N))/sqrt(2)
# The three signals are added to produce the overall array signal.
x = s + i_x1 + i_x2 + i_x3 + w
# Run LMS algorithm
mu = 0.001; # LMS step size
a = ones(ComplexF64,M); # Complex weights
S = zeros(ComplexF64,M); # Complex weights
ts = 1/(N*f); # sample time
for k = 1:N
    d = cos(2*pi*f*k*ts); # Reference Signal
    S = a.*x[:,k];
    y = sum(S);
    global e = conj(d) - y;
    println(e)
    global a += mu*x[:,k]*e; # next weight calculation
end
println(a)
# Array Response
Nsamples1 = 3*10^4
angle1 = -90:180/Nsamples1:90-180/Nsamples1
LMS_Beam_Pat = zeros(ComplexF64,Nsamples1)
for k = 1:Nsamples1
    u = (d/lambda)*sin(angle1[k]*pi/180)
    v = exp.(-1im*2*pi*u*collect(0:M-1))/sqrt(M); # Azimuth Scanning Steering Vector.
    LMS_Beam_Pat[k] = a'*v;
end
# Plot the corresponding Beampatterns.
display(plot(angle1,10*log10.(abs.(LMS_Beam_Pat).^2),xlims=(-90,90),ylims=(-100,0)))
sleep(10)
# PolardB plot
display(plot(angle1*pi/180,10*log10.(abs.(LMS_Beam_Pat).^2), proj=:polar, lims=(-80,0)))
sleep(10)
The LMS code does not converge (rather, it diverges) and I don't know why. I also don't understand the reference-signal part and how it differs from the target steering vector. Some clarification on the general concepts would be really helpful. I am new to beamforming; my background is in root solvers and the like.
Below is the working Julia code, rewritten from the MATLAB example. It is almost identical to the code above but without the LMS section.
using Plots
using LinearAlgebra
M = 20; # Number of Array Elements.
N = 200; # Number of Signal Samples.
n = 1:N; # Time Sample Index Vector.
c = 3*10^8; # Speed of light
f = 2.4*10^9; # Frequency [Hz]
lambda = c/f; # Incoming Signal Wavelength in [m].
d = lambda/2; # Interelement Distance in [m].
SNR = 20; # Target SNR in dBs.
phi_s = 0; # Target azimuth angle in degrees.
phi_i1 = 20; # Interference angle in degrees.
phi_i2 = -30; # Interference angle in degrees.
phi_i3 = 50; # Interference angle in degrees.
INR1 = 35; # Interference #1 INR in dBs.
INR2 = 70; # Interference #2 INR in dBs.
INR3 = 50; # Interference #3 INR in dBs.
u_s = (d/lambda)*sin(phi_s*pi/180); # Normalized Spatial Frequency of the Target signal.
u_int1 = (d/lambda)*sin(phi_i1*pi/180); # Normalized Spatial Frequency of the Interferer #1.
u_int2 = (d/lambda)*sin(phi_i2*pi/180); # Normalized Spatial Frequency of the Interferer #2.
u_int3 = (d/lambda)*sin(phi_i3*pi/180); # Normalized Spatial Frequency of the Interferer #3.
tau_s = (d/c)*sin(phi_s*pi/180); # Time delay of the Target signal.
tau1 = (d/c)*sin(phi_i1*pi/180); # Time delay of the Interferer #1.
tau2 = (d/c)*sin(phi_i2*pi/180); # Time delay of the Interferer #2.
tau3 = (d/c)*sin(phi_i3*pi/180); # Time delay of the Interferer #3.
# Target Signal definition.
s = zeros(ComplexF64,M,N)
v_s = exp.(-1im*2*pi*u_s*collect(0:M-1))/sqrt(M); # Target Steering Vector.
for k=1:N
    s[:,k] = 10^(SNR/20)*v_s; # Amplitude of Target Signal Generation.
end
# The uncorrelated unit power thermal noise samples with a Gaussian
# distribution are generated by:
w = (randn(M,N)+1im*randn(M,N))/sqrt(2)
# The interference [jammer] vectors are generated by:
v_i1 = exp.(-1im*2*pi*u_int1*collect(0:M-1))/sqrt(M)
i_x1 = 10^(INR1/20)*v_i1*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i2 = exp.(-1im*2*pi*u_int2*collect(0:M-1))/sqrt(M)
i_x2 = 10^(INR2/20)*v_i2*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i3 = exp.(-1im*2*pi*u_int3*collect(0:M-1))/sqrt(M)
i_x3 = 10^(INR3/20)*v_i3*(randn(1,N)+1im*randn(1,N))/sqrt(2)
# The three signals are added to produce the overall array signal.
x = s + i_x1 + i_x2 + i_x3 + w
iplusn = i_x1 + i_x2 + i_x3 + w
# Calculation of the i+n autocorrelation matrix.
R_ipn = 10^(INR1/10)*(v_i1*v_i1') + 10^(INR2/10)*(v_i2*v_i2') + 10^(INR3/10)*(v_i3*v_i3') + I
InvR = inv(R_ipn)
# Calculate the Beam Patterns.
# MVDR Optimum Beamformer computed for a phi_s = 0 deg.
c_opt = InvR*v_s/(v_s'*InvR*v_s);
# Spatial Matched Filter | Steering Vector Beamformer Eq. (11.2.16).
c_mf = v_s
Nsamples1 = 3*10^4
angle1 = -90:180/Nsamples1:90-180/Nsamples1
Opt_Beam_Pat = zeros(ComplexF64,Nsamples1)
Conv_Beam_Pat = zeros(ComplexF64,Nsamples1)
for k = 1:Nsamples1
    u = (d/lambda)*sin(angle1[k]*pi/180)
    v = exp.(-1im*2*pi*u*collect(0:M-1))/sqrt(M); # Azimuth Scanning Steering Vector.
    Opt_Beam_Pat[k] = c_opt'*v
    Conv_Beam_Pat[k] = c_mf'*v
end
# Plot the corresponding Beampatterns.
plot(angle1,10*log10.(abs.(Conv_Beam_Pat).^2))
display(plot!(angle1,10*log10.(abs.(Opt_Beam_Pat).^2),xlims=(-90,90),ylims=(-100,0)))
sleep(10)
# PolardB plot
display(plot(angle1*pi/180,10*log10.(abs.(Opt_Beam_Pat).^2), proj=:polar, lims=(-80,0)))
sleep(10)
# Calculate the SINR loss factor for the Optimum Beamformer:
Nsamples = 3*10^4; # The resolution must be very fine to reveal the true depth of the notches.
Lsinr_opt = zeros(ComplexF64,Nsamples,1);
Lsinr_mf = zeros(Nsamples,1);
SNR0 = M*10^(SNR/10);
angle = -90:180/Nsamples:90-180/Nsamples;
for k=1:Nsamples
    v = exp.(-1im*pi*collect(0:M-1)*sin(angle[k]*pi/180))/sqrt(M); # Azimuth Scanning Steering Vector.
    c_mf = v; # This is the spatial matched filter beamformer.
    Lsinr_opt[k] = v'*InvR*v;
    SINRout_mf = real(M*(10^(SNR/10))*(abs(c_mf'*v)^2)/(c_mf'*R_ipn*c_mf));
    Lsinr_mf[k] = SINRout_mf/SNR0;
end
plot(angle,10*log10.(abs.(Lsinr_opt)),xlims=(-90,0));
display(plot!(angle,10*log10.(abs.(Lsinr_mf)),xlims=(-90,90),ylims=(-75,5)));
sleep(10)
A good practice would be to build the simulation iteratively.
One should be aware that the adaptive filter adapts to the reference signal while treating all other data as interference. The more correlated that other data is with the reference, the harder the adaptation becomes.
So the reference signal should be something you can extract from the signal arriving at the sensors. For example, it can be a non-delayed version of it, or the signal that one of the sensors receives.
To make things simpler, I created an example of real signals which basically get a different delay at each sensor.
The simplest way to add a delay to a real signal is by adjusting the phase of its analytic signal and taking the real part.
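A minimal MATLAB sketch of that phase trick (my illustration, matching the MATLAB side of the simulation; it is exact for a tonal signal at frequency f0, and hilbert is from the Signal Processing Toolbox):
fs = 1e3; t = (0:999)'/fs; % assumed sample rate and time axis
f0 = 50; % assumed narrowband signal frequency [Hz]
x = cos(2*pi*f0*t); % a real narrowband signal
tau = 2.5e-3; % desired delay [s]
x_delayed = real(hilbert(x).*exp(-1i*2*pi*f0*tau)); % phase-rotate the analytic signal, take the real part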
Then we basically have N signals with M samples each going into the adaptive filter, which applies an inner product on each row of this M x N matrix.
I created a simple simulation both in MATLAB and Julia.
This is the result from the Julia code (figure not reproduced here):
In it we see the antenna gain for a target signal coming from 30 [Deg] and interference from [20.0; -35.0; 50.0].
As can be seen, the array does adapt and has a gain of ~1 in the reference direction while rejecting all other directions.
The full code is available in my StackExchange Signal Processing Q81138 GitHub repository (look at the SignalProcessing\Q81138 folder).

How do I normalize the signal generated from wfbm in MATLAB?

I am using MATLAB's wavelet fractional Brownian motion function (wfbm) to generate 1D trajectories of a point-like diffusive particle in three physical regimes: sub-diffusion, super-diffusion, and normal diffusion.
The problem I encounter is that the time normalization/variance is strange.
For example, for a Hurst parameter of 0.5 (regular Brownian motion) I get a standard deviation of the increments which isn't unity (1):
>> std(diff(wfbm(0.5,1e6)))
ans =
0.3955
Because of this, I am not sure how to re-normalize the 3 sets of trajectories I create for the 3 diffusion cases (sub, super, normal).
I generated trajectories of length M for N point-like particles:
M=500;
N=200;
nd = zeros(M,N);
sub = zeros(M,N);
sup = zeros(M,N);
Hsub = 0.25;
Hsup = 0.75;
for j=1:N
    nd(:,j) = wfbm(0.5, M, 15, 'db10');
    sub(:,j) = wfbm(Hsub,M, 10, 'db10');
    sup(:,j) = wfbm(Hsup,M, 10, 'db10');
end
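For example, I could rescale each trajectory by the sample standard deviation of its increments so that the steps have unit variance, but I am not sure this is the right normalization for the anomalous cases:
for j=1:N
    nd(:,j) = nd(:,j)/std(diff(nd(:,j))); % force unit-variance increments
    sub(:,j) = sub(:,j)/std(diff(sub(:,j)));
    sup(:,j) = sup(:,j)/std(diff(sup(:,j)));
end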
Here is how the function is implemented in MATLAB to generate the signal; however, I am not sure how to modify it to obtain a properly normalized Brownian motion:
tmp = conv(randn(1,len+nbmax),ckbeta);
tmp = cumsum(tmp);
CA = wkeep(tmp,len,'c');
for j=0:nblev-1
    CD = 2^(j/2)*4^(-s)*2^(-j*s)*randn(1,len);
    len = 2*len-nbmax;
    CA = idwt(CA,CD,fs1,gs1,len);
end
fBm = wkeep(CA,L,'c');
fBm = fBm-fBm(1);
I was trying to understand this from the paper, which says it is possible to control the variance of the fBm (this is citation 7 in the snapshot I attached; the image is not reproduced here).

Speeding up vectorized eye-tracking algorithm in numpy

I'm trying to implement Fabian Timm's eye-tracking algorithm (paper: http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf, write-up: http://thume.ca/projects/2012/11/04/simple-accurate-eye-center-tracking-in-opencv/) in NumPy and OpenCV, and I've hit a snag. I think I've vectorized my implementation decently, but it's still not fast enough to run in real time, and it doesn't detect pupils as accurately as I had hoped. This is my first time using NumPy, so I'm not sure what I've done wrong.
def find_pupil(eye):
    eye_len = np.arange(eye.shape[0])
    xx,yy = np.meshgrid(eye_len,eye_len) #coordinates
    XX,YY = np.meshgrid(xx.ravel(),yy.ravel()) #all distance vectors
    Dx,Dy = [YY-XX, YY-XX] #y2-y1, x2-x1 -- simpler this way because YY = XX.T
    Dlen = np.sqrt(Dx**2+Dy**2)
    Dx,Dy = [Dx/Dlen, Dy/Dlen] #normalized
    Gx,Gy = np.gradient(eye)
    Gmagn = np.sqrt(Gx**2+Gy**2)
    Gx,Gy = [Gx/Gmagn,Gy/Gmagn] #normalized
    GX,GY = np.meshgrid(Gx.ravel(),Gy.ravel())
    X = (GX*Dx+GY*Dy)**2
    eye = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1])) #inverting and blurring eye for use as w
    eyem = np.repeat(eye.ravel()[np.newaxis,:],eye.size,0)
    C = (np.nansum(eyem*X, axis=0)/eye.size).reshape(eye.shape)
    return np.unravel_index(C.argmax(), C.shape)
and the rest of the code:
def find_eyes(face):
    left_x, left_y = [int(floor(0.5 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    right_x, right_y = [int(floor(0.1 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    area = int(floor(0.2 * face.shape[0]))
    left_eye = (left_x, left_y, area, area)
    right_eye = (right_x, right_y, area, area)
    return [left_eye,right_eye]
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = find_eyes(roi_gray)
        for (ex,ey,ew,eh) in eyes:
            eye_gray = roi_gray[ey:ey+eh,ex:ex+ew]
            eye_color = roi_color[ey:ey+eh,ex:ex+ew]
            cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(255,0,0),2)
            px,py = find_pupil(eye_gray)
            cv2.rectangle(eye_color,(px,py),(px+1,py+1),(255,0,0),2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
You can avoid creating those replicated arrays altogether by performing the mathematical operations directly on arrays with singleton dimensions, which lets NumPy broadcasting do the replication on the fly. That gives two benefits: lower workspace memory use and a performance boost. At the end, the nansum calculation can also be replaced with a simplified version. With all of that in mind, here's one modified approach -
def find_pupil_v2(face, x, y, w, h):
    eye = face[x:x+w,y:y+h]
    eye_len = np.arange(eye.shape[0])
    N = eye_len.size**2
    eye_len_diff = eye_len[:,None] - eye_len
    Dlen = np.sqrt(2*((eye_len_diff)**2))
    Dxy0 = eye_len_diff/Dlen
    Gx0,Gy0 = np.gradient(eye)
    Gmagn = np.sqrt(Gx0**2+Gy0**2)
    Gx,Gy = [Gx0/Gmagn,Gy0/Gmagn] #normalized
    B0 = Gy[:,:,None]*Dxy0[:,None,:]
    C0 = Gx[:,None,:]*Dxy0
    X = ((C0.transpose(1,0,2)[:,None,:,:]+B0[:,:,None,:]).reshape(N,N))**2
    eye1 = cv2.bitwise_not(cv2.GaussianBlur(eye,(5,5),0.005*eye.shape[1]))
    C = (np.nansum(X,0)*eye1.ravel()/eye1.size).reshape(eye1.shape)
    return np.unravel_index(C.argmax(), C.shape)
There's one repeat still left in it at Dxy. It might be possible to avoid that step and feed Dxy0 directly into the step that uses Dxy to give us X, but I haven't worked through it. Everything else has been converted to broadcasting!
Runtime test and output verification -
In [539]: # Inputs with random elements
...: face = np.random.randint(0,10,(256,256)).astype('uint8')
...: x = 40
...: y = 60
...: w = 64
...: h = 64
...:
In [540]: find_pupil(face,x,y,w,h)
Out[540]: (32, 63)
In [541]: find_pupil_v2(face,x,y,w,h)
Out[541]: (32, 63)
In [542]: %timeit find_pupil(face,x,y,w,h)
1 loops, best of 3: 4.15 s per loop
In [543]: %timeit find_pupil_v2(face,x,y,w,h)
1 loops, best of 3: 529 ms per loop
It seems we are getting close to 8x speedup!

Image Based Visual Servoing algorithm in MATLAB

I was trying to implement the IBVS algorithm (the one explained in the Introduction here) in MATLAB myself, but I am facing the following problem: the algorithm seems to work only in cases where the camera does not have to change its orientation with respect to the world frame. For example, if I just try to make one vertex of the initial (almost) square move closer to its opposite vertex, the algorithm does not work, as can be seen in the following image:
The red x are the desired projections, the blue circles are the initial ones, and the green ones are those produced by my algorithm.
Also, the errors are not exponentially decreasing as they should.
What am I doing wrong? I am attaching my MATLAB code, which is fully runnable. If anyone could take a look, I would be really grateful. I took out the code that performed the plotting; I hope it is more readable now.
Visual servoing has to be performed with at least 4 target points, because otherwise the problem has no unique solution. If you are willing to help, I would suggest you take a look at the calc_Rotation_matrix() function to check that the rotation matrix is properly calculated, then verify that the line ds = vc; in euler_ode is correct. The camera orientation is expressed in Euler angles according to this convention. Finally, one could check whether the interaction matrix L is properly calculated.
function VisualServo()
global A3D B3D C3D D3D A B C D Ad Bd Cd Dd
%coordinates of the 4 points wrt camera frame
A3D = [-0.2633;0.27547;0.8956];
B3D = [0.2863;-0.2749;0.8937];
C3D = [-0.2637;-0.2746;0.8977];
D3D = [0.2866;0.2751;0.8916];
%initial projections (computed here only to show their relation with the desired ones)
A=A3D(1:2)/A3D(3);
B=B3D(1:2)/B3D(3);
C=C3D(1:2)/C3D(3);
D=D3D(1:2)/D3D(3);
%initial camera position and orientation
%orientation is expressed in Euler angles (X-Y-Z around the inertial frame
%of reference)
cam=[0;0;0;0;0;0];
%desired projections
Ad=A+[0.1;0];
Bd=B;
Cd=C+[0.1;0];
Dd=D;
t0 = 0;
tf = 50;
s0 = cam;
%time step
dt=0.01;
t = euler_ode(t0, tf, dt, s0);
end
function ts = euler_ode(t0,tf,dt,s0)
global A3D B3D C3D D3D Ad Bd Cd Dd
s = s0;
ts=[];
for t=t0:dt:tf
    ts(end+1)=t;
    cam = s;
    % rotation matrix R_WCS_CCS
    R = calc_Rotation_matrix(cam(4),cam(5),cam(6));
    r = cam(1:3);
    % 3D coordinates of the 4 points wrt the NEW camera frame
    A3D_cam = R'*(A3D-r);
    B3D_cam = R'*(B3D-r);
    C3D_cam = R'*(C3D-r);
    D3D_cam = R'*(D3D-r);
    % NEW projections
    A=A3D_cam(1:2)/A3D_cam(3);
    B=B3D_cam(1:2)/B3D_cam(3);
    C=C3D_cam(1:2)/C3D_cam(3);
    D=D3D_cam(1:2)/D3D_cam(3);
    % computing the L matrices
    L1 = L_matrix(A(1),A(2),A3D_cam(3));
    L2 = L_matrix(B(1),B(2),B3D_cam(3));
    L3 = L_matrix(C(1),C(2),C3D_cam(3));
    L4 = L_matrix(D(1),D(2),D3D_cam(3));
    L = [L1;L2;L3;L4];
    % updating the projection errors
    e = [A-Ad;B-Bd;C-Cd;D-Dd];
    % compute camera velocity
    vc = -0.5*pinv(L)*e;
    % change of the camera position and orientation
    ds = vc;
    % update camera position and orientation
    s = s + ds*dt;
end
ts(end+1)=tf+dt;
end
function R = calc_Rotation_matrix(theta_x, theta_y, theta_z)
Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)];
Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)];
Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1];
R = Rx*Ry*Rz;
end
function L = L_matrix(x,y,z)
L = [-1/z,0,x/z,x*y,-(1+x^2),y;
0,-1/z,y/z,1+y^2,-x*y,-x];
end
Cases that work:
Ad=2*A;
Bd=2*B;
Cd=2*C;
Dd=2*D;
Ad=A+1;
Bd=B+1;
Cd=C+1;
Dd=D+1;
Ad=2*A+1;
Bd=2*B+1;
Cd=2*C+1;
Dd=2*D+1;
Cases that do NOT work:
Rotation by 90 degrees and zoom out (zoom out alone works, but I am doing it here for better visualization)
Ad=2*D;
Bd=2*C;
Cd=2*A;
Dd=2*B;
Your problem comes from the way you move the camera from the resulting visual servoing velocity. Rather than
cam = cam + vc*dt;
you should compute the new camera position using the exponential map
cam = cam*expm(vc*dt)
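As written, that line is schematic (cam above is a 6-vector, while expm acts on a matrix). One concrete reading, as a sketch of my own, assumes the pose is kept as a 4x4 homogeneous transform T and vc = [v; omega] is the camera twist:
function T = update_pose(T, vc, dt)
% Exponential-map pose update on SE(3) (sketch; my interpretation of the advice above)
w = vc(4:6); % angular velocity part of the twist
S = [0 -w(3) w(2); w(3) 0 -w(1); -w(2) w(1) 0]; % skew-symmetric matrix of w
xi = [S vc(1:3); zeros(1,4)]; % 4x4 twist matrix
T = T*expm(xi*dt); % integrate the twist over one time step
end
The orientation state is then carried by T rather than by the three Euler angles.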

rotary encoder glitch in arduino

I hope you can help me with this Arduino problem.
I have a rotary encoder hooked up to a servo, and I want to achieve a linear relationship of low speed = low angle and high speed = high angle.
It is generally working; this is a snapshot from my serial monitor:
spd = 0
input angle = 0
.
spd = 2863304
input angle = 7023
.
spd = 34
input angle = 10
.
spd = 0
input angle = 0
.
My concern here is the sudden jump to a number such as 2863304, which happens frequently. I'm thinking it is a glitch and I wish to remove it. I would very much appreciate it if someone could provide some advice!
