I'm looking for a way to fit my response data (image shown below), i.e. using f(t) = square(2*pi*f*t) + 1 as the model for my raw data. However, cftool doesn't recognize this kind of function. Any help would be appreciated, thanks!
The function below might allow fitting the data. It is continuous, but not differentiable everywhere. The steps tend to fall off to the right, while the OP's data do not, so that might require some extra work. Moreover, the steps have to be equidistant, which, however, seems to be the case here.
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np

def f(x, a, b):
    # test function (that would be the one to fit, actually plus a shift of the edge position)
    return a + b * x**3

def f_step(x, l, func, args=()):
    # map x onto [-1, 1] within its period of length l
    y = (x - l / 2.) % l - l / 2.
    y = y / l * 2.
    # index and centre of the step that x falls into
    p = np.floor((x - l / 2.) / l) + 1
    centre = p * l
    left = centre - l / 2.
    right = centre + l / 2.
    fL = func(left, *args)
    fR = func(right, *args)
    fC = func(centre, *args)
    # flat plateau at fC with sharp transitions towards fL and fR at the edges
    out = fC + sharp(y, fL - fC, fR - fC, 5)
    return out

def sharp(x, a, b, p, epsilon=1e-1):
    out = a * (1. / abs(x + 1 + epsilon)**p - (2 + epsilon)**(-p)) / (epsilon**(-p) - (2 + epsilon)**(-p))
    out += b * (1. / abs(x - 1 - epsilon)**p - (2 + epsilon)**(-p)) / (epsilon**(-p) - (2 + epsilon)**(-p))
    return out

l = 0.57
xList = np.linspace(-1, 1.75, 500)
yList = [f_step(x, l, f, args=(2, -.3)) for x in xList]

fig1 = plt.figure(1)
ax = fig1.add_subplot(1, 1, 1)
ax.plot(xList, yList)
ax.plot(xList, f(xList, 2, -.3))
plt.show()
Looks like:
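If the goal is an actual least-squares fit of measured data rather than just plotting the model, something along these lines could work (a sketch only: it assumes SciPy is available and that the measured response is already loaded into x_data and y_data, which are not part of the original post):

from scipy.optimize import curve_fit

def model(x, l, a, b):
    # wrap f_step so curve_fit sees a flat parameter list (l, a, b)
    return np.array([f_step(xi, l, f, args=(a, b)) for xi in x])

# x_data, y_data = ...  # the measured response
# popt, pcov = curve_fit(model, x_data, y_data, p0=(0.57, 2.0, -0.3))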
I am new to Fortran and I am trying to write code that uses random data instead of the binned data in x, y, z shown in my sample code.
      implicit real*8(a-h,o-z)
      dimension rm(4),rp1(4),rip1(4),rp2(4),rip2(4),rp3(4),rip3(4),
     d          rn(4),u1(4),u2(4),u3(4)
      do ix = 1000, 25000, 1000
        x = ix/1000000.
        do iy = 1000, 25000, 1000
          y = iy/100000.
          do iz = 1, 1000, 25
            z = iz/10000.
            a = (x**2+y**2)/z
            b = x*y*z
            c = x*y**2+y*z**2+z*x**2
            fr = (a*b)/c
            if(fr.ge.0.05.and.fr.le.23)then
              write(40,*)x,y,x,fr
            endif
          end do
        end do
      end do
      stop
      end
How can I convert such code with binned data into code that uses random draws?
As an example, binned data here means the possible fixed values of x are {1000/1000000., 2000/1000000., ..., 25000/1000000.}, i.e. 25 possible values in the range {.001, .025}, but they are not random values.
In the case of random values, 25 points would instead be drawn randomly from the range {.001, .025}.
This is my assumption about how the analysis would be done with random draws (I was not previously familiar with this).
Something like
ian@eris:~/work/stack$ cat data.f90
Program random_data

  Use, Intrinsic :: iso_fortran_env, Only : wp => real64

  Implicit None

  Real( wp ), Parameter :: min_rand = 0.001_wp
  Real( wp ), Parameter :: max_rand = 0.025_wp

  Integer, Parameter :: n_samples = 25

  Real( wp ) :: x, y, z
  Real( wp ) :: a, b, c
  Real( wp ) :: fr

  Integer :: i_sample

  Do i_sample = 1, n_samples
     Call Random_number( x )
     Call Random_number( y )
     Call Random_number( z )
     x = x * ( max_rand - min_rand ) + min_rand
     y = y * ( max_rand - min_rand ) + min_rand
     z = z * ( max_rand - min_rand ) + min_rand
     a = ( x**2 + y**2 ) / z
     b = x * y * z
     c = x * y**2 + y * z**2 + z * x**2
     fr = ( a * b ) / c
     If( fr >= 0.05_wp .And. fr <= 23.0_wp )Then
        Write( 40, * ) x, y, x, fr
     Endif
  End Do

End Program random_data
ian@eris:~/work/stack$ gfortran-10 -Wall -Wextra -fcheck=all -std=f2008 -g -finit-real=snan data.f90
ian@eris:~/work/stack$ ./a.out; more fort.40
more: stat of fort.40 failed: No such file or directory
Unfortunately, none of the random numbers in this run produced an output that lay in the desired range; however, I did test it with 2500 samples and then a couple did.
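If it helps to prototype the same idea outside Fortran first, here is a rough Python sketch (my own addition, not part of the original answer; it assumes uniform draws over [0.001, 0.025] and the same acceptance window):

import random

MIN_RAND, MAX_RAND = 0.001, 0.025
N_SAMPLES = 25

with open("fort.40", "w") as out:
    for _ in range(N_SAMPLES):
        x = random.uniform(MIN_RAND, MAX_RAND)
        y = random.uniform(MIN_RAND, MAX_RAND)
        z = random.uniform(MIN_RAND, MAX_RAND)
        a = (x**2 + y**2) / z
        b = x * y * z
        c = x * y**2 + y * z**2 + z * x**2
        fr = (a * b) / c
        if 0.05 <= fr <= 23.0:
            # the Fortran code writes x twice; z is written here instead
            out.write(f"{x} {y} {z} {fr}\n")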
Can I convert a rotation matrix to a quaternion?
I know how to convert a quaternion to a rotation matrix, but I can't find a way to do the opposite.
I can show the code for converting a quaternion to a rotation matrix, as below.
Example (C++): Quaterniond quat; MatrixXd t; t = quat.matrix();
I want to know how to do the reverse: convert a rotation matrix to a quaternion.
A numerically stable algorithm for converting a direction cosine matrix D into a quaternion q is as follows:
T = D(1,1) + D(2,2) + D(3,3)
M = max( D(1,1), D(2,2), D(3,3), T )
qmax = (1/2) * sqrt( 1 - T + 2*M )

if( M == D(1,1) )
    qx = qmax
    qy = ( D(1,2) + D(2,1) ) / ( 4*qmax )
    qz = ( D(1,3) + D(3,1) ) / ( 4*qmax )
    qw = ±( D(3,2) - D(2,3) ) / ( 4*qmax )
elseif( M == D(2,2) )
    qx = ( D(1,2) + D(2,1) ) / ( 4*qmax )
    qy = qmax
    qz = ( D(2,3) + D(3,2) ) / ( 4*qmax )
    qw = ±( D(1,3) - D(3,1) ) / ( 4*qmax )
elseif( M == D(3,3) )
    qx = ( D(1,3) + D(3,1) ) / ( 4*qmax )
    qy = ( D(2,3) + D(3,2) ) / ( 4*qmax )
    qz = qmax
    qw = ±( D(2,1) - D(1,2) ) / ( 4*qmax )
else
    qx = ±( D(3,2) - D(2,3) ) / ( 4*qmax )
    qy = ±( D(1,3) - D(3,1) ) / ( 4*qmax )
    qz = ±( D(2,1) - D(1,2) ) / ( 4*qmax )
    qw = qmax
endif
Note that there is a sign ambiguity inherent in quaternions. The algorithm above arbitrarily picks the sign of the largest element qmax to be positive, but it is equally valid to pick this sign as negative (i.e., essentially flipping all of the signs of the result). It is up to the user to determine which is the more appropriate selection based on the application.
The ± selection is made based on the quaternion convention you are using:
Choose + for Hamilton Left Chain Convention or JPL Right Chain Convention
Choose - for Hamilton Right Chain Convention or JPL Left Chain Convention
Hamilton Convention means the quaternion elements i, j, k behave in a right-handed manner for multiplication (like cross products):
i * j = k , j * k = i , k * i = j
JPL Convention means the quaternion elements i, j, k behave in a left-handed manner for multiplication (negative of cross products):
i * j = -k , j * k = -i , k * i = -j
Right Chain means the quaternion rotation operation on a vector has the unmodified quaternion on the right side:
D * v1 = v2 = q^-1 * v1 * q
Left Chain means the quaternion rotation operation on a vector has the unmodified quaternion on the left side:
D * v1 = v2 = q * v1 * q^-1
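As a minimal sketch of the branch-selection algorithm above (in NumPy, with 0-based indexing so D[0, 0] corresponds to D(1,1), and with the "+" sign choice, i.e. Hamilton Left Chain or JPL Right Chain):

import numpy as np

def dcm_to_quaternion(D):
    # returns (qx, qy, qz, qw); the largest of the four candidates is computed first for stability
    T = D[0, 0] + D[1, 1] + D[2, 2]
    M = max(D[0, 0], D[1, 1], D[2, 2], T)
    qmax = 0.5 * np.sqrt(1.0 - T + 2.0 * M)
    s = 1.0 / (4.0 * qmax)
    if M == D[0, 0]:
        q = [qmax, (D[0, 1] + D[1, 0]) * s, (D[0, 2] + D[2, 0]) * s, (D[2, 1] - D[1, 2]) * s]
    elif M == D[1, 1]:
        q = [(D[0, 1] + D[1, 0]) * s, qmax, (D[1, 2] + D[2, 1]) * s, (D[0, 2] - D[2, 0]) * s]
    elif M == D[2, 2]:
        q = [(D[0, 2] + D[2, 0]) * s, (D[1, 2] + D[2, 1]) * s, qmax, (D[1, 0] - D[0, 1]) * s]
    else:
        q = [(D[2, 1] - D[1, 2]) * s, (D[0, 2] - D[2, 0]) * s, (D[1, 0] - D[0, 1]) * s, qmax]
    return np.array(q)  # (x, y, z, w) ordering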
For completeness, here is the algorithm for the other direction, converting a quaternion to a direction cosine matrix:
D = (qw^2 - dot(qv,qv))*I3 + 2*qv*qv^T ± 2*qw*Skew(qv)
where ^T means transpose (for outer product in that term) and
qv = [ qx ]
     [ qy ]
     [ qz ]

I3 = [ 1 0 0 ]
     [ 0 1 0 ]
     [ 0 0 1 ]

Skew(qv) = [  0  -qz  qy ]
           [  qz   0  -qx ]
           [ -qy  qx   0  ]
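Here is a corresponding Python sketch of that formula (again my own transcription, just for illustration; the sign argument selects the ± term):

import numpy as np

def quaternion_to_dcm(q, sign=1.0):
    # q = (qx, qy, qz, qw); sign = +1 or -1 selects the ± in the formula above
    qv = np.array(q[:3], dtype=float).reshape(3, 1)
    qw = float(q[3])
    skew = np.array([[0.0, -q[2], q[1]],
                     [q[2], 0.0, -q[0]],
                     [-q[1], q[0], 0.0]])
    return (qw**2 - float(qv.T @ qv)) * np.eye(3) + 2.0 * (qv @ qv.T) + sign * 2.0 * qw * skew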
I am trying to implement the intensity-based image alignment algorithm from the paper Efficient, Robust, and Fast Global Motion Estimation for Video Coding in MATLAB. The problem is that the Levenberg-Marquardt (LM) optimization doesn't work properly: the energy function cannot approach even a local minimum because the update term computed from the LM equations is tiny. I've been searching for a week but still could not figure out the problem. Here is my implementation.
I have two gray images I and T of equal size (W x H) and would like to estimate a rigid transformation F (scale + rotation + translation) that aligns I onto T. This can be obtained by minimizing an energy function of the form E(a) = (1/(2N)) * sum_{x,y} [ I(F(x, y; a)) - T(x, y) ]^2, where N is the number of pixels in the overlap region and a = (a_1, ..., a_4) are the parameters of F.
Following LM, I derive the gradient of E with respect to each parameter a_i (i = 1..4) as in equation (5) of the paper, and the terms of the (approximate) Hessian matrix of E as in equation (4).
From these equations, I derive the MATLAB procedure below to compute the beta and alpha terms.
%========== 1. Compute gradient of T using Sobel filter
gx = [-1 -2 -1; 0 0 0; 1 2 1] / 4;
gy = [-1 0 1; -2 0 2; -1 0 1] / 4;
Tx = conv2( T, gx, 'same' );
Ty = conv2( T, gy, 'same' );
%========== 2. Warp I using F to compute diff_image = I(x, y) - T(F(x,y,a)). F was previously initialized by an identity matrix of size 3x3
tform = affine2d( F );
I_warp = imwarp( I, tform, 'linear', 'OutputView', imref2d( size( T ) ) );
diff_img = I_warp - T;
% create a mask for the overlapping region between the two aligned images
mask = ones( size( T ) );
mask_warp = imwarp( mask, tform, 'nearest', 'OutputView', imref2d( size( T ) ) );
overlap_area = numel( mask_warp );
diff_img = diff_img .* mask;
error = sum( sum( diff_img ) ) / 2 / overlap_area;
% create x, y grids
[y, x] = ndgrid( 0 : h-1, 0 : w-1 );
x_warp = imwarp( x, tform, 'nearest', 'OutputView', imref2d( size(T) ) );
y_warp = imwarp( y, tform, 'nearest', 'OutputView', imref2d( size(T) ) );
%======== compute beta_i = - dE/da_i (i=1,4)
sx = Tx .* diff_img;
sy = Ty .* diff_img;
beta_1 = sum( sum( x_warp .* sx + y_warp .* sy ) );
beta_2 = sum( sum( y_warp .* sx - x_warp .* sy ) );
beta_3 = sum( sum( sx ) );
beta_4 = sum( sum( sy ) );
beta = -[beta_1; beta_2; beta_3; beta_4] / overlap_area;
%======= compute alpha_ij = (dE/da_i) * (dE/da_j) i,j = 1,4
Sxx = (Tx .^ 2) .* mask_warp;
Syy = (Ty .^ 2) .* mask_warp;
Sxy = (Tx.* Ty) .* mask_warp;
xx = x_warp .^2;
yy = y_warp .^2;
xy = x_warp .* y_warp;
alpha_11 = sum( sum( xx .* Sxx + 2 * xy .* Sxy + yy .* Syy ) );
alpha_12 = sum( sum( (yy - xx) .* Sxy + xy .* (Sxx - Syy) ) );
alpha_13 = sum( sum( x .* Sxx + y .* Sxy ) );
alpha_14 = sum( sum( x .* Sxy + y .* Syy ) );
alpha_22 = sum( sum( yy .* Sxx - 2 * xy .* Sxy + xx .* Syy ) );
alpha_23 = sum( sum( y .* Sxx - x .* Sxy ) );
alpha_24 = sum( sum( y .* Sxy - x .* Syy ) );
alpha_33 = sum( sum( Sxx ) );
alpha_34 = sum( sum( Sxy ) );
alpha_44 = sum( sum( Syy ) );
alpha = [alpha_11 alpha_12 alpha_13 alpha_14;
alpha_12 alpha_22 alpha_23 alpha_24;
alpha_13 alpha_23 alpha_33 alpha_34;
alpha_14 alpha_24 alpha_34 alpha_44] / overlap_area;
% lamda was previously initialized to 0.0001
for i = 1 : 4
alpha(i, i) = alpha(i, i) * (lamda + 1);
end
%======== Find the update term: delta_a = alpha^(-1) * beta
delta_a = pinv( alpha ) * beta
% Or we can solve for delta_a using SVD
%[U, S, V] = svd( alpha );
%inv_S = S;
%for ii = 1 : size(S, 1)
% if S(ii, ii)
% inv_S(ii, ii) = 1 / S(ii, ii);
% end
%end
%delta_a = V * inv_S * U' * beta;
%======== Update a_i and new error value
a = [ F(1, 1)-1;
F(2, 1);
F(3, 1);
F(3, 2)];
new_a = a + delta_a;
new_F = [new_a(1)+1 -new_a(2) 0;
new_a(2) new_a(1)+1 0;
new_a(3) new_a(4) 1];
tform = affine2d( new_F );
mask_warp = imwarp( mask, tform, 'nearest', 'OutputView', imref2d(size(T)));
new_I_warp = imwarp( I, tform, 'linear', 'OutputView', imref2d(size(T)));
new_diff_img = (new_I_warp - T) .* mask_warp;
new_error = sum( sum( new_diff_img .^2 ) ) / 2/ sum( sum( mask_warp ) );
if( new_error > error )
    lamda = lamda * 10;
elseif new_error < error
    lamda = lamda / 10;
end
The above process is repeated until the iteration count reaches the limit or the new_error value falls below a predetermined threshold. However, after each iteration the update terms are so small that the parameters barely move. For example,
delta a = 1.0e-04 * [0.0011 -0.0002 0.2186 0.2079]
Does anybody know how to fix this? Any help would be highly appreciated.
I've read the HSL to RGB algorithm on Wikipedia. I understand it and can convert using it. However, I came upon another algorithm here, and the math is "explained" here.
The algorithm is:
//H, S and L input range = 0 ÷ 1.0
//R, G and B output range = 0 ÷ 255

if ( S == 0 )
{
    R = L * 255
    G = L * 255
    B = L * 255
}
else
{
    if ( L < 0.5 ) var_2 = L * ( 1 + S )
    else           var_2 = ( L + S ) - ( S * L )

    var_1 = 2 * L - var_2

    R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) )
    G = 255 * Hue_2_RGB( var_1, var_2, H )
    B = 255 * Hue_2_RGB( var_1, var_2, H - ( 1 / 3 ) )
}

Hue_2_RGB( v1, v2, vH )   //Function Hue_2_RGB
{
    if ( vH < 0 ) vH += 1
    if ( vH > 1 ) vH -= 1
    if ( ( 6 * vH ) < 1 ) return ( v1 + ( v2 - v1 ) * 6 * vH )
    if ( ( 2 * vH ) < 1 ) return ( v2 )
    if ( ( 3 * vH ) < 2 ) return ( v1 + ( v2 - v1 ) * ( ( 2 / 3 ) - vH ) * 6 )
    return ( v1 )
}
I've tried following the math but I can't figure it out. How does it work?
The first part, if ( S == 0 ), handles the case where there is no Saturation, meaning it's a shade of grey. You take the Luminance, set R, G and B to that grey-scale level, and you are done.
If this is not the case, then we need to perform the tricky part:
We shall use var_1 and var_2 as temporary values, only for making the code more readable.
So, if Luminance is smaller than 0.5 (50%), then var_2 = Luminance x (1.0 + Saturation).
If Luminance is equal to or larger than 0.5 (50%), then var_2 = Luminance + Saturation - Luminance x Saturation. That's the else part of:
if ( L < 0.5 ) var_2 = L * ( 1 + S )
else var_2 = ( L + S ) - ( S * L )
Then we do:
var_1 = 2 x Luminance - var_2
which is going to be useful later.
Now we need three more temporary values, one per color channel, as far as Hue is concerned. For Red, we add 0.333 to it (H + (1/3) in the code), for Green we do nothing, and for Blue we subtract 0.333 from it (H - (1/3)). That temporary value is called vH (value Hue) in Hue_2_RGB().
Now each color channel is treated separately, hence the three function calls. There are four formulas that can be applied to a color channel, and every color channel should "use" exactly one of them.
Which one? It depends on the value of Hue (vH).
By the way, the value of vH must be normalized: if it's negative we add 1, and if it's greater than 1 we subtract 1, so that vH lies in [0, 1].
If 6 x vH is smaller than 1: Color channel = var_1 + (var_2 - var_1) x 6 x vH
If 2 x vH is smaller than 1: Color channel = var_2
If 3 x vH is smaller than 2: Color channel = var_1 + (var_2 - var_1) x (0.666 - vH) x 6
Else: Color channel = var_1
For R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) ), the color channel is Red, named R in the code.
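A direct Python transcription of the pseudocode above may make the flow easier to follow (assuming H, S, L are already normalized to [0, 1]; it returns R, G, B in 0..255):

def hue_2_rgb(v1, v2, vh):
    # pick exactly one of the four formulas, based on vh
    if vh < 0:
        vh += 1
    if vh > 1:
        vh -= 1
    if 6 * vh < 1:
        return v1 + (v2 - v1) * 6 * vh
    if 2 * vh < 1:
        return v2
    if 3 * vh < 2:
        return v1 + (v2 - v1) * (2 / 3 - vh) * 6
    return v1

def hsl_to_rgb(h, s, l):
    if s == 0:
        r = g = b = l * 255          # no saturation: a shade of grey
    else:
        var_2 = l * (1 + s) if l < 0.5 else (l + s) - s * l
        var_1 = 2 * l - var_2
        r = 255 * hue_2_rgb(var_1, var_2, h + 1 / 3)
        g = 255 * hue_2_rgb(var_1, var_2, h)
        b = 255 * hue_2_rgb(var_1, var_2, h - 1 / 3)
    return r, g, b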
This question describes the Gabor filter family and its application pretty well. However, nothing is said about the wavelength (spatial frequency) of the filter. The Gabor wavelets are created in the following for loop:
for v = 0 : 4
    for u = 1 : 8
        GW = GaborWavelet( R, C, Kmax, f, u, v, Delt2 ); % Create the Gabor wavelets
        figure( 2 );
        subplot( 5, 8, v * 8 + u ), imshow( real( GW ), [] ); % Show the real part of Gabor wavelets
        GW_ALL( v * 8 + u, : ) = GW( : );
    end
    figure( 3 );
    subplot( 1, 5, v + 1 ), imshow( abs( GW ), [] ); % Show the magnitude of Gabor wavelets
end
I know that the second loop variable is the orientation, in steps of pi/8. However, I don't know how the first loop variable is linked to the spatial frequency (wavelength) of the filter in [pixels/cycle]. Can anyone help?
I finally found the answer. The GaborWavelet function is defined as follows:
function GW = GaborWavelet( R, C, Kmax, f, u, v, Delt2 )
    k = ( Kmax / ( f ^ v ) ) * exp( 1i * u * pi / 8 ); % wave vector
    kn2 = ( abs( k ) ) ^ 2;
    GW = zeros( R, C );
    for m = -R/2 + 1 : R/2
        for n = -C/2 + 1 : C/2
            GW( m + R/2, n + C/2 ) = ( kn2 / Delt2 ) * exp( -0.5 * kn2 * ( m ^ 2 + n ^ 2 ) / Delt2 ) * ( exp( 1i * ( real( k ) * m + imag( k ) * n ) ) - exp( -0.5 * Delt2 ) );
        end
    end
Here Kmax is the maximum frequency, f is the spacing factor, and v is the resolution (scale) index. The spacing factor f is usually taken as sqrt(2).
Based on this paper, k = 2*pi*f*exp(i*ϑ), and in the code Kmax = fmax*2*pi, which is not stated explicitly but is the key to finding the wavelength of the filter. I also read this implementation and noticed that the wavelength can easily be found via f = 1/lambda, where lambda is the wavelength of the sinusoid.
So, for example, if Kmax = pi/2 and v = 0, then k = Kmax*exp(1i*u*pi/8), and with the formula above, lambda = 2*pi/Kmax = 4 [pixels/cycle].
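A quick sketch to make the relation explicit (assuming Kmax = pi/2 and f = sqrt(2) as above; lambda = 2*pi/|k| does not depend on the orientation index u):

import numpy as np

Kmax = np.pi / 2
f = np.sqrt(2)
for v in range(5):
    k_mag = Kmax / f**v               # magnitude of the wave vector at scale v
    lam = 2 * np.pi / k_mag           # wavelength of the carrier in pixels/cycle
    print(f"v = {v}: |k| = {k_mag:.4f}, lambda = {lam:.2f} pixels/cycle")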