How to calculate the following Jacobian on Lie group SO(3)? - bspline

Hi everyone, I'm having trouble computing a Jacobian from the following paper (see https://openaccess.thecvf.com/content_CVPR_2020/papers/Sommer_Efficient_Derivative_Computation_for_Cumulative_B-Splines_on_Lie_Groups_CVPR_2020_paper.pdf):
The partial derivative is:
I've tried to calculate this myself, but my result does not match the code contained in so3_spline.h of basalt:
J_helper = coeff[i + 1] * res.matrix() * Jl_k_delta * Jl_inv_delta * p0.inverse().matrix();
Does anyone know the reasons?
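For reference, the left Jacobian of SO(3) and its inverse (the quantities that appear as Jl_k_delta and Jl_inv_delta in the basalt line above) have well-known closed forms. Here is a minimal numerical sketch of both; the function names are mine, not basalt's, and this only checks the building blocks, not the full spline derivative:

```python
import numpy as np

def hat(phi):
    """Skew-symmetric matrix [phi]_x such that hat(a) @ b == cross(a, b)."""
    x, y, z = phi
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def left_jacobian(phi):
    """SO(3) left Jacobian: I + (1-cos t)/t^2 K + (t-sin t)/t^3 K^2, K = hat(phi)."""
    t = np.linalg.norm(phi)
    K = hat(phi)
    return (np.eye(3)
            + (1 - np.cos(t)) / t**2 * K
            + (t - np.sin(t)) / t**3 * (K @ K))

def left_jacobian_inv(phi):
    """Closed-form inverse: I - K/2 + (1/t^2 - (1+cos t)/(2 t sin t)) K^2."""
    t = np.linalg.norm(phi)
    K = hat(phi)
    c = 1.0 / t**2 - (1 + np.cos(t)) / (2 * t * np.sin(t))
    return np.eye(3) - 0.5 * K + c * (K @ K)

phi = np.array([0.3, -0.2, 0.5])
check = left_jacobian(phi) @ left_jacobian_inv(phi)  # should be ~identity
```

(The small-angle case t → 0 needs a Taylor-series fallback in practice; it is omitted here for brevity.)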

Related

How can I solve the limit of $n \cdot log_2(n)$ vs. $n^{log_3(4)}$?

I want to determine by hand which of these two functions ($n \cdot log_2(n)$ vs. $n^{log_3(4)}$) has the higher asymptotic growth, without using a calculator or any software.
My approach so far:
$$\lim_{n\to\infty} \frac{n\,\log_2(n)}{n^{\log_3(4)}}$$
Now apply L'Hôpital's rule and differentiate numerator and denominator:
$$\frac{\log_2(n) + \frac{1}{\ln(2)\,n}}{\log_3(4)\, n^{\log_3(4)-1}}$$
Now apply L'Hôpital's rule again:
$$\frac{\frac{1}{\ln(2)\,n} + \frac{1}{\ln(2)\,n}}{\frac{1}{\ln(3)}\cdot 4\, \cdot n^{\log_3(4)-1} + (\log_3(4)-1)\, n^{\log_3(4)-2}\cdot \log_3(4)}$$
My problem: calculating it this way leads me to a wrong result. Does anyone have an idea how to solve this correctly?
Your first derivative and your second application of L'Hôpital's rule are incorrect.
You start with:
f(n)=n*log2(n)
g(n)=n^(log3(4))
This gives:
f'(n)=log2(n) + n * (1/ln(2)) * n^(-1)
=log2(n) + 1/ln(2)
g'(n)=log3(4) * n^(log3(4)-1)
This gives:
f''(n)=(1/ln(2)) * n^(-1)
g''(n)=log3(4) * (log3(4)-1) * n^(log3(4)-2)
With the error in your first derivative you would have gotten f''(n) = (1/ln(2)) * n^(-1) - (1/ln(2)) * n^(-2), which still allows you to factor out n and leads to the same final result.
Now that you have n in all of it, you can factor that out:
f''(n)/g''(n) = 1/[ln(2) * log3(4) * (log3(4)-1) * n^(log3(4)-2+1)]
= 1/[ln(2) * log3(4) * (log3(4)-1)] * n^(1-log3(4))
Which now can be represented as:
k * n^(1-log3(4)) where k>0.
And the limit of this as n approaches infinity is 0. That means n^(log3(4)) grows asymptotically faster than n * log2(n).
Alternatively, you can simplify first.
Note that both have a factor of n which can be removed, so instead you can have:
f(n)=log2(n)
g(n)=n^(log3(4)-1)
f'(n)=(1/ln(2)) * n^(-1)
g'(n)=(log3(4)-1) * n^(log3(4)-2)
f'(n)/g'(n) = (1/ln(2)) * n^(-1-log3(4)+2)/(log3(4)-1)
=(1/ln(2)) * n^(1-log3(4))/(log3(4)-1)
Again, the limit is 0, meaning that n^(log3(4)) grows asymptotically faster.
The only extra fact you need is that log3(4) is greater than 1, since 4 is greater than 3.
That means (log3(4)-1)>0 and (1-log3(4))<0.
Also remember that the correct result may not be what you expect: the two functions cross at roughly n ≈ 30 000.
Also, I'm not sure if this belongs here or on the math site.
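Both claims (the limit is 0, and the crossover is near n ≈ 30 000) are easy to sanity-check numerically; a short sketch:

```python
import math

def f(n):
    return n * math.log2(n)

def g(n):
    return n ** math.log(4, 3)  # n^(log_3 4), with log_3(4) ~ 1.262

# f dominates for small n, g dominates for large n, crossover near 30 000
small = f(1000) / g(1000)       # ratio > 1
cross = f(30000) / g(30000)     # ratio ~ 1
large = f(10**9) / g(10**9)     # ratio < 1
```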

sympy regression example, solve after partial derivative

I'm trying to solve a matrix-calculus problem with sympy and get stuck at the solver after differentiating with respect to a vector.
As a short example, let's take ordinary least squares regression,
i.e. the sum of squared differences between the target y and the prediction y_hat, where the prediction y_hat = X.T * w is a linear combination and thus a matrix-vector product.
We therefore want to minimize the least-squares error with respect to the weight vector w.
By hand we can derive that from:
Err(w) = norm(y - X.T * w)^2
follows after differentiation, setting to zero and solving for w
w_opt = (X*X.T)^-1 * X * y
How can we derive w_opt using sympy?
My rather naïve approach was:
from sympy import *
# setup matrix and vectors
X = MatrixSymbol('X',3,5)
y = MatrixSymbol('y',5,1)
w = MatrixSymbol('w',3,1)
# define error function
E = (y - X.T*w).T * (y - X.T*w)
# derivate
Edw = [E.diff(wi) for wi in w]
# solve for w
solve(Edw,w)
At solve(Edw,w), however, I get the attribute error: 'Mul' object has no attribute 'shape'.
I also tried calling E.as_explicit() before differentiating. This, however, resulted in the attribute error: 'str' object has no attribute 'is_Piecewise'.
I know from the hand calculation that after differentiation the result should be -2*X*y + 2*X*X.T*w. The derivative in Edw is written down but not evaluated. How can I verify this intermediate step? My first guess was the .doit() method, which unfortunately is not defined in this case.
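One way around the MatrixSymbol limitations is to work with an explicit matrix and scalar weight symbols, so that diff and solve operate on ordinary expressions. A sketch with a small numeric X and y (values chosen arbitrarily for illustration):

```python
import sympy as sp

# Explicit (numeric) design matrix and target; only the weights stay symbolic.
X = sp.Matrix([[1, 2, 0, 1, 3],
               [0, 1, 1, 2, 1],
               [2, 0, 1, 1, 0]])       # 3 x 5
y = sp.Matrix([1, 2, 3, 4, 5])         # 5 x 1
w = sp.Matrix(sp.symbols('w0 w1 w2'))  # 3 x 1 weight vector

# Scalar error Err(w) = ||y - X.T * w||^2
E = ((y - X.T * w).T * (y - X.T * w))[0, 0]

# Differentiate w.r.t. each weight and solve the resulting normal equations
grad = [sp.diff(E, wi) for wi in w]
sol = sp.solve(grad, list(w))
w_opt = sp.Matrix([sol[wi] for wi in w])

# Compare with the closed form w_opt = (X*X.T)^-1 * X * y
w_closed = (X * X.T).inv() * X * y
```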

Evolutionary Optimization Algorithms

I want to maximize the objective function f = profit(x,y) - expense(x,y), subject to 0 <= x, y <= 1, using the strength Pareto evolutionary algorithm (SPEA2). The objective function is non-linear and neither convex nor concave in the decision variables. Can I break the objective function into two parts, i.e. maximize profit(x,y) and minimize expense(x,y), optimize them separately, and then combine them at the end? I don't know if that makes sense; sorry, I am completely new to the field. I would appreciate any help.
Note: in general, profit = income - expense; and what you've asked looks very dodgy (e.g. something = profit - expense = income - 2 * expense), so I'm going to assume you meant "income" everywhere you said "profit".
No, you can't find max. income(x,y) and min. expense(x,y) and combine/optimise them at the end, because you can expect a relationship between income and expense (e.g. as expense increases, income increases).
Also, don't forget that the best way to approach this kind of problem is to expand and simplify the resulting function.
For a very simple example:
income = items * bonus + items * 0.9 * $123.45
expense = bonus * $1 + items * $99.00
profit = income - expense
= (items * bonus + items * 0.9 * $123.45) - (bonus * $1 + items * $99.00)
= items * bonus - bonus * $1 + items * 0.9 * $123.45 - items * $99.00
= (items - 1) * bonus + items * (0.9 * 123.45 - 99.00)
= (items - 1) * bonus + items * 12.105
In other words; if you could find max. income(x,y) and min. expense(x,y) and combine/optimise them at the end, you still wouldn't want to because it's less efficient/slower, and better/faster to just find max. profit(x,y).
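The expand-and-simplify step above is easy to verify numerically; a tiny sketch (function names are mine):

```python
def income(items, bonus):
    return items * bonus + items * 0.9 * 123.45

def expense(items, bonus):
    return bonus * 1 + items * 99.00

def profit(items, bonus):
    return income(items, bonus) - expense(items, bonus)

def profit_simplified(items, bonus):
    # (items - 1) * bonus + items * (0.9 * 123.45 - 99.00)
    return (items - 1) * bonus + items * 12.105

# both forms agree (up to float rounding) on a grid of inputs
diff = max(abs(profit(i, b) - profit_simplified(i, b))
           for i in range(10) for b in range(10))
```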

How can I write an algorithm to solve this formula for closest point to a set of lines in 3d

I am trying to understand how to write an algorithm that solves the formula given at the end of this answer.
I know that simple systems of equations can be solved with matrices: when you have Ax = b, you can solve it as x = A^(-1)b, but this one is a little more complicated for me.
I think I should arrive at a form such as A vec(c) = b, but I have no idea how to deal with the sums and dot products.
Use the last formula in the linked answer. To simplify it, normalize the direction vectors d(i) (to eliminate the denominator):
Sum[i=1..N] (c - a(i) - d(i) * DotProduct(c-a(i), d(i))) = 0
Sum[i=1..N] (c - a(i) -
d(i) * ((c.x-a(i).x) * d(i).x + (c.y-a(i).y) * d(i).y +(c.z-a(i).z) * d(i).z)) = 0
etc
You now have a system of three linear equations. You can solve this system with a simple elimination method (close to the school approach for solving systems of equations).
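Collecting terms, the sum condition becomes [Σ_i (I - d_i d_iᵀ)] c = Σ_i (I - d_i d_iᵀ) a_i, a 3x3 linear system that any solver handles. A sketch using numpy (the function name is mine):

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Solve [sum_i (I - d_i d_i^T)] c = sum_i (I - d_i d_i^T) a_i for c."""
    M = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, dirs):
        d = d / np.linalg.norm(d)        # normalize so the denominator drops out
        P = np.eye(3) - np.outer(d, d)   # projection orthogonal to the line
        M += P
        b += P @ a
    return np.linalg.solve(M, b)

# two lines that intersect at (1, 2, 3): the closest point is the intersection
a1, d1 = np.array([0.0, 2.0, 3.0]), np.array([1.0, 0.0, 0.0])
a2, d2 = np.array([1.0, 0.0, 3.0]), np.array([0.0, 1.0, 0.0])
c = closest_point_to_lines([a1, a2], [d1, d2])
```

(With only parallel lines, M is singular and the problem has no unique solution; a real implementation should check for that.)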

Characteristic Equation of A Closed Loop System in Terms of PI Controller

Just wondering if you could guide me on how to find the characteristic equation of the transfer function G(s) (see below) in terms of the coefficients of the PI controller.
G(s) = 45/(5s + 2)
Not sure what to do here, as I'm used to just multiplying the error by the proportional gain, but there's no error value provided.
Any advice would be much appreciated. Thanks in advance ;)
Given:
G(s) = 45/(5s + 2) (plant transfer function)
C(s) = Kp + Ki/s (PI Controller transfer function)
and assuming your system looks like:
https://www.dropbox.com/s/wtt4tvujn6tpepv/block_diag.JPG
The equation of the closed loop transfer function is:
Gcl(s) = C(s)G(s)/(1+C(s)G(s)) = CG/(1+CG)
In general, if you had another transfer function H(s) on the feedback path,
then the closed-loop transfer function becomes:
CG / (1 + CGH)
If you plug in G(s) and C(s) as shown above you will get the following closed loop transfer function after some algebraic simplification:
45*[Kp*s + Ki] / [5*s*s + (2 + 45*Kp)*s + 45*Ki]
and so the characteristic equation is
5*s*s + (2 + 45*Kp)*s + 45*Ki = 0
Notice how the integral term adds a pole to the system but has a side effect of also adding a zero which could produce unwanted transient behaviour if Kp is not chosen correctly. The presence of Kp in the s term in the denominator shows that the value of Kp will determine the damping ratio of the system and therefore determine the transient response.
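The algebraic simplification above is easy to double-check symbolically; a sketch with sympy:

```python
import sympy as sp

s, Kp, Ki = sp.symbols('s K_p K_i', positive=True)
G = 45 / (5*s + 2)   # plant
C = Kp + Ki / s      # PI controller
Gcl = sp.simplify(C * G / (1 + C * G))

# closed-loop transfer function claimed above
expected = 45 * (Kp*s + Ki) / (5*s**2 + (2 + 45*Kp)*s + 45*Ki)
diff = sp.cancel(Gcl - expected)  # 0 if the two agree
```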
More information on poles, zeros, and system dynamics:
http://web.mit.edu/2.14/www/Handouts/PoleZero.pdf
