Variable sized keyword arguments in Julia

I'm writing a convex solver, for concreteness' sake assume it's solving ordinary least squares: find x that minimizes ||b-Ax||^2. So my function call would look like
x = optim(A, b)
I would like to be able to use warm-starts when they are useful, to provide a good initial guess at the solution. So something like
x = optim(A, b; w=some_starting_value)
My problem is that if I want to use a default value, some_starting_value needs to have length equal to the number of columns of A, which is chosen by the user. In R a default value can depend on another argument, so it's possible to do something like
x = optim(A, b, w = ncol(A))
Does any similar functionality exist in Julia? My current solution is to do something like
x = optim(A, b; w=0)
and then check inside the optim function whether w is still 0 and, if so, set it to a vector of the right size. But that seems hacky and (I assume) messes with type stability.
Is there a clean way to specify a keyword argument whose size depends on a required argument?
Edit
It looks like something like
function foo{T<:Real}(A::Array{T,2}; w=zeros(T, size(A,2)))
    println("$A")
    println("$w")
end
will do the trick.

It appears that default parameters in Julia can be expressions containing the values of the other parameters:
julia> a(x, y=2*x) = println("$x, $y")
a (generic function with 2 methods)
julia> a(10)
10, 20
Additionally the default parameter expressions can make calls to other functions:
julia> b(x) = sqrt(x)
b (generic function with 1 method)
julia> a(x, y=b(x)) = println("$x, $y")
a (generic function with 2 methods)
julia> a(100)
100, 10.0

Related

Find the reverse algorithm to go back to initial value

I have a problem and have been trying to solve it for hours. Here is the pseudocode:
x = 30
if x > 100 then max(function_1(x), function_2(x))
elseif x > 50 then max(function_3(x), function_4(x))
elseif x > 20 then max(function_5(x), function_6(x))
elseif x < 10 then function_7(x)
else function_8(x)
This is code I ran with different values of x. The functions are mathematical formulas. Now I have the result of the above for each x, and I want to revert and go back to x again.
I found the inverse mathematical formula of every function: for example, for function_1(x) I have a rev_function_1(y) that takes the result and gives me back the initial x.
But since the original code has a lot of cases, plus the max, I am not sure how I can write one piece of code that, for every output value, returns the original x.
Edit: All the functions are one-to-one.
Edit 2: It seems that the whole piecewise function is not one-to-one even though each branch individually is. As a result, I can have two x for a given y and I cannot revert it.
You need to study the result space (the range) of your functions.
An inverse exists only if each x results in a unique f(x) that cannot be obtained for any other value of x. This property is called being one-to-one.
Let me give you an example:
Let's say that f(1) == 8 and that also f(10) == 8.
Then you don't know if the inverse of 8 is 1 or 10.
If the function is one-to-one the inverse will be a unique value. If it is not one-to-one there may be more than one candidate value.
The next step is to figure out which inverse to call.
One way to do it is to call the inverses of all the subfunctions.
For each candidate x you get back, calculate f(x). If f(x) reproduces the value you wanted to invert, keep that x; otherwise throw it away.
When you have gone through all the inverses you will have one (or more) matching x values.
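As a rough illustration, here is a small Python sketch of that brute-force idea. The formulas and their inverses are made-up placeholders, not the OP's actual functions:
def piecewise(x):
    # stands in for the original if/elseif chain
    if x > 100:
        return max(2 * x + 1, 3 * x - 5)   # stands in for max(function_1, function_2)
    return 2 * x + 1                        # stands in for the remaining branches
inverses = [
    lambda y: (y - 1) / 2,   # inverse of 2*x + 1
    lambda y: (y + 5) / 3,   # inverse of 3*x - 5
]
def candidate_inputs(y):
    candidates = set()
    for inv in inverses:
        x = inv(y)
        if piecewise(x) == y:   # in practice, compare with a tolerance
            candidates.add(x)
    return candidates
print(candidate_inputs(piecewise(200)))   # {200.0}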
Edit:
Another way is to pre-compute which function corresponds to a certain interval of output values. You can store these in a database as the tuples:
lowerbound, upperbound, inverse_function
You can then find which function to use (assuming SQL):
SELECT inverse_function FROM lookup_table
WHERE :fx > lowerbound and :fx < upperbound
:fx is the value you want to invert.
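The same lookup can also be done in memory without SQL. A minimal Python sketch, where the bounds and inverse functions are made-up placeholders:
lookup_table = [
    (0,   500,   lambda y: (y - 1) / 2),   # inverse to use for outputs in (0, 500)
    (500, 10**6, lambda y: (y + 5) / 3),   # inverse to use for outputs in (500, 10^6)
]
def invert(fx):
    # pick the inverse whose output interval contains fx
    for lowerbound, upperbound, inverse_function in lookup_table:
        if lowerbound < fx < upperbound:
            return inverse_function(fx)
    raise ValueError("no interval covers this output")
print(invert(595))   # 200.0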
You have an output y for each x. If two xs produce the same y then you can't undo the mapping, since y could have come from either x. If no two xs produce the same y, then each y identifies exactly one x.
NOTE: Since a reverse algorithm is required and the OP has not made it mandatory to use the original functions or their corresponding inverse functions, the following method can be used.
"Now, I have the result of the above for each x and I want to revert and go back to x again." It seems to be a case of [Key] => [Value].
//one-to-many case.
if x > 100 then max(function_1(x), function_2(x))
elseif x > 50 then max(function_3(x), function_4(x))
The piece of code above shows that multiple different inputs x can produce the same output y.
So you can use std::multimap if you are using C++.
The multimap can be used directly at the input/output level: if a given input x produces an output y after running all the formulas, then call multimap.insert(std::pair<int,int>(y, x));
Given an output y, you can then find all the prospective inputs x that could have produced it as follows:
std::pair <std::multimap<int,int>::iterator, std::multimap<int,int>::iterator> ret;
ret = multimap.equal_range(y);
for (std::multimap<int,int>::iterator it=ret.first; it!=ret.second; ++it)
std::cout << ' ' << it->second;
If the relation between an input x and its corresponding y is one-to-one, then std::map can be used instead.
I think that this is not possible in general; let's take this example:
function_1(x) = x - 200
function_2(x) = x - 201
function_7(x) = x - 5
then for x = 200 we get y = 0, and for x = 5 we also get y = 0.
So for a given value of y we can have multiple values of x.
There is no solution in the general case. Think about this set of formulas:
function_1(x) { return x }
function_2(x) { return x }
function_3(x) { return x }
....
I guess it's obvious why it can't work.

How to define a variable as a matrix in Sage?

I want to define a function which deals with matrices. For example, if I have the characteristic polynomial of a matrix and I want to check the Cayley-Hamilton theorem, what can be done?
var('x')
f(x) = 2*x^2 + x + 3  # say this is the characteristic polynomial of A
print f(A)  # this is what I want as an answer
In the above, if I want to replace x by a matrix, what do I have to do? So the ultimate aim is to define a polynomial which can take a matrix as input.
Thanks in advance...
Amazingly, apparently this hasn't come up very often despite having already been mentioned six years ago, so we haven't fixed it.
sage: M = matrix([[1,2],[3,4]])
sage: g(x) = x^2-5*x-2
sage: g(M)
TypeError: no canonical coercion from Full MatrixSpace of 2 by 2 dense matrices over Integer Ring to Callable function ring with argument x
(Doing at least something about this is Trac 15487.)
However, try using this trick. The problem is only with symbolic expressions, not polynomials.
sage: M = matrix([[1,2],[3,4]])
sage: f = M.charpoly()
sage: f.subs(x=M)
[0 0]
[0 0]
Edit: in general, try something like this.
M = matrix([[1,2],[3,4]])
R.<t> = PolynomialRing(SR)
f = t^2+t+1
f(M)

Matrix valued undefined functions in SymPy

I'm looking for a way to specify matrix quantities that depend on a variable. For scalars that works as follows, using undefined functions:
from sympy import *
t = symbols('t')
x = Function('f')(t)
diff(x, t)
For Matrix Symbols like
x = MatrixSymbol('x',3,3)
I cannot find an equivalent. There is
i, j = symbols('i j')
x = FunctionMatrix(6, 1, Lambda((i, j), f))
but this is not what I need, as you have to specify the contents of the matrix. The context is that I have equations which should be differentiated with respect to time and which contain matrix-valued elements.
I cannot deal with the elements of the matrices one by one.
Thanks!
I'm not sure exactly what you want, but I think you want to make a Matrix with differentiable elements. In that case, see if this works for you.
Create a matrix with function elements:
import sympy as sym
t = sym.symbols('t')
X = sym.FunctionMatrix(6, 1, lambda i, j: sym.Function("x_%d%d" % (i, j))(t))
M = sym.Matrix(X)
M.diff(t)
This results in
Matrix([
[Derivative(x_00(t), t)],
[Derivative(x_10(t), t)],
[Derivative(x_20(t), t)],
[Derivative(x_30(t), t)],
[Derivative(x_40(t), t)],
[Derivative(x_50(t), t)]])
You may then replace stuff as you need.
Also, it may be preferable to populate the matrix with the expressions you need before differentiating. Leaving them as undefined functions may make it harder for you to simplify after substitution.
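For instance, a minimal sketch of that idea, using the same x_00(t) naming as above; the sin(t) substitution is just an arbitrary example, not something from the question:
import sympy as sym
t = sym.symbols('t')
x00 = sym.Function("x_00")(t)            # an undefined function, as above
M = sym.Matrix([[x00], [t**2]])
M_concrete = M.subs(x00, sym.sin(t))     # populate before differentiating
print(M_concrete.diff(t))                # Matrix([[cos(t)], [2*t]])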

Natural Logarithm of Bessel Function, Overflow

I am trying to calculate the logarithm of a modified Bessel function of the second kind in MATLAB, i.e. something like this:
log(besselk(nu, Z))
where e.g.
nu = 750;
Z = 1;
I have a problem because log(besselk(nu, Z)) comes out as infinity, because besselk(nu, Z) overflows to infinity. However, log(besselk(nu, Z)) itself should be a perfectly representable number.
I am trying to write something like
f = double(sym('ln(besselk(double(nu), double(Z)))'));
However, I get the following error:
Error using mupadmex Error in MuPAD command: DOUBLE cannot convert the input expression into a double array. If the input expression contains a symbolic variable, use the VPA function instead.
Error in sym/double (line 514) Xstr = mupadmex('symobj::double', S.s, 0);
How can I avoid this error?
You're doing a few things incorrectly. It makes no sense to use double for your two arguments to besselk and then convert the output to symbolic. You should also avoid the old string-based input to sym. Instead, you want to evaluate besselk symbolically (which will return about 1.02×10^2055, much greater than realmax), take the log of the result symbolically, and then convert back to double precision.
The following is sufficient – when one or more of the input arguments is symbolic, the symbolic version of besselk will be used:
f = double(log(besselk(sym(750), sym(1))))
or in the old string form:
f = double(sym('log(besselk(750, 1))'))
If you want to keep your parameters symbolic and evaluate at a later time:
syms nu Z;
f = log(besselk(nu, Z))
double(subs(f, {nu, Z}, {750, 1}))
Make sure that you haven't flipped the nu and Z values in your math as large orders (nu) aren't very common.
As njuffa pointed out, DLMF gives asymptotic expansions of K_nu(z) for large nu. From 10.41.2 we find for real positive arguments z:
besselk(nu, z) ~ sqrt(pi/(2*nu)) * (e*z/(2*nu))^(-nu)
which gives after some simplification
log(besselk(nu, z)) ~ 1/2*log(pi) + (nu - 1/2)*log(2*nu) - nu*(1 + log(z))
So it is O(nu*log(nu)). No surprise the direct calculation overflows for nu = 750.
I don't know how accurate this approximation is in general. Perhaps you can compare it, for values where besselk is still smaller than the numerical infinity, to see if it fits your purpose?
EDIT: I just tried it for nu = 750 and z = 1: the above approximation gives 4.7318e+03, while from horchler's result we get log(1.02*10^2055) = 2055*log(10) + log(1.02) = 4.7318e+03. So it is correct to at least 5 significant digits for nu = 750 and z = 1! If this is good enough for you, it will be much faster than symbolic math.
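As a quick cross-check of that approximation, here is a small sketch using Python's mpmath library; this is my own addition, not part of the original MATLAB workflow:
from mpmath import mp, mpf, besselk, log, pi
mp.dps = 50                        # plenty of working precision
nu, z = mpf(750), mpf(1)
exact = log(besselk(nu, z))        # arbitrary precision, so no overflow
approx = log(pi) / 2 + (nu - 0.5) * log(2 * nu) - nu * (1 + log(z))
print(exact)    # about 4731.8, i.e. 4.7318e+03
print(approx)   # agrees to roughly 5 significant digits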
Have you tried the integral representation?
Log[Integrate[Cosh[Nu t]/E^(Z Cosh[t]), {t, 0, Infinity}]]

How to map/tween a number based on a dynamic curve

I am really lacking terminology here, so any help with that is appreciated. Even if it doesn't answer the question, it can hopefully get me closer to an answer.
How can I get y from a function of p where the curviness is also a variable (possibly between 0 and 1, or whatever is best)?
I am presuming p is always between 0 and 1, as is the output y.
The graphic is just an illustration, I don't need that exact curve but something close to this idea.
Pseudo code is good enough as an answer or something c-style (c, javascript, etc).
To give a little context, I have a mapping function where one parameter can be the – what I have called – easing function. These are based on the Penner equations. So, for example, if I wanted to do an easeIn I would provide:
function (p) { return p * p; };
But I would love to be able to do what is in the images: varying the ease dynamically. With a function like:
function (p, curviness) { return /* something */; }
You might try messing around with a superellipse; it seems to have the shape malleability you're looking for. (Special case: the squircle.)
Update
Ok, so the equation for the superellipse is as follows:
abs(x/a)^n + abs(y/b)^n = 1
You're going to be working in the range [0, 1] in both variables, so we can discard the absolute values.
The a and b are for the major and minor ellipse axes; we're going to set those to 1 (so that the superellipse only stretches to +/-1 in either direction) and only look at the first quadrant ([0, 1], again).
This leaves us with:
x^n + y^n = 1
You want your end function to look something like:
y = f(p, n)
so we need to get things into that form (solve for y).
Your initial thought on what to do next was correct (but the variables were switched):
y^n = 1 - p^n
substituting your variable p for x.
Now, initially I'd thought of trying to use a log to isolate y, but that would mean we'd have to take log_y on both sides which would not isolate it. Instead, we can take the nth root to cancel the n, thus isolating y:
y = nthRoot(n, 1 - p^n)
If this is confusing, then this might help: square rooting is just raising to a power of 1/2, so if you took a square root of x you'd have:
sqrt(x) == x^(1/2)
and what we did was take the nth root, meaning that we raised things to the 1/n power, which cancels the nth power the y had since you'd be multiplying them:
(y^n)^(1/n) == y^(n * 1/n) == y^1 == y
Thus we can write things as
y = (1 - p^n)^(1/n)
to make things look better.
So, now we have an equation in the form
y = f(p, n)
but we're not done yet: this equation was working with values in the first quadrant of the superellipse; this quadrant's graph looks different from what you wanted -- you wanted what appeared in the second quadrant, only shifted over.
We can rectify this by inverting the graph in the first quadrant. We'll do this by subtracting it from 1. Thus, the final equation will be:
y = 1 - (1 - p^n)^(1/n)
which works just fine by my TI-83's reckoning.
Note: In the Wikipedia article, they mention that when n is between 0 and 1 then the curve will be bowed down/in, when n is equal to 1 you get a straight line, and when n is greater than 1 then it will be bowed out. However, since we're subtracting things from 1, this behavior is reversed! (So 0 thru 1 means it's bowed out, and greater than 1 means it's bowed in).
And there you have it -- I hope that's what you were looking for :)
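For completeness, here is a small Python sketch of that final formula (my own translation of it; the printed values are approximate):
def superellipse_ease(p, n):
    # y = 1 - (1 - p^n)^(1/n); n = 1 gives the straight line y = p,
    # n > 1 bows the curve below the diagonal, 0 < n < 1 bows it above
    return 1.0 - (1.0 - p ** n) ** (1.0 / n)
print(superellipse_ease(0.5, 1.0))   # 0.5 (linear)
print(superellipse_ease(0.5, 2.0))   # about 0.134 (bowed in)
print(superellipse_ease(0.5, 0.5))   # about 0.914 (bowed out)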
Your curviness property is the exponent.
function(p, exp) { return Math.pow(p, exp); }
exp = 1 gives you the straight line
exp > 1 gives you the exponential lines (bottom two)
0 < exp < 1 gives you the logarithmic lines (top two)
To get "matching" curviness above and below, an exp = 2 would match an exp = 1/2 across the linear dividing line, so you could define a "curviness" function that makes it more intuitive for you.
function curvyInterpolator(p, curviness) {
  // a negative curviness maps to the reciprocal exponent, e.g. -2 -> 1/2
  curviness = curviness > 0 ? curviness : 1/(-curviness);
  return Math.pow(p, curviness);
}
