Creating a function that depends on an arbitrary function - wolfram-mathematica

How would I create a function in Mathematica that depends on an arbitrary function? For instance, if I were to make a function that takes the derivative of a function (I know this example is built into Mathematica, go with me on this), this would involve translating the variable of the arbitrary function. Is it possible to do this?
What I am really trying to do is make a function that takes the fractional derivative of a function. There is a way to do this via integration, but I would like to use the limit definition of the fractional derivative.

Taking a function
f = (x + 2) (x^2 + 1) x (x - 1) (x - 2);
Here is a function that takes the derivative of a function (note that f must be an expression in x for this to work):
g[f_] := D[f, x]
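The same higher-order-function idea carries over outside Mathematica, and it is enough to express the limit definition mentioned above. As a hedged sketch (my own code, not from any library), here is the Grünwald–Letnikov form of the fractional derivative in Python, with the binomial coefficients computed by a recurrence:

```python
def gl_fractional_derivative(f, alpha, x, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at x,
    with lower terminal 0, approximated with step size h."""
    n = int(x / h)
    total = 0.0
    coeff = 1.0  # (-1)^k * binomial(alpha, k), built up incrementally
    for k in range(n + 1):
        total += coeff * f(x - k * h)
        # recurrence: c_{k+1} = c_k * (k - alpha) / (k + 1)
        coeff *= (k - alpha) / (k + 1)
    return total / h**alpha
```

With alpha = 1 this collapses to the backward difference (f(x) - f(x - h)) / h; with alpha = 0.5 applied to f(x) = x it approaches 2*sqrt(x/pi), the known Riemann-Liouville value.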

Related

Variable sized keyword arguments in Julia

I'm writing a convex solver, for concreteness' sake assume it's solving ordinary least squares: find x that minimizes ||b-Ax||^2. So my function call would look like
x = optim(A, b)
I would like to be able to use warm-starts when they are useful, to provide a good initial guess at the solution. So something like
x = optim(A, b; w=some_starting_value)
My problem is that if I want to use a default value, some_starting_value needs to be of length equal to the number of columns in A, which is chosen by the user. In R it's possible to do something like
x = optim(A, b; w=ncols(A))
Does any similar functionality exist in Julia? My current solution is to do something like
x = optim(A, b; w=0)
and then check if w != 0 and set it to be the right size vector inside the optim function. But that seems hacky and (I assume) messes with type stability.
Is there a clean way to specify a keyword argument whose size depends on a required argument?
Edit
It looks like something like
function foo{T<:Real}(A::Array{T,2}; w=zeros(T, size(A, 2)))
println("$A")
println("$w")
end
will do the trick.
It appears that default parameters in Julia can be expressions containing the values of the other parameters:
julia> a(x, y=2*x) = println("$x, $y")
a (generic function with 2 methods)
julia> a(10)
10, 20
Additionally the default parameter expressions can make calls to other functions:
julia> b(x) = sqrt(x)
b (generic function with 1 method)
julia> a(x, y=b(x)) = println("$x, $y")
a (generic function with 2 methods)
julia> a(100)
100, 10.0
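For comparison, Python cannot do this directly: default expressions are evaluated once at definition time and cannot see the other arguments. The usual idiom is a None sentinel filled in inside the function; a minimal sketch with a hypothetical optim stub:

```python
def optim(A, b, w=None):
    """Sketch only: w defaults to a zero vector sized from A."""
    if w is None:
        w = [0.0] * len(A[0])  # one entry per column of A
    return w  # a real solver would iterate starting from w
```

Unlike the w=0 trick above, None cannot collide with a legitimate user-supplied value.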

Can a MATLAB function take mathematical functions as inputs?

I am totally new to this site and MATLAB, so please excuse me if my question is naive or a duplicate of some already existing question.
Well, I am a mathematics student, using MATLAB to aid in my project. There is a thing called the "L^2 inner product", which takes 2 mathematical functions, say f(x) and g(x), as inputs. It should work like
inner(f,g) = integral of f(x)*g(x) from 0 to 1.
The problem is I don't know how to write that in MATLAB.
To summarize, I want to make a MATLAB function whose inputs are two mathematical functions, the output is a real number. I know how to make an inline object but I don't know how to proceed further. Any help would be highly appreciated.
PS. I don't know if my tags are appropriate or on topic or not, please bear with me.
I will build on what @transversality condition wrote in greater detail (e.g. there should be a .*).
Illustrative example with anonymous functions
h = @sin % This assigns h the function handle of the sin function
% If you know C or C++, this is basically a function pointer
inner = @(f,g) integral(@(x) f(x).*g(x), 0, 1)
% This assigns the variable inner the function handle of a
% function which takes in two function handles f and g
% and calculates the integral from 0 to 1.
% Because of how MATLAB works, you want .* here;
% you need f and g to be fine with vector inputs.
inner(h, @cos) % this will calculate $\int_0^1 \sin(x)\cos(x)\,dx$
This yields 0.354
Writing inner as a regular function
In the previous example, inner was a variable, and the value of the variable was a function handle to a function which calculates the inner product. You could also just write a function that calculates the inner product. Create a file myinner.m with the following code:
function y = myinner(f, g)
y = integral(@(x) f(x).*g(x), 0, 1);
You could then call myinner the same way:
myinner(@sin, @cos)
result: 0.354
Note also that the integral function calculates the integral numerically and in strange situations, it's possible to have numerical problems.
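The same inner product is easy to sketch in Python without any toolbox; here I use a plain composite trapezoid rule in place of MATLAB's integral (the function name inner and the choice of n are mine, not standard):

```python
import math

def inner(f, g, n=1000):
    """L^2 inner product on [0, 1] via the composite trapezoid rule.
    n is the number of subintervals; any quadrature routine would do."""
    h = 1.0 / n
    total = 0.5 * (f(0.0) * g(0.0) + f(1.0) * g(1.0))
    for i in range(1, n):
        x = i * h
        total += f(x) * g(x)
    return total * h

print(inner(math.sin, math.cos))  # ≈ sin(1)**2 / 2 ≈ 0.3540
```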

Hash Function in Separate Chaining Vs. Open Addressing

I'm reading Weiss's Data Structures book, and I'm confused with the difference between hash function in Separate Chaining Vs. hash function in Open Addressing.
In separate chaining, the hash function is defined as:
hash(x) = x mod tableSize
whereas in open addressing:
h_i(x) = (hash(x) + f(i)) mod tableSize
where i is the number of trials and f(i) is the function such as f(i) = i for Linear Probing, f(i) = i^2 for Quadratic Probing, etc.
I have 2 questions:
1) In Separate Chaining, does it make sense to have a hash function:
hash(x) = x mod 10
when the table size equals, let's say, 11?
2) In Open Addressing, do we always have to mod the key(+gap) by tableSize twice?
1) Not really. It will be correct, but not efficient. If you mod by less than the table size, there will be at least one bucket unused at the top of your table. If there is a specific reason to choose that value to mod by (there might be, if you're looking for certain properties) then you could just trim the table to that size and avoid the waste.
2) That isn't really necessary: since hash(x) = x mod tableSize, ((x mod tableSize) + f(i)) mod tableSize equals (x + f(i)) mod tableSize, so the inner mod is redundant. And that isn't the only definition in the first place. Slightly more generally you have h_i(x) = f(x, i) mod tableSize; some obvious choices for f include
f(x, i) = x + i (linear probing)
f(x, i) = x + a * i + b * i * i for some constants a and b != 0 (quadratic probing)
f(x, i) = h1(x) + i * h2(x) for some suitable hash functions h1 and h2 (double hashing)
That last one is especially susceptible to overflow, which could mess up some properties, so you might want to perform some calculations modulo the table size (especially if that's a prime number, because then you have a nice field to work in).
Also, you're always going to use f(x, i) mod tablesize before you need f(x, i + 1), so you might as well calculate f incrementally, where at every step you mod by the tablesize because you have to do it anyway.
But we're certainly not limited to those forms of f or indeed to this scheme of open addressing where we search for an open spot. Cuckoo hashing (and variants) has two candidate places to insert an item, and will kick out an item and move it to its alt-location (possibly also displacing an item) if both places are full (with some care taken to avoid infinite loops). That way a lookup only has two places to look at, instead of potentially the entire table. It has many variants.
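A hedged Python sketch of the incremental computation described above (names are mine): each quadratic probe is derived from the previous one, reducing mod the table size at every step, since f(x, i+1) - f(x, i) = a + b*(2*i + 1):

```python
def quadratic_probes(x, table_size, a=1, b=1, limit=None):
    """Yield h_i(x) = (x + a*i + b*i*i) mod table_size for i = 0, 1, ...
    computed incrementally: the step from i to i+1 adds a + b*(2*i + 1),
    so no large intermediate values ever build up."""
    if limit is None:
        limit = table_size
    pos = x % table_size
    for i in range(limit):
        yield pos
        pos = (pos + a + b * (2 * i + 1)) % table_size
```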

NMinimize with function containing random variables

I was wondering whether it is possible to use NMinimize from Mathematica with an objective function that contains random variables. E.g. I have a function with parameters which follow a distribution (normal and truncated normal). I want to fit its histogram to data that I have, and constructed an objective function which now needs to be minimized (so the objective function depends on the mus and sigmas of the parameters, which are to be determined). If I run my code, there is an error message: it claims the parameter for the NormalDistribution needs to be positive (if I plug numbers for the mus and sigmas into my objective function by hand, I don't get an error message).
So, I am wondering if it is not possible for NMinimize to handle a non-analytic function.
Thanks!
Here, I give you an example code (please note that the original function is more complicated)
listS and listT are both lists of event times. I want to fit the curve of my statistical model for the times (here, a very simple one, it consists of a truncated normal distribution) to the data I have.
For this I compare the survival curves and need to minimize the sum of the least squares.
My problem is that the function NMinimize doesn't seem to work. (Please note, that the original objective function consists of a more complicated function with parameters that are random variables)
(* Both lists are supposed to be the list of times *)
SurvivalS[listeS_, x_] := Module[{res, survivald},
survivald = SurvivalDistribution[listeS];
res = SurvivalFunction[survivald, x];
res]
Residuum[listeT_, listeS_] :=
Table[(SurvivalS[listeT, listeT[[i]]] - SurvivalS[listeS, listeT[[i]]]), {i,
1, dataN}];
LeastSquare[listeT_, listeS_] :=
Total[Function[x, x^2] /@
Residuum[listeT,
listeS]]; (* objective function; here it is the sum of least squares *)
objectiveF[mu_, sigma_] :=
Piecewise[{{LeastSquare[listeT, listeS[mu, sigma]], mu > 0 && sigma > 0}},
20 (1 + (sigma + mu)^2)];
pool = 100; (* No. points from MonteCarlo *)
listeS[mu_, sigma_] := RandomVariate[TruncatedDistribution[{0, 1}, NormalDistribution[mu, sigma]],pool];(* simulated data *)
listeT = Sort[RandomVariate[TruncatedDistribution[{0, 1}, NormalDistribution[.5, .9]],60]]; (* list of "measured" data *)
dataN = Length[listeT];
NMinimize[objectiveF[mu, .9], {{mu, .4}}]
The error message is: "RandomVariate::realprm: Parameter mu at position 1 in NormalDistribution[mu,0.9] is expected to be real. >>"
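The error message shows the objective being called with the symbol mu rather than a number, so RandomVariate fails before any minimization happens. Setting that Mathematica-specific issue aside, the least-squares idea itself is easy to prototype. Here is a hedged Python sketch (all names mine) that fixes the random seed so the stochastic objective becomes deterministic, then grid-searches mu:

```python
import random

def truncated_normal_sample(mu, sigma, n, lo=0.0, hi=1.0, seed=0):
    """Rejection-sample n points from Normal(mu, sigma) truncated to
    [lo, hi]. Fixing the seed makes repeated calls return the same
    sample, so the objective below is deterministic."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        v = rng.gauss(mu, sigma)
        if lo <= v <= hi:
            out.append(v)
    return out

def survival(sample, t):
    """Empirical survival function P(X > t)."""
    return sum(1 for v in sample if v > t) / len(sample)

def objective(mu, data, sigma=0.9, pool=100):
    """Sum of squared differences between the survival curves of the
    data and of a simulated sample, evaluated at the data points."""
    sim = truncated_normal_sample(mu, sigma, pool)
    return sum((survival(data, t) - survival(sim, t)) ** 2 for t in data)

data = truncated_normal_sample(0.5, 0.9, 60, seed=1)  # "measured" data
best_mu = min((m / 100 for m in range(1, 100)),
              key=lambda m: objective(m, data))
```

A grid search stands in for NMinimize here only to keep the sketch dependency-free; the point is that with a fixed seed the objective is an ordinary numeric function of mu.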

Function derivatives

I have some function,
int somefunction(int x) { // parameters here, let's say int x
    return x*x + 2*x + 3; // return something, does not matter what
}
How do I find the derivative of this function? If I have
int f(int x) {
return sin(x);
}
after derivative it must return cos(x).
You can approximate the derivative by looking at the gradient over a small interval. Eg
const double DELTA = 0.0001;
double dfbydx(double x) {
    return (f(x + DELTA) - f(x)) / DELTA;
}
Depending on where you're evaluating the function, you might get better results from the central difference (f(x+DELTA) - f(x-DELTA)) / (2*DELTA) instead. Note the parentheses around 2*DELTA: written as / 2*DELTA it would divide by 2 and then multiply by DELTA.
(I assume 'int' in your question was a typo. If they really are using integers you might have problems with precision this way.)
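The same finite-difference idea reads naturally in a language where functions are first-class values; here is a Python sketch mirroring the snippet above, with the function passed in as a parameter:

```python
import math

DELTA = 1e-4

def dfbydx(f, x, delta=DELTA):
    """Central-difference approximation to f'(x)."""
    return (f(x + delta) - f(x - delta)) / (2 * delta)

print(dfbydx(math.sin, 0.0))  # ≈ cos(0) = 1
```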
You can get the numerical derivative of almost any function using one of many numerical techniques; see numerical differentiation.
Look at: Another question
But if you want the derivative as a function definition (e.g. cos(x) from sin(x)), you need a computer-algebra library such as Maple, Mathematica, Sage, or SymPy.
