Let's consider the function
f[x]:=x^3
To find the maximum value of this function in Mathematica, we can use
FindMaximum[f[x], x]
which displays the result
{2.093608796950724*10^313, {x -> 2.75612*10^104}}.
This shows that x = 2.75612*10^104 is a critical value for f(x) in some interval.
How can I extract this value of x so that it can be used further in my code? For example, if I want to compute (x0 - x)/f(x0)?
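For reference, a minimal sketch of one common way to pull the position out of the rule list that FindMaximum returns (the names sol and x0 are just illustrative):

sol = FindMaximum[f[x], x]   (* {maximum value, {x -> position}} *)
x0 = x /. sol[[2]]           (* extract the position of the maximum *)
(x0 - x)/f[x0]               (* x0 can now be used in later expressions *)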
My y variable (n=30,000) is distributed with very heavy tails (both positive and negative), for which the fitDist GAMLSS function selects the ST4 family.
I tried to fit a GAMLSS-based regression with an explanatory variable x (pb smoothing), but the tails of y are so heavy that convergence is not reached after 50 cycles, even after refitting (very time consuming).
Therefore, I normalized y using the orderNorm transformation (bestNormalize package), which allowed convergence to be reached easily and quickly, and then the fitted values to be predicted from the GAMLSS object.
However, these fitted "orderNormalized" values come from a GAMLSS object, and thus cannot be inverted using the predict function from bestNormalize (since the latter does not seem to recognize a GAMLSS object).
My question: is it possible, by whatever means, to apply an inverse orderNorm transformation to fitted values from a GAMLSS object?
It is easy to get confused about what to call the predict function on, so I list the steps here without full code (as there is no reproducible example in the question); a minimal sketch follows below:
1) transposeObj = orderNorm(data$outputvariable)
2) fitObj = gamlss(transposeObj$x.t ~ ., data)
3) pred = predict(fitObj, type = 'response')
4) inversedpredictions = predict(transposeObj, newdata = pred, inverse = TRUE)
In plain text, you normalize your data, fit a model, make predictions with the fit, and then predict on the predictions with the normalization object obtained from orderNorm.
The vignette for bestNormalize has a similar example, using lm instead of GAMLSS. See the application section of the vignette. Once you have run the normalization procedure, you should be able to repeat and invert the transformation with the predict function.
The key is storing a transformation object as an R object that can then be fed into the predict (or rather, the predict.bestNormalize) function.
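For concreteness, a minimal sketch of those four steps in R, assuming a data frame dat with a heavy-tailed outcome y and a single predictor x (dat, y and x are placeholder names, not from the question):

library(bestNormalize)
library(gamlss)

tr <- orderNorm(dat$y)                     # 1) store the transformation object
dat$y_t <- tr$x.t                          #    transformed (normalized) outcome

fit <- gamlss(y_t ~ pb(x), data = dat)     # 2) fit on the normalized scale
pred_t <- predict(fit, type = "response")  # 3) fitted values, still on the normalized scale

pred_orig <- predict(tr, newdata = pred_t, inverse = TRUE)  # 4) back-transform to the original scale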
The basic definition of a random variable is that it is a function based on a random experiment. My question is: if it is a function, say f, then how can it take numerical values?
Suppose we toss two coins and let X be the random variable giving the number of heads, with values in {0, 1, 2}. For the event of two heads, say w, we have X(w) = 2, which is the value of the function X at w, not of X itself.
But sometimes it is written that X is a random variable taking values 0, 1, 2, ...
Doesn't it sound wrong to say it is a function and also that it takes values?
A random variable is a well-defined function X: E -> R, whose domain E is a probability space and whose codomain is (generally speaking) the set of real numbers.
Intuitively, X is some kind of metric or measurement on the elements of E.
Example 1
Let E be the set of users of Stack Overflow at a given point in time, say right now, and let X be the function that assigns to every SO user their reputation. For example, you could calculate P(X >= 5000), the proportion of SO users with a reputation of 5000 or more.
Notice that the event X >= 5000 is nothing but compact notation for the subset of E defined as:
{u in E | X(u) >= 5000}
meaning the subset of SO users u with a reputation of 5000 or more.
Example 2
Let E be the set of questions in SO and X the function that assigns to each question its number of votes (at a certain point in time). If you pick one question q at random, X(q) would be its number of votes, and we could ask for the probability of, say, X < 0 (down-voted questions).
Here the subset of such questions is
{q in E | X(q) < 0}
i.e., the subset of questions q having a negative vote count.
Conclusion
There is nothing random in a random variable. The randomness is in the way we pick elements (or subsets) from its domain.
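To connect this with the coin-toss example from the question: take E = {HH, HT, TH, TT}, each outcome with probability 1/4, and let X(w) be the number of heads in w. Then X(HH) = 2, X(HT) = X(TH) = 1, X(TT) = 0, and

P(X = 2) = P({w in E | X(w) = 2}) = P({HH}) = 1/4.

X itself is a fixed assignment of numbers to outcomes; the randomness only enters when an outcome w is drawn from E.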
Speaking of functions - Yes, it is safe to say that a function can take certain values. Speaking of random variables and probability, the definition I know is:
A random variable assigns a numerical value to each possible outcome of a random experiment
This definition does indeed say that X (the random variable) is a function. In your case, saying that X (as a function) can take the values 0, 1, 2 basically means that the image of X (a subset of its codomain, possibly the codomain or target set itself) is the set {0, 1, 2} ⊂ ℕ.
I am totally new to this site and MATLAB, so please excuse me if my question is naive or a duplicate of some already existing question.
Well, I am a mathematics student, using MATLAB to aid in my project. There is a thing called the "L^2 inner product", which takes two mathematical functions, say f(x) and g(x), as inputs. It should work like
inner(f,g) = the integral of f(x)*g(x) from 0 to 1.
The problem is I don't know how to write that in MATLAB.
To summarize, I want to make a MATLAB function whose inputs are two mathematical functions, the output is a real number. I know how to make an inline object but I don't know how to proceed further. Any help would be highly appreciated.
PS. I don't know if my tags are appropriate or on topic or not, please bear with me.
I will build on what @transversality condition wrote, in greater detail (e.g. there should be a .*).
Illustrative example with anonymous functions
h = @sin % This assigns h the function handle of the sin function
         % If you know C/C++, this is basically a function pointer
inner = @(f,g) integral(@(x) f(x).*g(x), 0, 1) % This assigns the variable inner
         % the function handle of a function which
         % takes in two function handles f and g
         % and calculates the integral from 0 to 1.
         % Because of how MATLAB works, you want .* here;
         % you need f and g to be fine with vector inputs.
inner(h, @cos) % this will calculate $\int_0^1 sin(x)cos(x)dx$
This yields 0.354
Writing inner as a regular function
In the previous example, inner was a variable, and the value of the variable was a function handle to a function which calculates the inner product. You could also just write a function that calculates the inner product. Create a file myinner.m with the following code:
function y = myinner(f, g)
y = integral(@(x) f(x).*g(x), 0, 1);
You could then call myinner the same way:
myinner(@sin, @cos)
result: 0.354
Note also that the integral function calculates the integral numerically and in strange situations, it's possible to have numerical problems.
I was wondering whether it is possible to use NMinimize from Mathematica with an objective function that contains random variables. E.g. I have a function whose parameters follow a distribution (normal and truncated normal). I want to fit its histogram to data that I have, and I constructed an objective function which now needs to be minimized (so the objective function depends on the mus and sigmas of the parameters, which need to be determined). If I run my code, there is an error message: it claims the parameter for the NormalDistribution needs to be positive. (If I plug numbers for the mus and sigmas into my objective function by hand, I don't get an error message.)
So, I am wondering whether NMinimize is simply unable to handle a non-analytic objective function.
Thanks!
Here is some example code (please note that the original function is more complicated).
listeS and listeT are both lists of event times. I want to fit the curve of my statistical model for the times (here a very simple one, consisting of a truncated normal distribution) to the data I have.
For this I compare the survival curves and need to minimize the sum of squared differences.
My problem is that NMinimize doesn't seem to work. (Please note that the original objective function is a more complicated function with parameters that are random variables.)
(* Both lists are supposed to be the list of times *)
SurvivalS[listeS_, x_] := Module[{res, survivald},
survivald = SurvivalDistribution[listeS];
res = SurvivalFunction[survivald, x];
res]
Residuum[listeT_, listeS_] :=
Table[(SurvivalS[listeT, listeT[[i]]] - SurvivalS[listeS, listeT[[i]]]), {i,
1, dataN}];
LeastSquare[listeT_, listeS_] :=
  Total[Function[x, x^2] /@
    Residuum[listeT, listeS]]; (* objective function: here it is the sum of squares *)
objectiveF[mu_, sigma_] :=
Piecewise[{{LeastSquare[listeT, listeS[mu, sigma]], mu > 0 && sigma > 0}},
20 (1 + (sigma + mu)^2)];
pool = 100; (* No. points from MonteCarlo *)
listeS[mu_, sigma_] := RandomVariate[TruncatedDistribution[{0, 1}, NormalDistribution[mu, sigma]],pool];(* simulated data *)
listeT = Sort[RandomVariate[TruncatedDistribution[{0, 1}, NormalDistribution[.5, .9]],60]]; (* list of "measured" data *)
dataN = Length[listeT];
NMinimize[objectiveF[mu, .9], {{mu, .4}}]
The error message is: "RandomVariate::realprm: Parameter mu at position 1 in NormalDistribution[mu,0.9] is expected to be real. >>"
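A common workaround for this class of error, sketched here under the assumption that the definitions above stay as they are, is to restrict the objective to numeric arguments so that NMinimize never evaluates it with a symbolic mu:

objectiveF[mu_?NumericQ, sigma_?NumericQ] :=
  Piecewise[{{LeastSquare[listeT, listeS[mu, sigma]], mu > 0 && sigma > 0}},
   20 (1 + (sigma + mu)^2)];

NMinimize[objectiveF[mu, .9], {{mu, .4}}, Method -> "NelderMead"]
(* a derivative-free method is preferable here because the objective is stochastic *)

Because each evaluation of the objective draws new random variates, the result is noisy; increasing pool or calling SeedRandom inside the objective may help stabilize the fit.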
Graphics@Flatten[Table[
    (*colors, don't mind*)
    {ColorData["CMYKColors"][(a[[r, t]] - .000007)/(.0003 - 0.000007)],
     (*point size, don't mind*)
     PointSize[1/Sqrt[r]/10],
     (*coordinates for your points; "a" is your data matrix*)
     Point[
      {(rr = Log[.025 + (.58 - .25)/64 r]) Cos@(tt = t 5 Degree),
       rr Sin@tt}]
     } &@
    (*values for the iteration*)
    , {r, 7, 64}, {t, 1, 72}], 1]
  (*rotation, don't mind*)
  /. gg : Graphics[___] :> Rotate[gg, Pi/2]
Okay, I'll bite. First, Mathematica allows functions to be applied via one of several forms: standard form - f[x], prefix form - f @ x, postfix form - x // f, and infix form - x ~ f ~ y. Belisarius's code uses both standard and prefix form.
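For example, for a one-argument function f and a two-argument function g, these are equivalent ways of writing the same applications:

f[x]          (* standard form *)
f @ x         (* prefix form *)
x // f        (* postfix form *)
x ~ g ~ y     (* infix form, equivalent to g[x, y] *)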
So, let's look at the outermost functions first: Graphics @ x /. gg : Graphics[___] :> Rotate[gg, Pi/2], where x is everything inside of Flatten. Essentially, this creates a Graphics object from x and, using a named pattern (gg : Graphics[___]), rotates the resulting Graphics object by 90 degrees.
Now, to create a Graphics object, we need to supply a bunch of primitives, and this is in the form of a nested list, where each sublist describes some element. This is done via the Table command, which has the form Table[expr, iterators]. Iterators can have several forms, but here they both have the form {var, min, max}, and since they lack a 4th term, they take on each value between min and max in integer steps. So, our iterators are {r, 7, 64} and {t, 1, 72}, and expr is evaluated for each value that they take on. Since we have two iterators, this produces a matrix, which would confuse Graphics, so using Flatten[Table[...], 1] we take every element of the matrix and put it into a simple list.
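A tiny example of that flattening step (the iterator ranges are shortened purely for illustration):

Table[{r, t}, {r, 1, 2}, {t, 1, 3}]
(* {{{1, 1}, {1, 2}, {1, 3}}, {{2, 1}, {2, 2}, {2, 3}}} *)

Flatten[%, 1]
(* {{1, 1}, {1, 2}, {1, 3}, {2, 1}, {2, 2}, {2, 3}} *)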
Each element that Table produces is simply: color (ColorData), point size (PointSize), and point location (Point). So, with Flatten, we have created the following:
Graphics[{{color, point size, point}, {color, point size, point}, ... }]
The color generation is taken from the data, and it assumes that the data has been put into a list called a. The individual elements of a are accessed through the Part construct: [[ ]]. On the surface, the ColorData construct is a little odd, but it can be read as follows: ColorData["CMYKColors"] returns a ColorDataFunction that produces a CMYK color value when a number between 0 and 1 is supplied. That is why the data from a is scaled the way it is.
The point size is generated from the radial coordinate. With 1/Sqrt[r] you would expect the point size to shrink as r increases, but the Log inverts the scale.
Similarly, the point location is produced from the radial and angular (t) variables, but Point only accepts them in {x, y} form, so he needed to convert them. Two odd constructs occur in the transformation from {r, t} to {x, y}: both rr and tt are Set (=) while calculating x, allowing them to be reused when calculating y. Also, the term t 5 Degree lets Mathematica know that the angle is in degrees, not radians. Additionally, as written, there is a bug: immediately following the closing }, the terms & and @ should not be there.
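A small illustration of that Set-inside-an-expression trick (values chosen just for this example):

{(rr = 3) Cos@(tt = Pi/4), rr Sin@tt}
(* the first element assigns rr and tt, so the second element can reuse them;
   this evaluates to {3/Sqrt[2], 3/Sqrt[2]} *)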
Does that help?