Is there a way to obtain the order of an ODE in Mathematica?
For example, if I have y'' + 5y I want Mathematica to return 2 (because it's a 2nd-order equation). So, is what I'm asking possible?
Here is a way to extract the value automatically:
ode = y'' + y' + y == 0;
Max[Cases[ode, Derivative[n_][y] :> n, Infinity]]
2
Note that this just finds the largest derivative in the expression; it doesn't verify that the expression is actually an ODE.
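For comparison, the same highest-derivative scan can be done in Python with SymPy; a minimal sketch (variable names are mine):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + y(x).diff(x) + y(x), 0)

# Collect every Derivative subexpression and take the largest total order.
order = max(d.derivative_count for d in ode.atoms(sp.Derivative))
print(order)  # 2

As with the Mathematica version, this only reports the highest derivative present; it does not check that the expression really is an ODE.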
function v = f_dora(f, x0, y0, n)
% Numerically build the n-th derivative of y along y' = f(x,y), then evaluate at (x0,y0).
h = 0.0001;
p_y = @(f,x,y) (f(x,y+h) - f(x,y-h)) / (2*h);   % central difference in y
p_x = @(f,x,y) (f(x+h,y) - f(x-h,y)) / (2*h);   % central difference in x
for i = 1:n
    g = @(x,y) f(x,y)*p_y(f,x,y) + p_x(f,x,y);  % chain rule: df/dx = f_x + f_y*f
    f = g;                                      % feed the derivative back in
end
v = f(x0, y0);
end
This function takes an input f, like f = @(x,y) y - x^2 + 1, and should output $f^{(n)}(x,y)$ (the nth derivative of f, where y depends on x).
The steps that I follow to solve this problem are:
First, I define the functions p_y and p_x, which are central-difference approximations of the partial derivatives of f.
Then, I define the function g = p_x + p_y*f(x,y), which is the total derivative of f (by the chain rule, since y' = f(x,y)).
Finally, I want to apply this construction n times, feeding each g_k back in as the new function, something like this:
g_1 = @(x,y) f(x,y)*p_y(f,x,y) + p_x(f,x,y);
g_2 = @(x,y) g_1(x,y)*p_y(g_1,x,y) + p_x(g_1,x,y);
...
g_n = @(x,y) g_{n-1}(x,y)*p_y(g_{n-1},x,y) + p_x(g_{n-1},x,y);
g_n(x,y)
or the equivalent algorithm (given as an image in the original post).
But when I use the first code I get incorrect results for n >= 3.
An example to try the code: using f_dora(@(x,y) y - x^2 + 1, 0, 0.5, n), the correct result for n=1 is 1.5, for n=2 it is -0.5, and for n=3 it is -0.5...
Could someone help me find the errors in my code? Or does anyone know a better way to calculate these partial derivatives without going symbolic?
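For what it's worth, the same recursion in Python makes the nesting explicit; this is a minimal sketch with my own names, not a drop-in fix. One plausible culprit for the breakdown at n >= 3: each level of nested central differences amplifies floating-point round-off by roughly a factor of 1/h, so with h = 0.0001 three levels can consume most of the available precision.

def total_derivative(f, h=1e-4):
    # g(x, y) = f_x(x, y) + f_y(x, y)*f(x, y): the chain rule along y' = f(x, y)
    def g(x, y):
        f_x = (f(x + h, y) - f(x - h, y)) / (2 * h)
        f_y = (f(x, y + h) - f(x, y - h)) / (2 * h)
        return f_x + f_y * f(x, y)
    return g

def f_dora(f, x0, y0, n):
    for _ in range(n):
        f = total_derivative(f)   # each pass nests one more difference level
    return f(x0, y0)

print(f_dora(lambda x, y: y - x**2 + 1, 0.0, 0.5, 1))   # ~1.5

Using a larger step h at the outer levels, or automatic differentiation, sidesteps the amplification without resorting to symbolic algebra.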
I have been away from Mathematica for quite a while and am trying to fix some old notebooks from v4 that are no longer working under v11. I'm also a tad rusty.
I am attempting to use functional minimization to fit a polynomial of variable degree to an arbitrary function (F) given a starting guess (ao) and domain of interest (d). Note that while F is arbitrary, its nature is such that the integral of the product of F and a polynomial (or F^2) can always be evaluated algebraically.
For the sake of example, I'll use the following inputs:
ao = { 1, 2, 3, 4 }
d = { -1, 1 }
F = Sin[x]
To do so, I create an array of 'indexed' variables
polyCoeff = Array[a, Length[ao], 0]
Result: polyCoeff = {a[0], a[1], a[2], a[3]}
I then create the polynomial itself using the following
genPoly[{},x_] := 0
genPoly[a_List,x_] := First[a] + x genPoly[Rest[a],x]
poly = genPoly[polyCoeff,x]
Result: poly = a[0] + x (a[1] + x (a[2] + x a[3]))
I then define my objective function as the integral of the squared difference between this poly and the function I am attempting to fit:
Q = Integrate[(poly - F)^2, {x, d[[1]], d[[2]]}]
Result: Q = 0.545351 - 2. a[0.]^2 + 0.66667 a[1.]^2 + ...
And this is where things break down. poly looks just as I expected: a polynomial in x with coefficients that look like a[0], a[1], a[2], .... But Q is not exactly what I expected: I did get a new polynomial, but its coefficients contain a[0.], a[1.], a[2.], ...
The next step is to create the initial guess for FindMinimum
init = Transpose[{polyCoeff,ao}]
Result: {{a[0], 1}, {a[1], 2}, {a[2], 3}, {a[3], 4}}
This looks fine.
But when I make the call to FindMinimum, I get an error because the coefficients appearing in the objective (a[0.], a[1.], ...) do not match those passed in the initial guess (a[0], a[1], ...).
S = FindMinimum[Q,init]
So I think my question is: how do I keep Integrate from changing the arguments of my coefficients? But I am open to other approaches as well. Keep in mind, though, that this is "legacy" work that I really don't want to completely revamp.
Thanks much for any/all help.
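For a sanity check independent of the notebook, the same least-squares fit can be set up in Python with SymPy; since Q is quadratic in the coefficients, minimizing it reduces to an exact linear solve. A minimal sketch (variable names are mine):

import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:4')                  # coefficients a0..a3
poly = sum(ai * x**i for i, ai in enumerate(a))

# Q = integral of the squared error over the domain of interest [-1, 1]
Q = sp.integrate((poly - sp.sin(x))**2, (x, -1, 1))

# Q is quadratic in the a_i, so dQ/da_i = 0 is a linear system
sol = sp.solve([sp.diff(Q, ai) for ai in a], a)
print(sol)

The coefficients from this solve should match what FindMinimum converges to once the a[0.] mismatch is resolved, since the quadratic objective has a unique minimum.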
Question:
Minimising x1 + x2 + ... + xn
subject to k1*x1 + k2*x2 + ... + kn*xn = T
k1, k2, ..., kn and T are known integers and > 0
k1 > k2 > k3 > ... > kn
All the x are also integers and >= 0
Find all the x
I was trying to use Rglpk and Glpk, but I can't find an example with only one row in the constraint matrix. Is this integer programming? And is it solvable? Many thanks.
Some Ruby code I wrote:
ks = [33, 18, 15, 5, 3]
t = 999
problem = Rglpk::Problem.new
problem.name = "test"
problem.obj.dir = Rglpk::GLP_MIN
rows = problem.add_rows(1)
rows[0].name = "weighted sum of x equals t"
# GLP_FX fixes the row value at exactly t; GLP_UP would only enforce <= t
rows[0].set_bounds(Rglpk::GLP_FX, t, t)
cols = problem.add_cols(ks.size)
ks.each_with_index do |k, index|
  cols[index].name = "k: #{k}"
  cols[index].set_bounds(Rglpk::GLP_LO, 0.0, 0.0)  # x >= 0; upper bound is ignored for GLP_LO
end
problem.obj.coefs = Array.new(ks.size, 1)
problem.set_matrix(ks)
# Note: simplex solves the LP relaxation, so the xs may come back fractional;
# for integer xs the columns must also be marked as integer variables and the
# problem solved as a MIP.
problem.simplex
minimum_x_sum = problem.obj.get
xs = cols.map { |col| col.get_prim }
Yes, it is an integer program, a rather famous one: the so-called "knapsack problem". You can therefore solve it with either of the packages you mention (provided the number of variables is not too great), but a much more efficient approach is to use dynamic programming (see the link above). The use of DP here is quite simple to implement (a minimal sketch follows the notes below). This is one Ruby implementation I found by Googling.
I should mention a few related tidbits. Firstly, your constraint is an equality constraint:
k1*x1 + k2*x2 + ... + kn*xn = T
but this is normally assumed to be an inequality by (DP) knapsack algorithms:
k1*x1 + k2*x2 + ... + kn*xn <= T
To deal with an equality constraint you can either modify the algorithm slightly, or add the term
M*(T - (k1*x1 + k2*x2 + ... + kn*xn))
to the objective you are minimizing, where M is a very large number (10^6, perhaps), thereby forcing equality at the optimal solution. (When expanded, the coefficient of each xi becomes 1 - M*ki. The constant term M*T can be disregarded.)
Two more details:
DP algorithms permit the variables in the objective to have coefficients other than 1 (there is no gain in efficiency when all the coefficients equal 1); and
If the DP algorithm maximizes (rather than minimizes) the objective, you can simply negate the coefficients of the variables in the objective to obtain an optimal solution to the minimization problem.
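For concreteness, here is a minimal dynamic-programming sketch in Python (the standard unbounded change-making formulation; function and variable names are mine). It handles the equality constraint directly, so neither workaround above is needed:

def min_count(ks, T):
    INF = float("inf")
    best = [0] + [INF] * T       # best[t] = minimal sum of x with weighted sum exactly t
    choice = [None] * (T + 1)    # which k achieved best[t]
    for t in range(1, T + 1):
        for k in ks:
            if k <= t and best[t - k] + 1 < best[t]:
                best[t] = best[t - k] + 1
                choice[t] = k
    if best[T] == INF:
        return None              # T is not representable
    xs = {k: 0 for k in ks}      # recover the x values by walking back
    t = T
    while t > 0:
        xs[choice[t]] += 1
        t -= choice[t]
    return best[T], xs

For ks = [33, 18, 15, 5, 3] and T = 999 this reports 33 as the minimal number of summands (one optimum: thirty 33s plus three 3s), in O(len(ks)*T) time.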
How can I maximize the following expression with respect to $\tau$ in Mathematica 9:
$$\max_\tau\ \sqrt{(1 - \tau)\,y^i} + \sqrt{\tau\, y}$$
I want to find something like
$$\tau^i = \frac{y}{y^i + y}$$
Let x = τ; then f(x, y) = sqrt((1-x)*y^c) + sqrt(x*y), writing c for the exponent i.
I'll assume that c is a constant, so there are only two independent variables here.
So take the first partial derivative w.r.t. x and set that equal to zero.
Wolfram Alpha can help you with that.
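For reference, carrying that derivative computation out (treating $y^c$ and $y$ as positive constants) recovers the expected form:

$$\frac{\partial f}{\partial x} = -\frac{\sqrt{y^c}}{2\sqrt{1-x}} + \frac{\sqrt{y}}{2\sqrt{x}} = 0 \;\Longrightarrow\; y\,(1-x) = y^c\, x \;\Longrightarrow\; x = \frac{y}{y^c + y},$$

which is the desired $\tau^i = y/(y^i + y)$ with $c = i$.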
I have a sine wave whose parameters I can determine (they are user input). It's of the form y = a*sin(m*x + t).
I'd like to know whether anyone knows an efficient algorithm to figure out the range of y over a given interval [0, x] (x is again another input).
For example:
for y = sin(x) (i.e. a=1, t=0, m=1), for the interval [0, 4] I'd like an output like [1, -0.756802]
Please keep in mind that m and t can be anything. Thus the y-curve does not have to start (or end) at 0 (or 1); it could start anywhere.
Also, please note that x will be discrete.
Any ideas?
PS: I'll use python for implementing the algorithm.
Since the function y(x) = a*sin(m*x + t) is continuous, the maximum will be either at one of the interval's ends or at an interior maximum, where dy/dx equals zero.
So:
1. Find the values of y(x) at the ends of the interval.
2. Find out whether dy/dx = a*m*cos(m*x + t) has zero(s) in the interval, and find the values of y(x) at those zero(s).
3. Choose the point where y(x) has the largest value (and likewise the smallest, for the other end of the range).
If the interval covers more than one full period, the result is just +/- a.
For less than one period you can evaluate y at the start/end points and then find any extrema between them by solving for y' = 0, i.e. cos(m*x + t) = 0.
All the answers are more or less the same. Thanks, guys =)
I think I'd go with something like the following (note that I am renaming the variable I called "x" to "end"; this "x" denoted the end of my interval on the x-axis):
1) Evaluate y at 0 and at "end", and use an if-block to assign the two values to the preliminary "min" and "max" of the range.
2) Evaluate the number of revolutions: evolNr = (m*end)/(2*Pi). If evolNr >= 1, return [-a, a].
3) If evolNr < 1: first find the root of the derivative nearest 0, which is at firstRoot = (Pi/2 - t + q*Pi)/m, where q = ceil((t - Pi/2)/Pi) is the smallest integer making firstRoot >= 0 (taking m > 0). This gives me the first root at some position x >= 0. From then on I know that all other extrema lie between firstRoot and "end", spaced Pi/m apart.
In code: for (xc = firstRoot; xc < end; xc += Pi/m) { evaluate y at xc; if y > 0 it's a maximum, so update "max"; otherwise update "min" }
return [min, max]
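A minimal Python sketch of that plan (my own naming; it assumes m > 0):

import math

def sine_range(a, m, t, end):
    """Range of y = a*sin(m*x + t) over [0, end], assuming m > 0."""
    y = lambda x: a * math.sin(m * x + t)
    lo, hi = sorted((y(0.0), y(end)))
    # A full period fits in the interval: the range is all of [-|a|, |a|].
    if m * end >= 2 * math.pi:
        return [-abs(a), abs(a)]
    # Critical points: cos(m*x + t) = 0  =>  x = (pi/2 - t + q*pi)/m
    q = math.ceil((t - math.pi / 2) / math.pi)   # smallest q giving x >= 0
    x = (math.pi / 2 - t + q * math.pi) / m
    while x < end:
        lo, hi = min(lo, y(x)), max(hi, y(x))
        x += math.pi / m                         # extrema are pi/m apart
    return [lo, hi]

print(sine_range(1, 1, 0, 4))   # [-0.756802..., 1.0], matching the example above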