Solving 2d movement differential equations - wolfram-mathematica

I'm trying to solve a really simple problem: finding an object's position under the force {k1 + k2*y, k3*t}. Here's what I'm entering into Mathematica 7:
DSolve[{
x''[t]*m == k1 + k2*y[t],
y''[t]*m == k3*t,
y'[0] == 0,
y[0] == 0,
x'[0] == 0,
x[0] == 0
}, {x[t], y[t]}, t]
and I get this error:
DSolve::deqn: Equation or list of equations expected instead of True in the first argument {-C m (x^\[Prime])[t]^2==k1+k2 y[t],m (y^\[Prime]\[Prime])[t]==k3 t,True,y[0]==0,True,x[0]==0}.
It seems that Mathematica is unhappy about the boundary condition x'[0] == 0. Why is that?

It worked as you typed it ... try to do it in a fresh notebook

When I cut and paste the code you've posted into M'ma 7.0.1 and evaluate, I get the result
{{x[t] -> (60*k1*m*t^2 + k2*k3*t^5)/(120*m^2),
y[t] -> (k3*t^3)/(6*m)}}
Your M'ma error message tells me you have only one prime on x (i.e. x'[t]) in your actual M'ma input: the equation it cites, -C m (x^\[Prime])[t]^2==k1+k2 y[t], does not match the first line of your code above.
I also suspect that x'[0] and y'[0] have been assigned the value zero previously, which is causing x'[0]==0 and y'[0]==0 to both collapse to True. The best way to test: kill your kernel and re-evaluate the input above (after fixing the typos).

Both belisarius and Eric Towers have suggested killing the kernel and re-evaluating. They're most likely correct in that something has a prior definition. You can check whether that is the case via
?<variable name>
As an alternative to killing the kernel, I'd suggest clearing their values via
Clear[x, y, k1, k2, k3, m]
Or, if you really want to rid yourself of any definition of a variable, there's Remove. This way, you won't have to recalculate anything else from your current session.
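For this particular session, a minimal check-and-clean sketch (assuming the stray definitions really are attached to these symbols) could look like this, with each line evaluated as its own input:
?x
Clear[x, y, k1, k2, k3, m]
(* Remove[x] would instead delete the symbol x entirely, for a truly fresh start *)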

Related

Replacements (/.) within Minimize function Mathematica

I want to find the parameter value that minimizes the solution to an ODE at a given x.
For example, the function s(w) solves the ODE at parameter value w. I want to find the w such that y(5) is smallest. Here is the code:
s[w_] := NDSolve[{y'[x] == y[x] w Cos[x + y[x]], y[0] == 1}, y, {x, 0, 30}]
NMinimize[y[5] /. s[w], {w}]
I get the following error:
ReplaceAll::reps: {NDSolve[{(y^\[Prime])[x]==w Cos[x+y[<<1>>]] y[x],y[0]==1},y,{x,0,30}]} is neither a list of replacement rules nor a valid dispatch table, and so cannot be used for replacing.
It looks like it does not successfully substitute the numeric value of w into the ODE, so s(w) is never solved and y[5] cannot be replaced by the solution. How do I fix this problem so that the minimum can be found?
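One common workaround (a sketch, not from the original thread; the helper name obj is hypothetical) is to restrict the definitions to numeric arguments, so that NMinimize's symbolic pre-evaluation never reaches NDSolve:
Clear[s, obj];
s[w_?NumericQ] := NDSolve[{y'[x] == y[x] w Cos[x + y[x]], y[0] == 1}, y, {x, 0, 30}];
obj[w_?NumericQ] := First[y[5] /. s[w]];  (* evaluate the interpolating solution at x = 5 *)
NMinimize[obj[w], w]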

Mathematica Does Not Find the Global Maximum, It Just Prints Out My Function

I am using a Mathematica notebook to find an analytical solution to the following constrained optimization problem:
max over x, y of  y^(1-b) * x^b * (1 - a*(x/(x+1)))   s.t.   M = P*x + q*y
I have tried the following code:
Maximize[{y^(1-b)(x^b(1-a(x/(x+1)))), M==Px+qy}, {x,y}]
and it returns the same function as output. In the function, a, b, M, P, and q are all parameters. I have also tried assigning the parameters arbitrary values, to test whether Mathematica is unsure how to deal with the parameters. I used the following code:
Maximize[{y^(1-0.5)(x^0.5(1-0.75(x/(x+1)))), 1000=5x+5y},{x,y}]
and it returns the same function. However, if I remove the constraint it will solve the optimization problem.
Maximize[{y^(1-0.5)(x^0.5(1-0.75(x/(x+1))))},{x,y}]
{7.2912*^59,{x->2.89727*^60,y->2.93582*^60}}
I am not sure what to do. After reading about constrained optimization problems, the syntax appears to be correct. Sorry if this question is really basic; I am very new to Mathematica. Also, since I am using a notebook, I could not paste in the output from the first two lines.
The constraint is incorrectly specified: it should be 1000 == 5 x + 5 y. Maximize works better with exact numbers.
Maximize[{Rationalize[y^(1 - 0.5) (x^0.5 (1 - 0.75 (x/(x + 1))))],
1000 == 5 x + 5 y}, {x, y}] // N
(* {25.7537, {x -> 96.97, y -> 103.03}} *)

Mathematica DSolve fails unless I define the unknown function first

I start Mathematica and load a notebook with the following:
DSolve[{p'[t] == p[t], p[0] == {x0, y0}}, p[t], t]
When I evaluate that, Mathematica fails to solve and returns:
DSolve: For some branches of the general solution, the given boundary conditions lead to an empty solution.
I then type in a line of code that sets p[t], so that the notebook contains:
p[t_] = {x[t], y[t]};
DSolve[{p'[t] == p[t], p[0] == {x0, y0}}, p[t], t]
When I evaluate that, Mathematica returns a solution:
{{x[t] -> E^t x0, y[t] -> E^t y0}}
If I then erase the line p[t_] = {x[t], y[t]} so the code is back in its original state, Mathematica (weirdly!) still returns a solution. But when I restart Mathematica and repeat the above steps, it goes back to failing to solve the equation.
I'm using Mathematica 11.2. Am I doing something wrong in the above code?
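The definition p[t_] = {x[t], y[t]} works because p[t], p'[t], and p[0] then all expand into explicit component lists before DSolve sees them. For reference, a component-wise formulation (a sketch, not from the original post) avoids the undefined symbol p altogether:
DSolve[{x'[t] == x[t], y'[t] == y[t], x[0] == x0, y[0] == y0}, {x[t], y[t]}, t]
(* {{x[t] -> E^t x0, y[t] -> E^t y0}} *)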

How to preserve results from Maximize in Mathematica?

I aim to calculate and preserve the results from the maximization of a function with two arguments and one exogenous parameter, when the maximum cannot be derived (in closed form) by Maximize. For instance, let
f[x_, y_, a_] = Max[0, Min[a - y, 1 - x - y]]
be the objective function where a is positive. The maximization shall take place over [0,1]^2, therefore I set
m[a_]=Maximize[{f[x, y, a], 0 <= x <= 1 && 0 <= y <= 1 && 0 <= a}, {x,y}]
Obviously m can be evaluated at any point a and it is therefore possible to plot the maximizing x by employing
Plot[x /. m[a][[2]], {a, 0.01, 1}]
As I need to do several plots and further derivations containing the optimal solutions x and y (which of course are functions of a), I would like to preserve/save the results from the optimization for further use. Is there an elegant way to do this, or do I have to write some kind of loop to extract the values myself?
Now that I've seen the full text of your comment on my original comment, I suspect that you do understand the differences between Set and SetDelayed well enough. I think what you may be looking for is memoisation, sometimes implemented a bit like this:
f[x_,y_] := f[x,y] = Max[0,Min[a-y,1-x-y]]
When you evaluate, for example, f[3,4] for the first time, it evaluates the entire expression to the right of the :=. That rhs is itself the assignment f[3,4] = Max[0, Min[a - 4, 1 - 3 - 4]]. The next time you evaluate f[3,4], Mathematica already has a value for it, so it doesn't need to recompute it; it just recalls it. In this example the stored value would be Max[0, Min[a - 4, -6]], of course.
I remain a little uncertain of what you are trying to do so this answer may not be any use to you at all.
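If the aim is simply to reuse the Maximize results across several plots, the same memoisation pattern can be applied to m itself. A sketch, assuming f is defined with SetDelayed so that a remains a free parameter; the helper xOpt is hypothetical:
f[x_, y_, a_] := Max[0, Min[a - y, 1 - x - y]];
m[a_?NumericQ] := m[a] = Maximize[{f[x, y, a], 0 <= x <= 1 && 0 <= y <= 1}, {x, y}];
xOpt[a_?NumericQ] := x /. m[a][[2]];  (* the maximizing x for a given a *)
Plot[xOpt[a], {a, 0.01, 1}]  (* each value of a is now maximized only once *)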
Simple approach
results = Table[{x, y, a} /. m[a][[2]], {a, 0.01, 1, .01}]
ListPlot[{#[[3]], #[[1]]} & /@ results, Joined -> True]
(The Set = is OK here so long as a is not previously defined.)
If you want to utilise Plot[]'s automatic evaluation, take a look at Reap[]/Sow[]:
{p, data} = Reap[Plot[x /. Sow[m[a]][[2]], {a, 0.01, 1}]];
Show[p]
(This takes a few minutes, as the function output is a mess.)
Hmm, try this again, assuming you want x, y, a and the maximum value:
{p, data} = Reap[Plot[x /. Sow[{a, m[a]}][[2, 2]], {a, 0.01, .1}]];
Show[p]
results = {#[[1]], x /. #[[2, 2]], y /. #[[2, 2]], #[[2, 1]]} & /@ data[[1]]
BTW, your function appears to be independent of x over some ranges, which is why the plot is a mess.

Problem performing a substitution in a multiple derivative

I have a basic problem in Mathematica which has puzzled me for a while. I want to take the m'th derivative of x*Exp[t*x] and then evaluate it at x = 0, but the following does not work correctly. Please share your thoughts.
D[x*Exp[t*x], {x, m}] /. x -> 0
Also, what does this error mean?
General::ivar: 0 is not a valid variable.
Edit: my previous example (D[Exp[t*x], {x, m}] /. x -> 0) was trivial. So I made it harder. :)
My question is: how do I force it to evaluate the derivative first and then do the substitution?
As pointed out by others, Mathematica does not (in general) know how to take the derivative an arbitrary number of times, even if you specify that the number is a positive integer.
This means that the D[expr, {x, m}] command remains unevaluated, and when you then substitute x -> 0, Mathematica is trying to take a derivative with respect to a constant, which yields the error message.
In general, what you want is the m'th derivative of the function evaluated at zero.
This can be written as
Derivative[m][Function[x,x Exp[t x]]][0]
or
Derivative[m][# Exp[t #]&][0]
You then get the table of coefficients
In[2]:= Table[%, {m, 1, 10}]
Out[2]= {1, 2 t, 3 t^2, 4 t^3, 5 t^4, 6 t^5, 7 t^6, 8 t^7, 9 t^8, 10 t^9}
But a little more thought shows that you really just want the m'th term in the series, so SeriesCoefficient does what you want:
In[3]:= SeriesCoefficient[x*Exp[t*x], {x, 0, m}]
Out[3]= Piecewise[{{t^(-1 + m)/(-1 + m)!, m >= 1}}, 0]
The final output is the general form of the m'th series coefficient; multiplying it by m! recovers the m'th derivative at zero. The Piecewise is not really necessary, since the expression actually holds for all non-negative integers.
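As a quick sanity check on the relation between the two (a hypothetical check, not part of the original answer), the m'th derivative at zero should be m! times the series coefficient:
Table[{Derivative[m][# Exp[t #] &][0], m! SeriesCoefficient[x Exp[t x], {x, 0, m}]}, {m, 1, 4}]
(* {{1, 1}, {2 t, 2 t}, {3 t^2, 3 t^2}, {4 t^3, 4 t^3}} *)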
Thanks to your update, it's clear what's happening here. Mathematica doesn't actually calculate the derivative; you then replace x with 0, and it ends up looking at this:
D[Exp[t*0],{0,m}]
which obviously is going to run into problems, since 0 isn't a variable.
I'll assume that you want the mth partial derivative of that function w.r.t. x. The t variable suggests that it might be a second independent variable.
It's easy enough to do without Mathematica: D[Exp[t*x], {x, m}] = t^m Exp[t*x]
And if you evaluate the limit as x approaches zero, you get t^m, since lim(Exp[t*x]) = 1. Right?
Update: Let's try it for x*exp(t*x)
the mth partial derivative w.r.t. x is easily had from Wolfram Alpha:
t^(m-1)*exp(t*x)(t*x + m)
So if x = 0 you get m*t^(m-1).
Q.E.D.
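For what it's worth, that closed form can be checked in Mathematica for a few values of m (a hypothetical check, not part of the original answer):
Table[Simplify[D[x Exp[t x], {x, m}] - t^(m - 1) Exp[t x] (t x + m)], {m, 1, 5}]
(* {0, 0, 0, 0, 0} *)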
Let's see what is happening with a little more detail:
When you write:
D[Sin[x], {x, 1}]
you get an expression in with x in it
Cos[x]
That is because the x in the {x,1} part matches the x in the Sin[x] part, so Mma understands that you want to take the derivative with respect to that symbol.
But this x does NOT act as a Block variable for that statement, isolating its meaning from any other x you have in your program; instead it gets evaluated, which is what enables the chain rule. For example:
In[85]:= z=x^2;
D[Sin[z],{x,1}]
Out[86]= 2 x Cos[x^2]
See? That's perfect! But there is a price.
The price is that the symbols inside the derivative get evaluated as the derivative is taken, and that is spoiling your code.
Of course there are a lot of tricks to get around this; some have already been mentioned. From my point of view, one clear way to understand what is happening is:
f[x_] := x*Exp[t*x];
g[y_, m_] := D[f[x], {x, m}] /. x -> y;
{g[p, 2], g[0, 1]}
Out:
{2 E^(p t) t + E^(p t) p t^2, 1}
HTH!
