Mathematica Solve function gives incorrect solution, why?

Please look at the attached file. I'm using the Mathematica Solve function to solve some simple equations from physics. One of the equations uses an If function, which defines the function's value when a condition is met. Solve finds an almost correct solution, which is itself a ConditionalExpression. For the independent variable θ = 90° the answer given by Solve is wrong. It seems that Solve forgets the case where Cos equals 0. Why? Thanks.
Regards/Mikael

Specifying theta as a real solves the problem.
w = 1500;
mus = 0.4;
fv = f Cos[theta Degree];
fh = f Sin[theta Degree];
fn = fv + w;
ff = If[mus fn >= 0, mus fn, 0];
frul = Quiet@Solve[fh == ff, f, Reals];
f /. frul /. theta -> 90.
{600.}
f /. frul /. theta -> 90
{Undefined}
Same again, with radians.
w = 1500;
mus = 0.4;
fv = f Cos[theta];
fh = f Sin[theta];
fn = fv + w;
ff = If[mus fn >= 0, mus fn, 0];
frul = Quiet@Solve[fh == ff, f, Reals];
f /. frul /. theta -> N[Pi/2]
{600.}
f /. frul /. theta -> Pi/2
{Undefined}

Many thanks Chris.
Yes, giving it real numbers yields correct answers. This is because Cos[90.0°] is 6.123233995736766*10^-17, whereas Cos[90°] is exactly 0. The solution is the same, but we are fooling it with finite machine precision.
If you ask me, this is a bug in Mathematica's equation solver. The ConditionalExpression produced by Solve should test for Cos[] >= 0. It tests for Cos[] > 0 instead, which is not true for Cos[90°].
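A possible workaround when the angle is known up front (a sketch, reusing the definitions above): substitute the exact angle before calling Solve, so the If condition is decided exactly instead of being folded into the ConditionalExpression that drops the Cos equal to 0 boundary.
w = 1500;
mus = 0.4;
fv = f Cos[theta Degree];
fh = f Sin[theta Degree];
fn = fv + w;
ff = If[mus fn >= 0, mus fn, 0];
Solve[(fh == ff) /. theta -> 90, f, Reals]
{{f -> 600.}}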


Maple solve system of equations showing "RootOf"

I'm trying to solve the following system:
gx:=2*x*exp(x^2+y^2)-4*y
gy:=2*y*exp(x^2+y^2)-4*x
sys:={gx=0,gy=0}:
solve(sys,{x,y})
It then displays the following output:
{x = 0, y = 0}, {x = RootOf(2*_Z^2-ln(2)), y = RootOf(2*_Z^2-ln(2))}, {x = -RootOf(2*_Z^2-ln(2)-I*Pi), y = RootOf(2*_Z^2-ln(2)-I*Pi)}
The first "root" (0,0) is correct, however how do i remove that root of and whatever Z that is? Is it possible to get the correct answer out of it?
This is a great scenario for the function allvalues.
From the help page:
compute all possible values of expressions involving RootOfs
gx:=2*x*exp(x^2+y^2)-4*y;
gy:=2*y*exp(x^2+y^2)-4*x;
sys:={gx=0,gy=0}:
sol := solve(sys,{x,y}):
seq(allvalues(sol[i]), i= 1..numelems([sol])):
print~([%])[];
Notice, however, that you are not getting all solutions this way. There are infinitely many solutions to the problem; to get all solutions, use the optional argument allsolutions = true in the solve command:
sol2 := solve(sys,{x,y},allsolutions = true):
seq(allvalues(sol2[i]), i= 1..numelems([sol2])):
print~([%])[];
If you run this, you will see a new variable _Z1 that has a trailing tilde (~) - this tilde means there are assumptions on the variable. To see these assumptions use
about(_Z1);
Originally _Z1, renamed _Z1~:
is assumed to be: integer
This means that the above solutions hold for any integer _Z1. Those are all your solutions, written in the expected way.
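For intuition, the real RootOf here can also be resolved by hand. Multiplying the two equations gx = 0 and gy = 0 together gives 4*x*y*exp(2*(x^2+y^2)) = 16*x*y, so either x = y = 0 or x^2 + y^2 = ln(2). In the latter case exp(x^2+y^2) = 2, and gx = 0 reduces to 4*x = 4*y, i.e. x = y, hence 2*x^2 = ln(2) and x = ±sqrt(ln(2)/2) ≈ ±0.5887, which is exactly RootOf(2*_Z^2 - ln(2)) and the value fsolve returns below.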
You can use fsolve to find numerical solutions:
restart;
gx:=2*x*exp(x^2+y^2)-4*y;
gy:=2*y*exp(x^2+y^2)-4*x;
sys:={gx=0,gy=0}:
fsolve(sys,{x,y})
{x = .5887050113, y = .5887050113}
sys:={gx=0.,gy=0.}:
solve(sys,{x,y})
{x = 0., y = 0.}, {x = .5887050112, y = .5887050112}, {x = -.5887050112, y = -.5887050112}, {x = -.9887236333-.7943556085*I, y = .9887236333+.7943556085*I}, {x = .9887236333+.7943556085*I, y = -.9887236333-.7943556085*I}

Performance w/ calculating Hessian

[edit] The part about "f" is solved. Here is what I did:
Instead of using:
X = (F * W' - Y);
f = X' * X;
I'm now using:
X = F*W';
A = X'*F*W';
B = -2*X'*Y;
Y1 = Y'*Y;
f = A + B + Y1
This will give a massive speed up. Still, the problem with the Hessian of f remains.
[/edit]
So, I'm having some serious performance problems with a quadratic optimization problem I'm trying to solve in Matlab. The problem is not the optimization per se, but the calculation of the target function and the Hessian. Right now it looks like this (F and Y aren't random at all and will hold real data; also, the problem is not necessarily unconstrained, because then the solution would of course be (F'F)^-1 * F'*Y):
W_a = sym('w_a_%d', [1 96]);
W_b = sym('w_b_%d', [1 96]);
for i = 1:96
W(1,2*(i-1)+1) = W_a(1,i);
W(1,2*i) = W_b(1,i);
end
F = rand(10000,192);
Y = rand(10000,1);
q = [];
for i = 1:192
q = [q sum(-Y(:).*F(:,i))];
end
q = 2*q;
q = double(q);
X = (F * W' - Y);
f = X' * X;
H = hessian(f);
H = double(H);
A=[]; b=[];
Aeq=[]; beq=[];
lb=[]; ub=[];
options=optimset('Algorithm', 'active-set', 'Display', 'off');
[xsol,~,exitflag,output]=quadprog(H, q, A, b, Aeq, beq, lb, ub, [], options);
The thing is: calculating f and H takes like forever.
I'm not expecting ways to significantly speed this up, since Matlab is optimized for stuff like this. But maybe someone knows some open-license software that's almost as fast as Matlab, so that I could calculate f and H with that software on a faster machine (which unfortunately has no Matlab license ...) and then let Matlab do the optimization.
Right now I'm kinda lost in this :/
Thank you very much in advance. Even some keywords could help me here like "Look for software xy"
If speed is your concern, symbolic methods are usually the wrong approach (especially for large systems, or if you need to run something repeatedly). You'll need to calculate your Hessian numerically. There's an excellent utility on the MathWorks File Exchange that can do this for you: the DERIVEST suite. It includes a numeric hessian function. You'll need to formulate your f as a function of X.
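Worth noting for this particular problem (assuming the quadratic structure shown in the question): since f = (F*W' - Y)'*(F*W' - Y) is quadratic in W, the Hessian is constant with the closed form H = 2*F'*F, and the linear term is q = -2*F'*Y, which matches the q built in the loop above. Both can be computed directly from the numeric matrices, e.g. H = 2*(F'*F) in Matlab, with no symbolic variables involved at all.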

Mathematica NDSolve

I have a question about NDSolve function in Mathematica.
I have an oscillator defined by these two equations:
x' = v
v' = -x - u*v^3
where u is some constant.
How do I set up NDSolve to solve this? I tried the following code (it has to depend on time), but it doesn't work:
eq1 = x'[t] == v;
eq2 = v' == -x[t] - u*v^3;
eq3 = x[0] == 2;
(the initial displacement is 2 m).
s = NDSolve[{eq1, eq2, eq3}, x, {t, 0, 30}]
Thank you very much...
You need to observe that the first equation, once differentiated with respect to t, can be used to substitute for v[t]. But then the second equation becomes a second-order ODE and needs to be supplied with an extra initial condition. We will give
v[0] == x'[0] == some number
Then, after solving this ODE for x, you can recover v[t] == x'[t].
I give you the solution in terms of a Manipulate so that the situation becomes geometrically clear to you.
(* First equation *)
v[t] = x'[t];
(*
Differentiate this equation once and substitute
for v[t] in the second equation
*)
Manipulate[
With[{u = Constant, der = derval},
res = NDSolve[{x''[t] == -x[t] - u*x'[t]^3, x[0.] == 2,x'[0.] == der},
x, {t, 0., 30.}] // First;
Plot[Evaluate[{x[t], v[t]} /. res], {t, 0, 30}, PlotRange -> All,
Frame -> True,Axes -> None, ImageSize -> 600]
],
{{Constant, 0., TraditionalForm@(u)}, 0., 3, .1},
{{derval, -3., TraditionalForm@(v[0] == x'[0])}, -3, 3, .1}
]
I hope this helps you, but next time, before you ask, brush up on the theory first: as you can see, the question concerns very basic and elementary mathematics, not Mathematica programming. Good luck!
You need to specify a numeric value for your u as well as an initial condition for v[t]:
u=1.0;
solution=NDSolve[{x'[t]==v[t], v'[t]==-x[t]-u v[t]^3,x[0]==2,v[0]==-1},{x,v},{t,0,1}]
Plot[{solution[[1,1,2]][t],solution[[1,2,2]][t]},{t,0,1}]
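As a side note, a slightly cleaner way to plot the same result (a sketch, assuming the solution variable from the code above) is to apply the rule list returned by NDSolve instead of indexing into it by position:
Plot[Evaluate[{x[t], v[t]} /. First[solution]], {t, 0, 1}]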

Root finding with quadrature

I have a problem with root finding and am having difficulty getting it to work in this instance.
Here is the (somewhat complicated) function I need:
f[x_, lambda_, alpha_, beta_, mu_] =
Module[{gamma},
gamma = Sqrt[alpha^2 - beta^2];
(gamma^(2*lambda)/((2*alpha)^(lambda - 1/2)*Sqrt[Pi]*Gamma[lambda]))*
Abs[x - mu]^(lambda - 1/2)*
BesselK[lambda - 1/2, alpha Abs[x - mu]] E^(beta (x - mu))
];
A function I want to find the root of is defined as the integral of this function so I use quadrature:
F[x_, lambda_, alpha_, beta_, mu_] :=
NIntegrate[f[t, lambda, alpha, beta, mu], {t, 0, x}];
Now the problem: Mathematica has difficulty finding the roots of this equation.
Q[u_, lambda_, alpha_, beta_, mu_] :=
x /. FindRoot[F[x, lambda, alpha, beta, mu] == u, {x, 1}]
Does anybody know why? The integral is defined at all points in R. f here is a density function and F its CDF.
Thanks for reading.
Try using := instead of = in the definition of f and see if that helps.
Incidentally, when you use this SetDelayed syntax, you don't need semicolons to suppress output, because it doesn't immediately create output.
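For concreteness, here is a sketch of the delayed definitions, additionally guarded with ?NumericQ (the same pattern used in the next question's answer) so that NIntegrate and FindRoot never see symbolic arguments:
f[x_?NumericQ, lambda_, alpha_, beta_, mu_] :=
 Module[{gamma = Sqrt[alpha^2 - beta^2]},
  (gamma^(2*lambda)/((2*alpha)^(lambda - 1/2)*Sqrt[Pi]*Gamma[lambda]))*
   Abs[x - mu]^(lambda - 1/2)*
   BesselK[lambda - 1/2, alpha Abs[x - mu]] E^(beta (x - mu))]
F[x_?NumericQ, lambda_, alpha_, beta_, mu_] :=
 NIntegrate[f[t, lambda, alpha, beta, mu], {t, 0, x}]
Q[u_?NumericQ, lambda_, alpha_, beta_, mu_] :=
 x /. FindRoot[F[x, lambda, alpha, beta, mu] == u, {x, 1}]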
Here is some sample output, courtesy of belisarius and WReach:

find minimum of a function defined by integration in Mathematica

I need to find the minimum of a function f(t) = int g(t,x) dx over [0,1]. What I did in Mathematica is as follows:
f[t_] = NIntegrate[g[t,x],{x,-1,1}]
FindMinimum[f[t],{t,t0}]
However, Mathematica halts at the first try, because NIntegrate does not work with a symbolic t; it needs a specific numeric value to evaluate. Although Plot[f[t],{t,0,1}] works perfectly, FindMinimum stops at the initial point.
I cannot replace NIntegrate with Integrate, because the function g is a bit complicated, and with Integrate, Mathematica just keeps running...
Any way to get around it? Thanks!
Try this:
In[58]:= g[t_, x_] := t^3 - t + x^2
In[59]:= f[t_?NumericQ] := NIntegrate[g[t, x], {x, -1, 1}]
In[60]:= FindMinimum[f[t], {t, 1}]
Out[60]= {-0.103134, {t -> 0.57735}}
In[61]:= Plot[f[t], {t, 0, 1}]
Two relevant changes I made to your code:
Define f with := instead of with =. This effectively gives a definition for f "later", when the user of f has supplied the values of the arguments. See SetDelayed.
Define f with t_?NumericQ instead of t_. This says t can be anything numeric (Pi, 7, 0, etc.), but not anything non-numeric (t, x, "foo", etc.).
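With this guard in place, f[t] for a symbolic t simply stays unevaluated rather than handing NIntegrate a symbolic integrand, which is exactly why FindMinimum (which probes f only at numeric points) now works.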
An ounce of analysis...
You can get an exact answer and completely avoid the heavy lifting of the numerical integration, as long as Mathematica can do symbolic integration of g[t,x] w.r.t. x and then symbolic differentiation w.r.t. t. Here is a less trivial example, with a more complicated g[t,x] including polynomial products in x and t:
g[t_, x_] := t^2 + (7*t*x - (x^3)/13)^2;
xMax = 1; xMin = -1;
f[t_?NumericQ] := NIntegrate[g[t, x], {x, xMin, xMax}];
tMin = 0; tMax = 1;
Plot[f[t], {t, tMin, tMax}];
tNumericAtMin = t /. FindMinimum[f[t], {t, tMax}][[2]];
dig[t_, x_] := D[Integrate[g[t, x], x], t];
Print["Differentiated integral is ", dig[t, x]];
digAtXMax = dig[t, x] /. x -> xMax;
digAtXMin = dig[t, x] /. x -> xMin;
tSymbolicAtMin = Resolve[digAtXMax - digAtXMin == 0 && tMin ≤ t ≤ tMax, {t}];
Print["Exact: ", tSymbolicAtMin[[2]]];
Print["Numeric: ", tNumericAtMin];
Print["Difference: ", tSymbolicAtMin[[2]] - tNumericAtMin // N];
with the result:
-Graphics-
Differentiated integral is 2 t x + 98 t x^3 / 3 - 14 x^5 / 65
Exact: 21/3380
Numeric: 0.00621302
Difference: -3.01143 x 10^-9
The minimum of the function can only occur at the zeros of its derivative, so why integrate in the first place? Since f(t) = int g(t,x) dx, its derivative is f'(t) = int D[g[t,x], t] dx, which Mathematica can often compute symbolically.
You can use FindRoot or Solve to find the roots of this derivative.
Then you can verify that the points really are local minima by checking the second derivative (it should be positive at those points).
Then one call to NIntegrate gives the minimum value of f - only one numerical integration!
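As a sketch of this idea on the toy g from the answer above (so the numbers can be checked against FindMinimum's output of {-0.103134, {t -> 0.57735}}):
g[t_, x_] := t^3 - t + x^2;
df[t_] = Integrate[D[g[t, x], t], {x, -1, 1}]; (* f'(t) = 6 t^2 - 2 *)
tmin = t /. FindRoot[df[t] == 0, {t, 1}] (* 0.57735 *)
D[df[t], t] /. t -> tmin (* 12 t = 6.9282 > 0, so a minimum *)
NIntegrate[g[tmin, x], {x, -1, 1}] (* -0.103134, the minimum value *)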
