There was a question on mathgroup, and while looking into it I noticed something I cannot explain. I thought an expert here would know why.
When doing Dt[x[1]]
it gives zero, because during the evaluation of x[1] the last value left is 1, as can be seen from the TracePrint below. Hence '1' is what Dt sees, and Dt[1] is 0.
So Dt[x[1]] is zero:
In[86]:= TracePrint[ Dt[x[1] ]]
During evaluation of In[86]:= Dt[x[1]]
During evaluation of In[86]:= Dt
During evaluation of In[86]:= x[1]
During evaluation of In[86]:= x
During evaluation of In[86]:= 1
During evaluation of In[86]:= 0
Out[86]= 0
That made sense to me, until I typed x[1], and got back x[1]
In[84]:= x[1]
Out[84]= x[1]
But x[1] returning x[1] also made sense to me, since x[1] has no value, so it should return unevaluated.
So my question is: why does it seem that x[1] was evaluated all the way down to '1' during the call above, when at the top-level notebook interface it does not evaluate to 1?
In[87]:= Evaluate[ x[1] ]
Out[87]= x[1]
Thanks
The expression
x[1]
does not evaluate to 1 - it is an indexed variable with an undefined value. The problem is that when you use the one-argument form of Dt, x is treated as a function and 1 as its argument, so you get 0. This becomes clearer when you consider
In[1]:= Dt[x[y]]
Out[1]= Dt[y] Derivative[1][x][y]
If you now use
In[2]:= Dt[x[1],x[1]]
Out[2]= 1
you get 1, since now you differentiate over x[1] considered as a variable. Or,
In[3]:= Dt[x[1]^2, x[1]]
Out[3]= 2 x[1]
You were confused by the evaluation printout: when evaluating an expression, all parts are indeed normally evaluated, but (in the absence of any rules for x) x[1] evaluates back to itself inside Dt as well. What you observed is related to how Dt with one argument interprets that argument.
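To see that interpretation at work, here is a small sketch that replaces the literal 1 with a symbol c (c is just an illustrative name; the exact output form may vary):
Dt[x[c]]
(* Dt[c] Derivative[1][x][c] *)
Dt[x[c]] /. c -> 1
(* 0, because Dt[1] evaluates to 0 *)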
I would like Mathematica to evaluate the square root of a squared variable. Instead it just returns the squared variable under the square root. I wrote a simple piece of code as an example:
x = y^2
z = FullSimplify[Sqrt[x]]
But it is returning y^2 under a square root sign!
This behavior is documented on the Sqrt reference page:
Sqrt[z^2] is not automatically converted to z.
[…]
These conversions can be done using PowerExpand, but will typically be correct only for positive real arguments.
Thus:
In[1]:= x = y^2
Out[1]= y^2
In[15]:= PowerExpand[Sqrt[x]]
Out[15]= y
You can also get simplifications by supplying various assumptions:
In[10]:= Simplify[Sqrt[x], Assumptions -> Element[y, Reals]]
Out[10]= Abs[y]
In[13]:= Simplify[Sqrt[x], Assumptions -> y > 0]
Out[13]= y
In[14]:= Simplify[Sqrt[x], Assumptions -> y < 0]
Out[14]= -y
If you want more help, I suggest asking on the Mathematica Stack Exchange.
I have an array of expressions, each depends on a.
I want to find the minimal positive value, as it depends on a, without having to substitute for a.
For example, if the array is [a^2, 1-2a, 1], then the function, call it MinPositive would return:
(MinPositive[a^2, 1-2a, 1]) /. a-> 0
0
(MinPositive[a^2, 1-2a, 1]) /. a-> 0.7
0.7^2
and so on.
Any ideas?
I would appreciate help to write the MinPositive function so that it can be used, for example, instead of the regular Min function.
Thanks.
Did you have something like this in mind? It returns the expression that results in the min value:
minp[lst_, a_, v_] := Module[{pos},
  pos = Select[lst, ((# /. a -> v) > 0) &];
  Last@Sort[pos, ((#1 /. a -> v) > (#2 /. a -> v)) &]]
minp[{a^2, 1 - 2 a, 1}, a, .2] -> a^2
minp[{a^2, 1 - 2 a, 1}, a, .48] -> 1-2 a
minp[{a^2, 1 - 2 a, 1}, a, 2] -> 1
The expression
[a^2, 1-2a, 1]
is not a well-formed Mathematica expression, perhaps you mean
{a^2, 1-2a, 1}
which is a valid expression for a list of 3 elements. Mathematica doesn't really do arrays as such, though lists can generally be used to model arrays.
On the other hand the expression
MinPositive[a^2, 1-2a, 1]
is a valid call to a function called MinPositive with 3 arguments.
All that to one side, I think you might be looking for a function call such as
MinPositive[{a^2, 1-2a, 1}/.a->0]
in which the value of 0 will be substituted for a inside the call to MinPositive but will not be applied outside that call.
It's not clear from your question whether you want help writing the MinPositive function; if you do, edit your question and make that clear. Further, your question title asks for the maximum positive value, while the body of your question refers to minima. You might want to sort that out too.
EDIT
I don't have Mathematica on this machine so I haven't checked this, but it should be close enough for you to finish off:
minPositive[lst_List] := Min[Select[lst,#>0&]]
which you would then call like this
minPositive[{a^2, 1-2a, 1}]
(NB: I avoid creating functions with a name with an initial capital letter.)
Or, considering your comment, perhaps you want something like
minPositive[lst_List, rl_Rule] := Min[Select[lst/.rl,#>0&]]
which you would call like this:
minPositive[{a^2, 1-2a, 1},a->2]
EDIT 2
The trouble, for you, with an expression such as
(MinPositive[a^2, 1-2a, 1]) /. a-> 0
is that the normal evaluation loop in Mathematica evaluates the MinPositive call before the replacement rule is applied. How then can Mathematica figure out the minimum positive value in the list when a is set to a particular value?
Preventing evaluation of the arguments before the body of the function is applied is achieved by setting the function's attributes: HoldAll (prevents evaluation of all arguments), HoldFirst (prevents evaluation of the first argument only) or HoldRest (prevents evaluation of all but the first argument).
In addition, since "a" by itself is not an argument, you need to use Block to isolate it from (potential) global definitions for "a".
so
SetAttributes[minPositive, HoldAll]
minPositive[lst_List] := Block[{a},Min[Select[lst /. a -> 0, # > 0 &]]]
and even if you explicitly set a to some other value, say
a = 3
then
minPositive[{a^2, 1 - 2 a, 100}]
still returns 1 (the minimum positive entry after the a -> 0 substitution; a^2 -> 0 is not positive), unaffected by the global value of a.
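For comparison, here is a sketch (my own, with minPosNoBlock just an illustrative name) of what happens without the Block wrapper: the global value a = 3 leaks into the body, so the a -> 0 rule finds nothing to replace.
SetAttributes[minPosNoBlock, HoldAll]
minPosNoBlock[lst_List] := Min[Select[lst /. a -> 0, # > 0 &]]

a = 3;
minPosNoBlock[{a^2, 1 - 2 a, 100}] (* 9: inside the body the list evaluates with a = 3 *)
minPositive[{a^2, 1 - 2 a, 100}]   (* 1: Block clears a, so a -> 0 applies *)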
HTH
yehuda
Given an expression (polynomial, or any equation in general) such as
a s^2+b = 0
I want to solve for s^2, to get s^2 = -b/a. We all know that one can't just write
Solve[eq==0,s^2]
because s^2 is not a 'variable'; only s is a 'variable'. So what I do is
eq = a s^2+b;
sol = First@Solve[eq == 0 /. s^2 -> z, z];
z/.sol
-(b/a)
I was wondering if there is a way to do the above, without the intermediate variable substitution?
I tried many commands (Reduce, Collect, Eliminate, Factor, etc.), but with no success.
thanks
--Nasser
One way is to solve for s and then square it...
eq=a s^2+b;
sol = #^2 & @ (s /. Solve[eq == 0, s]) // DeleteDuplicates
Out[1]= {-(b/a)}
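Another option that avoids the explicit substitution, if I remember the three-argument form of Solve correctly, is to add z == s^2 as an auxiliary equation and eliminate s (z is just a throwaway name here):
eq = a s^2 + b;
Solve[{eq == 0, z == s^2}, z, {s}]
(* {{z -> -(b/a)}} *)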
You could use the Notation package, but it leads to other issues.
So here is your original equation:
In[1]:= Solve[b + a s^2 == 0, s^2]
During evaluation of In[1]:= Solve::ivar: s^2 is not a valid variable. >>
Out[1]= Solve[b + a s^2 == 0, s^2]
Now Symbolize s^2 so that the normal Mathematica evaluator treats it like any other symbol
In[2]:= Needs["Notation`"]
In[3]:= Symbolize[ParsedBoxWrapper[SuperscriptBox["s", "2"]]]
In[4]:= Solve[b + a s^2 == 0, s^2]
Out[4]= {{s^2 -> -(b/a)}}
The problem is that s^2 really is treated as just another symbol, e.g.
In[6]:= Sqrt[s^2] // PowerExpand
Out[6]= Sqrt[s^2]
A workaround is to replace s^2 with s*s, since Symbolize only acts on user-entered expressions (i.e. at the level of interpreting the input box structures)
In[7]:= Sqrt[s^2] /. s^2 -> s s // PowerExpand
Out[7]= s
I have a basic problem in Mathematica which has puzzled me for a while. I want to take the m'th derivative of x*Exp[t*x], then evaluate this at x=0. But the following does not work correctly. Please share your thoughts.
D[x*Exp[t*x], {x, m}] /. x -> 0
Also, what does this error mean?
General::ivar: 0 is not a valid variable.
Edit: my previous example (D[Exp[t*x], {x, m}] /. x -> 0) was trivial. So I made it harder. :)
My question is: how do I force it to evaluate the derivative first and then do the substitution?
As pointed out by others, (in general) Mathematica does not know how to take the derivative an arbitrary number of times, even if you specify that the number is a positive integer.
This means that the D[expr, {x, m}] command remains unevaluated, and when you then set x -> 0 it is trying to take the derivative with respect to a constant, which yields the error message.
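You can see this directly (a quick sketch; the exact form of the unevaluated expression may differ):
D[x Exp[t x], {x, m}]
(* stays unevaluated because m is symbolic *)
D[x Exp[t x], {x, m}] /. x -> 0
(* General::ivar: 0 is not a valid variable *)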
In general, what you want is the m'th derivative of the function evaluated at zero.
This can be written as
Derivative[m][Function[x,x Exp[t x]]][0]
or
Derivative[m][# Exp[t #]&][0]
You then get the table of coefficients
In[2]:= Table[%, {m, 1, 10}]
Out[2]= {1, 2 t, 3 t^2, 4 t^3, 5 t^4, 6 t^5, 7 t^6, 8 t^7, 9 t^8, 10 t^9}
But a little more thought shows that you really just want the m'th term in the series, so SeriesCoefficient does what you want:
In[3]:= SeriesCoefficient[x*Exp[t*x], {x, 0, m}]
Out[3]= Piecewise[{{t^(-1 + m)/(-1 + m)!, m >= 1}}, 0]
The final output is the m'th series coefficient; multiplying by m! recovers the m'th derivative at zero. The Piecewise is not really necessary, since the expression actually holds for all non-negative integers.
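A quick consistency check against the Derivative form above (a sketch; both columns should agree):
Table[{Derivative[m][# Exp[t #] &][0],
   m! SeriesCoefficient[x Exp[t x], {x, 0, m}]}, {m, 1, 4}]
(* {{1, 1}, {2 t, 2 t}, {3 t^2, 3 t^2}, {4 t^3, 4 t^3}} *)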
Thanks to your update, it's clear what's happening here. Mathematica doesn't actually calculate the derivative; you then replace x with 0, and it ends up looking at this:
D[Exp[t*0],{0,m}]
which obviously is going to run into problems, since 0 isn't a variable.
I'll assume that you want the mth partial derivative of that function w.r.t. x. The t variable suggests that it might be a second independent variable.
It's easy enough to do without Mathematica: D[Exp[t*x], {x, m}] = t^m Exp[t*x]
And if you evaluate the limit as x approaches zero, you get t^m, since lim(Exp[t*x]) = 1. Right?
Update: let's try it for x*exp(t*x).
The mth partial derivative w.r.t. x is easily had from Wolfram Alpha:
t^(m-1)*exp(t*x)(t*x + m)
So if x = 0 you get m*t^(m-1).
Q.E.D.
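If you want to double-check that closed form inside Mathematica for a few concrete m, something along these lines should confirm it (a sketch):
Table[Simplify[D[x Exp[t x], {x, m}] - t^(m - 1) Exp[t x] (t x + m)], {m, 1, 4}]
(* {0, 0, 0, 0} *)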
Let's see what is happening with a little more detail:
When you write:
D[Sin[x], {x, 1}]
you get an expression with x in it
Cos[x]
That is because the x in the {x,1} part matches the x in the Sin[x] part, and so Mma understands that you want to make the derivative for that symbol.
But this x does NOT act as a Block variable for that statement, isolating its meaning from any other x you have in your program, and that is what enables the chain rule. For example:
In[85]:= z=x^2;
D[Sin[z],{x,1}]
Out[86]= 2 x Cos[x^2]
See? That's perfect! But there is a price.
The price is that the symbols inside the derivative get evaluated as the derivative is taken, and that is spoiling your code.
Of course there are a lot of tricks to get around this. Some have already been mentioned. From my point of view, one clear way to understand what is happening is:
f[x_] := x*Exp[t*x];
g[y_, m_] := D[f[x], {x, m}] /. x -> y;
{g[p, 2], g[0, 1]}
Out:
{2 E^(p t) t + E^(p t) p t^2, 1}
HTH!
FullSimplify fails to recognize that:
a*Conjugate[b] + b*Conjugate[a] = 2 Re[a*b]
I have some very complex equations that could be simplified greatly if Mathematica could recognize this simple identity
(and that a*Conjugate[b] - b*Conjugate[a] = 2 Im[a*b]).
See, Mathematica will not finish solving my equations when written in
a*Conjugate[b] +b*Conjugate[a] form,
but I could at the very least write my final equations in an extremely descriptive and compact form if Mathematica recognized this. The actual expressions look like:
-((I q1 + q2)/(I q0 + Sqrt[-q0^2 + q1^2 + q2^2 + q3^2])) -
(Conjugate[q1] + I Conjugate[q2])/
(Conjugate[q0] + I Conjugate[Sqrt[-q0^2 + q1^2 + q2^2 + q3^2]])
I would do this myself, but there are 16 of such expressions and they form 4 sets of coupled systems. Since one sign error would render my work useless, I would strongly prefer an automated process.
The identity you gave, b Conjugate[a] + a Conjugate[b] == 2 Re[a b], is only true if at least one of a and b is real:
In[7]:= Simplify[
Reduce[a*Conjugate[b] + b*Conjugate[a] == 2 Re[a*b], {a, b}]]
Out[7]= Im[a] == 0 || Im[b] == 0
If this additional condition is in fact true in your application then you could give it to Simplify or FullSimplify as an assumption, as their second argument. For example:
In[14]:= FullSimplify[Im[a*Conjugate[b] + b*Conjugate[a]],
Im[a] == 0 || Im[b] == 0]
Out[14]= 0
By the way, here is one example when the identity is not true:
In[1]:= FindInstance[
a*Conjugate[b] + b*Conjugate[a] != 2 Re[a*b], {a, b}]
Out[1]= {{a -> -I, b -> -I}}
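You can check that instance by hand:
{a Conjugate[b] + b Conjugate[a], 2 Re[a b]} /. {a -> -I, b -> -I}
(* {2, -2} *)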
First pass: Use ComplexExpand[].
In := Simplify[ ComplexExpand[ a Conjugate[b] + b Conjugate[a], {a, b} ] ]
Out = 2 (Im[a] Im[b] + Re[a] Re[b])
For more fun, look at ComplexityFunction, although I find that a lot of trial and error is involved in tuning FullSimplify.
I think the correct identity should be:
a*Conjugate[b] + b*Conjugate[a] == 2 Re[Conjugate[a]*b]
It's always true:
In[1]:= FullSimplify[a*Conjugate[b] + b*Conjugate[a] == 2 Re[Conjugate[a]*b]]
Out[1]= True
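And a quick numeric spot check (a small sketch; Chop just clears round-off):
Block[{a = RandomComplex[], b = RandomComplex[]},
 Chop[a Conjugate[b] + b Conjugate[a] - 2 Re[Conjugate[a] b]]]
(* 0 *)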
Is your identity correct? I'm getting different numbers for the two sides:
{a*Conjugate[b] + b*Conjugate[a], 2 Re[a*b]} /. {a -> RandomComplex[],b -> RandomComplex[]}