I'm wondering if there is a different command than FullSimplify to tell Mathematica to do the computation requested. Here are three variations of a simplification attempt:
FullSimplify[Re[ (-I + k Rr)] Cos[Ttheta], Element[{k, Rr, Ttheta, t, omega}, Reals]]
FullSimplify[Re[E^(I (omega t - k Rr)) ] Cos[Ttheta], Element[{k, Rr, Ttheta, t, omega}, Reals]]
FullSimplify[Re[E^(I (omega t - k Rr)) (-I + k Rr)] Cos[Ttheta], Element[{k, Rr, Ttheta, t, omega}, Reals]]
I get, respectively:
k Rr Cos[Ttheta]
Cos[k Rr - omega t] Cos[Ttheta]
Cos[Ttheta] Re[E^(I (-k Rr + omega t)) (-I + k Rr)]
Without the exponential, the real parts get evaluated. Without the complex factor multiplying the exponential, the real parts get evaluated. With both multiplied, the input is returned as output?
I tried the // Timing modifier, and this isn't because the expression is too complex (which is good, since I can do this one in my head, but this was a subset of a larger test expression that was also failing).
Since your variables are declared Reals, have you tried ComplexExpand?
To redeem my slow posting, here is another approach: tell Mathematica that you do not want Complex in the result via ComplexityFunction:
FullSimplify[Re[E^(I (omega t - k Rr)) (-I + k Rr)] Cos[Ttheta],
Element[{k, Rr, Ttheta, t, omega}, Reals],
ComplexityFunction -> (1 - Boole@FreeQ[#, Complex] &)]
ComplexExpand, perhaps?
ComplexExpand[Re[E^(I (omega t - k Rr)) (-I + k Rr)] Cos[Ttheta]]
This is a problem I've been having with Mathematica for a long time. Combining suggestions from here, I've created a new function that can be used instead of Simplify[] when dealing with complex arguments. It works for me so far; any further suggestions?
CSimplify[in_] :=
FullSimplify[in // ComplexExpand,
ComplexityFunction -> (1 - Boole@FreeQ[#, Complex] &)]
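Applied to the expression from the question, this combined approach does evaluate the real part; note that ComplexExpand already assumes all symbols are real, so no assumptions are needed. A quick check (mine, not the answerer's; the exact arrangement of the output may vary by version):
CSimplify[Re[E^(I (omega t - k Rr)) (-I + k Rr)] Cos[Ttheta]]
(* a real form equivalent to (k Rr Cos[k Rr - omega t] - Sin[k Rr - omega t]) Cos[Ttheta] *)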
I'm puzzled by what I think is a mistake in a partial derivative I'm having Mathematica do for me.
Specifically, this is what I have:
I'm trying to take the partial derivative of the following w.r.t. the variable θ (apologies for the formatting):
f=(1/4)(-4e((1+θ)/2)ψ+eN((1+θ)/2)ψ+eN((1+θ)/2-θd)ψ)-s
But the solution Mathematica produces seems very different from the one I get when I take the derivative myself. While Mathematica says the partial derivative of f w.r.t. θ is:
(1/4)eψ(N-2)
By hand, I get and am quite confident the correct answer is instead:
(1/4)eψ(N(1-d)-2)
That is, Mathematica is producing something that drops the variable d when it is differentiating. I've explored different functions that take a derivative in Mathematica, and the possibility that maybe some of the variables I'm using (such as d) might be protected or otherwise special, but I can't say that I know why the answer's so off. This is the first time in the notebook that d appears, so it is not set to 0. For context, I'm trying to confirm that the derivative of the function is positive for values of the variables in certain ranges, and we have d>0 and d<(1/2). Doing this all by hand works but I'm trying to confirm with Mathematica as I will be dealing with more complicated functions and need to make sure I'm having Mathematica produce the right derivatives.
You didn't add spaces in eN and θd, so Mathematica thinks they're some other two-character variables.
Adding spaces between them gives your expected result:
f[θ,e,N,ψ,d,s] = (1/4) (-4 e ((1+θ)/2) ψ + e N ((1+θ)/2) ψ + e N ((1+θ)/2 - θ d) ψ) - s;
D[f[θ, e, N, ψ, d, s], θ] // FullSimplify
(* 1/4 e (-2 + N - d N) ψ *)
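One quick diagnostic for this kind of problem (my addition) is to ask Mathematica which symbols an expression actually contains; Variables works here because the expression is polynomial in its symbols:
Variables[(1/4) (-4 e ((1 + θ)/2) ψ + eN ((1 + θ)/2) ψ + eN ((1 + θ)/2 - θd) ψ) - s]
(* the list contains eN and θd as single symbols, exposing the missing spaces *)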
How could I implement a program that takes in the two sides of a trig equation (it could be generalized to anything, but for now I'll leave it at just trig identities) and outputs the steps to transform one side into the other (or transform them both) to show that they are in fact equal? The program will assume that they are equal in the first place. I am quite stumped as to how I might implement an algorithm to do this. My first thought was something to do with graphs, but I couldn't think of anything beyond this. From there, I thought that I should first parse both sides of the equation into trees. For example, (cot x * sin x) / (sin x + cos x) would look like this:
division
/ \
* +
/ \ / \
cot sin sin cos
After this, I had two similar ideas, both of which have problems. The first idea was to pick the side with the least number of leaves and try to manipulate it into the other side by using equivalencies that would be represented by "tree regexs." Examples of these "tree regexs" would be csc = 1 / sin or cot = cos / sin (in tree form of course), etc. My second idea would be to pick the side with more leaves and try to find some expression that when multiplied by that expression would equal the other side. Using reciprocals this wouldn't be too bad, however, I would then have to prove that the thing I multiplied by equals 1. Again I am back to this "tree regex" thing.
The major flaw with both of these is in what order/how I could apply these substitutions. Will it just have to be a big mess of if statements, or is there a more elegant solution? Is there actually a graph-based solution that I'm not seeing? What (if any) might be a good algorithm to prove trig identities?
To be clear, I am not talking about the "solve for x" type of problem, such as tan(x)sin(x) = 5, find all values of x, but rather about proving that sqrt((1 + sin x) / (1 - sin x)) = sec x + tan x.
This is a simple algorithm for deciding trigonometric identities that can be brought into the form polynomial(sin x, cos x) = 0:
Get rid of tan x, cot x, sec x, ..., sin 2x, ... by the obvious substitutions (tan x -> (sin x)/(cos x), ..., sin 2x -> 2 (sin x) (cos x), ...)
Transform the identity into a polynomial by squaring (isolated) roots (getting rid of multiple roots in an identity can be tricky, though), multiplying by denominators, and bringing all expanded terms to one side
Replace every factor cos^2 x in the polynomial (cos^3 x = (cos^2 x)(cos x), cos^4 x = (cos^2 x)(cos^2 x), ...) by 1 - sin^2 x and expand the polynomial.
In the end, a polynomial with no powers of cos x above the first remains. If it is identically 0, the identity is proven; otherwise the identity does not hold.
Your example sqrt((1 + sin x)/(1 - sin x)) = sec x + tan x:
Using the substitutions sec x -> 1/(cos x) and tan x -> (sin x)/(cos x) we get
sqrt((1 + sin x)/(1 - sin x)) = 1/(cos x) + (sin x)/(cos x).
For brevity let us write s instead of sin x and c instead of cos x, which gives us:
sqrt((1 + s)/(1 - s)) = 1/c + s/c
Squaring the equation and multiplying both sides by (1 - s)c^2, we get
(1 + s)c^2 = (1 + s)^2(1 - s).
Expanding the parentheses and bringing everything to one side, we get
c^2 + sc^2 + s^3 + s^2 - s - 1 = 0
Substituting c^2 = 1 - s^2 into the polynomial we get
(1 - s^2) + s(1 - s^2) + s^3 + s^2 - s - 1, which expands to 0.
Hence the identity is proven.
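Here is a minimal Mathematica sketch of this procedure for identities that are already rational in sin x and cos x (squaring away roots, as in the example above, is left as a manual preprocessing step; the rule list and the name trigIdentityQ are my own):
toSinCos = {Tan[y_] :> Sin[y]/Cos[y], Cot[y_] :> Cos[y]/Sin[y],
   Sec[y_] :> 1/Cos[y], Csc[y_] :> 1/Sin[y],
   Sin[2 y_] :> 2 Sin[y] Cos[y], Cos[2 y_] :> Cos[y]^2 - Sin[y]^2};
trigIdentityQ[lhs_ == rhs_] := Module[{poly},
  (* steps 1-2: rewrite in sin/cos, clear denominators, move everything to one side *)
  poly = Numerator[Together[(lhs - rhs) //. toSinCos]];
  (* step 3: repeatedly replace cos^2 by 1 - sin^2, expanding until nothing changes *)
  poly = FixedPoint[
    Expand[# //. Cos[y_]^n_Integer /; n >= 2 :> (1 - Sin[y]^2) Cos[y]^(n - 2)] &,
    poly];
  (* step 4: the identity holds iff the remaining polynomial is identically 0 *)
  poly === 0]
trigIdentityQ[Sin[x]^2 + Cos[x]^2 == 1]     (* True *)
trigIdentityQ[Tan[x] Cos[x] == Sin[x] + 1]  (* False *)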
Look out for texts on computer algebra (which I haven't read); I'm sure you'll find clever ideas there.
My approach would be a graph-based search, as I doubt that a linear application of transformations will reliably lead to a solution.
Express the whole equation as an expression-tree the way you already started, but including an "equals" node above.
For the search-graph view, take one expression-tree as one search-state. The search-target is a decidable expression-tree like 1=1 or 1=0. When searching (expanding a search-state), create the child states by applying equivalence transformations on your expression (regex-like sounds quite plausible to me). Define an evaluation function that counts the overall complexity of an expression (e.g. number of nodes in the expression-tree). Do a directed search minimizing the evaluation function (expanding the lowest-complexity expression first), thus simplifying the expression until you reach a decidable form.
Depending on the expressions, it's quite possible that an unrestricted search never terminates. I don't know how you'd handle that, maybe by limiting the allowed complexity of expressions to some multiple of the original one. That would reduce the risk of running indefinitely, but leave you with undecided cases.
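For what it's worth, here is a rough Mathematica sketch of such a search (the helper names, the state bound, and the overall shape are my own assumptions, not a tested prover). It expands the lowest-LeafCount state first and succeeds when some rewrite makes both sides identical, at which point Equal evaluates to True:
(* all single-step rewrites: apply each rule at every position of the expression *)
childStates[expr_, rules_List] := DeleteCases[
  Union @@ Table[
    ReplacePart[expr, pos -> Replace[Extract[expr, pos], rule]],
    {rule, rules}, {pos, Position[expr, _, {1, Infinity}]}],
  expr]
proveIdentity[eqn_, rules_List, maxStates_: 1000] :=
 Module[{frontier = {eqn}, seen = {eqn}, cur, kids},
  While[frontier =!= {} && Length[seen] < maxStates,
   frontier = SortBy[frontier, LeafCount];  (* lowest complexity first *)
   cur = First[frontier]; frontier = Rest[frontier];
   If[TrueQ[cur], Return[True]];
   kids = Complement[childStates[cur, rules], seen];
   seen = Join[seen, kids];
   frontier = Join[frontier, kids]];
  $Failed]  (* bound reached: undecided, as discussed above *)
With rules like {Csc[y_] :> 1/Sin[y], Cot[y_] :> Cos[y]/Sin[y]} this should handle small identities; the state bound keeps it from running indefinitely at the price of undecided cases.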
So generally, if you have two functions f,g: X -->Y, and if there is some binary operation + defined on Y, then f + g has a canonical definition as the function x --> f(x) + g(x).
What's the best way to implement this in Mathematica?
f[x_] := x^2
g[x_] := 2*x
h = f + g;
h[1]
yields
(f + g)[1]
as output.
Of course,
H = Function[z, f[z] + g[z]];
H[1]
yields 3.
Consider:
In[1]:= Through[(f + g)[1]]
Out[1]= f[1] + g[1]
To elaborate, you can define h like this:
h = Through[ (f + g)[#] ] &;
If you have a limited number of functions and operands, then UpSet as recommended by yoda is surely syntactically cleaner. However, Through is more general. Without any new definitions involving Times or h, one can easily do:
i = Through[ (h * f * g)[#] ] &
i[7]
43218
Another way of doing what you're trying to do is using UpSetDelayed.
f[x_] := x^2;
g[x_] := 2*x;
f + g ^:= f[#] + g[#] &; (*define upvalues for the operation f+g*)
h = f + g;
h[z]
Out[1]= 2 z + z^2
Also see this very nice answer by rcollyer (and also the ones by Leonid & Verbeia) for more on UpValues and when to use them.
I will throw in complete code for Gram-Schmidt orthogonalization and an example for function addition etc., since I happened to have that code written about 4 years ago. I did not test it extensively, though, and I have not changed a single line of it now, so a disclaimer (I was a lot worse at mma at the time). That said, here is a Gram-Schmidt procedure implementation, which is a slightly generalized version of the code I discussed here:
oneStepOrtogonalizeGen[vec_, {}, _, _, _] := vec;
oneStepOrtogonalizeGen[vec_, vecmat_List, dotF_, plusF_, timesF_] :=
Fold[plusF[#1, timesF[-dotF[vec, #2]/dotF[#2, #2], #2]] &, vec, vecmat];
GSOrthogonalizeGen[startvecs_List, dotF_, plusF_, timesF_] :=
Fold[Append[#1,oneStepOrtogonalizeGen[#2, #1, dotF, plusF, timesF]] &, {}, startvecs];
normalizeGen[vec_, dotF_, timesF_] := timesF[1/Sqrt[dotF[vec, vec]], vec];
GSOrthoNormalizeGen[startvecs_List, dotF_, plusF_, timesF_] :=
Map[normalizeGen[#, dotF, timesF] &, GSOrthogonalizeGen[startvecs, dotF, plusF, timesF]];
The functions above are parametrized by 3 functions, realizing addition, multiplication by a number, and the dot product in a given vector space. The example to illustrate will be to find Hermite polynomials by orthonormalizing monomials. These are possible implementations for the 3 functions we need:
hermiteDot[f_Function, g_Function] :=
Module[{x}, Integrate[f[x]*g[x]*Exp[-x^2], {x, -Infinity, Infinity}]];
SetAttributes[functionPlus, {Flat, Orderless, OneIdentity}];
functionPlus[f__Function] := With[{expr = Plus @@ Through[{f}[#]]}, expr &];
SetAttributes[functionTimes, {Flat, Orderless, OneIdentity}];
functionTimes[a___, f_Function] /; FreeQ[{a}, # | Function] :=
With[{expr = Times[a, f[#]]}, expr &];
These functions may be a bit naive, but they will illustrate the idea (and yes, I also used Through). Here are some examples to illustrate their use:
In[114]:= hermiteDot[#^2 &, #^4 &]
Out[114]= (15 Sqrt[\[Pi]])/8
In[107]:= functionPlus[# &, #^2 &, Sin[#] &]
Out[107]= Sin[#1] + #1 + #1^2 &
In[111]:= functionTimes[z, #^2 &, x, 5]
Out[111]= 5 x z #1^2 &
Now, the main test:
In[115]:=
results =
GSOrthoNormalizeGen[{1 &, # &, #^2 &, #^3 &, #^4 &}, hermiteDot,
functionPlus, functionTimes]
Out[115]= {1/\[Pi]^(1/4) &, (Sqrt[2] #1)/\[Pi]^(1/4) &, (
Sqrt[2] (-(1/2) + #1^2))/\[Pi]^(1/4) &, (2 (-((3 #1)/2) + #1^3))/(
Sqrt[3] \[Pi]^(1/4)) &, (Sqrt[2/3] (-(3/4) + #1^4 -
3 (-(1/2) + #1^2)))/\[Pi]^(1/4) &}
These are indeed the properly normalized Hermite polynomials, as is easy to verify. The normalization of the built-in HermiteH is different; our results are normalized as one would normalize the wave functions of a harmonic oscillator, say. It is trivial to obtain a list of polynomials as expressions depending on a variable, say x:
In[116]:= Through[results[x]]
Out[116]= {1/\[Pi]^(1/4),(Sqrt[2] x)/\[Pi]^(1/4),(Sqrt[2] (-(1/2)+x^2))/\[Pi]^(1/4),
(2 (-((3 x)/2)+x^3))/(Sqrt[3] \[Pi]^(1/4)),(Sqrt[2/3] (-(3/4)+x^4-3 (-(1/2)+x^2)))/\[Pi]^(1/4)}
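As a sanity check (my addition, not part of the original code), the Gram matrix of these results with respect to hermiteDot should come out as the identity:
Outer[hermiteDot, results, results] == IdentityMatrix[5]
(* should return True, confirming orthonormality *)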
I would suggest defining an operator other than the built-in Plus for this purpose. There are a number of operators provided by Mathematica that are reserved for user definitions in cases such as this. One such operator is CirclePlus which has no pre-defined meaning but which has a nice compact representation (at least, it is compact in a notebook -- not so compact on a StackOverflow web page). You could define CirclePlus to perform function addition thus:
(x_ \[CirclePlus] y_)[args___] := x[args] + y[args]
With this definition in place, you can now perform function addition:
h = f \[CirclePlus] g;
h[x]
(* Out[3]= f[x]+g[x] *)
If one likes to live on the edge, the same technique can be used with the built-in Plus operator provided it is unprotected first:
Unprotect[Plus];
(x_ + y_)[args___] := x[args] + y[args]
Protect[Plus];
h = f + g;
h[x]
(* Out[7]= f[x]+g[x] *)
I would generally advise against altering the behaviour of built-in functions -- especially one as fundamental as Plus. The reason is that there is no guarantee that user-added definitions to Plus will be respected by other built-in or kernel functions. In some circumstances calls to Plus are optimized, and those optimizations might not take the user definitions into account. However, this consideration may not affect any particular application, so the option is still a valid, if risky, design choice.
I have a very complicated Mathematica expression that I'd like to simplify by using a new, possibly dimensionless parameter.
An example of my expression is:
K=a*b*t/((t+f)c*d);
(the actual expression is monstrously large, thousands of characters). I'd like to replace all occurrences of the expression t/(t+f) with p
p=t/(t+f);
The goal here is to find a replacement so that all t's and f's are replaced by p. In this case, the replacement p is a nondimensionalized parameter, so it seems like a good candidate replacement.
I've not been able to figure out how to do this in Mathematica (or if it's possible). I tried:
eq1= K==a*b*t/((t+f)c*d);
eq2= p==t/(t+f);
Solve[{eq1,eq2},K]
Not surprisingly, this doesn't work. If there were a way to force it to solve for K in terms of p,a,b,c,d, this might work, but I can't figure out how to do that either. Thoughts?
Edit #1 (11/10/11 - 1:30)
[deleted to simplify]
OK, new tack. I've taken p = ton/(ton + toff) and multiplied p by several expressions. I know that p can be completely eliminated. The new expression (in terms of p) is
testEQ = A B p + A^2 B p^2 + (A+B)p^3;
Then I made the substitution for p, and called (normal) FullSimplify, giving me this expression.
testEQ2= (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
Finally, I tried all of the suggestions below, except the last (not sure how it works yet!).
Only the Eliminate option worked, so I guess I'll try this method from now on. Thank you.
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
EQ2 = P1 == ton/(ton + toff);
Eliminate[{EQ1, EQ2}, {ton, toff}]
A B P1 + A^2 B P1^2 + (A + B) P1^3 == a1
I should add, if the goal is to make all substitutions that are possible, leaving the rest, I still don't know how to do that. But it appears that if a substitution can completely eliminate a few variables, Eliminate[] works best.
Have you tried this?
K = a*b*t/((t + f) c*d);
Solve[p == t/(t + f), t]
-> {{t -> -((f p)/(-1 + p))}}
Simplify[K /. %[[1]] ]
-> (a b p)/(c d)
EDIT: Oh, and are you aware of Eliminate?
Eliminate[{eq1, eq2}, {t,f}]
-> a b p == c d K && c != 0 && d != 0
Solve[%, K]
-> {{K -> (a b p)/(c d)}}
EDIT 2: Also, in this simple case, solving for K and t simultaneously seems to do the trick, too:
Solve[{eq1, eq2}, {K, t}]
-> {{K -> (a b p)/(c d), t -> -((f p)/(-1 + p))}}
Something along these lines is discussed in the MathGroup post at
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00023.html
(I see it has an apocryphal note that is quite relevant, at least to the author of that post.)
Here is how it might be applied in the example above. For purposes of keeping this self-contained I'll repeat the replacement code.
replacementFunction[expr_, rep_, vars_] :=
Module[{num = Numerator[expr], den = Denominator[expr],
hed = Head[expr], base, expon},
If[PolynomialQ[num, vars] &&
PolynomialQ[den, vars] && ! NumberQ[den],
replacementFunction[num, rep, vars]/
replacementFunction[den, rep, vars],
If[hed === Power && Length[expr] == 2,
base = replacementFunction[expr[[1]], rep, vars];
expon = replacementFunction[expr[[2]], rep, vars];
PolynomialReduce[base^expon, rep, vars][[2]],
If[Head[hed] === Symbol &&
MemberQ[Attributes[hed], NumericFunction],
Map[replacementFunction[#, rep, vars] &, expr],
PolynomialReduce[expr, rep, vars][[2]]]]]]
Your example is now as follows. We take the input, and also the replacement. For the latter we make an equivalent polynomial by clearing denominators.
kK = a*b*t/((t + f) c*d);
rep = Numerator[Together[p - t/(t + f)]];
Now we can invoke the replacement. We list the variables we are interested in replacing, treating 'p' as a parameter. This way it will get ordered lower than the others, meaning the replacements will try to remove them in favor of 'p'.
In[127]:= replacementFunction[kK, rep, {t, f}]
Out[127]= (a b p)/(c d)
This approach has a bit of magic in figuring out what should be the listed "variables". Possibly some further tweakage could be done to improve on that. But I believe that, generally, simply not listing the things we want to use as new replacements is the right way to go.
Over the years there have been variants of this idea on MathGroup. It is possible that some others may be better suited to the specific expression(s) you wish to handle.
--- edit ---
The idea behind this is to use PolynomialReduce to do algebraic replacement. That is to say, we do not try for pattern matching, but instead use a polynomial "canonicalization" method. But in general we're not working with polynomial inputs, so we apply this idea recursively on polynomial arguments inside functions that carry the NumericFunction attribute.
Earlier versions of this idea, along with some more explanation, can be found at the note referenced below, as well as in notes it references (how's that for explanatory recursion?).
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00283.html
--- end edit ---
--- edit 2 ---
As observed in the wild, this approach is not always a simplifier. It does algebraic replacement, which involves, under the hood, a notion of "term ordering" (roughly, "which things get replaced by which others?") and thus simple variables may expand to longer expressions.
Another form of term rewriting is syntactic replacement via pattern matching, and other responses discuss using that approach. It has a different drawback, insofar as the generality of patterns to consider might become overwhelming. For example, what does one do with k^2/(w + p^4)^3 when the rule is to replace k/(w + p^4) with q? (Specifically, how do we recognize this as being equivalent to (k/(w + p^4))^2*1/(w + p^4)?)
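For instance, a naive syntactic rule fails on exactly that example, because Mathematica stores the expression as k^2 (w + p^4)^-3, which contains no subexpression literally matching k (w + p^4)^-1:
k^2/(w + p^4)^3 /. k/(w + p^4) -> q
(* returns k^2/(w + p^4)^3 unchanged *)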
The upshot is one needs to have an idea of what is desired and what methods might be feasible. This of course is generally problem specific.
One thing that occurs is perhaps you want to find and replace all commonly occurring "complicated" expressions with simpler ones. This is referred to as common subexpression elimination (CSE). In Mathematica this can be done using a function called Experimental`OptimizeExpression[]. Here are several links to MathGroup posts that discuss this.
http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00138.html
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00270.html
http://forums.wolfram.com/mathgroup/archive/2006/Sep/msg00300.html
http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00387.html
http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00369.html
Here is an example from one of those notes.
InputForm[Experimental`OptimizeExpression[(3 + 3*a^2 + Sqrt[5 + 6*a + 5*a^2] +
a*(4 + Sqrt[5 + 6*a + 5*a^2]))/6]]
Out[206]//InputForm=
Experimental`OptimizedExpression[Block[{Compile`$1, Compile`$3, Compile`$4,
Compile`$5, Compile`$6}, Compile`$1 = a^2; Compile`$3 = 6*a;
Compile`$4 = 5*Compile`$1; Compile`$5 = 5 + Compile`$3 + Compile`$4;
Compile`$6 = Sqrt[Compile`$5]; (3 + 3*Compile`$1 + Compile`$6 +
a*(4 + Compile`$6))/6]]
--- end edit 2 ---
Daniel Lichtblau
K = a*b*t/((t+f)c*d);
FullSimplify[ K,
TransformationFunctions -> {(# /. t/(t + f) -> p &), Automatic}]
(a b p) / (c d)
Corrected update to show another method:
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
f = # /. ton + toff -> ton/p &;
FullSimplify[f@EQ1]
a1 == p (A B + A^2 B p + (A + B) p^2)
I don't know if this is of any value at this point, but hopefully at least it works.
I came across some weird behaviour when using GroebnerBasis. In m1 below, I used a Greek letter as my variable and in m2, I used a Latin letter. Both of them have no rules associated with them. Why do I get vastly different answers depending on what variable I choose?
Copyable code:
Clear["Global`*"]
g = Module[{x},
x /. Solve[
z - x (1 - b -
b x ( (a (3 - 2 a (1 + x)))/(1 - 3 a x + 2 a^2 x^2))) == 0,
x]][[3]];
m1 = First@GroebnerBasis[\[Kappa] - g, z]
m2 = First@GroebnerBasis[k - g, z]
EDIT:
As pointed out by belisarius, my usage of GroebnerBasis is not entirely correct, as it requires polynomial input, whereas mine is not polynomial. This error, introduced by a copy-paste, went unnoticed until now, as I was getting the answer that I expected when I followed through with the rest of my code using m1 from above. However, I'm not fully convinced that it is an unreasonable usage. Consider the example below:
x = (-b + Sqrt[b^2 - 4 a c])/(2 a);
p = First@GroebnerBasis[k - x, {a, b, c}]; (*get relation or cover for Riemann surface*)
q = First@GroebnerBasis[{D[p, k], p}, {a, b, c}, k,
  MonomialOrder -> EliminationOrder];
Solve[q==0, b] (*get condition on b for double root or branch point*)
{{b -> -2 Sqrt[a] Sqrt[c]}, {b -> 2 Sqrt[a] Sqrt[c]}}
which is correct. So my interpretation is that it is OK to use GroebnerBasis in such cases, but I'm not all too familiar with the deep theory behind it, so I could be completely wrong here.
P.S. I heard that if you mention GroebnerBasis three times in your post, Daniel Lichtblau will answer your question :)
The bug that was shown by these examples will be fixed in version 9. Offhand I do not know how to evade it in versions 8 and prior. If I recall correctly it was caused by an intermediate numeric overflow in some code that was checking whether a symbolic polynomial coefficient might be zero.
For some purposes it might be suitable to specify more variables and possibly a non-default term order. Also clearing denominators can be helpful at least in cases where that is a valid thing to do. That said, I do not know if these tactics would help in this example.
I'll look some more at this code but probably not in the near future.
Daniel Lichtblau
This may be related to the fact that Mathematica does not try all variable orders in functions like Simplify. Here is an example:
ClearAll[a, b, c]
expr = (c^4 b^2)/(c^4 b^2 + a^4 b^2 + c^2 a^2 (1 - 2 b^2));
Simplify[expr]
Simplify[expr /. {a -> b, b -> a}]
(b^2 c^4)/(a^4 b^2 + a^2 (1 - 2 b^2) c^2 + b^2 c^4)
(a^2 c^4)/(b^2 c^2 + a^2 (b^2 - c^2)^2)
Adam Strzebonski explained that:
...one can try FullSimplify with all possible orderings of chosen variables. Of course, this multiplies the computation time by Factorial[Length[variables]]...
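That suggestion is easy to script; here is a minimal sketch (the name simplifyAllOrders is mine) that tries Simplify under every renaming of the chosen variables, maps each result back, and keeps the smallest by LeafCount:
simplifyAllOrders[expr_, vars_List] :=
 First@SortBy[
   Table[Simplify[expr /. Thread[vars -> perm]] /. Thread[perm -> vars],
    {perm, Permutations[vars]}],
   LeafCount]
simplifyAllOrders[expr, {a, b, c}]
(* should return the shortest form found over all 6 orderings, at Factorial[3] times the cost *)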