For example, I have two vectors a and b, and I need to simplify the following simple equation:
|a+b| == |a-b|
By calculating it by hand, we know this is equivalent to:
a.b == 0
Now I tried the following expression in Mathematica:
In[1040]:=
Reduce[{a, b} \[Element] Vectors[2, Reals] && (a + b).(a + b) == (a - b).(a - b)]
but it essentially comes back unevaluated:
Out[1040]=
Reduce[(a | b) \[Element] Reals && (a + b).(a + b) == (a - b).(a - b)]
With a little help from TensorReduce:
assumptions = Element[#, Vectors[2, Reals]] & /@ {a, b};
Reduce@TensorReduce[(a + b).(a + b) == (a - b).(a - b), Assumptions -> assumptions]
Output:
a.b == 0
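Not part of the original answer, but a minimal component-level cross-check (the names a1, a2, b1, b2 are ad hoc): write a and b with explicit 2D components and let Simplify do the algebra.
Simplify[({a1, a2} + {b1, b2}).({a1, a2} + {b1, b2}) ==
  ({a1, a2} - {b1, b2}).({a1, a2} - {b1, b2})]
(* should reduce to an equation equivalent to a1 b1 + a2 b2 == 0, i.e. a.b == 0 *)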
I have a circuit made from AND, NOT, and OR gates, and I need to convert it so that it only uses NOR gates. This isn't working too well for me, so any tips would be much appreciated.
This is the original function to convert:
~a~b~cd + ~a~bc~d + ~ab~c~d + ~abcd + a~b~c~d + a~bcd + ab~cd + abc~d
I'm assuming cd means c AND d. The rules are:
~a = a NOR a
a^b = (a NOR a) NOR (b NOR b)
a+b = (a NOR b) NOR (a NOR b)
From that, it's purely mechanical. I'll do the first part as an example:
~a~b~cd + ~a~bc~d
(a NOR a)(b NOR b)(c NOR c)d + (a NOR a)(b NOR b)c(d NOR d)
(((a NOR a) NOR (a NOR a)) NOR ((b NOR b) NOR (b NOR b)))(c NOR c)d + (a NOR a)(b NOR b)c(d NOR d)
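Not from the original answer, but as a hedged cross-check: in Mathematica each rewrite rule can be verified with a truth table, and in recent versions BooleanConvert can mechanize the whole conversion.
TautologyQ[Equivalent[! a, Nor[a, a]], {a}]
TautologyQ[Equivalent[a && b, Nor[Nor[a, a], Nor[b, b]]], {a, b}]
TautologyQ[Equivalent[a || b, Nor[Nor[a, b], Nor[a, b]]], {a, b}]
(* all three should return True; BooleanConvert[expr, "NOR"] gives a NOR-only form directly *)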
I have noticed that Mathematica chokes on certain definite integrals, but if I do an indefinite integration and subtract the values of the resulting antiderivative at the limits, it readily gives me an answer.
Are there different algorithms to compute definite and indefinite integrals? Is there some reason the procedure described above isn't done by Mathematica automatically?
Examples:
As people in the comment asked for examples, here are two.
Timing[Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), x]]
Out:={0.010452,(-c+r x)/(r^2 Sqrt[c^2+r^2-2 c r x])}
This returns the output immediately, while
Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1}]
keeps computing and slows my old computer down. After a while it returns an unnecessarily long result with a lot of cases. This is Mathematica 7. There are no singularities in this integral, no excursions into complex numbers, etc. To get the value, let's abort the computation (which has been running for about a minute now) and find the value manually using the fundamental theorem of calculus.
g = (r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2)
l = Integrate[g, x] /. x -> -1;
u = Integrate[g, x] /. x -> 1;
u - l
Out:= (-c+r)/(r^2 Sqrt[c^2-2 c r+r^2])-(-c-r)/(r^2 Sqrt[c^2+2 c r+r^2])
FullSimplify[%]
Out:= ((-c+r)/Sqrt[(c-r)^2]+(c+r)/Sqrt[(c+r)^2])/r^2
Which is actually correct. Finally, for completeness, let's compare the output and the time for the definite integral:
Timing[Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1,
1}]]
Out:= {174.52,If[(Re[c/r+r/c]>=2||2+Re[c/r+r/c]<=0||c/r+r/c\[NotElement]Reals)&&((Im[r] Re[c]+Im[c] Re[r]<=0&&((Im[c]+Im[r]) (Re[c]+Re[r])>=0||Im[c]^3 Re[r]+Im[r] Re[c] (Im[r]^2-Re[c]^2+Re[r]^2)>=Im[c] (Im[c] Im[r] Re[c]+Re[r] (Im[r]^2 ... blah blah half a page
Note the nearly three minutes of computation time and the very convoluted answer.
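The workaround above is easy to wrap in a little helper. This is only a sketch, valid when the antiderivative returned by Integrate is continuous on the whole interval (no singularities or branch cuts between the limits); the name ftcIntegrate is made up:
ftcIntegrate[f_, {x_, lo_, hi_}] :=
  With[{F = Integrate[f, x]}, Simplify[(F /. x -> hi) - (F /. x -> lo)]]
ftcIntegrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1}]
(* should return a form equivalent to ((-c + r)/Sqrt[(c - r)^2] + (c + r)/Sqrt[(c + r)^2])/r^2 *)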
Here is an actual example from my work. I noticed this behaviour and was puzzled by it, but forgot about it after the submission deadline, until today, when I faced the same problem again.
f = 1/((-I c + k^2/2 - 1/2 (a + k)^2) (I d + k^2/2 -
1/2 (-b + k)^2)) + 1/((I c + k^2/2 - 1/2 (-a + k)^2) (I c + I d +
k^2/2 - 1/2 (-a - b + k)^2)) + 1/((I d + k^2/2 -
1/2 (-b + k)^2) (I c + I d + k^2/2 -
1/2 (-a - b + k)^2)) + 1/((I c + k^2/2 - 1/2 (-a + k)^2) (-I d +
k^2/2 - 1/2 (b + k)^2)) + 1/((-I c + k^2/2 -
1/2 (a + k)^2) (-I c - I d + k^2/2 -
1/2 (a + b + k)^2)) + 1/((-I d + k^2/2 - 1/2 (b + k)^2) (-I c -
I d + k^2/2 - 1/2 (a + b + k)^2))
When I tried the definite integral, I waited and waited; after several hours (really!) I finally decided to try the workaround, which worked in no time:
fl = Integrate[f, k] /. k -> -1 ;
fu = Integrate[f, k] /. k -> 1 ;
F = fu - fl;
F1 = F /. {a -> .01, c -> 0, d -> 1};
Please note that I am not talking about singularities, as one comment suggested. Integrate[1/x, {x, -1, 1}] almost immediately returns Integrate::idiv: Integral of 1/x does not converge on {-1,1}, which is a perfectly reasonable output.
I think Daniel is right in his comment above: "Most likely is the definite integration code is checking for singularities on the integration path"
Just look:
Timing@Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1}]
Result -> None; I got bored waiting and aborted the calculation.
While:
Timing@Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1},
Assumptions -> {r ∈ Reals && c ∈ Reals && c != r && c != -r}]
->{3.688, (-Sign[c - r] + Sign[c + r])/r^2}
So, it is a matter of which conditions you specify for your constants.
Another way is suggested in Simon's comment above:
Timing@Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1},
GenerateConditions -> False]
{10.375, ((-c + r)/Sqrt[(c - r)^2] + (c + r)/Sqrt[(c + r)^2])/r^2}
And finally, you may also do:
Timing@Integrate[(r - c x)/(r^2 + c^2 - (2 r) c x)^(3/2), {x, -1, 1},
GenerateConditions -> True]
{16.45, ConditionalExpression[.. A long expression ...., Re[c^2 + r^2] > 0]}
Belisarius answered this. I just wanted to be a bit more specific about what I meant in the comment, and about how it applies to this example.
The denominator in the integrand makes it clear that we have a problem if, say, r and c are real, positive, and r is equal (or close) to c, since the denominator then vanishes (or nearly vanishes) at x = 1.
In[1]:= InputForm[Timing[Integrate[(r - c*x)/(r^2 + c^2 - 2*r*c*x)^(3/2),
{x, -1, 1}, Assumptions->r>c>0]]]
Out[1]//InputForm= {2.33, 2/r^2}
Absent useful assumptions, Integrate can take significant time to sort out regions of good vs bad behavior. The technology under the hood is daunting (inequality handling can be that way). And perhaps it is not everywhere applied in the most efficient ways possible.
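As an illustration of where that checking effort goes (not part of the original answer), one can ask Reduce for the parameter values at which the denominator can vanish somewhere on the integration path:
Reduce[Exists[x, -1 <= x <= 1 && r^2 + c^2 - 2 r c x == 0], {r, c}, Reals]
(* expected: essentially c == r || c == -r, exactly the cases excluded by the assumption r > c > 0 *)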
Further information may be found at
http://library.wolfram.com/infocenter/Conferences/5832/
or
http://dl.acm.org/citation.cfm?doid=2016567.2016569
Daniel Lichtblau
How do I tell Mathematica to do this replacement smartly? (Or: how do I get smarter at telling Mathematica to do what I want?)
expr = b + c d + e c + 2 a;
expr /. a + b :> 1
Out = 2 a + b + c d + c e
I expect the answer to be a + c d + c e + 1. And before someone suggests it: I don't want to use a :> 1 - b, because for aesthetic purposes I'd like to keep both a and b in my expression as long as the a + b = 1 simplification cannot be made.
In addition, how do I get it to replace all instances of 1 - b or -b + 1 with a, and all instances of -1 + b or b - 1 with -a, and vice versa?
Here's an example for this part:
expr = b + c (1 - a) + (-1 + b)(a - 1) + (1 -a -b) d + 2 a
You can use a customised version of FullSimplify by supplying your own transformation functions and letting FullSimplify figure out the details:
In[1]:= MySimplify[expr_, equivs_] := FullSimplify[expr,
  TransformationFunctions ->
   Prepend[
    Function[x, x - #] & /@ Flatten@Map[{#, -#} &, equivs /. Equal -> Subtract],
    Automatic
   ]
  ]
In[2]:= MySimplify[2a+b+c*d+e*c, {a+b==1}]
Out[2]= a + c(d + e) + 1
equivs /. Equal -> Subtract turns the given equations into expressions that are equal to zero (e.g. a+b==1 -> a+b-1). Flatten@Map[{#,-#}&, ...] then also constructs negated versions and flattens everything into a single list. Function[x, x-#]& /@ turns the zero expressions into functions which subtract the zero expression (the #) from what is later given to them (x) by FullSimplify.
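To make the construction concrete, here is a small inspection sketch (not part of the original answer) showing what the generated functions do for a + b == 1:
eqs = {a + b == 1};
funcs = Function[x, x - #] & /@ Flatten@Map[{#, -#} &, eqs /. Equal -> Subtract];
funcs[[1]][2 a + b + c d]
(* subtracts a + b - 1 once, giving 1 + a + c d *)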
It may be necessary to specify your own ComplexityFunction for FullSimplify, too, if your idea of simple differs from FullSimplify's default ComplexityFunction (which is roughly equivalent to LeafCount), e.g.:
MySimplify[expr_, equivs_] := FullSimplify[expr,
TransformationFunctions ->
Prepend[
    Function[x, x - #] & /@ Flatten@Map[{#, -#} &, equivs /. Equal -> Subtract],
Automatic
],
ComplexityFunction -> (
1000 LeafCount[#] +
Composition[
Total,Flatten,Map[ArrayDepth[#]#&,#]&,CoefficientArrays
][#] &
)
]
In your example case, the default ComplexityFunction works fine, though.
For the first case, you might consider:
expr = b + c d + e c + 2 a
PolynomialReduce[expr, {a + b - 1}, {b, a}][[2]]
For the second case, consider:
expr = b + c (1 - a) + (-1 + b) (a - 1) + (1 - a - b) d + 2 a;
PolynomialReduce[expr, {x + b - 1}][[2]]
(% /. x -> 1 - b) == expr // Simplify
and:
PolynomialReduce[expr, {a + b - 1}][[2]]
Simplify[% == expr /. a -> 1 - b]
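As a quick sanity check (not in the original answer), the remainder returned by PolynomialReduce must agree with the original expression whenever a + b == 1, which you can confirm by substituting a -> 1 - b into both:
expr = b + c d + e c + 2 a;
reduced = PolynomialReduce[expr, {a + b - 1}, {b, a}][[2]];
Simplify[(expr /. a -> 1 - b) == (reduced /. a -> 1 - b)]
(* True *)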
I have the following expression
(-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B))
How can I multiply both the numerator and the denominator by p^(A+B), i.e. get rid of the 1/p terms in both of them? I tried various combinations of Expand, Factor, Simplify, etc., but none of them worked.
Thanks!
I must say I did not understand the original question. However, while trying to understand the intriguing solution given by belisarius I came up with the following:
expr = (-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B));
Together@(PowerExpand@FunctionExpand@Numerator@expr/
   PowerExpand@FunctionExpand@Denominator@expr)
Output (as given by belisarius): (the resulting formula is not reproduced here)
Alternatively:
PowerExpand@FunctionExpand@Numerator@expr/
 PowerExpand@FunctionExpand@Denominator@expr
gives a corresponding result (also not reproduced here), or
FunctionExpand#Numerator#expr/FunctionExpand#Denominator#expr
Thanks to belisarius for another nice lesson in the power of Mma.
If I understand your question, you may teach Mma some algebra:
r = {(k__ + Power[a_, b_]) Power[c_, b_] -> (k Power[c, b] + Power[a c, b]),
p_^(a_ + b_) q_^a_ -> p^b ( q p)^(a),
(a_ + b_) c_ -> (a c + b c)
}
and then define
s1 = ((-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B)))
f[a_, c_] := (Numerator[a ] c //. r)/(Denominator[a ] c //. r)
So that
f[s1, p^(A + B)]
is
((1 - p)^B*p^A)/((1 - p)^(A + B) - p^(A + B))
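A quick consistency check (not part of the answer): at an arbitrary numeric point the rewritten form agrees with the original expression.
orig = (-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B));
new = ((1 - p)^B*p^A)/((1 - p)^(A + B) - p^(A + B));
Simplify[(orig - new) /. {p -> 1/3, A -> 2, B -> 3}]
(* 0 *)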
Simplify should work; but in your case it doesn't make sense to multiply the numerator and the denominator by p^(A+B), since it doesn't cancel the denominators.
Due to DSolve's syntax, systems of differential equations have to be given as lists of equations and not as a vector equation (unlike Solve, which accepts both).
So my simple question is how to convert a vector equation such as:
{f'[t],g'[t]}=={{a,b},{c,d}}.{f[t],g[t]}
To list of equations:
{f'[t]==a*f[t]+b*g[t],g'[t]==c*f[t]+d*g[t]}
I think I knew the answer once, but I can't find it now, and I think it could benefit others as well.
Try using Thread:
Thread[{f'[t], g'[t]} == {{a, b}, {c, d}}.{f[t], g[t]}]
(* {f'[t] == a f[t] + b g[t], g'[t] == c f[t] + d g[t]} *)
It takes the equality operator == and applies it element-wise to the corresponding items of the lists on both sides.
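For completeness (this usage is an addition, not part of Brett's answer), the threaded list can be passed straight to DSolve:
eqs = Thread[{f'[t], g'[t]} == {{a, b}, {c, d}}.{f[t], g[t]}];
DSolve[eqs, {f[t], g[t]}, t]
(* returns the general solution of the linear system; the fully symbolic output is lengthy *)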
The standard answer to this question is the one Brett presented, i.e., using Thread.
However, I find that for use in DSolve, NDSolve, etc., the command LogicalExpand is better.
eqn = {f'[t], g'[t]} == {{a, b}, {c, d}}.{f[t], g[t]};
LogicalExpand[eqn]
(* f'[t] == a f[t] + b g[t] && g'[t] == c f[t] + d g[t] *)
It doesn't convert a vector equation to a list, but it is more useful since it automatically flattens out matrix/tensor equations and combinations of vector equations.
For example, if you wanted to add initial conditions to the above differential equation, you'd use
init = {f[0], g[0]} == {f0, g0};
LogicalExpand[eqn && init]
(* f[0] == f0 && g[0] == g0 &&
f'[t] == a f[t] + b g[t] && g'[t] == c f[t] + d g[t] *)
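As a hedged sketch (an addition to the answer), the combined system built from eqn and init above can go straight into DSolve; the concrete matrix entries below are arbitrary, chosen only to keep the output short:
DSolve[LogicalExpand[eqn && init] /. {a -> 0, b -> 1, c -> -1, d -> 0},
  {f[t], g[t]}, t]
(* expected: f[t] -> f0 Cos[t] + g0 Sin[t], g[t] -> g0 Cos[t] - f0 Sin[t], possibly in an equivalent form *)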
An example of a matrix equation is
mEqn = Array[a, {2, 2}] == Partition[Range[4], 2];
Using Thread here is awkward: you need to apply it multiple times and Flatten the result. Using LogicalExpand is easy:
LogicalExpand[mEqn]
(* a[1, 1] == 1 && a[1, 2] == 2 && a[2, 1] == 3 && a[2, 2] == 4 *)