Suppose I have a lemma L that says
forall x, x + 1 + 1 = x + 2.
If my goal is of the form a + 1 + 1 = b,
I can use the tactic rewrite L to get a goal of the form a + 2 = b.
However, if my goal is of the form a + 2 = b,
how do I apply the lemma backwards to get the goal a + 1 + 1 = b?
Use
rewrite <- L. (* rewrite right to left *)
For symmetry, there is also rewrite -> L, which is the same as plain rewrite L (rewrite left to right).
This is documented in Coq's tactic reference.
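For example, here is a minimal self-contained sketch (the lemma itself is proved with lia, so it needs Require Import Lia; the hypothesis name H is just for illustration):

Require Import Lia.

Lemma L : forall x, x + 1 + 1 = x + 2.
Proof. intros x. lia. Qed.

Goal forall a b, a + 1 + 1 = b -> a + 2 = b.
Proof.
  intros a b H.
  rewrite <- L. (* the goal a + 2 = b becomes a + 1 + 1 = b *)
  exact H.
Qed.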
I have an equation (parentheses are used because of VBA code)
Y = (P/(12E((b t^3)/12))*A)
and I know every variable except b. Is there any way to ask Wolfram Alpha to "rearrange" (not solve) the equation so I can see something like the following (I tried to do it manually, but the result is not right):
b=((P/EY)*12A))/t^3
I want to see what the correct rearrangement looks like. The original equation is in the picture below, where the part in brackets is what I abbreviated as A.
I'm not sure whether there's a way to tell Wolfram|Alpha to rearrange for a particular variable; it will usually try to rearrange for x or y.
If I substitute x for b in your equation and use the following query:
solve Y - (P/(12E((xt^3)/12))*A) = 0
then Wolfram Alpha returns the result you're looking for: x (i.e. b) expressed in terms of the other variables. Specifically:
x = A P / (E t^3 Y), for t Y != 0 and A P != 0
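To see why, simplify the equation by hand: the two factors of 12 cancel, so

Y = (P / (12 E (b t^3 / 12))) A = A P / (E b t^3)

and solving for b gives

b = A P / (E t^3 Y)

which is exactly the result above.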
I know that your question was about Wolfram Alpha and that you do not want to "solve", but here is one way you could do it in Mathematica, using the full equation from your question. I renamed I to J because I is a reserved symbol in Mathematica (the imaginary unit).
J = b t^3/12;
expr = (P / (12 E J) ) (4 L1^3 + 3 R ( 2 Pi L1^2 + Pi R^2 + 8 L1 R ) + 12 L2 (L1 + R)^2)
Solve[ Y == expr , b]
Result
{{b -> (P (4 L1^3 + 12 L1^2 L2 + 24 L1 L2 R + 6 L1^2 \[Pi] R + 24 L1 R^2 + 12 L2 R^2 + 3 \[Pi] R^3))/(E t^3 Y)}}
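Note that this agrees with the Wolfram Alpha result above: writing A for the whole bracketed factor, the solution collapses to b -> A P / (E t^3 Y).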
I read that implications are functions, but I am having a hard time understanding the example given on the page mentioned above:
The proof term for an implication P → Q is a function that takes
evidence for P as input and produces evidence for Q as its output.
Lemma silly_implication : (1 + 1) = 2 -> 0 * 3 = 0.
Proof. intros H. reflexivity. Qed.
We can see that the proof term for the above lemma is indeed a
function:
Print silly_implication.
(* ===> silly_implication = fun _ : 1 + 1 = 2 => eq_refl
        : 1 + 1 = 2 -> 0 * 3 = 0 *)
Indeed, it's a function. But its type does not look right to me. From my reading, the proof term for P -> Q should be a function with evidence for Q as its output. So the output of a proof of (1 + 1) = 2 -> 0 * 3 = 0 should be evidence for 0 * 3 = 0 alone, right?
But the Coq printout above shows the function's image as eq_refl : 1 + 1 = 2 -> 0 * 3 = 0 instead of eq_refl : 0 * 3 = 0. I don't understand why the hypothesis 1 + 1 = 2 should appear in the output. Can anyone help explain what is going on here?
Thanks.
Your understanding is correct until:
But the Coq print out above shows that the function image is ...
I think you misunderstand the Print command. Print shows you the term associated with a definition, along with the type of the definition. It does not show the image/output of a function.
For example, the following prints the definition and type of the value x:
Definition x := 5.
Print x.
> x = 5
> : nat
Similarly, the following prints the definition and type of the function f:
Definition f := fun n => n + 2.
Print f.
> f = fun n : nat => n + 2
> : nat -> nat
If you want to see the function's codomain, you have to apply the function to a value, like so:
Definition fx := f x.
Print fx.
> fx = f x
> : nat
If you want to see the image/output of a function, Print won't help you. What you need is Compute. Compute takes a term (e.g. a function application) and reduces it as far as possible:
Compute (f x).
> = 7
> : nat
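Coming back to your lemma: the type printed after the term is the type of the whole function, 1 + 1 = 2 -> 0 * 3 = 0, not the type of its output. If you actually apply the proof term to evidence for 1 + 1 = 2, the result has type 0 * 3 = 0 alone, just as you expected. For instance (eq_refl works here as evidence for 1 + 1 = 2, since both sides compute to 2):

Check silly_implication eq_refl.
> silly_implication eq_refl
> : 0 * 3 = 0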
In Maxima, I want to change the following equation:
ax+b-c-d=0
into the following format
(ax+b)/(c+d)=1
Note:
something like ax+b-c-d+1=1 is not what I want.
Basically I want to have positive elements in one side and negative elements in another side, then divide the positive elements by the negative elements.
Here is a quick attempt. It handles some equations of the form you described, but it's probably easy to find some which it can't handle. Maybe it works well enough, or at least provides some inspiration.
ptermp (e) := symbolp(e) or (numberp(e) and e > 0)
or ((op(e) = "+" or op(e) = "*") and every (ptermp, args(e)));
matchdeclare (pterm, ptermp);
matchdeclare (otherterm, all);
defrule (r1, pterm + otherterm = 0, ratsimp (pterm/(-otherterm)) = 1);
NOTE: the catch-all otherterm must precede pterm alphabetically! This is a useful, but obscure, consequence of the simplification of "+" expressions and the pattern-matching process ... sorry for the obscurity.
Examples:
apply1 (a*x - b - c + d = 0, r1);
a x + d
------- = 1
c + b
apply1 (a*x - (b + g) - 2*c + d*e*f = 0, r1);
a x + d e f
----------- = 1
g + 2 c + b
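And, assuming the rule handles the original equation from the question the same way (a*x and b are the positive terms, c and d the negative ones), it should produce something like:

apply1 (a*x + b - c - d = 0, r1);
a x + b
------- = 1
 d + c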
A friend of mine showed me a homework exercise from a C++ course he attends. Since I already know C++ but have just started learning Haskell, I tried to solve the exercise the "Haskell way".
These are the exercise instructions (I translated from our native language so please comment if the instructions aren't clear):
Write a program which reads non-zero coefficients (A,B,C,D) from the user and places them in the following equation:
A*x + B*y + C*z = D
The program should also read from the user N, which represents a range. The program should find all possible integral solutions for the equation in the range -N/2 to N/2.
For example:
Input: A = 2,B = -3,C = -1, D = 5, N = 4
Output: (-1,-2,-1), (0,-2,1), (0,-1,-2), (1,-1,0), (2,-1,2), (2,0,-1)
The most straight-forward algorithm is to try all possibilities by brute force. I implemented it in Haskell in the following way:
triSolve :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
triSolve a b c d n =
    let equation x y z = (a * x + b * y + c * z) == d
        minN = div (-n) 2
        maxN = div n 2
    in [(x,y,z) | x <- [minN..maxN], y <- [minN..maxN], z <- [minN..maxN], equation x y z]
So far so good, but the exercise instructions note that a more efficient algorithm can be implemented, so I thought about how to do better. Since the equation is linear, and based on the assumption that Z is always the first variable to be incremented, once a solution has been found there is no point in incrementing Z further. Instead, I should increment Y, reset Z to the minimum value of the range, and keep going. This way I can skip redundant checks.
Since there are no loops in Haskell (to my understanding, at least), I realized that such an algorithm should be implemented using recursion. I implemented it in the following way:
solutions :: (Integer -> Integer -> Integer -> Bool) -> Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
solutions f maxN minN x y z
    | solved                              = (x,y,z) : nextCall x (y + 1) minN
    | x >= maxN && y >= maxN && z >= maxN = []
    | z >= maxN && y >= maxN              = nextCall (x + 1) minN minN
    | z >= maxN                           = nextCall x (y + 1) minN
    | otherwise                           = nextCall x y (z + 1)
  where
    solved   = f x y z
    nextCall = solutions f maxN minN

triSolve' :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
triSolve' a b c d n =
    let equation x y z = (a * x + b * y + c * z) == d
        minN = div (-n) 2
        maxN = div n 2
    in solutions equation maxN minN minN minN minN
Both yield the same results. However, trying to measure the execution time yielded the following results:
*Main> length $ triSolve' 2 (-3) (-1) 5 100
3398
(2.81 secs, 971648320 bytes)
*Main> length $ triSolve 2 (-3) (-1) 5 100
3398
(1.73 secs, 621862528 bytes)
Meaning that the dumb algorithm actually performs better than the more sophisticated one. Based on the assumption that my algorithm is correct (which I hope won't turn out to be wrong :) ), I assume the second algorithm suffers from overhead created by the recursion, which the first algorithm avoids since it's implemented using a list comprehension.
Is there a way to implement a better algorithm than the dumb one in Haskell?
(Also, I'd be glad to receive general feedback about my coding style.)
Of course there is. We have:
a*x + b*y + c*z = d
and as soon as we assume values for x and y, we have that
a*x + b*y = n
where n is a number we know.
Hence
c*z = d - n
z = (d - n) / c
And we keep only integral zs.
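For instance, with a = 2, b = -3, c = -1, d = 5 and (x,y) = (0,-2), we get n = 6, so z = (5 - 6)/(-1) = 1, recovering the solution (0,-2,1) from your example output. This replaces the innermost loop with a single division, so the search does O(N^2) work instead of O(N^3).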
It's worth noting that list comprehensions are given special treatment by GHC, and are generally very fast. This could explain why your triSolve (which uses a list comprehension) is faster than triSolve' (which doesn't).
For example, the solution
solve :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
-- "Buffalo buffalo buffalo buffalo Buffalo buffalo buffalo..."
solve a b c d n =
    [ (x,y,z) | x <- vals, y <- vals
              , let p = a*x + b*y
              , let z = (d - p) `div` c
              , z >= minN, z <= maxN, c * z == d - p ]
  where
    minN = negate (n `div` 2)
    maxN = n `div` 2
    vals = [minN..maxN]
runs fast on my machine:
> length $ solve 2 (-3) (-1) 5 100
3398
(0.03 secs, 4111220 bytes)
whereas the equivalent code written using do notation (note that guard comes from Control.Monad):

import Control.Monad (guard)

solveM :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
solveM a b c d n = do
    x <- vals
    y <- vals
    let p = a * x + b * y
        z = (d - p) `div` c
    guard $ z >= minN
    guard $ z <= maxN
    guard $ z * c == d - p
    return (x,y,z)
  where
    minN = negate (n `div` 2)
    maxN = n `div` 2
    vals = [minN..maxN]
takes twice as long to run and uses twice as much memory:
> length $ solveM 2 (-3) (-1) 5 100
3398
(0.06 secs, 6639244 bytes)
Usual caveats about testing within GHCi apply -- if you really want to see the difference, you need to compile the code with -O2 and use a decent benchmarking library (like criterion).
How do I tell Mathematica to do this replacement smartly? (Or: how do I get smarter at telling Mathematica to do what I want?)
expr = b + c d + e c + 2 a;
expr /. a + b :> 1
Out = 2 a + b + c d + e c
I expect the answer to be a + c d + e c + 1. And before someone suggests it: I don't want to do a :> 1 - b, because for aesthetic purposes I'd like to keep both a and b in my equation as long as the a + b = 1 simplification cannot be made.
In addition, how do I get it to replace all instances of 1 - b or -b + 1 with a, and all instances of -1 + b or b - 1 with -a, and vice versa?
Here's an example for this part:
expr = b + c (1 - a) + (-1 + b) (a - 1) + (1 - a - b) d + 2 a
You can use a customised version of FullSimplify by supplying your own transformations to FullSimplify and let it figure out the details:
In[1]:= MySimplify[expr_, equivs_] := FullSimplify[expr,
          TransformationFunctions ->
            Prepend[
              Function[x, x - #] & /@ Flatten@Map[{#, -#} &, equivs /. Equal -> Subtract],
              Automatic
            ]
        ]
In[2]:= MySimplify[2a+b+c*d+e*c, {a+b==1}]
Out[2]= a + c(d + e) + 1
equivs /. Equal -> Subtract turns the given equations into expressions that are equal to zero (e.g. a+b==1 becomes a+b-1). Flatten@Map[{#, -#} &, ...] then also constructs negated versions and flattens them into a single list. Function[x, x - #] & /@ turns the zero expressions into functions which subtract the zero expressions (the #) from whatever is later given to them (x) by FullSimplify.
It may be necessary to specify your own ComplexityFunction for FullSimplify, too, if your idea of simple differs from FullSimplify's default ComplexityFunction (which is roughly equivalent to LeafCount), e.g.:
MySimplify[expr_, equivs_] := FullSimplify[expr,
  TransformationFunctions ->
    Prepend[
      Function[x, x - #] & /@ Flatten@Map[{#, -#} &, equivs /. Equal -> Subtract],
      Automatic
    ],
  ComplexityFunction -> (
    1000 LeafCount[#] +
      Composition[
        Total, Flatten, Map[ArrayDepth[#] # &, #] &, CoefficientArrays
      ][#] &
  )
]
In your example case, the default ComplexityFunction works fine, though.
For the first case, you might consider:
expr = b + c d + e c + 2 a
PolynomialReduce[expr, {a + b - 1}, {b, a}][[2]]
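If I have set up the reduction correctly, the remainder (the second element of the result) should be 1 + a + c d + e c, i.e. exactly the a + c d + e c + 1 form you asked for.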
For the second case, consider:
expr = b + c (1 - a) + (-1 + b) (a - 1) + (1 - a - b) d + 2 a;
PolynomialReduce[expr, {x + b - 1}][[2]]
(% /. x -> 1 - b) == expr // Simplify
and:
PolynomialReduce[expr, {a + b - 1}][[2]]
Simplify[% == expr /. a -> 1 - b]
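(If my algebra is right, the remainder in this last reduction is -b^2 + b c + 2, and the Simplify checks in both cases return True.)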