I want to solve (or simplify; I'm not sure of the correct term here) this equation for the symbolic variable 'a':
a == -1 - ((f - af) / n)
with SageMath. I should be expecting an answer of:
a == (-f - n) / (n- f)
I was able to do this using https://mathpapa.com/algebra-calculator.html but wasn't sure if it was possible with SageMath. I've tried a few things with solve and simplify but couldn't get anything to work.
Do this in three steps.
First declare the symbolic variables in Sage's symbolic ring (SR):
sage: a, f, n = SR.var('a, f, n')
Then define the equation, writing multiplication a * f explicitly:
sage: equation = a == -1 - ((f - a*f) / n)
Solve the equation in terms of the variable a:
sage: solve(equation, a)
[a == (f + n)/(f - n)]
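Note that this agrees with the answer you expected: multiplying numerator and denominator by -1 turns (f + n)/(f - n) into (-f - n)/(n - f), so the two forms are the same.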
I have started playing around with Cubical Agda. The last thing I tried was building the type of integers (assuming the type of naturals is already defined) in a way similar to how it is done in classical mathematics (see the construction of the integers on Wikipedia). This is:
data dInt : Set where
  _⊝_ : ℕ → ℕ → dInt
  canc : ∀ a b c d → a + d ≡ b + c → a ⊝ b ≡ c ⊝ d
  trunc : isSet dInt
After doing that, I wanted to define addition
_++_ : dInt → dInt → dInt
(x ⊝ z) ++ (u ⊝ v) = (x + u) ⊝ (z + v)
(x ⊝ z) ++ canc a b c d u i = canc (x + a) (z + b) (x + c) (z + d) {! !} i
...
I am now stuck on the part between the two braces. A term of type x + a + (z + d) ≡ z + b + (x + c) is asked for. Not wanting to prove this by hand, I wanted to use the ring solver provided with Cubical Agda, but I could never manage to make it work, even when trying to set it up for simple ring equalities like x + x + x ≡ 3 * x.
How can I make it work? Is there a minimal example of using it for naturals? There is a file NatExamples.agda in the library, but it forces you to rewrite your equalities in a convoluted way.
You can see how the solver for natural numbers is supposed to be used in this file in the cubical library:
Cubical/Tactics/NatSolver/Examples.agda
Note that this solver is different from the solver for commutative rings, which is designed for proving equations in abstract rings and is explained here:
Cubical/Tactics/CommRingSolver/Examples.agda
However, if I read your problem correctly, the equality you want to prove requires the use of other propositional equalities in Nat. This is not supported by any solver in the cubical library (as far as I know, the standard library's solvers don't support it either). But of course, you can use the solver for all the steps that don't use other equalities.
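For instance, the goal above splits into three steps (a sketch); only the middle one needs the hypothesis u : a + d ≡ b + c, while the outer two are pure reorderings a solver can discharge:
(x + a) + (z + d) ≡ (x + z) + (a + d)   -- solver
                  ≡ (x + z) + (b + c)   -- cong (λ w → (x + z) + w) u
                  ≡ (z + b) + (x + c)   -- solver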
Just in case you didn't spot this: here is a math-style definition of the integers using the SetQuotients of the cubical library. SetQuotients help you avoid the work related to your third constructor trunc. This means you basically just need to show that some constructions are well defined, as you would in 'normal' math.
I've successfully used the ring solver for exactly the same problem: defining Int as a quotient of ℕ ⨯ ℕ. You can find the complete file here; the relevant parts are the following:
Non-cubical propositional equality to path equality:
open import Cubical.Core.Prelude renaming (_+_ to _+̂_)
open import Relation.Binary.PropositionalEquality renaming (refl to prefl; _≡_ to _=̂_) using ()
fromPropEq : ∀ {ℓ A} {x y : A} → _=̂_ {ℓ} {A} x y → x ≡ y
fromPropEq prefl = refl
An example of using the ring solver:
open import Function using (_$_)
import Data.Nat.Solver
open Data.Nat.Solver.+-*-Solver
using (prove; solve; _:=_; con; var; _:+_; _:*_; :-_; _:-_)
reorder : ∀ x y a b → (x +̂ a) +̂ (y +̂ b) ≡ (x +̂ y) +̂ (a +̂ b)
reorder x y a b = fromPropEq $ solve 4 (λ x y a b → (x :+ a) :+ (y :+ b) := (x :+ y) :+ (a :+ b)) prefl x y a b
So here, even though the ring solver gives us a proof of _=̂_, we can use _=̂_'s K and _≡_'s reflexivity to turn that into a path equality, which can be used further downstream, e.g. to prove that Int addition is representative-invariant.
I was reading the implementation of (^) in the standard Haskell library:
(^) :: (Num a, Integral b) => a -> b -> a
x0 ^ y0 | y0 < 0    = errorWithoutStackTrace "Negative exponent"
        | y0 == 0   = 1
        | otherwise = f x0 y0
  where -- f : x0 ^ y0 = x ^ y
        f x y | even y    = f (x * x) (y `quot` 2)
              | y == 1    = x
              | otherwise = g (x * x) ((y - 1) `quot` 2) x
        -- g : x0 ^ y0 = (x ^ y) * z
        g x y z | even y    = g (x * x) (y `quot` 2) z
                | y == 1    = x * z
                | otherwise = g (x * x) ((y - 1) `quot` 2) (x * z)
Now, the part where g is defined seems odd to me. Why not just implement it like this:
expo :: (Num a, Integral b) => a -> b -> a
expo x0 y0
  | y0 == 0   = 1
  | y0 < 0    = errorWithoutStackTrace "Negative exponent"
  | otherwise = f x0 y0
  where
    f x y | even y    = f (x * x) (y `quot` 2)
          | y == 1    = x
          | otherwise = x * f x (y - 1)
But indeed, plugging in, say, 3^1000000 shows that (^) is about 0.04 seconds faster than expo.
Why is (^) faster than expo?
As the person who wrote the code, I can tell you why it's complex. :)
The idea is to be tail recursive to get loops, and also to perform the minimum number of multiplications. I don't like the complexity, so if you find a more elegant way please file a bug report.
A function is tail-recursive if the return value of a recursive call is returned as-is, without further processing. In expo, f is not tail-recursive, because of otherwise = x * f x (y-1): the return value of f is multiplied by x before it is returned. Both f and g in (^) are tail-recursive, because their return values are returned unmodified.
Why does this matter? Tail-recursive functions can be implemented much more efficiently than general recursive functions. Because the compiler doesn't need to create a new context (stack frame, what have you) for a recursive call, it can reuse the caller's context as the context of the recursive call. This saves a lot of the overhead of calling a function, much like inlining a function is more efficient than calling the function proper.
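To illustrate, here is a sketch (a hypothetical expoAcc, not the library code) of expo rewritten with an explicit accumulator, which is essentially what g does in (^):
expoAcc :: (Num a, Integral b) => a -> b -> a
expoAcc x0 y0 = go x0 y0 1
  where
    -- invariant: x0 ^ y0 == (x ^ y) * acc; assumes y0 >= 0
    go x y acc
      | y == 0    = acc
      | y == 1    = acc * x
      | even y    = go (x * x) (y `quot` 2) acc
      | otherwise = go (x * x) ((y - 1) `quot` 2) (acc * x)
Every call to go returns its result unmodified, so the compiler can compile it as a loop.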
Whenever you see a bread-and-butter function in the standard library and it's implemented weirdly, the reason is almost always "because doing it like that triggers some special performance-critical optimization [possibly in a different version of the compiler]".
These odd workarounds are usually there to "force" the compiler to notice that some specific, important optimization is possible (e.g., to force a particular argument to be considered strict, to allow a worker/wrapper transformation, whatever). Typically somebody has compiled their program, noticed it's epically slow, complained to the GHC devs, and they looked at the compiled code and thought "oh, GHC isn't seeing that it can inline that 3rd worker function... how do I fix that?" The result is that if you rephrase the code just slightly, the desired optimization then fires.
You say you tested it and there's not much speed difference. You didn't say for what type. (Is the exponent Int or Integer? What about the base? It's quite possible it makes a significant difference in some obscure case.)
Occasionally functions are also implemented weirdly to maintain strictness / laziness guarantees. (E.g., the library spec says it has to work a certain way, and implementing it the most obvious way would make the function more strict / less strict than the spec claims.)
I don't know what's up with this specific function, but I would suggest @chi is probably onto something.
I'm a beginner in Haskell; I've just now started learning about folds and whatnot, in my first year of college.
One of the problems I'm facing now is to define Euclid's algorithm using the until function.
Here's Euclid's recursive definition (EDIT: this is just to show how the algorithm works; I'm trying to define it without explicit recursion, using only until):
gcd a b = if b == 0 then a else gcd b (a `mod` b)
Here's what I have using until:
gcd a b = until (==0) (mod a ) b
Obviously this doesn't make any sense, since it's always going to return 0: that is my stopping point, instead of returning the value of a when b == 0. I can't for the life of me figure out how to get the value of a, though.
Any help is appreciated.
Thank you in advance guys.
Hints:
Now
until :: (a -> Bool) -> (a -> a) -> a -> a
so we need a function that we can apply repeatedly until a condition holds, but we have two numbers, a and b, so how can we do that?
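To get a feel for how until iterates on a single value, here is a small example (easy to check by hand):
ghci> until (> 100) (* 2) 1
128
It keeps applying (* 2) to 1 until the predicate (> 100) holds.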
The solution is to make the two numbers into one value, (a,b), so think of gcd this way:
uncurriedGCD (a,b) = if b == 0 then (a,a) else uncurriedGCD (b,a `mod` b)
Now you can make two functions, next and check, and use them with until.
Helpers for until:
next (a,b) = (b,a `mod` b)
check (a,b) = b == 0
This means that we now could have written uncurriedGCD using until.
Answer:
For example:
ghci> until check next (6,4)
(2,0)
ghci> until check next (12,18)
(6,0)
So we can define:
gcd a b = c where (c,_) = until check next (a,b)
giving:
ghci> gcd 20 44
4
ghci> gcd 60 108
12
What Euclid's algorithm says is this: for (a, b), keep computing (b, a `mod` b) until (the new) b equals zero. This can be translated directly into an implementation using until like this:
myGcd a b = fst $ until (\(_, y) -> y == 0) (\(x, y) -> (y, x `mod` y)) (a, b)
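For example, myGcd 12 18 steps through (12,18) → (18,12) → (12,6) → (6,0) and returns fst (6,0) = 6.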
f[n_] := ((A*n^a)^(1/s) +
c*(B*(a*c*(B/A)^(1/s)*n^(1 - (a/s)))^(-(a*s)/(a - s)))^(1/s))^s +
b*log (1 - n - ((a*c*(B/A)^(1/s)*n^(1 - (a/s)))^(-(a*s)/(a - s))))
d/dn (f (n))
d/dn (f[n])
D[f[n], n]
solve (D[f[n], n] = 0)
0
Solve[D[f[n], n] = 0, n]
Solve[0, n]
Maximize[f[n], n]
Maximize[b log (1 - n - (a (B/A)^(1/s) c n^(1 - a/s))^(-((a s)/(a - s)))) + ((A n^a)^(1/s)
+ c (B (a (B/A)^(1/s) c n^(1 - a/s))^(-((a s)/(a - s))))^(1/s))^s, n]
I am not getting anything back from any of these commands. Any idea why?
Attaching a photo of the Mathematica script.
First of all, you're using solve with a lowercase s, which is just an undefined symbol. To use the built-in function Solve you need to write it with a capital letter. In the same way, you have to write Log with a capital letter, not lowercase, since it's a built-in function.
Second, you're using parentheses where brackets are needed. Function calls in Mathematica require square brackets, like Solve[ ... ], not Solve( ... ).
Third, you're using = instead of ==. A single equals sign = assigns a value to a variable; a double equals == expresses equality.
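For example, after these three fixes your Solve line reads:
Solve[D[f[n], n] == 0, n]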
See if you can get it to work after remedying these errors.
I have the following expression
(-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B))
How can I multiply both the numerator and the denominator by p^(A+B), i.e. get rid of the inner 1/p fractions in both of them? I tried various Expand, Factor, Simplify etc., but none of them worked.
Thanks!
I must say I did not understand the original question. However, while trying to understand the intriguing solution given by belisarius I came up with the following:
expr = (-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B));
Together@(PowerExpand@FunctionExpand@Numerator@expr/PowerExpand@FunctionExpand@Denominator@expr)
Output (the same expression belisarius obtains below): ((1 - p)^B*p^A)/((1 - p)^(A + B) - p^(A + B))
Alternatively:
PowerExpand@FunctionExpand@Numerator@expr/PowerExpand@FunctionExpand@Denominator@expr
gives an equivalent form (not combined over a common denominator), or
FunctionExpand@Numerator@expr/FunctionExpand@Denominator@expr
Thanks to belisarius for another nice lesson in the power of Mma.
If I understand your question, you may teach Mma some algebra:
r = {(k__ + Power[a_, b_]) Power[c_, b_] -> (k Power[c, b] + Power[a c, b]),
p_^(a_ + b_) q_^a_ -> p^b ( q p)^(a),
(a_ + b_) c_ -> (a c + b c)
}
and then define
s1 = ((-1 + 1/p)^B/(-1 + (-1 + 1/p)^(A + B)))
f[a_, c_] := (Numerator[a ] c //. r)/(Denominator[a ] c //. r)
So that
f[s1, p^(A + B)]
is
((1 - p)^B*p^A)/((1 - p)^(A + B) - p^(A + B))
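This matches what you get by hand (assuming p is a positive real, so the powers split, which is also what PowerExpand assumes): since -1 + 1/p == (1 - p)/p, multiplying numerator and denominator by p^(A + B) gives
numerator: ((1 - p)/p)^B * p^(A + B) == (1 - p)^B * p^A
denominator: (-1 + ((1 - p)/p)^(A + B)) * p^(A + B) == (1 - p)^(A + B) - p^(A + B)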
Simplify should work, but in your case it doesn't make sense to multiply numerator and denominator by p^(A+B); it doesn't cancel the denominators.