
How do you leave out part of the code when quoting programming code?
In particular I have the following code snippet (from the Lean proof assistant):
def single (a : α) (b : β) : α →₀ β :=
⟨λa', if a = a' then b else 0,
finite_subset (#finite_singleton α a) $ assume a', by by_cases h : a = a';
simp [h]⟩
And I want to leave out a part, like:
def single (a : α) (b : β) : α →₀ β :=
⟨λa', if a = a' then b else 0,
[...]⟩
For text quotes I know that we can use [...] to indicate an omitted part.
But what do we use in the case of programming code quotes?

Lean has sorry, which is even syntactically valid when the omitted part is an expression, as it is here. So if you are writing an exercise in Lean and you want to provide a partially filled answer that the reader could try to complete:
def single (a : α) (b : β) : α →₀ β :=
⟨λa', if a = a' then b else 0,
sorry⟩
It differs from _ in that Lean will not try to fill in the blank when you use sorry.
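As a small illustration (the example declarations are mine, not from the original snippet, and the exact messages vary by Lean version):
example : 2 + 2 = 4 := sorry   -- accepted, but Lean warns that the declaration uses sorry
example : 2 + 2 = 4 := _       -- rejected: Lean tries to synthesize the placeholder and fails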

Related

How do you use the Ring solver in Cubical Agda?

I have started playing around with Cubical Agda. The last thing I tried was building the type of integers (assuming the type of naturals is already defined) in a way similar to how it is done in classical mathematics (see the construction of the integers on Wikipedia). This is what I have:
data dInt : Set where
  _⊝_ : ℕ → ℕ → dInt
  canc : ∀ a b c d → a + d ≡ b + c → a ⊝ b ≡ c ⊝ d
  trunc : isSet (dInt)
After doing that, I wanted to define addition
_++_ : dInt → dInt → dInt
(x ⊝ z) ++ (u ⊝ v) = (x + u) ⊝ (z + v)
(x ⊝ z) ++ canc a b c d u i = canc (x + a) (z + b) (x + c) (z + d) {! !} i
...
I am now stuck on the part between the two braces. A term of type x + a + (z + d) ≡ z + b + (x + c) is required. Not wanting to prove this by hand, I wanted to use the ring solver written in Cubical Agda, but I could never manage to make it work, even when trying to set it up for simple ring equalities like x + x + x ≡ 3 * x.
How can I make it work? Is there a minimal example of using it on the naturals? There is a file NatExamples.agda in the library, but it forces you to rewrite your equalities in a convoluted way.
You can see how the solver for natural numbers is supposed to be used in this file in the cubical library:
Cubical/Tactics/NatSolver/Examples.agda
Note that this solver is different from the solver for commutative rings, which is designed for proving equations in abstract rings and is explained here:
Cubical/Tactics/CommRingSolver/Examples.agda
However, if I read your problem correctly, the equality you want to prove requires the use of other propositional equalities in Nat. This is not supported by any solver in the cubical library (as far as I know, the standard library doesn't support it either). But of course, you can use the solver for all the steps that don't use other equalities.
Just in case you didn't spot this: here is a definition of the integers in math-style using the SetQuotients of the cubical library. SetQuotients help you to avoid the work related to your third constructor trunc. This means you basically just need to show some constructions are well defined as you would in 'normal' math.
I've successfully used the ring solver for exactly the same problem: defining Int as a quotient of ℕ ⨯ ℕ. You can find the complete file here; the relevant parts are the following:
Non-cubical propositional equality to path equality:
open import Cubical.Core.Prelude renaming (_+_ to _+̂_)
open import Relation.Binary.PropositionalEquality renaming (refl to prefl; _≡_ to _=̂_) using ()
fromPropEq : ∀ {ℓ A} {x y : A} → _=̂_ {ℓ} {A} x y → x ≡ y
fromPropEq prefl = refl
An example of using the ring solver:
open import Function using (_$_)
import Data.Nat.Solver
open Data.Nat.Solver.+-*-Solver
using (prove; solve; _:=_; con; var; _:+_; _:*_; :-_; _:-_)
reorder : ∀ x y a b → (x +̂ a) +̂ (y +̂ b) ≡ (x +̂ y) +̂ (a +̂ b)
reorder x y a b = fromPropEq $ solve 4 (λ x y a b → (x :+ a) :+ (y :+ b) := (x :+ y) :+ (a :+ b)) prefl x y a b
So here, even though the ring solver gives us a proof of _=̂_, we can use _=̂_'s K and _≡_'s reflexivity to turn that into a path equality which can be used further downstream to e.g. prove that Int addition is representative-invariant.

Linear Temporal Logic (LTL) questions

[] = always
O = next
! = negation
<> = eventually
I'm wondering: is []<> equivalent to just []?
I'm also having a hard time understanding how to distribute temporal operators, for example over these formulas:
[][] (a OR !b)
!<>(!a AND b)
[]([] a ==> <> b)
I'll use the following notations:
F = eventually
G = always
X = next
U = until
In my model-checking course, we defined LTL the following way:
LTL: p | φ ∧ ψ | ¬φ | Xφ | φ U ψ
With F (future) being syntactic sugar for:
Fφ = True U φ
and G (globally):
Gφ = ¬F¬φ
With that, your question is:
Is it true that Gφ <=> GFφ?
GFφ <=> G (True U φ)
Knowing that:
P ⊧ φ U ψ <=> exists i >= 0: P_(>= i) ⊧ ψ AND forall 0 <= j < i: P_(>= j) ⊧ φ
From that, we can see that GFφ says that, at all times, φ must be verified at some later time i, and before that time (for every j before i) True must hold, which is trivial.
But Gφ indicates that φ must always be true, "from now to forever" and not "from i to forever".
G p indicates that at all times p holds. GF p indicates that at all times, eventually p will hold. So while the infinite trace pppppp... satisfies both specifications, an infinite trace of the form p(!p)(!p)p(!p)p... satisfies only GF p, not G p.
To be clear, both of these example traces need to contain infinitely many positions where p holds. But in the case of GF p, and only in this case, it is acceptable for there to be positions in between where p does not hold.
So the short answer to the above question by counterexample is: no, those two specifications aren't the same.
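For a concrete check, here is a small Python sketch (mine, not part of the answers above; the function names are made up) that evaluates G p and GF p on ultimately periodic traces given as a finite prefix plus an infinitely repeated loop, where each position is True if p holds there:

def always_p(prefix, loop):
    # G p: p must hold at every position of the trace
    return all(prefix) and all(loop)

def always_eventually_p(prefix, loop):
    # GF p: from every position there is a later position where p holds;
    # for a prefix+loop trace this reduces to: p holds somewhere in the loop
    return any(loop)

# trace ppppp...      : satisfies both G p and GF p
print(always_p([], [True]), always_eventually_p([], [True]))                  # True True
# trace p(!p)p(!p)... : satisfies GF p but not G p
print(always_p([], [True, False]), always_eventually_p([], [True, False]))    # False True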

Theory of Computation. Turing Machine

My attempted answer is in the linked image (Turing Machine).
The question is to construct a Turing Machine which accepts the regular expression,
L = {a^n b^n | n>= 1}.
I am not sure if my answer is correct or wrong. Thank you in advance for your reply.
You cannot "accept the regular expression", only the language it describes. And what you provide is not a regular expression, but a set description. In fact, the language is not regular and therefore cannot be described by standard regular expressions.
The machine from your answer accepts the language described by the regular expression a^+ b^+.
A TM could mark the first a (e.g. by converting it to A) and then delete the first b, with one pass of this loop for each of the n pairs. If you end up with a string consisting only of A's, accept.
As stated before, the language L = {a^n b^n | n >= 1} cannot be described by regular expressions; it does not belong to the class of regular languages. It is, however, context-free, so it can be described by a context-free grammar and recognized by a pushdown automaton (an automaton with LIFO memory, a stack).
A grammar for this language would look something like this:
G = (V, S, R, P)
Where:
V is the finite set of non-terminal symbols, V = { S }
S is the finite set of terminal symbols, S = { a, b }
R is the set of rules that rewrite non-terminals into strings of non-terminals and terminals, in this case R = { S -> aSb, S -> ab }
P is the starting non-terminal, P = S
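As an illustration (this sketch is mine, not part of the answer; the function name is made up), the two rules can be read off directly as a recognizer that peels one a and one b per application of S -> aSb:

def derives_from_S(w):
    # True iff w can be derived from S using S -> aSb | S -> ab
    if w == "ab":                        # rule S -> ab
        return True
    if len(w) >= 4 and w[0] == "a" and w[-1] == "b":
        return derives_from_S(w[1:-1])   # rule S -> aSb
    return False

print(derives_from_S("aaabbb"))  # True
print(derives_from_S("aabbb"))   # False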
A pushdown automaton recognizing this language is a bit more complex, as it is a 7-tuple M = (Q, S, G, D, q0, Z, F), where:
Q is the set of states
S is the input alphabet
G is the stack alphabet
D is the transition relation
q0 is the start state
Z is the initial stack symbol
F is the set of accepting states
For our case, it would be:
Q = { q0, q1, qF }
S = { a, b }
G = { z0, X }
D takes the form of a relation (current state, input character, top of stack) -> (next state, new stack top), meaning you can move to a different state and rewrite the top of the stack (erase it, replace it, or leave it as it is):
(q0, a, z0) -> (q0, Xz0) - reading the first a
(q0, a, X) -> (q0, XX) - reading consecutive a's
(q0, b, X) -> (q1, e) - reading first b
(q1, b, X) -> (q1, e) - reading consecutive b's
(q1, e, z0) -> (qF, e) - end of input, only z0 left on the stack: accept
where e is the empty word (sometimes called epsilon)
q0 = q0
Z = z0
F = { qF }
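Since this particular automaton is deterministic, its behaviour can be simulated directly. Here is a small Python sketch (mine, not part of the answer; the function name is made up) that follows exactly the transitions listed above, applying the final epsilon move once the input is exhausted:

def pda_accepts(w):
    state, stack = "q0", ["z0"]            # start state, initial stack symbol
    for ch in w:
        top = stack[-1] if stack else None
        if state == "q0" and ch == "a" and top == "z0":
            stack.append("X")              # (q0, a, z0) -> (q0, Xz0)
        elif state == "q0" and ch == "a" and top == "X":
            stack.append("X")              # (q0, a, X) -> (q0, XX)
        elif state == "q0" and ch == "b" and top == "X":
            stack.pop(); state = "q1"      # (q0, b, X) -> (q1, e)
        elif state == "q1" and ch == "b" and top == "X":
            stack.pop()                    # (q1, b, X) -> (q1, e)
        else:
            return False                   # no applicable transition: reject
    # epsilon move at the end of the input: (q1, e, z0) -> (qF, e)
    return state == "q1" and stack == ["z0"]

print(pda_accepts("aaabbb"))  # True
print(pda_accepts("aab"))     # False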
The language L = {a^n b^n | n >= 1} uses only two characters, a and b: a string in the language consists of some number of a's followed by an equal number of b's, and any such string is accepted. The beginning and end of the string are marked by a $ sign.
Step 1 (state Q0):
Replace a by X and move right; go to state Q1.
Step 2 (state Q1):
Replace a by a and move right, remaining in the same state.
Replace Y by Y and move right, remaining in the same state.
Replace b by Y and move left; go to state Q2.
Step 3 (state Q2):
Replace b by b and move left, remaining in the same state.
Replace a by a and move left, remaining in the same state.
Replace Y by Y and move left, remaining in the same state.
Replace X by X and move right; go to state Q0.
Step 4 (state Q0):
If the symbol is Y, replace it by Y, move right and go to state Q3.
Otherwise go back to step 1.
Step 5 (state Q3):
Replace Y by Y and move right, remaining in the same state.
If the symbol is $, replace it by $ and move left: the string is accepted; go to the final state Q4.
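Here is a small Python sketch (mine, not the machine from the linked image; the function name and table encoding are made up) that simulates the marking strategy described in the steps above, with Q4 as the accepting state:

def tm_accepts(w):
    tape = list("$" + w + "$")            # input between the two $ markers
    pos, state = 1, "Q0"
    # transition table: (state, symbol) -> (write, move, next state)
    delta = {
        ("Q0", "a"): ("X", +1, "Q1"),     # step 1
        ("Q0", "Y"): ("Y", +1, "Q3"),     # step 4
        ("Q1", "a"): ("a", +1, "Q1"),     # step 2
        ("Q1", "Y"): ("Y", +1, "Q1"),
        ("Q1", "b"): ("Y", -1, "Q2"),
        ("Q2", "b"): ("b", -1, "Q2"),     # step 3
        ("Q2", "a"): ("a", -1, "Q2"),
        ("Q2", "Y"): ("Y", -1, "Q2"),
        ("Q2", "X"): ("X", +1, "Q0"),
        ("Q3", "Y"): ("Y", +1, "Q3"),     # step 5
        ("Q3", "$"): ("$", -1, "Q4"),     # accept
    }
    while state != "Q4":
        key = (state, tape[pos])
        if key not in delta:
            return False                  # no applicable rule: reject
        write, move, state = delta[key]
        tape[pos] = write
        pos += move
    return True

print(tm_accepts("aabb"))  # True
print(tm_accepts("aab"))   # False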

Resolving PREDICT/PREDICT conflicts in LL(1)

I'm working on a simple LL(1) parser generator, and I've run into an issue with PREDICT/PREDICT conflicts given certain input grammars. For example, given an input grammar like:
E → E + E
| P
P → 1
I can remove the left recursion from E, replacing it with a roughly equivalent right-recursive rule, thus arriving at the grammar:
E → P E'
E' → + E E'
| ε
P → 1
Next, I can compute the relevant FIRST and FOLLOW sets for the grammar, and end up with the following:
FIRST(E) = { 1 }
FIRST(E') = { +, ε }
FIRST(P) = { 1 }
FOLLOW(E) = { +, EOF }
FOLLOW(E') = { +, EOF }
FOLLOW(P) = { +, EOF }
And finally, using PREDICT(A → α) = (FIRST(α) - {ε}) ∪ (FOLLOW(A) if ε ∈ FIRST(α) else ∅) to construct the PREDICT sets for the grammar, the resulting sets are as follows.
PREDICT(1. E → P E') = { 1 }
PREDICT(2. E' → + E E') = { + }
PREDICT(3. E' → ε) = { +, EOF }
PREDICT(4. P → 1) = { 1 }
So this is where I run into a conflict: PREDICT(2) and PREDICT(3) overlap (both contain +), and thus I cannot produce a parse table, as the grammar is not LL(1) and the parser wouldn't be able to choose which rule to apply.
What I'm really wondering is whether it's possible to resolve the conflict or factor the grammar such that the conflict can be avoided, and produce a legal LL(1) grammar, without having to directly modify the original input grammar.
The problem here is that your original grammar is ambiguous.
E → E + E
E → P
means that P + P + P can be parsed either as (P + P) + P or P + (P + P). Eliminating left recursion doesn't fix the ambiguity, so the modified grammar is also ambiguous. And ambiguous grammars can't be LL(k) (or, for that matter, LR(k)).
So you need to make the grammar unambiguous:
E → E + P
E → P
(That's the common left-associative version.) Once you eliminate left recursion, you end up with:
E → P E'
E' → + P E'
| ε
Now + is not in FOLLOW(E').
(The example is drawn straight from the Dragon book, but simplified; it's example 4.8 in the rather battered old copy I have.)
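To double-check the fix, here is a small Python sketch (mine, not part of the answer; the encoding and helper names are made up) that recomputes FIRST, FOLLOW and PREDICT for the corrected grammar and shows that PREDICT(E' → ε) no longer contains +:

EPS, EOF = "ε", "$"

# the corrected, right-factored grammar; [] encodes the ε-production
grammar = {
    "E":  [["P", "E'"]],
    "E'": [["+", "P", "E'"], []],
    "P":  [["1"]],
}
nonterms = set(grammar)
start = "E"

def first_of(seq, first):
    # FIRST of a sequence of grammar symbols, given the current FIRST sets
    out = set()
    for sym in seq:
        f = first[sym] if sym in nonterms else {sym}
        out |= f - {EPS}
        if EPS not in f:
            return out
    return out | {EPS}

first = {nt: set() for nt in nonterms}
follow = {nt: set() for nt in nonterms}
follow[start].add(EOF)

changed = True
while changed:                            # fixed-point iteration
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            f = first_of(prod, first)
            if not f <= first[nt]:        # grow FIRST(nt)
                first[nt] |= f
                changed = True
            for i, sym in enumerate(prod):
                if sym in nonterms:       # grow FOLLOW(sym)
                    tail = first_of(prod[i + 1:], first)
                    new = (tail - {EPS}) | (follow[nt] if EPS in tail else set())
                    if not new <= follow[sym]:
                        follow[sym] |= new
                        changed = True

def predict(nt, prod):
    f = first_of(prod, first)
    return (f - {EPS}) | (follow[nt] if EPS in f else set())

for nt, prods in grammar.items():
    for prod in prods:
        print(nt, "->", " ".join(prod) or EPS, ":", predict(nt, prod))
# E  -> P E'   : {'1'}
# E' -> + P E' : {'+'}
# E' -> ε      : {'$'}      (no longer overlaps with {'+'})
# P  -> 1      : {'1'}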
It's worth noting that the transformation used here preserves the set of strings derived by the grammar, but not the derivation. The parse tree which results from the modified grammar is effectively right-associative, so it will need to be reprocessed to recover the desired parse. This fact is rather briefly mentioned by the Dragon book authors:
Although left-recursion elimination and left factoring are easy to do, they make the resulting grammar hard to read and difficult to use for translation purposes. (My emphasis)
They go on to suggest that operator precedence parsing can be used for expressions, and then mention that if an LR parser generator is available, dividing the grammar into a predictive part and an operator-precedence part is no longer necessary.

Get mathematica to simplify expression with another equation

I have a very complicated Mathematica expression that I'd like to simplify by introducing a new, possibly dimensionless parameter.
An example of my expression is:
K=a*b*t/((t+f)c*d);
(the actual expression is monstrously large, thousands of characters). I'd like to replace all occurrences of the expression t/(t+f) with p
p=t/(t+f);
The goal here is to find a replacement so that all t's and f's are replaced by p. In this case, the replacement p is a nondimensionalized parameter, so it seems like a good candidate replacement.
I've not been able to figure out how to do this in Mathematica (or whether it's possible). I tried:
eq1= K==a*b*t/((t+f)c*d);
eq2= p==t/(t+f);
Solve[{eq1,eq2},K]
Not surprisingly, this doesn't work. If there were a way to force it to solve for K in terms of p,a,b,c,d, this might work, but I can't figure out how to do that either. Thoughts?
Edit #1 (11/10/11 - 1:30)
[deleted to simplify]
OK, new tack. I've taken p = ton/(ton + toff) and multiplied p by several expressions. I know that p can be completely eliminated. The new expression (in terms of p) is
testEQ = A B p + A^2 B p^2 + (A+B)p^3;
Then I made the substitution for p, and called (normal) FullSimplify, giving me this expression.
testEQ2= (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
Finally, I tried all of the suggestions below, except the last (not sure how it works yet!)
Only the Eliminate option worked. So I guess I'll try this method from now on. Thank you.
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
EQ2 = P1 == ton/(ton + toff);
Eliminate[{EQ1, EQ2}, {ton, toff}]
A B P1 + A^2 B P1^2 + (A + B) P1^3 == a1
I should add, if the goal is to make all substitutions that are possible, leaving the rest, I still don't know how to do that. But it appears that if a substitution can completely eliminate a few variables, Eliminate[] works best.
Have you tried this?
K = a*b*t/((t + f) c*d);
Solve[p == t/(t + f), t]
-> {{t -> -((f p)/(-1 + p))}}
Simplify[K /. %[[1]] ]
-> (a b p)/(c d)
EDIT: Oh, and are you aware of Eliminate?
Eliminate[{eq1, eq2}, {t,f}]
-> a b p == c d K && c != 0 && d != 0
Solve[%, K]
-> {{K -> (a b p)/(c d)}}
EDIT 2: Also, in this simple case, solving for K and t simultaneously seems to do the trick, too:
Solve[{eq1, eq2}, {K, t}]
-> {{K -> (a b p)/(c d), t -> -((f p)/(-1 + p))}}
Something along these lines is discussed in the MathGroup post at
http://forums.wolfram.com/mathgroup/archive/2009/Oct/msg00023.html
(I see it has an apocryphal note that is quite relevant, at least to the author of that post.)
Here is how it might be applied in the example above. For purposes of keeping this self contained I'll repeat the replacement code.
replacementFunction[expr_, rep_, vars_] :=
 Module[{num = Numerator[expr], den = Denominator[expr],
   hed = Head[expr], base, expon},
  If[PolynomialQ[num, vars] &&
     PolynomialQ[den, vars] && ! NumberQ[den],
   replacementFunction[num, rep, vars]/
    replacementFunction[den, rep, vars],
   If[hed === Power && Length[expr] == 2,
    base = replacementFunction[expr[[1]], rep, vars];
    expon = replacementFunction[expr[[2]], rep, vars];
    PolynomialReduce[base^expon, rep, vars][[2]],
    If[Head[hed] === Symbol &&
       MemberQ[Attributes[hed], NumericFunction],
     Map[replacementFunction[#, rep, vars] &, expr],
     PolynomialReduce[expr, rep, vars][[2]]]]]]
Your example is now as follows. We take the input, and also the replacement. For the latter we make an equivalent polynomial by clearing denominators.
kK = a*b*t/((t + f) c*d);
rep = Numerator[Together[p - t/(t + f)]];
Now we can invoke the replacement. We list the variables we are interested in replacing, treating 'p' as a parameter. This way it will get ordered lower than the others, meaning the replacements will try to remove them in favor of 'p'.
In[127]:= replacementFunction[kK, rep, {t, f}]
Out[127]= (a b p)/(c d)
This approach has a bit of magic in figuring out what should be the listed "variables". Possibly some further tweakage could be done to improve on that. But I believe that, generally, simply not listing the things we want to use as new replacements is the right way to go.
Over the years there have been variants of this idea on MathGroup. It is possible that some others may be better suited to the specific expression(s) you wish to handle.
--- edit ---
The idea behind this is to use PolynomialReduce to do algebraic replacement. That is to say, we do not try pattern matching but instead use polynomial "canonicalization" as a method. But in general we're not working with polynomial inputs, so we apply this idea recursively on PolynomialQ arguments inside NumericQ functions.
Earlier versions of this idea, along with some more explanation, can be found at the note referenced below, as well as in notes it references (how's that for explanatory recursion?).
http://forums.wolfram.com/mathgroup/archive/2006/Aug/msg00283.html
--- end edit ---
--- edit 2 ---
As observed in the wild, this approach is not always a simplifier. It does algebraic replacement, which involves, under the hood, a notion of "term ordering" (roughly, "which things get replaced by which others?") and thus simple variables may expand to longer expressions.
Another form of term rewriting is syntactic replacement via pattern matching, and other responses discuss using that approach. It has a different drawback, insofar as the generality of patterns to consider might become overwhelming. For example, what does one do with k^2/(w + p^4)^3 when the rule is to replace k/(w + p^4) with q? (Specifically, how do we recognize this as being equivalent to (k/(w + p^4))^2*1/(w + p^4)?)
The upshot is one needs to have an idea of what is desired and what methods might be feasible. This of course is generally problem specific.
One thing that occurs is perhaps you want to find and replace all commonly occurring "complicated" expressions with simpler ones. This is referred to as common subexpression elimination (CSE). In Mathematica this can be done using a function called Experimental`OptimizeExpression[]. Here are several links to MathGroup posts that discuss this.
http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00138.html
http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00270.html
http://forums.wolfram.com/mathgroup/archive/2006/Sep/msg00300.html
http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00387.html
http://forums.wolfram.com/mathgroup/archive/2002/Jan/msg00369.html
Here is an example from one of those notes.
InputForm[Experimental`OptimizeExpression[(3 + 3*a^2 + Sqrt[5 + 6*a + 5*a^2] +
a*(4 + Sqrt[5 + 6*a + 5*a^2]))/6]]
Out[206]//InputForm=
Experimental`OptimizedExpression[Block[{Compile`$1, Compile`$3, Compile`$4,
Compile`$5, Compile`$6}, Compile`$1 = a^2; Compile`$3 = 6*a;
Compile`$4 = 5*Compile`$1; Compile`$5 = 5 + Compile`$3 + Compile`$4;
Compile`$6 = Sqrt[Compile`$5]; (3 + 3*Compile`$1 + Compile`$6 +
a*(4 + Compile`$6))/6]]
--- end edit 2 ---
Daniel Lichtblau
K = a*b*t/((t+f)c*d);
FullSimplify[ K,
TransformationFunctions -> {(# /. t/(t + f) -> p &), Automatic}]
(a b p) / (c d)
Corrected update to show another method:
EQ1 = a1 == (ton (B ton^2 + A^2 B ton (toff + ton) +
A (ton^2 + B (toff + ton)^2)))/(toff + ton)^3;
f = # /. ton + toff -> ton/p &;
FullSimplify[f # EQ1]
a1 == p (A B + A^2 B p + (A + B) p^2)
I don't know if this is of any value at this point, but hopefully at least it works.
