Expression Tree of logical expression - data-structures

I need to transform this expression: (a < b) ∧ (b < c) ∨ (c < d)
into an expression tree, but I can't figure out a way that looks correct. Here is what I got:

I'm predicting this will be a long answer, because I will go through the general thinking process for reaching a solution. For the impatient, the solution is at the end.
Is your solution correct?
Well, it depends. Your picture would be totally correct if ∨ took precedence over ∧, in which case you would need to apply ∧ only after (b < c) ∨ (c < d) has a result. It would also be correct if you forced precedence using parentheses, like so:
(a < b) ∧ ( (b < c) ∨ (c < d) )
That said, when talking about operator precedence, ∧/and typically has precedence over ∨/or. When precedence is the same, evaluation happens from left to right, meaning the right side depends on the result of the left side.
The higher the precedence of an operator, the lower it appears in the tree.
The rest of the answer will assume the usual operator precedence.
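As a quick illustration, Python's `and`/`or` follow the same convention, so we can let its parser confirm the grouping (this snippet is my own sketch, not part of the question):

```python
import ast

# Parse the unparenthesized expression. Since `and` binds tighter than
# `or`, the root of the parse tree must be the Or node, with the And
# expression nested inside it as the left operand.
tree = ast.dump(ast.parse("a < b and b < c or c < d", mode="eval"))
assert tree.index("Or") < tree.index("And")  # Or is the root, And is inside it
```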
How to approach this problem?
The best approach to this sort of problem is to decompose the expression. Even better, if we decompose using prefix/Polish notation, it will be more natural to build up the tree later.
Given: (a < b) ∧ (b < c) ∨ (c < d)
Let's decompose it into parts:
x = (a < b), which translates to prefix: < a b
y = (b < c), which translates to prefix: < b c
z = (c < d), which translates to prefix: < c d
We now have 3 inequality expressions decomposed as x, y and z.
Now to the logical operators.
i = x ∧ y, which translates to prefix: ∧ x y
j = i ∨ z, which translates to prefix: ∨ i z
We now have 2 logical expressions decomposed as i and j. Note how they depend on x, y, z. But also, that j depends on i. Dependencies are important because you know tree leaves have no dependencies.
How to build the tree?
To sum up, this is what we decomposed from the original expression:
x = < a b
y = < b c
z = < c d
i = ∧ x y
j = ∨ i z
Let's approach it bottom-up.
Considering the dependencies, the leaves are obviously the most independent elements: a, b, c and d.
Let's build the bottom of the tree considering all appearances of these independent elements in the decomposition we just made (b and c appear twice, we put them twice).
a b b c c d
Now let's build x, y and z, which depend only on a, b, c and d. I'll be using / and \ to build my ASCII art, equivalent to the lines in your picture.
  x       y       z
  <       <       <
 / \     / \     / \
a   b   b   c   c   d
Now as we've seen, i depends only on x and y. So we can put it there now. We can't add j just yet, because we need i to be there first.
      i
      ∧
     / \
    x   y      z
    <   <      <
   / \ / \    / \
  a  b b  c  c   d
Now we are just missing j, which depends on i and z.
        j
        ∨
       / \
      /   \
     i     \
     ∧      \
    / \      \
   x   y      z
   <   <      <
  / \ / \    / \
 a  b b  c  c   d
And we have a full expression tree. As you see, each dependency level will result in a tree level.
To be completely accurate, a breadth-first search in this tree would have to consider that z is at the same level as i, so the correct representation of the tree puts z one level higher:
        j
        ∨
       / \
      /   \
     i     z
     ∧     <
    / \   / \
   x   y c   d
   <   <
  / \ / \
 a  b b  c
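To make the bottom-up construction concrete, here is a small Python sketch of mine (the `Node` class and its `prefix` method are my own names, not from the question) that builds this exact tree from the decomposition above and prints it in prefix notation:

```python
class Node:
    """A node of the expression tree: an operator with two children, or a leaf."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

    def prefix(self):
        """Prefix (Polish) notation of the subtree rooted at this node."""
        if self.left is None and self.right is None:
            return self.value
        return f"{self.value} {self.left.prefix()} {self.right.prefix()}"

# Leaves are the independent elements a, b, c, d (b and c appear twice).
x = Node("<", Node("a"), Node("b"))   # x = < a b
y = Node("<", Node("b"), Node("c"))   # y = < b c
z = Node("<", Node("c"), Node("d"))   # z = < c d
i = Node("∧", x, y)                   # i = ∧ x y
j = Node("∨", i, z)                   # j = ∨ i z  (the root)

print(j.prefix())  # ∨ ∧ < a b < b c < c d
```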
One more note, to make it completely clear: for the expression (a < b) ∧ ( (b < c) ∨ (c < d) ), the decomposition result would be:
x = < a b
y = < b c
z = < c d
j = ∨ y z
i = ∧ x j
Which would, in turn, result in the tree in your picture.
Hope this helps your future endeavours in building expression trees.

Related

Apply a lemma to a conjunction branch without splitting in coq

I have a conjunction, let's abstract it as A /\ B, and I have a proven lemma C -> A, and I wish to get the goal C /\ B as a result. Is this possible?
If yes, I'd be interested in how to do it. If I use split and then apply the lemma to the first subgoal, I can't reassemble the two resulting subgoals C and B into C /\ B - or can I? Also, apply does not seem to be applicable to only one branch of a conjunction.
If no, please explain to me why this is not possible :-)
You could introduce a lemma like:
Theorem cut: forall (A B C: Prop), C /\ B -> (C -> A) -> A /\ B.
Proof.
intros; destruct H; split; try apply H0; assumption.
Qed.
And then define a tactic like:
Ltac apply_left lemma := eapply cut; [ | apply lemma].
As an example, you could do stuff like:
Theorem test: forall (m n:nat), n <= m -> max n m = m /\ min n m = n.
Proof.
intros.
apply_left max_r.
...
Qed.
In this case, the goal goes from:
Nat.max n m = m /\ Nat.min n m = n
to
n <= m /\ Nat.min n m = n
I assume that's what you are looking for.
Hope this will help you!

Let Σ = {a, b}. How can I define a PDA in JFLAP which recognizes the following?

L = {a^n b^k | 2n >= k}
For example: abb is an element of L, aabbb is an element of L, and ε is an element of L; but babbb is not an element of L, and abbb is not an element of L.
The shortest string in L is the empty string, e. Given a string s in the language, the following rules hold:
as is in L
asb is in L
asbb is in L
We can combine these observations to get a context-free grammar:
S := aSbb | aSb | aS | e
By our observations, every string generated by this grammar must be in L. To show that this is a grammar for L, we must show that any string in L can be generated. To get a string a^n b^k we can do the following:
use rule #1 above x times
use rule #2 above y times
use rule #3 above z times
ensure x + y + z = n
ensure y + 2z = k
Setting y = k - 2z and substituting, we find x + k - 2z + z = n, i.e. x + k - z = n. Rearranging:
if k > n, then z and x can be chosen however desired so long as k - n = z - x.
if k < n, then z and x can be chosen however desired so long as n - k = x - z.
If k = n, observe we might as well just choose y = n.
Note that we can always choose z and x in our above example since 0 <= x, z <= n and 0 <= k <= 2n.
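As a sanity check of the argument above, here is a Python sketch of mine (the helper names `in_language` and `generated` are my own) that compares the strings derivable from S := aSbb | aSb | aS | e against the direct condition a^n b^k with 2n >= k, for all short strings:

```python
from itertools import product

def in_language(s):
    """Direct test: s = a^n b^k over {a, b} with 2n >= k."""
    n = len(s) - len(s.lstrip("a"))          # number of leading a's
    k = len(s) - n                           # the rest must be b's
    return set(s[n:]) <= {"b"} and 2 * n >= k

def generated(max_len):
    """All strings of length <= max_len derivable from S := aSbb | aSb | aS | e."""
    out, frontier = set(), {""}              # e gives the empty string
    while frontier:
        s = frontier.pop()
        out.add(s)
        for new in ("a" + s + "bb", "a" + s + "b", "a" + s):
            if len(new) <= max_len and new not in out:
                frontier.add(new)
    return out

# Compare against brute force over every string of length <= 9.
limit = 9
all_strings = ("".join(p) for L in range(limit + 1)
               for p in product("ab", repeat=L))
assert {s for s in all_strings if in_language(s)} == generated(limit)
```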

All maximal independent sets of a matroid have the same cardinality

How can one prove that all maximal independent sets of a matroid have the same cardinality?
Provided a matroid is a 2-tuple (M, J), where M is a finite set and J is a family of some of the subsets of M satisfying the following properties:
1. If A is a subset of B and B belongs to J, then A belongs to J.
2. If A and B belong to J, |A| <= |B|, and x belongs to A - B, then there exists y belonging to B - A such that (B ∪ {x}) - {y} belongs to J.
The members of J are called independent sets.
Assume to the contrary that A and B are both maximal independent sets with |A| < |B|; we will show that A cannot actually be maximal.
Picture a Venn diagram of A (the orange circle) and B (the blue circle).
Clearly B \ A (the only-blue part) is nonempty, as the cardinality of B is larger than that of A. Also, clearly A \ B (the only-orange part) is nonempty, as otherwise A ⊂ B, and, by definition, A is not maximally independent.
Hence, by the exchange property, there are some x ∈ A \ B and y ∈ B \ A such that (B ∪ {x}) \ {y} ∈ J as well. Let's call this set C. Note that if we drew the Venn diagram for A and C (now the blue circle is C):
|B| = |C| (the blue circle has the same size)
|(A \ {x}) \ C| < |A \ B| (the only-orange part is smaller than before)
Now we can repeat the argument about A and C, and so on. Note, however, that we can't repeat it indefinitely, as A is assumed to be finite. Hence, at some point we will reach the contradiction that the orange set is completely contained in the blue set, which we already saw before is impossible (that would mean, by definition, that it is not maximally independent).
We will prove this by contradiction.
Assume that the maximal independent sets of a matroid do not all have the same cardinality. Then there must be some sets A and B that are both maximal independent sets where, without loss of generality, |A| < |B|, i.e. the cardinality of A is less than the cardinality of B.
Let |A| = P and |B| = Q, with P < Q. Now let X ∈ A − B and Y ∈ B − A. X and Y always exist, since A is maximal and different from B. Using the second property of a matroid we can form B1 = (B ∪ {X}) − {Y}, which is also an independent set with |B1| = Q. We can continue picking an element X' ∈ A − Bi and an element Y' ∈ Bi − A, inserting X' and removing Y' to make a new independent set of cardinality Q, until there is no element left in A − Bi.
Since A − Bi = ∅, we have A ⊆ Bi. But Bi is also an independent set with cardinality Q. Now we can say that A is not maximal, which is a contradiction, and thus our assumption was wrong. Thus |A| = |B|, which implies there can be no two maximal independent sets with different cardinalities.
So all maximal independent sets of a matroid have the same cardinality.
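To see the theorem in action, here is a small Python sketch of mine (my own illustration, not part of either proof) that enumerates the maximal independent sets of a toy matroid, the graphic matroid of a small graph, and checks that they all have the same size:

```python
from itertools import combinations

# Graphic matroid on a small 4-vertex graph: the ground set is the edge
# set, and a subset of edges is independent iff it contains no cycle.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def is_forest(subset):
    """True iff the edge subset is acyclic, via a tiny union-find."""
    parent = list(range(4))  # one entry per vertex of the 4-vertex graph
    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding (u, v) would close a cycle
        parent[ru] = rv
    return True

independent = [frozenset(s) for r in range(len(edges) + 1)
               for s in combinations(edges, r) if is_forest(s)]
# A maximal independent set is one not strictly contained in another.
maximal = [a for a in independent
           if not any(a < b for b in independent)]
sizes = {len(m) for m in maximal}
print(sizes)  # prints {3}: every maximal forest here is a spanning tree
```

The graph is connected, so the maximal independent sets are exactly its spanning trees, and all of them have 4 − 1 = 3 edges, as the theorem predicts.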

Write a loop invariant for partial correctness of Hoare Triple

I am new to the world of logic. I am learning Hoare logic and partial & total correctness of programs. I tried a lot to solve the question below but failed.
Write a loop invariant P to show partial correctness for the Hoare triple
{x = ¬x ∧ y = ¬y ∧ x >= 0} mult {z = ¬x * ¬y}
where mult is the following program that computes the product of x and y and stores it in z:
mult:
  z := 0;
  while (y > 0) do
    z := z + x;
    y := y - 1;
One of my friends gave me the following answer, but I don't know whether it is correct or not.
Invariant P is (¬x * ¬y = z + ¬x * y) ∧ x = ¬x. Intuitively, z holds the part of the result that is already computed, and (¬x * y) is what remains to compute.
Please teach me step by step of how to solve this question.
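One way to build confidence in the proposed invariant before attempting the formal proof is to test it numerically. The Python sketch below is my own check (with x0 and y0 standing for the logical constants written ¬x and ¬y in the question): it asserts the invariant on loop entry and after every iteration, and checks that the invariant plus the negated guard yields the postcondition.

```python
def mult_checked(x0, y0):
    """The program mult, instrumented with the proposed invariant:
    x0 * y0 == z + x0 * y  and  x == x0."""
    assert x0 >= 0                     # precondition: x = x0 >= 0
    x, y, z = x0, y0, 0                # z := 0 (x = x0, y = y0 initially)
    assert x0 * y0 == z + x0 * y and x == x0   # invariant holds on entry
    while y > 0:
        z = z + x
        y = y - 1
        assert x0 * y0 == z + x0 * y and x == x0  # preserved by the body
    # On exit y <= 0 (in fact y == 0 when y0 >= 0), and the invariant
    # then gives the postcondition z == x0 * y0.
    assert z == x0 * y0 or y0 < 0
    return z

for a in range(5):
    for b in range(5):
        assert mult_checked(a, b) == a * b
```

This is only evidence, not a proof, but it matches the intuition stated in the answer: z holds the part of the result already computed, and x0 * y is what remains.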

Isabelle matrix arithmetic: det_linear_row_setsum in library with different notation

I recently started using the Isabelle theorem prover. As I want to prove another lemma, I would like to use a different notation than the one used in the lemma "det_linear_row_setsum", which can be found in the HOL library. More specifically, I would like to use the "χ i j notation" instead of "χ i". I have been trying to formulate an equivalent expression for some time, but haven't figured it out yet.
(* ORIGINAL lemma from library *)
(* from HOL/Multivariate_Analysis/Determinants.thy *)
lemma det_linear_row_setsum:
assumes fS: "finite S"
shows "det ((χ i. if i = k then setsum (a i) S else c i)::'a::comm_ring_1^'n^'n) = setsum (λj. det ((χ i. if i = k then a i j else c i)::'a^'n^'n)) S"
proof(induct rule: finite_induct[OF fS])
case 1 thus ?case apply simp unfolding setsum_empty det_row_0[of k] ..
next
case (2 x F)
then show ?case by (simp add: det_row_add cong del: if_weak_cong)
qed
(* My approach to rewrite the above lemma in χ i j matrix notation *)
lemma mydet_linear_row_setsum:
assumes fS: "finite S"
fixes A :: "'a::comm_ring_1^'n^'n" and k :: "'n" and vec1 :: "'vec1 ⇒ ('a, 'n) vec"
shows "det ( χ r c . if r = k then (setsum (λj .vec1 j $ c) S) else A $ r $ c ) =
(setsum (λj . (det( χ r c . if r = k then vec1 j $ c else A $ r $ c ))) S)"
proof-
show ?thesis sorry
qed
First, make yourself clear what the original lemma says: a is a family of vectors indexed by i and j, c is a family of vectors indexed by i. The k-th row of the matrix on the left is the sum of the vectors a k j ranged over all j from the set S.
The other rows are taken from c. On the right, the matrices are the same except that row k is now a k j and the j is bound in the outer sum.
As you have realised, the family of vectors a is only used for the index i = k, so you can replace a by %_ j. vec1 $ j. Your matrix A yields the family of rows, i.e., c becomes %r. A $ r. Then, you merely have to exploit that (χ n. x $ n) = x (theorem vec_nth_inverse) and push the $ through the if and setsum. The result looks as follows:
lemma mydet_linear_row_setsum:
assumes fS: "finite S"
fixes A :: "'a::comm_ring_1^'n^'n" and k :: "'n" and vec1 :: "'vec1 => 'a^'n"
shows "det (χ r c . if r = k then setsum (%j. vec1 j $ c) S else A $ r $ c) =
(setsum (%j. (det(χ r c . if r = k then vec1 j $ c else A $ r $ c))) S)"
To prove this, you just have to undo the expansion and the pushing through, the lemmas if_distrib, cond_application_beta, and setsum_component might help you in doing so.
