Solving equations with an associative and commutative operator - solver

Consider a goal like this in Isabelle (and don’t worry about ccProd and ccFromList):
ccProd {x} (set xs) ⊔ (ccProd {x} (set ys) ⊔ (ccFromList xs ⊔ (ccFromList ys ⊔ ccProd (set xs) (set ys)))) =
ccProd {x} (set xs) ⊔ (ccFromList xs ⊔ (ccFromList ys ⊔ (ccProd {x} (set ys) ⊔ ccProd (set xs) (set ys))))
This is true, since ⊔ is associative and commutative. My usual approach to this is to use
apply (metis join_assoc join_comm)
and it works, but already takes a noticeable time to finish.
Similarly, I have a goal like
ccProd {x} (set xs) ⊔ (ccProd {x} (set ys) ⊔ (ccFromList xs ⊔ (ccFromList ys ⊔ ccProd (set xs) (set ys)))) =
ccFromList xs ⊔ (ccProd {x} (set ys) ⊔ (ccFromList ys ⊔ (ccProd (set xs) {x} ⊔ ccProd (set xs) (set ys))))
where I also need to apply the commutativity of ccProd in one instance. Again
apply (metis join_assoc join_comm ccProd_comm)
does the job, but takes even longer.
Are there better ways to solve equations involving a commutative and associative operator?
Maybe a tactic or simproc that, given the theorems join_assoc and join_comm, would solve the first goal and reduce the second goal to
ccProd {x} (set xs) = ccProd (set xs) {x}

Reasoning up to associativity and commutativity is usually done in Isabelle with the simplifier and ordered rewriting. In your example, you provide the simplifier with the associativity rule (oriented from left to right), the commutativity rule, and the left-commutativity rule. The details are explained in the Tutorial on Isabelle/HOL (Section 9.1, Permutative rewrite rules).
Then, the simplifier reorders both sides of the equation into a normal form determined by Isabelle's implicit term order. Hence, you get equal terms on both sides, which are shown to be equal by reflexivity. Unless your operator also satisfies cancellation laws, this approach does not reduce the second example to the differing parts. If you are lucky and the simplifier rotates the differing subterms to the same position, you could use a bunch of introduction rules of the form a = b ==> a ⊔ c = b ⊔ c. However, this is rather fragile: if you rename your variables, the term order can change and break the proof. Since ccProd seems to be commutative as well, just add its commutativity law to the simplifier, too. Then it will normalise these subterms first and solve everything.
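For example, the left-commutativity rule is derivable from the other two, so the setup could look like this (a minimal sketch; join_left_comm is a name made up here, use whatever fits your theory):
lemma join_left_comm: "a ⊔ (b ⊔ c) = b ⊔ (a ⊔ c)"
  by (metis join_assoc join_comm)

apply (simp add: join_assoc join_comm join_left_comm ccProd_comm)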

If you have an instance of ab_semigroup_mult or ab_semigroup_add, then adding ac_simps to the simpset often does the trick.
For example, if I replace your above goal by the following (since I get a syntax error with ⊔):
lemma
  fixes ccProd :: "_ ⇒ _ ⇒ 'a::ab_semigroup_add"
  shows "ccProd {x} (set xs) + (ccProd {x} (set ys) + (ccFromList xs + (ccFromList ys + ccProd (set xs) (set ys)))) =
         ccProd {x} (set xs) + (ccFromList xs + (ccFromList ys + (ccProd {x} (set ys) + ccProd (set xs) (set ys))))"
Then by (simp add: ac_simps) succeeds.
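(In current Isabelle, ac_simps for this class collects add.assoc, add.commute, and add.left_commute, which are exactly the associativity, commutativity, and left-commutativity rules described in the first answer.)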
Update: There's also the corresponding locale abel_semigroup, which is again "registered" with ac_simps. So your second lemma could work along the following lines:
interpretation abel_semigroup ccProd
  sorry
That is, you show that ccProd is AC (in addition to the ab_semigroup_add instance already established above).
lemma
  "ccProd {x} (set xs) + (ccProd {x} (set ys) + (ccFromList xs + (ccFromList ys + ccProd (set xs) (set ys)))) =
   ccFromList xs + (ccProd {x} (set ys) + (ccFromList ys + (ccProd (set xs) {x} + ccProd (set xs) (set ys))))"
by (simp add: ac_simps)

Related

Understanding list comprehension modifier order in Clojure

I'm currently learning Clojure and am stuck with list comprehension.
;; https://stackoverflow.com/a/7625207/4110233
(defn gen-primes "Generates an infinite, lazy sequence of prime numbers"
  []
  (letfn [(reinsert [table x prime]
            (update-in table [(+ prime x)] conj prime))
          (primes-step [table d]
            (if-let [factors (get table d)]
              (recur (reduce #(reinsert %1 d %2) (dissoc table d) factors)
                     (inc d))
              (lazy-seq (cons d (primes-step (assoc table (* d d) (list d))
                                             (inc d))))))]
    (primes-step {} 2)))
(defn prime-factors-not-working [x]
  (for [y (gen-primes)
        :when (= (mod x y) 0)
        :while (< y (Math/sqrt x))]
    y))

(defn prime-factors-working [x]
  (for [y (gen-primes)
        :while (< y (Math/sqrt x))
        :when (= (mod x y) 0)]
    y))
(prime-factors-working 100)
;; ↪ (2 5)
(prime-factors-not-working 100)
;; Goes into infinite loop
(gen-primes) is a lazy sequence of prime numbers. The only difference between the working and not-working versions is the order of the modifiers :while and :when. Why does the not-working implementation go into an infinite loop?
The not-working variant expands conceptually (but not literally) into this:
(loop [ys (gen-primes)
       result []]
  (if (seq ys)
    (let [y (first ys)]
      (if (= (mod x (first ys)) 0) ;; Can be replaced with `(zero? (mod x y))` BTW.
        (if (< y (Math/sqrt x))
          (recur (next ys) (conj result y))
          result)
        (recur (next ys) result)))
    result))
As you can see, if (mod x (first ys)) is not 0, it advances to the next number without ever checking the < bound, and so runs forever.
When you exchange :when and :while, the checks in the pseudo-expansion above are swapped as well, stopping the iteration once y reaches the square root of x.
The macro expansion of for is sensitive to the order in which you put :when and :while; macroexpanding gives slightly different code.
You can go very far in Clojure without relying on complicated macros beyond defn; for is not very common, and this isn't a use case where it is clearly advantageous over a map->filter->take pipeline (see the sketch below).
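For instance, a version with no for at all (a sketch using only clojure.core functions; note it mirrors the working ordering: bound the candidates first, then test divisibility):
(defn prime-factors [x]
  (->> (gen-primes)
       (take-while #(< % (Math/sqrt x)))   ; stop at the bound first
       (filter #(zero? (mod x %)))))       ; then keep the divisors

(prime-factors 100)
;; ↪ (2 5)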
Good expansion (line 28):
(when (< y (Math/sqrt x))   ; XXX
  (if (= (mod x y) 0)       ; XXX
    (do
      (chunk-append b__86695 y)
      (recur (unchecked-inc i__86694)))
    (recur (unchecked-inc i__86694))))
Bad expansion (line 28):
(if (= (mod x y) 0)           ; XXX
  (when (< y (Math/sqrt x))   ; XXX
    (do
      (chunk-append b__86666 y)
      (recur (unchecked-inc i__86665))))
  (recur (unchecked-inc i__86665)))
You can learn about the implementation of the for macro by going to its source code (your editor should have a way of navigating to the definitions of symbols):
https://github.com/clojure/clojure/blob/master/src/clj/clojure/core.clj#L4654
There is nothing about Clojure that requires its macros to write code the way you think they should; in this case it may be a bug in for, though it's hard to tell. Some use cases may depend on :when and :while taking effect in the order they are written.
Since this is about learning, and macros are pretty much magic unless you see the code they write out, I think the best way to learn is to figure out how to view the expanded macro forms in your code. This is generally how macros are debugged.
Topic on macro expansion: How is a macro expanded in Clojure?
Documentation for expanding macros in CIDER (the Emacs Clojure editor):
https://docs.cider.mx/cider/debugging/macroexpansion.html
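To see the difference yourself at the REPL (macroexpand-1 is in clojure.core; the output is long, but the :when/:while tests appear in opposite nesting order in the two expansions):
(macroexpand-1
  '(for [y (gen-primes)
         :when (zero? (mod x y))
         :while (< y (Math/sqrt x))]
     y))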

The Reasoned Schemer: Not understanding Exercise 57

On Exercise (or entry?) 57, I'm just not understanding how the logic flows. The question is this: given
(define teacupo
  (lambda (x)
    (conde
      ((= tea x) #s)
      ((= cup x) #s)
      (else #u))))
where '=' is actually the triple-bar unify (?) operator. Running the following:
(run* (r)
  (fresh (x y)
    (conde
      ((teacupo x) (= #t y) #s)
      ((= #f x) (= #t y))
      (else #u))
    (= (cons x (cons y ())) r)))
the book gives the answer:
((tea #t) (cup #t) (#f #t))
I would have thought that the answer would have been:
(((tea cup) #t) (#f #t))
My reasoning being that the 'x' in (teacupo x) should have its conde go through all of its solutions first, and unify to the list of all of its solutions. But it appears that teacupo only gives up one of its solutions at a time. It confuses me because my interpretation of conde is that, using the rule it gives, you should go through the lines of conde, and after a line succeeds, pretend that it failed, refresh the variables, and find the next line that succeeds. Given the way the solution works, it seems like the conde in that code goes back to a successful line, forcing the teacupo conde to fail and give up the next possible value. Again, I would have thought that teacupo would give up all of its conde solutions in a list, and then move on in the outer conde call. Can anyone provide me guidance as to why it works as in the book and not in the way I reasoned?
"My reasoning being that the 'x' in (teacupo x) should have its conde go through all of its solutions first, and unify to the list of all of its solutions."
The variable x is unified to one value at a time.
The form (run* (x) <goals>) collects values for x which fulfill the goals.
> (run* (x)
    (teacupo x))
'(tea cup)
In
(conde
  ((teacupo x) (== #t y))
  ((== #f x) (== #t y)))
there are two ways to succeed: either the goal (teacupo x) is met and x is one of tea or cup, or the goal (== #f x) is met and x is (unified to) #f.
In short, run* runs through the possible values for x one at a time, collecting those that meet all goals. This means that x is unified to one value at a time.
A simpler example:
> (run* (x)
    (fresh (y)
      (== y 10)
      (conde
        [(== x 1) (== y 10)]
        [(== x 2) (== y 20)]
        [(== x 3) (== y 10)])))
'(1 3)
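The middle clause produces nothing because y is already unified to 10, so (== y 20) fails; only 1 and 3 survive.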
Full code for those who want to try the snippets in DrRacket:
#lang racket
(require minikanren)

(define-syntax succeed (syntax-id-rules () [_ (fresh (t) (== t t))]))

(run* (x)
  (fresh (y)
    (== y 10)
    (conde
      [(== x 1) (== y 10)]
      [(== x 2) (== y 20)]
      [(== x 3) (== y 10)])))

(define teacupo
  (lambda (x)
    (fresh (result)
      (conde
        ((== 'tea x) succeed)
        ((== 'cup x) succeed)))))

(run* (x)
  (teacupo x))

(run* (r)
  (fresh (x y)
    (conde
      ((teacupo x) (== #t y))
      ((== #f x) (== #t y)))
    (== (cons x (cons y '())) r)))
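Running this file should print '(1 3), '(tea cup), and the three pairs (tea #t), (cup #t), (#f #t), matching the book's answer (possibly in a different order, depending on the implementation's search strategy).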
(teacupo x) means "succeed twice: once with x unified with tea, and the second time with x unified with cup". Then,
(conde
  ((teacupo x) (= #t y) #s)
  ((= #f x) (= #t y)) ; you had a typo here
  (else #u))
means,
for each solution produced by (teacupo x), also unify y with #t and succeed; and also
for each solution produced by (= #f x), also unify y with #t and succeed; and also
produce no more solutions
So each x in (tea cup) is paired up with y in (#t), and also x in (#f) is paired up with y in (#t), to form r; and then r is reported, i.e. collected into the final result list of solutions, giving ((tea #t) (cup #t) (#f #t)).
"it appears that teacupo only gives up one of its solutions at a time."
yes, this is exactly right, conceptually.
"after a line succeeds, pretend that it failed, refresh the variables and find the next line that succeeds."
yes, but each line can succeed a multiple number of times, if the conditional (or a subsequent goal) succeeds a multiple number of times.
"it seems like the conde in that code goes back to a successful line, then forcing the teacupo conde to fail and give up the next possible value."
it actually prepares to produce them in advance (but as a stream, not as a list), and then each is processed separately, fed through the whole chain of subsequent steps until either the last step is reached successfully, or the chain is broken, cut short by an #u or by an otherwise failed goal. So the next one is tried when the processing of a previous one has finished.
In pseudocode:
for each x in (tea cup):
    for each y in (#t):   ; does _not_ introduce separate scope for `y`;
        collect (x y)     ; `x` and `y` belong to the same logical scope
for each x in (#f):       ; so if `x` is used in the nested `for` too,
    for each y in (#t):   ; its new value must be compatible with the
        collect (x y)     ; one known in the outer `for`, or else
for each _ in ():         ; it will be rejected (x can't be two different
    collect (x y)         ; things at the same time)
As to why it works this way, I can point you to another answer of mine, which might be of help (though it doesn't use Scheme syntax).
Using it as an illustration, we can write your test as Haskell code which is, I think, actually functionally equivalent to the book's code (in this specific case, of course):
data Val = Fresh | B Bool | S String | L [Val] deriving Show
type Store = [(String,Val)]

teacupo x = unify x (S "tea") &&: true                    -- ((= tea x ) #s)
        ||: unify x (S "cup") &&: true                    -- ((= cup x ) #s)
        ||: false                                         -- (else #u)

run = [[("r", Fresh)]]                                    -- (run* (r) ......
  >>: (\s -> [ s ++: [("x", Fresh), ("y", Fresh)] ])      -- (fresh (x,y)
  >>:                                                     -- (conde
      (     teacupo "x" &&: unify "y" (B True) &&: true   --   ((teacupo x) (= #t y) #s)
        ||: unify "x" (B False) &&: unify "y" (B True)    --   ((= #f x) (= #t y))
        ||: false                                         --   (else #u)
      )
      &&: project ["x", "y"] (unify "r" . L)              -- (= r (list x y))
  >>: reporting ["r"]                                     -- ...... )

reporting names store = [[a | a@(n,_) <- store, elem n names]]
with a bare-minimum implementation, just enough to make the above code work:
project vars kont store =
  kont [val | var <- vars, (Just val) <- [lookup var store]] store

unify :: String -> Val -> Store -> [Store]
unify sym val store =
  let
    (Just v) = lookup sym store
  in
    case val_unify v val of
      Just newval -> [replace_val sym newval store]   -- [updated store], if it unifies
      Nothing     -> []                               -- couldn't unify - reject it

val_unify v Fresh = Just v                            -- barely working,
val_unify Fresh v = Just v                            -- initial
val_unify (B v) (B u) | v == u    = Just (B v)        -- implementation
                      | otherwise = Nothing
val_unify (S v) (S u) | v == u    = Just (S v)
                      | otherwise = Nothing
val_unify _ _ = Nothing

replace_val s n ((a,b):c) | s == a    = (a,n) : c
                          | otherwise = (a,b) : replace_val s n c
producing the output
*Main> run
[[("r", L [S "tea",B True])], [("r", L [S "cup",B True])], [("r", L [B False,B True])]]
And if we change the second line in the translated conde expression to
||: unify "x" (B False) &&: unify "x" (B True) -- ((= #f x) (= #t x))
we indeed get only two results,
*Main> run
[[("r", L [S "tea",B True])], [("r", L [S "cup",B True])]]

Recursive definition of a PROPOSITION

(define (app list1 list2)
  (if (empty? list1) list2
      (cons (car list1) (app (cdr list1) list2))))

(app (list "↔" "→" "∧" "⊕" "∨" "¬") (list "P" "Q" "R" "S" "U" "X" "Y" "Z"))

(define L (list "↔" "→" "∧" "⊕" "∨" "¬"))

(define (f L n)
  (if (= n 0) "p"
      (string-append "p" (car L) (f (cdr L) (- n 1)))))

(f L 3)
You have the following recursive definition of a PROPOSITION:

T, F are propositions (truth values of propositional variables).
Propositional letters P, Q, R, S, U, X, Y, Z are propositions.
If A is a proposition, then ¬A is a proposition.
If A and B are propositions, then A⊕B, A→B, A∧B, A∨B, A↔B are propositions.

Write a DrRacket procedure that will randomly generate a proposition with a given number of operations.
I could not complete the function. Can you help me please?
Since this is a homework question, I'll show you the technique using a different example. The following grammar has two non-terminals, S and T. For each non-terminal I have defined a function that generates a random string according to the rules. Since S has four rules, I pick one of them at random.
#lang racket
;;; Grammar
; The grammar has two non-terminals S and T.
; There are four rules for S and one for T.
; S -> aSa
; S -> bSb
; S -> cT
; S -> ε
; T -> dS
(define (S)
  (case (random 4) ; random number 0,1,2,3 (four rules for S)
    [(0) (string-append "a" (S) "a")]
    [(1) (string-append "b" (S) "b")]
    [(2) (string-append "c" (T))]
    [(3) ""]))

(define (T)
  ; only one rule, so no need for random here
  (string-append "d" (S)))
; generate a random string according to the grammar
(S)
Some example outputs:
"bb"
"bbcdbcdbbb"
"cdbb"

Sort a list in Haskell according to a predicate

I am trying to learn Haskell and I've run into issues trying to complete an example problem. The problem is to sort a list in Haskell according to a given predicate, i.e. the type is
sort :: (a -> a -> Bool) -> [a] -> [a]
The code I have so far is:
sort _ [] = []
sort f (x:xs) =
  let
    smaller = sort f (filter (f x) xs)
    bigger  = sort f (filter (f x) xs) -- error is on this line
  in smaller ++ [x] ++ bigger
The code is not working correctly, in the sense that I'm not sure how to take the opposite of the predicate. For example, if it were an ordinary sort function I would use smaller = quicksort (filter (<= x) xs) and bigger = quicksort (filter (> x) xs); this would break up the list according to that predicate. But how do I do this with a higher-order predicate?
You just need to use the not function to invert your boolean:
not :: Bool -> Bool
f :: a -> a -> Bool
f x :: a -> Bool
not . f x :: a -> Bool
And you'd use it as:
sort _ [] = []
sort f (x:xs) =
  let smaller = sort f (filter (f x) xs)
      bigger  = sort f (filter (not . f x) xs)
  in smaller ++ [x] ++ bigger
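Note which way the predicate runs: filter (f x) keeps the elements y for which f x y holds in the smaller half, so passing (>=) sorts ascending while (<=) sorts descending. A quick check, assuming the definition above:
sort (>=) [3,1,2]  -- [1,2,3]
sort (<=) [3,1,2]  -- [3,2,1]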

Can you formulate the insertion sort as a monoid in Clojure?

This is the code for an insertion sort in Clojure:
(defn in-sort! [data]
  (letfn [(insert
            ([raw x] (insert [] raw x))
            ([sorted [y & raw] x]
             (if (nil? y) (conj sorted x)
                 (if (<= x y) (concat sorted [x y] raw)
                     (recur (conj sorted y) raw x)))))]
    (reduce insert [] data)))

;; Usage:   (in-sort! [6 8 5 9 3 2 1 4 7])
;; Returns: [1 2 3 4 5 6 7 8 9]
This is the insertion sort formulated as a monoid in Haskell:
newtype OL x = OL [x]

instance Ord x => Monoid (OL x) where
  mempty = OL []
  mappend (OL xs) (OL ys) = OL (merge xs ys) where
    merge [] ys = ys
    merge xs [] = xs
    merge xs@(x : xs') ys@(y : ys')
      | x <= y    = x : merge xs' ys
      | otherwise = y : merge xs ys'

isort :: Ord x => [x] -> OL x
isort = foldMap (OL . pure)
This is how to write a monoid in Clojure:
(def mempty (+)) ;; 0
(def mappend +)
(defn mconcat [ms]
  (reduce mappend mempty ms))
(mappend 3 4) ;; 7
(mconcat [2 3 4]) ;; 9
My question is: Can you formulate the insertion sort as a monoid in Clojure?
Here is my attempt; it might not be the best one though :)
It's quite a direct translation of the Haskell monoid. Since we don't have auto-currying in Clojure, I needed to make a special comp-2 function.
(defn comp-2 [f g]
  (fn [x y] (f (g x) (g y))))

(defn pure-list [x]
  (cond
    (sequential? x) (if (empty? x) '() (seq x))
    :else (list x)))

(def OL-mempty (list))

(defn OL-mappend [xs ys]
  (letfn [(merge [xs ys]
            (cond
              (empty? xs) ys
              (empty? ys) xs
              :else (let [[x & xs'] xs
                          [y & ys'] ys]
                      (if (<= x y)
                        (cons x (lazy-seq (merge xs' ys)))
                        (cons y (lazy-seq (merge xs ys')))))))]
    (doall (merge xs ys))))

(defn foldmap [mempty mappend l]
  (reduce mappend mempty l))

(def i-sort (partial foldmap OL-mempty (comp-2 OL-mappend pure-list)))

(i-sort (list 5 3 4 1 2 6)) ;; (1 2 3 4 5 6)
Here is a link to a very nice paper about morphisms in sorts.
Compatibility with reducers
If we want to go with a Reducers-style monoid, we can embed "mempty" in our "mappend" as a zero-arity branch. Once we do that, our monoid fits right into the Reducers library:
(require '[clojure.core.reducers :as re])

(defn pure-list [x]
  (cond
    (sequential? x) (if (empty? x) '() (seq x))
    :else (list x)))

(defn sort-monoid
  ([] '())  ;; mempty
  ([xs ys]  ;; mappend
   (letfn [(merge [xs ys]
             (cond
               (empty? xs) ys
               (empty? ys) xs
               :else (let [[x & xs'] xs
                           [y & ys'] ys]
                       (if (<= x y)
                         (cons x (lazy-seq (merge xs' ys)))
                         (cons y (lazy-seq (merge xs ys')))))))]
     (doall (merge (pure-list xs) (pure-list ys))))))

(re/reduce sort-monoid (list 2 4 1 2 5))
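This should yield (1 2 2 4 5). When re/reduce is called without an explicit initial value, it uses (sort-monoid), i.e. the zero-arity "mempty" branch, as the seed.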
Here, for reference, is another version which turns the tail recursion modulo cons into tail recursion with an accumulator. For the sake of variety, here is also one way to partially simulate the absent type-classes.
(defprotocol Monoid
  (mempty [_])
  (mappend [_ xs ys]))

(defn fold-map
  [monoid f xs]
  (reduce (partial mappend monoid) (mempty monoid) (map f xs)))

(defn- ord-mappend*
  [[x & rx :as xs] [y & ry :as ys] a]
  (cond
    (empty? xs) (concat a ys)
    (empty? ys) (concat a xs)
    :else (if (< x y)
            (recur rx ys (conj a x))
            (recur xs ry (conj a y)))))

(def Ord
  (reify Monoid
    (mempty [_] (list))
    (mappend [_ xs ys] (ord-mappend* xs ys []))))

(defn isort [xs] (fold-map Ord list xs))

(defn is-sorted? [xs] (apply < xs))

(is-sorted? (isort (shuffle (range 10000))))
;=> true (sometime later)
