I am new to the Z3 solver and SMT-LIB2. I want to obtain expressions for each variable in the constraints. Assume I have this program:
(declare-const x Int)
(declare-const y Int)
(declare-const z Int)
(assert (= x (+ y 1)))
(assert (= z (+ x 10)))
(check-sat)
(get-value (z))
Using get-value, I can obtain a value for z that satisfies all the constraints. But how can I get the expression for z, something like z = y + 11?
I find that using simplify I can simplify the constraints, but is there any way to get an expression for each variable in the constraints?
Z3 is first and foremost an SMT solver; it solves existential problems, i.e., it will only show that there exists a solution, it will not compute a closed form for all solutions.
That said, there are some ways in which at least some results of that form can be obtained, for instance via simplification as mentioned, or perhaps via quantifier elimination if the logic permits it (see for instance Equivalent Quantifier Free Formulas, or Quantifier Elimination - More questions).
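For this particular example you can get something close by substituting the definitions by hand and asking Z3 to simplify the result. Note that simplify is a Z3 extension, not part of the SMT-LIB2 standard, and the exact printed form may vary:
(declare-const y Int)
; z = x + 10 and x = y + 1, so substitute by hand and simplify:
(simplify (+ (+ y 1) 10))
; prints something like (+ 11 y)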
If more than one but perhaps not all models are required, look into Z3: finding all satisfying models and search for other questions with similar titles.
In Racket (and other Schemes, from what I can tell), the only way I know of to check whether two things are not equal is to explicitly apply not to the test:
(not (= num1 num2))
(not (equal? string1 string2))
It's obviously (not (that-big-of-deal?)), but it's such a common construction that I feel like I must be overlooking a reason why it's not built in.
One possible reason, I suppose, is that you can frequently get rid of the not by using unless instead of when, or by switching the order of the true/false branches in an if statement. But sometimes that just doesn't mimic the reasoning that you're trying to convey.
Also, I know the negated functions are easy to define, but so is <=, for example, and that is built in.
What are the design decisions for not having things like not-equal?, not-eqv?, not-eq? and != in the standard library?
First, you are correct that it is (not (that-big-of-a-deal?))1
The reason Racket doesn't include it out of the box is likely just because it adds a lot of extra primitives without much benefit. I will admit that a lot of languages do have != for not equal, but even in Java, if you want to do a deep equality check using equals() (analogous to equal? in Racket), you have to manually invert the result with a ! yourself.
Having both <= and > (as well as >= and <) was almost certainly just convenient enough to cause the original designers of the language to include it.
So no, there isn't any deep reason why there is no shortcut like a not-eq? function built into Racket. It just adds more primitives without adding much benefit, especially as you still need not to exist on its own anyway.
1 I love that pun by the way. Have some imaginary internet points.
I do miss having a not= procedure (or ≠, as mentioned in @soegaard's comment), but not for the reasons you think.
All the numeric comparison operators are variadic. For example, (< a b c d) is the same as (and (< a b) (< b c) (< c d)). In the case of =, it checks whether all arguments are numerically equal. But there is no procedure to check whether all arguments are all unequal—and that is a different question from whether not all arguments are equal (which is what (not (= a b c d)) checks).
Yes, you can simulate that procedure using a fold. But still, meh.
Edit: Actually, I just answered my own question in this regard: the reason for the lack of a variadic ≠ procedure is that you can't just implement it using n-1 pairwise comparisons, unlike all the other numeric comparison operators. The straightforward approach of doing n-1 pairwise comparisons would mean that (≠ 1 2 1 2) would return true, and that's not really helpful.
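For what it's worth, such a variadic check can be written by hand (a sketch; the name all-distinct? is made up), comparing each argument against all the remaining ones rather than only its neighbour:
(define (all-distinct? . xs)
  (cond ((null? xs) #t)
        ((ormap (λ (y) (= (car xs) y)) (cdr xs)) #f)
        (else (apply all-distinct? (cdr xs)))))

(all-distinct? 1 2 3)   ; => #t
(all-distinct? 1 2 1 2) ; => #f, unlike n-1 pairwise comparisons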
I'll leave my original musings in place for context, and for others who wonder similar things.
Almost all of the predicates are inherited from Scheme, the standard that #!racket originally followed. It kept the number of procedures to a minimum as a design principle and left it to the user to make more complex structures and code. Feel free to make the ones you'd like:
(define not-equal? (compose1 not equal?))
(define != (compose1 not =))
; and so on
You can put them in a module and require it. Keep to the convention, so that people who read your code know after a minute that everything named not-<known-predicate> or !<known-predicate> is (compose not <known-predicate>).
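For example, a minimal module along those lines might look like this (the file name and exports are just illustrative):
;; predicates.rkt
#lang racket
(provide not-equal? !=)
(define not-equal? (compose1 not equal?))
(define != (compose1 not =))

;; in another file:
;; (require "predicates.rkt")
;; (not-equal? "a" "b") ; => #t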
If you want less work, and you are not going to use the result in filter, then making a special if-not might suffice:
(define-syntax-rule (if-not p c a) (if p a c))
(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if-not (pair? tree)
            (+ acc 1) ; base case first
            (aux (cdr tree)
                 (aux (car tree) acc)))))
But these are micro-optimizations compared to what I would have written:
(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if (not (pair? tree))
        (+ acc 1) ; base case first
        (aux (cdr tree)
             (aux (car tree) acc)))))
To be honest, if I were trying to squeeze out a not, I would just have swapped the branches manually, since at that point optimizing for speed trumps optimal readability.
I find that one can easily define != (for numbers) using the following macro:
(define-syntax-rule (!= a b)
  (not (= a b)))
(define x 25)
(!= x 25)
(!= x 26)
Output:
#f
#t
That may be the reason why it is not defined in the language; it can easily be created, if needed.
I can't seem to come up with an example of this, and I am wondering if there is such a case. I know that if I have an expression where applicative order doesn't terminate, normal order may still terminate. I'm wondering, though, if there is an example where both orders terminate but normal order takes fewer steps.
(λ p. λ q. q) ((λ x. λ y. λ z. ((x y) z)) (λ w. λ v. w))
With some whitespace:
(λ p.
  λ q.
    q
)
(
  (λ x.
    λ y.
      λ z.
        ((x y) z)
  )
  (λ w.
    λ v.
      w
  )
)
In normal order, the outermost reduction can be performed first, reducing directly to the identity combinator in one step. Applicative order will get there too, but it takes much longer since the x-y-z-w-v expression needs to be evaluated first.
Note that the x-y-z-w-v expression isn't even used. You can think of normal order as a sort of lazy evaluation: expressions are only evaluated or reduced when they are used. So you just build a formula that doesn't use one of its arguments and you immediately have an example of this kind of efficiency win.
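Concretely, here is a sketch of the two reduction sequences, counting β-steps and reducing under abstractions in the applicative-order case:

Normal order (one step, the argument is discarded unevaluated):
(λ p. λ q. q) ((λ x. λ y. λ z. ((x y) z)) (λ w. λ v. w))
→ λ q. q

Applicative order (the argument must be reduced first):
(λ p. λ q. q) ((λ x. λ y. λ z. ((x y) z)) (λ w. λ v. w))
→ (λ p. λ q. q) (λ y. λ z. (((λ w. λ v. w) y) z))
→ (λ p. λ q. q) (λ y. λ z. ((λ v. y) z))
→ (λ p. λ q. q) (λ y. λ z. y)
→ λ q. q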
In a lambda expression, any variable bound within an abstraction can be used zero or more times in the body of the abstraction.
Normal order evaluates the argument n times, where n is the number of times it is used in the body.
Applicative order evaluates the argument exactly once, irrespective of the number of times it is used in the body.
Comparison
If the argument is used exactly once, both normal order and applicative order will have the same performance.
If the argument is used more than once, applicative order will be faster (see the sketch after this list).
If the argument is used zero times, normal order will be faster.
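As a sketch of the second case (this example is not from the question), let the argument R = ((λ a. a) (λ b. b)) be used twice:

Applicative order (3 steps):
(λ x. (x x)) R
→ (λ x. (x x)) (λ b. b)
→ (λ b. b) (λ b. b)
→ λ b. b

Normal order (4 steps, R is copied before it is reduced):
(λ x. (x x)) R
→ R R
→ (λ b. b) R
→ R
→ λ b. b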
Having a little trouble understanding the core terms of the MicroKanren DSL. Section 4 says:
Terms of the language are defined by the unify operator. Here, terms of the language consist of variables, objects deemed identical under eqv?, and pairs of the foregoing.
But they never describe what the "pairs" actually mean. Are the pairs supposed to represent equality of two subterms, like so:
type 'a ukanren = KVar of int | KVal of 'a | KEq of 'a kanren * 'a kanren
So a term like:
(call/fresh (λ (a) (≡ a 7)))
Generates a pair for (≡ a 7)?
Edit: upon further thought, I don't think this is it. The mentions of "pair" in the paper seem to come much later, with extensions and refinements to the basic system, which would mean the pairs have no meaning in the terms for the basic intro. Is this correct?
In this context, "pair" just means a cons pair, such as (5 . 6) or (foo . #t). An example unification between two pairs is:
(call/fresh
  (λ (a)
    (call/fresh
      (λ (b)
        (≡ (cons a b) (cons 5 6))))))
which associates a with 5 and b with 6.
Sorry for the confusion and difficulty!! And thank you for the question!
You can think of the (typical) Kanren term language as having a single binary functor tag cons/2, and an infinite quantity of constants (the exact makeup changes from embedding to embedding).
Under the assumption that cons/2 is the only (n > 0)-ary functor tag, every compound term will have been built with it. If you look up standard presentations of unification (e.g. Martelli-Montanari) you'll usually see a step f(t0,...,tn) = g(s0,...,sm) => fail if f ≠ g or n ≠ m. We handle clash-failure in unification when comparing ground atomic terms; for us, compound terms must necessarily be of the same arity and tag.
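To make that concrete, here is a rough Racket sketch of a unifier over exactly that term language (variables, atoms compared with eqv?, and cons pairs); the names follow the paper, but the details are paraphrased rather than quoted:
(define (var c) (vector c))                  ; variables are one-element vectors
(define (var? x) (vector? x))
(define (var=? x1 x2) (equal? x1 x2))

(define (walk u s)                           ; look a variable up in the substitution s
  (let ((pr (and (var? u) (assf (λ (v) (var=? u v)) s))))
    (if pr (walk (cdr pr) s) u)))

(define (ext-s x v s) (cons (cons x v) s))   ; extend the substitution

(define (unify u v s)
  (let ((u (walk u s)) (v (walk v s)))
    (cond
      ((and (var? u) (var? v) (var=? u v)) s)
      ((var? u) (ext-s u v s))
      ((var? v) (ext-s v u s))
      ((and (pair? u) (pair? v))             ; the cons/2 case: unify the cars, then the cdrs
       (let ((s (unify (car u) (car v) s)))
         (and s (unify (cdr u) (cdr v) s))))
      (else (and (eqv? u v) s)))))           ; clash-failure for unequal ground atoms

; (unify (cons (var 0) (var 1)) (cons 5 6) '())
; => ((#(1) . 6) (#(0) . 5))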
pair? recognizes conses. In Racket, in fact, they provide a cons? alias, to make this clearer!
Once again, thank you!
I've read and somewhat understand Use of lambda for cons/car/cdr definition in SICP. My problem is understanding the why behind it. My first problem was staring and staring at
(define (cons x y)
  (lambda (m) (m x y)))
and not understanding how this function actually did any sort of consing. Consing as I learned it from various Lisp/Scheme books is putting stuff in lists, i.e.,
(cons 1 '()) => (1)
how does
(define (cons x y)
  (lambda (m) (m x y)))
do anything like consing? But then the light went on in my head: cons is only sort of a placeholder for the eventual definitions of car and cdr. So car is
(define (car z)
  (z (lambda (p q) p)))
and it anticipates an incoming z. But what is this z? When I saw this use:
(car (cons 1 2))
it finally dawned on me that, yes, the closure that cons returns, in its entirety, is z; i.e., we're passing the result of (cons 1 2) to car! How weird!
((lambda (m) (m 1 2)) (lambda (p q) p)) ; and then
((lambda (p q) p) 1 2)
which results in grabbing the first expression, since the basic car operation can be thought of as an if statement where the boolean is true: grab the first one.
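For completeness, the matching cdr definition selects the second argument in exactly the same way:
(define (cdr z)
  (z (lambda (p q) q)))

(cdr (cons 1 2)) ; => 2, via the same expansion:
; ((lambda (m) (m 1 2)) (lambda (p q) q))
; ((lambda (p q) q) 1 2)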
Yes, all lists can be thought of as cons-ed together expressions, but what have we won by this strangely backward definition? It's as if any initial, stand-alone definition of cons is not germane. It's as if uses of something define that something, as if there's no something until its uses circumscribe it. Is this the primary use of closures? Can someone give me some other examples?
but what have we won by this strangely backward definition?
The point of the exercise is to demonstrate that data structures can be defined completely in terms of functions; that data structures are not necessary as a primitive construct in a language -- if you have functions (that are closures), that's sufficient. This shows the power of functions, and is probably mind-boggling to someone from outside of functional programming.
It's not that in a real project we would actually define data structures this way. It would be more efficient to use language-provided data structure constructs. But it's important to know that we can do it this way. In computer science, it's useful to be able to "reduce" one construct (data structures) into another construct (functions) so that if we prove something about the second construct, it applies to the first one too.
I made a simple factorial program in Clojure.
(defn fac [x y]
  (if (= x 1) y (recur (- x 1) (* y x))))

(defn fact [n] (fac n 1))
How can it be done faster, if it can be done faster at all?
You can find many fast factorial algorithms here: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
As commented above, Clojure is not the best language for that. Consider using C, C++, or Fortran.
Be careful with the data structures that you use, because factorials grow really fast.
Here is my favorite:
(defn fact [n] (reduce *' (range 1 (inc n))))
The ' tells Clojure to use arbitrary-precision integers (BigInt) transparently, so as to avoid overflow.
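To see the difference at the REPL (a quick sketch):
(reduce *  (range 1 21)) ; => 2432902008176640000, 20! still fits in a long
;; (reduce * (range 1 22)) ; would throw ArithmeticException: integer overflow
(reduce *' (range 1 22)) ; => 51090942171709440000N, promoted to a BigInt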
With the help of your own fact function (or any other), we can define this extremely fast version:
(def fact* (mapv fact (cons 1 (range 1 21))))
This will give the right results for arguments in the range from 1 to 20 in constant time. Beyond that range, your version doesn't give correct results either (i.e. there's an integer overflow with (fact 21)).
EDIT: Here's an improved implementation that doesn't need another fact implementation, does not overflow and should be much faster during definition because it doesn't compute each entry in its lookup table from scratch:
(def fact (persistent! (reduce (fn [v n] (conj! v (*' (v n) (inc n))))
                               (transient [1])
                               (range 1000))))
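With that definition, getting a factorial is just a vector lookup (the table holds 0! through 1000!):
(fact 5)  ; => 120
(fact 21) ; => 51090942171709440000N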
EDIT 2: For a different fast solution, i.e. without building up a lookup table, it's probably best to use a library that's already highly optimized. Google's general utility library Guava includes a factorial implementation.
Add it to your project by adding this Leiningen dependency: [com.google.guava/guava "15.0"]. Then you need to (import com.google.common.math.BigIntegerMath) and can then call it with (BigIntegerMath/factorial n).
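Put together at the REPL, that looks like this (a sketch, assuming the dependency is on the classpath):
(import 'com.google.common.math.BigIntegerMath)
(BigIntegerMath/factorial 20) ; => 2432902008176640000 (a java.math.BigInteger)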