I'm trying to understand the difference in Z3 between equality testing and biconditional. My understanding is that = is used to express biconditional, but how is equality tested?
For example, I am trying to write something similar to the following (toy) statement in Z3:
on_table(o, a) ↔ (in_hand(o) ∧ a != pickup(o)) ∨ a = put_on_table(o)
Note: I am aware the above statement can be factored into a set of implications, but I am interested in expressing it as a single biconditional.
For the Bool type, equality and biconditional are the same operation. For any other type, a biconditional doesn't really make sense.
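For instance, the toy statement above can be written with = playing the role of ↔ on Bool terms. A minimal SMT-LIB sketch, assuming uninterpreted sorts Obj and Act and illustrative declarations for the predicates and actions:

(declare-sort Obj 0)
(declare-sort Act 0)
(declare-fun on_table (Obj Act) Bool)
(declare-fun in_hand (Obj) Bool)
(declare-fun pickup (Obj) Act)
(declare-fun put_on_table (Obj) Act)
; = applied to two Bool terms is exactly the biconditional
(assert (forall ((o Obj) (a Act))
  (= (on_table o a)
     (or (and (in_hand o) (not (= a (pickup o))))
         (= a (put_on_table o))))))
(check-sat)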
All logics in SMT come equipped with the notion of equality, which is essentially term-level equality of objects. The standard explicitly states:
Version 2.6 of the SMT-LIB format adopts as its underlying logic a version of many-sorted first-order logic with equality [Man93, Gal86, End01].
See Section 2.2 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
The same document also says (Section 3.7.1):
Note the absence of a symbol for double implication. Such a connective is superfluous because the equality symbol = can be used in its place.
I suspect, though, that you may be asking for something else. Some further examples would definitely help.
I am developing an expression evaluator. Which associativity is considered correct for an expression containing more than one exponentiation operator? For example, for the expression "10-2^2^0.5": is it "10-(2^2)^0.5" = 8, or "10-2^(2^0.5)" = 7.33485585731?
The result differs across languages and (possibly) interpreters. However, most of them use the right-associative rule.
In Lua print(10-2^2^0.5) returns 7.3348 and in Visual Basic, Console.WriteLine(10-2^2^0.5) returns 8.
The fact that different systems use different rules suggests to me that there is no single agreed-upon rule for this.
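For what it's worth, Python's ** operator is documented as right-associative, so a quick check there agrees with the Lua result:

# Python: ** associates to the right
print(10 - 2**2**0.5)    # parsed as 10 - 2**(2**0.5), about 7.33485585731
print(10 - (2**2)**0.5)  # forcing left grouping gives 8.0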
I am writing a program for which I need terms in their prefix notation.
The point is to be able to parse mathematical expressions to prefix notation while preserving the correct order of operations. I then want to save the result in the database for later use (using assert), which includes translating it to another language that uses prefix notation. Prolog operators all have a fixed priority, which is a feature I want to use, as I will be using all sorts of operators (including clp operators).
Among other things, I need to include complete mathematical expressions, such as ones using the equality operator. So I thought I cannot recursively use the univ operator (=..), because it won't accept equality operators and the like. Or can I somehow use =..?
Essentially I want to work with the internal representation of
N is 3*4+5 % just a random example
which would be
is(N,+(*(3,4),5))
Now, I do know that I can use write_canonical(N is 3*4+5) to get the internal representation as seen above.
So is there a way to somehow get the internal representation as a term or a list, or something similar?
Would it be possible to bind the output of write_canonical to a variable?
I hope my question is clear enough.
Prolog terms can be depicted as trees. But when writing a term, the way the term is displayed depends on the defined operators and on write options. Consider:
?- (N is 3*4+5) = is(N,+(*(3,4),5)).
true.
?- (N is 3*4+5) = is(Variable, Expression).
N = Variable,
Expression = 3*4+5.
?- 3*4+5 = +(*(3,4),5).
true.
I.e. operators are syntactic sugar. They don't change how terms are represented, only how terms are displayed.
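As to the question about =..: univ does work on any compound term, including terms whose principal functor is an operator such as is/2 or =/2, so you can use it to decompose expressions recursively. A short sketch (the with_output_to/2 variant for capturing write_canonical output is SWI-Prolog specific):

?- (N is 3*4+5) =.. List.
List = [is, N, 3*4+5].

?- (a = b) =.. List.
List = [=, a, b].

?- with_output_to(atom(A), write_canonical(x is 3*4+5)).
A = 'is(x,+(*(3,4),5))'.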
I am having a look at first-order logic theorem provers such as Vampire and E-Prover, and the TPTP syntax seems to be the way to go. I am more familiar with logic programming syntaxes such as Answer Set Programming and Prolog, and although I have tried referring to a detailed description of the TPTP syntax, I still don't seem to grasp how to properly distinguish between interpreted and uninterpreted functors (and I might be using the terminology wrongly).
Essentially, I am trying to prove a theorem by showing that no model acts as a counter-example. My first difficulty was that I did not expect the following logic program to be satisfiable.
fof(all_foo, axiom, ![X] : (pred(X) => (X = foo))).
fof(exists_bar, axiom, pred(bar)).
It is indeed satisfiable because nothing prevents bar from being equal to foo. So a first solution would be to insist that these two terms are distinct and we obtain the following unsatisfiable program.
fof(all_foo, axiom, ![X] : (pred(X) => (X = foo))).
fof(exists_bar, axiom, pred(bar)).
fof(foo_not_bar, axiom, foo != bar).
The Technical Report clarifies that different double-quoted strings are indeed different objects, so another solution is to put quotes here and there, so as to obtain the following unsatisfiable program.
fof(all_foo, axiom, ![X] : (pred(X) => (X = "foo"))).
fof(exists_bar, axiom, pred("bar")).
I am happy not to have to manually specify the inequality, as that would obviously not scale to a more realistic scenario. Moving closer to my real situation, I actually have to handle compound terms, and the following program is unfortunately satisfiable.
fof(all_foo, axiom, ![X] : (pred(X) => (X = f("foo")))).
fof(exists_bar, axiom, pred(g("bar"))).
I guess f("foo") is not a term but the function f applied to the object "foo", so it could potentially coincide with the function g. Although manually specifying that f and g never coincide does the trick (the following program is unsatisfiable), I feel like I'm doing it wrong. And it probably wouldn't scale to my real setting, with plenty of terms that should all be interpreted as distinct whenever they are syntactically distinct.
fof(all_foo, axiom, ![X] : (pred(X) => (X = f("foo")))).
fof(exists_bar, axiom, pred(g("bar"))).
fof(f_not_g, axiom, ![X, Y] : f(X) != g(Y)).
I have tried throwing single quotes around, but I didn't find the proper way to do it.
How do I make syntactically different (compound) terms denote distinct objects, and how do I test for syntactic equality?
Subsidiary question: the following program is satisfiable, because the automated theorem prover understands f as a function rather than as an uninterpreted functor.
fof(exists_f_g, axiom, (?[I] : ((f(foo) = f(I)) & pred(g(I))))).
fof(not_g_foo, axiom, ~pred(g(foo))).
To make it unsatisfiable, I need to manually specify that f is injective. What would be the natural way to obtain this behaviour without specifying injectivity of all functors that occur in my program?
fof(exists_f_g, axiom, (?[I] : ((f(foo) = f(I)) & pred(g(I))))).
fof(not_g_foo, axiom, ~pred(g(foo))).
fof(f_injective, axiom, ![X,Y] : (f(X) = f(Y) => (X = Y))).
First of all, let me point you to the Syntax BNF of TPTP. In principle, you have Prolog terms with some predefined infix/prefix operators of appropriate precedences. This means that variables are written in upper case and constants in lower case. Also like Prolog, escaping with single quotes allows us to write a constant starting with a capital letter, e.g. 'X'. I have never seen double-quoted atoms so far, so you might want to look up the prover's instructions on how they are interpreted.
But even though the syntax is Prolog-ish, automated theorem proving is a different kind of beast. There is no closed-world assumption, nor are different constants assumed to be different; that's why you cannot find a proof for:
fof(c1, conjecture, a=b ).
and neither for:
fof(c1, conjecture, ~(a=b) ).
So if you want syntactic dis-equality, you need to axiomatize it. Now, axiomatizing that a is different from b makes it trivial to show that they are different, so I claimed at least a little more: "Suppose there are two different constants a and b; then there exists some X which is not equal to b."
fof(a1, axiom, ~(a=b)).
fof(c1, conjecture, ?[X]: ~(X=b)).
Since functions in first-order logic are not necessarily injective, you also can't get around adding that assumption.
Please also note the different roles of input formulas: so far you have only stated axioms and no conjectures, i.e. you ask the prover to show that your axiom set is inconsistent. Some provers might even give up because they use resolution refinements (e.g. set of support) which restrict resolution between axioms[1]. In any case, you need to be aware that the formula you are trying to prove is of the form A1 ∧ ... ∧ An → C1 ∨ ... ∨ Cm, where the Ai are axioms and the Cj are conjectures.[2]
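For example, the earlier problem could be recast so that the goal sits in the conjecture role rather than being one more axiom (a sketch; the prover is then asked to derive the conjecture from the axioms and should report a theorem):

fof(all_foo, axiom, ![X] : (pred(X) => (X = foo))).
fof(foo_not_bar, axiom, foo != bar).
fof(not_bar, conjecture, ~pred(bar)).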
I hope that at least the syntax is a bit clearer now. Unfortunately, the answer to your questions is rather that automated theorem provers don't make the same assumptions as you expect, so you have to axiomatize those assumptions. These axiomatizations are also often ineffective, and you might get better performance from specialized tools.
[1] As you already noticed, advanced provers like Vampire or E Prover tell you about (counter-)satisfiability instead.
[2] A resolution-based theorem prover will first negate that formula and perform a CNF transformation, but even though most TPTP-accepting provers are resolution-based, that's not a requirement.
In Erlang, using => to compare two variables results in a syntax error, you have to use >= instead:
1> 10 => 5.
* 1: syntax error before: '>'
2> 10 >= 5.
true
Why is that? The same applies for <=, which has to be written as =<. Is this because Erlang has always used this syntax, or are the sequences => and <= used somewhere else?
Just to confirm what others have said: we used the same comparison operators as Prolog. I can't be certain why Prolog does it this way, but one reason could be that it leaves <= and => free to be used as "arrows", which could be useful. In Prolog it is very easy to define new operators, so even if they are not defined in the basic language they are still very useful:
:- op(Priority, Type, Operator).
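For example, a sketch of how such an arrow could be introduced (the priority 1050 here is just an illustrative choice; note that some recent Prolog systems already predefine => for their own purposes):

:- op(1050, xfy, =>).   % => now parses as a right-associative infix operator

?- write_canonical(a => b => c).
=>(a,=>(b,c))
true.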
The <= operator in Erlang is a binary generator, which can be used in list/binary comprehensions. It works in a similar way to <- but on binaries instead of lists.
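A quick shell example of <= as the binary generator:

1> << <<(X*2)>> || <<X>> <= <<1,2,3>> >>.
<<2,4,6>>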
Well, Erlang's syntax is influenced by Prolog's and Prolog uses the same convention so that's probably the reason.
I am not sure why Prolog uses >= and =< while => and <= go unused. I assume it's because => and <= are operators typically used for logical implication, so it would indeed be awkward to use them for comparison, especially in a logic programming language. It is also prettier imho :b
This is probably just an Erlang convention. The reason I'd guess would be to do with how we pronounce these symbols: "greater than or equal to", "less than or equal to". It's really a rendering of the greater-than-or-equal-to/less-than-or-equal-to symbol in ASCII, so at some point someone decided the token should be <= and >=, and the convention has stuck in most languages, but it's fairly arbitrary. Perhaps they were attempting to create some kind of representation of the asymmetric nature of these operators.
It's also worth noting that lots of languages use => to mean some kind of arrow, such as separating the body of a function from its arguments, or as logical entailment. Not sure about the converse one.
EDIT: It appears that Erlang uses <= in comprehensions, which is why they've avoided using it as a comparison operator, and opted for the (slightly backwards) syntax instead.
VB has the operators AndAlso and OrElse, which perform short-circuiting logical conjunction and disjunction, respectively.
Why is this not the default behavior of And and Or expressions, since short-circuiting is useful in every case?
Strangely, this is contrary to most languages where && and || perform short-circuiting.
Because the VB team had to maintain backward-compatibility with older code (and programmers!)
If short-circuiting were the default behavior, bitwise operations would be incorrectly interpreted by the compiler.
The Ballad of AndAlso and OrElse by Panopticon Central
Our first thought was that logical operations are much more common than bitwise operations, so we should make And and Or be logical operators and add new bitwise operators named BitAnd, BitOr, BitXor and BitNot (the last two being for completeness). However, during one of the betas it became obvious that this was a pretty bad idea. A VB user who forgets that the new operators exist and uses And when he means BitAnd and Or when he means BitOr would get code that compiles but produces "bad" results.
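In other words, And and Or do double duty as the bitwise operators on integral types, so their non-short-circuiting semantics couldn't simply be changed. A small illustrative sketch:

Dim bits As Integer = 6 And 3          ' bitwise And on Integers: 2
Dim both As Boolean = True And False   ' logical And on Booleans: False,
                                       ' and both operands are always evaluated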
I do not find short-circuiting to be useful in every case. I use it only when required. For instance, when checking two different and unconnected variables, it would not be required:
If x > y And y > z Then
End If
As the article by Paul Vick illustrates (see the link provided by Ken Browning above), the perfect scenario in which short-circuiting is useful is when an object has to be checked for existence first and then one of its properties is to be evaluated.
If x IsNot Nothing AndAlso x.Someproperty > 0 Then
End If
So, in my opinion both syntactical options are very much required.
Explicit short-circuiting makes sure that the left operand is evaluated first. In some languages other than VB, logical operators may perform an implicit short-circuit but may evaluate the right operand first (depending, for instance, on the complexity of the expressions to the left and right of the logical operator).
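A minimal sketch of that guarantee in VB (the helper functions here are hypothetical, just to show which operands get evaluated):

Function LeftSide() As Boolean
    Console.WriteLine("left")    ' always printed: the left operand goes first
    Return False
End Function

Function RightSide() As Boolean
    Console.WriteLine("right")   ' never printed when the left side is False
    Return True
End Function

Sub Demo()
    ' AndAlso evaluates LeftSide() first and skips RightSide() entirely.
    If LeftSide() AndAlso RightSide() Then
        Console.WriteLine("both true")
    End If
End Sub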