What does the semantic entailment relation M |= A mean?

I have read many articles about it. They describe it as:
In logics, meaning is often described by a satisfaction relation
M |= A
that describes when a situation M satisfies a formula A.
So I also searched for some examples and found the following:
True |= False = false
False |= True = true
I don't understand at all. What does it mean in these cases?

(Assuming you are talking about propositional logic; it is similar for other logics such as predicate logic.)
for two formulas A and B:
A |= B
"B evaluates to true under all evaluations that evaluate A to true"
for a set of formulas M and a formula B:
M |= B
"for every evaluation: B evaluates to true if only all elements of M
evaluate to true"
coming to your examples:
true |= false
is incorrect, since there are evaluations (in fact, all of them) under which 'true' evaluates to true but 'false' does not
false |= A
is correct for any formula A, since 'false' never evaluates to 'true' under any evaluation,
so the condition is vacuously satisfied.
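To make this concrete, here is a small Python sketch (my own illustration, not from the answer above) that checks entailment by enumerating every evaluation; formulas are represented simply as functions of a valuation:

from itertools import product

def entails(premises, conclusion, variables):
    # M |= B: every valuation making all premises true also makes the conclusion true
    for values in product([False, True], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        if all(p(valuation) for p in premises) and not conclusion(valuation):
            return False
    return True

TRUE  = lambda v: True
FALSE = lambda v: False
A     = lambda v: v["A"]

print(entails([TRUE],  FALSE, ["A"]))  # False: some valuation makes 'true' true but 'false' false
print(entails([FALSE], A,     ["A"]))  # True: no valuation makes 'false' true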
rgrds.

Related

Algorithm for enumerating all permutations of algebraic expressions

If I have a list of variables, such as {A, B, C} and a list of operators, such as {AND, OR}, how can I efficiently enumerate all permutations of valid expressions?
Given the above, I would want to see as output (assuming evaluation from left-to-right with no operator precedence):
A AND B AND C
A OR B OR C
A AND B OR C
A AND C OR B
B AND C OR A
A OR B AND C
A OR C AND B
B OR C AND A
I believe that is an exhaustive enumeration of all combinations of inputs. I don't want to be redundant, so for example, I wouldn't add "C OR B AND A" because that is the same as "B OR C AND A".
Any ideas of how I can come up with an algorithm to do this? I really have no idea where to even start.
Recursion is a simple way to go:
// builds every ordering of the variables with every choice of operators between them
void AllPossibilities(variables, operators, used, currentExpression){
    if(used.size == variables.size) {
        print(currentExpression);
        return;
    }
    foreach(v in variables){
        if(used.contains(v)) continue;   // each variable appears exactly once
        if(currentExpression.isEmpty()) {
            AllPossibilities(variables, operators, used + v, v);
        } else {
            foreach(op in operators){
                AllPossibilities(variables, operators, used + v,
                                 currentExpression + " " + op + " " + v);
            }
        }
    }
}
This is not an easy problem. First, you need a notion of grouping, because
(A AND B) OR C != A AND (B OR C)
Second, you need to generate all expressions. This will mean iterating through every permutation of terms, and grouping of terms in the permutation.
Third, you have to actually parse every expression, bringing the parsed expressions into a canonical form (say, CNF; see https://en.wikipedia.org/wiki/Binary_expression_tree#Construction_of_an_expression_tree).
Finally, you have to actually check equivalence of the expressions seen so far. This is checking equivalence of the AST formed by parsing.
It will look loosely like this.
INPUT: terms
unique_expressions = empty_set
for p_t in permutations of terms:
    for p_o in permutations of operations:
        e = merge_into_expression(p_t, p_o)
        parsed_e = parse(e)
        already_seen = False
        for unique_e in unique_expressions:
            if equivalent(parsed_e, unique_e):
                already_seen = True
                break
        if not already_seen:
            unique_expressions.add(parsed_e)
For more info, check out this post: How to check if two boolean expressions are equivalent
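For illustration, here is a rough Python sketch of that pipeline (names and structure are my own, not the pseudocode above). Instead of parsing into a canonical form, it identifies equivalent expressions by their truth tables, which for a handful of variables amounts to the same equivalence check:

from itertools import permutations, product

def truth_table(expr, variables):
    # evaluate the expression left-to-right with no precedence, as in the question
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        tokens = expr.split()
        acc = env[tokens[0]]
        for i in range(1, len(tokens), 2):
            op, operand = tokens[i], env[tokens[i + 1]]
            acc = (acc and operand) if op == "AND" else (acc or operand)
        rows.append(acc)
    return tuple(rows)

def enumerate_expressions(variables, operators):
    seen = {}                                   # truth table -> first expression seen
    for order in permutations(variables):
        for ops in product(operators, repeat=len(variables) - 1):
            expr = order[0]
            for op, v in zip(ops, order[1:]):
                expr += " " + op + " " + v
            seen.setdefault(truth_table(expr, variables), expr)
    return list(seen.values())

for e in enumerate_expressions(["A", "B", "C"], ["AND", "OR"]):
    print(e)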

Basic Logic operators both AND and OR

Hi, I have three conditions
A, B and C
Now, I want to apply these conditions in such a way that if any of them is true, or any combination of them is true, the whole outcome is true.
If I do
A || B || C
then as soon as A is true, B and C are not evaluated
if I do
A && B && C
it's only true if ALL of them are true.
Is there a special notation for fulfilling my wishes?
Using the OR (||) operator will give you the correct answer because it does not need to evaluate the other conditions. But if you want B and C to be evaluated even if A is true, then you should nest the if statements, such as:
if A == True
    do something;
    if B == True
        do something;
        if C == True
            do something;
Or just do three separate If statements.
You have answered your own question.
You want a situation whereby if EITHER A OR B OR C is true, or if a combination such as A AND B is true, then the whole expression will evaluate to true.
That is A || B || C. If you only care about requiring ANY of the conditions to be true, then there is no need to evaluate all the conditions because as long as ONE condition is true, then your whole expression or outcome is true.
If you care about the specific combinations being true TOGETHER then you can group them using parenthesis as such:
If I want A AND B to be true at the same time OR C: (A && B) || C
If I want A AND C to be true at the same time OR B: (A && C) || B
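To see the difference in behaviour, here is a tiny illustration (Python used only for brevity; the condition names and values are made up):

def check(name, value):
    print("evaluating", name)
    return value

A, B, C = True, False, True

# short-circuit: once A is true, B and C are never evaluated
print(check("A", A) or check("B", B) or check("C", C))

# separate if statements: every condition is evaluated and acted on
for name, value in [("A", A), ("B", B), ("C", C)]:
    if value:
        print(name, "is true, do something")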

Strategies for proving propositional tautologies?

Input is a string of symbols with (any) checked syntax and output is TRUE or FALSE.
My idea was a postfix representation of logical expressions written with AND, XOR and TRUE, but I finally realized that the patterns would be harder to recognize in postfix.
Examples:
p IMPLIES q can be written TRUE XOR p (XOR (p AND q)) abbreviated 1+p+pq
p EQUIVALENT WITH q can be written abbreviated 1+p+q
NOT p abbreviated 1+p
p OR q abbreviated p+q+pq
The rules in this Boolean ring are the same as in ordinary algebra, with the two extra rules
p+p=0
pp=p
and those rules, together with commutativity, are responsible for all reductions, which lead to '1' if the string corresponds to a tautology. The tautology modus ponens,
((p IMPLIES q) AND p) IMPLIES q,
should first be substituted as above, then expanded by multiplying out distributively, and finally simplified repeatedly. A straightforward substitution of IMPLIES gives:
1 + ((1+p+pq)p) + ((1+p+pq)p)q =
= 1 + p + pp + pqp + (p + pp + pqp)q =
= 1 + p + p + pq + pq + pq + pq =
= 1 + pq + pq + pq + pq = 1
When a tautological expression is written as an element of a Boolean ring, it reduces mechanically to 1. Other expressions reduce to algebraically simpler ones.
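As a rough Python sketch of how this reduction can be mechanised (the representation below is my own convenient choice, not part of the scheme beyond the rules p+p=0 and pp=p): a polynomial is a set of monomials, each monomial a frozenset of variable names, with the empty monomial standing for the constant 1.

ONE = frozenset()                        # the empty monomial is the constant 1

def xor(p, q):
    return p ^ q                         # p + q: duplicate monomials cancel (p+p=0)

def mul(p, q):
    result = set()
    for m1 in p:                         # multiply monomial-wise, using pp=p,
        for m2 in q:                     # and cancel duplicates as we go
            result ^= {m1 | m2}
    return result

def var(name):
    return {frozenset([name])}

def NOT(p):        return xor({ONE}, p)                  # 1 + p
def AND(p, q):     return mul(p, q)                      # pq
def OR(p, q):      return xor(xor(p, q), mul(p, q))      # p + q + pq
def IMPLIES(p, q): return xor(xor({ONE}, p), mul(p, q))  # 1 + p + pq

def is_tautology(poly):
    return poly == {ONE}

p, q = var('p'), var('q')
# modus ponens: ((p IMPLIES q) AND p) IMPLIES q reduces to 1
print(is_tautology(IMPLIES(AND(IMPLIES(p, q), p), q)))   # True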
Is this a good strategy? What strategies are used in computer science?
As discussed in this overview paper, an arbitrary propositional formula can be converted into Conjunctive Normal Form (CNF) in such a way that the result is only polynomially larger and is unsatisfiable iff the original formula was a tautology (essentially, one encodes the negation of the formula).
Practical tools for conversion from formula to CNF include bool2cnf and bc2cnf.
SAT solvers for checking the unsatisfiability of the CNF include CryptoMiniSat and Lingeling.
See a related post which shows how to process propositional formulae using a SAT solver.
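The reduction this relies on is that a formula is a tautology iff its negation is unsatisfiable. As a tiny self-contained illustration of that check (brute-force enumeration standing in for the CNF-plus-SAT-solver pipeline, which is what you would actually use on anything non-trivial):

from itertools import product

def is_tautology(formula, variables):
    # F is a tautology  <=>  NOT F is unsatisfiable  <=>  F holds under every valuation
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# modus ponens: ((p IMPLIES q) AND p) IMPLIES q
modus_ponens = lambda v: (not ((not v["p"] or v["q"]) and v["p"])) or v["q"]
print(is_tautology(modus_ponens, ["p", "q"]))   # True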

how to construct truth table for not gate in 3-satisfiability circuit?

I'm having a lot of trouble understanding this relatively simple concept. I'm proving that the 3-satisfiability problem is NP-complete, and part of that involves transforming boolean gates (for example, the NOT gate) into clauses in conjunctive normal form.
So if one gate is a "NOT" gate that takes input a and returns b = NOT(a), apparently the right answer is that we can enforce the two clauses: a OR b, and NOT(a) OR NOT(b).
This can be done by a truth table, but I can't seem to figure out how this truth table works. If anyone can explain, that would be very helpful.
Define a "consistency function" C taking 2 arguments a and b, which is True(1) if and only if the values of a and b are consistent with the definition of a NOT gate.
In your case,
C(0,0) = 0 (since a=0 b=0 is inconsistent with a=NOT(b) )
C(0,1) = 1
C(1,0) = 1
C(1,1) = 0
which is the desired truth table.
Now you can obtain an expression for C from the truth table: C = a'.b + a.b' = (a + b).(a' + b')
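As a quick sanity check, one can tabulate the two clauses and compare them against b = NOT(a); a small Python sketch:

from itertools import product

for a, b in product([False, True], repeat=2):
    clauses_hold = (a or b) and ((not a) or (not b))
    print(a, b, clauses_hold, clauses_hold == (b == (not a)))   # last column is always True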

DPLL algorithm definition

I am having some problems understanding the DPLL algorithm and I was wondering if anyone could explain it to me because I think my understanding is incorrect.
The way I understand it is: I take some set of literals, and if every clause is true then the model is true, but if some clause is false then the model is false.
I recursively check the model by looking for a unit clause; if there is one, I set the value for that unit clause to make it true and then update the model, removing all clauses that are now true and removing all literals that are now false.
When there are no unit clauses left, I choose some other literal and branch on it, once assigning a value that makes it true and once a value that makes it false, then again remove all clauses that are now true and all literals that are now false.
DPLL requires a problem to be stated in conjunctive normal form, that is, as a set of clauses, each of which must be satisfied.
Each clause is a set of literals {l1, l2, ..., ln}, representing the disjunction of those literals (i.e., at least one literal must be true for the clause to be satisfied).
Each literal l asserts that some variable is true (x) or that it is false (~x).
If any literal is true in a clause, then the clause is satisfied.
If all literals in a clause are false, then the clause is unsatisfiable and hence the problem is unsatisfiable.
A solution is an assignment of true/false values to the variables such that every clause is satisfied. The DPLL algorithm is an optimised search for such a solution.
DPLL is essentially a depth first search that alternates between three tactics. At any stage in the search there is a partial assignment (i.e., an assignment of values to some subset of the variables) and a set of undecided clauses (i.e., those clauses that have not yet been satisfied).
(1) The first tactic is Pure Literal Elimination: if an unassigned variable x only appears in its positive form in the set of undecided clauses (i.e., the literal ~x doesn't appear anywhere) then we can just add x = true to our assignment and satisfy all the clauses containing the literal x (similarly if x only appears in its negative form, ~x, we can just add x = false to our assignment).
(2) The second tactic is Unit Propagation: if all but one of the literals in an undecided clause are false, then the remaining one must be true. If the remaining literal is x, we add x = true to our assignment; if the remaining literal is ~x, we add x = false to our assignment. This assignment can lead to further opportunities for unit propagation.
(3) The third tactic is to simply choose an unassigned variable x and branch the search: one side trying x = true, the other trying x = false.
If at any point we end up with an unsatisfiable clause then we have reached a dead end and have to backtrack.
There are all sorts of clever further optimisations, but this is the core of almost all SAT solvers.
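Putting the three tactics together, here is a compact Python sketch of the search (my own illustration of the scheme above, using the usual integer encoding: a clause is a list of nonzero integers, with -x meaning NOT x):

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict: variable -> bool) or None."""
    if assignment is None:
        assignment = {}

    # simplify: drop satisfied clauses, strip false literals from the rest
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                                  # clause already satisfied
        reduced = [l for l in clause if abs(l) not in assignment]
        if not reduced:
            return None                               # empty clause: dead end, backtrack
        simplified.append(reduced)
    if not simplified:
        return assignment                             # every clause satisfied

    # (2) unit propagation: a one-literal clause forces that literal
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})

    # (1) pure literal elimination: a variable occurring with only one polarity
    literals = {l for clause in simplified for l in clause}
    for l in literals:
        if -l not in literals:
            return dpll(simplified, {**assignment, abs(l): l > 0})

    # (3) branch on some unassigned variable
    v = abs(simplified[0][0])
    return (dpll(simplified, {**assignment, v: True})
            or dpll(simplified, {**assignment, v: False}))

# (A v B v C) ^ (~C v D) ^ (~D v A) with A=1, B=2, C=3, D=4
print(dpll([[1, 2, 3], [-3, 4], [-4, 1]]))   # e.g. {1: True, 3: False}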
Hope this helps.
The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, a problem also known as the satisfiability problem, or SAT.
Any boolean formula can be expressed in conjunctive normal form (CNF), which means a conjunction of clauses, i.e. ( … ) ^ ( … ) ^ ( … )
where a clause is a disjunction of literals (variables or their negations), e.g. ( A v B v C' v D)
an example of a boolean formula expressed in CNF is
(A v B v C) ^ (C' v D) ^ (D' v A)
and solving the SAT problem means finding a combination of values for the variables that satisfies the formula, such as A=1, B=0, C=0, D=0
This is an NP-complete problem. In fact it was the first problem proven to be NP-complete, by Stephen Cook and Leonid Levin.
A particular type of SAT problem is 3-SAT, in which every clause has exactly three literals.
The DPLL algorithm is a way to solve the SAT problem (how well it works in practice depends on the hardness of the input) that recursively creates a tree of potential solutions.
Suppose you want to solve a 3-SAT problem like this
(A v B v C) ^ (C’ v D v B) ^ (B v A’ v C) ^ (C’ v A’ v B’)
if we number the variables as A=1, B=2, C=3, D=4 and use negative numbers for negated variables, like A' = -1, then the same formula can be written in Python like this
[[1,2,3],[-3,4,2],[2,-1,3],[-3,-1,-2]]
now imagine creating a tree in which each node consists of a partial solution. In our example each node also shows a vector of the clauses satisfied by that partial solution
the root node is [-1,-1,-1,-1], which means no values (neither 0 nor 1) have been assigned to the variables yet
at each iteration:
we take the first unsatisfied clause, then
if there are no more unassigned variables we can use to satisfy that clause, then there can be no valid solutions in this branch of the search tree and the algorithm shall return None
otherwise we take the first unassigned variable, set it so that it satisfies the clause, and start recursively from step 1. If the inner invocation of the algorithm returns None, we flip the value of that variable so that it does not satisfy the clause and set the next unassigned variable so that it satisfies the clause. If all three variables have been tried, or there are no more unassigned variables for that clause, it means there are no valid solutions in this branch and the algorithm shall return None
See the following example:
from the root node we choose the first variable (A) of the first clause (A v B v C) and set it so that it satisfies the clause, so A=1 (second node of the search tree)
then we continue with the second clause (C' v D v B), pick its first unassigned variable (C) and set it so that it satisfies the clause, which means C=0 (third node on the left)
we do the same thing for the third clause (B v A' v C) and set B to 1
when we try to do the same thing for the last clause, we realize we no longer have unassigned variables and the clause is always false. We then have to backtrack to the previous position in the search tree. We change the value we assigned to B and set B to 0. Then we look for another unassigned variable that can satisfy the third clause, but there is none. Then we have to backtrack again to the second node
Once there we have to flip the assignment of the variable we set there (C) so that it no longer satisfies the clause, and set the next unassigned variable (D) in order to satisfy it (i.e. C=1 and D=1). This also satisfies the third clause, which contains C.
The last clause to satisfy, (C' v A' v B'), has one unassigned variable, B, which can then be set to 0 in order to satisfy the clause.
At this link http://lowcoupling.com/post/72424308422/a-simple-3-sat-solver-using-dpll you can also find Python code implementing it.
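For illustration, here is a minimal sketch of the recursive scheme described above, written against the same list-of-integers encoding (my own rough version, not the code from the linked post):

def satisfied(clause, assignment):
    return any(assignment.get(abs(l)) == (1 if l > 0 else 0) for l in clause)

def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    for clause in clauses:
        if satisfied(clause, assignment):
            continue
        # first unsatisfied clause: try each of its unassigned variables,
        # set so that it satisfies the clause, and recurse
        for literal in clause:
            var = abs(literal)
            if var in assignment:
                continue
            assignment[var] = 1 if literal > 0 else 0
            result = solve(clauses, assignment)
            if result is not None:
                return result
            del assignment[var]          # that branch failed: backtrack, try the next literal
        return None                      # no literal of this clause can be satisfied here
    return assignment                    # every clause is satisfied

print(solve([[1, 2, 3], [-3, 4, 2], [2, -1, 3], [-3, -1, -2]]))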
