I'm not sure if this belongs here, but I was told to evaluate
(00110101 ^ (10010101 v 10100000))
What is my answer supposed to look like, and how would I go about this?
I'm thinking of treating each of those values as a variable, like (a ^ (b v c)),
and then making a truth table? Is that what I'm supposed to do?
Do the stuff in the parentheses first:
10010101
| 10100000
-----------
101.....
Then AND the result with the first number:
00110101
& 101.....
-----------
001.....
For bitwise logic, simply evaluate the expression bit by bit. Take the first bit of b and the first bit of c and OR them, then AND the result with the first bit of a. Then move on to the second bit, and so on. You should end up with a bit string of the same length as the starting strings.
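For instance, you can check the whole computation in Python with the built-in bitwise operators. (Note that I'm reading the ^ in the original problem as AND, matching the working above; in Python, ^ means XOR, so the AND operator & is used instead.)

a = 0b00110101
b = 0b10010101
c = 0b10100000

step = b | c                  # OR inside the parentheses first
result = a & step             # then AND with the first number
print(format(step, '08b'))    # 10110101
print(format(result, '08b'))  # 00110101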
When attempting to solve logic problems on a computer, it is usual to first convert them to CNF, because the best solving algorithms expect CNF as input.
For propositional logic, the textbook rules for this conversion are simple, but if you apply them as is, the result is one of the very rare cases where a program encounters double exponential resource consumption without being specifically constructed to do so:
a <=> (b <=> (c <=> ...))
with N variables generates 2^(2^N) clauses: one exponential blowup comes from the conversion of equivalence to AND/OR, and another from the distribution of OR over AND.
The solution to this is to rename subterms. If we rewrite the above as something like
r <=> (c <=> ...)
a <=> (b <=> r)
where r is a fresh symbol that is being defined to be equal to a subterm - in general, we may need O(N) such symbols - the exponential blowups can be avoided.
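As an illustration, here is a minimal sketch of the renaming trick for the propositional chain above, in Python; the variable numbering and the standard four-clause encoding of r <=> (x <=> y) are my own choices, and the point is only that the clause count stays linear in N rather than 2^(2^N).

def chain_to_cnf(vars_):
    # CNF clauses (lists of signed, 1-based DIMACS-style ints) for
    # vars_[0] <=> (vars_[1] <=> (... <=> vars_[-1]))
    clauses = []
    fresh = len(vars_)          # next unused variable index
    rhs = vars_[-1]             # innermost subterm is just the last variable
    for v in reversed(vars_[1:-1]):
        fresh += 1
        r = fresh               # fresh symbol r, defined as (v <=> rhs)
        clauses += [[-r, -v, rhs], [-r, v, -rhs], [r, v, rhs], [r, -v, -rhs]]
        rhs = r
    # finally, assert the outermost equivalence vars_[0] <=> rhs
    clauses += [[-vars_[0], rhs], [vars_[0], -rhs]]
    return clauses

print(len(chain_to_cnf(list(range(1, 11)))))  # 10 variables -> 34 clauses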
Unfortunately, this runs into a problem when we try to extend it to first-order logic. Using TPTP notation where ? means 'there exists' and variables begin with capital letters, consider
a <=> ?[X]:p(X)
Admittedly this case is simple enough that there is no actual need to rename the subterm, but it's necessary to use a simple case for illustration, so suppose we are using an algorithm that just automatically renames arguments of the equivalence operator; the point generalizes to more complex cases.
If we try the above trick and rename the ? subterm, we get
r <=> ?[X]:p(X)
Existential variables are converted to Skolem symbols, so that ends up as
r <=> p(s)
The original formula then expands to
(~a | r) & (a | ~r)
which is, by construction, equivalent to
(~a | p(s)) & (a | ~p(s))
But this is not correct! If we had not done the renaming, but had just expanded the original formula as it was, we would get
(~a | ?[X]:p(X)) & (a | ~?[X]:p(X))
(~a | ?[X]:p(X)) & (a | ![X]:~p(X))
(~a | p(s)) & (a | ~p(X))
which is critically different from the version we got with the renaming.
The problem is that equivalence needs both the positive and negative versions of each argument, but applying negation to terms that contain universal or existential quantifiers structurally changes those terms; you cannot just encapsulate them in a definition and then apply the negation to the defined symbol.
The upshot of this is that when you have equivalence and the arguments may contain such quantifiers, you actually need to recur through each argument twice, once for the positive version and once for the negative. This suffices to bring back the exponential blowup we hoped to avoid by doing the renaming. As far as I can see, this problem is not caused by the way a particular algorithm works, but by the nature of the task.
So my question:
Given an input formula that may contain arbitrary nesting of equivalence and quantifiers, is there any algorithm that will correctly turn this to CNF with a polynomial rather than exponential number of clauses?
As you observed, an existential such as ∃X.p(X) is not in fact equivalent to a Skolemized expression p(S). Its negation ¬∃X.p(X) is not equivalent to ¬p(S), but to ∀Y.¬p(Y).
Possible approaches that avoid the exponential blow-up:
Convert existentials such as ∃X.p(X) to negated universals such as ¬∀Y.¬p(Y), or vice versa, so you have a canonical form. Skolemize at a later step.
Remember when you convert that your p(S) is a Skolemized existential, and that its negation is ∀Y.¬p(Y).
Define terms equivalent to universals and existentials, such that a represents ∀Y.p(Y) and ¬a then represents ¬∀Y.p(Y), or equivalently, ∃X.¬p(X).
Use the symmetry of Boolean duals: the same transformations apply with AND and OR swapped (De Morgan's Laws) and with existentials replaced by negated universals, which restores the symmetry between the expansions of r and ~r. The negations introduced by the universal/existential conversion and by De Morgan's Laws cancel each other out, and the duality of switching AND and OR means you can mechanically re-use the expansion of r to generate the expansion of ~r.
Given that you need to support ALL and NOT ALL statements anyway, this should not create any new problems. Just canonicalize and use the same approach you would for a universal.
If you’re solving by converting to SAT, your terms can represent universals, too. So, in your example, you’re trying to replace a with r, but you can still use ~a, equivalent to the negative universal.
In your expressions, you'd still use (~a | r) & (a | ~r), but expand ~r to its correct value rather than the incorrect one. That example is trivial, since that's just ~a, but you'd normally define r as equivalent to a more complex transformation, and in that case you need to remember what both r and ~r represent. It is not really a simple mechanical transformation of the Skolemized expression.
In this example, I’m not sure why it’s a problem that (~a | r) & (a | ~r) is equivalent to (~a | r) & (a | ~a), which simplifies to (~a | r). That’s not going to give you exponential blow-up? When you translate back to first-order predicate logic, make the correct translation.
Update
Thanks for clarifying what the problem was in chat. As I currently understand it, what you have is an equivalence with a left and a right side, containing other nested equivalences, and you want to expand both the equivalence and its negation. The problem is that, because the negation does not have a symmetrical form, you need to recurse twice for each nested equivalence in the tree, once when expanding the equivalence and once when expanding its negation?
You should define a transformation that generates the negative expansion from the positive expansion in linear time, and divide-and-conquer the expressions containing nested equivalences using that. This seems to be what you were after with the ~p(S) transformation.
To do this, recall that ¬∃X.p(X) is equivalent to ∀X.¬p(X), and vice versa. Then, once you've expanded p(X) into normal form as conjunctions and disjunctions, De Morgan's Laws let you turn an expression like ¬(a ∨ ¬b) into ¬a ∧ b. The inner ¬ from the quantifier transformation and the outer ¬ from the De Morgan transformation cancel each other out. Finally, the dual of any Boolean equivalence remains valid when you replace each ∨ and ∧ with the other and any atom a or ¬a with its inverse.
So, while I might be making an error, especially at 1 AM, it looks to me like what you want is the dual transformation that substitutes:
An outer ∃ for ∀ and vice versa
∧ for ∨ and vice versa
Each term t with ¬t and vice versa
Apply this to the expansion of the positive equivalence to generate the negative dual in time proportional to its length, without further recursion.
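If it helps, here is a toy sketch of that dual transformation in Python, on a small hand-rolled AST; the node encoding ('and', 'or', 'forall', 'exists', 'not', plus atom tuples) is my own invention for illustration, not from any particular prover. It runs in a single linear pass with no further recursion into subformulas.

def dual(node):
    # Swap and/or, swap forall/exists, and negate each atom; applied to
    # a formula F, this yields a formula equivalent to ~F.
    op = node[0]
    if op == 'and':
        return ('or',) + tuple(dual(c) for c in node[1:])
    if op == 'or':
        return ('and',) + tuple(dual(c) for c in node[1:])
    if op == 'forall':
        return ('exists', node[1], dual(node[2]))
    if op == 'exists':
        return ('forall', node[1], dual(node[2]))
    if op == 'not':
        return node[1]          # negated atom -> plain atom
    return ('not', node)        # plain atom -> negated atom

# ~(all X. (p(X) | ~q)) is equivalent to (exists X. (~p(X) & q)):
print(dual(('forall', 'X', ('or', ('p', 'X'), ('not', ('q',))))))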
This looks like such an easy problem, but I still can't figure it out. How do I prove ¬(¬a = a)?
No given premises.
I got this so far (in Fitch):
This is a subproof where I assume the negation of my goal and then try to reach the absurd/contradiction so I can state the negation of my assumption, which would be my goal.
Thanks in advance!
Looking at your screenshot I'd say your =Intro introduces a variable a (that is, a is an object of the domain, rather than a predicate).
I say this because
in all books I've read, the =Intro rule is used for objects rather than predicates, and
for predicates, equals is expressed as "if and only if" which is typically written as ↔ and not =.
So, in other words, the only sensible interpretation of ¬(¬a = a) is that = binds harder than ¬, and the whole formula should be interpreted as ¬(¬(a = a)).
Now you should be able to:
1. introduce a = a (by =Intro),
2. assume the contrary: ¬(a = a),
3. arrive at a contradiction, ⊥, based on 1. and 2.,
4. use ¬Intro on 2 and 3 to get ¬(¬(a = a)).
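For comparison, reading the formula as ¬(¬(a = a)) as above, the same argument is a one-liner in Lean 4 (a sketch with my own names, not tied to your Fitch system):

-- assume h : ¬(a = a); applying h to rfl : a = a yields the contradiction
example {α : Type} (a : α) : ¬ ¬ (a = a) :=
  fun h => h rfl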
The GSAT (Greedy Satisfiability) algorithm can be used to find a solution to a search problem encoded in CNF. I'm aware that since GSAT is greedy, it is incomplete (which means there would be cases where a solution might exist, but GSAT cannot find it). From the following link, I learned that this can happen when flipping variables greedily traps us in a cycle such as I → I' → I'' → I.
http://www.dis.uniroma1.it/~liberato/ar/incomplete/incomplete.html
I've been trying quite hard to come up with an actual instance that can show this, but have not been able to (and could not find examples elsewhere). Any help would be much appreciated. Thanks :)
P.S. I'm not talking about "hard" k-SAT problems in which the ratio of variables to clauses approaches 4.3. I'm just looking for a simple example, possibly involving the least number of variables and/or clauses required.
Take a small unsatisfiable formula with n variables and run GSAT for more than 2^n steps. Since there are only 2^n different assignments to try, GSAT must repeat itself, and it will not stop, because the formula is never satisfied.
One small unsatisfiable formula is (A V B V C) ^ (~A V B V C) ^ (A V ~B V C) ^ (~A V ~B V C) ^ (A V B V ~C) ^ (~A V B V ~C) ^ (A V ~B V ~C) ^ (~A V ~B V ~C): all 8 sign combinations of clauses over the three variables.
In Knuth, Vol. 4A, Section 7.1.1, equation (32), p. 56, Knuth gives what he calls an interesting 8-clause formula with eight different variables.
What about the formula:
{x_1, x_2, -x_3}, {-x_1, x_2, -x_3}, {-x_2, -x_3}, {-x_2, -x_3}, {x_2, x_3}, {x_2, x_3}
This formula is satisfied by the assignment (0,1,0). However, if one starts with the initial assignment (0,0,1), one gets the scores (1,2,2), i.e. the number of clauses left unsatisfied by flipping x_1, x_2, or x_3 respectively, and therefore will flip x_1. That leads to the assignment (1,0,1), which again yields the scores (1,2,2), so x_1 is flipped back and you are stuck in the cycle (0,0,1) → (1,0,1) → (0,0,1). Only a restart with another initial assignment will get you out.
Of course this is a little contrived due to the two duplicated clauses, but I am sure one can easily extend it to a formula without repeated clauses.
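To see the cycle concretely, here is a minimal GSAT sketch in Python (my own code, with clauses encoded as tuples of signed integers) run on that formula from the fixed initial assignment (0,0,1); it just flips x_1 back and forth.

clauses = [(1, 2, -3), (-1, 2, -3), (-2, -3), (-2, -3), (2, 3), (2, 3)]

def unsat_count(assign, clauses):
    # a clause is satisfied if at least one of its literals holds
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

assign = {1: False, 2: False, 3: True}   # the initial assignment (0,0,1)
for step in range(4):
    scores = {}
    for v in assign:
        assign[v] = not assign[v]        # tentatively flip v
        scores[v] = unsat_count(assign, clauses)
        assign[v] = not assign[v]        # undo the flip
    best = min(scores, key=scores.get)   # greedy choice of variable to flip
    assign[best] = not assign[best]
    print(step, 'flip x_%d' % best, scores)  # always flips x_1: a cycle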
I have to write a program that tests whether two algebraic expressions are equivalent. It should follow MDAS precedence and parenthesis grouping. To handle precedence, I'm thinking I should implement an infix-to-postfix notation converter for these expressions. But that alone does not let me conclude their equivalence.
The program should look like this:
User Input: a*(a+b) = a*a + a*b
Output: Equivalent
For this problem I'm not allowed to use Computer Algebraic Systems or any external libraries. Please don't post the actual code if you have one, I just need an idea to work this problem out.
If you are not allowed to evaluate the expressions, you will have to parse them out into expression trees.
After that, I would get rid of all parenthesis by multiplying/dividing all members so a(b - c) becomes a*b - a*c.
Then convert all expressions back to strings, making sure all members are alphabetically sorted (a*b, not b*a), remove all spaces, and compare the strings.
Here's an idea:
You need to implement building an expression tree first, because it's a very natural representation of an expression.
Then you may need to simplify it by opening brackets and so on, using the associative and distributive algebraic properties.
Then you'll have to compare the trees. This is not obvious, because you need to take care of branch permutations in commutative operations: e.g., you can sort the branches and then compare them for equality (see the sketch below). You may also need to keep in mind possible renaming of parameters, i.e. whether a + b should count as equal to x + y.
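You asked for ideas rather than actual code, so here is only a toy sketch of the branch-sorting step in Python, with invented node types; it is nowhere near a full solution.

def canonical(node):
    # Render a tree as a string, sorting the children of commutative
    # operators so that a*b and b*a come out identical.
    if isinstance(node, str):              # a variable such as 'a'
        return node
    op, *children = node                   # e.g. ('+', lhs, rhs)
    parts = sorted(canonical(c) for c in children)
    return '(' + op.join(parts) + ')'

# a*(a+b) vs a*a + a*b, assuming brackets were already opened in step 2:
lhs = ('+', ('*', 'a', 'a'), ('*', 'a', 'b'))
rhs = ('+', ('*', 'b', 'a'), ('*', 'a', 'a'))
print(canonical(lhs) == canonical(rhs))    # True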
I'm writing an input file for OTTER that is very simple:
set(auto).
formula_list(usable).
all x y ([Nipah(x) & Encephalitis(y)] -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
end_of_list.
I get this output for the search:
given clause #1: (wt=2) 2 [] Nipah($c2).
given clause #2: (wt=2) 2 [] Encephalitis($c1).
search stopped because sos empty
Why won't OTTER infer Causes($c2,$c1)?
EDIT:
I removed the square brackets from [Nipah(x) & Encephalitis(y)] and it worked. Why does this matter?
I'd answer with a question: Why did you use square brackets in the first place?
Look into the Otter manual, Section 4.3, List Notation. Square brackets are used for lists; it's syntactic sugar that is expanded into special terms. In your case, it expanded to something like
all x y ($cons(Nipah(x) & Encephalitis(y), $nil) -> Causes(x,y)).
Why won't OTTER infer Causes($c2,$c1)?
Note that the resolution calculus is not complete in the sense that every formula that logically follows from a given theory can be inferred by the calculus. That kind of completeness would be highly undesirable anyway, since the prover would drown in consequences! Instead, resolution is only refutationally complete, meaning that if a given theory is contradictory, then resolution will find a proof of the empty clause. So even if a clause C is a logical consequence of a set of clauses T, it doesn't mean that the resolution calculus can derive C from T. In your case, the fact that Causes($c2,$c1) follows from the input doesn't mean Otter has to derive it.
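If you do want Otter to derive Causes, the usual trick with a refutation prover is to add the negated goal to the input, so that the search for the empty clause forces that inference. Untested, but something along these lines should work:

set(auto).
formula_list(usable).
all x y ((Nipah(x) & Encephalitis(y)) -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
-(exists x y (Causes(x,y))).  % negated goal, to be refuted
end_of_list.

Otter should then derive Causes($c2,$c1) on the way to the empty clause.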