What is the exact difference between Well-formed formula and a proposition in propositional logic?
There's really not much given about WFFs in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition". Does that mean they are exactly the same thing?
Proposition: A statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: A precise logical statement which is true or false; there should be an official rigorous definition in your textbook. There are 4 rules a wff must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition : All men are mortal
WFF: Let P be the set of people, let M(x) denote "x is a man", and let S(x) denote "x is mortal". Then:
for all x in P, M(x) -> S(x)
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: A statement which is either true or false, but not both.
Propositional Form (necessary to understand Well Formed Formula): An assertion which contains at least one propositional variable.
Well Formed Formula: A propositional form satisfying the following rules; any wff (Well Formed Formula) can be derived using them:
If P is a propositional variable, then it is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), (P is equivalent to Q) are all wffs.
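For example, starting from propositional variables P and Q: both P and Q are wffs by the first rule; ~P is then a wff by the second rule; and (~P and Q) is a wff by the third rule. A string like "P~Q(" matches none of the rules, so it is not a wff.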
During my exploration of different ways to write down lists, I am intrigued by the following list [[a,b]|c] which appears in the book 'Prolog and Natural Language Analysis' by Pereira and Shieber (page 42 of the digital edition).
At first I thought that such a notation was syntactically incorrect, as it would have had to say [[a,b]|[c]], but after using write_canonical/1 Prolog returned '.'('.'(a,'.'(b,[])),c).
As far as I can see, this corresponds to the following tree structure (although it seems odd to me that the structure would simply end with c, without the empty list at the end):
I cannot seem to find the corresponding notation using commas and brackets though. I thought it would correspond to [[a,b],c] (but this obviously returns a different result with write_canonical/1).
Is there no corresponding notation for [[a,b]|c] or am I looking at it the wrong way?
As others have already indicated, the term [[a,b]|c] is not a list.
You can test this yourself, using the | syntax to write it down:
?- is_list([[a,b]|c]).
false.
You can see from write_canonical/1 that this term is identical to what you have drawn:
| ?- write_canonical([[a,b]|c]).
'.'('.'(a,'.'(b,[])),c)
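For comparison, the proper list [[a,b],c] (equivalently, [[a,b]|[c]]) does end in the empty list, as the same system shows:
| ?- write_canonical([[a,b],c]).
'.'('.'(a,'.'(b,[])),'.'(c,[]))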
In addition to what others have said, I am posting an additional answer because I want to explain how you can go about finding the reason for unexpected failures. When starting with Prolog, you will often ask yourself "Why does this query fail?"
One way to find explanations for such issues is to generalize the query, by using logical variables instead of concrete terms.
For example, in the above case, we could write:
?- is_list([[A,b]|c]).
false.
In this case, I have used the logical variable A instead of the atom a, thus significantly generalizing the query. Since the generalized query still fails, some constraint in the remaining part must be responsible for the unexpected failure. We therefore generalize the query further to narrow down the cause. For example:
?- is_list([[A,B]|c]).
false.
Or even further:
?- is_list([[A,B|_]|c]).
false.
And even further:
?- is_list([_|c]).
false.
So here we have it: No term that has the general form '.'(_, c) is a list!
As you rightly observe, this is because such a term is not of the form [_|Ls] where Ls is a list.
NOTE: The declarative debugging approach I apply above works for the monotonic subset of Prolog. Actually, is_list/1 does not belong to that subset, because we have:
?- is_list(Ls).
false.
with the declarative reading "There is no list." So, it turns out, the approach worked only by coincidence in the case above. However, we could define the intended declarative meaning of is_list/1 in a pure and monotonic way, by simply applying the inductive definition of lists:
list([]).
list([_|Ls]) :- list(Ls).
This definition only uses pure and monotonic building blocks and hence is monotonic. For example, the most general query now yields actual lists instead of failing (incorrectly):
?- list(Ls).
Ls = [] ;
Ls = [_6656] ;
Ls = [_6656, _6662] ;
Ls = [_6656, _6662, _6668] .
From pure relations, we expect that queries work in all directions!
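In particular, the pure definition lets us ask directly about the term from the question, and even enumerate ways of completing its partial tail:
?- list([[a,b]|c]).
false.
?- list([[a,b]|Cs]).
Cs = [] ;
Cs = [_A] ;
Cs = [_A, _B] .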
I cannot seem to find the corresponding notation using commas and brackets though.
There is no corresponding notation, since this is technically speaking not a real list.
Prolog has syntactic sugar for lists. A list in Prolog is, like a Lisp list, actually a linked list: every element is either an empty list [], or a node .(H,T) with H the head and T the tail. Lists are not "special" in Prolog in the sense that the interpreter handles them differently than any other term. Of course a lot of Prolog libraries do list processing, and use the convention defined above.
To make complex lists more convenient to write, syntactic sugar was invented: you can write a node .(H,T) as [H|T] as well. That means that your [[a,b]|c] is an outer node .(H,c) whose head H is itself a proper list with two nodes and an empty list: H = .(a,.(b,[])).
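For example, on a system where the list constructor is '.'/2 (as in the write_canonical/1 output above), you can check the equivalence directly:
?- [a,b] = '.'(a,'.'(b,[])).
true.
?- [a|b] = '.'(a,b).
true.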
Technically speaking I would not consider this a "real" list, since the tail of each node should be either another node ./2 or the empty list.
You can however use this with variables, like [[a,b]|C], in order to unify the tail C further. So here we have some sort of list with [a,b] as its first element (so a list containing a list) and with an open tail C. If we later ground C, for instance C = [], then the list becomes [[a,b]].
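For example, a direct illustration of closing the open tail:
?- X = [[a,b]|C], C = [], is_list(X).
X = [[a,b]],
C = [].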
If I have a given boolean expression (AND and OR operations) with many boolean variables, and I want to evaluate this expression to true, how can I find the set of all possible boolean values that achieve a true expression?
For example, I have 4 boolean variables a, b, c, d and an expression:
(a ^ b) v (c ^ d)
The slowest way I've tried to do is:
I build an expression tree to get all variables in the expression; I get the set {a,b,c,d}.
I find all subsets of the set: {a}, {b}, {c}, {d}, {a,b}, {a,c}, {a,d}, {b,c}, {b,d}, {c,d}, {a,b,c}, {a,b,d}, {a,c,d}, {b,c,d}, {a,b,c,d}
For each subset, I set each of its variables to true (and the rest to false), then evaluate the expression. If the expression returns true, I save the subset with the values.
EDIT: I eliminate the NOT operator to make the problem simpler.
I think I see a way to compute this without having to try all the permutations, and my high-level outline, described below, is not really very complicated. I'll outline the basic approach, and you will have two follow-up tasks to do on your own:
Parsing a logical expression, like "A && (B || C)", into a classical parse tree that represents the expression, with each node in the tree being either a variable or a boolean operation ("&&", "||", or "!" (NOT)) whose children are its operands. This is a classical parse tree; plenty of examples of how to do this can be found on Google.
Translating my outline into actual C++ code. That's going to be up to you also, but I think that the implementation should be rather obvious once you wrap your brain around the overall approach. (A possible node representation is sketched right after this list.)
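For concreteness only, here is one hypothetical way such a parse-tree node could be declared; the answer deliberately leaves the real representation up to you, so treat the type and field names below as assumptions:
#include <memory>
#include <string>

// Hypothetical parse-tree node; the name expression_t matches the
// one used later in phase1()'s declaration.
struct expression_t {
    enum kind_t { variable, op_not, op_and, op_or } kind;
    std::string name;                     // set when kind == variable
    std::shared_ptr<expression_t> child1; // operand ("!" uses only child1)
    std::shared_ptr<expression_t> child2; // second operand for "&&" and "||"
};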
To solve this problem, I'll use a two-phase approach.
I will use the general approach of proof by induction in order to come up with a tentative list of all potential sets of values of all the variables for which the boolean expression will evaluate to true.
In the second phase I'll eliminate from the list of all potential sets those sets that are logically impossible. This might sound confusing, so I'll explain this second phase first.
Let's use the following datatypes. First, I'll use this datatype of possible values for which the boolean expression will evaluate to either true or false:
typedef std::set<std::pair<std::string, bool>> values_t;
Here, each std::pair<std::string, bool> represents a variable: the std::string is its name, and the bool is its value. For example:
{"a", true}
means that the variable "a" has the value true. It follows that this std::set represents a set of variables and their corresponding values.
All of these potential solutions are going to be collected in an:
typedef std::list<values_t> all_values_t;
So this is how we will represent a list of all combinations of values of all variables that produce the result of either true or false. You can use a std::vector instead of a std::list; it doesn't really matter.
Now notice that it is possible for a values_t to have both:
{"a", true}
and
{"a", false}
in the set. This would mean that, in order for the expression to evaluate to true or false, "a" must be simultaneously true and false.
But this is, obviously, logically impossible. So, in phase 2 of this solution you will simply need to go through all the individual values_t in all_values_t, and remove the "impossible" values_t that contain both true and false for some variable. The way to do this should seem rather obvious (a small sketch follows below), so phase 2 should be straightforward once phase 1 is complete.
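A minimal sketch of that phase-2 check, assuming the values_t typedef above (the helper's name is mine, not part of the outline):
// Returns true if v binds some variable to both true and false,
// i.e. v is one of the logically impossible value sets.
// (Assumes the values_t typedef above is in scope.)
bool is_impossible(const values_t &v)
{
    for (const auto &p : v)
        if (v.count({p.first, !p.second}))
            return true;
    return false;
}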
For phase 1 our goal is to come up with a function that's roughly declared like this:
all_values_t phase1(expression_t expr, bool goal);
expr is a parsed representation of your boolean expression, as a parse tree (as I mentioned in the beginning, producing this is up to you). goal is what you want the parsed expression to evaluate to: phase1() returns all possible all_values_t for which expr evaluates to true or to false, as indicated by goal. You will, obviously, call phase1() passing true for goal for your answer, because that's what you want to figure out. But phase1() will call itself recursively, with either a true or a false goal, to do its magic.
Before proceeding, it is important now to read and understand the various resources that describe how a proof by induction works. Don't proceed any further until you understand this general concept fully.
Ok, now you understand the concept. If you do, then you must now agree with me that phase1() is already done. It works! Proof by induction starts by assuming that phase1() does what it is supposed to do. phase1() will make recursive calls to itself, and since phase1() returns the right result, phase1() can simply rely on itself to figure everything out. See how easy this is?
phase1() really has one "simple" task at hand:
Check what the top level node of the parse tree is. It will be either a variable node or an expression node (see above).
Return the appropriate all_values_t, based on that.
That's it. We'll take both possibilities, one at a time.
The top level node is a variable.
So, if your expression is just a variable, and you want the expression to return goal, then:
values_t v{ {name, goal} };
There's only one possible way for the expression to evaluate to goal, an obvious no-brainer: the variable must have goal as its value.
And there's only one possible solution. No other alternatives:
all_values_t res;
res.push_back(v);
return res;
Now, the other possibility is that the top-level node in the expression is one of the boolean operations: and, or, or not.
Again, we'll divide and conquer this, and tackle each one, one at a time.
Let's say that it's "not". What should we do then? That should be easy:
return phase1(child1, !goal);
Just call phase1() recursively, passing the "not" expression's child node, with goal logically inverted. So, if your goal was true, use phase1() to come back with the values for which the "not" sub-expression's operand is false, and vice versa. Remember, proof by induction assumes that phase1() works as advertised, so you can rely on it to get the correct answer for the sub-expression.
It should now start becoming obvious how the rest of phase1() works. There are only two possibilities left: the "and" and the "or" logical operation.
For the "and" operation, we'll consider, separately, whether the "goal" of the "and" operation should be true or false.
If goal is true, you must use phase1() to come up with all_values_t for both subexpressions being true:
all_values_t left_result=phase1(child1, true);
all_values_t right_result=phase1(child2, true);
Then just combine the two results together. Now, recall that all_values_t is a list of all possible solutions: each values_t in it represents one possible solution (and the list/vector itself can be empty). Both the left and the right sub-expressions must be true, and any potential solution with the left subexpression being true can (and must) go together with any potential solution with the right subexpression being true.
So the all_values_t that needs to be returned, in this case, is obtained by taking a cartesian product of left_result and right_result. That is: take the first values_t std::set in left_result and merge into it the first values_t in right_result; then merge the first left_result value with the second right_result value, and so on; then the second left_result value with the first right_result value, and so on. Each one of these combined sets gets push_back()ed into the all_values_t that gets returned from this call to phase1(). (A short sketch of this combination step follows.)
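Here is a minimal sketch of that cartesian-product step, using the typedefs above (the function name combine is mine):
// Merge every values_t from "left" with every values_t from "right";
// the set-union of two partial solutions is their logical "and".
// (Assumes the values_t/all_values_t typedefs above are in scope.)
all_values_t combine(const all_values_t &left, const all_values_t &right)
{
    all_values_t result;
    for (const auto &l : left)
        for (const auto &r : right)
        {
            values_t merged = l;
            merged.insert(r.begin(), r.end());
            result.push_back(merged);
        }
    return result;
}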
If your goal is to have the "and" expression return false instead, you simply have to do a variation of this three times: first combining phase1(child1, false) with phase1(child2, false); then phase1(child1, true) with phase1(child2, false); and finally phase1(child1, false) with phase1(child2, true). Either child1 or child2, or both, must evaluate to false.
So that takes care of the "and" operation.
The last possibility for phase1() to deal with is the logical "or" operation. You should be able to figure out how to do it by yourself now, but I'll just briefly summarize it:
If goal is false, you must combine phase1(child1, false) with phase1(child2, false) as a cartesian product. If goal is true, you will make three sets of recursive calls, for the three other possibilities, and combine everything together.
You're done. There's nothing else for phase1() to do, and we completed our proof by induction.
Well, I lied a little bit. You'll also need to do a small "Phase 3". Recall that in "Phase 2" we eliminated all impossible solutions. Well, it is possible that, as a result of all this, the same values_t will occur more than once in the final all_values_t, so you'll just have to dedupe it (see below).
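Since all_values_t is a std::list, the dedupe can be as small as this (std::list::sort and std::list::unique work here because std::set and std::pair provide the needed ordering and equality):
// Remove duplicate value sets from the final result.
// (Assumes the all_values_t typedef above is in scope.)
void dedupe(all_values_t &results)
{
    results.sort();    // bring duplicate value sets next to each other
    results.unique();  // drop adjacent duplicates
}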
P.S. It's also possible to avoid a discrete phase 2 by doing it on the fly, as part of phase 1. This variation is going to be your homework assignment, too.
I am currently studying logic programming, and learn Prolog for that case.
We can have a Knowledge Base which can lead us to some results, but Prolog will get into an infinite loop, due to the way it expands the predicates.
Let's assume we have the following logic program:
p(X):- p(X).
p(X):- q(X).
q(X).
The query p(john) will get into an infinite loop because Prolog by default expands the first clause whose head unifies. However, we could conclude that p(john) is true if we started by expanding the second clause.
So why doesn't Prolog expand all the matching clauses (implemented like a threads/processes model with time slices), in order to conclude something whenever the KB can conclude something?
In our case, for example, two processes could be created: one expanding p(X) and the other one q(X). So when we later expand q(X), our program will conclude q(john).
Because Prolog's search algorithm for matching clauses is depth-first. So, in your example, once it matches the first rule, it will match the first rule again, and will never explore the others.
This would not happen if the algorithm were breadth-first or iterative deepening.
Usually it is up to you to reorder the KB so that these situations never happen.
However, it is possible to encode breadth-first/iterative-deepening search in Prolog using a meta-interpreter that changes the search order. This is an extremely powerful technique that is not well known outside of the Prolog world. 'The Art of Prolog' describes this technique in detail.
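For instance, here is a minimal sketch of a depth-limited meta-interpreter plus iterative deepening (the predicate names are illustrative; between/3 with the inf bound is SWI-Prolog, and p/1, q/1 are declared dynamic so that clause/2 is portable):
:- dynamic p/1, q/1.
p(X) :- p(X).
p(X) :- q(X).
q(_).

% prove Goal using at most D0 resolution steps
prove(true, D, D) :- !.
prove((A,B), D0, D) :- !,
    prove(A, D0, D1),
    prove(B, D1, D).
prove(Head, D0, D) :-
    D0 > 0,
    D1 is D0 - 1,
    clause(Head, Body),
    prove(Body, D1, D).

% iterative deepening: retry with ever larger depth limits
id_prove(Goal) :- between(1, inf, D), prove(Goal, D, _).
Now ?- id_prove(p(john)). succeeds (at depth 2, through the second clause) instead of looping, because the depth limit cuts off the left recursion. On backtracking it will rediscover the same proof at larger depths; refinements exist, but this shows the idea.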
You can find some examples of meta-interpreters here, here and here.
I have the following rules:
% Signature: natural_number(N)/1
% Purpose: N is a natural number.
natural_number(0).
natural_number(s(X)) :-
natural_number(X).
ackermann(0, N, s(N)). % rule 1
ackermann(s(M),0,Result):-
ackermann(M,s(0),Result). % rule 2
ackermann(s(M),s(N),Result):-
ackermann(M,Result1,Result),
ackermann(s(M),N,Result1). % rule 3
The query is: ackermann(M,N,s(s(0))).
Now, as I understood it, the third solution attempt leads to an infinite search (a failure branch). I checked it, and I got a finite search (a failure branch).
I'll explain: for the first solution, we get the substitution M=0, N=s(0) (rule 1 - success!). For the second, we get the substitution M=s(0), N=0 (rule 2 - success!). But what now? I tried to match M=s(s(0)), N=0, but that gives a finite search - a failure branch. Why doesn't the interpreter answer "false"?
Thank you.
It was a bit hard to understand exactly what Tom is asking here. Perhaps there's an expectation that the predicate natural_number/1 somehow influences the execution of ackermann/3. It will not. The latter predicate is purely recursive and has no subgoals that depend on natural_number/1.
When the three clauses shown are defined for ackermann/3, the goal:
?- ackermann(M,N,s(s(0))).
causes SWI-Prolog to find (with backtracking) the two solutions that Tom reports, and then to go into infinite recursion (resulting in an "Out of Stack" error). We can be sure that this infinite recursion involves the third clause given for ackermann/3 (rule 3 per Tom's comments in code) because in its absence we only get the two acknowledged solutions and then explicit failure:
M = 0,
N = s(0) ;
M = s(0),
N = 0 ;
false.
It seems to me Tom asks for an explanation of why changing the submitted query to one that sets M = s(s(0)) and N = 0, producing a finite search (that finds one solution and then fails on backtracking), is consistent with the infinite recursion produced by the previous query. My suspicion here is that there's a misunderstanding of what the Prolog engine attempts in backtracking (for the original query), so I'm going to drill down on that. Hopefully it clears up the matter for Tom, but let's see if it does. Admittedly my treatment is wordy, but the Prolog execution mechanism (unification and resolution of subgoals) is worthy of study.
[Added: The predicate has an obvious connection to the famous Ackermann function that is total computable but not primitive recursive. This function is known for growing rapidly, so we need to be careful in claiming infinite recursion because a very large but finite recursion is also possible. However the third clause puts its two recursive calls in an opposite order to what I would have done, and this reversal seems to play a critical role in the infinite recursion we find in stepping through the code below.]
When the top-level goal ackermann(M,N,s(s(0))) is submitted, SWI-Prolog tries the clauses (facts or rules) defined for ackermann/3 until it finds one whose "head" unifies with the given query. The Prolog engine does not have far to look as the first clause, this fact:
ackermann(0, N, s(N)).
will unify, binding M = 0 and N = s(0) as has already been described as the first success.
If requested to backtrack, e.g. by user typing semi-colon, the Prolog engine checks to see if there is an alternative way to satisfy this first clause. There is not. Then the Prolog engine proceeds to attempt the following clauses for ackermann/3 in their given order.
Again the search does not have to go far because the second clause's head also unifies with the query. In this case we have a rule:
ackermann(s(M),0,Result) :- ackermann(M,s(0),Result).
Unifying the query and the head of this rule yields the bindings M = s(0), N = 0 in terms of the variables used in the query. In terms of the variables used in the rule as stated above, M = 0 and Result = s(s(0)). Note that unification matches terms by their appearance as calling arguments and does not consider variable names reused across the query/rule boundary as signifying identity.
Because this clause is a rule (having body as well as head), unification is just the first step in trying to succeed with it. The Prolog engine now attempts the one subgoal that appears in the body of this rule:
ackermann(0,s(0),s(s(0))).
Note that this subgoal comes from replacing the "local" variables used in the rule by the values of unification, M = 0 and Result = s(s(0)). The Prolog engine is now calling the predicate ackermann/3 recursively, to see if this subgoal can be satisfied.
It can, as the first clause (fact) for ackermann/3 unifies in the obvious way (indeed in essentially the same way as before as regards the variables used in the clause). And thus (upon this recursive call succeeding), we get the second solution succeeding in the outer call (the top-level query).
If the user asks the Prolog engine to backtrack once more, it again checks to see if the current clause (the second one for ackermann/3) can be satisfied in an alternative way. It cannot, and so the search continues by passing to the third (and last) clause for predicate ackermann/3:
ackermann(s(M),s(N),Result) :-
ackermann(M,Result1,Result),
ackermann(s(M),N,Result1).
I'm about to explain that this attempt does produce infinite recursion. When we unify the top-level query with the head of this clause, we get bindings for the arguments that can perhaps be most clearly understood by aligning the terms in the query with those in the head:
query head
M s(M)
N s(N)
s(s(0)) Result
Bearing in mind that variables having the same name in the query as in the rule does not constrain unification, this triple of terms can be unified. Query M will be head s(M), that is, a compound term involving the functor s applied to some as-yet unknown variable M appearing in the head. The same goes for query N. The only variable bound to a "ground" term so far is Result, appearing in the head (and body) of the rule, which has been bound to s(s(0)) from the query.
Now the third clause is a rule, so the Prolog engine must continue by attempting to satisfy the subgoals appearing in the body of that rule. If you substitute values from the head unification into the body, the first subgoal to satisfy is:
ackermann(M,Result1,s(s(0))).
Let me point out that I've used here the "local" variables of the clause, except that I've replaced Result by the value to which it was bound in unification. Now notice that apart from replacing N of the original top-level query by variable name Result1, we are just asking the same thing as the original query in this subgoal. Certainly it's a big clue we might be about to enter an infinite recursion.
However a bit more discussion is needed to see why we don't get any further solutions reported! This is because the first success of that first subgoal is (just as found earlier) going to require M = 0 and Result1 = s(0), and the Prolog engine must then proceed to attempt the second subgoal of the clause:
ackermann(s(0),N,s(0)).
Unfortunately this new subgoal does not unify with the first clause (fact) for ackermann/3. It does unify with the head of the second clause, as follows:
subgoal head
s(0) s(M)
N 0
s(0) Result
but this leads to a sub-subgoal (from the body of the second clause):
ackermann(0,s(0),s(0)).
This does not unify with the head of either the first or second clause. It also does not unify with the head of the third clause (which requires the first argument to have the form s(_)). So we reached a point of failure in the search tree. The Prolog engine now backtracks to see if the first subgoal of the third clause's body can be satisfied in an alternative way. As we know, it can be (since this subgoal is basically the same as the original query).
Now M = s(0) and Result1 = 0 of that second solution leads to this for the second subgoal of the third clause's body:
ackermann(s(s(0)),N,0).
While this does not unify with the first clause (fact) of the predicate, it does unify with the head of the second clause:
subgoal head
s(s(0)) s(M)
N 0
0 Result
But in order to succeed the Prolog engine must satisfy the body of the second clause as well, which is now:
ackermann(s(s(0)),s(0),0).
We can easily see this cannot unify with the head of either the first or second clause for ackermann/3. It can be unified with the head of the third clause:
sub-subgoal head(3rd clause)
s(s(0)) s(M)
s(0) s(N)
0 Result
As should be familiar now, the Prolog engine checks to see if the first subgoal of the third clause's body can be satisfied, which amounts to this sub-sub-subgoal:
ackermann(s(0),Result1,0).
This fails to unify with the first clause (fact), but does unify with the head of the second clause, binding M = 0, Result1 = 0 and Result = 0, producing (by familiar logic) the sub-sub-sub-subgoal:
ackermann(0,s(0),0).
Since this cannot be unified with any of the three clauses' heads, this fails. At this point the Prolog engine backtracks to trying to satisfy the above sub-sub-subgoal using the third clause. Unification goes like this:
sub-sub-subgoal head(3rd clause)
s(0) s(M)
Result1 s(N)
0 Result
and the Prolog engine's task is then to satisfy this sub-sub-sub-subgoal derived from the first part of the third clause's body:
ackermann(0,Result1,0).
But this will not unify with the head of any of the three clauses. The search for a solution to the sub-sub-subgoal above therefore terminates in failure. The Prolog engine backtracks all the way to where it first tried to satisfy the second subgoal of the third clause as invoked by the original top-level query, as this has now failed. In other words, it has tried to satisfy it with the first two solutions of the first subgoal of the third clause, which, you will recall, was in essence the same as the original query except for a change of variable names:
ackermann(M,Result1,s(s(0))).
What we've seen above is that the solutions for this subgoal (which duplicates the original query) that come from the first and second clauses of ackermann/3 do not permit the second subgoal of the third clause's body to succeed. Therefore the Prolog engine tries to find solutions that satisfy the third clause. But clearly this is now going into infinite recursion, as that third clause will unify in its head, but the body of the third clause will repeat exactly the same search we just chased through. So the Prolog engine now winds up going into the body of the third clause endlessly.
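One quick way to see this for yourself: pose the (renamed) first subgoal directly as a query. Apart from the variable names it is the original query, so it shows the same behavior, namely two solutions and then the same infinite recursion, ending in an out-of-stack error:
?- ackermann(M,Result1,s(s(0))).
M = 0,
Result1 = s(0) ;
M = s(0),
Result1 = 0 ;
<out of stack>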
Let me rephrase your question: The query ackermann(M,N,s(s(0))). finds two solutions and then loops. Ideally, it would terminate after these two solutions, as there is no other N and M whose value is s(s(0)).
So why does the query not terminate universally? Understanding this precisely can be quite complex, and the best advice is not to attempt to understand the exact execution mechanism. There is a very simple reason: Prolog's execution mechanism turns out to be so complex that you will easily misunderstand it anyway if you attempt to understand it by stepping through the code.
Instead, you can try the following: insert the goal false at any place in your program. If the resulting program does not terminate, then the original program does not terminate either.
In your case:
ackermann(0, N, s(N)) :- false.
ackermann(s(M),0,Result):- false,
ackermann(M,s(0),Result).
ackermann(s(M),s(N),Result):-
ackermann(M,Result1,Result), false,
ackermann(s(M),N,Result1).
We can now remove the first and second clauses. And in the third clause, we can remove the goal after false. So if the following fragment does not terminate, the original program will not terminate either.
ackermann(s(M),s(N),Result) :- ackermann(M,Result1,Result), false.
This fragment terminates only if the first argument is known. But in our case it is free...
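For example, loading just this fragment on its own, a query with a free first argument loops, whatever the second argument looks like:
?- ackermann(M, s(N), R).
<loops>
The head's s(M) unifies with a fresh variable again and again, so the recursion can descend forever.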
That is: by considering a small fraction of the program (called a failure-slice), we were already able to deduce a property of the entire program. For details, see this paper and others on the site.
Unfortunately, that kind of reasoning only works for cases of non-termination. For termination, things are more complex. The best approach is to try a tool like cTI, which infers termination conditions and tries to prove their optimality. I entered your program already, so try to modify it and see the effects!
While we are at it: this small fragment also tells us that the second argument does not influence termination[1]. That means that queries like ackermann(s(s(0)),s(s(0)),R). will not terminate either. Exchange the goals to see the difference...
[1] To be precise, a term that does not unify with s(_) will influence termination. Think of 0. But any s(0), s(s(0)), ... will not influence termination.