Breadth-first Resolution Algorithm - logic

I want to implement a resolution algorithm which tries to derive the empty clause as it resolves the candidate clauses.
I want the algorithm to resolve the candidate parent clauses in breadth-first order. However, I am confused on one point:
Let S be the set of all the clauses in the knowledge base plus the negation of the goal clause.
When we resolve candidate clauses in S against clauses that are also in S, we get S'.
As the second step of the algorithm, should we try to resolve S with S', or S' with S' itself?
And how should it proceed from there?
For example:
Suppose the knowledge base plus the negation of the goal consists of clauses such as
p(a,b) ^ q(z),~p(z,b) ^ ~q(y) (let's call this set S; clauses are separated by ^, literals within a clause by commas).
When we run one round of resolution on set S, we get clauses like:
q(a) ^ ~p(z,b) (let's call this set S')
Now, if we employ a BFS strategy, should we first find the resolvents with one parent in S and the other in S', or first check for the resolvents whose parents are both from S'?
In some examples, when you first check S' against S' for resolvents, you reach the solution. However, when you proceed instead with the pairs (S, S'), (S, (S, S')), and so on, you get another derivation of the empty clause. So which order corresponds to BFS?
Thanks in advance

Here it is stated that:
All of the first-level resolvents are computed first, then the second-level resolvents,
and so on. A first-level resolvent is one between two clauses in the base set; an i-th
level resolvent is one whose deepest parent is an (i-1)-th level resolvent.
and here:
Level 0 clauses are the original axioms and the negation of the goal
Level k clauses are the resolvents computed from two clauses, one of which must be from level k-1 and the other from any earlier level.
What I take from these statements, and my comments, is the following:
Level 0 consists of the original clauses and the negation of the goal. Let this set be X.
Level 1 consists of the resolvents of (X, X), which are the only possible parent pairs. Let this set be Y.
Level 2 consists of the resolvents of (Y, X) and (Y, Y).
and so on.
My description follows the second statement. It actually gives the same results as the first, except that under the first you would resolve the same pairs of sets again at every level, which is unnecessary. The breadth-first strategy is already very inefficient, and a careless implementation makes it even worse.
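To make the level scheme concrete, here is a minimal propositional sketch in Python (the clause representation and function names are mine, purely for illustration; first-order clauses like those in your example would additionally need unification):

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # every clause obtainable by resolving c1 with c2 on one complementary pair
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def bfs_resolution(base):
    # levels[0] is X (axioms + negated goal); level k pairs the newest level
    # with every level up to and including itself, per the second statement
    levels = [set(map(frozenset, base))]
    seen = set(levels[0])
    while levels[-1]:
        new_level = set()
        for c1 in levels[-1]:
            for c2 in seen:
                for r in resolvents(c1, c2):
                    if not r:
                        return True      # derived the empty clause
                    if r not in seen:
                        new_level.add(r)
        seen |= new_level
        levels.append(new_level)
    return False                         # saturated without the empty clause

For example, bfs_resolution([{'p'}, {'~p', 'q'}, {'~q'}]) derives the empty clause at level 2, and because each new level contains only clauses never seen before, no pair of parents is resolved twice.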
I hope this clarifies your question.

Related

How can I make Minimax/Alpha-Beta pruning prioritize shorter paths?

I am having problems with my implementation of the minimax algorithm in a chess engine. How can I make the algorithm favor the shortest path to a win?
Take this configuration of the board as an example (board diagram omitted).
The best move here would be to move the queen to the last rank, but when the search depth is higher the algorithm does not distinguish a faster checkmate from a slower one.
A few fairly straightforward changes:
Change the return type of the search function, so that instead of returning a pair like (move, q) where q is a measure of how good the move is, it returns a triple like (move, q, m) where m is the length of the sequence following that move.
Change the return statements to include the correct sequence length; either 0 for an immediate game end, or (m + 1) where m is the length of the followup sequence found by a recursive call.
Use the sequence length as a tie-breaker to compare moves with equal q; lower m is preferred.
If you are short-circuiting by returning a winning move as soon as one is found, change the condition for this so that only an immediately-winning move short-circuits.
Note that this will generally make the algorithm less efficient, since you have to explore more branches after finding a winning move in case there is a quicker winning move in the unexplored branches.
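A minimal negamax-style sketch of these changes in Python (the position methods is_game_over, evaluate, legal_moves and apply are assumed here for illustration; evaluate is taken to score from the side to move's point of view):

import math

def search(pos, depth):
    # returns (best_move, q, m): the score q of the best move and the
    # length m of the line that realizes that score
    if depth == 0 or pos.is_game_over():
        return None, pos.evaluate(), 0           # m = 0: the sequence ends here
    best_move, best_q, best_m = None, -math.inf, 0
    for move in pos.legal_moves():
        _, q, m = search(pos.apply(move), depth - 1)
        q, m = -q, m + 1                         # negamax sign flip; one ply longer
        # higher q wins; on equal q the shorter line (smaller m) wins
        if q > best_q or (q == best_q and m < best_m):
            best_move, best_q, best_m = move, q, m
    return best_move, best_q, best_m

If you also want to drag out forced losses as long as possible, reverse the tie-break (prefer larger m) when q is a losing score; the sketch above only implements the preference for shorter winning lines.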

Prolog: Kth Largest Element of a list [duplicate]

I am writing a logic program kth_largest(Xs, K) that implements the linear-time algorithm for finding the kth largest element K of a list Xs. The algorithm has the following steps:
Break the list into groups of five elements.
Efficiently find the median of each of the groups, which can be done with a fixed number of comparisons.
Recursively find the median of the medians.
Partition the original list with respect to the median of the medians.
Recursively find the kth largest element in the appropriate smaller list.
How do I go about it? I can select an element from a list, but I don't know how to get the largest using the above procedure. Here is my definition for selecting an element from a list:
% select(X, HasXs, OneLessXs)
% The list OneLessXs is the result of removing
% one occurrence of X from the list HasXs.
select(X,[X|Xs],Xs).
select(X,[Y|Ys],[Y|Zs]) :- select(X,Ys,Zs).
I'm going to jump in since no one has attempted an Answer, and hopefully shed some light on the procedure to be programmed.
I've found the Wikipedia article on Selection algorithm to be quite helpful in understanding the bigger picture of "fast" (worst-case linear time) algorithms of this type.
But what you asked at the end of your Question is a somewhat simpler matter. You wrote "How do I go about it? I can select an element from a list but I don't know how to get the largest using the above procedure." (emphasis added by me)
Now there seems to be a bit of confusion about whether you want to implement "the above procedure", which is a general recipe for finding a kth largest element by successive searches for medians, or whether you ask how to use that recipe to find simply the largest element (a special case). Note that the recipe doesn't specifically use a step of finding the largest element on its way to locating the median or the kth largest element.
But you give the code to find an element of a list and the rest of that list after removing that element, a predicate that is nondeterministic and allows backtracking through all members of the list.
The task of finding the largest element is deterministic (at least if all the elements are distinct), and it is an easier task than the general selection of the kth largest element (a task associated with order statistics among other things).
Let's give some simple, hopefully obviously correct, code to find the largest element, and then talk about a more optimized way of doing it.
maxOfList(H,[H|T]) :- upperBound(H,T), !.   % the head bounds the tail: it is the max
maxOfList(X,[_|T]) :- maxOfList(X,T).       % otherwise the max is in the tail
upperBound(_,[]).
upperBound(X,[H|T]) :-
    X >= H,
    upperBound(X,T).
The idea should be understandable. We look at the head of the list and ask if that entry is an upper bound for the rest of the list. If so, that must be the maximum value and we're done (the cut makes it deterministic). If not, then the maximum value must occur later in the list, so we discard the head and continue recursively searching for an entry that is an upper bound of all the subsequent elements. The cut is essential here, because we must stop at the first such entry in order to know it is a maximum of the original list.
We've used an auxiliary predicate upperBound/2, which is not unusual, but the overall complexity of this implementation is worst-case quadratic in the length of the list. So there is room for improvement!
Let me pause here to be sure I'm not going totally off-track in trying to address your question. After all you may have meant to ask how to use "the above procedure" to find the kth largest element, and so what I'm describing may be overly specialized. However it may help to understand the cleverness of the general selection algorithms to understand the subtle optimization of the simple case, finding a largest element.
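In case you did mean the full recipe: here is a compact sketch of those five steps in Python rather than Prolog (the function name and details are mine), just to make the procedure concrete before we return to the simpler task:

def kth_largest(xs, k):
    # k-th largest element of xs, 1-based, worst-case linear time
    if len(xs) <= 5:
        return sorted(xs, reverse=True)[k - 1]
    # steps 1 and 2: groups of five and their medians
    groups = [sorted(xs[i:i + 5]) for i in range(0, len(xs), 5)]
    medians = [g[len(g) // 2] for g in groups]
    # step 3: the median of the medians, found recursively
    pivot = kth_largest(medians, (len(medians) + 1) // 2)
    # step 4: partition the original list around that pivot
    bigger = [x for x in xs if x > pivot]
    equal = [x for x in xs if x == pivot]
    smaller = [x for x in xs if x < pivot]
    # step 5: recurse into the part holding the k-th largest
    if k <= len(bigger):
        return kth_largest(bigger, k)
    if k <= len(bigger) + len(equal):
        return pivot
    return kth_largest(smaller, k - len(bigger) - len(equal))

For example, kth_largest([3,1,4,1,5,9,2,6], 3) yields 5.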
Added:
Intuitively we can reduce the number of comparisons needed in the worst case
by going through the list and keeping track of the largest value found "so
far". In a procedural language we can easily accomplish this by reassigning
the value of a variable, but Prolog doesn't allow us to do that directly.
Instead a Prolog way of doing this is to introduce an extra argument and
define the predicate maxOfList/2 by a call to an auxiliary predicate
with three arguments:
maxOfList(X,[H|T]) :- maxOfListAux(X,H,T).
The extra argument in maxOfListAux/3 can then be used to track the
largest value "so far" as follows:
maxOfListAux(X,X,[]).
maxOfListAux(Z,X,[H|T]) :-
    ( X >= H -> Y = X ; Y = H ),
    maxOfListAux(Z,Y,T).
Here the first argument of maxOfListAux represents the final answer as to
the largest element of the list, but we don't know that answer until we
have emptied the list. So the first clause here "finalizes" the answer
when that happens, unifying the first argument with the second argument
(the largest value "so far") just when the tail of the list has reached
the end.
The second clause for maxOfListAux leaves the first argument unbound and
"updates" the second argument according to whether the next element of
the list exceeds the previous largest value.
It isn't strictly necessary to use an auxiliary predicate in this case,
because we might have kept track of the largest value found by using the
head of the list instead of an extra argument:
maxOfList(X,[X]) :- !.
maxOfList(X,[H1,H2|T]) :-
    ( H1 >= H2 -> Y = H1 ; Y = H2 ),
    maxOfList(X,[Y|T]).

efficient evaluation of formula

Here is the problem I ran into. I have a list of evaluators, I_1, I_2, ... etc., which have dependencies among each other, something like I_1 -> I_2 (read: I_2 depends on I_1's result). There is no cyclic dependency.
Each of these shares the interface bool eval(), double value(). Say I_1->eval() updates the result of I_1, which can then be returned by I_1->value(). The boolean returned by eval() tells me whether the result has changed, and if so, all I_j that depend on I_1 need to be updated.
Now say I_1 has an updated result: how do I run as few eval()s as possible to keep all I_j up to date?
I just have a nested loop like this:
first do a tree-walk from I_1, marking it and all descendants as out-of-date
make a list of those descendants
anything_changed = true
while anything_changed
    anything_changed = false
    for each formula in the descendant list that is still out of date
        if no predecessors of that formula in the descendant list are out of date
            re-evaluate the formula and mark it as no longer out of date
            anything_changed = true
Look, it's crude but correct.
So what if it's a bit like a quadratic big-O?
If the number of formulas is not too large, and/or the cost of evaluating each one is not too small, and/or if this is not done at high frequency, performance should not be an issue.
If I could, I'd add links from a parent to its dependent children, so the update then becomes:
change_value ()
{
    evaluate new_value based on all parents
    if (value != new_value)
    {
        value = new_value
        for each child
            child->change_value ()
    }
}
Of course, you'd need to cope with the case where Child(n) is the parent of Child(m).
Actually, thinking about it, it might just work, but it won't make a minimal set of calls to change_value.
You need something like a breadth-first search from I_1, skipping the descendants of nodes whose eval() reported no change, and taking into account that you should not evaluate a node until you have evaluated all the nodes it directly depends on. One way to arrange this is to keep, on each node, a count of its unevaluated direct dependencies, decrementing the counts of all the nodes that depend on a node you have just evaluated. At each stage, if there are nodes that still need evaluating, at least one of them does not depend on an unevaluated node: otherwise you could produce an infinite chain of unevaluated nodes by walking from one node to a node it depends on, and so on, and we know there are no cycles in the dependency graph.
There is pseudo-code for breadth first search at https://en.wikipedia.org/wiki/Breadth-first_search.
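Here is a sketch of that counting scheme in Python (deps, influences and evaluate are stand-ins for your own interfaces: deps[n] lists the nodes n directly depends on, influences[n] the nodes that depend on n, and evaluate(n) recomputes n and returns True if its value changed):

from collections import deque

def update_from(root, deps, influences, evaluate):
    # 1. mark every descendant of the changed root as out of date (plain BFS)
    stale, queue = set(), deque([root])
    while queue:
        for child in influences[queue.popleft()]:
            if child not in stale:
                stale.add(child)
                queue.append(child)
    # 2. count, for each stale node, its stale direct dependencies
    pending = {n: sum(d in stale for d in deps[n]) for n in stale}
    # 3. evaluate a node only once all of its stale dependencies are done
    changed = {root}
    ready = deque(n for n, c in pending.items() if c == 0)
    while ready:
        node = ready.popleft()
        # skip eval() when no direct dependency actually changed
        if any(d in changed for d in deps[node]) and evaluate(node):
            changed.add(node)
        for child in influences[node]:
            if child in pending:
                pending[child] -= 1
                if pending[child] == 0:
                    ready.append(child)
    return changed

Because every stale node's counter reaches zero exactly once, each eval() runs at most once, and nodes whose inputs turned out not to change are skipped entirely.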
An efficient solution would be to have two relations. If I_2 depends on I_1, you would have I_1 --influences--> I_2 and I_2 --depends on--> I_1 as relations.
You basically need to be able to efficiently calculate the number of out-of-date evaluations that I_X depends on (let's call that number D(I_X)).
Then, you do the following:
Do a BFS along the --influences--> relation, storing all reachable I_X
Store the reachable I_X in a data structure that sorts them according to their D(I_X), e.g. a priority queue
// finding the D(I_X) can be integrated into the BFS at little additional cost
while (still nodes to update):
    pop and re-evaluate the I_X with the lowest D(I_X) value (e.g. the first I_X in the queue) (*)
    update the D(I_Y) value for all I_Y with I_X --influences--> I_Y (i.e. lower it by 1)
    update the sorting/queue to reflect the new D(I_Y) values
(*) The first element should always have D(I_X) == 0; otherwise you have a circular dependency
The algorithm above spends some time finding and ordering the nodes to update, but gains the advantage that it re-evaluates every I_X only once.

Algorithm: identify pairwise conflicts in a set of objects, given only "there is a conflict in this list" oracle

I have an (unordered) set of objects. I also have an oracle, which takes an ordered list of objects and returns true if there was at least one ordered conflict within that list, false otherwise. An ordered conflict is an ordered pair of objects (A,B) such that the oracle returns true for any input list [..., A, ..., B, ...]. (A,B) being an ordered conflict does not necessarily imply that (B,A) is an ordered conflict.
I want to identify all unordered conflicts within the set: that is, find all pairs {x, y} such that either (x, y) or (y, x) is an ordered conflict as defined above. The oracle is slow (tens to hundreds of milliseconds per invocation) so it is essential to minimize the number of oracle invocations; the obvious naive algorithm (feed every possible ordered pair of set elements to the oracle; O(n²) invocations) is unacceptable. There are hundreds of set elements, and there are expected to be fewer than ten conflicts overall.
This is as far as I've gotten: If the oracle returns true for a two-element list, then obviously the elements in the list constitute a conflict. If the oracle returns false for any list, then there are no ordered conflicts in that list; if the oracle returns false for both list L and the reversal of list L, then there are no unordered conflicts in L. So a divide-and-conquer algorithm not entirely unlike the below ought to work:
Put all the set elements in a list L (choose any convenient order).
Invoke the oracle on L.
If the oracle returns false, invoke the oracle on rev(L).
If the oracle again returns false, there are no unordered conflicts within L.
# At this point the oracle has returned true for either L or rev(L).
If L is a two-element list, the elements of L constitute an unordered conflict.
Otherwise, somehow divide the set in half and recurse on each
I'm stuck on the "divide the set in half and recurse" part. There are two complications. First, it is not sufficient to take the top half and then the bottom half of the ordered list, because the conflict(s) might be eliminated by the split (consider [...A1, A2, ... An ...][...B1, B2, ...Bn ...]). Enumerating all subsets of size n/2 should work, but I don't know how to do that efficiently. Second, a naive recursion may repeat a great deal of work due to implicit state on the call stack -- suppose we have identified that A conflicts with B, then any oracle invocation with a list containing both A and B is wasted, but we still need to rule out other conflicts {A, x} and {B, x}. I can maintain a memo matrix such that M[a][b] is true if and only if (A, B) has already been tested, but I don't know how to make that play nice with the recursion.
Additional complications due to the context: If any object appears more than once in the list, the second and subsequent instances are ignored. Furthermore, some objects have dependencies: if (P,Q) is a dependency, then any oracle input in which Q appears before the first appearance of P (if any) will spuriously report a conflict. All dependencies have already been identified before this algorithm starts. If P conflicts with A, it is not possible to know whether Q also conflicts with A, but this is an acceptable limitation.
(Context: This is for identifying pairs of C system headers which cannot be included in the same source file. The "oracle" is the compiler.)
A few suggestions:
Finding a single conflict
Suppose you know there is a conflict among n items; you can then use O(log n) oracle calls to find the location of a single conflict, by first bisecting on the end point and then bisecting on the start point.
For example, this might look like:
Test 1,2,3,4,5,6,7,8,9,10 -> Conflict
Test 1,2,3,4,5 -> Good
Test 1,2,3,4,5,6,7 -> Conflict
Test 1,2,3,4,5,6 -> Good (Now deduce Endpoint is 7, the last end with a conflict)
Test 3,4,5,6 -> Conflict
Test 5,6 -> Good
Test 4,5,6 -> Conflict (Now deduce Startpoint is 4.)
You now know that 4,5,6,7 is tight (i.e. cannot be made any smaller without removing a conflict), so we can deduce that 4 and 7 must conflict.
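Here is a sketch of that double bisection in Python (oracle takes a list and returns True when it contains a conflict; the names are illustrative):

def find_conflict(items, oracle):
    # precondition: oracle(items) is True; returns one ordered conflicting
    # pair (a, b), a before b, in O(log n) oracle calls
    # bisect on the end point: the smallest prefix that still conflicts
    lo, hi = 2, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(items[:mid]):
            hi = mid
        else:
            lo = mid + 1
    end = lo                 # items[:end] conflicts, items[:end-1] does not
    # bisect on the start point: the shortest suffix of items[:end] that conflicts
    lo, hi = 0, end - 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if oracle(items[mid:end]):
            lo = mid
        else:
            hi = mid - 1
    return items[lo], items[end - 1]

On the trace above this returns the pair (4, 7).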
Finding more conflicts
Once you have found a problem, you can remove one of the offending items, and test the remaining set. If this still conflicts, you can use the bisection method to identify another conflict.
Repeat until no more conflicts are found.
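A driver for this peeling loop might look like the following sketch (it reuses find_conflict from above; run it on the reversed list as well to catch conflicts in the other order):

def peel_conflicts(items, oracle):
    # repeatedly locate a conflict and drop the later offender until the
    # list passes; returns the conflicts found, the removed items, and
    # the surviving conflict-free list
    conflicts, removed, rest = [], [], list(items)
    while oracle(rest):
        a, b = find_conflict(rest, oracle)
        conflicts.append((a, b))
        removed.append(b)
        rest.remove(b)
    return conflicts, removed, rest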
Finding remaining conflicts
You should now have a large set of non-conflicting items, plus a few removed items that may have additional conflicts.
To find remaining conflicts you might want to try taking one of the removed items, and then reinserting all items (except those that we already know conflict with it). This should either identify another conflict, or prove that all conflicts with that item have been found.
You can repeat this process with each of the removed items to find all remaining conflicts.
You need to find answers to n*(n-1) questions, each question being whether an ordered pair has a conflict. Whenever you send a sequence of length k and the oracle says "good", you get answers to k*(k-1)/2 of those questions (one per ordered pair appearing in the sequence).
Create and initialize these n*(n-1) questions as an adjacency matrix with default value -1 (set self-edges to 0). Permute the sequence randomly and apply your recursive algorithm. Whenever you find that a sequence has no conflict, mark the corresponding answers in the matrix (0). Mark a conflict (1) when a sequence of exactly two elements has a conflict.
Now, after one big iteration you have this matrix of -1s, 0s and 1s. Treat the -1s as edges and find the longest path. Reapply your algorithm. Keep doing this until the number of unknowns is very small, at which point you send the remaining pairs to the oracle.
Since you say you have hundreds of headers and fewer than 10 conflicts, I am going to give a worst-case optimal solution under the assumption that you have n items and O(lg n) of them involved in conflicts. In the worst case, if you have Theta(lg n) items involved in conflicts, then all of these items may conflict with each other, and there is no way to determine this using fewer than Omega((lg n)^2) oracle calls. Thus O((lg n)^2) oracle calls is optimal, assuming Theta(lg n) items are involved in conflicts.
Anyway, here's the algorithm. First, as another answer said, iteratively identify a conflict in O(lg n) oracle calls and remove one item of the conflict from your set, until you are left with a set that has no conflicts. This takes at most O((lg n)^2) oracle calls. Then, for each item Z you removed, put Z at the beginning of the no-conflict set and binary-search for a later item X that creates a conflict, or determine that none exists (and if such an X is found, remove X and repeat). Thus you find all conflicts that begin with Z, each in O(lg n) oracle calls. Similarly, put Z at the end of the no-conflict list and binary-search for all preceding elements X that create a conflict. Then the only thing left is to find the conflicts among the items you originally removed in the first step, but there are only O(lg n) of them by assumption, so this takes O((lg n)^2) oracle calls. The overall number of oracle calls is thus O((lg n)^2).
