The following is a simple program establishing basic facts in a Prolog database.
% main meal
homemade(pizza).
homemade(soup).
% dessert
ripe(apple).
ripe(orange).
% meal is homemade dish and ripe fruit
meal(Main, Fruit) :- homemade(Main), !, ripe(Fruit).
The definition of a meal uses a cut ! for no other reason than to experiment and learn about the cut.
The following general query generates two solutions.
?- meal(M, F).
F = apple,
M = pizza ;
F = orange,
M = pizza.
Question
Prolog's resolution strategy is said to be depth-first. That might suggest we get answers involving both M=pizza and M=soup.
Why are we not getting this?
Is the answer that "resolution" of queries to find the first candidate answer is depth-first, but solution search is breadth-first (left-to-right along rule bodies)? And that here there is no depth required to resolve the first goal, homemade(Main)?
As you can see I need some clarity on this distinction.
That is depth-first; the search tree of your code is:
               meal(Main, Fruit)             <- root
              /                 \            requires
          pizza                soup          homemade(Main): a choice of 2
         /      \             /     \        and, with the chosen Main,
     apple    orange      apple   orange     ripe(Fruit): 4 combinations of pairs
Depth-first is: root, down to pizza, down to apple, up to pizza, down to orange, up to pizza, up to root, down to soup, down to apple, up to soup, down to orange.
Breadth-first might be: depth 0, root; depth 1, down to pizza, up to root, down to soup; depth 2, root, down pizza, down apple, up pizza, down orange, up pizza, up root, down soup, down apple, up soup, down orange. (Your meal/2 depends on visiting down to the leaves of the search tree, so the answers would come out the same, I think. Breadth-first is more useful when the tree is unbounded (recursive) and depth-first would get stuck forever in an endless left branch; breadth-first forces the search to come back up and look elsewhere.)
Prolog's resolution strategy is said to be depth-first. That might suggest we get answers involving both M=pizza and M=soup.
Why are we not getting this?
Because you cut them out. Cut says "cut the rest of the tree off; this branch where I am now is the only one I want to search". homemade(pizza) is the first result of homemade(Main) under either search strategy. The cut ! then commits to that branch of the search tree, so the entire soup side of the tree is never searched, and you get homemade(pizza), ripe(apple) and homemade(pizza), ripe(orange) as the only two results. Remove the cut and the same query enumerates the soup pairs as well, four solutions in all.
Related
I have a problem which I am converting into a TSP-like problem so that I can explain it here, and I am looking for any existing algorithms that might help me.
There is a list of places that I need to visit, and I need to visit them all.
Some places have to be visited among the first x of n (i.e., they need to be in the first 3 or first 5 places visited, where the number is arbitrary).
Some other places have to be visited among the last y of n (i.e., they need to be in the last 3 or last 5 places visited).
The places can be categorized (some may not have a category); places in the same category need to be visited as far apart as possible (i.e., if two places are categorized as green, then I would like to visit as many other places as possible between these two green places).
Here is an example list:
A: category green: last 3
B: category none: ordering none
C: category pink: first 3
D: category none: ordering none
E: category none: last 3
F: category green: ordering none
G: category pink: first 3
The order I would like to come up with is:
G(pink,first3) -> F(green,none) -> C(pink,first3) -> D(none,none) -> B(none,none) -> E(none,last3) -> A(green,last3)
Explanation:
G came first, to keep it as far away from C as possible.
F came next to keep it as far away from A as possible.
C came next as it needed to be in the first 3. C and G could be interchanged.
D and B could be placed anywhere.
E came next as it had to be last 3
A came last as it had to be last 3 and by placing it at the end, it was as far as possible from F.
My idea is to evaluate each edge cost dynamically. So visiting A and then F would have a high cost, as opposed to visiting A, then some other place, then F (the number of places in between would somehow be part of the cost). I would also introduce a start and an end place, so that if some places have to be visited in the first x, I could give them a low cost when they are within N places of the start, and likewise for the end.
I was wondering if there is a graph algorithm that can account for such dynamic weights/costs and determine the shortest path from start to end?
Note: in some cases the best case may not be achievable, and that would be OK, as long as I can show that the cost is high because there wasn't enough category separation (e.g., all places were in the same category).
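To make the dynamic-cost idea concrete, here is a minimal Python sketch of one possible cost function for the example list above. The penalty weights (100 per violated first/last constraint, plus a closeness penalty for same-category pairs) are arbitrary choices for illustration, not part of the problem:

# Example places from above: name -> (category, ordering constraint).
PLACES = {
    "A": ("green", "last3"),
    "B": (None,    None),
    "C": ("pink",  "first3"),
    "D": (None,    None),
    "E": (None,    "last3"),
    "F": ("green", None),
    "G": ("pink",  "first3"),
}

def ordering_cost(order, n=len(PLACES)):
    """Cost of a (possibly partial) ordering. Penalties only ever
    accumulate, so the cost of a prefix never exceeds the cost of
    any completion of it."""
    pos = {p: i for i, p in enumerate(order)}
    total = 0
    for p in order:
        category, rule = PLACES[p]
        if rule == "first3" and pos[p] >= 3:      # not among the first 3
            total += 100                          # arbitrary penalty weight
        if rule == "last3" and pos[p] < n - 3:    # not among the last 3
            total += 100
    for i, p in enumerate(order):                 # same-category separation
        for q in order[i + 1:]:
            if PLACES[p][0] is not None and PLACES[p][0] == PLACES[q][0]:
                total += n - abs(pos[p] - pos[q])  # closer pairs cost more
    return total

For the order in the question, ordering_cost(list("GFCDBEA")) comes out at 7: 5 for the pink pair (G and C can be at most 2 apart inside the first 3) and 2 for the green pair.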
Brute force algorithm
Initial idea I had: given a list of places, come up with all possible orderings, calculate the cost of each, and choose the cheapest. But this means evaluating n! orderings; for 8 places that is already 8! = 40320 orderings to evaluate. (Why 8? Because that is what I believe will be the average number of places to evaluate.)
But is there an algorithm that I could use to determine it without testing all orderings?
One thing you could do (sketched in Python below this list):
Order the places as follows: first 1, ..., first n, unordered, last n, ..., last 1.
Go through the list and separate elements with the same category where possible without violating the previous order.
Calculate the cost of this list and store it as the current best.
Use this list to determine the order in which you evaluate permutations.
While you build permutations, keep track of the cost.
Abort building the current permutation when the cost exceeds the current best (including the theoretical minimum cost for the remaining places, if there is any).
Also abort when you have reached the theoretically best possible score.
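A hedged sketch of this pruned permutation search, reusing the PLACES table and ordering_cost function from the question's sketch above. Because that cost function only ever accumulates penalties, the cost of a prefix is a valid lower bound for every completion, which is what makes the pruning safe:

def search(order, remaining, best):
    """Depth-first search over orderings; abandon any branch whose
    prefix cost already reaches the best complete ordering found."""
    c = ordering_cost(order)
    if c >= best[1]:
        return                                # prune this whole branch
    if not remaining:
        best[0], best[1] = list(order), c     # new best complete ordering
        return
    for i, p in enumerate(remaining):
        search(order + [p], remaining[:i] + remaining[i + 1:], best)

best = [None, float("inf")]
search([], sorted(PLACES), best)
print(best)   # an optimal order and its cost (7 for this toy cost function)

Starting the search from a good greedy ordering, as the first steps above suggest, only tightens the bound earlier; the sketch omits that refinement.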
I am following Donald Knuth's algorithm to solve the game Mastermind.
However, I am stuck on step two:
1. Create a set S of remaining possibilities (at this point there are 1296). The first guess is aabb.
2. Remove all possibilities from S that would not give the same score of colored and white pegs if they were the answer.
3. For each possible guess (not necessarily in S) calculate how many possibilities from S would be eliminated for each possible colored/white score. The score of the guess is the least of such values. Play the guess with the highest score (minimax).
4. Go back to step 2 until you have got it right.
I generate the set of possibilities (basically 6 x 6 x 6 x 6). From here, I formulate the initial guess of aabb. The "mastermind" gives feedback in the form of x white pegs and y black pegs.
The white pegs indicate one of the four colors in our guess was correct but in the wrong location. The black pegs indicate that one of the four colors in our guess was correct and in the correct location.
From here the next guess has to be modified based on that information.
My question is: Given that my first guess is aabb and my feedback is, say 1w1b, what permutations do I remove from the set of possibilities?
def CalcScore(answer, solution):
    """Return a tuple of the number of (white, black) pegs."""
    ...

todel = []
for poss in poss_answers:  # poss_answers is the list of remaining possible answers
    if CalcScore(ThisTry, poss) != current_score:
        todel.append(poss)
for delthis in todel:
    poss_answers.remove(delthis)
It's actually quite clear in step 2. Here is the explained version:
Go through each permutation, compute its score (like 3w, 1w3b, etc.) against the current guess, and remove those permutations that give a different score than the actual score you received.
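The code snippet above leaves CalcScore as a stub. Here is a minimal sketch of the standard Mastermind scoring rule (black = right color, right position; white = right color, wrong position), assuming codes are represented as strings such as "aabb":

from collections import Counter

def calc_score(guess, code):
    """Return (white, black) pegs for `guess` scored against `code`."""
    # Black pegs: same color in the same position.
    black = sum(g == c for g, c in zip(guess, code))
    # Colors in common regardless of position, counted with multiplicity,
    # minus the exact matches, gives the white pegs.
    common = sum((Counter(guess) & Counter(code)).values())
    return (common - black, black)

With this, the question's concrete case (first guess aabb, feedback 1w1b) is handled by keeping only the possibilities p with calc_score("aabb", p) == (1, 1) and removing everything else.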
Perverse Hangman is a game played much like regular Hangman with one important difference: The winning word is determined dynamically by the house depending on what letters have been guessed.
For example, say you have the board _ A I L and 12 remaining guesses. Because there are 13 different words ending in AIL (bail, fail, hail, jail, kail, mail, nail, pail, rail, sail, tail, vail, wail) the house is guaranteed to win because no matter what 12 letters you guess, the house will claim the chosen word was the one you didn't guess. However, if the board was _ I L M, you have cornered the house as FILM is the only word that ends in ILM.
The challenge is: Given a dictionary, a word length & the number of allowed guesses, come up with an algorithm that either:
a) proves that the player always wins by outputting a decision tree for the player that corners the house no matter what
b) proves the house always wins by outputting a decision tree for the house that allows the house to escape no matter what.
As a toy example, consider the dictionary:
bat
bar
car
If you are allowed 3 wrong guesses, the player wins with the following tree:
Guess B
  NO -> Guess C, Guess A, Guess R, WIN
  YES-> Guess T
    NO -> Guess A, Guess R, WIN
    YES-> Guess A, WIN
This is almost identical to the "how do I find the odd coin by repeated weighings?" problem. The fundamental insight is that you are trying to maximise the amount of information you gain from your guess.
The greedy algorithm to build the decision tree is as follows:
- for each guess, choose the guess for which the split between "true" answers and "false" answers is as close to 50-50 as possible, as information-theoretically this gives the most information.
Let N be the size of the set, A be the size of the alphabet, and L be the number of letters in the word.
So put all your words in a set. For each letter position, and for each letter in your alphabet, count how many words have that letter in that position (this can be optimized with an additional hash table). Choose the count which is closest to half the size of the set. This is O(L*A).
Divide the set in two, taking the subset which has this letter in this position as one branch and the rest as the other, and recurse on each subset until you have the whole tree. In the worst case this will require O(N) steps, but with a nice dictionary it leads to O(log N) steps.
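A rough Python sketch of that greedy split, assuming a non-empty list of distinct, same-length words (build_tree returns nested tuples rather than printing a tree):

from collections import Counter

def best_split(words):
    """Pick the (position, letter) whose word count is closest to half the set."""
    n = len(words)
    counts = Counter((i, ch) for w in words for i, ch in enumerate(w))
    # A letter present in every word can't split the set, so skip those.
    candidates = [key for key in counts if counts[key] < n]
    return min(candidates, key=lambda key: abs(counts[key] - n / 2))

def build_tree(words):
    """Greedy decision tree; each node asks "is letter ch at position i?"."""
    if len(words) == 1:
        return words[0]
    i, ch = best_split(words)
    yes = [w for w in words if w[i] == ch]
    no = [w for w in words if w[i] != ch]
    return ((i, ch), build_tree(yes), build_tree(no))

print(build_tree(["bat", "bar", "car"]))

Note this follows the answer's per-position formulation; in real hangman a guess reveals every position of the letter at once, so a full solver would partition on the whole revealed pattern rather than a single position.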
This isn't strictly an answer, since it doesn't give you a decision tree, but I did something very similar when writing my hangman solver. Basically, it looks at the set of words in its dictionary that match the pattern and picks the most common letter. If it guesses wrong, it eliminates the largest number of candidates. Since there's no penalty to guessing right in hangman, I think this is the optimal strategy given the constraints.
So with the dictionary you gave, it would first guess a correctly. Then it would guess r, also correctly, then b (incorrect), then c.
The problem with perverse hangman is that you always guess wrong if you can guess wrong, but that's perfect for this algorithm since it eliminates the largest set first. As a slightly more meaningful example:
Dictionary:
mar
bar
car
fir
wit
In this case it guesses r incorrectly first and is left with just wit. If wit were replaced in the dictionary with sir, then it would guess r correctly, then a incorrectly (eliminating the larger set), then i correctly, and finally f or s at random, possibly incorrectly, followed by the other for the final word, costing at most one more incorrect guess.
So this algorithm will win if it's possible to win, though you have to actually run through it to see if it does win.
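A small sketch of the strategy described here. candidates is assumed to be the words still consistent with the board and guessed the set of letters already tried (both hypothetical names); it assumes at least one unguessed letter remains:

from collections import Counter

def next_guess(candidates, guessed):
    """Guess the unguessed letter that appears in the most candidate words."""
    counts = Counter(
        ch
        for word in candidates
        for ch in set(word)       # count each letter once per word
        if ch not in guessed
    )
    return counts.most_common(1)[0][0]

# 'r' appears in 4 of the 5 example words, so it is guessed first; the
# perverse house answers "no", and only "wit" survives, as described above.
print(next_guess(["mar", "bar", "car", "fir", "wit"], set()))  # -> r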
A question from Math Battle.
This particular question was also asked to me in one of my job interviews.
" A monkey has two coconuts. It is fooling around by throwing coconut down from the balconies
of M-storey building. The monkey wants to know the lowest floor when coconut is broken.
What is the minimal number of attempts needed to establish that fact? "
Conditions: if a coconut is broken, you cannot reuse the same. You are left with only with the other coconut
Possible approaches/strategies I can think of are:
Binary break-ups; once you find a floor at which the coconut breaks, count upward from the lower index of the last binary break-up.
Windows/slices of smaller sets of floors, with binary break-up within each window/slice
(but on the downside this would require a slicing algorithm of its own).
Wondering if there are any other ways to do this.
Interview questions like this are designed to see how you think. So I would probably mention an O(N^0.5) solution as below, but I would also give the following discussion...
Since the coconuts may develop internal cracking over time, the results may not be consistent enough for an O(N^0.5) solution. Although the O(N^0.5) solution is efficient, it is not entirely reliable.
I would recommend a linear O(N) solution with the first coconut, then verifying the result with the second coconut, where N is the number of floors in the building. So for the first coconut you try the 1st floor, then the 2nd, then the 3rd, ...
Assuming both coconuts are built structurally exactly the same and are dropped at the exact same angle, you can then throw the second coconut directly at the floor where the first one broke. Call this coconut-breaking floor B.
For coconut #2, you don't need to test floors 1..B-1 because you already know that the first coconut didn't break on floors B-1, B-2, ..., 1. So you only need to try it on B.
If the 2nd coconut breaks on B, then you know that B is the floor in question. If it doesn't break, you can deduce that there was internal cracking and degradation of the coconut over time and that the test was flawed to begin with. You need more coconuts.
Given that building sizes are pretty limited, the extra confidence in your answer is worth the O(N) cost.
As @Rafał Dowgird mentioned, the solution also depends on whether the monkey in question is an African monkey or a European monkey. It is common knowledge that African monkeys throw with a much greater force, making the breaking floor B accurate only to within +/- 2 floors.
To guarantee that the monkey doesn't get tired from all those stairs, it would also be advisable to attach a string to the first coconut. That way you don't need to do 1+2+..+B = B*(B+1)/2 flights of stairs for the first coconut. You would only need to do exactly B flights of stairs.
It may seem that the number of flights of stairs is not relevant to this problem, but if the monkey gets tired out in the first place, we may never come to a solution. This gives new considerations for the halting problem.
We are also making the assumption that the building resides on Earth and that gravity is set at 9.8 m/s^2. We'll also assume that no gravitational waves exist.
A binary search is not the answer, because you only get one chance to over-estimate: binary search requires up to log m over-estimations.
This is a two phase approach. The first is to iterate over the floors with relatively big steps. After the first coconut breaks, the second phase is to try each floor starting after the last safe floor.
The big steps are roughly sqrt(m), but they are bigger early, and smaller later because if the first coconut breaks early, you can afford more iterations in the second phase.
StepSize = minimum s such that s * (s + 1) / 2 >= m
CurrentFloor = 0
While no coconut has broken {
    CurrentFloor += StepSize
    StepSize -= 1
    Drop first coconut from CurrentFloor
}
CurrentFloor -= StepSize + 1
While the remaining coconut is unbroken {
    CurrentFloor += 1
    Drop remaining coconut from CurrentFloor
}
// CurrentFloor is now the lowest floor that breaks a coconut,
// using no more total drops than the original value of StepSize
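A direct Python rendering of this pseudocode. breaks(floor) is a hypothetical oracle for "does a coconut dropped from floor break?", and the code assumes the coconut does break at some floor <= m:

def lowest_breaking_floor(m, breaks):
    """Two-coconut search over m floors; `breaks` is the drop test."""
    step = 1
    while step * (step + 1) // 2 < m:  # minimum s with s(s+1)/2 >= m
        step += 1
    floor = 0
    while True:                        # phase 1: big, shrinking strides
        floor += step                  # may probe past m near the top;
        step -= 1                      # a real version would clamp to m
        if breaks(floor):
            break                      # first coconut is gone
    floor -= step + 1                  # back to the last known safe floor
    while True:                        # phase 2: one floor at a time
        floor += 1
        if breaks(floor):
            return floor

# Example: 100 floors, coconuts break when dropped from floor 37 or higher.
# Phase 1 tries 14, 27, 39 (breaks); phase 2 walks 28..37: 13 drops in all.
print(lowest_breaking_floor(100, lambda f: f >= 37))  # -> 37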
The best solution I know is 2*sqrt(n). Drop the first coconut from sqrt(n), 2*sqrt(n),... up to n (or until it breaks). Then drop the second one from the last known "safe" point, in one floor increments until it breaks. Both stages take at most sqrt(n) throws.
Edit: You can improve the constant within O(sqrt(n)); see the comment by recursive. I think the first step should be around sqrt(2*n) and should decrease by 1 with each throw, so that the last step (if the breaking height is actually n) is exactly 1. Details to be figured out by readers :)
Since it's an interview question, consider
The expensive operation is the monkey going up and down the stairs, not tossing the coconut. Thought of that way, the 'linear' approach is actually O(N^2).
The energy imparted to the coconut by falling is roughly proportional to the height of the drop. If the shell breaks after absorbing some total amount of energy across ALL of its falls ...
Tough interview question. It took me several days.
I believe the number of tries is 1.5 times the square root of the number of floors. (For 100 floors and 2 coconuts it is 15.)
We want to minimize both the size of each try and the number of tries, using the two together to cover all possible floors. In such cases a square root turns out to be a good starting point, but we vary the size of each try, averaging around the square root.
This way we have the best of both worlds: having the sizes of the tries evenly distributed around the square root gives the best results. For 100 floors and 2 coconuts, the try sizes are 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, which sum to 105 and so cover all 100 floors.
This works out to 1.5 times 10.
This is intended to be a more concrete, easily expressable form of my earlier question.
Take a list of words from a dictionary with common letter length.
How can I reorder this list to keep as many letters as possible common between adjacent words?
Example 1:
AGNI, CIVA, DEVA, DEWA, KAMA, RAMA, SIVA, VAYU
reorders to:
AGNI, CIVA, SIVA, DEVA, DEWA, KAMA, RAMA, VAYU
Example 2:
DEVI, KALI, SHRI, VACH
reorders to:
DEVI, SHRI, KALI, VACH
The simplest algorithm seems to be: Pick anything, then search for the shortest distance?
However, DEVI->KALI (1 letter in common) is equivalent to DEVI->SHRI (1 letter in common), and choosing the first match would result in fewer common pairs across the entire list (4 versus 5).
This seems like it should be simpler than full TSP?
What you're trying to do is calculate the shortest Hamiltonian path in a complete weighted graph, where each word is a vertex and the weight of each edge is the number of letters that differ between the two words.
For your example, the graph would have edges weighted as so:
         DEVI  KALI  SHRI  VACH
DEVI       X     3     3     4
KALI       3     X     3     3
SHRI       3     3     X     4
VACH       4     3     4     X
Then it's just a simple matter of picking your favorite TSP solving algorithm, and you're good to go.
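For lists as small as these examples, brute force over all permutations is perfectly viable. A minimal sketch, assuming equal-length words:

from itertools import permutations

def weight(a, b):
    """Number of letter positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def shortest_path_order(words):
    """Brute-force minimum-weight Hamiltonian path (fine for small n)."""
    return min(
        permutations(words),
        key=lambda p: sum(weight(a, b) for a, b in zip(p, p[1:])),
    )

print(shortest_path_order(["DEVI", "KALI", "SHRI", "VACH"]))
# -> ('DEVI', 'SHRI', 'KALI', 'VACH'), total weight 9

That matches the reordering asked for in Example 2. Beyond roughly 10 words you would switch to a real TSP technique: branch and bound, Held-Karp dynamic programming, or heuristics.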
My pseudo code (a condensed Python sketch follows below):
Create a graph of nodes where each node represents a word.
Create connections between all the nodes (every node connects to every other node). Each connection has a "value", which is the number of common characters.
Drop connections where the "value" is 0.
Walk the graph by preferring connections with the highest values. If you have two connections with the same value, try both recursively.
Store the output of a walk in a list along with the sum of the distances between the words in this particular result. I'm not 100% sure ATM if you can simply sum the connections you used. See for yourself.
From all outputs, choose the one with the highest value.
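A condensed sketch of that walk. For brevity it breaks ties arbitrarily rather than trying both options recursively as the walk step suggests:

def common_letters(a, b):
    """Connection value: number of positions sharing the same letter."""
    return sum(x == y for x, y in zip(a, b))

def greedy_walk(words):
    """Always move to the unvisited word with the highest-value connection."""
    path, rest = [words[0]], set(words[1:])
    while rest:
        nearest = max(rest, key=lambda w: common_letters(path[-1], w))
        path.append(nearest)
        rest.remove(nearest)
    return path

# The very first step hits the DEVI->KALI vs DEVI->SHRI tie from the
# question, so the result depends on iteration order; resolving such
# ties recursively is exactly what the walk step above is about.
print(greedy_walk(["DEVI", "KALI", "SHRI", "VACH"]))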
This problem is probably NP-complete, which means that the runtime of the algorithm will become unbearable as the dictionaries grow. Right now, I see only one way to optimize it: cut the graph into several smaller graphs, run the code on each, and then join the lists. The result won't be as perfect as when you try every permutation, but the runtime will be much better and the final result might be "good enough".
[EDIT] Since this algorithm doesn't try every possible combination, it's quite possible to miss the perfect result. It's even possible to get caught in a local maximum. Say you have a pair with a value of 7, but choosing this pair makes all other values drop to 1, while skipping it would leave most other values at 2, giving a much better overall final result.
This algorithm trades perfection for speed. When trying every possible combination would take years, even with the fastest computer in the world, you must find some way to bound the runtime.
If the dictionaries are small, you can simply create every permutation and then select the best result. If they grow beyond a certain bound, you're doomed.
Another solution is to mix the two: use the greedy algorithm to find "islands" which are probably pretty good, and then use the "complete search" to sort the small islands.
This can be done with a recursive approach. Pseudo-code:
Start with one of the words; call it w.
FindNext(w, l)                        // l = list of words without w
    If only one word in l
        Return that word
    Else
        Get the list n of the words in l nearest to w
        For every word w' in n, do FindNext(w', l')   // l' = l without w'
You can add some score to count common pairs and to prefer "better" lists.
You may want to take a look at BK-trees, which make it efficient to find words within a given distance of each other. Not a total solution, but possibly a component of one.
This problem has a name: n-ary Gray code. Since you're using English letters, n = 26. The Wikipedia article on Gray code describes the problem and includes some sample code.