Today in an exam I was given an algorithmic problem: given the size N*M of a chessboard, determine the smallest possible number of moves a knight needs to go from the bottom-left corner of the board to the top-right corner. How can that be done?
A solution using BFS and memoization:
# Memoization
memo = a matrix of NOVISITED of size N x M
# Starting position
# (row, column, jumps)
queue.push((0, 0, 0))
while queue is not empty:
    # Get next not visited position
    row, column, jumps = queue.pop()
    # Mark as visited
    memo[row][column] = jumps
    for each possible move (nrow, ncolumn) from (row, column):
        if memo[nrow][ncolumn] is NOVISITED:
            # Queue next possible move
            queue.push((nrow, ncolumn, jumps + 1))
NOVISITED can be -1 or null, since the actual distances are non-negative numbers in [0, +inf).
The minimum number of jumps for each square will be accessible in memo[row][column], so the answer for the top-right corner starting from the bottom-left will be in memo[N - 1][M - 1].
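For concreteness, here is a runnable Python version of the sketch above. It marks each square when it is enqueued rather than when it is popped (which avoids duplicate queue entries but yields the same distances); the function name is mine:

from collections import deque

def knight_distance(n, m):
    NOVISITED = -1
    memo = [[NOVISITED] * m for _ in range(n)]
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    queue = deque([(0, 0, 0)])  # (row, column, jumps)
    memo[0][0] = 0
    while queue:
        row, column, jumps = queue.popleft()
        for dr, dc in moves:
            nrow, ncolumn = row + dr, column + dc
            if 0 <= nrow < n and 0 <= ncolumn < m and memo[nrow][ncolumn] == NOVISITED:
                memo[nrow][ncolumn] = jumps + 1
                queue.append((nrow, ncolumn, jumps + 1))
    return memo[n - 1][m - 1]  # still NOVISITED if the corner is unreachable

print(knight_distance(8, 8))  # 6 on a standard chessboard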
UPDATE
Notice that if the matrix is square N x N, you can apply symmetry principles.
I believe you can reduce this down to three cases:
You have a board with no solution, example: 2w * 4h
You have a board with a solution of 1: 2w * 3h
You have a board that is square and thus has a solution of 4: 3w * 3h
If you have a board larger than these, you can reduce it to one of them by making one move and treating that move's endpoint as the starting point of a smaller board.
Example: a board of size 4w * 5h:
_ _ _ _
_ _ _ _
_ e _ _
_ _ _ _
s _ _ _
where s is start and e is end.
From there, reduce it to a square board:
_ 1 e
3 _ _
s _ 2
where it takes 4 moves to reach the end. So you have 1 + 4 = 5 moves for this size.
I hope that is enough to get you started.
EDIT: This doesn't seem to be perfect as is. However, it demonstrates a heuristic way to solve this problem. Here is another case for your viewing pleasure:
_ _ _ e
_ 3 _ _
_ _ _ _
_ _ 2 _
_ _ _ _
_ 1 _ _
_ _ _ _
s _ _ _
which takes 4 moves to reach the end on a 4x8 board.
In a programming language, this might be better solved by first mapping all possible moves from your current location and seeing if any of them match the end point. If they don't, check whether your problem is now a simpler one that you have solved before. This is accomplished via memoization, as a commenter pointed out.
If you are doing this by hand, however, I bet you can solve it by isolating it into a small number of cases as I have begun to do.
You could simulate the knight's moves using BFS or DFS. Personally I prefer the DFS approach, as it can be implemented recursively. If you have a function process which takes as parameters the current x position, the current y position, the rows of the table, the columns of the table, and a counter, then the solution will look something like this:
/* .......... */
process(x-1, y-2, R, C, count+1);
process(x+1, y-2, R, C, count+1);
process(x-2, y-1, R, C, count+1);
process(x-2, y+1, R, C, count+1);
process(x-1, y+2, R, C, count+1);
process(x+1, y+2, R, C, count+1);
process(x+2, y-1, R, C, count+1);
process(x+2, y+1, R, C, count+1);
/* .......... */
When you reach your destination, you return the current value of count.
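A word of caution: a plain recursive DFS will revisit squares endlessly, so it needs pruning to terminate. Below is a hedged Python sketch of the recursive idea, where a best table records the lowest count seen so far at each square and a branch is abandoned as soon as it cannot improve on it; the helper names are mine, not part of the original answer:

def process(x, y, R, C, count, best):
    if not (0 <= x < R and 0 <= y < C) or count >= best[x][y]:
        return  # off the board, or no improvement possible
    best[x][y] = count
    for dx, dy in [(-1, -2), (1, -2), (-2, -1), (-2, 1),
                   (-1, 2), (1, 2), (2, -1), (2, 1)]:
        process(x + dx, y + dy, R, C, count + 1, best)

def min_jumps(R, C):
    best = [[float('inf')] * C for _ in range(R)]
    process(0, 0, R, C, 0, best)
    return best[R - 1][C - 1]

print(min_jumps(8, 8))  # 6, matching the BFS answer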
EDIT: it can also be solved using dynamic programming. Define dp(i,j) to be the minimum number of moves needed to reach square (i,j). Then dp(i,j) is equal to:
dp(i,j) = min{dp(all squares that can reach (i,j) in one move)} + 1
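One subtlety: a knight can reach (i,j) from squares whose own dp values depend on (i,j) in turn, so this recurrence cannot be filled in a single pass over the table. A hedged Python sketch of one way to evaluate it, relaxing the table to a fixed point in the style of Bellman-Ford (the function name is mine):

def dp_distances(R, C):
    INF = float('inf')
    dp = [[INF] * C for _ in range(R)]
    dp[0][0] = 0
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    changed = True
    while changed:  # keep relaxing until no dp value improves
        changed = False
        for i in range(R):
            for j in range(C):
                for di, dj in deltas:
                    pi, pj = i + di, j + dj  # a square that reaches (i, j) in one move
                    if 0 <= pi < R and 0 <= pj < C and dp[pi][pj] + 1 < dp[i][j]:
                        dp[i][j] = dp[pi][pj] + 1
                        changed = True
    return dp

print(dp_distances(8, 8)[7][7])  # 6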
Here is an efficient solution.
First, the special cases. If n = 1 you cannot jump at all, and the problem is only solvable for (1, 1). If n = 2 then by inspection there is only one path you can take, and the problem is only solvable if m = 4k + 3, in which case you need 2k + 1 jumps to get there. The same holds with n and m swapped if m = 1 or 2.
Now the general case. A knight has 8 possible jumps: it goes 2 in one direction, then 1 in another. The possible directions are r, l, u, d (right, left, up, down). So let nru be the number of times it jumps 2 right then 1 up, and likewise for the other 7 possible jumps. Then the answer must be a solution to the following pair of equations:
n - 1 = 2*nru + nur - nul - 2*nlu - 2*nld - ndl + ndr + 2*nrd
m - 1 = nru + 2*nur + 2*nul + nlu - nld - 2*ndl - 2*ndr - nrd
And the number of jumps is:
nru + nur + nul + nlu + nld + ndl + ndr + nrd
We want the number of jumps to be as low as possible. Intuitively, if we have a set of numbers that satisfies the top two equations and we've made the number of jumps low, we shouldn't have much trouble finding an order for the jumps that stays inside the box. I won't prove it, but this turns out to be true whenever 2 < n and 2 < m.
Thus we solve this integer programming problem (solve those two equations while keeping the number of jumps as low as possible) and we have our answer. There are solvers for this, but this particular problem is very simple. We just do the "obvious thing" to get close to our target, figure out a couple of extra jumps, and it is not hard to prove that this gives an optimal solution to the integer equations, and therefore must be the answer to the chess problem.
So what is the obvious thing? First, if m < n, we can just flip the board over, so without loss of generality we may assume that n ≤ m. (The board goes at least as far from you as it does sideways.) Given that fact, the obvious thing is to jump up-left until you hit the wall, or you hit the diagonal stretching down from the corner that you want. At that point you progress along the wall or that diagonal towards your target.
If you land directly on the target, you have your best possible answer.
If you went along the wall and missed by 1, it turns out that converting one of your jumps into a pair puts you where you need to be. If you went along the wall and missed by 2 (i.e. you're one diagonal off), then you need to insert 2 jumps. (Distance shows you need at least one more, and a simple parity argument shows you need at least 2; a pair of jumps will do it.)
If you went along the diagonal and missed by 1, insert one pair of jumps and you're good.
If you went along the diagonal and missed by 2, then convert an up-right/right-up pair into right-up/right-up/up-left/left-up, and you can do it with just 2 more jumps.
If you did not travel along the diagonal but had an up-left, convert it into a right-up/up-left/right-up triplet, and again you can do it with just 2 more jumps.
The remaining special case is a 3x3 board, which takes 4 jumps.
(I leave it to you to figure out all of the appropriate inequalities and modulos that this picture works out to.)
I understand that there are mainly two approaches to dynamic programming solutions:
Fixed optimal order of evaluation (let's call it the Foo approach): this usually goes from subproblems to bigger problems, using results obtained earlier for subproblems to solve bigger problems, thus avoiding "revisiting" subproblems. CLRS seems to call this the "bottom-up" approach.
Without a fixed order of evaluation (let's call it the Non-Foo approach): here evaluation proceeds from problems to their subproblems. It ensures that subproblems are not "re-evaluated" (thus ensuring efficiency) by maintaining the results of past evaluations in some data structure, and checking whether the result for the problem at hand already exists there before starting its evaluation. CLRS seems to call this the "top-down" approach.
This is what is roughly conveyed as one of the main points by this answer.
I have the following doubts:
Q1. Memoization or not?
CLRS uses the terms "top down with memoization" approach and "bottom up" approach. I feel both approaches require memory to cache the results of subproblems. But then why does CLRS use the term "memoization" only for the top-down approach and not for the bottom-up one? After solving some problems with DP, I feel that top-down solutions for all problems require memory to cache the results of all subproblems, while that is not the case for bottom-up solutions: for some problems, a bottom-up solution does not need to cache the results of all subproblems. Q1. Am I correct in this?
For example consider this problem:
Given cost[i], the cost of the ith step on a staircase, find the minimum cost of reaching the top of the floor, given that:
you can climb either one or two steps
you can start from the step with index 0, or the step with index 1
The top down approach solution is as follows:
class Solution:
    def minCostAux(self, curStep, cost):
        if curStep == -1:  # base case first, so we never index minCosts[-1] by accident
            return 0
        if self.minCosts[curStep] > -1:
            return self.minCosts[curStep]
        if curStep == 0:
            self.minCosts[curStep] = cost[0]
        else:
            self.minCosts[curStep] = min(self.minCostAux(curStep-2, cost) + cost[curStep],
                                         self.minCostAux(curStep-1, cost) + cost[curStep])
        return self.minCosts[curStep]

    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        return self.minCostAux(len(cost)-1, cost)
The bottom up approach solution is as follows:
class Solution:
    def minCostClimbingStairs(self, cost) -> int:
        cost.append(0)
        secondLastMinCost = cost[0]
        lastMinCost = min(cost[0]+cost[1], cost[1])
        minCost = lastMinCost
        for i in range(2, len(cost)):
            minCost = min(lastMinCost, secondLastMinCost) + cost[i]
            secondLastMinCost = lastMinCost
            lastMinCost = minCost
        return minCost
Note that the top-down approach caches the results of all steps in self.minCosts, while the bottom-up approach caches the results of only the last two steps, in the variables lastMinCost and secondLastMinCost.
Q2. Do all problems have solutions by both approaches?
I feel not. I came to this opinion after solving this problem:
Find the probability that the knight will not go off an n x n chessboard after k moves, if the knight was initially placed in the cell at index (row, column).
I feel the only way to solve this problem is to find successive probabilities for an increasing number of steps starting from cell (row, column): that is, the probability that the knight is still on the chessboard after step 1, then after step 2, then after step 3, and so on. This is a bottom-up approach. We cannot do it top down; for example, we cannot start with the kth step and go to the (k-1)th step, then the (k-2)th step, and so on, because:
We cannot know which cells will be reached at the kth step to start with
We cannot ensure that all paths from the kth step will lead back to the knight's initial cell (row, column).
Even one of the top-voted answers gives a DP solution as follows:
class Solution {
    private int[][] dir = new int[][]{{-2,-1},{-1,-2},{1,-2},{2,-1},{2,1},{1,2},{-1,2},{-2,1}};
    private double[][][] dp;

    public double knightProbability(int N, int K, int r, int c) {
        dp = new double[N][N][K + 1];
        return find(N, K, r, c);
    }

    public double find(int N, int K, int r, int c) {
        if (r < 0 || r > N - 1 || c < 0 || c > N - 1) return 0;
        if (K == 0) return 1;
        if (dp[r][c][K] != 0) return dp[r][c][K];
        double rate = 0;
        for (int i = 0; i < dir.length; i++) rate += 0.125 * find(N, K - 1, r + dir[i][0], c + dir[i][1]);
        dp[r][c][K] = rate;
        return rate;
    }
}
I feel this is still a bottom-up approach, since it starts with the initial knight cell position (r, c) (and hence goes from the 0th step to the Kth step) despite the fact that it counts K downward to 0. So, this is a bottom-up approach done recursively, and not a top-down approach. To be precise, this solution does NOT first find:
probability of knight not going out of chessboard after K steps starting at cell (r,c)
and then find:
probability of knight not going out of chessboard after K-1 steps starting at cell (r,c)
but it finds them in reverse / bottom-up order: first for K-1 steps and then for K steps.
Also, I did not find any solution among the top-voted discussions on LeetCode doing it in a truly top-down manner, starting from the Kth step back to the 0th step ending in the (row, column) cell, instead of starting with the (row, column) cell.
Similarly, we cannot solve the following problem with the bottom-up approach, but only with the top-down approach:
Find the probability that the Knight ends up in the cell at index (row,column) after K steps, starting at any initial cell.
Q2. So am I correct in my understanding that not all problems have solutions by both top-down and bottom-up approaches? Or am I just overthinking unnecessarily, and both of the above problems can indeed be solved with both approaches?
PS: I indeed seem to have done some overthinking here: the knightProbability() function above is indeed top down, and I misinterpreted it as explained in detail above 😑. I have kept this explanation for reference, since there are already some answers below, and also as a hint of how confusion / misinterpretations might happen, so that I will be more cautious in the future. Sorry if this long explanation caused you some confusion / frustration. Regardless, the main question still holds: does every problem have both bottom-up and top-down solutions?
Q3. Bottom up approach recursively?
I am pondering whether bottom-up solutions for all problems can also be implemented recursively. After trying to do so for other problems, I came to the following conclusion:
We can implement bottom up solutions for such problems recursively, only that the recursion won't be meaningful, but kind of hacky.
For example, below is a recursive bottom-up solution for the minimum cost climbing stairs problem mentioned in Q1:
from typing import List

class Solution:
    def minCostAux(self, step_i, cost):
        if self.minCosts[step_i] != -1:
            return self.minCosts[step_i]
        self.minCosts[step_i] = min(self.minCostAux(step_i-1, cost),
                                    self.minCostAux(step_i-2, cost)) + cost[step_i]
        if step_i == len(cost)-1:  # returning from a non-base case gives a sense of
                                   # not-so-meaningful recursion. Also, base cases
                                   # usually appear at the beginning, before the
                                   # recursive call. Or should we call it a "ceil
                                   # condition"?
            return self.minCosts[step_i]
        return self.minCostAux(step_i+1, cost)

    def minCostClimbingStairs(self, cost: List[int]) -> int:
        cost.append(0)
        self.minCosts = [-1] * len(cost)
        self.minCosts[0] = cost[0]
        self.minCosts[1] = min(cost[0]+cost[1], cost[1])
        return self.minCostAux(2, cost)
Is my quoted understanding correct?
First, context.
Every dynamic programming problem can be solved without dynamic programming, using a recursive function. Generally this will take exponential time, but you can always do it, at least in principle. If the problem can't be written that way, then it really isn't a dynamic programming problem.
The idea of dynamic programming is that if I already did a calculation and have a saved result, I can just use that saved result instead of doing the calculation again.
The whole top-down vs bottom-up distinction refers to the naive recursive solution.
In a top-down approach your call stack looks like the naive version except that you make a "memo" of what the recursive result would have given. And then the next time you short-circuit the call and return the memo. This means you can always, always, always solve dynamic programming problems top down. There is always a solution that looks like recursion+memoization. And that solution by definition is top down.
In a bottom up approach you start with what some of the bottom levels would have been and build up from there. Because you know the structure of the data very clearly, frequently you are able to know when you are done with data and can throw it away, saving memory. Occasionally you can filter data on non-obvious conditions that are hard for memoization to duplicate, making bottom up faster as well. For a concrete example of the latter, see Sorting largest amounts to fit total delay.
Start with your summary.
I strongly disagree with your framing of the distinction in terms of an optimal order of evaluations. I've encountered many cases with top down where optimizing the order of evaluations causes memoization to start hitting sooner, making the code run faster. Conversely, while bottom up certainly picks a convenient order of operations, it is not always an optimal one.
Now to your questions.
Q1: Correct. Bottom up often knows when it is done with data, top down does not. Therefore bottom up gives you the opportunity to delete data when you are done with it. And you gave an example where this happens.
As for why only one is called memoization, it is because memoization is a specific technique for optimizing a function, and you get top down by memoizing recursion. While the data stored in dynamic programming may match up to specific memos in memoization, you aren't using the memoization technique.
Q2: I do not know.
I've personally found cases where I was solving a problem over some complex data structure and simply couldn't find a bottom up approach. Maybe I simply wasn't clever enough, but I don't believe that a bottom up approach always exists to be found.
But top down is always possible. Here is how to do it in Python for the example that you gave.
First the naive recursive solution looks like this:
def prob_in_board(n, i, j, k):
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    else:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board(n, next_i, next_j, k-1) / len(moves)
        return answer

print(prob_in_board(8, 3, 4, 7))
And now we just memoize.
def prob_in_board_memoized(n, i, j, k, cache=None):
    if cache is None:
        cache = {}
    if i < 0 or j < 0 or n <= i or n <= j:
        return 0
    elif k <= 0:
        return 1
    elif (i, j, k) not in cache:
        moves = [
            (i+1, j+2), (i+1, j-2),
            (i-1, j+2), (i-1, j-2),
            (i+2, j+1), (i+2, j-1),
            (i-2, j+1), (i-2, j-1),
        ]
        answer = 0
        for next_i, next_j in moves:
            answer += prob_in_board_memoized(n, next_i, next_j, k-1, cache) / len(moves)
        cache[(i, j, k)] = answer
    return cache[(i, j, k)]

print(prob_in_board_memoized(8, 3, 4, 7))
This solution is top down. If it seems otherwise to you, then you do not correctly understand what is meant by top-down.
I found your question (does every dynamic programming problem have bottom-up and top-down solutions?) very interesting. That's why I'm adding another answer to continue the discussion about it.
To answer the question in its generic form, I need to formulate it more precisely with math. First, I need to define precisely what a dynamic programming problem is. Then, I need to define precisely what a bottom-up solution and a top-down solution are.
I will try to give some definitions, but I think they are not the most generic ones; a really generic definition would need heavier math.
First, define a state space S of dimension d as a subset of Z^d (Z is the set of integers).
Let f: S -> R be a function that we are interested in calculating at a given point P of the state space S (R is the set of real numbers).
Let t: S -> S^k be a transition function (it associates points in the state space with sets of points in the state space).
Consider the problem of calculating f on a point P in S.
We can consider it a dynamic programming problem if there is a function g: R^k -> R such that f(P) = g(f(t(P)[1]), f(t(P)[2]), ..., f(t(P)[k])) (a problem can be solved using only its subproblems) and t defines a directed graph that is not a tree (subproblems have some overlap).
Consider the graph defined by t. We know it has a source (the point P) and some sinks for which we know the value of f (the base cases). We can define a top-down solution for the problem as a depth-first search through this graph that starts at the source and calculates f for each vertex at its return time (when the depth-first search of its entire subgraph is complete) using the transition function. On the other hand, a bottom-up solution can be defined as a multi-source breadth-first search through the transposed graph that starts at the sinks and finishes at the source vertex, calculating f at each visited vertex from the previously visited layer.
The problem is: to navigate the transposed graph, for each point you visit you need to know which points transition to it in the original graph. In math terms, for each point Q in the transition graph, you need to know the set J of points such that for each point Pi in J, t(Pi) contains Q, and there is no other point Pr in the state space outside of J such that t(Pr) contains Q. Notice that a trivial way to know this is to visit the whole state space for each point Q.
The conclusion is that a bottom-up solution as defined here always exists, but it only pays off if you have a way to navigate the transposed graph at least as efficiently as the original graph. This depends essentially on the properties of the transition function.
In particular, for the leetcode problem you mentioned, the transition function is the function that, for each point on the chessboard, gives all the points the knight can go to. A very special property of this function is that it's symmetric: if the knight can go from A to B, then it can also go from B to A. So, given a certain point P, you can determine which points the knight can go to as efficiently as you can determine which points it can come from. This is the property that guarantees that a bottom-up approach exists that is as efficient as the top-down approach for this problem.
For the leetcode question you mentioned, the top down approach is like the following:
Let P(x, y, k) be the probability that the knight is on square (x, y) at the k-th step. Look at all the squares the knight could have come from (you can get them in O(1); just look at the board with pen and paper and derive the formulas for the different cases: knight in the corner, knight on the border, knight in a central region, etc.). Let them be (x1, y1), ..., (xj, yj). For each of these squares, what is the probability that the knight jumps to (x, y)? Considering that it can go off the board, it's always 1/8. So:
P(x, y, k) = (P(x1, y1, k-1) + ... + P(xj, yj, k-1))/8
The base case is k = 0:
P(x, y, 0) = 1 if (x, y) = (x_start, y_start), and P(x, y, 0) = 0 otherwise.
You iterate through all n^2 squares and use the recurrence formula to calculate P(x, y, k). Many times you will need solutions you already calculated for k-1, so you can benefit a lot from memoization.
In the end, the final solution will be the sum of P(x, y, k) over all squares of the board.
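A short Python sketch of this recurrence, assuming an n x n board and iterating k upward as described above, keeping only the previous layer of probabilities; the names here are illustrative:

def knight_probability(n, k_steps, x_start, y_start):
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    P = [[0.0] * n for _ in range(n)]
    P[x_start][y_start] = 1.0  # base case k = 0
    for _ in range(k_steps):
        nxt = [[0.0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                for dx, dy in deltas:
                    px, py = x - dx, y - dy  # a square the knight could have come from
                    if 0 <= px < n and 0 <= py < n:
                        nxt[x][y] += P[px][py] / 8
        P = nxt
    return sum(map(sum, P))  # sum over all squares of the board

print(knight_probability(8, 7, 3, 4))  # 8x8 board, 7 steps, start (3, 4)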
There are platforms that can be placed at different heights. For example, this map shows how the platforms might be placed (in the program it is represented as an N x M matrix, N, M <= 100):
_ _ _
D _ _ _
_ _
_
S _ _ _ _ _
In this map, a blank means empty space, _ is a platform, S is the platform where we start, and D is the destination platform. The monster that walks on this map can jump up, down, or move to the left or to the right.
One possible way for the monster to reach D from S is:
+ + +
D + + +
+ +
+
S + + + + +
or it may reach D this way:
_ _ _
D _ _ _
+ _ _
+ _
S _ _ _ _ _
So the ways of reaching the destination point can vary a lot, but the main point is this: in the first case, the maximum distance of any jump made by the monster is 1, because the maximum distance between two platforms along that way is 1. In the second case, the monster reaches the destination very quickly, but it makes a jump of distance 2. The monster's goal is to reach the destination point while keeping its biggest jump as small as possible, which is why the first way is preferred. The question is: what algorithm should I use to find a way whose maximum jump distance is minimal?
I have thought about two ways:
Brute force, but this becomes impractical as the number of platforms approaches N*M;
Somehow transform this matrix into a graph where each platform is a node and the edges are weighted by jump distances, then find a minimal spanning tree; but, firstly, I do not know how to create the adjacency matrix for this, and secondly, I do not know whether this approach is correct.
To parse the map and find nodes:
for i from 1 to N
    for j from 1 to M
        if map(i, j) == 'S'
            nodes.add(i, j);
            start = nodes.Count;
        elseif map(i, j) == 'D'
            nodes.add(i, j);
            dest = nodes.Count;
        elseif map(i, j) == '_'
            nodes.add(i, j);
        end
    end
end
In the above pseudocode I assume that nodes.add(i, j) adds a new node with node.x = i and node.y = j to the list of nodes.
Then, to construct the adjacency matrix:
n = nodes.Count;
adj = n by n matrix, filled with +inf;
for i from 1 to n
    for j from i + 1 to n
        if (nodes[i].x == nodes[j].x) || (nodes[i].y == nodes[j].y)
            adj(i, j) = abs(nodes[i].x - nodes[j].x) +
                        abs(nodes[i].y - nodes[j].y);
        end
    end
end
The rest is a shortest-path problem, in its minimax (bottleneck) variant: the cost of a path is its largest edge rather than the sum, since the goal is to minimize the biggest jump. Use Dijkstra's algorithm, modified accordingly, to find such a path between the start and dest nodes.
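A hedged Python sketch of that minimax variant, assuming adj is the matrix built above mirrored so that adj[u][v] == adj[v][u], with float('inf') for missing edges, and 0-based start/dest indices:

import heapq

def smallest_max_jump(adj, start, dest):
    n = len(adj)
    best = [float('inf')] * n  # best[v] = smallest achievable maximum jump to reach v
    best[start] = 0
    heap = [(0, start)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dest:
            return cost
        if cost > best[u]:
            continue  # stale heap entry
        for v in range(n):
            # The cost of extending the path by edge (u, v) is the larger of
            # the path's current maximum jump and this edge's length.
            new_cost = max(cost, adj[u][v])
            if new_cost < best[v]:
                best[v] = new_cost
                heapq.heappush(heap, (new_cost, v))
    return float('inf')  # dest unreachable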
Thanks to the post above, I decided to finish the idea, and the resulting code works fine on the test cases I was given. So the idea is:
From the given map of platforms, create a graph where each node represents one platform (including the start and destination platforms) and each edge between two nodes is weighted by the distance between them;
Once you have formed the graph, find a minimal spanning tree; the answer is the maximum edge weight on the path between the start and destination nodes in this tree.
The code is quite big, so please check it on my GitHub! Note that 1 means platform, 2 means start, and 3 means destination:
Check this github link out!
I have to implement an algorithm that solves the Towers of Hanoi game for k pods and d rings in a limited number of moves (say 4 pods, 10 rings, and 50 moves, for example) using the Bellman dynamic programming equation (if the problem is solvable, of course).
Now, I understand the logic behind the equation:
where V^T is the objective function at time T, a^0 is the action at time 0, x^0 is the starting configuration, H_0 is the cumulative gain, and f(x^0, a^0) = x^1 is the transition.
The cardinality of the state space is k^d, and I get that a good representation for a state is a number in base k: d digits, one per ring, where each digit can go from 0 to k-1, these values being the labels of the k pods.
I want to minimize the number of moves needed to go from the initial configuration (10 rings on the first pod) to the final one (10 rings on the last pod).
What I don't get is: how do I write my objective function?
The first thing you need to do is choose a reward function H_t(s, a), which defines your goal. Once this function is chosen, the (optimal) value function is defined and all you have to do is compute it.
The idea of dynamic programming for the Bellman equation is that you should compute V_t(s) bottom-up: you start with t=T, then t=T-1 and so on until t=0.
The initial case is simply given by:
V_T(x) = 0, ∀x
You can compute V_{T-1}(x) ∀x from V_T:
V_{T-1}(x) = max_a [ H_{T-1}(x,a) ]
Then you can compute V_{T-2}(x) ∀x from V_{T-1}:
V_{T-2}(x) = max_a [ H_{T-2}(x,a) + V_{T-1}(f(x,a)) ]
And you keep on computing V_{t-1}(x) ∀x from V_t:
V_{t-1}(x) = max_a [ H_{t-1}(x,a) + V_t(f(x,a)) ]
until you reach V_0.
Which gives the algorithm:
forall x:
    V[T](x) ← 0
for t from T-1 down to 0:
    forall x:
        V[t](x) ← max_a { H[t](x,a) + V[t+1](f(x,a)) }
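A minimal Python sketch of this loop, assuming the caller supplies the list of states, an actions(x) function, the reward H(t, x, a) and the deterministic transition f(x, a); these names are placeholders for whatever your Hanoi encoding provides, not a fixed API:

def value_iteration(states, actions, H, f, T):
    V = {x: 0.0 for x in states}  # V[T](x) = 0 for every state x
    policy = {}
    for t in range(T - 1, -1, -1):  # t = T-1, T-2, ..., 0
        V_next, V = V, {}
        for x in states:
            # V[t](x) = max over actions of immediate reward plus V[t+1] at the successor
            candidates = [(H(t, x, a) + V_next[f(x, a)], a) for a in actions(x)]
            if candidates:
                V[x], policy[(t, x)] = max(candidates, key=lambda va: va[0])
            else:
                V[x] = 0.0  # a state with no legal action keeps value 0
    return V, policy

To count moves for the Hanoi question, one natural choice is a reward of -1 per real move plus a zero-reward no-op action once the goal configuration is reached, so that maximizing the value minimizes the number of moves within the horizon T.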
What actually was requested was this:
def k_hanoi(npods, nrings):
    if nrings == 1 and npods > 1:  # one remaining ring: just one move
        return 1
    if npods == 3:
        return 2**nrings - 1  # optimal solution with 3 pods takes 2^d - 1 moves
    if npods > 3 and nrings > 0:
        sol = []
        for pivot in range(1, nrings):  # loop on all possible pivots
            sol.append(2 * k_hanoi(npods, pivot) + k_hanoi(npods - 1, nrings - pivot))
        return min(sol)  # minimization over the pivot

k = 4
d = 10
print(k_hanoi(k, d))
I think it is the Frame–Stewart algorithm, with optimization over the pivot chosen to divide the disks into two subgroups. I also believe someone proved this optimal for 4 pegs (in 2014 or something like that? Not sure, btw), and it is conjectured to be optimal for more than 4 pegs. The limitation on the number of moves can be implemented easily.
The value function in this case is the number of steps needed to go from the initial configuration to the final one, and it needs to be minimized. Thank you all for the contribution.
I've got the following task to do:
Given a rectangular chocolate bar consisting of m x n small rectangles, you wish to break it into its parts. At each step you can pick only one piece and break it either along any of its vertical lines or along any of its horizontal lines. How should you break the chocolate bar using the minimum number of steps?
I know you need exactly m x n - 1 steps to break the chocolate bar, but I'm asked to do it "the CS way:"
Define a predicate which selects the minimum number of steps among all alternative possibilities of breaking the chocolate bar into pieces. Construct a structure on an additional argument position which tells you where and how to break the bar and what to do with the resulting two pieces.
My thoughts: after breaking the piece of chocolate once, you have the choice of breaking it further along either its vertical or its horizontal lines. So this is my code, but it doesn't work:
break_chocolate(Horizontal, Vertical, Minimum) :-
    break_horizontal(Horizontal, Vertical, Min1),
    break_vertical(Horizontal, Vertical, Min2),
    Minimum is min(Min1, Min2).

break_horizontal(0, 0, _).
break_vertical(0, 0, _).

break_horizontal(0, V, Min) :-
    V > 0,
    break_horizontal(0, V, Min).

break_horizontal(H, V, Min) :-
    H1 is H - 1,
    Min1 is Min + 1,
    break_vertical(H1, V, Min1).

break_horizontal(H, V, Min) :-
    H1 is H - 1,
    Min1 is Min + 1,
    break_vertical(H1, V, Min).

break_vertical(H, V, Min) :-
    V1 is V - 1,
    Min1 is Min + 1,
    break_horizontal(H, V1, Min1).

break_vertical(H, V, Min) :-
    V1 is V - 1,
    Min1 is Min + 1,
    break_vertical(H, V1, Min1).

break_vertical(H, 0, Min) :-
    H > 0,
    break_horizontal(H, 0, Min).
Could anyone help me with this one?
This is not a complete answer, but it should push you in the right direction:
First, an observation: every time you break a chocolate bar, you end up with exactly one more piece than you had before. So, actually, there is no "minimal" number of breaks: you start with 1 piece (the whole bar) and you end up with m * n pieces, so you always make exactly m * n - 1 breaks. Either you have misunderstood your problem statement, or you have somehow misrepresented it in your question.
Second: once you break the bar into two pieces, you will have to break each of the two in the same way that you broke the original. One way to program that would be with a recursive call. I don't see this in your program as it stands.
Third: so do you want to report the breaks that you make? How are you going to do this?
Whether you program in Prolog, C, or JavaScript, understanding your problem is a prerequisite to finding a solution.
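Not Prolog, but here is a quick Python sketch that tests the first observation empirically: no matter which piece you pick or where you break it, a 4 x 4 bar always takes 15 breaks. Everything here is mine and purely illustrative:

import random

def breaks_needed(m, n):
    pieces, breaks = [(m, n)], 0
    while len(pieces) < m * n:
        i = random.randrange(len(pieces))
        w, h = pieces[i]
        if w == 1 and h == 1:
            continue  # a 1x1 piece cannot be broken; pick again
        if w > 1 and (h == 1 or random.random() < 0.5):
            cut = random.randrange(1, w)  # break along a vertical line
            new = [(cut, h), (w - cut, h)]
        else:
            cut = random.randrange(1, h)  # break along a horizontal line
            new = [(w, cut), (w, h - cut)]
        pieces[i:i + 1] = new  # replace the piece with its two halves
        breaks += 1
    return breaks

print(all(breaks_needed(4, 4) == 15 for _ in range(100)))  # True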
Here are some additional hints for representing and solving your problem.
Each break separates one piece into two pieces (see Boris' second hint). You can therefore think of the collection of breaks as a binary tree of breaks with the following characteristics:
The root node of the tree has the value M-N (the bar is M x N in dimension)
Suppose X-Y represents the value of any node in the tree corresponding to a single X by Y piece that is not the unit piece 1-1. Since the two children of the node represent the piece being broken along one of its dimensions, the children of X-Y either have the values A-Y and B-Y where A + B = X, or the values X-A and X-B where A + B = Y
All of the leaf nodes of the tree have the value 1-1 (the smallest possible piece)
Each node of a binary tree consists of the node value, the left subtree, and the right subtree. An empty subtree would have the value nil (or some other suitably chosen atom). A common representation of a tree would be something like btree(X-Y, LeftSubTree, RightSubTree) (the term X-Y being the value of the top node of the tree, which in this problem would be the dimensions of the piece in question). Using this scheme, the smallest piece of candy would be btree(1-1, nil, nil), for example. A set of breaks for a 2 x 1 candy bar would be btree(2-1, btree(1-1, nil, nil), btree(1-1, nil, nil)).
You can use the CLPFD library to constrain C #= A + B, A #> 0, B #> 0 and, to eliminate symmetrical cases, A #=< B.
Looking at this as an algorithm (I'm not familiar with Prolog), I can't find any variation in the number of breaks. I've tried 4x4 and been unable to come up with an answer other than 15 (either above or below); I've tried 5x2 and been unable to come up with an answer other than 9.
On this basis, I would suggest the simplest possible coding method:
while there is more than one column:
    snap off the left-most column
    while this column has more than one square:
        snap off the top square
while the remaining column has more than one square:
    snap off the top square
Depending on the situation, you may wish to change one or more of: (left, column)<->(top, row), left->right, top->bottom.
Recently I came across this question and I have no clue where or how to start solving it. Here is the question:
There are 8 statues, numbered 0 through 7. Each statue points in one of four directions: North, South, East or West. John would like to arrange the statues so that they all point in the same direction. However, John is restricted to the following 8 moves, each of which rotates every statue it lists by 90 degrees clockwise (N to E, E to S, S to W, W to N).
Moves
A: 0,1
B: 0,1,2
C: 1,4,5,6
D: 2,5
E: 3,5
F: 3,7
G: 5,7
H: 6,7
Help John figure out the fewest number of moves needed to make all the statues point in one direction.
Input: a string initialpos consisting of 8 chars. Each char is one of 'N', 'S', 'E', 'W'.
Output: an integer representing the fewest number of moves needed to arrange the statues in the same direction. If no sequence is possible, return -1.
Sample test cases:
Test case 1:
Input: SSSSSSSS
Output: 0
Explanation: All statues already point in the same direction, so it takes 0 moves.
Test case 2:
Input: WWNNNNNN
Output: 1
Explanation: John can use move A, which will make all statues point North.
Test case 3:
Input: NNSEWSWN
Output: 6
Explanation: John uses move A twice, B once, F twice, and G once. This results in all statues facing W.
The only approach I could think of was brute force. But since the moves can be applied multiple times (test case 3), what would be the limit on applying the moves before we conclude that no such arrangement is possible (i.e. output -1)? I am looking for the specific types of algorithms that can be used to solve this, and also which part of the problem is used in identifying an algorithm.
Note that the order of moves makes no difference, only the set (with repetition). Also note that making the same move 4 times is equivalent to doing nothing, so there is never any reason to make the same move more than 3 times. This reduces the space to 4^8 = 65,536 possible sequences, which isn't too terrible, but we can still do better than brute force.
The only move that treats 0 and 1 differently is C, so apply C as many times as necessary to bring 0 and 1 into alignment. We mustn't use C any more than that, and C is the only move that can rotate 4, so the remaining task is to align everything to 4.
With C fixed, the only remaining way to move 6 is H; apply H to align 6.
Now to align 3 and 7. We could do it with E and G, but we may have the option of using F as a shortcut. The optimal number of F moves is not yet clear, so we'll use E and G, and come back to F later.
Apply D to align 5.
Apply B to align 2.
Apply A to align 0 and 1.
Now revisit F, and see whether the short-cut actually saves moves. Pick the optimal number of F moves. (This is easy even by brute force, since there are only 4 possibilities to test.)
The directions with the turning operation are congruent to Z mod 4 under the successor function: encode N = 0, E = 1, S = 2, W = 3, so turning N once gives (succ 0) mod 4 = 1 = E, turning S twice gives (succ succ 2) mod 4 = 0 = N, and so on.
Each move is a vector of zeros (no change) and ones (turn by one) that is added to the input: say you have your example NNSEWSWN, which is [0, 0, 2, 1, 3, 2, 3, 0], and you push button A, which is [1, 1, 0, 0, 0, 0, 0, 0], resulting in [1, 1, 2, 1, 3, 2, 3, 0], or EESEWSWN.
Now if you do a bunch of different operations, they all add up. Thus, you can represent the whole system with this matrix equation:
(start + move_matrix * applied_moves) mod 4 = finish
where start and finish are position vectors as described above, move_matrix is the 8x8 matrix containing all the moves, and applied_moves is an 8-element vector saying how many times we push each button (each entry in the range 0..3).
From this, provided move_matrix is invertible mod 4, you can get:
applied_moves = (inverse(move_matrix) * (finish - start)) mod 4
The number of applied moves is then just:
num_applied_moves = sum((inverse(move_matrix) * (finish - start)) mod 4)
Now just plug in the four different values for finish and see which one is least.
You can use MATLAB, NumPy, Octave, APL, whatever rocks your boat, as long as it supports matrix algebra, to get your answer very quickly and very easily.
This sounds a little like homework... but I would go with this line of logic.
Run a loop computing how many moves it would take to make all the statues face each given direction. You would get something like allEast = 30, allWest = 5, etc. Take the lowest sum; its corresponding direction is the answer. With that mindset it's pretty easy to build an algorithm to handle the computation.
Brute force could work. A move applied 4 times is the same as not applying the move at all, so each move can only be applied 0, 1, 2, or 3 times.
The order of the moves does not matter: move a followed by b is the same as b followed by a.
So there are only 4^8 = 65536 possible combinations of moves.
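Putting those two observations together gives a direct brute-force sketch in Python; the move table is transcribed from the question, and the function name is mine:

from itertools import product

MOVES = [(0, 1), (0, 1, 2), (1, 4, 5, 6), (2, 5),
         (3, 5), (3, 7), (5, 7), (6, 7)]

def fewest_moves(initial):
    dirs = {'N': 0, 'E': 1, 'S': 2, 'W': 3}
    start = [dirs[c] for c in initial]
    best = None
    for counts in product(range(4), repeat=8):  # each move used 0..3 times
        statues = start[:]
        for move, times in zip(MOVES, counts):
            for i in move:
                statues[i] = (statues[i] + times) % 4
        if len(set(statues)) == 1:  # all statues aligned
            total = sum(counts)
            if best is None or total < best:
                best = total
    return -1 if best is None else best

print(fewest_moves("NNSEWSWN"))  # 6, matching the sample test case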
A general solution is to note that there are only 4^8 = 64k different configurations. Each move can therefore be represented as a table of 64k 2-byte indices taking one configuration to the next. The 2 bytes are divided into 8 2-bit fields: 0 = N, 1 = E, 2 = S, 3 = W. Further, we can use one more table of bits to say which of the 64k configurations have all statues pointing in the same direction.
These tables don't need to be computed at run time. They can be preprocessed while writing the program and stored.
Let table A[c] give the configuration resulting from applying move A to configuration c, and let Z[c] return true iff c is a successful config.
So we can use a kind of BFS:
1. Let C be a set of configurations, initially C = { s } where s is the starting config
2. Let n = 0
3. If Z[c] is true for any c in C, return n
4. Let C' be the result of applying A, B, ... H to each element of C.
5. Set C = C', n = n + 1 and go to 3
C can theoretically grow to be almost 64k in size, but the bigger it gets, the better the chances of success, so this ought to be quite fast in practice.
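A compact Python sketch of this BFS, with tuples standing in for the precomputed tables; it also tracks a visited set so that unsolvable inputs terminate and return -1, which the outline above leaves implicit:

MOVES = [(0, 1), (0, 1, 2), (1, 4, 5, 6), (2, 5),
         (3, 5), (3, 7), (5, 7), (6, 7)]

def apply_move(config, move):
    statues = list(config)
    for i in move:
        statues[i] = (statues[i] + 1) % 4  # rotate each listed statue 90 degrees
    return tuple(statues)

def min_moves(initial):
    dirs = {'N': 0, 'E': 1, 'S': 2, 'W': 3}
    start = tuple(dirs[c] for c in initial)
    frontier, seen, n = {start}, {start}, 0
    while frontier:
        if any(len(set(c)) == 1 for c in frontier):  # Z[c]: all statues aligned
            return n
        frontier = {apply_move(c, m) for c in frontier for m in MOVES} - seen
        seen |= frontier
        n += 1
    return -1

print(min_moves("NNSEWSWN"))  # 6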