Optimized TSP Algorithms

I am interested in ways to improve or come up with algorithms that are able to solve the Travelling salesman problem for about n = 100 to 200 cities.
The Wikipedia link I gave lists various optimizations, but it does so at a pretty high level, and I don't know how to go about actually implementing them in code.
There are industrial-strength solvers out there, such as Concorde, but those are way too complex for what I want, and the classic solutions that flood the searches for TSP all present either randomized algorithms or the classic backtracking or dynamic programming algorithms that only work for about 20 cities.
So, does anyone know how to implement a simple TSP solver (by simple I mean that an implementation doesn't take more than 100-200 lines of code) that works in reasonable time (a few seconds) for at least 100 cities? I am only interested in exact solutions.
You may assume that the input will be randomly generated, so I don't care about inputs that are aimed specifically at breaking a certain algorithm.

200 lines and no libraries is a tough constraint. The advanced solvers use branch and bound with the Held–Karp relaxation, and I'm not sure if even the most basic version of that would fit into 200 normal lines. Nevertheless, here's an outline.
Held–Karp
One way to write TSP as an integer program is as follows (Dantzig, Fulkerson, Johnson). For every edge e, the constant w_e denotes the length of edge e, and the variable x_e is 1 if edge e is on the tour and 0 otherwise. For every subset S of vertices, ∂(S) denotes the edges connecting a vertex in S with a vertex not in S.
minimize Σ_{edges e} w_e x_e
subject to
1. for every vertex v, Σ_{e ∈ ∂({v})} x_e = 2
2. for every nonempty proper subset S of vertices, Σ_{e ∈ ∂(S)} x_e ≥ 2
3. for every edge e in E, x_e ∈ {0, 1}
Condition 1 ensures that the set of edges is a collection of tours. Condition 2 ensures that there's only one. (Otherwise, let S be the set of vertices visited by one of the tours; since S is a nonempty proper subset of the vertices, constraint 2 is violated.) The Held–Karp relaxation is obtained by replacing constraint

3. for every edge e in E, x_e ∈ {0, 1}

with

3′. for every edge e in E, 0 ≤ x_e ≤ 1
Held–Karp is a linear program but it has an exponential number of constraints. One way to solve it is to introduce Lagrange multipliers and then do subgradient optimization. That boils down to a loop that computes a minimum spanning tree and then updates some vectors, but the details are sort of involved. Besides "Held–Karp" and "subgradient (descent|optimization)", "1-tree" is another useful search term.
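To make the shape of that loop concrete, here is a rough Python sketch. It is only a sketch under simplifying assumptions: the step-size schedule is an ad hoc geometric decay rather than the published Held–Karp schedule, and one_tree is a plain Prim-based helper.

def held_karp_bound(cost, iterations=1000):
    n = len(cost)
    pi = [0.0] * n                     # Lagrange multipliers, one per vertex
    best, step = float("-inf"), 1.0
    for _ in range(iterations):
        adjusted = [[cost[i][j] + pi[i] + pi[j] for j in range(n)]
                    for i in range(n)]
        degree, length = one_tree(adjusted)
        best = max(best, length - 2.0 * sum(pi))   # the Held-Karp lower bound
        if all(d == 2 for d in degree):
            break                      # the 1-tree is a tour, so the bound is tight
        for v in range(n):             # nudge pi toward degree 2 everywhere
            pi[v] += step * (degree[v] - 2)
        step *= 0.95
    return best

def one_tree(adj):
    # Minimum spanning tree on vertices 1..n-1 (Prim), plus the two
    # cheapest edges at vertex 0. Returns (vertex degrees, total length).
    n = len(adj)
    in_tree = [False] * n
    in_tree[1] = True
    min_cost = adj[1][:]
    parent = [1] * n
    degree = [0] * n
    length = 0.0
    for _ in range(n - 2):
        v = min((u for u in range(1, n) if not in_tree[u]),
                key=lambda u: min_cost[u])
        in_tree[v] = True
        length += min_cost[v]
        degree[v] += 1
        degree[parent[v]] += 1
        for u in range(1, n):
            if not in_tree[u] and adj[v][u] < min_cost[u]:
                min_cost[u] = adj[v][u]
                parent[u] = v
    for u in sorted(range(1, n), key=lambda u: adj[0][u])[:2]:
        length += adj[0][u]            # the two cheapest edges at vertex 0
        degree[0] += 1
        degree[u] += 1
    return degree, length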
(A slower alternative is to write an LP solver and introduce subtour constraints as they are violated by previous optima. This means writing an LP solver and a min-cut procedure, which is also more code, but it might extend better to more exotic TSP constraints.)
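For a feel of how little code that cutting-plane loop needs once you borrow an LP solver, here is a hedged Python sketch. It assumes scipy is available (scipy.optimize.linprog stands in for the hand-written LP solver) and uses connected components of the LP support graph as a crude stand-in for a true min-cut separation routine, so it only detects the integral subtour violations; the value it returns is nevertheless a valid lower bound on the optimal tour.

import itertools
import numpy as np
from scipy.optimize import linprog

def tsp_lp_lower_bound(dist):
    # dist: symmetric n x n distance matrix; returns an LP lower bound on OPT
    n = len(dist)
    edges = list(itertools.combinations(range(n), 2))
    c = np.array([dist[i][j] for i, j in edges], dtype=float)
    A_eq = np.zeros((n, len(edges)))   # degree constraints: two edges per vertex
    for k, (i, j) in enumerate(edges):
        A_eq[i, k] = A_eq[j, k] = 1.0
    b_eq = np.full(n, 2.0)
    cut_rows, cut_rhs = [], []
    while True:
        res = linprog(c,
                      A_ub=np.array(cut_rows) if cut_rows else None,
                      b_ub=np.array(cut_rhs) if cut_rhs else None,
                      A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
        comps = support_components(n, edges, res.x)
        if len(comps) == 1:
            return res.fun             # no integral subtour violation remains
        S = min(comps, key=len)
        # subtour cut for S, written as: edges inside S sum to at most |S| - 1
        cut_rows.append([1.0 if i in S and j in S else 0.0 for i, j in edges])
        cut_rhs.append(len(S) - 1.0)

def support_components(n, edges, x):
    # connected components of the graph of edges with positive LP value
    adj = [[] for _ in range(n)]
    for k, (i, j) in enumerate(edges):
        if x[k] > 1e-6:
            adj[i].append(j)
            adj[j].append(i)
    seen, comps = set(), []
    for v in range(n):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                seen.add(u)
                stack.extend(adj[u])
        comps.append(comp)
    return comps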
Branch and bound
By "partial solution", I mean an partial assignment of variables to 0 or 1, where an edge assigned 1 is definitely in the tour, and an edge assigned 0 is definitely out. Evaluating Held–Karp with these side constraints gives a lower bound on the optimum tour that respects the decisions already made (an extension).
Branch and bound maintains a set of partial solutions, at least one of which extends to an optimal solution. The pseudocode for one variant, depth-first search with best-first backtracking, is as follows.
let h be an empty minheap of partial solutions, ordered by Held–Karp value
let bestsolsofar = null
let cursol be the partial solution with no variables assigned
loop
    while cursol is not a complete solution and cursol's H–K value is at least as good as the value of bestsolsofar
        choose a branching variable v
        let sol0 be cursol union {v -> 0}
        let sol1 be cursol union {v -> 1}
        evaluate sol0 and sol1
        let cursol be the better of the two; put the other in h
    end while
    if cursol is better than bestsolsofar then
        let bestsolsofar = cursol
        delete all heap nodes worse than cursol
    end if
    if h is empty then stop; we've found the optimal solution
    pop the minimum element of h and store it in cursol
end loop
The idea of branch and bound is that there's a search tree of partial solutions, and the Held–Karp value is what prunes it. The point of solving Held–Karp is that the value of the LP is at most the length OPT of the optimal tour, but it is also conjectured to be at least 3/4 of OPT (in practice, usually closer to OPT), so it can cut off large parts of that tree early.
The one detail in the pseudocode I've left out is how to choose the branching variable. The goal is usually to make the "hard" decisions first, so fixing a variable whose value is already near 0 or 1 is probably not wise. One option is to choose the variable whose value is closest to 0.5, but there are many, many others.
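As a sketch of that 0.5 rule in Python (x here is assumed to map each edge to its Held–Karp LP value, and at least one variable is assumed fractional):

def choose_branching_variable(x, eps=1e-6):
    # keep only the genuinely fractional variables, then take the most undecided
    fractional = {e: v for e, v in x.items() if eps < v < 1.0 - eps}
    return min(fractional, key=lambda e: abs(fractional[e] - 0.5))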
EDIT
Java implementation. 198 nonblank, noncomment lines. I forgot that 1-trees don't work with assigning variables to 1, so I branch by finding a vertex whose degree in the 1-tree is more than 2 and deleting each of its edges in turn. This program accepts TSPLIB instances in EUC_2D format, e.g., eil51.tsp, eil76.tsp, eil101.tsp, and lin105.tsp from http://www2.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/.
// simple exact TSP solver based on branch-and-bound/Held--Karp
import java.io.*;
import java.util.*;
import java.util.regex.*;
public class TSP {
// number of cities
private int n;
// city locations
private double[] x;
private double[] y;
// cost matrix
private double[][] cost;
// matrix of adjusted costs
private double[][] costWithPi;
Node bestNode = new Node();
public static void main(String[] args) throws IOException {
// read the input in TSPLIB format
// assume TYPE: TSP, EDGE_WEIGHT_TYPE: EUC_2D
// no error checking
TSP tsp = new TSP();
tsp.readInput(new InputStreamReader(System.in));
tsp.solve();
}
public void readInput(Reader r) throws IOException {
BufferedReader in = new BufferedReader(r);
Pattern specification = Pattern.compile("\\s*([A-Z_]+)\\s*(:\\s*([0-9]+))?\\s*");
Pattern data = Pattern.compile("\\s*([0-9]+)\\s+([-+.0-9Ee]+)\\s+([-+.0-9Ee]+)\\s*");
String line;
while ((line = in.readLine()) != null) {
Matcher m = specification.matcher(line);
if (!m.matches()) continue;
String keyword = m.group(1);
if (keyword.equals("DIMENSION")) {
n = Integer.parseInt(m.group(3));
cost = new double[n][n];
} else if (keyword.equals("NODE_COORD_SECTION")) {
x = new double[n];
y = new double[n];
for (int k = 0; k < n; k++) {
line = in.readLine();
m = data.matcher(line);
m.matches();
int i = Integer.parseInt(m.group(1)) - 1;
x[i] = Double.parseDouble(m.group(2));
y[i] = Double.parseDouble(m.group(3));
}
// TSPLIB distances are rounded to the nearest integer to avoid the sum of square roots problem
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) {
double dx = x[i] - x[j];
double dy = y[i] - y[j];
cost[i][j] = Math.rint(Math.sqrt(dx * dx + dy * dy));
}
}
}
}
}
public void solve() {
bestNode.lowerBound = Double.MAX_VALUE;
Node currentNode = new Node();
currentNode.excluded = new boolean[n][n];
costWithPi = new double[n][n];
computeHeldKarp(currentNode);
PriorityQueue<Node> pq = new PriorityQueue<Node>(11, new NodeComparator());
do {
do {
boolean isTour = true;
int i = -1;
for (int j = 0; j < n; j++) {
if (currentNode.degree[j] > 2 && (i < 0 || currentNode.degree[j] < currentNode.degree[i])) i = j;
}
if (i < 0) {
if (currentNode.lowerBound < bestNode.lowerBound) {
bestNode = currentNode;
System.err.printf("%.0f", bestNode.lowerBound);
}
break;
}
System.err.printf(".");
PriorityQueue<Node> children = new PriorityQueue<Node>(11, new NodeComparator());
children.add(exclude(currentNode, i, currentNode.parent[i]));
for (int j = 0; j < n; j++) {
if (currentNode.parent[j] == i) children.add(exclude(currentNode, i, j));
}
currentNode = children.poll();
pq.addAll(children);
} while (currentNode.lowerBound < bestNode.lowerBound);
System.err.printf("%n");
currentNode = pq.poll();
} while (currentNode != null && currentNode.lowerBound < bestNode.lowerBound);
// output suitable for gnuplot
// set style data vector
System.out.printf("# %.0f%n", bestNode.lowerBound);
int j = 0;
do {
int i = bestNode.parent[j];
System.out.printf("%f\t%f\t%f\t%f%n", x[j], y[j], x[i] - x[j], y[i] - y[j]);
j = i;
} while (j != 0);
}
private Node exclude(Node node, int i, int j) {
Node child = new Node();
child.excluded = node.excluded.clone();
child.excluded[i] = node.excluded[i].clone();
child.excluded[j] = node.excluded[j].clone();
child.excluded[i][j] = true;
child.excluded[j][i] = true;
computeHeldKarp(child);
return child;
}
private void computeHeldKarp(Node node) {
node.pi = new double[n];
node.lowerBound = Double.MIN_VALUE;
node.degree = new int[n];
node.parent = new int[n];
double lambda = 0.1;
while (lambda > 1e-06) {
double previousLowerBound = node.lowerBound;
computeOneTree(node);
if (!(node.lowerBound < bestNode.lowerBound)) return;
if (!(node.lowerBound < previousLowerBound)) lambda *= 0.9;
int denom = 0;
for (int i = 1; i < n; i++) {
int d = node.degree[i] - 2;
denom += d * d;
}
if (denom == 0) return;
double t = lambda * node.lowerBound / denom;
for (int i = 1; i < n; i++) node.pi[i] += t * (node.degree[i] - 2);
}
}
private void computeOneTree(Node node) {
// compute adjusted costs
node.lowerBound = 0.0;
Arrays.fill(node.degree, 0);
for (int i = 0; i < n; i++) {
for (int j = 0; j < n; j++) costWithPi[i][j] = node.excluded[i][j] ? Double.MAX_VALUE : cost[i][j] + node.pi[i] + node.pi[j];
}
int firstNeighbor;
int secondNeighbor;
// find the two cheapest edges from 0
if (costWithPi[0][2] < costWithPi[0][1]) {
firstNeighbor = 2;
secondNeighbor = 1;
} else {
firstNeighbor = 1;
secondNeighbor = 2;
}
for (int j = 3; j < n; j++) {
if (costWithPi[0][j] < costWithPi[0][secondNeighbor]) {
if (costWithPi[0][j] < costWithPi[0][firstNeighbor]) {
secondNeighbor = firstNeighbor;
firstNeighbor = j;
} else {
secondNeighbor = j;
}
}
}
addEdge(node, 0, firstNeighbor);
Arrays.fill(node.parent, firstNeighbor);
node.parent[firstNeighbor] = 0;
// compute the minimum spanning tree on nodes 1..n-1
double[] minCost = costWithPi[firstNeighbor].clone();
for (int k = 2; k < n; k++) {
int i;
for (i = 1; i < n; i++) {
if (node.degree[i] == 0) break;
}
for (int j = i + 1; j < n; j++) {
if (node.degree[j] == 0 && minCost[j] < minCost[i]) i = j;
}
addEdge(node, node.parent[i], i);
for (int j = 1; j < n; j++) {
if (node.degree[j] == 0 && costWithPi[i][j] < minCost[j]) {
minCost[j] = costWithPi[i][j];
node.parent[j] = i;
}
}
}
addEdge(node, 0, secondNeighbor);
node.parent[0] = secondNeighbor;
node.lowerBound = Math.rint(node.lowerBound);
}
private void addEdge(Node node, int i, int j) {
double q = node.lowerBound;
node.lowerBound += costWithPi[i][j];
node.degree[i]++;
node.degree[j]++;
}
}
class Node {
public boolean[][] excluded;
// Held--Karp solution
public double[] pi;
public double lowerBound;
public int[] degree;
public int[] parent;
}
class NodeComparator implements Comparator<Node> {
public int compare(Node a, Node b) {
return Double.compare(a.lowerBound, b.lowerBound);
}
}

If your graph satisfies the triangle inequality and you want a guarantee of being within 3/2 of the optimum, I suggest the Christofides algorithm. I've written an implementation in PHP at phpclasses.org.

As of 2013, it is possible to solve for 100 cities using only the exact formulation in CPLEX. Add degree equations for each vertex, but include subtour-avoiding constraints only as they appear. Most of them are not necessary. CPLEX has an example on this.
You should be able to solve for 100 cities. You will have to iterate every time a new subtour is found. I ran an example here, and a couple of minutes and 100 iterations later I got my results.

I took the Held–Karp algorithm from the Concorde library, and 25 cities are solved in 0.15 seconds. This performance is perfectly good for me! You can extract the Held–Karp code (written in ANSI C) from the Concorde library: http://www.math.uwaterloo.ca/tsp/concorde/downloads/downloads.htm. If the download has the extension gz, it should be tgz; you might need to rename it. Then you should make small adjustments to port it to VC++. First take the files heldkarp.h and heldkarp.c (rename it to .cpp) and about 5 other files, make adjustments, and it should work, calling CCheldkarp_small(...) with edgelen: euclid_ceiling_edgelen.

TSP is an NP-hard problem. (As far as we know) there is no algorithm for NP-hard problems which runs in polynomial time, so you are asking for something that doesn't exist.
It's either fast enough to finish in a reasonable time, and then it's not exact, or it's exact, but it won't finish in your lifetime for 100 cities.

To give a dumb answer: me too. Everyone is interested in such an algorithm, but as others have already stated: it does not (yet?) exist. Especially your combination of exact, 200 nodes, a few seconds of runtime, and just 200 lines of code is impossible. You already know that it is NP-hard, and if you have the slightest sense of asymptotic behaviour, you should know that there is no way of achieving this (unless you prove that P = NP, and even that I would say is not possible). Even the exact commercial solvers need far more than a few seconds for such instances, and as you can imagine they have far more than 200 lines of code (even when you just consider their kernels).
EDIT: The wiki algorithms are the "usual suspects" of the field: linear programming and branch-and-bound. The solutions for the instances with thousands of nodes took years to compute (they just did it with very many CPUs in parallel, so they could do it faster). Some even use problem-specific knowledge for the branch-and-bound bounding step, so they are not general approaches.
Branch and bound just enumerates all possible paths (e.g., with backtracking), and once it has a solution, it uses it to stop a started recursion whenever it can prove that the result is not going to be better than the already found solution (e.g., if you have just visited 2 of your cities and the path is already longer than a found 200-city tour, you can discard all tours that start with that 2-city combination). Here you can invest very much problem-specific knowledge in the function that tells you that the path is not going to be better than the already found solution. The better it is, the fewer paths you have to look at, and the faster your algorithm is.
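For illustration, a minimal Python sketch of exactly this pruning idea, assuming dist is a full distance matrix:

def tsp_backtracking(dist):
    n = len(dist)
    best = [float("inf")]              # length of the best complete tour so far

    def extend(path, length, remaining):
        if length >= best[0]:
            return                     # bound: this prefix cannot beat the best tour
        if not remaining:
            best[0] = min(best[0], length + dist[path[-1]][path[0]])  # close the tour
            return
        for city in sorted(remaining):
            remaining.remove(city)
            extend(path + [city], length + dist[path[-1]][city], remaining)
            remaining.add(city)

    extend([0], 0.0, set(range(1, n)))
    return best[0]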
Linear programming is an optimization method for solving linear inequality problems. It works in polynomial time (simplex only in practice, but that doesn't matter here), but the solution is real-valued. When you add the constraint that the solution must be integer, the problem becomes NP-complete. For small instances it is possible, e.g., to solve the relaxation, then look at which variables of the solution violate the integrality condition and add additional inequalities to change that (this is called cutting-plane; the name comes from the fact that the inequalities define (higher-dimensional) planes, the solution space is a polytope, and by adding additional inequalities you cut something off the polytope with a plane). The topic is very complex, and even the plain simplex method is hard to understand if you don't want to dive deep into the math. There are several good books about it; one of the better ones is by Chvátal, Linear Programming, but there are several more.

I have a theory, but I've never had the time to pursue it:
The TSP is a bounding problem (a single shape where all points lie on the perimeter) where the optimal solution is the one that has the shortest perimeter.
There are plenty of simple ways to get all the points that lie on a minimum bounding perimeter (imagine a large elastic band stretched around a bunch of nails in a large board).
My theory is that if you start pushing in on the elastic band so that the length of band increases by the same amount between adjacent points on the perimeter, and each segment remains in the shape of an elliptical arc, the stretched elastic will cross points on the optimal path before crossing points on non-optimal paths. See the page on mathopenref.com on drawing ellipses, particularly steps 5 and 6. Points on the bounding perimeter can be viewed as the focal points (F1, F2) of the ellipses.
What I don't know is if the "bubble stretching" process needs to be reset after each new point is added, or if the existing "bubbles" continue to grow and each new point on the perimeter causes only the localized "bubble" to turn into two line segments. I'll leave that for you to figure out.


HMM Localization in 2D maze, trouble applying smoothing (backward algorithm)

We use HMM (Hidden Markov Model) to localize a robot in a windy maze with damaged sensors. If he attempts to move in a direction, he will do so with a high probability, and a low chance to accidentally go to either side. If his movement would make him go over an obstacle, he will bounce back to the original tile.
From any given position, he can sense in all four directions. He will notice an obstacle if it is there with high certainty, and see an obstacle when there is none with low certainty.
We have a probability map for all possible places the robot might be in the maze, since he knows what the maze looks like. Initially it all starts evenly distributed.
I have completed the motion and sensing aspect of this and am getting the proper answers, but I am stuck on smoothing (backward algorithm).
Assume that the robot performs the following sequence of actions: senses, moves, senses, moves, senses. This gives us 3 states in our HMM model. Assume that the results I have at each step of the way so far are correct.
I am having a lot of trouble performing smoothing (backward algorithm), given that there are four conditional probabilities (one for each direction).
Assume SP is for smoothing probability, BP is for backward probability
Assume Sk is for a state, and Zk is for an observation at that state. The problem for me is figuring out how to construct my backwards equation given that each Zk is only for a single direction.
I know the algorithm for smoothing is: SP(k) is proportional to P(Sk | Z1:k) * BP(k)
Where BP(k) is defined as:
if (k == n) return 1, else return Sum over next states s of P(Zk+1 | Sk+1 = s) * P(Sk+1 = s | Sk) * BP(k+1)
This is where I am having my trouble, mainly in the conditional probability portion of this equation, because each spot has four different directions that it observed! In other words, each state has four different evidence variables as opposed to just one! Do I average these values? Do I do a separate summation for them? How do I account for multiple observations at a given state and properly condense them into this equation, which only has room for one conditional probability?
Here is the code I have performing the smoothing:
public static void Smoothing(List<int[]> observations) {
int n = observations.Count; //n is Total length of evidence sequence
int k = n - 1; //k is the state we are trying to smooth. start with n-1
for (; k >= 1; k--) { //Smooth all the way back to the first state
for (int dir = 0; dir < 4; dir++) {
//We must smooth each direction separately
SmoothDirection(dir, observations, k, n);
}
Console.WriteLine($"Smoothing for k = {k}\n");
UpdateMapMotion(mapHistory[k]);
PrintMap();
}
}
public static void SmoothDirection(int dir, List<int[]> observations, int k, int n) {
var alphas = new double[ROWS, COLS];
var normalizer = 0.0;
int row, col;
foreach (var t in map) {
if (t.isObstacle) continue;
row = t.pos.y;
col = t.pos.x;
alphas[row, col] = mapHistory[k][row, col]
* Backwards(k, n, t, dir, observations, moves[^(n - k)]);
normalizer += alphas[row, col];
}
UpdateHistory(k, alphas, normalizer);
}
public static void UpdateHistory(int index, double[,] alphas, double normalizer) {
for (int r = 0; r < ROWS; r++) {
for (int c = 0; c < COLS; c++) {
mapHistory[index][r, c] = alphas[r, c] / normalizer;
}
}
}
public static double Backwards(int k, int n, Tile t, int dir, List<int[]> observations, int moveDir) {
if (k == n) return 1;
double p = 0;
var nextStates = GetPossibleNextStates(t, moveDir);
foreach (var s in nextStates) {
p += Cond_Prob(s.hasObstacle[dir], observations[^(n - k)][dir] == 1) * Trans_Prob(t, s, moveDir)
* Backwards(k+1, n, s, dir, observations, moves[^(n - k)]);
}
return p;
}
public static List<Tile> GetPossibleNextStates(Tile t, int direction) {
var tiles = new List<Tile>(); //Next States
var perpDirs = GetPerpendicularDir(direction); //Perpendicular Directions
//If obstacle in front of Tile t or on the sides, Tile t is a possible next state.
if (t.hasObstacle[direction] || t.hasObstacle[perpDirs[0]] || t.hasObstacle[perpDirs[1]])
tiles.Add(t);
//If there is no obstacle in front of Tile t, then that tile is a possible next state.
if (!t.hasObstacle[direction])
tiles.Add(GetTileAtPos(t.pos + directions[direction]));
//If there are no obstacles on the sides of Tile t, then those are possible next states.
foreach (var dir in perpDirs) {
if (!t.hasObstacle[dir])
tiles.Add(GetTileAtPos(t.pos + directions[dir]));
}
return tiles;
}
TL;DR: How do I perform smoothing (the backward algorithm) in a Hidden Markov Model when there are 4 evidence variables at each state as opposed to just 1?
SOLVED!
It was actually much simpler than I imagined.
I don't actually need to run each iteration separately for each direction.
I just need to replace the Cond_Prob() function with a Joint_Cond_Prob() function which finds the joint probability of all four directional observations at a given state.
So P(Zk | Sk) is actually P(Zk,1 ... Zk,4 | Sk), which (treating the four readings as conditionally independent given the state) is just P(Zk,1 | Sk) * P(Zk,2 | Sk) * P(Zk,3 | Sk) * P(Zk,4 | Sk).
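A small Python sketch of that replacement, with hypothetical names mirroring the C# code above, and assuming the four directional readings are conditionally independent given the state:

def joint_cond_prob(cond_prob, tile, observation):
    # cond_prob(obstacle_really_there, sensor_said_obstacle) plays the same
    # role as Cond_Prob in the C# code; observation is one int[4] entry
    p = 1.0
    for direction in range(4):
        p *= cond_prob(tile.has_obstacle[direction],
                       observation[direction] == 1)
    return p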

Find largest circle not overlapping with others using genetic algorithm

I'm using GA, so I took example from this page (http://www.ai-junkie.com/ga/intro/gat3.html) and tried to do on my own.
The problem is, it doesn't work. For example, maximum fitness does not always grow in the next generation, but becomes smallest. Also, after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more.
Here is my code connected with GA. I leave out generating random circles, decoding and drawing.
private Genome ChooseParent(Genome[] population, Random r)
{
double sumFitness = 0;
double maxFitness = 0;
for (int i = 0; i < population.Length; i++)
{
sumFitness += population[i].fitness;
if (i == 0 || maxFitness < population[i].fitness)
{
maxFitness = population[i].fitness;
}
}
sumFitness = population.Length * maxFitness - sumFitness;
double randNum = r.NextDouble() *sumFitness;
double acumulatedSum = 0;
for(int i=0;i<population.Length;i++)
{
acumulatedSum += population[i].fitness;
if(randNum<acumulatedSum)
{
return population[i];
}
}
return population[0];
}
private void Crossover(Genome parent1, Genome parent2, Genome child1, Genome child2, Random r)
{
double d=r.NextDouble();
if(d>this.crossoverRate || child1.Equals(child2))
{
for (int i = 0; i < parent1.bitNum; i++)
{
child1.bit[i] = parent1.bit[i];
child2.bit[i] = parent2.bit[i];
}
}
else
{
int cp = r.Next(parent1.bitNum - 1);
for (int i = 0; i < cp; i++)
{
child1.bit[i] = parent1.bit[i];
child2.bit[i] = parent2.bit[i];
}
for (int i = cp; i < parent1.bitNum; i++)
{
child1.bit[i] = parent2.bit[i];
child2.bit[i] = parent1.bit[i];
}
}
}
private void Mutation(Genome child, Random r)
{
for(int i=0;i<child.bitNum;i++)
{
if(r.NextDouble()<=this.mutationRate)
{
child.bit[i] = (byte)(1 - child.bit[i]);
}
}
}
public void Run()
{
for(int generation=0;generation<1000;generation++)
{
CalculateFitness(population);
System.Diagnostics.Debug.WriteLine(maxFitness);
population = population.OrderByDescending(x => x).ToArray();
//ELITIZM
Copy(population[0], newpopulation[0]);
Copy(population[1], newpopulation[1]);
for(int i=1;i<this.populationSize/2;i++)
{
Genome parent1 = ChooseParent(population, r);
Genome parent2 = ChooseParent(population, r);
Genome child1 = newpopulation[2 * i];
Genome child2 = newpopulation[2 * i + 1];
Crossover(parent1, parent2, child1, child2, r);
Mutation(child1, r);
Mutation(child2, r);
}
Genome[] tmp = population;
population = newpopulation;
newpopulation = tmp;
DekodePopulation(population); //decoding and fitness calculation for each member of population
}
}
If someone can point out the potential problem that caused such behaviour and ways to fix it, I'll be grateful.
Welcome to the world of genetic algorithms!
I'll go through your issues and suggest a potential problem. Here we go:
maximum fitness does not always grow in the next generation, but becomes smallest - You probably meant smaller. This is weird since you employed elitism, so each generation's best individual should be at least as good as in the previous one. I suggest you check your code for mistakes, because this really should not happen. However, the fitness does not need to always grow. It is impossible to achieve this in a GA - it's a stochastic algorithm, working with randomness - suppose that, by chance, no mutation or crossover happens in a generation - then the fitness cannot improve in the next generation since there is no change.
after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more. - This is (probably) a sign of a phenomenon called premature convergence, and it is, unfortunately, a "normal" thing in genetic algorithms. Premature convergence is a situation when the whole population converges to a single solution or to a set of solutions which are near each other and which is/are sub-optimal (i.e. not the best possible solution). When this happens, the GA has a very hard time escaping this local optimum. You can try to tweak the parameters, especially the mutation probability, to force more exploration.
Also, another very important thing that can cause problems is the encoding, i.e. how the bit string is mapped to the circle. If the encoding is too indirect, it can lead to poor performance of the GA. GAs work when there are building blocks in the genotype which can be exchanged among the population. If there are no such blocks, the performance of a GA is usually going to be poor.
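For reference, here is a textbook fitness-proportionate (roulette-wheel) selection sketch in Python, assuming non-negative fitness values; comparing it against the ChooseParent method above may help locate the suspected mistake:

import random

def choose_parent(population, fitness):
    total = sum(fitness)
    r = random.uniform(0.0, total)        # a point on the roulette wheel
    accumulated = 0.0
    for individual, f in zip(population, fitness):
        accumulated += f
        if r <= accumulated:
            return individual
    return population[-1]                 # guard against floating-point round-off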
I have implemented this exercise and achieved good results. Here is the link:
https://github.com/ManhTruongDang/ai-junkie
Hope this can be of use to you.

The Maximum Volume of Trapped Rain Water in 3D

A classic algorithm question in 2D version is typically described as
Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after raining.
For example, Given the input
[0,1,0,2,1,0,1,3,2,1,2,1]
the return value would be
6
The algorithm that I used to solve the above 2D problem is
int trapWaterVolume2D(vector<int> A) {
int n = A.size();
vector<int> leftmost(n, 0), rightmost(n, 0);
//left exclusive scan, O(n), the highest bar to the left each point
int leftMaxSoFar = 0;
for (int i = 0; i < n; i++){
leftmost[i] = leftMaxSoFar;
if (A[i] > leftMaxSoFar) leftMaxSoFar = A[i];
}
//right exclusive scan, O(n), the highest bar to the right of each point
int rightMaxSoFar = 0;
for (int i = n - 1; i >= 0; i--){
rightmost[i] = rightMaxSoFar;
if (A[i] > rightMaxSoFar) rightMaxSoFar = A[i];
}
// Summation, O(n)
int vol = 0;
for (int i = 0; i < n; i++){
vol += max(0, min(leftmost[i], rightmost[i]) - A[i]);
}
return vol;
}
My question is how to make the above algorithm extensible to the 3D version of the problem, to compute the maximum volume of water trapped in real-world 3D terrain, i.e., to implement
int trapWaterVolume3D(vector<vector<int> > A);
Sample graph: (figure not reproduced)
We know the elevation at each (x, y) point and the goal is to compute the maximum volume of water that can be trapped in the shape. Any thoughts and references are welcome.
For each point on the terrain consider all paths from that point to the border of the terrain. The level of water would be the minimum of the maximum heights of the points of those paths. To find it we need to perform a slightly modified Dijkstra's algorithm, filling the water level matrix starting from the border.
For every point on the border, set the water level to the point height
For every point not on the border, set the water level to infinity
Put every point on the border into the set of active points
While the set of active points is not empty:
    Select the active point P with minimum level
    Remove P from the set of active points
    For every point Q adjacent to P:
        Level(Q) = max(Height(Q), min(Level(Q), Level(P)))
        If Level(Q) was changed:
            Add Q to the set of active points
user3290797's "slightly modified Dijkstra algorithm" is closer to Prim's algorithm than Dijkstra's. In minimum spanning tree terms, we prepare a graph with one vertex per tile, one vertex for the outside, and edges with weights equal to the maximum height of their two adjoining tiles (the outside has height "minus infinity").
Given a path in this graph to the outside vertex, the maximum weight of an edge in the path is the height that the water has to reach in order to escape along that path. The relevant property of a minimum spanning tree is that, for every pair of vertices, the maximum weight of an edge in the path in the spanning tree is the minimum possible among all paths between those vertices. The minimum spanning tree thus describes the most economical escape paths for water, and the water heights can be extracted in linear time with one traversal.
As a bonus, since the graph is planar, there's a linear-time algorithm for computing the minimum spanning tree, consisting of alternating Boruvka passes and simplifications. This improves on the O(n log n) running time of Prim.
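As a sketch of that linear-time extraction, assuming the minimum spanning tree is given as an adjacency list of (neighbor, edge weight) pairs and outside is the extra vertex:

def water_levels(tree, outside):
    # one traversal from the outside vertex; each tile's water level is the
    # maximum edge weight on its tree path to the outside (the bottleneck)
    level = {outside: float("-inf")}
    stack = [outside]
    while stack:
        u = stack.pop()
        for v, w in tree[u]:
            if v not in level:
                level[v] = max(level[u], w)
                stack.append(v)
    return level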
This problem can be solved using the Priority-Flood algorithm. It's been discovered and published a number of times over the past few decades (and again by other people answering this question), though the specific variant you're looking for is not, to my knowledge, in the literature.
You can find a review paper of the algorithm and its variants here. Since that paper was published an even faster variant has been discovered (link), as well as methods to perform this calculation on datasets of trillions of cells (link). A method for selectively breaching low/narrow divides is discussed here. Contact me if you'd like copies of any of these papers.
I have a repository here with many of the above variants; additional implementations can be found here.
A simple script to calculate volume using the RichDEM library is as follows:
#include "richdem/common/version.hpp"
#include "richdem/common/router.hpp"
#include "richdem/depressions/Lindsay2016.hpp"
#include "richdem/common/Array2D.hpp"
/**
@brief Calculates the volume of depressions in a DEM
@author Richard Barnes (rbarnes@umn.edu)
Priority-Flood starts on the edges of the DEM and then works its way inwards
using a priority queue to determine the lowest cell which has a path to the
edge. The neighbours of this cell are added to the priority queue if they
are higher. If they are lower, then they are members of a depression and the
elevation of the flooding minus the elevation of the DEM times the cell area
is the flooded volume of the cell. The cell is flooded, total volume
tracked, and the neighbors are then added to a "depressions" queue which is
used to flood depressions. Cells which are higher than a depression being
filled are added to the priority queue. In this way, depressions are filled
without incurring the expense of the priority queue.
@param[in,out] &elevations A grid of cell elevations
@pre
1. **elevations** contains the elevations of every cell or a value _NoData_
for cells not part of the DEM. Note that the _NoData_ value is assumed to
be a negative number less than any actual data value.
@return
Returns the total volume of the flooded depressions.
@correctness
The correctness of this command is determined by inspection. (TODO)
*/
template <class elev_t>
double improved_priority_flood_volume(const Array2D<elev_t> &elevations){
GridCellZ_pq<elev_t> open;
std::queue<GridCellZ<elev_t> > pit;
uint64_t processed_cells = 0;
uint64_t pitc = 0;
ProgressBar progress;
std::cerr<<"\nPriority-Flood (Improved) Volume"<<std::endl;
std::cerr<<"\nC Barnes, R., Lehman, C., Mulla, D., 2014. Priority-flood: An optimal depression-filling and watershed-labeling algorithm for digital elevation models. Computers & Geosciences 62, 117–127. doi:10.1016/j.cageo.2013.04.024"<<std::endl;
std::cerr<<"p Setting up boolean flood array matrix..."<<std::endl;
//Used to keep track of which cells have already been considered
Array2D<int8_t> closed(elevations.width(),elevations.height(),false);
std::cerr<<"The priority queue will require approximately "
<<(elevations.width()*2+elevations.height()*2)*((long)sizeof(GridCellZ<elev_t>))/1024/1024
<<"MB of RAM."
<<std::endl;
std::cerr<<"p Adding cells to the priority queue..."<<std::endl;
//Add all cells on the edge of the DEM to the priority queue
for(int x=0;x<elevations.width();x++){
open.emplace(x,0,elevations(x,0) );
open.emplace(x,elevations.height()-1,elevations(x,elevations.height()-1) );
closed(x,0)=true;
closed(x,elevations.height()-1)=true;
}
for(int y=1;y<elevations.height()-1;y++){
open.emplace(0,y,elevations(0,y) );
open.emplace(elevations.width()-1,y,elevations(elevations.width()-1,y) );
closed(0,y)=true;
closed(elevations.width()-1,y)=true;
}
double volume = 0;
std::cerr<<"p Performing the improved Priority-Flood..."<<std::endl;
progress.start( elevations.size() );
while(open.size()>0 || pit.size()>0){
GridCellZ<elev_t> c;
if(pit.size()>0){
c=pit.front();
pit.pop();
} else {
c=open.top();
open.pop();
}
processed_cells++;
for(int n=1;n<=8;n++){
int nx=c.x+dx[n];
int ny=c.y+dy[n];
if(!elevations.inGrid(nx,ny)) continue;
if(closed(nx,ny))
continue;
closed(nx,ny)=true;
if(elevations(nx,ny)<=c.z){
if(elevations(nx,ny)<c.z){
++pitc;
volume += (c.z-elevations(nx,ny))*std::abs(elevations.getCellArea());
}
pit.emplace(nx,ny,c.z);
} else
open.emplace(nx,ny,elevations(nx,ny));
}
progress.update(processed_cells);
}
std::cerr<<"t Succeeded in "<<std::fixed<<std::setprecision(1)<<progress.stop()<<" s"<<std::endl;
std::cerr<<"m Cells processed = "<<processed_cells<<std::endl;
std::cerr<<"m Cells in pits = " <<pitc <<std::endl;
return volume;
}
template<class T>
int PerformAlgorithm(std::string analysis, Array2D<T> elevations){
elevations.loadData();
std::cout<<"Volume: "<<improved_priority_flood_volume(elevations)<<std::endl;
return 0;
}
int main(int argc, char **argv){
std::string analysis = PrintRichdemHeader(argc,argv);
if(argc!=2){
std::cerr<<argv[0]<<" <Input>"<<std::endl;
return -1;
}
return PerformAlgorithm(argv[1],analysis);
}
It should be straightforward to adapt this to whatever 2D array format you are using.
In pseudocode, the following is equivalent to the foregoing:
Let PQ be a priority queue which always pops the cell of lowest elevation
Let Closed be a boolean array, initially set to False
Let Volume = 0
Add all the border cells to PQ.
For each border cell, set the cell's entry in Closed to True.
While PQ is not empty:
    Pop the lowest cell from PQ; call it C.
    For each neighbor N of C:
        If Closed(N): continue
        Set Closed(N) = True
        If Elevation(N) < Elevation(C):
            Volume += (Elevation(C) - Elevation(N)) * Area
            Add N to PQ, but with priority Elevation(C)
        Else:
            Add N to PQ with priority Elevation(N)
This problem is very close to the construction of the morphological watershed of a grayscale image.
One approach is as follows (flooding process):
sort all pixels by increasing elevation.
work incrementally, by increasing elevation, assigning labels to the pixels per catchment basin.
for a new elevation level, you need to label a new set of pixels:
Some have no labeled neighbor; they form a local minimum configuration and begin a new catchment basin.
Some have only neighbors with the same label; they can be labeled similarly (they extend a catchment basin).
Some have neighbors with different labels. They do not belong to a specific catchment basin and they define the watershed lines.
You will need to enhance the standard watershed algorithm to be able to compute the volume of water. You can do that by determining the maximum water level in each basin and deducing the ground height for every pixel. The water level in a basin is given by the elevation of the lowest watershed pixel around it.
You can act every time you discover a watershed pixel: if a neighboring basin has not been assigned a level yet, that basin can stand the current level without leaking.
To solve the trapped rain water problem in 3D, i.e., to calculate the maximum volume of trapped rain water, you can do something like this:
#include<bits/stdc++.h>
using namespace std;
#define MAX 10
int new2d[MAX][MAX];
int dp[MAX][MAX],visited[MAX][MAX];
int dx[] = {1,0,-1,0};
int dy[] = {0,-1,0,1};
int boundedBy(int i,int j,int k,int in11,int in22)
{
if(i<0 || j<0 || i>=in11 || j>=in22)
return 0;
if(new2d[i][j]>k)
return new2d[i][j];
if(visited[i][j]) return INT_MAX;
visited[i][j] = 1;
int r = INT_MAX;
for(int dir = 0 ; dir<4 ; dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
r = min(r,boundedBy(nx,ny,k,in11,in22));
}
return r;
}
void mark(int i,int j,int k,int in1,int in2)
{
if(i<0 || j<0 || i>=in1 || j>=in2)
return;
if(new2d[i][j]>=k)
return;
if(visited[i][j]) return ;
visited[i][j] = 1;
for(int dir = 0;dir<4;dir++)
{
int nx = i + dx[dir];
int ny = j + dy[dir];
mark(nx,ny,k,in1,in2);
}
dp[i][j] = max(dp[i][j],k);
}
struct node
{
int i,j,key;
node(int x,int y,int k)
{
i = x;
j = y;
key = k;
}
};
bool compare(node a,node b)
{
return a.key>b.key;
}
vector<node> store;
int getData(int input1, int input2, int input3[])
{
int row=input1;
int col=input2;
int temp=0;
int count=0;
for(int i=0;i<row;i++)
{
for(int j=0;j<col;j++)
{
if(count==(col*row))
break;
new2d[i][j]=input3[count];
count++;
}
}
store.clear();
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
store.push_back(node(i,j,new2d[i][j]));
}
}
memset(dp,0,sizeof(dp));
sort(store.begin(),store.end(),compare);
for(int i = 0;i<store.size();i++)
{
memset(visited,0,sizeof(visited));
int aux = boundedBy(store[i].i,store[i].j,store[i].key,input1,input2);
if(aux>store[i].key)
{
memset(visited,0,sizeof(visited));
mark(store[i].i,store[i].j,aux,input1,input2);
}
}
long long result =0 ;
for(int i = 0;i<input1;i++)
{
for(int j = 0;j<input2;j++)
{
result = result + max(0,dp[i][j]-new2d[i][j]);
}
}
return result;
}
int main()
{
cin.sync_with_stdio(false);
cout.sync_with_stdio(false);
int n,m;
cin>>n>>m;
int inp3[n*m];
store.clear();
for(int j = 0;j<n*m;j++)
{
cin>>inp3[j];
}
int k = getData(n,m,inp3);
cout<<k;
return 0;
}
class Solution(object):
def trapRainWater(self, heightMap):
"""
:type heightMap: List[List[int]]
:rtype: int
"""
m = len(heightMap)
if m == 0:
return 0
n = len(heightMap[0])
if n == 0:
return 0
visited = [[False for i in range(n)] for j in range(m)]
from Queue import PriorityQueue
q = PriorityQueue()
for i in range(m):
visited[i][0] = True
q.put([heightMap[i][0],i,0])
visited[i][n-1] = True
q.put([heightMap[i][n-1],i,n-1])
for j in range(1, n-1):
visited[0][j] = True
q.put([heightMap[0][j],0,j])
visited[m-1][j] = True
q.put([heightMap[m-1][j],m-1,j])
S = 0
while not q.empty():
cell = q.get()
for (i, j) in [(1,0), (-1,0), (0,1), (0,-1)]:
x = cell[1] + i
y = cell[2] + j
if x in range(m) and y in range(n) and not visited[x][y]:
S += max(0, cell[0] - heightMap[x][y]) # how much water at the cell
q.put([max(heightMap[x][y],cell[0]),x,y])
visited[x][y] = True
return S
Here is simple code for the 2D version of the problem:
#include<iostream>
using namespace std;
int main()
{
int n,count=0,a[100];
cin>>n;
for(int i=0;i<n;i++)
{
cin>>a[i];
}
for(int i=1;i<n-1;i++)
{
///compute the largest bar to the left and the largest bar to the right of position i
int leftmax=0;
int rightmax=0;
///left most largest
for(int j=i-1;j>=0;j--)
{
if(a[j]>leftmax)
{
leftmax=a[j];
}
}
///rightmost largest
for(int k=i+1;k<=n-1;k++)
{
if(a[k]>rightmax)
{
rightmax=a[k];
}
}
///computing the height of the water contained
int x=(min(rightmax,leftmax)-a[i]);
if(x>0)
{
count=count+x;
}
}
cout<<count;
return 0;
}

Shuffle list, ensuring that no item remains in same position

I want to shuffle a list of unique items, but not do an entirely random shuffle. I need to be sure that no element in the shuffled list is at the same position as in the original list. Thus, if the original list is (A, B, C, D, E), this result would be OK: (C, D, B, E, A), but this one would not: (C, E, A, D, B) because "D" is still the fourth item. The list will have at most seven items. Extreme efficiency is not a consideration. I think this modification to Fisher/Yates does the trick, but I can't prove it mathematically:
function shuffle(data) {
for (var i = 0; i < data.length - 1; i++) {
var j = i + 1 + Math.floor(Math.random() * (data.length - i - 1));
var temp = data[j];
data[j] = data[i];
data[i] = temp;
}
}
You are looking for a derangement of your entries.
First of all, your algorithm works in the sense that it outputs a random derangement, i.e. a permutation with no fixed point. However, it has an enormous flaw (which you might not mind, but is worth keeping in mind): some derangements cannot be obtained with your algorithm. In other words, it gives probability zero to some possible derangements, so the resulting distribution is definitely not uniformly random.
One possible solution, as suggested in the comments, would be to use a rejection algorithm:
pick a permutation uniformly at random
if it has no fixed points, return it
otherwise retry
Asymptotically, the probability of obtaining a derangement is close to 1/e ≈ 0.3679 (as seen in the Wikipedia article). This means that to obtain a derangement you will need to generate an average of e ≈ 2.718 permutations, which is quite costly.
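That rejection method is only a few lines in Python, using random.shuffle as the uniform (Fisher-Yates) generator:

import random

def random_derangement(items):
    original = list(items)
    shuffled = list(items)
    while True:
        random.shuffle(shuffled)                  # uniform random permutation
        if all(a != b for a, b in zip(shuffled, original)):
            return shuffled                       # no fixed point: a derangement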
A better way to do that would be to reject at each step of the algorithm. In pseudocode, something like this (assuming the original array contains i at position i, i.e. a[i] == i):
for (i = 1 to n-1) {
    do {
        j = rand(i, n) // random integer from i to n inclusive
    } while (a[j] == i) // rejection part: placing a[j] at position i must not create a fixed point
    swap a[i] a[j]
}
The main difference from your algorithm is that we allow j to be equal to i, but only if it does not produce a fixed point. It is slightly longer to execute (due to the rejection part), and demands that you be able to check if an entry is at its original place or not, but it has the advantage that it can produce every possible derangement (uniformly, for that matter).
I am guessing non-rejection algorithms should exist, but I would believe them to be less straightforward.
Edit:
My algorithm is actually bad: you still have a chance of ending with the last point unshuffled, and the distribution is not random at all; see the marginal distributions of a simulation (plot not reproduced).
An algorithm that produces uniformly distributed derangements can be found here, with some context on the problem, thorough explanations and analysis.
Second Edit:
Actually your algorithm is known as Sattolo's algorithm, and is known to produce all cycles with equal probability. So any derangement which is not a cycle but a product of several disjoint cycles cannot be obtained with the algorithm. For example, with four elements, the permutation that exchanges 1 and 2, and 3 and 4 is a derangement but not a cycle.
If you don't mind obtaining only cycles, then Sattolo's algorithm is the way to go, it's actually much faster than any uniform derangement algorithm, since no rejection is needed.
As @FelixCQ has mentioned, the shuffles you are looking for are called derangements. Constructing uniformly randomly distributed derangements is not a trivial problem, but some results are known in the literature. The most obvious way to construct derangements is by the rejection method: you generate uniformly randomly distributed permutations using an algorithm like Fisher-Yates and then reject permutations with fixed points. The average running time of that procedure is e*n + o(n), where e ≈ 2.71828 is the base of the natural logarithm. That would probably work in your case.
The other major approach for generating derangements is to use a recursive algorithm. However, unlike Fisher-Yates, we have two branches to the algorithm: the last item in the list can be swapped with another item (i.e., part of a two-cycle), or can be part of a larger cycle. So at each step, the recursive algorithm has to branch in order to generate all possible derangements. Furthermore, the decision of whether to take one branch or the other has to be made with the correct probabilities.
Let D(n) be the number of derangements of n items. At each stage, the number of branches taking the last item to two-cycles is (n-1)D(n-2), and the number of branches taking the last item to larger cycles is (n-1)D(n-1). This gives us a recursive way of calculating the number of derangements, namely D(n)=(n-1)(D(n-2)+D(n-1)), and gives us the probability of branching to a two-cycle at any stage, namely (n-1)D(n-2)/D(n-1).
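As a quick sanity check of that recursion in Python:

def num_derangements(n):
    d = [1, 0]                     # D(0) = 1, D(1) = 0
    for m in range(2, n + 1):
        d.append((m - 1) * (d[m - 1] + d[m - 2]))
    return d[n]

# num_derangements(4) == 9; the two-cycle branching probability at stage n
# is (n - 1) * num_derangements(n - 2) / num_derangements(n)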
Now we can construct derangements by deciding to which type of cycle the last element belongs, swapping the last element to one of the n-1 other positions, and repeating. It can be complicated to keep track of all the branching, however, so in 2008 some researchers developed a streamlined algorithm using those ideas. You can see a walkthrough at http://www.cs.upc.edu/~conrado/research/talks/analco08.pdf . The running time of the algorithm is proportional to 2n + O(log^2 n), a 36% improvement in speed over the rejection method.
I have implemented their algorithm in Java. Using longs works for n up to 22 or so. Using BigIntegers extends the algorithm to n=170 or so. Using BigIntegers and BigDecimals extends the algorithm to n=40000 or so (the limit depends on memory usage in the rest of the program).
package io.github.edoolittle.combinatorics;
import java.math.BigInteger;
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.Random;
import java.util.HashMap;
import java.util.TreeMap;
public final class Derangements {
// cache calculated values to speed up recursive algorithm
private static HashMap<Integer,BigInteger> numberOfDerangementsMap
= new HashMap<Integer,BigInteger>();
private static int greatestNCached = -1;
// load numberOfDerangementsMap with initial values D(0)=1 and D(1)=0
static {
numberOfDerangementsMap.put(0,BigInteger.valueOf(1));
numberOfDerangementsMap.put(1,BigInteger.valueOf(0));
greatestNCached = 1;
}
private static Random rand = new Random();
// private default constructor so class isn't accidentally instantiated
private Derangements() { }
public static BigInteger numberOfDerangements(int n)
throws IllegalArgumentException {
if (numberOfDerangementsMap.containsKey(n)) {
return numberOfDerangementsMap.get(n);
} else if (n>=2) {
// pre-load the cache to avoid stack overflow (occurs near n=5000)
for (int i=greatestNCached+1; i<n; i++) numberOfDerangements(i);
greatestNCached = n-1;
// recursion for derangements: D(n) = (n-1)*(D(n-1) + D(n-2))
BigInteger Dn_1 = numberOfDerangements(n-1);
BigInteger Dn_2 = numberOfDerangements(n-2);
BigInteger Dn = (Dn_1.add(Dn_2)).multiply(BigInteger.valueOf(n-1));
numberOfDerangementsMap.put(n,Dn);
greatestNCached = n;
return Dn;
} else {
throw new IllegalArgumentException("argument must be >= 0 but was " + n);
}
}
public static int[] randomDerangement(int n)
throws IllegalArgumentException {
if (n<2)
throw new IllegalArgumentException("argument must be >= 2 but was " + n);
int[] result = new int[n];
boolean[] mark = new boolean[n];
for (int i=0; i<n; i++) {
result[i] = i;
mark[i] = false;
}
int unmarked = n;
for (int i=n-1; i>=0; i--) {
if (unmarked<2) break; // can't move anything else
if (mark[i]) continue; // can't move item at i if marked
// use the rejection method to generate a random unmarked index j < i;
// this could be replaced by more straightforward technique
int j;
while (mark[j=rand.nextInt(i)]);
// swap two elements of the array
int temp = result[i];
result[i] = result[j];
result[j] = temp;
// mark position j as end of cycle with probability (u-1)D(u-2)/D(u)
double probability
= (new BigDecimal(numberOfDerangements(unmarked-2))).
multiply(new BigDecimal(unmarked-1)).
divide(new BigDecimal(numberOfDerangements(unmarked)),
MathContext.DECIMAL64).doubleValue();
if (rand.nextDouble() < probability) {
mark[j] = true;
unmarked--;
}
// position i now becomes out of play so we could mark it
//mark[i] = true;
// but we don't need to because loop won't touch it from now on
// however we do have to decrement unmarked
unmarked--;
}
return result;
}
// unit tests
public static void main(String[] args) {
// test derangement numbers D(i)
for (int i=0; i<100; i++) {
System.out.println("D(" + i + ") = " + numberOfDerangements(i));
}
System.out.println();
// test quantity (u-1)D_(u-2)/D_u for overflow, inaccuracy
for (int u=2; u<100; u++) {
double d = numberOfDerangements(u-2).doubleValue() * (u-1) /
numberOfDerangements(u).doubleValue();
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
System.out.println();
// test derangements for correctness, uniform distribution
int size = 5;
long reps = 10000000;
TreeMap<String,Integer> countMap = new TreeMap<String,Integer>();
System.out.println("Derangement\tCount");
System.out.println("-----------\t-----");
for (long rep = 0; rep < reps; rep++) {
int[] d = randomDerangement(size);
String s = "";
String sep = "";
if (size > 10) sep = " ";
for (int i=0; i<d.length; i++) {
s += d[i] + sep;
}
if (countMap.containsKey(s)) {
countMap.put(s,countMap.get(s)+1);
} else {
countMap.put(s,1);
}
}
for (String key : countMap.keySet()) {
System.out.println(key + "\t\t" + countMap.get(key));
}
System.out.println();
// large random derangement
int size1 = 1000;
System.out.println("Random derangement of " + size1 + " elements:");
int[] d1 = randomDerangement(size1);
for (int i=0; i<d1.length; i++) {
System.out.print(d1[i] + " ");
}
System.out.println();
System.out.println();
System.out.println("We start to run into memory issues around u=40000:");
{
// increase this number from 40000 to around 50000 to trigger
// out of memory-type exceptions
int u = 40003;
BigDecimal d = (new BigDecimal(numberOfDerangements(u-2))).
multiply(new BigDecimal(u-1)).
divide(new BigDecimal(numberOfDerangements(u)),MathContext.DECIMAL64);
System.out.println((u-1) + " * D(" + (u-2) + ") / D(" + u + ") = " + d);
}
}
}
In C++ (note that, like the questioner's JavaScript, this is Sattolo's algorithm, so it produces only full cycles):
#include <cstdlib>
#include <utility>
#include <vector>

template <class T> void shuffle(std::vector<T>& arr)
{
    int size = arr.size();
    for (int i = 1; i < size; i++)
    {
        // pick n uniformly from [i, size): position i-1 is never swapped with itself
        int n = rand() % (size - i) + i;
        std::swap(arr[i - 1], arr[n]);
    }
}

ACM Problem: Coin-Flipping, help me identify the type of problem this is

I'm practicing for the upcoming ACM programming competition in a week and I've gotten stumped on this programming problem.
The problem is as follows:
You have a puzzle consisting of a 4x4 square grid. Each grid square holds a single coin; each coin shows either heads (H) or tails (T). One such puzzle is shown here:
H H H H
T T T T
H T H T
T T H T
Any coin that is currently showing tails (T) can be flipped to heads (H). However, any time we flip a coin, we must also flip the adjacent coins directly above, below, and to the left and right of it. Thus if we flip the second coin in the second row, we must also flip 4 other coins, giving us this arrangement (coins that changed are shown in bold).
H T H H
H H H T
H H H T
T T H T
If a coin is at the edge of the puzzle, so there is no coin on one side or the other, then we flip fewer coins. We do not "wrap around" to the other side. For example, if we flipped the bottom right coin of the arrangement above, we would get:
H T H H
H H H T
H H H H
T T T H
Note: Only coins showing tails (T) can be selected for flipping. However, any time we flip such a coin, adjacent coins are also flipped, regardless of their state.
The goal of the puzzle is to have all coins show heads. While it is possible for some arrangements to not have solutions, all the problems given will have solutions. The answer we are looking for is: for any given 4x4 grid of coins, what is the least number of flips needed to make the grid entirely heads?
For Example the grid:
H T H H
T T T H
H T H T
H H T T
The answer to this grid is: 2 flips.
What I have done so far:
I'm storing our grids as two-dimensional array of booleans. Heads = true, tails = false.
I have a flip(int row, int col) method that will flip the adjacent coins according the rules above and I have a isSolved() method that will determine if the puzzle is in a solved state (all heads). So we have our "mechanics" in place.
The part we are having problems with is how we should loop through the search space while going the least number of moves deep.
Your puzzle is a classic Breadth-First Search candidate. This is because you're looking for a solution with the fewest possible 'moves'.
If you knew the number of moves to the goal, then that would be ideal for a Depth-First Search.
Those Wikipedia articles contain plenty of information about the way the searches work, they even contain code samples in several languages.
Either search can be recursive, if you're sure you won't run out of stack space.
EDIT: I hadn't noticed that you can't use a coin as the primary move unless it's showing tails. That does indeed make order important. I'll leave this answer here, but look into writing another one as well.
No pseudo-code here, but think about this: can you ever imagine yourself flipping a coin twice? What would be the effect?
Alternatively, write down some arbitrary board (literally, write it down). Set up some real-world coins, and pick two arbitrary ones, X and Y. Do an "X flip", then a "Y flip", then another "X flip". Write down the result. Now reset the board to the starting version, and just do a "Y flip". Compare the results, and think about what's happened. Try it a few times, sometimes with X and Y close together, sometimes not. Become confident in your conclusion.
That line of thought should lead you to a way of determining a finite set of possible solutions. You can test all of them fairly easily.
Hope this hint wasn't too blatant - I'll keep an eye on this question to see if you need more help. It's a nice puzzle.
As for recursion: you could use recursion. Personally, I wouldn't in this case.
EDIT: Actually, on second thoughts I probably would use recursion. It could make life a lot simpler.
Okay, perhaps that wasn't obvious enough. Let's label the coins A-P, like this:
ABCD
EFGH
IJKL
MNOP
Flipping F will always involve the following coins changing state: BEFGJ.
Flipping J will always involve the following coins changing state: FIJKN.
What happens if you flip a coin twice? The two flips cancel each other out, no matter what other flips occur.
In other words, flipping F and then J is the same as flipping J and then F. Flipping F and then J and then F again is the same as just flipping J to start with.
So any solution isn't really a path of "flip A then F then J" - it's "flip <these coins>; don't flip <these coins>". (It's unfortunate that the word "flip" is used for both the primary coin to flip and the secondary coins which change state for a particular move, but never mind - hopefully it's clear what I mean.)
Each coin will either be used as a primary move or not, 0 or 1. There are 16 coins, so 2^16 possibilities. So 0 might represent "don't do anything"; 1 might represent "just A"; 2 might represent "just B"; 3 "A and B" etc.
Test each combination. If (somehow) there's more than one solution, count the number of bits in each solution to find the least number.
Implementation hint: the "current state" can be represented as a 16 bit number as well. Using a particular coin as a primary move will always XOR the current state with a fixed number (for that coin). This makes it really easy to work out the effect of any particular combination of moves.
Okay, here's the solution in C#. It shows how many moves were required for each solution it finds, but it doesn't keep track of which moves those were, or what the least number of moves is. That's a SMOP :)
The input is a list of which coins are showing tails to start with - so for the example in the question, you'd start the program with an argument of "BEFGJLOP". Code:
using System;

public class CoinFlip
{
    // All ints could really be ushorts, but ints are easier
    // to work with
    static readonly int[] MoveTransitions = CalculateMoveTransitions();

    static int[] CalculateMoveTransitions()
    {
        int[] ret = new int[16];
        for (int i = 0; i < 16; i++)
        {
            int row = i / 4;
            int col = i % 4;
            ret[i] = PositionToBit(row, col) +
                     PositionToBit(row-1, col) +
                     PositionToBit(row+1, col) +
                     PositionToBit(row, col-1) +
                     PositionToBit(row, col+1);
        }
        return ret;
    }

    static int PositionToBit(int row, int col)
    {
        if (row < 0 || row > 3 || col < 0 || col > 3)
        {
            // Makes edge detection easier
            return 0;
        }
        return 1 << (row * 4 + col);
    }

    static void Main(string[] args)
    {
        int initial = 0;
        foreach (char c in args[0])
        {
            initial += 1 << (c - 'A');
        }
        Console.WriteLine("Initial = {0}", initial);
        ChangeState(initial, 0, 0);
    }

    static void ChangeState(int current, int nextCoin, int currentFlips)
    {
        // Reached the end. Success?
        if (nextCoin == 16)
        {
            if (current == 0)
            {
                // More work required if we want to display the solution :)
                Console.WriteLine("Found solution with {0} flips", currentFlips);
            }
        }
        else
        {
            // Don't flip this coin
            ChangeState(current, nextCoin + 1, currentFlips);
            // Or do...
            ChangeState(current ^ MoveTransitions[nextCoin], nextCoin + 1, currentFlips + 1);
        }
    }
}
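For reference, one possible way to run this (assuming the standard csc compiler): compile with csc CoinFlip.cs, then run CoinFlip.exe BEFGJLOP. The program prints Initial = 51826 (the bitmask of B, E, F, G, J, L, O and P) followed by one "Found solution with {0} flips" line for each subset of primary moves that clears the board.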
I would suggest a breadth first search, as someone else already mentioned.
The big secret here is to have multiple copies of the game board. Don't think of "the board."
I suggest creating a data structure that contains a representation of a board, and an ordered list of moves that got to that board from the starting position. A move is the coordinates of the center coin in a set of flips. I'll call an instance of this data structure a "state" below.
My basic algorithm would look something like this:
Create a queue.
Create a state that contains the start position and an empty list of moves.
Put this state into the queue.
Loop forever:
    Pull first state off of queue.
    For each coin showing tails on the board:
        Create a new state by flipping that coin and the appropriate others around it.
        Add the coordinates of that coin to the list of moves in the new state.
        If the new state shows all heads:
            Rejoice, you are done.
        Push the new state into the end of the queue.
If you like, you could add a limit to the length of the queue or the length of move lists, to pick a place to give up. You could also keep track of boards that you have already seen in order to detect loops. If the queue empties and you haven't found any solutions, then none exist.
Also, a few of the comments already made seem to ignore the fact that the problem only allows coins that show tails to be in the middle of a move. This means that order very much does matter. If the first move flips a coin from heads to tails, then that coin can be the center of the second move, but it could not have been the center of the first move. Similarly, if the first move flips a coin from tails to heads, then that coin cannot be the center of the second move, even though it could have been the center of the first move.
The grid, read in row-major order, is nothing more than a 16-bit integer. Both the grid given by the problem and the 16 possible moves (or "generators") can be stored as 16-bit integers, thus the problem amounts to finding the least number of generators which, summed by means of bitwise XOR, give the grid itself as the result. I wonder if there's a smarter alternative than trying all the 65536 possibilities.
EDIT: Indeed there is a convenient way to do the brute-forcing. You can try all the 1-move patterns, then all the 2-move patterns, and so on. When an n-move pattern matches the grid, you can stop, exhibit the winning pattern and conclude that the minimal solution requires exactly n moves. Enumeration of all the n-move patterns is a recursive problem.
EDIT2: You can brute-force with something along the lines of the following recursive pseudocode:
// Tries all the n-bit patterns with exactly k bits set to 1
tryAllPatterns(unsigned short n, unsigned short k, unsigned short commonAddend=0)
{
    if (k == 0)
        tryPattern(commonAddend);  // all chosen bits are already in commonAddend
    else if (k <= n)               // if k > n, no such pattern exists
    {
        // All the patterns that have the n-th bit set to 1 and k-1 bits
        // set to 1 in the remaining n-1 bits
        tryAllPatterns(n-1, k-1, ((1 << (n-1)) xor commonAddend));
        // All the patterns that have the n-th bit set to 0 and k bits
        // set to 1 in the remaining n-1 bits
        tryAllPatterns(n-1, k, commonAddend);
    }
}
To elaborate on Federico's suggestion, the problem is about finding a set of the 16 generators that, XORed together, gives the starting position.
But if we consider each generator as a vector of integers modulo 2, this becomes the problem of finding a linear combination of vectors that equals the starting position.
Solving this should just be a matter of Gaussian elimination (mod 2).
EDIT:
After thinking a bit more, I think this would work:
Build a binary matrix G of all the generators, and let s be the starting state. We are looking for vectors x satisfying Gx = s (mod 2). After doing Gaussian elimination, we either end up with such a vector x or we find that there are no solutions.
The problem is then to find the vector y such that Gy = 0 and x^y has as few bits set as possible, and I think the easiest way to find this would be to try all such y. Since they only depend on G, they can be precomputed.
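To make the elimination step concrete, here is a minimal C# sketch along these lines - Solve is a hypothetical helper name, moveTransitions is the 16-entry per-move XOR table from the C# answers elsewhere in this thread, and since G is symmetric for this puzzle, row j of the system is simply moveTransitions[j]:
static int Solve(int[] moveTransitions, int start)
{
    // rows[j]: low 16 bits say which moves affect cell j; bit 16 holds
    // the required parity s_j (the augmented column of [G|s]).
    int[] rows = new int[16];
    for (int j = 0; j < 16; j++)
        rows[j] = moveTransitions[j] | (((start >> j) & 1) << 16);

    int[] pivotCol = new int[16];
    int rank = 0;
    for (int col = 0; col < 16 && rank < 16; col++)
    {
        // Find a row at or below 'rank' with a 1 in this column
        int pivot = -1;
        for (int j = rank; j < 16; j++)
            if (((rows[j] >> col) & 1) == 1) { pivot = j; break; }
        if (pivot < 0) continue;             // free variable: leave it 0

        int tmp = rows[rank]; rows[rank] = rows[pivot]; rows[pivot] = tmp;
        for (int j = 0; j < 16; j++)
            if (j != rank && ((rows[j] >> col) & 1) == 1)
                rows[j] ^= rows[rank];       // addition mod 2 is XOR
        pivotCol[rank++] = col;
    }

    // A leftover row of the form 0 = 1 means Gx = s has no solution
    for (int j = rank; j < 16; j++)
        if (rows[j] != 0) return -1;

    // With free variables at 0, each pivot variable equals its parity bit
    int x = 0;
    for (int j = 0; j < rank; j++)
        x |= ((rows[j] >> 16) & 1) << pivotCol[j];
    return x;                                // bit i set => use generator i
}
Minimising the number of moves would then mean XORing the returned x with each precomputed null-space vector y and keeping the result with the fewest bits set, exactly as described above.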
I admit that a brute-force search would be a lot easier to implement, though. =)
Okay, here's an answer now that I've read the rules properly :)
It's a breadth-first search using a queue of states and the moves taken to get there. It doesn't make any attempt to prevent cycles, but you have to specify a maximum search depth, so it can't go on forever.
This implementation creates a lot of strings - an immutable linked list of moves would be neater on this front, but I don't have time for that right now.
using System;
using System.Collections.Generic;

public class CoinFlip
{
    struct Position
    {
        readonly string moves;
        readonly int state;

        public Position(string moves, int state)
        {
            this.moves = moves;
            this.state = state;
        }

        public string Moves { get { return moves; } }
        public int State { get { return state; } }

        public IEnumerable<Position> GetNextPositions()
        {
            for (int move = 0; move < 16; move++)
            {
                if ((state & (1 << move)) == 0)
                {
                    continue; // Not allowed - it's already heads
                }
                int newState = state ^ MoveTransitions[move];
                yield return new Position(moves + (char)(move + 'A'), newState);
            }
        }
    }

    // All ints could really be ushorts, but ints are easier
    // to work with
    static readonly int[] MoveTransitions = CalculateMoveTransitions();

    static int[] CalculateMoveTransitions()
    {
        int[] ret = new int[16];
        for (int i = 0; i < 16; i++)
        {
            int row = i / 4;
            int col = i % 4;
            ret[i] = PositionToBit(row, col) +
                     PositionToBit(row-1, col) +
                     PositionToBit(row+1, col) +
                     PositionToBit(row, col-1) +
                     PositionToBit(row, col+1);
        }
        return ret;
    }

    static int PositionToBit(int row, int col)
    {
        if (row < 0 || row > 3 || col < 0 || col > 3)
        {
            return 0;
        }
        return 1 << (row * 4 + col);
    }

    static void Main(string[] args)
    {
        int initial = 0;
        foreach (char c in args[0])
        {
            initial += 1 << (c - 'A');
        }
        int maxDepth = int.Parse(args[1]);

        Queue<Position> queue = new Queue<Position>();
        queue.Enqueue(new Position("", initial));
        while (queue.Count != 0)
        {
            Position current = queue.Dequeue();
            if (current.State == 0)
            {
                Console.WriteLine("Found solution in {0} moves: {1}",
                                  current.Moves.Length, current.Moves);
                return;
            }
            if (current.Moves.Length == maxDepth)
            {
                continue;
            }
            // Shame Queue<T> doesn't have EnqueueRange :(
            foreach (Position nextPosition in current.GetNextPositions())
            {
                queue.Enqueue(nextPosition);
            }
        }
        Console.WriteLine("No solutions");
    }
}
If you are practicing for the ACM, I would also consider this puzzle for non-trivial board sizes, say 1000x1000. Brute force / greedy may still work, but be careful to avoid exponential blow-up.
This is the classic "Lights Out" problem. There is actually an easy O(2^N) brute force solution, where N is either the width or the height, whichever is smaller.
Let's assume the following works along the width; if the height is smaller, you can transpose the board first.
One observation is that you don't need to press the same button twice - it just cancels out.
The key concept is that you only need to decide, for each button in the first row, whether to press it. Every other press is then uniquely determined by one thing - whether the light above the considered button is on. If you're looking at cell (x,y) and cell (x,y-1) is on, there's only one way to turn it off: pressing (x,y). So iterate through the rows from top to bottom, and if there are no lights left on at the end, you have a solution for that first-row choice. You can then take the min over all the tries, as in the sketch below.
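Here is a minimal sketch of that idea for the 4x4 board, reusing the 16-bit state encoding and MoveTransitions table from the C# answers above (SolveByChasing is a made-up name; bits set in state mean "still on"):
static int SolveByChasing(int start)
{
    int best = -1;
    for (int firstRow = 0; firstRow < 16; firstRow++) // 2^4 guesses for row 0
    {
        int state = start, presses = 0;
        for (int i = 0; i < 16; i++) // cells in row-major order
        {
            bool press = i < 4
                ? ((firstRow >> i) & 1) == 1     // guessed for the first row
                : ((state >> (i - 4)) & 1) == 1; // forced: cell above is still on
            if (press)
            {
                state ^= MoveTransitions[i];
                presses++;
            }
        }
        if (state == 0 && (best == -1 || presses < best))
            best = presses;
    }
    return best; // minimum presses, or -1 if no first-row guess clears the board
}
Only the first row is guessed; every press below it is forced, so this is 16 x 16 = 256 board updates instead of 65536 subsets.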
It's a finite state machine, where each "state" is the 16-bit integer corresponding to the value of each coin.
Each state has 16 outbound transitions, corresponding to the state after you flip each coin.
Once you've mapped out all the states and transitions, you have to find the shortest path in the graph from your beginning state to the state 1111 1111 1111 1111.
I sat down and attempted my own solution to this problem (based on the help I received in this thread). I'm using a 2D array of booleans, so it isn't as neat as the solutions using 16-bit integers with bit manipulation.
In any case, here is my solution in Java:
import java.util.*;

class Node
{
    public boolean[][] Value;
    public Node Parent;

    public Node(boolean[][] value, Node parent)
    {
        this.Value = value;
        this.Parent = parent;
    }

    // Equality is based on the board alone so that Queue.contains() can
    // recognise states we have already queued or expanded; without this,
    // contains() falls back to reference equality and never matches.
    @Override
    public boolean equals(Object o)
    {
        return o instanceof Node && Arrays.deepEquals(((Node) o).Value, this.Value);
    }

    @Override
    public int hashCode()
    {
        return Arrays.deepHashCode(Value);
    }
}
public class CoinFlip
{
    public static void main(String[] args)
    {
        boolean[][] startState = {{true,  false, true,  true},
                                  {false, false, false, true},
                                  {true,  false, true,  false},
                                  {true,  true,  false, false}};
        List<boolean[][]> solutionPath = search(startState);
        System.out.println("Solution Depth: " + solutionPath.size());
        for (int i = 0; i < solutionPath.size(); i++)
        {
            System.out.println("Transition " + (i+1) + ":");
            print2DArray(solutionPath.get(i));
        }
    }
    public static List<boolean[][]> search(boolean[][] startState)
    {
        Queue<Node> Open = new LinkedList<Node>();
        Queue<Node> Closed = new LinkedList<Node>();
        Node StartNode = new Node(startState, null);
        Open.add(StartNode);
        while (!Open.isEmpty())
        {
            Node nextState = Open.remove();
            System.out.println("Considering: ");
            print2DArray(nextState.Value);
            if (isComplete(nextState.Value))
            {
                System.out.println("Solution Found!");
                return constructPath(nextState);
            }
            else
            {
                List<Node> children = generateChildren(nextState);
                Closed.add(nextState);
                for (Node child : children)
                {
                    // Skip boards that are already queued or expanded
                    if (!Open.contains(child) && !Closed.contains(child))
                        Open.add(child);
                }
            }
        }
        return new ArrayList<boolean[][]>();
    }
    public static List<boolean[][]> constructPath(Node node)
    {
        List<boolean[][]> solutionPath = new ArrayList<boolean[][]>();
        while (node.Parent != null)
        {
            solutionPath.add(node.Value);
            node = node.Parent;
        }
        Collections.reverse(solutionPath);
        return solutionPath;
    }

    public static List<Node> generateChildren(Node parent)
    {
        System.out.println("Generating Children...");
        List<Node> children = new ArrayList<Node>();
        boolean[][] coinState = parent.Value;
        for (int i = 0; i < coinState.length; i++)
        {
            for (int j = 0; j < coinState[i].length; j++)
            {
                if (!coinState[i][j])
                {
                    boolean[][] child = arrayDeepCopy(coinState);
                    flip(child, i, j);
                    children.add(new Node(child, parent));
                }
            }
        }
        return children;
    }

    public static boolean[][] arrayDeepCopy(boolean[][] original)
    {
        boolean[][] r = new boolean[original.length][original[0].length];
        for (int i = 0; i < original.length; i++)
            for (int j = 0; j < original[0].length; j++)
                r[i][j] = original[i][j];
        return r;
    }
    public static void flip(boolean[][] grid, int i, int j)
    {
        // if (i,j) is on the grid
        if ((i >= 0 && i < grid.length) && (j >= 0 && j < grid[i].length))
        {
            // flip (i,j) itself
            grid[i][j] = !grid[i][j];
            // flip the coin below (i is the row index)
            if (i+1 < grid.length) grid[i+1][j] = !grid[i+1][j];
            // flip the coin to the right
            if (j+1 < grid[i].length) grid[i][j+1] = !grid[i][j+1];
            // flip the coin above
            if (i-1 >= 0) grid[i-1][j] = !grid[i-1][j];
            // flip the coin to the left
            if (j-1 >= 0) grid[i][j-1] = !grid[i][j-1];
        }
    }
    public static boolean isComplete(boolean[][] coins)
    {
        boolean complete = true;
        for (int i = 0; i < coins.length; i++)
        {
            for (int j = 0; j < coins[i].length; j++)
            {
                if (coins[i][j] == false) complete = false;
            }
        }
        return complete;
    }

    public static void print2DArray(boolean[][] array)
    {
        for (int row = 0; row < array.length; row++)
        {
            for (int col = 0; col < array[row].length; col++)
            {
                System.out.print((array[row][col] ? "H" : "T") + " ");
            }
            System.out.println();
        }
    }
}
