I'm playing a game that has a weapon-forging component, where you combine two weapons to get a new one. The sheer number of weapon combinations (see "6.1. Blade Combination Tables" at http://www.gamefaqs.com/ps/914326-vagrant-story/faqs/8485) makes it difficult to figure out what you can ultimately create out of your current weapons through repeated forging, so I tried writing a program that would do this for me. I give it a list of weapons that I currently have, such as:
francisca
tabarzin
kris
and it gives me the list of all weapons that I can forge:
ball mace
chamkaq
dirk
francisca
large crescent
throwing knife
The problem is that I'm using a brute-force algorithm that scales extremely poorly; it takes about 15 seconds to calculate all possible weapons for seven starting weapons, and a few minutes to calculate for eight starting weapons. I'd like it to be able to calculate up to 64 weapons (the maximum that you can hold at once), but I don't think I'd live long enough to see the results.
function find_possible_weapons(source_weapons)
{
    for (i in source_weapons)
    {
        for (j in source_weapons)
        {
            if (i != j)
            {
                result_weapon = combine_weapons(source_weapons[i], source_weapons[j]);
                new_weapons = array();
                new_weapons.add(result_weapon);
                for (k in source_weapons)
                {
                    if (k != i && k != j)
                        new_weapons.add(source_weapons[k]);
                }
                find_possible_weapons(new_weapons);
            }
        }
    }
}
In English: I attempt every combination of two weapons from my list of source weapons. For each of those combinations, I create a new list of all weapons that I'd have following that combination (that is, the newly-combined weapon plus all of the source weapons except the two that I combined), and then I repeat these steps for the new list.
Is there a better way to do this?
Note that combining weapons in the reverse order can change the result (Rapier + Firangi = Short Sword, but Firangi + Rapier = Spatha), so I can't skip those reversals in the j loop.
Edit: Here's a breakdown of the test example that I gave above, to show what the algorithm is doing. A line in brackets shows the result of a combination, and the following line is the new list of weapons that's created as a result:
francisca,tabarzin,kris
[francisca + tabarzin = chamkaq]
chamkaq,kris
[chamkaq + kris = large crescent]
large crescent
[kris + chamkaq = large crescent]
large crescent
[francisca + kris = dirk]
dirk,tabarzin
[dirk + tabarzin = francisca]
francisca
[tabarzin + dirk = francisca]
francisca
[tabarzin + francisca = chamkaq]
chamkaq,kris
[chamkaq + kris = large crescent]
large crescent
[kris + chamkaq = large crescent]
large crescent
[tabarzin + kris = throwing knife]
throwing knife,francisca
[throwing knife + francisca = ball mace]
ball mace
[francisca + throwing knife = ball mace]
ball mace
[kris + francisca = dirk]
dirk,tabarzin
[dirk + tabarzin = francisca]
francisca
[tabarzin + dirk = francisca]
francisca
[kris + tabarzin = throwing knife]
throwing knife,francisca
[throwing knife + francisca = ball mace]
ball mace
[francisca + throwing knife = ball mace]
ball mace
Also, note that duplicate items in a list of weapons are significant and can't be removed. For example, if I add a second kris to my list of starting weapons so that I have the following list:
francisca
tabarzin
kris
kris
then I'm able to forge the following items:
ball mace
battle axe
battle knife
chamkaq
dirk
francisca
kris
kudi
large crescent
scramasax
throwing knife
The addition of a duplicate kris allowed me to forge four new items that I couldn't before. It also increased the total number of forge tests to 252 for a four-item list, up from 27 for the three-item list.
Edit: I'm getting the feeling that solving this would require more math and computer science knowledge than I have, so I'm going to give up on it. It seemed like a simple enough problem at first, but then, so does the Travelling Salesman. I'm accepting David Eisenstat's answer since the suggestion of remembering and skipping duplicate item lists made such a huge difference in execution time and seems like it would be applicable to a lot of similar problems.
Start by memoizing the brute force solution, i.e., sort source_weapons, make it hashable (e.g., convert to a string by joining with commas), and look it up in a map of input/output pairs. If it isn't there, do the computation as normal and add the result to the map. This often results in big wins for little effort.
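In Python-ish terms, the memoized search might look something like the sketch below; combine here is a stand-in for a lookup into the game's forging table, returning None when an ordered pair has no recipe:

def find_possible_weapons(source_weapons, memo=None):
    if memo is None:
        memo = {}
    key = tuple(sorted(source_weapons))  # canonical, hashable form of the inventory
    if key not in memo:
        results = set(source_weapons)
        for i in range(len(source_weapons)):
            for j in range(len(source_weapons)):
                if i == j:
                    continue
                forged = combine(source_weapons[i], source_weapons[j])
                if forged is None:  # not every ordered pair forges something
                    continue
                rest = [w for k, w in enumerate(source_weapons) if k not in (i, j)]
                results |= find_possible_weapons(rest + [forged], memo)
        memo[key] = results
    return memo[key]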
Alternatively, you could do a backward search. Given a multiset of weapons, form predecessors by replacing one of the weapons with a pair of weapons that forges it, in all possible ways. Starting with the singleton list consisting of the singleton multiset consisting of the goal weapon, repeatedly expand the list by predecessors of list elements and then cull multisets that are supersets of others. Stop when you reach a fixed point.
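Here's a rough sketch of that backward search, with the simplification that the superset culling happens once at the end rather than every round; recipes is an assumed dict mapping each ordered pair (a, b) to the weapon it forges, and multisets are represented as sorted tuples:

from collections import Counter

def predecessors(multiset, recipes):
    for w in set(multiset):
        for (a, b), out in recipes.items():
            if out == w:
                pred = Counter(multiset)
                pred[w] -= 1           # un-forge one copy of w...
                pred.update([a, b])    # ...back into its two ingredients
                yield tuple(sorted(pred.elements()))

def backward_search(goal, recipes, max_rounds=10):
    seen = {(goal,)}
    frontier = set(seen)
    for _ in range(max_rounds):
        new = {p for m in frontier for p in predecessors(m, recipes)} - seen
        if not new:  # fixed point reached
            break
        seen |= new
        frontier = new
    # cull multisets that are supersets of others: they demand strictly more
    return [m for m in seen
            if not any(n != m and not (Counter(n) - Counter(m)) for n in seen)]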
If linear programming is an option, then there are systematic ways to prune search trees. In particular, let's make the problem easier by (i) allowing an infinite supply of "catalysts" (maybe not needed here?) and (ii) allowing "fractional" forging, e.g., if X + Y => Z, then 0.5 X + 0.5 Y => 0.5 Z. Then there's an LP formulation as follows. For each forge i + j => k (i and j forge k), the variable x_{ijk} is the number of times this forge is performed.
minimize sum_{i + j => k} x_{ijk}   (to prevent wasteful cycles)
subject to:
for all i: sum_{j, k : j + k => i} x_{jki}
         - sum_{j, k : j + i => k} x_{jik}
         - sum_{j, k : i + j => k} x_{ijk} >= q_i,
for all i + j => k: x_{ijk} >= 0,
where q_i is 1 if i is the goal item, else minus the number of i initially available. There are efficient solvers for this easy version. Since the reactions are always 2 => 1, you can always recover a feasible forging schedule for an integer solution. Accordingly, I would recommend integer programming for this problem. The paragraph below may still be of interest.
I know shipping an LP solver may be inconvenient, so here's an insight that will let you do without. This LP is feasible if and only if its dual is bounded. Intuitively, the dual problem is to assign a "value" to each item such that, however you forge, the total value of your inventory does not increase. If the goal item is valued at more than the available inventory, then you can't forge it. You can use any method that you can think of to assign these values.
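If you do go the solver route, here's a sketch of the integer program above using scipy.optimize.linprog (the integrality argument needs SciPy >= 1.9); recipes is an assumed list of (i, j, k) index triples meaning i + j => k, and q is the vector defined above:

import numpy as np
from scipy.optimize import linprog

def can_forge(recipes, q):
    n_vars = len(recipes)
    c = np.ones(n_vars)                 # minimize the total number of forges
    A = np.zeros((len(q), n_vars))
    for v, (i, j, k) in enumerate(recipes):
        A[k, v] += 1                    # performing forge v produces one k...
        A[i, v] -= 1                    # ...and consumes one i
        A[j, v] -= 1                    # ...and one j
    # linprog wants A_ub @ x <= b_ub, so negate the A @ x >= q constraints
    res = linprog(c, A_ub=-A, b_ub=-np.asarray(q, dtype=float),
                  bounds=(0, None), integrality=np.ones(n_vars))
    return res.success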
I think you are unlikely to get a good general answer to this question because,
if there were an efficient algorithm to solve your problem, then it would also be able to solve NP-complete problems.
For example, consider the problem of finding the maximum number of independent rows in a binary matrix.
This is a known NP-complete problem (e.g. by showing equivalence to the maximum independent set problem).
We can reduce this problem to your question in the following manner:
We start by holding one weapon for each column in the binary matrix, and then we imagine that each row describes an alternative way of making a new weapon (say, a battle axe).
We construct the weapon translation table such that to make the battle axe using method i, we need all weapons j such that M[i,j] is equal to 1 (this may involve inventing some additional weapons).
Then we construct a series of super weapons which can be made by combining different numbers of our battle axes.
For example, the mega ultimate battle axe may require 4 battle axes to be combined.
If we are able to work out the best weapon that can be constructed from your starting weapons, then we have solved the problem of finding the maximum number of independent rows in the original binary matrix.
It's not a huge saving; however, looking at the source document, there are times when combining weapons produces the same weapon as one of the ones being combined. I assume that you won't want to do this, as you'll end up with fewer weapons.
So if you added a check for whether result_weapon is the same type as one of the inputs, and skipped the recursive call to find_possible_weapons(new_weapons) in that case, you'd trim the search down a little.
The other thing I can think of is that you're not keeping track of work done, so if the return from find_possible_weapons(new_weapons) is the same weapon that you've already got by combining other weapons, you may well be performing the same search branch multiple times.
e.g. if you have a, b, c, d, e, f, g, and a + b = x and c + d = x, then your algorithm will perform two lots of comparing x against e, f, and g. So if you keep track of what you've already computed, you'll be onto a winner...
Basically, you have to trim the search tree. There are loads of different techniques for doing this: collectively, it's called search. If you want more advice, I'd recommend going to the Computer Science Stack Exchange.
If you are still struggling, then you could always start weighting items/resulting items, and only focus on doing the calculation on 'high gain' objects...
You might want to start by creating a Weapon[][] matrix showing the results of forging each pair. You could map the name of the weapon to the index of the matrix axis, so that looking up the result of a weapon combination happens in constant time.
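In Python terms, it might look like this quick sketch; all_weapon_names and recipes are placeholders for the real forging table, and the pair stays ordered since Rapier + Firangi != Firangi + Rapier:

weapon_index = {name: i for i, name in enumerate(all_weapon_names)}
forge_result = [[None] * len(all_weapon_names) for _ in all_weapon_names]
for (a, b), result in recipes.items():   # recipes: {(name, name): result_name}
    forge_result[weapon_index[a]][weapon_index[b]] = result

def combine(a, b):
    return forge_result[weapon_index[a]][weapon_index[b]]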
My task:
Create a program that copies a picture (given as input) using only primitives (like triangles or something). The program should use an evolutionary algorithm to create the output picture.
My question:
I need to invent an algorithm to create populations and check them (how closely, as a percentage, they match the input picture).
I have an idea; you can find it below.
So what I want from you: advice (if you find my idea not so bad) or inspiration (maybe you have a better idea?)
My idea:
Let's say that I'll use only triangles to build the output picture.
My first population is P pictures (generated by using T randomly generated triangles - called Elements).
I check every picture in the population with my fitness function, choose E of them as the elite, and just remove the rest of the population:
To compare 2 pictures, we check every pixel in picture A and compare its R,G,B with the same pixel (the same coordinates) in picture B.
I use this:
SingleDif = sqrt[ (Ar - Br)^2 + (Ag - Bg)^2 + (Ab - Bb)^2]
then I sum all the differences (from all pixels) - let's call it SumDif
and use:
PictureDif = (DifMax - SumDif)/DifMax
where
DifMax = pictureHeight * pictureWidth * 255*3
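Written out in Python/numpy, the comparison above would look something like this (a and b being H x W x 3 arrays of RGB values):

import numpy as np

def picture_dif(a, b):
    diff = a.astype(float) - b.astype(float)
    sum_dif = np.sqrt((diff ** 2).sum(axis=2)).sum()  # sum of per-pixel SingleDif
    dif_max = a.shape[0] * a.shape[1] * 255 * 3
    return (dif_max - sum_dif) / dif_max              # 1.0 means identical pictures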
The best are used to create the next population in this way:
picture MakeChild(picture Mother, picture Father)
{
    picture child;
    for (int i = 0; i < T; ++i)
    {
        double j = random01();  // random number from 0 to 1 - drawn fresh each iteration
        if (j < 0.5) child.element(i) = Mother.element(i);
        else         child.element(i) = Father.element(i);
        if (j < MutationRate)   // some small %
            mutate(child.element(i));
    }
    return child;
}
So it's quite simple. Only the mutation needs a comment: there is always some small probability that element X in the child will differ from X in its parent. To do this, we make a random change to the element in the child (change its colour by a random amount, or add a random number to its (x,y) coordinates - or its node).
So this is my idea... I haven't tested or coded it yet.
Please check my idea - what do you think of it?
I would make the number of patches of each child dynamic and get the mutation operation to insert/delete patches with some (low) probability. Of course this could result in a lot of redundancy and bloat in the child's genome. In these situations, it is usually a good idea to use the length of an individual's genome as a parameter of the fitness function so that individuals get rewarded (with a higher fitness value) for using fewer patches. So for example if the PictureDif of individuals A and B are the same but the A has fewer patches than B, then A has a higher fitness.
Another issue is the reproductive operator that you proposed (namely, the crossover operation). In order for the evolutionary process to work efficiently, you need to achieve a reasonable exploration and exploitation balance. One way of doing this is by having a set of reproductive operators that exhibit a good fitness correlation [1] which means the fitness of a child must be close to the fitness of its parent(s).
In the case of single parent reproduction you only need to find the right mutation parameters. However, when it comes to multi-parent reproduction (crossover) one of the frequently used techniques is to produce 2 children (instead of 1) from the same 2 parents. For the first child, each gene comes from the mother with the probability of 0.2 and from the father with the probability of 0.8, and for the second child the other way around. Of course after the crossover, you can do the mutation.
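As an illustration, that two-child crossover could be sketched like this, assuming a genome is simply a list of elements:

import random

def crossover(mother, father, bias=0.8):
    child1, child2 = [], []
    for m_gene, f_gene in zip(mother, father):
        if random.random() < bias:   # child 1 favours the father...
            child1.append(f_gene)
            child2.append(m_gene)    # ...and child 2 the mother
        else:
            child1.append(m_gene)
            child2.append(f_gene)
    return child1, child2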
Oh and one more thing, for the mutation operators, when you say
... make random changes in element in child (change his colour by random number, or add random number to his (x,y) coordinate - or his node)
it's a good idea to use a Gaussian distribution to change the colour, coordinate etc.
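For example, something like this sketch (the element structure and the sigma values are made up and would need tuning):

import random

def mutate(element):
    # nudge the colour with Gaussian noise, clamped to the valid range
    element.r = int(min(255, max(0, element.r + random.gauss(0, 10))))
    # nudge one randomly chosen vertex
    i = random.randrange(len(element.vertices))
    x, y = element.vertices[i]
    element.vertices[i] = (x + random.gauss(0, 5), y + random.gauss(0, 5))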
[1] Evolutionary Computation: A unified approach by Kenneth A. De Jong, page 69
I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say, 0.05), I want to fit only the points that are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal set of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that up in a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution of.
One approach would be to compute the slope/intercept of the line between each consecutive pair of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % the points with the intercepts closest to the linear one
This requires the Statistics Toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes between two points in the start and end ranges lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and see if the sum of the squared error remains small. This I gave up very quickly, because defining what "small" is is very subjective and must be done in some heuristic way.
I tried a more systematic and robust version of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii) P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
% ii represents fraction of range from center to end,
% So ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember, you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between the corresponding line and the entire dataset, choosing the coefficient that yields the smallest MSE.
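In Python/numpy terms (rather than MATLAB), that could look something like the sketch below; the segment length and overlap are arbitrary choices:

import numpy as np

def best_slope(x, y, intercept, seg_len=8, step=4):
    candidates = []
    for start in range(0, len(x) - seg_len + 1, step):
        xs, ys = x[start:start + seg_len], y[start:start + seg_len]
        # least-squares slope through the fixed intercept: minimize ||a*xs - (ys - c)||
        a = np.dot(xs, ys - intercept) / np.dot(xs, xs)
        candidates.append(a)
    # keep the slope whose line best explains the whole dataset
    mses = [np.mean((a * x + intercept - y) ** 2) for a in candidates]
    return candidates[int(np.argmin(mses))]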
I've already written a solution for this, but it doesn't feel "right", so I'd like some input from others.
The rules are:
Movement is on a 2D grid (Directions arbitrarily labelled N, NE, E, SE, S, SW, W, NW)
Probabilities of moving in a given direction are relative to the direction of travel (i.e. 40% represents ahead), and weighted:
[14%][40%][14%]
[ 8%][ 4%][ 8%]
[ 4%][ 4%][ 4%]
This means with overwhelming probability, travel will continue along its current trajectory. The middle value represents stopping. As an example, if the last move was NW, then the absolute probabilities would read:
[40%][14%][ 8%]
[14%][ 4%][ 4%]
[ 8%][ 4%][ 4%]
The probabilities are approximate - one thing I toyed with was making stopped a static 5% chance outside of the main calculation, which would have altered the probability of any other operation ever so slightly.
My current algorithm is as follows (in simplified pseudocode):
int[] probabilities = [4,40,14,8,4,4,4,8,14]

if move.previous == null:
    move.previous = STOPPED
if move.previous != STOPPED:
    // Cycle probabilities[1:8] array until indexof(move.previous) = 40%

r = Random % 99
if r < probabilities.sum[0:0]:
    move.current = STOPPED
elif r < probabilities.sum[0:1]:
    move.current = N
elif r < probabilities.sum[0:2]:
    move.current = NE
...
Reasons why I really dislike this method:
* It forces me to assign specific roles to array indices: [0] = stopped, [1] = North...
* It forces me to operate on a subset of the array when cycling (i.e. STOPPED always remains in place)
* It's very iterative, and therefore, slow. It has to check every value in turn until it gets to the right one. Cycling the array requires up to 4 operations.
* A 9-case if-block (most languages do not allow dynamic switches).
* Stopped has to be special cased in everything.
Things I have considered:
* Circular linked list: Simplifies the cycling (make the pivot always equal north) but requires maintaining a set of pointers, and still involves assigning roles to specific indices.
* Vectors: Really not sure how I'd go about weighting this, plus I'd need to worry about magnitude.
* Matrices: Rotating matrices does not work like that :)
* Use a well-known random walk algorithm: Overkill? Though recommendations are considered.
* Trees: Just thought of this, so no real thought given to it...
So. Does anyone have any bright ideas?
You have 8 directions, and when you hit some direction you have to "rotate this matrix".
But this is just modular arithmetic.
Since you only have 100 integers to pick the probability from, you can just put all the integers in a list, where the value at each position points to the index of a direction.
You then rotate this direction (by modular addition) so that it points to the move you have to make.
Then you have one array holding the offsets that you have to apply to your move.
Something like this:
// 40 numbers, 14 numbers, 8 numbers, ... (100 entries in total; the value 8 marks "stop")
int probab[100] = {0,0,0,0,0,0,....,1,1,1,.......,2,2,2,...};
and then
// offsets for N, NE, E, SE, ..., STOP (in a circle)
int next_move[9][2] = {{0,1}, {1,1}, {1,0}, {1,-1}, ..., {0,0}};
So you pick:
move = probab[randint(100)];
if (move != 8)  // if 8, you got stop
{
    move = (previous_move + move) % 8;
}
move_x = next_move[move][0];
move_y = next_move[move][1];
Use a more direct representation of direction in your algorithms, something like a (dx, dy) pair, for example.
This allows you to move by just having x += dx; y += dy;
(You can still use the "direction ENUM" + a lookup table if you wish...)
Your next problem is finding a good representation of the "probability table". Since r only ranges from 1 to 99 it might be feasible to just do a dumb array and use prob_table[r] directly.
Then, compute a 3x3 matrix of these probability tables using the method of your choice. It doesn't matter if it is slow because you only do it once.
To get the next direction simply
prob_table = dir_table[curr_dx][curr_dy];
(curr_dx, curr_dy) = get_next_dir(prob_table, random_number());
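A rough Python sketch of this scheme; the nine weight tables (one per previous direction, plus one for stopped) are built once at startup, and all names here are made up:

import random

def build_table(weights):
    # weights maps a (dx, dy) offset (with (0, 0) meaning "stopped") to a percentage;
    # the result is a 100-slot list, so one random index does the whole lookup
    table = []
    for offset, pct in weights.items():
        table.extend([offset] * pct)
    assert len(table) == 100, "percentages must sum to 100"
    return table

# example: the table used when the previous move was north
north_table = build_table({
    (0, 1): 40, (-1, 1): 14, (1, 1): 14,   # ahead and the two forward diagonals
    (-1, 0): 8, (1, 0): 8,                 # left and right
    (-1, -1): 4, (1, -1): 4, (0, -1): 4,   # the three "backward" moves
    (0, 0): 4,                             # stopped
})

def next_dir(prob_table):
    return prob_table[random.randrange(100)]

# dir_table would map each previous (dx, dy) to its table, e.g.
# dir_table = {(0, 1): north_table, ...}; dx, dy = next_dir(dir_table[(dx, dy)])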
This is going to be a long post and just for fun, so if you don't have much time better go help folks with more important questions instead :)
There is a game called "Tower Bloxx", recently released on Xbox. One part of the game is to place different colored towers on a field in the most optimal way in order to maximize the number of the most valuable towers. I wrote an algorithm that determines the most efficient tower placement, but it is not very efficient and pretty much just brute-forces all possible combinations. For a 4x4 field with 4 tower types it solves the problem in about 1 hour; 5 tower types would take about 40 hours, which is too much.
Here are the rules:
There are 5 types of towers that can be placed on a field. There are several types of fields; the easiest one is just a 4x4 matrix, while other fields have some "blanks" where you can't build. Your aim is to put as many of the most valuable towers on a field as possible to maximize the total tower value on the field (let's assume that all towers are built at once; there are no turns).
Tower types (in order from less to most valuable):
Blue - can be placed anywhere, value = 10
Red - can be placed only besides blue, value = 20
Green - placed besides red and blue, value = 30
Yellow - besides green, red and blue, value = 40
White - besides yellow, green, red and blue, value = 100
This means that, for example, a green tower must have at least 1 red and 1 blue tower among its north, south, west, and east neighbor cells (diagonals don't count). A white tower must be surrounded by all the other colors.
Here is my algorithm for 4 tower types on a 4x4 field:
Total number of combinations = 4^16
Loop through [1..4^16] and convert every number to a base-4 string in order to encode the tower placement, so 4^16 - 1 = "3333 3333 3333 3333", which would represent our tower types (0=blue,...,3=yellow)
Convert tower placement string into matrix.
For every tower in a matrix check its neighbors and if any of requirements fails this whole combination fails.
Put all correct combinations into an array and then sort this array as strings in lexicographic order to find the best possible combination (the characters within each string first need to be sorted).
The only optimization I came up with is to skip combinations that don't contain any most valuable towers. It skips some processing but I still loop through all 4^16 combinations.
Any thoughts on how this can be improved? Code samples would be helpful, if in Java or PHP.
-------Update--------
After adding more illegal states (yellow cannot be built in the corners, white cannot be built in corners or on the edges, the field should contain at least one tower of each type), realizing that only 1 white tower could possibly be built on a 4x4 field, and optimizing the Java code, the total time was brought down from 40 to ~16 hours. Maybe threading would bring it down to 10 hrs, but that's probably the limit of brute forcing.
I found this question intriguing, and since I'm teaching myself Haskell, I decided to try my hand at implementing a solution in that language.
I thought about branch-and-bound, but couldn't come up with a good way to bound the solutions, so I just did some pruning by discarding boards that violate the rules.
My algorithm works by starting with an "empty" board. It places each possible color of tower in the first empty slot and in each case (each color) then recursively calls itself. The recursed calls try each color in the second slot, recursing again, until the board is full.
As each tower is placed, I check the just-placed tower and all of its neighbors to verify that they're obeying the rules, treating any empty neighbors as wild cards. So if a white tower has four empty neighbors, I consider it valid. If a placement is invalid, I do not recurse on that placement, effectively pruning the entire tree of possibilities under it.
The way the code is written, I generate a list of all possible solutions, then look through the list to find the best one. In actuality, thanks to Haskell's lazy evaluation, the list elements are generated as the search function needs them, and since they're never referred to again they become available for garbage collection right away, so even for a 5x5 board memory usage is fairly small (2 MB).
Performance is pretty good. On my 2.1 GHz laptop, the compiled version of the program solves the 4x4 case in ~50 seconds, using one core. I'm running a 5x5 example right now to see how long it will take. Since functional code is quite easy to parallelize, I'm also going to experiment with parallel processing. There's a parallelized Haskell compiler that will not only spread the work across multiple cores, but across multiple machines as well, and this is a very parallelizable problem.
Here's my code so far. I realize that you specified Java or PHP, and Haskell is quite different. If you want to play with it, you can modify the definition of the variable "bnd" just above the bottom to set the board size. Just set it to ((1,1),(x, y)), where x and y are the number of columns and rows, respectively.
import Array
import Data.List
-- Enumeration of Tower types. "Empty" isn't really a tower color,
-- but it allows boards to have empty cells
data Tower = Empty | Blue | Red | Green | Yellow | White
deriving(Eq, Ord, Enum, Show)
type Location = (Int, Int)
type Board = Array Location Tower
-- towerScore computes the score of a single tower
towerScore :: Tower -> Int
towerScore White = 100
towerScore t = (fromEnum t) * 10
-- towerUpper computes the upper bound for a single tower
towerUpper :: Tower -> Int
towerUpper Empty = 100
towerUpper t = towerScore t
-- boardScore computes the score of a board
boardScore :: Board -> Int
boardScore b = sum [ towerScore (b!loc) | loc <- range (bounds b) ]
-- boardUpper computes the upper bound of the score of a board
boardUpper :: Board -> Int
boardUpper b = sum [ bestScore loc | loc <- range (bounds b) ]
where
bestScore l | tower == Empty =
towerScore (head [ t | t <- colors, canPlace b l t ])
| otherwise = towerScore tower
where
tower = b!l
colors = reverse (enumFromTo Empty White)
-- Compute the neighbor locations of the specified location
neighborLoc :: ((Int,Int),(Int,Int)) -> (Int,Int) -> [(Int,Int)]
neighborLoc bounds (col, row) = filter valid neighborLoc'
where
valid loc = inRange bounds loc
neighborLoc' = [(col-1,row),(col+1,row),(col,row-1),(col,row+1)]
-- Array to store all of the neighbors of each location, so we don't
-- have to recalculate them repeatedly.
neighborArr = array bnd [(loc, neighborLoc bnd loc) | loc <- range bnd]
-- Get the contents of neighboring cells
neighborTowers :: Board -> Location -> [Tower]
neighborTowers board loc = [ board!l | l <- (neighborArr!loc) ]
-- The tower placement rule. Yields a list of tower colors that must
-- be adjacent to a tower of the specified color.
requiredTowers :: Tower -> [Tower]
requiredTowers Empty = []
requiredTowers Blue = []
requiredTowers Red = [Blue]
requiredTowers Green = [Red, Blue]
requiredTowers Yellow = [Green, Red, Blue]
requiredTowers White = [Yellow, Green, Red, Blue]
-- cellValid determines if a cell satisfies the rule.
cellValid :: Board -> Location -> Bool
cellValid board loc = null required ||
null needed ||
(length needed <= length empties)
where
neighbors = neighborTowers board loc
required = requiredTowers (board!loc)
needed = required \\ neighbors
empties = filter (==Empty) neighbors
-- canPlace determines if 'tower' can be placed in 'cell' without
-- violating the rule.
canPlace :: Board -> Location -> Tower -> Bool
canPlace board loc tower =
let b' = board // [(loc,tower)]
in cellValid b' loc && and [ cellValid b' l | l <- neighborArr!loc ]
-- Generate a board full of empty cells
cleanBoard :: Array Location Tower
cleanBoard = listArray bnd (replicate (rangeSize bnd) Empty)
-- The heart of the algorithm, this function takes a partial board
-- (and a list of empty locations, just to avoid having to search for
-- them) and a score and returns the best board obtainable by filling
-- in the partial board
solutions :: Board -> [Location] -> Int -> Board
solutions b empties best | null empties = b
solutions b empties best =
fst (foldl' f (cleanBoard, best) [ b // [(l,t)] | t <- colors, canPlace b l t ])
where
f :: (Board, Int) -> Board -> (Board, Int)
f (b1, best) b2 | boardUpper b2 <= best = (b1, best)
| otherwise = if newScore > lstScore
then (new, max newScore best)
else (b1, best)
where
lstScore = boardScore b1
new = solutions b2 e' best
newScore = boardScore new
l = head empties
e' = tail empties
colors = reverse (enumFromTo Blue White)
-- showBoard converts a board to a printable string representation
showBoard :: Board -> String
showBoard board = unlines [ printRow row | row <- [minrow..maxrow] ]
where
((mincol, minrow), (maxcol, maxrow)) = bounds board
printRow row = unwords [ printCell col row | col <- [mincol..maxcol] ]
printCell col row = take 1 (show (board!(col,row)))
-- Set 'bnd' to the size of the desired board.
bnd = ((1,1),(4,4))
-- Main function generates the solutions, finds the best and prints
-- it out, along with its score
main = do putStrLn (showBoard best); putStrLn (show (boardScore best))
where
s = solutions cleanBoard (range (bounds cleanBoard)) 0
best = s
Also, please remember this is my first non-trivial Haskell program. I'm sure it can be done much more elegantly and succinctly.
Update: Since it was still very time-consuming to do a 5x5 with 5 colors (I waited 12 hours and it hadn't finished), I took another look at how to use bounding to prune more of the search tree.
My first approach was to estimate the upper bound of a partially-filled board by assuming every empty cell is filled with a white tower. I then modified the 'solutions' function to track the best score seen and to ignore any board whose upper bound is less than that best score.
That helped some, reducing a 4x4x5 board from 23s to 15s. To improve it further, I modified the upper bound function to assume that each Empty is filled with the best tower possible, consistent with the existing non-empty cell contents. That helped a great deal, reducing the 4x4x5 time to 2s.
Running it on 5x5x5 took 2600s, giving the following board:
G B G R B
R B W Y G
Y G R B R
B W Y G Y
G R B R B
with a score of 730.
I may make another modification and have it find all of the maximal-scoring boards, rather than just one.
If you don't want to do A*, use a branch and bound approach. The problem should be relatively easy to code up because your value functions are well defined. I imagine you should be able to prune off huge sections of the search space with relative ease. However because your search space is pretty large it may still take some time. Only one way to find out :)
The wiki article isn't the best in the world. Google can find you a ton of nice examples and trees and stuff to further illustrate the approach.
One easy way to improve the brute force method is to explore only legal states. For example, if you are trying all possible states, you will be testing many states where the top right corner is a white tower. All of these states will be illegal. It doesn't make sense to generate and test all of those states. So you want to generate your states one block at a time, and only go deeper into the tree when you are actually at a potentially valid state. This will cut down your search tree by many orders of magnitude.
There may be further fancy things you can do, but this is an easy to understand (hopefully) improvement to your current solution.
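As an illustration, generating boards one cell at a time and recursing only on viable prefixes might look like this sketch; TOWERS, EMPTY, is_potentially_valid, and score are placeholders, and the validity check should treat empty neighbors as wild cards so a partial board is only rejected when it can never be completed legally:

def search(board, cell=0):
    if cell == len(board):
        yield tuple(board)          # a complete, legal board
        return
    for tower in TOWERS:
        board[cell] = tower
        if is_potentially_valid(board, cell):
            yield from search(board, cell + 1)   # only descend into viable states
    board[cell] = EMPTY             # undo before backtracking

# best = max(search([EMPTY] * 16), key=score)   # 4x4 field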
I think you will want to use a branch-and-bound algorithm because I think coming up with a good heuristic for an A* implementation will be hard (but, that's just my intuitition).
The pseudo-code for a branch-and-bound implementation is:
board = initial board with nothing on it, probably a 2D array
bestBoard = {}

function findBest(board)
    if no more pieces can be added to board then
        if score(board) > score(bestBoard) then
            bestBoard = board
        return
    else
        for each piece P we can legally add to board
            newBoard = board with piece P added
            // loose upper bound, could be improved
            if score(newBoard) + 100 * (number of blanks in newBoard) > score(bestBoard)
                findBest(newBoard)
The idea is that we search all possible boards, in order, but we keep track of the best one we have found so far (this is the bound). Then, if we find a partial board that we know will never be better than the best one so far, we stop working on that partial board: we trim that branch of the search tree.
In the code above I am doing the check by assuming that all the blanks would be filled by the white pieces, as we can't do better than that. I am sure that with a little bit of thought you can come up with a tighter bound than that.
Another place where you can try to optimize is in the order of the for-each loop. You want to try pieces in the correct order. That is, ideally the first solution found is the best one, or at least one with a really high score.
It seems like a good approach would be to start with a white tower and then build a set of towers around it based on the requirements, trying to find the smallest possible colored set of shapes which can act as interlocking tiles.
I wanted to advocate linear programming with integer unknowns, but it turns out that it's NP-hard even in the binary case. However, you can still get great success at optimizing a problem like yours, where there are many valid solutions and you simply want the best one.
Linear programming, for this kind of problem, essentially amounts to having a lot of variables (for example, the number of red towers present in cell (M, N)) and relationships among the variables (for example, the number of white towers in cell (M, N) must be less than or equal to the number of towers of the non-white color that has the smallest such number, among all its neighbors). It's kind of a pain to write up a linear program, but if you want a solution that runs in seconds, it's probably your best bet.
You've received a lot of good advice on the algorithmic side of things, so I don't have a lot to add. But, assuming Java as the language, here are a few fairly obvious suggestions for performance improvement.
Make sure you're not instantiating any objects inside that 4^16 loop. It's much, much cheaper for the JVM to re-initialize an existing object than to create a new one. Even cheaper to use arrays of primitives. :)
If you can help it, step away from the collection classes. They'll add a lot of overhead that you probably don't need.
Make sure you're not concatenating any strings. Use StringBuilder.
And lastly, consider re-writing the whole thing in C.