Finding the number of ways to solve a paint-tool puzzle - algorithm

I was making a kind of paint-tool-puzzle game.
It's pretty easy to understand the rule if you see the short previews of the puzzle.
Preview 1
Preview 2
As you can see, some color blocks contain colored triangles. When you click a triangle, it changes all the surrounding blocks of the same color into the triangle's own color.
The goal is to unify all the color blocks into one single color block.
I was trying to find the number of ways to solve the puzzle algorithmically. So I represented the puzzle as a simple graph data structure and set up an input format.
line 1) a pair of integers v and e: the number of vertices and the number of edges
lines 2...v+1) one character or a pair of characters: the color of the vertex and, if it exists, the color of the triangle inside it
lines v+2...v+e+1) a pair of integers: the indices of two vertices linked by an edge
For example, the graph of Preview 1 can be written like this (each vertex corresponds to a color block, from leftmost to rightmost):
5 5
A C
B C
C D
D A
C B
0 1
1 2
1 3
2 3
3 4
(The result should be 1. There's only one way to solve the puzzle.)
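For concreteness, the input format above can be parsed with a short sketch like this (Python; the function name `parse_puzzle` and the tuple layout are my own choices, not part of the question):

```python
def parse_puzzle(text):
    """Parse the format above into (v, e, vertices, edges).

    Each vertex is a (color, triangle_color_or_None) tuple."""
    lines = text.strip().splitlines()
    v, e = map(int, lines[0].split())
    vertices = []
    for line in lines[1:1 + v]:
        parts = line.split()
        # one character: just a color; two: color plus triangle color
        vertices.append((parts[0], parts[1] if len(parts) > 1 else None))
    edges = [tuple(map(int, line.split())) for line in lines[1 + v:1 + v + e]]
    return v, e, vertices, edges

# The Preview 1 graph from the question:
sample = """5 5
A C
B C
C D
D A
C B
0 1
1 2
1 3
2 3
3 4"""
```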
Then I wrote the code in C#: I made a structure to represent each color block, and several methods to merge adjacent blocks of the same color, change a block's color when its triangle is clicked, and so on.
But all I can do with them is brute-force every possible order of clicking the triangles, which takes an enormous amount of time on even slightly more complicated puzzles.
I need a more efficient way to solve the problem, or I'd at least like to know whether any algorithm can run faster than factorial time.
I've tried dynamic programming to improve performance, but I don't think the problem breaks down into smaller pieces, and I have no clue how to apply memoization to the whole bunch of data.
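For illustration, memoization can still apply here if the key is the whole game state (the tuple of vertex colors plus the set of unused triangles) rather than a decomposition into subproblems; identical states reached by different click orders are then counted only once. A rough sketch in Python for brevity; the flood-and-recolor semantics of `click` are my reading of the rules, so treat this as illustrative only:

```python
def click(colors, adj, v, t):
    """Recolor the same-colored component containing v to the triangle color t."""
    target, seen, stack = colors[v], {v}, [v]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen and colors[w] == target:
                seen.add(w)
                stack.append(w)
    return tuple(t if u in seen else c for u, c in enumerate(colors))

def count_solutions(colors, adj, triangles):
    """Count click sequences that unify the graph; memoized on the full state."""
    memo = {}
    def go(cols, tris):
        if len(set(cols)) == 1:          # solved: one color everywhere
            return 1
        key = (cols, tris)
        if key not in memo:
            memo[key] = sum(go(click(cols, adj, v, t), tris - {(v, t)})
                            for v, t in tris)
        return memo[key]
    return go(tuple(colors), frozenset(triangles))
```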
I'd appreciate any ideas that could help me with the problem.
ps. Sorry if my imperfect English was inconvenient to read; it's been a long time since I posted a piece of writing in English.


Procedural Maze Algorithm With Cells Determined Independently of Neighbors

I was thinking about maze algorithms recently (mostly because I'm working on a game, but I felt this is a more general question than game development related). In simple terms, I was wondering if there is a sort of maze algorithm that can generate (a possibly infinite number of) cells without any information specifically about the cell's neighbors. I imagine, if such a thing were possible, it would rely heavily upon noise functions such as Perlin or Simplex.
Each cell has four walls, these are used when actually rendering the maze so that corridors and walls are not the same thickness.
Let's say, for example, I'd like a cell at (32, 15) to generate its walls.
I know of algorithms like Ellers (which requires a limited number of columns, but infinite rows) and the Virtual fractal Mazes algorithm (which needs to know previous cells in order to build upon them infinitely in both x and y directions).
Does anyone know of any algorithm I could look into for this specific request? If not, are there any algorithms that are good for chunk-based mazes that you know of?
(Note: I did search around for a bit through StackOverflow to see if there were any questions with similar requests to mine, but I did not come across any. If you happen to know of one, a link would be greatly appreciated :D)
Thank you in advance.
Seeeeeecreeeets. My preeeeciooouss secretts. But yeah I can understand the frustration so I'll throw this one to you OP/SO. Feel free to update the PCG Wiki if you're not as lazy as me :3
There are actually many ways to do this. Some of the best techniques for procgen are:
Asking what you really want.
Design backwards. Play in reverse. Result is forwards.
Look at a random sampling of your target goal and try to see overall patterns.
But to get back to the question, there are two simple ways, and they both start from asking what you really want. I'll give those first.
The first is to create 2 layers. Both are random noise. You connect the top and the bottom so they're fully connected. No isolated portions. This is asking what you really want which is connected-ness. And then guaranteeing it in a local clean-up step. (tbh I forget the function applied to layer 2 that guarantees connected-ness. I can't find the code atm.... But I remember it was a really simple local function... XOR, Curl, or something similar. Maybe you can figure it out before I fix this).
The second way is using the properties of your functions. As long as your random function is smooth enough, you can take the gradient and assign a tile to it. The way you assign the tiles changes the maze structure, but you can guarantee connectivity by clever selection of tiles for each gradient (because similar or opposite gradients are more likely to be near each other on a smooth gradient function). From here your smooth random can be any form of Perlin Noise, etc. Once again, an "asking what you want" technique.
For backwards-reversed you unfortunately have an NP problem (I'm not sure if it's hard, complete, or whatever; it's been a while since I've worked on this...). So while you could generate a random map of distances down a maze path, and then from there generate the actual obstacles... it's not really advisable. There's also a ton of consideration of different cases even for small mazes...
012
123
234
Is simple. There's a column in the lower right corner of 0 and the middle 2 has an _| shaped wall.
042
123
234
This one makes less sense. You still are required to have the same basic walls as before on all the non-changed squares... But you can't have that 4. It needs to be within 1 of at least a single neighbor. (I mean you could have a +3 cost for that square by having something like a conveyor belt or something, but then we're out of the maze problem) Okay so....
032
123
234
Makes more sense, but the 2 in the corner is nonsense once again. Flipping that from a trough to a peak would give:
034
123
234
Which makes sense. At any rate. If you can get to this point then looking at local neighbors will give you walls if it's +/-1 then no wall. Otherwise wall. Also note that you can break the rules for the distance map in a consistent way and make a maze just fine. (Like instead of allowing a column picking a wall and throwing it up. This is just loop splitting at this point and should be safe)
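That +/-1 rule converts a distance map into walls in just a few lines. A sketch (Python; the function name and wall representation are mine), using the final consistent 3x3 map from the example above:

```python
def walls_from_distance_map(grid):
    """Apply the rule above: neighbours whose distances differ by exactly 1
    are connected; any other difference means a wall between them."""
    rows, cols = len(grid), len(grid[0])
    walls = []
    for r in range(rows):
        for c in range(cols):
            for r2, c2 in ((r, c + 1), (r + 1, c)):  # right and down neighbours
                if r2 < rows and c2 < cols and abs(grid[r][c] - grid[r2][c2]) != 1:
                    walls.append(((r, c), (r2, c2)))
    return walls

# The final (consistent) 3x3 distance map from the example above:
dist = [[0, 3, 4],
        [1, 2, 3],
        [2, 3, 4]]
```

On this map the only internal wall falls between the 0 and the 3 in the top row, which matches the "column in the corner" reading above.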
For random sampling as the final one that I'm going to look at... Certain maze generation algorithms in the limit take on some interesting properties, either as an average configuration or after millions of steps. Some form Voronoi regions. Some form concentric circles with a randomly flipped wall to allow a connection between loops. Etc. The loop one is a good example to work off of. Create a set of loops. Flip a random wall on each loop. One will delete a wall, which will create access to the next loop. One will split a path and offer a dead-end and a continuation. For a random flip to be a failure there has to be an opening and a split made right next to each other (unless you allow diagonals, in which case we're good). So make loops. Generate random noise per loop. XOR together. Replace local failures with a fixed path if no diagonals are allowed.
So how do we get random noise per loop? Or how do we get better loops than just squares? Just take a random function. Separate divergence and now you have a loop map. If you have the differential equations for the source random function you can pick one random per loop. A simpler way might be to generate concentric circular walls and pick a random point at each radius to flip. Then distort the final result. You have to be careful your distortion doesn't violate any of your path-connected-ness conditions at that point though.

Minutiae-based fingerprint matching algorithm

The problem
I need to match two fingerprints and give a score of resemblance.
I have posted a similar question before, but I think I've made enough progress to warrant a new question.
The input
For each image, I have a list of minutiae (important points). I want to match the fingerprints by matching these two lists.
When represented graphically, they look like this:
A minutia consists of a triplet (i, j, theta) where:
i is the row in a matrix
j is the column in a matrix
theta is a direction. I don't use that parameter yet in my matching algorithm.
What I have done so far
For each list, find the "dense regions" or "clusters". Some areas have more points than others, and I have written an algorithm to find them. I can explain further if you want.
Shifting the second list to account for the difference in finger position between the two images. I neglect differences in finger rotation. The shift is done by aligning the barycenters of the centers of the clusters. (This is more reliable than the barycenter of all minutiae.)
I tried building a matrix for each list (post-shift) in which every minutia increments its corresponding element and its close neighbours, like below.
1 1 1 1 1 1 1
1 2 2 2 2 2 1
1 2 3 3 3 2 1
1 2 3 4 3 2 1
1 2 3 3 3 2 1
1 2 2 2 2 2 1
1 1 1 1 1 1 1
By subtracting the two matrices and adding up the absolute values of all elements in the resulting matrix, I hoped to get low numbers for close fingerprints.
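That accumulate-and-subtract scheme can be sketched as follows (Python with NumPy; the kernel reproduces the 7x7 pyramid above, and the function names are mine):

```python
import numpy as np

def density_map(minutiae, shape, radius=3):
    """Stamp the pyramid kernel above at each minutia (i, j); theta is
    ignored here, as in the question."""
    m = np.zeros(shape, dtype=int)
    for i, j, _theta in minutiae:
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                r, c = i + di, j + dj
                if 0 <= r < shape[0] and 0 <= c < shape[1]:
                    # weight 4 at the centre, falling to 1 at the edge
                    m[r, c] += radius + 1 - max(abs(di), abs(dj))
    return m

def dissimilarity(a, b):
    """Sum of absolute element-wise differences of two density maps."""
    return int(np.abs(a - b).sum())
```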
Results
I tested a few fingerprints and found that the number of clusters is very stable. Matching fingerprints very often have the same number of clusters, and different fingers give different numbers. So that will definitely be a factor in the overall resemblance score.
The sum of the differences didn't work at all however. There was no correlation between resemblance and the sum.
Thoughts
I may need to use the directions of the points, but I don't know how yet.
I could use the standard deviation of the points, or of the clusters.
I could repeat the process for different types of minutiae. Right now my algorithm detects ridge endings and ridge bifurcations but maybe I should process these separately.
Question: How can I improve my algorithm?
Edit
I've come a long way since posting this question, so here's my update.
I dropped the bifurcations altogether, because my thinning algorithm messes those up too often. I did however end up using the angles quite a lot.
My initial cluster-counting idea does hold up pretty well on the small scale tests I ran (different combinations of my fingers and those of a handful of volunteers).
I give a score based on the following tests (10 tests, so 10% per success. It's a bit naïve but I'll find a better way to turn these 10 results into a score, as each test has its specificities):
Cluster-thingy (all the following don't use clusters, but minutiae. This is the only cluster-related approach I took)
Mean i position
Mean angle
i variance
j variance
Angle variance
i kurtosis
j kurtosis
Angle kurtosis
j skewness
A statistical approach indeed.
Same-finger comparisons pretty much always give between 80 and 100%, odd-finger comparisons between 0 and 60% (not often 60%). I don't have exact numbers here, so I won't pretend this is a statistically significant success, but it seems like a good first shot.
Your clustering approach is interesting, but one thing I'm curious about is how well you've tested it. For a new matching algorithm to be useful with respect to all the research and methods that already exist, you need to have a reasonably low EER. Have you tested your method with any of the standard databases? I have doubts as to the ability of cluster counts and locations alone to identify individuals at larger scales.
1) Fingerprint matching is a well studied problem and there are many good papers that can help you implement this. For a nice place to start, check out this paper, "Fingerprint Minutiae Matching Based on the Local and Global Structures" by Jiang & Yau. It's a classic paper, a short read (only 4 pages), and can be implemented fairly reasonably. They also define a scoring metric that can be used to quantify the degree to which two fingerprint images match. Again, this should only be a starting point because these days there are many algorithms that perform better.
2) If you want your algorithm to be robust, it should consider transformations of the fingerprint between images. Scanned fingerprints and certainly latent prints may not be consistent from image to image.
Also, calculating the direction of the minutiae points provides a method for handling fingerprint rotations. By measuring the angles between minutiae point directions, which will remain the same or close to the same across multiple images regardless of global rotation (though small inconsistencies may occur because skin is not rigid and may stretch slightly), you can find the best set of corresponding minutia pairs or triplets and use them as the basis for rotational alignment.
3) I recommend that you distinguish between ridge line endings and bifurcations. The more features you can isolate, the more accurately you can determine whether or not the fingerprints match. You might also consider the number of ridge lines that occur between each minutiae point.
This image below illustrates the features used by Jiang and Yau.
d: Euclidean distance between minutiae
θ: Angle measure between minutiae directions
φ: Global minutiae angle
n: Number of ridge lines between minutiae i and j
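Of the four features above, d and θ can be computed from the minutiae triplets alone; φ and the ridge count n need the ridge image, so this sketch (Python; the function name and angle normalisation are my own choices) leaves them out:

```python
import math

def pair_features(m1, m2):
    """d and theta from the feature list above, for two (i, j, theta)
    minutiae. phi and n require image data and are omitted here."""
    (i1, j1, t1), (i2, j2, t2) = m1, m2
    d = math.hypot(i2 - i1, j2 - j1)
    # difference of minutia directions, normalised to [0, 2*pi);
    # this difference is what stays stable under global rotation
    theta = (t2 - t1) % (2 * math.pi)
    return d, theta
```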
If you haven't read the Handbook of Fingerprint Recognition, I recommend it.

Reconstructing a signal from random samples with holes

I've encountered the following problem as part of my master thesis, and having been unable to find a suitable solution over the last few weeks I will ask the masses.
The problem 1
Assume there exists an (unknown) sequence of symbols of a known length. Say, for instance:
ABCBACBBBAACBAABCCBABBCA... # 2000 Symbols long
Now, given N samples from arbitrary positions in the sequence, the task is to reconstruct the original sequence. For instance:
ABCBACBBBAA
ACBBBAACBAABCCBAB
CBACBBBAACBAAB
BAABCCBABBCA
...
The problem 2 (Harder)
Now, on the bright side, there is no limit to how many samples I can make, whilst on the not so bright side there is more to the story.
The samples are noisy. i.e. There might be errors.
There are known holes in the samples. I am only able to observe every 4-6th symbol.
Thus the samples are actually looking more like this:
A A A
A A A C
C B B
B B C* # The C should have been an A.
...
I have tried the following:
Let S be the set of all partial noisy sequences with holes.
Greedy algorithm with random sampling and sliding window.
1. Let X be the "best" sequence thus far.
2. Set X to a random sample from S.
3. Choose a sequence v from S.
4. Slide v along X and score the match; choose the "best" resulting sequence as the new X.
5. Repeat from step 3.
The problem with this algorithm is that I have been unable to find a good metric to score the sequences, especially when considering the holes + noise. The results tended to favor shorter sequences and diverged wildly between subsequent runs. Ideas to resolve this are most welcome.
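One hedge against the short-sequence bias is to score raw matches minus mismatches over hole-free positions (rather than a normalised match rate, which is what tends to favour short overlaps) and to refuse to score placements with too little evidence. A sketch, with `'_'` marking holes; the names and thresholds are mine:

```python
def overlap_score(x, v, offset, min_overlap=4):
    """Score sample v placed at `offset` along candidate x; '_' is a hole.
    Raw (matches - mismatches) is deliberately NOT length-normalised, so
    long consistent overlaps beat short ones; min_overlap rejects
    placements with too few compared positions."""
    matches = mismatches = 0
    for k, ch in enumerate(v):
        pos = offset + k
        if 0 <= pos < len(x) and ch != '_' and x[pos] != '_':
            if ch == x[pos]:
                matches += 1
            else:
                mismatches += 1
    if matches + mismatches < min_overlap:
        return None
    return matches - mismatches

def best_offset(x, v):
    """Slide v along x and return the best-scoring offset (or None)."""
    best, best_score = None, None
    for off in range(-len(v) + 1, len(x)):
        score = overlap_score(x, v, off)
        if score is not None and (best_score is None or score > best_score):
            best, best_score = off, score
    return best
```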
Trying to align the start of the sequence.
This approach attempted to use the fact that I might be able to identify a suffix in the strings that likely makes up the beginning of the unknown sequence. However, due to the holes in the samples, I would need to shift even the matching sequences a few steps right or left. This results in exponential complexity and makes the problem intractable.
I have also played with the idea of using a Hidden Markov Model, but am thwarted on how to deal with the missing data.
Other ideas include, trying max flow through a graph built from the strings (don't think this will work), trellis decoding [Viterbi] (don't see how I can deal with samples starting in the middle of the unknown sequence) and more.
Any fresh ideas are very welcome. Links/references to relevant articles are like manna!
Specific information about my data set
I have three symbols S (start), A and B.
I am < 60% certain any given symbol is sampled correctly.
The S symbol should only appear a few times at the start of the master sequence, but does occur more often due to misclassification.
The symbol B occurs about 1.5 times as often as A in the master sequence.
Problem 1 is known as the Shortest Common Supersequence problem. It is NP-hard for more than two input strings, even with only two symbols. Problem 2 is an instance of Multiple Sequence Alignment. There are many algorithms and implementations for it, mostly heuristic since it is also NP-hard in general.
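For reference, while the general k-string case is NP-hard as noted above, for just two strings the shortest common supersequence is computable in quadratic time with the classic DP (a sketch; the function name is mine):

```python
def scs_length(a, b):
    """Length of the shortest common supersequence of two strings.
    dp[i][j] = SCS length of a[:i] and b[:j]; equivalently
    len(a) + len(b) - LCS(a, b)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                dp[i][j] = j            # only b remains
            elif j == 0:
                dp[i][j] = i            # only a remains
            elif a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # share the symbol
            else:
                dp[i][j] = min(dp[i - 1][j], dp[i][j - 1]) + 1
    return dp[m][n]
```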

Snake cube puzzle correctness

TrialPay posted a programming question about a snake cube puzzle on their blog.
Recently, one of our engineers introduced us to the snake cube. A snake cube is a puzzle composed of a chain of cubelets, connected by an elastic band running through each cubelet. Each cubelet can rotate 360° about the elastic band, allowing various structures to be built depending upon the way in which the chain is initially constructed, with the ultimate goal of arranging the cubes in such a way to create a cube.
Example:
This particular arrangement contains 17 groups of cubelets, composed of 8 groups of two cubelets and 9 groups of three cubelets. This arrangement can be expressed in a variety of ways, but for the purposes of this exercise, let '0' denote pieces whose rotation does not change the orientation of the puzzle, or may be considered a "straight" piece, while '1' will denote pieces whose rotation changes the puzzle configuration, or "bend" the snake. Using that schema, the snake puzzle above could be described as 001110110111010111101010100.
Challenge:
Your challenge is to write a program, in any language of your choosing, that takes the cube dimensions (X, Y, Z) and a binary string as input, and outputs '1' (without quotes) if it is possible to solve the puzzle, i.e. construct a proper XYZ cube given the cubelet orientation, and '0' if the current arrangement cannot be solved.
I posted a semi-detailed explanation of the solution, but how do I determine if the program solves the problem? I thought about getting more test cases, but I ran into some problems:
The snake cube example from TrialPay's blog has the same combination as the picture on Wikipedia's Snake Cube page and www.mathematische-basteleien.de.
It's very tedious to manually convert an image into a string.
I tried to make a program that would churn out a lot of combinations:
# We should start at 16777216 (binary 00100...0 when padded to 27 digits),
# because smaller numbers have more than 2 consecutive leading 0s (000111...)
i = 16777216
solved = []
while i < 2**27:
    # Pad with leading 0s to 27 binary digits
    s = bin(i)[2:].zfill(27)
    # Skip strings with more than 2 consecutive 0s
    if s.find("000") == -1:
        if snake_cube_solution(3, 3, 3, s) == 1:
            solved.append(s)
    i += 1
But it just takes forever to finish executing. Is there a better way to verify the program?
Thanks in advance!
TL;DR: This isn't a programming problem, but a mathematical one. You may be better served at math.stackexchange.com.
Since the cube size and snake length are passed as input, the space of inputs a checker program would need to verify is essentially infinite. Even though checking the solution's answer for a single input is reasonable, brute-forcing this check across the entire input space is clearly not.
If your solution fails on certain cases, your checker program can help you find these. However it can't establish your program's correctness: if your solution is actually correct the checker will simply run forever and leave you wondering.
Unfortunately (or not, depending on your tastes), what you are looking for is not a program but a mathematical proof.
Proving algorithm correctness is itself an entire field of study, and you can spend a long time in it. That said, proof by induction is often applicable (especially for recursive algorithms).
Other times, navigating between state configurations can be restated as optimizing a utility function. Proving things about the space being optimized (such as that it has only one extremum) can then translate to a proof of program correctness.
Your state configurations in this second approach could be snake orientations, or they might be some deeper structure. For example, the general strategy underneath solving a Rubik's cube isn't usually stated on literal cube states, but on expressions of a group of relevant symmetries. This is what I personally expect your solution will eventually play out as.
EDIT: Years later, I feel I should point out that for a given, fixed cube size and snake length, of course the search space is actually finite. You can write a program to brute-force check all combinations. If you were clever, you could even argue that the times to check a set of cases can be treated as a set of independent random variables. From this you could build a reasonable progress bar to estimate how (very) long your wait would be.
I think your assertion that there cannot be three consecutive 0's is false. Consider this arrangement:
000
100
101
100
100
101
100
100
100
One of the problems I'm having with this puzzle is the notation. A 1 indicates that the cubelet can change the puzzle's orientation, but about which axis? In my example above, assume that the Y axis is vertical and the X axis is horizontal. A 1 on the left indicates the ability to rotate about the cubelet's Y axis, and a 1 on the right indicates the ability to rotate about the cubelet's X axis.
I think it's possible to construct an arrangement similar to that above, but with three 000 groups. But I don't have the notation for it. Clearly, the example above could be modified so that the first three lines are:
001
000
101
With the first segment's 1 indicating rotation about the Y axis.
I wrote a Java application for the same problem not long ago.
I used a backtracking algorithm for this.
You just have to do a recursive search through the whole cube, checking which directions are possible at each step. If you have found one, you can stop and print the solution (I chose to print out all solutions).
For the 3x3x3 cubes my program solved them in under a second; for the bigger ones it takes from about five seconds up to 15 minutes.
I'm sorry, I couldn't find any code right now.
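A backtracking search of that kind can be sketched like this (Python; treating '0' as "keep the incoming direction" and '1' as "turn perpendicular" is my reading of the bit encoding, so the exact semantics may differ from the challenge's intent):

```python
from itertools import product

def snake_solvable(X, Y, Z, s):
    """Return 1 if the snake described by bit string s folds into an
    X x Y x Z box, else 0. Bit i belongs to cubelet i; the first and
    last bits impose no constraint."""
    dims = (X, Y, Z)
    if len(s) != X * Y * Z:
        return 0
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def dfs(path, used, prev):
        if len(path) == len(s):
            return True
        i = len(path) - 1              # index of the last placed cubelet
        if prev is None:               # placing the second cubelet: any direction
            options = dirs
        elif s[i] == '0':              # straight piece: keep going the same way
            options = [prev]
        else:                          # bend: turn perpendicular to prev
            options = [d for d in dirs
                       if sum(a * b for a, b in zip(d, prev)) == 0]
        x, y, z = path[-1]
        for d in options:
            q = (x + d[0], y + d[1], z + d[2])
            if all(0 <= q[k] < dims[k] for k in range(3)) and q not in used:
                used.add(q)
                path.append(q)
                if dfs(path, used, d):
                    return True
                path.pop()
                used.discard(q)
        return False

    return 1 if any(dfs([p], {p}, None)
                    for p in product(range(X), range(Y), range(Z))) else 0
```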

Challenge: Take a 48x48 image, find contiguous areas that result in the cheapest Lego solution to create that image! [closed]

Background
Lego produces the X-Large Gray Baseplate, which is a large building plate that is 48 studs wide and 48 studs tall, resulting in a total area of 2304 studs. Being a Lego fanatic, I've modeled a few mosaic-style designs that can be put onto these baseplates and then perhaps hung on walls or in a display (see: Android, Dream Theater, The Galactic Empire, Pokemon).
The Challenge
My challenge is now to get the lowest cost to purchase these designs. Purchasing 2304 individual 1x1 plates can get expensive. Using BrickLink, essentially an eBay for Lego, I can find data to determine what the cheapest parts are for given colors. For example, a 1x4 plate at $0.10 (or $0.025 per stud) would be cheaper than a 6x6 plate at $2.16 (or $0.06 per stud). We can also determine a list of all possible plates that can be used to assemble an image:
1x1
1x2
1x3
1x4
1x6
1x8
1x10
1x12
2x2 corner!
2x2
2x3
2x4
2x6
2x8
2x10
2x12
2x16
4x4 corner!
4x4
4x6
4x8
4x10
4x12
6x6
6x8
6x10
6x12
6x14
6x16
6x24
8x8
8x11
8x16
16x16
The Problem
For this problem, let's assume that we have a list of all plates, their color(s), and a "weight" or cost for each plate. For the sake of simplicity, we can even remove the corner pieces, but that would be an interesting challenge to tackle. How would you find the cheapest components to create the 48x48 image? How would you find the solution that uses the fewest components (not necessarily the cheapest)? If we were to add corner pieces as allowable pieces, how would you account for them?
We can assume we have some master list that is obtained by querying BrickLink, getting the average price for a given brick in a given color, and adding that as an element in the list. So, there would be no black 16x16 plate simply because it is not made or for sale. The 16x16 Bright Green plate, however, would have a value of $3.74, going by the current available average price.
I hope that my write-up of the problem is succinct enough. It's something I've been thinking about for a few days now, and I'm curious as to what you guys think. I tagged it as "interview-questions" because it's challenging, not because I got it through an interview (though I think it'd be a fun question!).
EDIT
Here's a link to the 2x2 corner piece and to the 4x4 corner piece. The answer doesn't necessarily need to take into account color, but it should be expandable to cover that scenario. The scenario would be that not all plates are available in all colors, so imagine that we've got an array of elements that identify a plate, its color, and the average cost of that plate (an example is below). Thanks to Benjamin for providing a bounty!
1x1|white|.07
1x1|yellow|.04
[...]
1x2|white|.05
1x2|yellow|.04
[...]
This list would NOT have the entry:
8x8|yellow|imaginarydollaramount
This is because an 8x8 yellow plate does not exist. The list itself is trivial and should only be thought about as providing references for the solution; it does not impact the solution itself.
EDIT2
Changed some wording for clarity.
Karl's approach is basically sound, but could use some more details. It will find the optimal cost solution, but will be too slow for certain inputs. Large open areas especially will have too many possibilities to search through naively.
Anyways, I made a quick implementation in C++ here: http://pastebin.com/S6FpuBMc
It solves filling in the empty space (periods), with 4 different kinds of bricks:
0: 1x1 cost = 1000
1: 1x2 cost = 150
2: 2x1 cost = 150
3: 1x3 cost = 250
4: 3x1 cost = 250
5: 3x3 cost = 1
.......... 1112222221
...#####.. 111#####11
..#....#.. 11#2222#13
..####.#.. 11####1#13
..#....#.. 22#1221#13
.......... 1221122555
..##..#... --> 11##11#555
..#.#.#... 11#1#1#555
..#..##... 11#11##221
.......... 1122112211
......#..# 122221#11#
...####.#. 555####1#0
...#..##.. 555#22##22
...####... 555####444 total cost = 7352
So, the algorithm fills in a given area. It is recursive (DFS):
FindBestCostToFillInRemainingArea()
{
    - find next empty square
    - if no empty square, return 0
    - for each piece type available
        - if it's legal to place the piece with its upper-left corner on the empty square
            - place the piece
            - total cost = cost to place this piece + FindBestCostToFillInRemainingArea()
            - remove the piece
    - return the cheapest "total cost" found
}
Once we figure out the cheapest way to fill a sub-area, we'll cache the result. To very efficiently identify a sub-area, we'll use a 64-bit integer using Zobrist hashing. Warning: hash collisions may cause incorrect results. Once our routine returns, we can reconstruct the optimal solution based on our cached values.
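A minimal sketch of that memoized DFS (Python; it uses a frozenset of filled cells as the cache key instead of a Zobrist hash, which is slower but collision-free, and the piece list and costs are made up; rotations must be listed as separate pieces, as in the example above):

```python
def min_fill_cost(rows, cols, pieces):
    """Cheapest way to tile an empty rows x cols area.
    pieces: list of (height, width, cost); each may be used any number
    of times. Memoized on the set of filled cells."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    memo = {}

    def go(filled):
        if len(filled) == rows * cols:
            return 0
        if filled in memo:
            return memo[filled]
        # next empty square, scanning top-to-bottom / left-to-right
        r, c = next(p for p in cells if p not in filled)
        best = float('inf')
        for h, w, cost in pieces:
            # try placing the piece with its upper-left corner on (r, c)
            piece = [(r + dr, c + dc) for dr in range(h) for dc in range(w)]
            if all(pr < rows and pc < cols and (pr, pc) not in filled
                   for pr, pc in piece):
                best = min(best, cost + go(filled | frozenset(piece)))
        memo[filled] = best
        return best

    return go(frozenset())
```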
Optimizing:
In the example, 41936 nodes (recursive calls) are explored (searching for empty square top-to-bottom). However, if we search for empty squares left-to-right, ~900,000 nodes are explored.
For large open areas: I'd suggest finding the most cost-efficient piece and filling in a lot of the open area with that piece as a pre-process step. Another technique is to divide your image into a few regions, and optimize each region separately.
Good luck! I'll be unavailable until March 26th, so hopefully I didn't miss anything!
Steps
Step 1: Iterate through all solutions.
Step 2: Find the cheapest solution.
Create pieces inventory
For an array of possible pieces (including single pieces of each color), make at least n duplicates of each piece, where n = max(board area / piece area) over the colors. At most n copies of that piece can then cover the entire board's area in that color.
Now we have a huge collection of possible pieces, bounded because it is guaranteed that a subset of this collection will completely fill the board.
It then becomes a subset-selection problem, which is NP-complete.
Solving the subset problem
For each unused piece in the set
    For each possible rotation (e.g. 1 for a square piece, 2 for a rectangular piece, 4 for an elbow piece)
        For each possible position in the *remaining* open places on the board matching the color and rotation of the piece
            - Put down the piece
            - Mark the piece as used in the set
            - Recursively descend on the board (with some pieces already placed)
Optimizations
Obviously, being an O(2^n) algorithm, pruning the search tree early is of utmost importance, and optimizations must be done early to avoid long running times. n is a very large number; just consider a 48x48 board: you have 48x48xc (where c = number of colors) possibilities for single pieces alone.
Therefore, 99% of the search tree must be pruned from the first few hundred plies in order for this algorithm to complete in any time. For example, keep a tally of the lowest cost solution found so far, and just stop searching all lower plies and backtrack whenever the current cost plus (the number of empty board positions x lowest average cost for each color) > current lowest cost solution.
For example, further optimize by always favoring the largest pieces (or the lowest average-cost pieces) first, so as to reduce the baseline lowest cost solution as quickly as possible and to prune as many future cases as possible.
Finding the cheapest
Calculate cost of each solution, find the cheapest!
Comments
This algorithm is generic. It does not assume a piece is of the same color (you can have multi-colored pieces!). It does not assume that a large piece is cheaper than the sum of smaller pieces. It doesn't really assume anything.
If some assumptions can be made, then this information can be used to further prune the search tree as early as possible. For example, when using only single-colored pieces, you can prune large sections of the board (with the wrong colors) and prune large number of pieces in the set (of the wrong color).
Suggestion
Do not try to do 48x48 at once. Try it on something small, say, 8x8, with a reasonably small set of pieces. Then increase number of pieces and board size progressively. I really have no idea how long the program will take -- but would love for somebody to tell me!
First, use flood fill to break the problem into filling contiguous regions of Lego bricks. Then for each of those you can use a DFS with memoization if you wish. The flood fill is trivial, so I will not describe it further.
Make sure to follow a right hand rule while expanding the search tree to not repeat states.
My solution will be:
Sort all the pieces by stud cost.
For each piece in the sorted list, try to place as many as you can in the plate:
Scan a 2D raster of your design, looking for regions of uniform color that match the shape of the current piece, with free studs at every position the piece would cover.
If the color of the region found does not exist for that particular piece, ignore it and continue searching.
If the color exists: tag the studs used by that piece and increment a counter for that kind of piece in that color.
Step 2 will be done once for squared pieces, twice for rectangular pieces (once vertical and once horizontal) and 4 times for corner pieces.
Repeat step 2 until the plate is full or no more piece types are available.
Once at the end, you will have the number of pieces of each kind and each color that you need, at minimal cost.
If the cost per stud can change by color, then the original sorted list must include not only the type of piece but also the color.
