Hi everybody.
First of all, I'm an amateur programmer. I'm trying to make a simple city-building application using C++ and SFML, for learning. It's not really a game, since it will just build the city blocks and buildings and show them to the user.
For now, I'm able to create the city blocks. My problem is how to subdivide the blocks into buildings; I don't have a real idea of how to do it.
Possible solutions would be (I don't have enough reputation to post images, but here's the link):
https://i.postimg.cc/630GKGW7/bitmap.png
The only rules are:
(1) each building must fit a minimum and maximum known size;
(2) each building must have at least one face touching any block edge;
(3) no empty spaces should remain.
I've been struggling with this for days. Can anyone give me an idea of how to do it? Pseudocode would also be great.
Thanks in advance!
Just a note, I'll be using OOP syntax to make this easier, but it's not valid code. Let's first create an interface to define the behavior we want:
class CityBlock {
Building[] buildings // should initially contain one building taking up the whole CityBlock
double width
double height
double maxBuildingSize
double minBuildingSize
splitBuilding(horizontal/vertical, coord) // This will split a building horizontally/vertically
createRandomBuildings() // this is what we want to create!
}
class Building {
Point position // position of top-left corner
Building[] subBuildings // buildings created by subdivision
double width
double height
double size() { return width * height }
}
Now for the fun part! Let's try and make the createRandomBuildings() method. The approach I'll be taking is to repeatedly subdivide buildings until they drop below maxBuildingSize, and to stop subdividing any building smaller than 2 * minBuildingSize (below that, no split can produce two valid buildings).
IMPORTANT NOTE: This approach only guarantees valid buildings if maxBuildingSize >= 2 * minBuildingSize; outside that range it can stop on a building that is still too large even though a valid subdivision exists. Considering your use case, I figured the size constraint would not pose any issues, and a more "random" solution seemed better than a more deterministic one.
Let's get to it! We'll create a recursive function called subdivide to do the heavy lifting.
Building[] subdivide(Building b, horizontal/vertical) {} // Subdivides b into a random number of other buildings
The way I'll subdivide each building is to split it into a random number of horizontal/vertical segments. Ex.
(Images: a single undivided building, then the same building split into several vertical segments.)
NOTE: To simplify matters, I'm going to work through this treating the subdivision as vertical, as in the image above. For a horizontal subdivision, just swap width/height.
Of course, we can't use any number of subdivisions. Too many, and all the resulting buildings will be too small. So we should first define the maximum number of subdivisions that will still allow us to create valid buildings.
minSubdivisionWidth = minBuildingSize / b.height // ensures that subdivisionWidth * b.height >= minBuildingSize
maxSubdivisions = floor(b.width / minSubdivisionWidth)
subdivisions = randomInt(2, maxSubdivisions)
Now that we have a valid number of subdivisions, we need to space them randomly while ensuring the buildings aren't too small. To do this, let's split the space we have available into two portions: minimum space and free space. Each subdivision will need to have the minimum space, but there is also free (or leftover) space equal to b.size() - minBuildingSize * subdivisions. This free space is what we want to randomly distribute among our subdivided rectangles.
Blue is minimum space, and pink is free space
Let's allocate this space:
widths[] // This will be the widths of our subdivided buildings
freeWidth = b.width - minSubdivisionWidth * subdivisions
weights[] // randomly assigned weight for free space
sumWeight = 0
for i = 1 to subdivisions {
randWeight = random()
weights[i] = randWeight
sumWeight += randWeight
}
for i = 1 to subdivisions {
widths[i] = minSubdivisionWidth + (weights[i] / sumWeight) * freeWidth
}
And now we can do the actual subdivision:
// transform individual widths into coordinates for building split
cumulativeWidth = 0
for i = 1 to (subdivisions - 1) {
cumulativeWidth += widths[i]
splitBuilding(vertical, cumulativeWidth)
}
We're almost there! Now we just need a snippet to randomly not subdivide if the building is below the max:
probToNotSubdivide = .3 // obviously change this to whatever
if b.size() < maxBuildingSize and randomDouble(0, 1) <= probToNotSubdivide { return }
One to not subdivide if the building is too small:
if b.size() < minBuildingSize * 2 { return }
One to not subdivide if it would cut off a building from the edge of the block:
/*
If the building is touching a horizontal edge, vertical subdivisions
will not cut anything off. If the building is touching both
vertical edges, one subdivision can be made.
*/
if not (b.position.y == 0 or (b.position.y + b.height) == cityBlock.height) {
if b.width == cityBlock.width {
// do one subdivision and recurse
splitBuilding(vertical, randomDouble(minSubdivisionWidth, cityBlock.width - minSubdivisionWidth))
for subBuilding in b.subBuildings {
subdivide(horizontal, subBuilding)
}
return
} else { return }
}
Add a bit of recursion at the end and...
Building[] subdivide(Building b, horizontal/vertical) {
// exit conditions
if b.size() < maxBuildingSize and randomDouble(0, 1) <= probToNotSubdivide { return }
if b.size() < minBuildingSize * 2 { return }
/*
If the building is touching a horizontal edge, vertical subdivisions
will not cut anything off. If the building is touching both
vertical edges, one subdivision can be made.
*/
if not (b.position.y == 0 or (b.position.y + b.height) == cityBlock.height) {
if b.width == cityBlock.width {
// do one subdivision and recurse
splitBuilding(vertical, randomDouble(minSubdivisionWidth, cityBlock.width - minSubdivisionWidth))
for subBuilding in b.subBuildings {
subdivide(horizontal, subBuilding)
}
return
} else { return }
}
// get # subdivisions
minSubdivisionWidth = minBuildingSize / b.height // ensures that subdivisionWidth * b.height >= minBuildingSize
maxSubdivisions = floor(b.width / minSubdivisionWidth)
subdivisions = randomInt(2, maxSubdivisions)
// get subdivision widths
widths[] // This will be the widths of our subdivided buildings
freeWidth = b.width - minSubdivisionWidth * subdivisions
weights[] // randomly assigned weight for free space
sumWeight = 0
for i = 1 to subdivisions {
randWeight = random()
weights[i] = randWeight
sumWeight += randWeight
}
for i = 1 to subdivisions {
widths[i] = minSubdivisionWidth + (weights[i] / sumWeight) * freeWidth
}
// transform individual widths into coordinates for building split
cumulativeWidth = 0
for i = 1 to (subdivisions - 1) {
cumulativeWidth += widths[i]
splitBuilding(vertical, cumulativeWidth)
}
// recurse
for subBuilding in b.subBuildings {
subdivide(horizontal, subBuilding)
}
}
And that's it! Now we have createRandomBuildings() { subdivide(vertical, initialBuilding) }, and we've subdivided our city block.
P.S. Again, this code isn't meant to be valid, and this is also a very long post. If something in here doesn't work right, edit/comment on this answer. I hope this gives some insight as to the approach you could take.
EDIT: To clarify, you should switch between horizontal and vertical subdivisions on each level of recursion.
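If a sketch in actual C++ helps, here is a rough, untested translation of the pseudocode above; the Rect type and the random helpers are placeholders for your own SFML-side types, and the edge-touching check from the middle of the answer is omitted for brevity:
#include <cmath>
#include <cstdlib>
#include <vector>

// Rough sketch of the recursive subdivision. Finished buildings (leaves) are
// collected into `out` instead of being stored as subBuildings.
struct Rect { double x, y, w, h; double size() const { return w * h; } };

double randomDouble(double lo, double hi) {
    return lo + (hi - lo) * (std::rand() / (double)RAND_MAX);
}

const double minBuildingSize    = 20.0;   // placeholder values
const double maxBuildingSize    = 200.0;
const double probToNotSubdivide = 0.3;

void subdivide(const Rect& r, bool vertical, std::vector<Rect>& out)
{
    // Exit conditions, as in the pseudocode.
    if (r.size() < maxBuildingSize && randomDouble(0, 1) <= probToNotSubdivide) { out.push_back(r); return; }
    if (r.size() < minBuildingSize * 2) { out.push_back(r); return; }

    // Work in "width" terms; for a horizontal split the roles of w and h swap.
    double w = vertical ? r.w : r.h;
    double h = vertical ? r.h : r.w;

    double minSliceW = minBuildingSize / h;                 // slice * h >= minBuildingSize
    int maxSlices = (int)std::floor(w / minSliceW);
    if (maxSlices < 2) { out.push_back(r); return; }
    int slices = 2 + std::rand() % (maxSlices - 1);         // random in [2, maxSlices]

    // Distribute the leftover width with random weights.
    double freeW = w - minSliceW * slices, sumWeight = 0;
    std::vector<double> weights(slices);
    for (double& wt : weights) { wt = randomDouble(0, 1); sumWeight += wt; }

    double offset = 0;
    for (int i = 0; i < slices; ++i) {
        double sliceW = minSliceW + (weights[i] / sumWeight) * freeW;
        Rect child = vertical ? Rect{ r.x + offset, r.y, sliceW, r.h }
                              : Rect{ r.x, r.y + offset, r.w, sliceW };
        offset += sliceW;
        subdivide(child, !vertical, out);                   // alternate split direction
    }
}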
I'm trying to write a GA to solve the following puzzle...
A binary encoding is (I think) very efficient. Each piece can be:
the original way up or flipped - 1 bit
rotated by 0 (ie none), 90, 180 or 270 degs - 2 bits
at position (x, y), where x and y go from 0 to 7 - 3 bits for each co-ordinate
This means that each piece's orientation and position can be encoded in 9 bits, making a total of 117 bits for the whole puzzle.
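For illustration, one piece's 9 bits might be unpacked like this (a hypothetical C++ sketch; the exact field order is an assumption, the point is just the 1 + 2 + 3 + 3 layout, and 13 pieces x 9 bits gives the 117-bit total):
#include <cstdint>

// Hypothetical decoder for one piece's 9-bit gene.
struct PieceGene { bool flipped; int rotation; int x, y; };

PieceGene decode(uint16_t bits) {
    PieceGene g;
    g.flipped  = (bits       & 0x1) != 0;   // 1 bit: original way up or flipped
    g.rotation = (bits >> 1) & 0x3;         // 2 bits: 0, 90, 180 or 270 degrees
    g.x        = (bits >> 3) & 0x7;         // 3 bits: 0..7
    g.y        = (bits >> 6) & 0x7;         // 3 bits: 0..7
    return g;
}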
The fitness is calculated by placing each piece in the frame, ignoring any parts that lie out of the frame, and then adding up the number of empty squares. When that hits zero, we have a solution.
I have some standard GA methods I've used in other code (which I'll paste in below), but I can't seem to get it to converge. The fitness drops to about 11 (give or take), but never seems to go any lower. I've tried fiddling with the parameters, but can't get it any better.
At the risk of posting too much code, I'll show what I've got (where it seems relevant). If anyone can give me some idea how I could improve, it would be great. This is all in C#, but it should be clear enough to people who use other languages.
After generating an initial population of 1000 chromosomes (code not shown as it's just generating random binary strings of length 117), I enter the main loop, where on each generation, I call the Breed method, passing in the current population and some parameters...
private static List<Chromosome> Breed(List<Chromosome> population, int crossoverGene,
double mutationProbability, double mutationRate) {
List<Chromosome> nextGeneration = new List<Chromosome>();
// Cross breed half of the population number
for (int nChromosome = 0; nChromosome < population.Count / 2; nChromosome++) {
Chromosome daddy = Roulette(population);
Chromosome mummy = Roulette(population);
string babyGenes = daddy.Genes.Substring(0, crossoverGene)
+ mummy.Genes.Substring(crossoverGene);
Chromosome baby = new Chromosome(babyGenes);
baby.Fitness = Fitness(baby);
nextGeneration.Add(baby);
}
// Mutate some chromosomes
int numberToMutate = (int)(P() * 100 * mutationProbability);
List<Chromosome> mutatedChromosomes = new List<Chromosome>();
for (int i = 0; i < numberToMutate; i++) {
Chromosome c = Roulette(population);
string mutatedGenes = MutateGenes(c.Genes, mutationRate);
Chromosome mutatedChromosome = new Chromosome(mutatedGenes);
mutatedChromosome.Fitness = Fitness(mutatedChromosome);
mutatedChromosomes.Add(mutatedChromosome);
}
// Get the next generation from the fittest chromosomes
nextGeneration = nextGeneration
.Union(population)
.Union(mutatedChromosomes)
.OrderBy(p => p.Fitness)
.Take(population.Count)
.ToList();
return nextGeneration;
}
MutateGenes just flips bits at random, based on the mutation rate passed in. The main loop continues until we either hit the maximum number of generations, or the fitness drops to zero. I'm currently running for 1000 generations.
Here's the roulette method...
private static Chromosome Roulette(List<Chromosome> population) {
double totalFitness = population.Sum(c => 1 / c.Fitness);
double targetProbability = totalFitness * P();
double cumProbability = 0.0;
List<Chromosome> orderedPopulation = population.OrderBy(c => c.Fitness).ToList();
for (int i = 0; i < orderedPopulation.Count; i++) {
Chromosome c = orderedPopulation[i];
cumProbability += 1 / c.Fitness;
if (cumProbability > targetProbability) {
return c;
}
}
return orderedPopulation.Last();
}
Don't know if you need to see any of the other code. I was a bit wary about posting too much in case it put people off!
Anyone able to make any suggestions as to how I can get this to improve?
Todor Balabanov's answer is very interesting. Using relative coordinates and a proper packing function is probably the key.
Anyway, I'd like to expand upon your idea as much as possible. A full discussion is probably too long for Stack Overflow...
TL;DR
Binary encoding does not give you any advantage.
The chosen alphabet isn't the smallest that permits a natural expression of the problem.
Considering the full range of coordinates ([0;7] x [0;7]) for every piece is excessive (and somewhat misleading for fitness evaluation).
Points (2) and (3) make it possible to reduce the search space from 2^117 to 2^95 elements.
A more informative fitness function is a great help.
You can use a multi-value fitness score, penalizing configurations that present holes.
Squares covered by overlapping pieces shouldn't be counted as filled, so that an illegal configuration can never score better than a legal one (see the sketch at the end of this answer).
ALPS can reduce the problem of premature convergence (reference implementation here).
I've elaborated on these points in a GitHub wiki (it's a work in progress).
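To make points (5)-(7) a bit more concrete, here is a rough, untested sketch of such a fitness (written in C++ rather than C# for illustration; the 8x8 frame and the cell-list piece representation are assumptions, and the hole penalty mentioned above is left out):
#include <array>
#include <vector>

// Hedged sketch: a multi-component fitness over the 8x8 frame.
// Each placed piece is assumed to be already decoded into the frame cells it covers.
struct Cell { int x, y; };

double fitness(const std::vector<std::vector<Cell>>& placedPieces) {
    std::array<std::array<int, 8>, 8> cover{};              // how many pieces cover each cell
    for (const auto& piece : placedPieces)
        for (const Cell& c : piece)
            if (c.x >= 0 && c.x < 8 && c.y >= 0 && c.y < 8)
                ++cover[c.y][c.x];
    int empty = 0, overlapped = 0;
    for (const auto& row : cover)
        for (int n : row) {
            if (n == 0) ++empty;                             // uncovered square
            if (n > 1)  overlapped += n - 1;                 // covered more than once
        }
    // Overlaps are penalized so an illegal layout never scores better than a legal one.
    const double overlapPenalty = 2.0;
    return empty + overlapPenalty * overlapped;              // 0 == solved, lower is better
}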
If you use a genetic algorithms framework like Apache GA Framework, you can implement chromosomes as lists of shapes and use permutation crossover and mutation.
You will have blank spaces, which you will try to minimize (reduce to 0). It is not a problem that you will have blanks; just count them and include them as a penalty component in the fitness function.
Generally, GAs are not so strong at combinatorial problems. I did many experiments, like solving Rubik's Cube with a GA, solving Puzzle 15 with a GA, and a 2D optimal cutting problem with a GA. If you are interested, I can provide the research papers and source code (GitHub). GAs are good at giving you a sub-optimal solution, but they are not good at giving you the optimal solution, which is even harder when the problem is combinatorial.
The size of the population is an open question. You should do a convergence investigation with different populations. A bigger population does not mean a better or faster solution. Even 100 is too much for most problems solved with a GA.
If you use absolute coordinates you will need to handle x and y, which is too complicated. Imagine instead that you keep a list of shapes. The packing procedure can take shape after shape and place each one as close as possible to the shapes already handled. This will speed up your convergence.
/**
* Pack function which uses bounding rectangle of the polygons in the sheet
* with specified dimensions.
*
* @param width
*            Sheet width.
* @param height
*            Sheet height.
*/
public void pack1(int width, int height) {
int level[] = new int[width];
for (int i = 0; i < level.length; i++) {
level[i] = 0;
}
/*
* Ensure piece widths fit the sheet width.
*/
for (Piece piece: population.get(worstIndex)) {
if (piece.getWidth() > width) {
piece.flip();
}
}
/*
* Pack pieces.
*/
int x = 0;
int y = 0;
for (Piece piece: population.get(worstIndex)) {
if (x + (int) piece.getWidth() >= width) {
x = 0;
}
/*
* Find y offset for current piece.
*/
y = 0;
for (int dx = x; dx < (x + piece.getWidth()); dx++) {
if (dx < width && y < level[dx]) {
y = level[dx];
}
}
// TODO Check the delta after subtraction.
/*
* Set current piece coordinates.
*/
piece.moveX(x - piece.getMinX());
piece.moveY(y - piece.getMinY());
/*
* Move lines for next placement.
*/
for (int dx = x; dx < (x + piece.getWidth()); dx++) {
if (dx < width) {
level[dx] = (int)(y + piece.getHeight());
}
}
// TODO Some strange behavior with the rotation.
x += (int) piece.getWidth() + 1;
}
}
/**
* Pack function which uses exact boundaries of the polygons in the sheet
* with specified dimensions.
*
* @param width
*            Sheet width.
* @param height
*            Sheet height.
*/
public void pack2(int width, int height) {
/*
* Pieces already placed on the sheet.
*/
List < Piece > front = new ArrayList < Piece > ();
/*
* Virtual Y boundary.
*/
double level = 0;
/*
* Place all pieces on the sheet
*/
for (Piece current: population.get(worstIndex)) {
double bestLeft = 0;
double bestTop = level;
current.moveX(-current.getMinX());
current.moveY(-current.getMinY() + level);
/*
* Move across sheet width.
*/
while (current.getMaxX() < width) {
/*
* Touch sheet bounds or touch another piece.
*/
while (current.getMinY() > 0 && Util.overlap(current, front) == false) {
current.moveY(-1);
}
// TODO Plus one may be wrong if the piece should be part of
// the area.
current.moveY(+2);
/*
* Keep the best found position.
*/
if (current.getMinY() < bestTop) {
bestTop = current.getMinY();
bestLeft = current.getMinX();
}
/*
* Try next position on right.
*/
current.moveX(+1);
}
/*
* Put the piece in the best available coordinates.
*/
current.moveX(-current.getMinX() + bestLeft);
current.moveY(-current.getMinY() + bestTop);
/*
* Shift sheet level if the current piece is out of previous bounds.
*/
if (current.getMaxY() > level) {
level = current.getMaxY() + 1;
}
/*
* Add current piece in the ordered set and the front set.
*/
front.add(current);
}
}
/**
* Pack function which uses exact boundaries of the polygons in the sheet
* with specified dimensions.
*
* @param width
*            Sheet width.
* @param height
*            Sheet height.
*/
public void pack3(int width, int height) {
Polygon stack = new Polygon(
GEOMETRY_FACTORY
.createLinearRing(new Coordinate[] {
new Coordinate(0, -2, 0), new Coordinate(width - 1, -2, 0),
new Coordinate(width - 1, 0, 0), new Coordinate(0, 0, 0), new Coordinate(0, -2, 0)
}),
null, GEOMETRY_FACTORY);
/*
* Virtual Y boundary.
*/
double level = stack.getEnvelopeInternal().getMaxX();
/*
* Place all pieces on the sheet
*/
for (Piece current: population.get(worstIndex)) {
double bestLeft = 0;
double bestTop = level;
current.moveX(-current.getMinX());
current.moveY(-current.getMinY() + level);
/*
* Move across sheet width.
*/
while (current.getMaxX() < width) {
/*
* Touch sheet bounds or touch another piece.
*/
while (current.getMinY() > 0 && Util.overlap(current, stack) == false) {
current.moveY(-1);
}
// TODO Plus one may be wrong if the piece should be part of
// the area.
current.moveY(+2);
/*
* Keep the best found position.
*/
if (current.getMinY() < bestTop) {
bestTop = current.getMinY();
bestLeft = current.getMinX();
}
/*
* Try next position on right.
*/
current.moveX(+1);
}
/*
* Put the piece in the best available coordinates.
*/
current.moveX(-current.getMinX() + bestLeft);
current.moveY(-current.getMinY() + bestTop);
/*
* Shift sheet level if the current piece is out of previous bounds.
*/
if (current.getMaxY() > level) {
level = current.getMaxY() + 1;
}
/*
* Add current piece in the ordered set and the front set.
*/
stack = (Polygon) SnapOverlayOp.union(stack, current.getPolygon()).getBoundary().convexHull();
stack.normalize();
}
}
You have a very interesting problem to solve. I like it very much. First of all, it is a combinatorial problem, which can be very hard to solve with classical genetic algorithms. I have some comments, but they are my subjective opinion:
1) Binary encoding does not give you any advantage (only overhead for encoding and decoding); you can use C# objects instead.
2) It is not smart to ignore pieces outside of the frame.
3) You will be trapped in a local optimum all the time; this is the nature of genetic algorithms.
4) A population size of 1K is too much; use something smaller.
5) Do not use absolute x-y coordinates; use relative coordinates and a proper packing function.
I am currently working on some sort of map generation algorithm for my game. I have a basic understanding of what I want it to do and how it would generate the map.
I want to use the polar coordinate system. I want a circular graph so that each player would spawn on the edge of the circle, evenly spread out.
The algorithm should generate "cities" spread out across the circle (but only inside the circle). Each city should be connected in some way.
The size of the circle should depend on the number of players.
Everything should be random, meaning if I run
GenerateMap()
two times, it should not give the same results.
Here is a picture showing what I want: img
The red arrows are pointing to the cities and the lines are the connections between the cities.
How would I go about creating an algorithm based on the above?
Update: Sorry the link was broken. Fixed it.
I see the cities like this:
compute sizes and constants from N
as your cities should have a constant average density, the map radius can be computed from N directly; it scales linearly with the average or minimum city distance.
loop N (cities) times
generate random (x,y) with uniform distribution
throw away iterations where (x,y) is outside circle
throw away iterations where (x,y) is too near to an already generated city
The paths are similar: just generate all possible paths (non-random) and throw away:
paths much longer than the average or min distance between cities (this connects just neighbors)
paths that intersect an already generated path
In C++ code it could look like this:
//---------------------------------------------------------------------------
// some globals first
const int _max=128; // just max limit for cities and paths
const int R0=10; // city radius
const int RR=R0*R0*9; // min distance^2 between cities
int N=20; // number of cities
int R1=100; // map radius
struct _city { int x,y; }; // all the data you need for city
_city city[_max]; // list of cities
struct _path { int i0,i1; };// index of cities to join with path
_path path[_max]; // list of paths
int M=0; // number of paths in the list
//---------------------------------------------------------------------------
bool LinesIntersect(float X1,float Y1,float X2,float Y2,float X3,float Y3,float X4,float Y4)
{
if (fabs(X2-X3)+fabs(Y2-Y3)<1e-3) return false;
if (fabs(X1-X4)+fabs(Y1-Y4)<1e-3) return false;
float dx1,dy1,dx2,dy2;
dx1 = X2 - X1;
dy1 = Y2 - Y1;
dx2 = X4 - X3;
dy2 = Y4 - Y3;
float s,t,ax,ay,b;
ax=X1-X3;
ay=Y1-Y3;
b=(-(dx2*dy1)+(dx1*dy2)); if (fabs(b)>1e-3) b=1.0/b; else b=0.0;
s = (-(dy1*ax)+(dx1*ay))*b;
t = ( (dx2*ay)-(dy2*ax))*b;
if ((s>=0)&&(s<=1)&&(t>=0)&&(t<=1)) return true;
return false; // No collision
}
//---------------------------------------------------------------------------
// here generate n cities into circle at x0,y0
// compute R1,N from R0,RR,n
void genere(int x0,int y0,int n)
{
_city c;
_path p,*q;
int i,j,cnt,x,y,rr;
Randomize(); // init pseudo random generator
// [generate cities]
R1=(sqrt(RR*n)*8)/10;
rr=R1-R0; rr*=rr;
for (cnt=50*n,i=0;i<n;) // loop until all cities are generated
{
// watchdog
cnt--; if (cnt<=0) break;
// pseudo random position
c.x=Random(R1+R1)-R1; // <-r,+r>
c.y=Random(R1+R1)-R1; // <-r,+r>
// ignore cities outside R1 radius
if ((c.x*c.x)+(c.y*c.y)>rr) continue;
c.x+=x0; // position to center
c.y+=y0;
// ignore city if closer to any other then RR
for (j=0;j<i;j++)
{
x=c.x-city[j].x;
y=c.y-city[j].y;
if ((x*x)+(y*y)<=RR) { j=-1; break; }
}
if (j<0) continue;
// add new city to list
city[i]=c; i++;
}
N=i; // just in case the watchdog kicks in
// [generate paths]
for (M=0,p.i0=0;p.i0<N;p.i0++)
for (p.i1=p.i0+1;p.i1<N;p.i1++)
{
// ignore too long path
x=city[p.i1].x-city[p.i0].x;
y=city[p.i1].y-city[p.i0].y;
if ((x*x)+(y*y)>5*RR) continue; // this constant determine the path density per city
// ignore intersecting path
for (q=path,i=0;i<M;i++,q++)
if ((q->i0!=p.i0)&&(q->i0!=p.i1)&&(q->i1!=p.i0)&&(q->i1!=p.i1))
if (LinesIntersect(
city[p.i0].x,city[p.i0].y,city[p.i1].x,city[p.i1].y,
city[q->i0].x,city[q->i0].y,city[q->i1].x,city[q->i1].y
)) { i=-1; break; }
if (i<0) continue;
// add path to list
if (M>=_max) break;
path[M]=p; M++;
}
}
//---------------------------------------------------------------------------
Here's an overview of a generated layout:
And growth of N:
The blue circles are the cities, the gray area is the target circle, and the lines are the paths. The cnt is just a watchdog to avoid an infinite loop if the constants are wrong. Set the _max value properly so it is high enough for your N, or use dynamic allocation instead. There are many more paths than cities, so they could have a separate _max value to preserve memory (I was too lazy to add it).
You can use the RandSeed to have procedurally generated maps ...
You can rescale the output to better match the circle layout after generation, simply by finding the bounding box and rescaling to radius R1.
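A rough sketch of that rescale step, reusing the globals from the C++ code above (untested; it assumes the same math functions the code already uses):
//---------------------------------------------------------------------------
// rescale generated cities so the outermost one touches the target radius R1
void rescale(int x0,int y0)
{
    double maxr=1.0;
    for (int i=0;i<N;i++)
    {
        double dx=city[i].x-x0,dy=city[i].y-y0;
        double r=sqrt((dx*dx)+(dy*dy))+R0; // include the city radius itself
        if (r>maxr) maxr=r;
    }
    double s=double(R1)/maxr;
    for (int i=0;i<N;i++)
    {
        city[i].x=x0+int(double(city[i].x-x0)*s);
        city[i].y=y0+int(double(city[i].y-y0)*s);
    }
}
//---------------------------------------------------------------------------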
Some constants were obtained empirically, so play with them to achieve the best output for your purpose.
I'm working on a specific layout algorithm to display photos in a unit based grid. The desired behaviour is to have every photo placed in the next available space line by line.
Since there could easily be a thousand photos whose positions need to be calculated at once, efficiency is very important.
Has this problem maybe been solved with an existing algorithm already?
If not, how can I approach it to be as efficient as possible?
Edit
Regarding the positioning:
What I'm basically doing right now is iterating every line of the grid cell by cell until I find room to fit the element. That's why 4 is placed next to 2.
How about keeping a list of next available row by width? Initially the next-available-row list looks like:
(0,0,0,0,0)
When you've added the first photo, it looks like
(0,0,0,0,1)
Then
(0,0,0,2,2)
Then
(0,0,0,3,3)
Then
(1,1,1,4,4)
And the final photo doesn't change the list.
This could be efficient because you're only maintaining a small list, updating a little bit at each iteration (versus searching the entire space every time). It gets a little complicated - there could be a situation (with a tall photo) where the nominal next available row doesn't work, and then you could default to the existing approach. But overall I think this should save a fair amount of time, at the cost of a little added complexity.
Update
In response to #matteok's request for a coordinateForPhoto(width, height) method:
Let's say I called that array "nextAvailableRowByWidth".
public Coordinate coordinateForPhoto(width, height) {
int rowIndex = nextAvailableRowByWidth[width - 1]; // because arrays are zero-indexed
int[] row = space[rowIndex]
int column = findConsecutiveEmptySpace(width, row);
for (int i = 1; i < height; i++) {
if (!consecutiveEmptySpaceExists(width, space[rowIndex + i], column)) {
return null;
// return and fall back on the slow method, starting at rowIndex
}
}
// now either you broke out and are solving some other way,
// or your starting point is rowIndex, column. Done.
return new Coordinate(rowIndex, column);
}
Update #2
In response to #matteok's request for how to update the nextAvailableRowByWidth array:
OK, so you've just placed a new photo of height H and width W at row R. Any elements in the array which are less than R don't change (because this change didn't affect their row, so if there were 3 consecutive spaces available in the row before placing the photo, there are still 3 consecutive spaces available in it after). Every element which is in the range (R, R+H) needs to be checked, because it might have been affected. Let's postulate a method maxConsecutiveBlocksInRow() - because that's easy to write, right?
public void updateAvailableAfterPlacing(int W, int H, int R) {
for (int i = 0; i < nextAvailableRowByWidth.length; i++) {
if (nextAvailableRowByWidth[i] < R) {
continue;
}
int r = R;
while (maxConsecutiveBlocksInRow(r) < i + 1) {
r++;
}
nextAvailableRowByWidth[i] = r;
}
}
I think that should do it.
How about a matrix (your example would be 5x9) where each cell has a value representing the distance from the top left corner (for instance (row+1)*(column+1); the +1 is only necessary if your first row and column are 0)? In this matrix you look for the area which has the lowest value (when summing up the values of empty cells).
A 2nd matrix (or a 3rd dimension of the first matrix) stores the status of each cell.
edit:
int[][] grid = new int[9][5];
int[] filledRows = new int [9];
int photowidth = 2;
int photoheight = 1;
int emptyRowCounter = 0;
boolean photoFits = true;
for(int i = 0; i < grid.length; i++){
for(int m = 0; m < filledRows.length; m++){
if(filledRows[m]-(photoheight-1) > i || filledRows[m]+(photoheight-1) < i){
for(int j = 0; j < grid[i].length; j++){
if(grid[i][j] == 0){
for(int k = 0; k < photowidth; k++){
for(int l = 0; l < photoheight; l++){
if(grid[i+l][j+k]!=0){
photoFits = false;
}
}
}
} else{
emptyRowCounter++;
}
}
if(photoFits){
//place Photo at i,j
}
if(emptyRowCounter == 5){
filledRows[i] = 1;
}
}
}
}
In the gif you have above, it turned out nicely that there was a photo (5) that could fit into the gap under (1) and to the left of (2). My intuition suggests we want to avoid creating gaps like that. Here is an idea that should avoid these gaps.
Maintain a list of "open regions", where an open region has a int leftBoundary, an int topBoundary, and an optional int bottomBoundary. The first open region is just the whole grid (leftBoundary:0, topBoundary: 0, bottom: null).
Sort the photos by height, breaking ties by width.
Until you have placed all photos:
Choose the tallest photo (in case of ties, choose the widest of the tallest photos). Find the first open region it can fit in (such that grid.Width - region.leftBoundary >= photo.Width). Place the photo at the top left of this region. When you place this photo, it may span the entire width or height of the region.
If it spans both the width and the height of the region, the region is filled! Remove this region from the list of open regions.
If it spans the width, but not the height, add the photo's height to the topBoundary of the region.
If it spans the height, but not the width, add the photo's width to the leftBoundary of the region.
If it does not span the height or width of the boundary, we are going to conceptually divide this region into two: one region will cover the space directly to the right of this photo (call it rightRegion), and the other region will cover the space below this region (call it belowRegion).
rightRegion = {
leftBoundary = parentRegion.leftBoundary + photo.width,
topBoundary = parentRegion.topBoundary,
bottomBoundary = parentRegion.topBoundary + photo.height
}
belowRegion = {
leftBoundary = 0,
topBoundary = parentRegion.topBoundary + photo.height,
bottomBoundary = parentRegion.bottomBoundary
}
Replace the current region in the list of open regions with rightRegion, and insert belowRegion directly after rightRegion.
You can visualize how this algorithm would work on your example: First, it would sort the photos: (2,3,4,1,5).
It considers 2, which fits into the first region (the whole grid). When it places 2 at the top left, it splits that region into the space directly to the right of 2, and the space below 2.
Then, it considers 3. It considers the open regions in turn. The first open region is to the right of 2. 3 fits there, so that's where it goes. It spans the width of the region, so the region's topBoundary gets adjusted downward.
Then, it considers 4. It again fits in the first open region, so it places 4 there. 4 spans the height of the region, so the region's leftBoundary gets adjusted rightward.
Then, 1 gets put in the 1x1 gap to the right of 4, filling its region. Finally, 5 gets put just below 2.
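Here is a rough, untested C++ sketch of that region bookkeeping; the Photo and Region types and the fit test are assumptions, not your actual data model:
#include <algorithm>
#include <iterator>
#include <list>
#include <optional>
#include <vector>

// Hedged sketch of the open-region placement described above.
struct Photo  { int width, height; int x = 0, y = 0; };
struct Region {
    int left, top;
    std::optional<int> bottom;   // no value = open all the way down
};

void layout(std::vector<Photo>& photos, int gridWidth) {
    // Tallest first, ties broken by width.
    std::sort(photos.begin(), photos.end(), [](const Photo& a, const Photo& b) {
        return a.height != b.height ? a.height > b.height : a.width > b.width;
    });
    std::list<Region> open = { Region{0, 0, std::nullopt} };
    for (Photo& p : photos) {
        for (auto it = open.begin(); it != open.end(); ++it) {
            int regionWidth = gridWidth - it->left;
            if (regionWidth < p.width) continue;                          // too narrow
            if (it->bottom && *it->bottom - it->top < p.height) continue; // too short
            p.x = it->left;  p.y = it->top;                               // top-left of the region
            bool spansWidth  = (p.width == regionWidth);
            bool spansHeight = it->bottom && (p.height == *it->bottom - it->top);
            if (spansWidth && spansHeight) { open.erase(it); }            // region filled
            else if (spansWidth)  { it->top  += p.height; }
            else if (spansHeight) { it->left += p.width; }
            else {
                Region right { it->left + p.width, it->top, it->top + p.height };
                Region below { 0, it->top + p.height, it->bottom };
                *it = right;
                open.insert(std::next(it), below);
            }
            break;
        }
    }
}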
In my (Minecraft-like) 3D voxel world, I want to smooth the shapes for more natural visuals. Let's look at this example in 2D first.
Left is how the world looks without any smoothing. The terrain data is binary and each voxel is rendered as a unit size cube.
In the center you can see a naive circular smoothing. It only takes the four directly adjacent blocks into account. It is still not very natural looking. Moreover, I'd like to have flat 45-degree slopes emerge.
On the right you can see a smoothing algorithm I came up with. It takes the eight direct and diagonal neighbors into account in order to come up with the shape of a block. I have the C++ code online. Here is the code that comes up with the control points that the bezier curve is drawn along.
#include <iostream>
#include <list>
#include <glm/glm.hpp>
using namespace std;
using namespace glm;
list<list<dvec2>> Points::find(ivec2 block)
{
// Control points
list<list<ivec2>> lines;
list<ivec2> *line = nullptr;
// Fetch blocks, neighbours start top left and count
// around the center block clock wise
int center = m_blocks->get(block);
int neighs[8];
for (int i = 0; i < 8; i++) {
auto coord = blockFromIndex(i);
neighs[i] = m_blocks->get(block + coord);
}
// Iterate over neighbour blocks
for (int i = 0; i < 8; i++) {
int current = neighs[i];
int next = neighs[(i + 1) % 8];
bool is_side = (((i + 1) % 2) == 0);
bool is_corner = (((i + 1) % 2) == 1);
if (line) {
// Border between air and ground needs a line
if (current != center) {
// Sides are cool, but corners get skipped when they don't
// stop a line
if (is_side || next == center)
line->push_back(blockFromIndex(i));
} else if (center || is_side || next == center) {
// Stop line since we found an end of the border. Always
// stop for ground blocks here, since they connect over
// corners so there must be open docking sites
line = nullptr;
}
} else {
// Start a new line for the border between air and ground that
// just appeared. However, corners get skipped if they don't
// end a line.
if (current != center) {
lines.emplace_back();
line = &lines.back();
line->push_back(blockFromIndex(i));
}
}
}
// Merge last line with first if touching. Only close around a differing corner for air
// blocks.
if (neighs[7] != center && (neighs[0] != center || (!center && neighs[1] != center))) {
// Skip first corner if enclosed
if (neighs[0] != center && neighs[1] != center)
lines.front().pop_front();
if (lines.size() == 1) {
// Close circle
auto first_point = lines.front().front();
lines.front().push_back(first_point);
} else {
// Insert last line into first one
lines.front().insert(lines.front().begin(), line->begin(), line->end());
lines.pop_back();
}
}
// Discard lines with too few points
auto i = lines.begin();
while (i != lines.end()) {
if (i->size() < 2)
lines.erase(i++);
else
++i;
}
// Convert to concrete points for output
list<list<dvec2>> points;
for (auto &line : lines) {
points.emplace_back();
for (auto &neighbour : line)
points.back().push_back(pointTowards(neighbour));
}
return points;
}
glm::ivec2 Points::blockFromIndex(int i)
{
// Returns first positive representant, we need this so that the
// conditions below "wrap around"
auto modulo = [](int i, int n) { return (i % n + n) % n; };
ivec2 block(0, 0);
// For two indices, zero is right so skip
if (modulo(i - 1, 4))
// The others are either 1 or -1
block.x = modulo(i - 1, 8) / 4 ? -1 : 1;
// Other axis is same sequence but shifted
if (modulo(i - 3, 4))
block.y = modulo(i - 3, 8) / 4 ? -1 : 1;
return block;
}
dvec2 Points::pointTowards(ivec2 neighbour)
{
dvec2 point;
point.x = static_cast<double>(neighbour.x);
point.y = static_cast<double>(neighbour.y);
// Convert from neighbour space into
// drawing space of the block
point *= 0.5;
point += dvec2(.5);
return point;
}
However, this is still in 2D. How do I translate this algorithm into three dimensions?
You should probably have a look at the marching cubes algorithm and work from there. You can easily control the smoothness of the resulting blob:
Imagine that each voxel defines a field, with a high density at its center, slowly fading to nothing as you move away from the center. For example, you could use a function that is 1 inside a voxel and goes to 0 two voxels away. No matter what exact function you choose, make sure that it's only non-zero inside a limited (preferably small) area.
For each point, sum the densities of all fields.
Use the marching cubes algorithm on the sum of those fields
Use a high resolution mesh for the algorithm
In order to change the look/smoothness you change the density function and the threshold of the marching cubes algorithm. A possible extension to marching cubes to create smoother meshes is the following idea: Imagine that you encounter two points on an edge of a cube, where one point lies inside your volume (above a threshold) and the other outside (under the threshold). In this case many marching cubes algorithms place the boundary exactly at the middle of the edge. One can calculate the exact boundary point - this gets rid of aliasing.
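For illustration, here is a rough, untested C++ sketch of the density field and the edge interpolation; the falloff function, the Vec3 type, and the brute-force sum are assumptions, and the marching cubes triangulation itself is left to an existing implementation:
#include <cmath>
#include <vector>

// Hedged sketch: build the scalar field that marching cubes would triangulate.
struct Vec3 { double x, y, z; };

// A compact falloff: 1 at the voxel centre, 0 at a distance of 2 voxels or more.
double falloff(double distance) {
    double t = 1.0 - distance / 2.0;
    return t > 0.0 ? t * t : 0.0;
}

// Sum the fields of all solid voxels at sample point p. In practice only voxels
// within 2 units of p need to be visited, e.g. via a grid lookup.
double density(const Vec3& p, const std::vector<Vec3>& solidVoxelCentres) {
    double sum = 0.0;
    for (const Vec3& c : solidVoxelCentres) {
        double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        sum += falloff(std::sqrt(dx * dx + dy * dy + dz * dz));
    }
    return sum;
}

// Where marching cubes finds an edge with one corner above and one below the
// threshold, interpolate along the edge instead of taking the midpoint.
// d0 != d1 is guaranteed because the corners straddle the threshold.
double edgeOffset(double d0, double d1, double threshold) {
    return (threshold - d0) / (d1 - d0);   // 0..1 from corner 0 towards corner 1
}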
Also I would recommend that you run a mesh simplification algorithm after that. Using marching cubes results in meshes with many unnecessary triangles.
As an alternative to my answer above: you could also use NURBS or any algorithm for subdivision surfaces. The subdivision surface algorithms in particular are specialized for smoothing meshes. Depending on the algorithm and its configuration you will get smoother versions of your original mesh with
the same volume
the same surface
the same silhouette
and so on.
Use 3D implementations of Bézier curves, known as Bézier surfaces, or use the B-spline surface algorithms explained:
here
or
here
I have a 2D image randomly and sparsely scattered with pixels.
Given a point on the image, I need to find the distance to the closest pixel that is not the background color (black).
What is the fastest way to do this?
The only method I could come up with is building a kd-tree for the pixels, but I would really like to avoid such expensive preprocessing. Also, it seems that a kd-tree gives me more than I need: I only need the distance to something, and I don't care what this something is.
Personally, I'd ignore MusiGenesis' suggestion of a lookup table.
Calculating the distance between pixels is not expensive, particularly as for this initial test you don't need the actual distance so there's no need to take the square root. You can work with distance^2, i.e:
r^2 = dx^2 + dy^2
Also, if you're going outwards one pixel at a time remember that:
(n + 1)^2 = n^2 + 2n + 1
or if nx is the current value and ox is the previous value:
nx^2 = ox^2 + 2ox + 1
= ox^2 + 2(nx - 1) + 1
= ox^2 + 2nx - 1
=> nx^2 += 2nx - 1
It's easy to see how this works:
1^2 = 0 + 2*1 - 1 = 1
2^2 = 1 + 2*2 - 1 = 4
3^2 = 4 + 2*3 - 1 = 9
4^2 = 9 + 2*4 - 1 = 16
5^2 = 16 + 2*5 - 1 = 25
etc...
So, in each iteration you therefore need only retain some intermediate variables thus:
int dx2 = 0, dy2, r2;
for (dx = 1; dx < w; ++dx) { // ignoring bounds checks
dx2 += (dx << 1) - 1;
dy2 = 0;
for (dy = 1; dy < h; ++dy) {
dy2 += (dy << 1) - 1;
r2 = dx2 + dy2;
// do tests here
}
}
Tada! r^2 calculation with only bit shifts, adds and subtracts :)
Of course, on any decent modern CPU calculating r^2 = dx*dx + dy*dy might be just as fast as this...
As Pyro says, search the perimeter of a square that you keep moving out one pixel at a time from your original point (i.e. increasing the width and height by two pixels at a time). When you hit a non-black pixel, you calculate the distance (this is your first expensive calculation) and then continue searching outwards until the width of your box is twice the distance to the first found point (any points beyond this cannot possibly be closer than your original found pixel). Save any non-black points you find during this part, and then calculate each of their distances to see if any of them are closer than your original point.
In an ideal find, you only have to make one expensive distance calculation.
Update: Because you're calculating pixel-to-pixel distances here (instead of arbitrary precision floating point locations), you can speed up this algorithm substantially by using a pre-calculated lookup table (just a height-by-width array) to give you distance as a function of x and y. A 100x100 array costs you essentially 40K of memory and covers a 200x200 square around the original point, and spares you the cost of doing an expensive distance calculation (whether Pythagorean or matrix algebra) for every colored pixel you find. This array could even be pre-calculated and embedded in your app as a resource, to spare you the initial calculation time (this is probably serious overkill).
Update 2: Also, there are ways to optimize searching the square perimeter. Your search should start at the four points that intersect the axes and move one pixel at a time towards the corners (you have 8 moving search points, which could easily make this more trouble than it's worth, depending on your application's requirements). As soon as you locate a colored pixel, there is no need to continue towards the corners, as the remaining points are all further from the origin.
After the first found pixel, you can further restrict the additional search area required to the minimum by using the lookup table to ensure that each searched point is closer than the found point (again starting at the axes, and stopping when the distance limit is reached). This second optimization would probably be much too expensive to employ if you had to calculate each distance on the fly.
If the nearest pixel is within the 200x200 box (or whatever size works for your data), you will only search within a circle bounded by the pixel, doing only lookups and <>comparisons.
You didn't specify how you want to measure distance. I'll assume L1 (rectilinear) because it's easier; possibly these ideas could be modified for L2 (Euclidean).
If you're only doing this for relatively few pixels, then just search outward from the source pixel in a spiral until you hit a nonblack one.
If you're doing this for many/all of them, how about this: Build a 2-D array the size of the image, where each cell stores the distance to the nearest nonblack pixel (and if necessary, the coordinates of that pixel). Do four line sweeps: left to right, right to left, bottom to top, and top to bottom. Consider the left to right sweep; as you sweep, keep a 1-D column containing the last nonblack pixel seen in each row, and mark each cell in the 2-D array with the distance to and/or coordinates of that pixel. O(n^2).
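If you do want the full distance map, the sweep idea above can also be written as the standard two-pass L1 distance transform; here is a rough, untested C++ sketch of that variant (nonBlack[y][x] marking non-background pixels is an assumption about the input layout):
#include <algorithm>
#include <vector>

// Hedged sketch of the classic two-pass L1 (Manhattan) distance transform:
// a forward pass propagates distances from the top-left, a backward pass
// from the bottom-right. O(w*h) total; any query afterwards is a single lookup.
std::vector<std::vector<int>> l1DistanceMap(const std::vector<std::vector<bool>>& nonBlack) {
    const int h = nonBlack.size(), w = nonBlack[0].size();
    const int INF = w + h;                                  // larger than any real distance
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, INF));
    for (int y = 0; y < h; ++y)                             // forward pass
        for (int x = 0; x < w; ++x) {
            if (nonBlack[y][x]) { dist[y][x] = 0; continue; }
            if (y > 0) dist[y][x] = std::min(dist[y][x], dist[y - 1][x] + 1);
            if (x > 0) dist[y][x] = std::min(dist[y][x], dist[y][x - 1] + 1);
        }
    for (int y = h - 1; y >= 0; --y)                        // backward pass
        for (int x = w - 1; x >= 0; --x) {
            if (y + 1 < h) dist[y][x] = std::min(dist[y][x], dist[y + 1][x] + 1);
            if (x + 1 < w) dist[y][x] = std::min(dist[y][x], dist[y][x + 1] + 1);
        }
    return dist;
}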
Alternatively, a k-d tree is overkill; you could use a quadtree. Only a little more difficult to code than my line sweep, a little more memory (but less than twice as much), and possibly faster.
Search "Nearest neighbor search", first two links in Google should help you.
If you are only doing this for one pixel per image, I think your best bet is just a linear search, expanding a box outwards one pixel at a time. You can't take the first point you find if your search box is square, so you have to be careful.
Yes, the nearest neighbor search is good, but it does not guarantee you'll find the nearest pixel. Moving one pixel out each time produces a square search: the diagonals are farther away than the horizontal/vertical directions. If this is important, you'll want to verify: continue expanding until the horizontal distance alone exceeds the distance to the 'found' pixel, and then calculate distances for all non-black pixels that were located.
Ok, this sounds interesting.
I made a C++ version of a solution; I don't know if this helps you. I think it works fast enough, as it's almost instant on an 800*600 matrix. If you have any questions, just ask.
Sorry for any mistakes I've made, it's 10-minute code...
This is an iterative version (I was planning on making a recursive one too, but I changed my mind).
The algorithm could be improved by not adding to the points array any point that is at a larger distance from the starting point than min_dist, but this involves calculating the distance from the starting point for each pixel (regardless of its color).
Hope that helps
//(c++ version)
#include<iostream>
#include<cmath>
#include<ctime>
using namespace std;
//ITERATIVE VERSION
//picture width & height
#define width 800
#define height 600
//indexex
int i,j;
//initial point coordinates
int x,y;
//variables to work with the array
int p,u;
//minimum dist
double min_dist=2000000000;
//array for memorising the points added
struct point{
int x;
int y;
} points[width*height];
double dist;
bool viz[width][height];
// direction vectors, used for adding adjacent points in the "points" array.
int dx[8]={1,1,0,-1,-1,-1,0,1};
int dy[8]={0,1,1,1,0,-1,-1,-1};
int k,nX,nY;
//we will generate an image with white&black pixels (0&1)
bool image[width][height]; //full size so indices up to width-1 / height-1 stay in bounds
int main(){
srand(time(0));
//generate the random pic
for(i=1;i<=width-1;i++)
for(j=1;j<=height-1;j++)
if(rand()%10001<=9999) //9999/10000 chances of generating a black pixel
image[i][j]=0;
else image[i][j]=1;
//random coordinates for starting x&y
x=rand()%width;
y=rand()%height;
p=1;u=1;
points[1].x=x;
points[1].y=y;
while(p<=u){
for(k=0;k<=7;k++){
nX=points[p].x+dx[k];
nY=points[p].y+dy[k];
//nX&nY are the coordinates for the next point
//if we haven't added the point yet
//also check if the point is valid
if(nX>0&&nY>0&&nX<width&&nY<height)
if(viz[nX][nY] == 0 ){
//mark it as added
viz[nX][nY]=1;
//add it in the array
u++;
points[u].x=nX;
points[u].y=nY;
//if it's not black
if(image[nX][nY]!=0){
//calculate the distance
dist=(x-nX)*(x-nX) + (y-nY)*(y-nY);
dist=sqrt(dist);
//if the dist is shorter than the minimum, we save it
if(dist<min_dist)
min_dist=dist;
//you could save the coordinates of the point that has
//the minimum distance too, like sX=nX;, sY=nY;
}
}
}
p++;
}
cout<<"Minimum dist:"<<min_dist<<"\n";
return 0;
}
I'm sure this could be done better, but here's some code that searches the perimeter of a square around the centre pixel, examining the centre of each side first and moving toward the corners. If a pixel isn't found, the perimeter (radius) is expanded until either the radius limit is reached or a pixel is found. The first implementation was a loop doing a simple spiral around the centre point, but as noted that doesn't find the absolute closest pixel. SomeBigObjCStruct's creation inside the loop was very slow - removing it from the loop made it good enough, and the spiral approach is what got used. But here's this implementation anyway - beware, little to no testing done.
It is all done with integer addition and subtraction.
- (SomeBigObjCStruct *)nearestWalkablePoint:(SomeBigObjCStruct *)point {
typedef struct _testPoint { // using the IYMapPoint object here is very slow
int x;
int y;
} testPoint;
// see if the point supplied is walkable
testPoint centre;
centre.x = point.x;
centre.y = point.y;
NSMutableData *map = [self getWalkingMapDataForLevelId:point.levelId];
// check point for walkable (case radius = 0)
if(testThePoint(centre.x, centre.y, map) != 0) // bullseye
return point;
// radius is the distance from the location of point. A square is checked on each iteration, radius units from point.
// The point with y=0 or x=0 distance is checked first, i.e. the centre of the side of the square. A cursor variable
// is used to move along the side of the square looking for a walkable point. This proceeds until a walkable point
// is found or the side is exhausted. Sides are checked until radius is exhausted at which point the search fails.
int radius = 1;
BOOL leftWithinMap = YES, rightWithinMap = YES, upWithinMap = YES, downWithinMap = YES;
testPoint leftCentre, upCentre, rightCentre, downCentre;
testPoint leftUp, leftDown, rightUp, rightDown;
testPoint upLeft, upRight, downLeft, downRight;
leftCentre = rightCentre = upCentre = downCentre = centre;
int foundX = -1;
int foundY = -1;
while(radius < 1000) {
// radius increases. move centres outward
if(leftWithinMap == YES) {
leftCentre.x -= 1; // move left
if(leftCentre.x < 0) {
leftWithinMap = NO;
}
}
if(rightWithinMap == YES) {
rightCentre.x += 1; // move right
if(!(rightCentre.x < kIYMapWidth)) {
rightWithinMap = NO;
}
}
if(upWithinMap == YES) {
upCentre.y -= 1; // move up
if(upCentre.y < 0) {
upWithinMap = NO;
}
}
if(downWithinMap == YES) {
downCentre.y += 1; // move down
if(!(downCentre.y < kIYMapHeight)) {
downWithinMap = NO;
}
}
// set up cursor values for checking along the sides of the square
leftUp = leftDown = leftCentre;
leftUp.y -= 1;
leftDown.y += 1;
rightUp = rightDown = rightCentre;
rightUp.y -= 1;
rightDown.y += 1;
upRight = upLeft = upCentre;
upRight.x += 1;
upLeft.x -= 1;
downRight = downLeft = downCentre;
downRight.x += 1;
downLeft.x -= 1;
// check centres
if(testThePoint(leftCentre.x, leftCentre.y, map) != 0) {
foundX = leftCentre.x;
foundY = leftCentre.y;
break;
}
if(testThePoint(rightCentre.x, rightCentre.y, map) != 0) {
foundX = rightCentre.x;
foundY = rightCentre.y;
break;
}
if(testThePoint(upCentre.x, upCentre.y, map) != 0) {
foundX = upCentre.x;
foundY = upCentre.y;
break;
}
if(testThePoint(downCentre.x, downCentre.y, map) != 0) {
foundX = downCentre.x;
foundY = downCentre.y;
break;
}
int i;
for(i = 0; i < radius; i++) {
if(leftWithinMap == YES) {
// LEFT Side - stop short of top/bottom rows because up/down horizontal cursors check that line
// if cursor position is within map
if(i < radius - 1) {
if(leftUp.y > 0) {
// check it
if(testThePoint(leftUp.x, leftUp.y, map) != 0) {
foundX = leftUp.x;
foundY = leftUp.y;
break;
}
leftUp.y -= 1; // moving up
}
if(leftDown.y < kIYMapHeight) {
// check it
if(testThePoint(leftDown.x, leftDown.y, map) != 0) {
foundX = leftDown.x;
foundY = leftDown.y;
break;
}
leftDown.y += 1; // moving down
}
}
}
if(rightWithinMap == YES) {
// RIGHT Side
if(i < radius - 1) {
if(rightUp.y > 0) {
if(testThePoint(rightUp.x, rightUp.y, map) != 0) {
foundX = rightUp.x;
foundY = rightUp.y;
break;
}
rightUp.y -= 1; // moving up
}
if(rightDown.y < kIYMapHeight) {
if(testThePoint(rightDown.x, rightDown.y, map) != 0) {
foundX = rightDown.x;
foundY = rightDown.y;
break;
}
rightDown.y += 1; // moving down
}
}
}
if(upWithinMap == YES) {
// UP Side
if(upRight.x < kIYMapWidth) {
if(testThePoint(upRight.x, upRight.y, map) != 0) {
foundX = upRight.x;
foundY = upRight.y;
break;
}
upRight.x += 1; // moving right
}
if(upLeft.x > 0) {
if(testThePoint(upLeft.x, upLeft.y, map) != 0) {
foundX = upLeft.x;
foundY = upLeft.y;
break;
}
upLeft.x -= 1; // moving left
}
}
if(downWithinMap == YES) {
// DOWN Side
if(downRight.x < kIYMapWidth) {
if(testThePoint(downRight.x, downRight.y, map) != 0) {
foundX = downRight.x;
foundY = downRight.y;
break;
}
downRight.x += 1; // moving right
}
if(downLeft.x > 0) {
if(testThePoint(downLeft.x, downLeft.y, map) != 0) {
foundX = downLeft.x;
foundY = downLeft.y;
break;
}
downLeft.x -= 1; // moving left
}
}
}
if(foundX != -1 && foundY != -1) {
break;
}
radius++;
}
// build the return object
if(foundX != -1 && foundY != -1) {
SomeBigObjCStruct *foundPoint = [SomeBigObjCStruct mapPointWithX:foundX Y:foundY levelId:point.levelId];
foundPoint.z = [self zWithLevelId:point.levelId];
return foundPoint;
}
return nil;
}
You can combine many ways to speed it up.
A way to accelerate the pixel lookup is to use what I call a spatial lookup map. It is basically a downsampled map (say of 8x8 pixels, but it's a tradeoff) of the pixels in that block. Values can be "no pixels set", "partial pixels set", or "all pixels set". This way one read can tell whether a block/cell is full, partially full, or empty (a small sketch follows at the end of this answer).
Scanning a box/rectangle around the center may not be ideal because there are many pixels/cells which are far away. I use a circle-drawing algorithm (Bresenham) to reduce the overhead.
Reading the raw pixel values can happen in horizontal batches, for example a byte (for a cell size of 8x8 or multiples of it), dword, or long. This should give you a serious speedup again.
You can also use multiple levels of "spatial lookup maps"; it's again a tradeoff.
For the distance calculation, the mentioned lookup table can be used, but it's a (cache) bandwidth vs. calculation speed tradeoff (I don't know how it performs on a GPU, for example).
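Here is a rough, untested C++ sketch of building such a spatial lookup map with an 8x8 cell size; the flat bool pixel buffer and the Block enum are assumptions, not code from any particular library:
#include <algorithm>
#include <vector>

// Hedged sketch: one value per 8x8 block saying whether the block is empty,
// partially set, or full, so a whole block can often be skipped with one read.
enum class Block : unsigned char { Empty, Partial, Full };

std::vector<Block> buildLookupMap(const std::vector<bool>& pixels, int width, int height) {
    const int cell = 8;
    const int bw = (width  + cell - 1) / cell;              // blocks per row
    const int bh = (height + cell - 1) / cell;
    std::vector<Block> map(bw * bh);
    for (int by = 0; by < bh; ++by)
        for (int bx = 0; bx < bw; ++bx) {
            int set = 0, total = 0;
            for (int y = by * cell; y < std::min((by + 1) * cell, height); ++y)
                for (int x = bx * cell; x < std::min((bx + 1) * cell, width); ++x) {
                    ++total;
                    if (pixels[y * width + x]) ++set;
                }
            map[by * bw + bx] = set == 0 ? Block::Empty
                              : set == total ? Block::Full : Block::Partial;
        }
    return map;                                             // query with map[by * bw + bx]
}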
Another approach I have investigated and likely will stick to: Utilizing the Bresenham circle algorithm.
It is surprisingly fast as it saves you any sort of distance comparisons!
You effectively just draw bigger and bigger circles around your target point so that when the first time you encounter a non-black pixel you automatically know it is the closest, saving any further checks.
What I have not verified yet is whether the Bresenham circle will catch every single pixel, but that wasn't a concern for my case, as my pixels will occur in blobs of some sort.
I would do a simple lookup table - for every pixel, precalculate distance to the closest non-black pixel and store the value in the same offset as the corresponding pixel. Of course, this way you will need more memory.