I have a 500 x 400px rectangle with a 100px grid inside it. Now I want to fill that rectangle with smaller, randomly sized squares that snap to the grid. This means that the smaller squares can be either 100, 200, 300 or 400 pixels in size. Their size and position need to be random, so the output will look different every time it runs.
This image shows the large rectangle, its grid, and a possible output with the smaller squares I'm trying to create.
I'm generating this in Ruby / Sinatra with divs, but I guess the question is really about the general algorithm to use rather than the language.
Any suggestions on how to do this with the least amount of code?
This method would take a lot of code, but I think what I would do is use Donald Knuth's Dancing Links algorithm (DLX) (or some other algorithm) to find all possible arrangements of squares. You can compute the arrangements ahead of time, then quickly pick one at random later when you need it.
You can read his paper about the algorithm and its application to pentominoes (which is very similar to your problem) here:
http://www-cs-faculty.stanford.edu/~uno/papers/dancing-color.ps.gz
http://en.wikipedia.org/wiki/Dancing_Links
One simple recursive approach you could take that would produce a fairly good random distribution works like this: as a base case, any 100x100 region must be filled with a 100x100 square. Otherwise, if the region is n x n for some n small enough to hold a single square, you may choose to tile it with one square of that size. Otherwise, pick some side of the rectangle that isn't of length 100, pick a random cut point along it that's a multiple of 100, split the rectangle there, and recursively tile both halves.
The main advantage of this approach is that you never have to keep track of where you've put older rectangles to avoid hitting them. You always work with empty rectangles and keep recursively subdividing the problem in a way that ensures that the regions never overlap.
This may not always give good results, but it's very easy to code up (I'd assume maybe 15-25 lines of code total) and can easily be tweaked to change the output.
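A quick Python sketch of that recursion (my own illustration of the idea, not the only way to do it; the 30% chance of stopping early with one big square is just a knob to tweak):

import random

CELL = 100  # grid step in pixels

def tile(x, y, w, h, out):
    # Fill the w x h region at (x, y) with grid-aligned squares.
    # All coordinates and sizes are assumed to be multiples of CELL.
    if w == CELL and h == CELL:            # base case: a single cell
        out.append((x, y, CELL))
        return
    if w == h and random.random() < 0.3:   # square region: sometimes place one big square
        out.append((x, y, w))
        return
    if w >= h:                             # otherwise split the longer side at a random grid line
        cut = CELL * random.randint(1, w // CELL - 1)
        tile(x, y, cut, h, out)
        tile(x + cut, y, w - cut, h, out)
    else:
        cut = CELL * random.randint(1, h // CELL - 1)
        tile(x, y, w, cut, out)
        tile(x, y + cut, w, h - cut, out)

squares = []
tile(0, 0, 400, 400, squares)   # e.g. a 400 x 400 area on a 100px grid
print(squares)                  # list of (x, y, size) tuples to render as divs

Each call only ever subdivides an empty region, so nothing can overlap, and every leaf of the recursion is one square.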
Hope this helps!
What I am asking here is an algorithm question. I'm not asking for specifics of how to do it in the programming language I'm working in or with the framework and libraries I'm currently using. I want to know how to do this in principle.
As a hobby, I am working on an open source virtual reality remake of the 1992 first-person shooter game Wolfenstein 3D. My program will support classic mods and map packs for WOLF3D made in the original format from the 90s. This means that my program will not know in advance what the maps are going to be. They are loaded in at runtime from user provided files.
A Wolfenstein 3D map is a 2D square grid of normally 64x64 tiles. Let's assume I have a 2D array of bools which holds true if a particular tile can be traversed by the player and false if the tile will never be traversable no matter what happens in the game.
I want to generate rectangular collision objects for a modern game engine which will prevent the player from moving into non-traversable tiles on the map. Right now, I have a small collision object on each surface of each wall tile that has a traversable tile next to it, and that is very inefficient because it creates far more collision objects than necessary. What I should have instead is a smaller number of large rectangles which fill all of the squares on the grid where that 2D array I mentioned has a false value to indicate non-traversable.
When I search for algorithms or research on problems like this, I find lots of information about rectangle packing for the purpose of making texture atlases for games, which packs rectangles into a square, but I haven't found anything that tries to cover an arbitrary set of selected/marked square tiles with the smallest number of rectangles.
The naive approach which occurs to me is to first make 64 rectangles representing the 64 rows and then chop out whatever squares are traversable. But I suspect that there's got to be an algorithm which can do better, meaning that it can fill the same space with fewer rectangles. Maybe something that starts with my naive approach and then checks each rectangle for adjacent rectangles it could merge with? But I'm not sure how far to take that approach or whether it would truly reduce the number of rectangles.
The result doesn't have to be perfect. I am just fishing here to see if anyone has any magic tricks that could take me even a little bit beyond the naive approach.
Has anyone done this before? What is it called? Even just knowing some of the vocabulary I would need to talk about this would help. Thanks!
(later edit)
Here is some sample input as comma-separated values. The 1s represent the area that must be filled with the rectangles while the 0s represent the area that should not be filled with the rectangles.
I expect that the result would be a list of sets of 4 integers where each set represents a rectangle like this:
First integer would be the x coordinate of the left/western edge of the rectangle.
Second integer would be the y coordinate of the top/northern edge of the rectangle.
Third integer would be the width of the rectangle.
Fourth integer would be the depth of the rectangle.
My program is in C#, but I'm sure I can translate anything written in a mainstream general-purpose programming language or pseudocode.
Mark all tiles as not visited
For each tile:
    skip if the tile is not a top-left corner or was visited before
    # now, the tile is a top-left corner
    expand right until the top-right corner is found
    expand down while every tile in the expanded row is still fillable
    save the rectangle
    mark all tiles in the rectangle as visited
However simplistic it looks, it will likely generate a minimal number of rectangles, simply because we need at least one rectangle per pair of top corners.
For faster downward expansion, it makes sense to precompute a table holding the sum of all elements above and to the left of each tile (a.k.a. an integral image).
For non-overlapping rectangles, the worst-case complexity for an n x n "image" should not exceed O(n^3). If rectangles can overlap (which would result in fewer of them), the integral-image optimization is not applicable and the worst case will be O(n^4).
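Here is a direct Python sketch of the pseudocode above (without the integral-image speedup). It assumes the grid is a list of lists where a truthy value means "this tile needs to be covered", as in the sample input, and it returns rectangles in the (x, y, width, height) format described in the question:

def cover_with_rectangles(grid):
    # Greedy cover of all truthy cells with non-overlapping axis-aligned rectangles.
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    rects = []
    for y in range(rows):
        for x in range(cols):
            if not grid[y][x] or visited[y][x]:
                continue
            # (x, y) is the top-left corner of a new rectangle; expand right first.
            w = 1
            while x + w < cols and grid[y][x + w] and not visited[y][x + w]:
                w += 1
            # Then expand down as long as the whole row of width w is still fillable.
            h = 1
            while y + h < rows and all(grid[y + h][x + i] and not visited[y + h][x + i]
                                       for i in range(w)):
                h += 1
            # Mark the rectangle's tiles as used and record it.
            for dy in range(h):
                for dx in range(w):
                    visited[y + dy][x + dx] = True
            rects.append((x, y, w, h))
    return rects

# Tiny example: 1 means "needs a collision rectangle".
sample = [
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
print(cover_with_rectangles(sample))   # [(0, 0, 2, 2), (3, 0, 1, 3)]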
I want to place some points in a rectangle randomly.
Generating random x, y coordinates is not a good idea, because it often happens that the points end up concentrated in one area instead of covering the whole rectangle.
I don't need an incredibly fast algorithm or the best possible coverage, just something that could run in a simple game and generate random (x, y) points that cover almost the whole rectangle.
In my particular case I'm trying to generate a simple sky, so the idea is to place about 40-50 stars in the sky rectangle.
Could someone point me to a common algorithm for doing that?
There are a number of algorithms to pseudo-randomly fill a 2D plane. One of them is Poisson disk sampling, which places the samples randomly but then checks that no two of them are too close. The result would look something like this:
You can find articles describing this algorithm, and even some implementations are available.
The problem, though, is that the resulting distribution looks nothing like the actual stars in the sky. But it is a good tool to start with: by controlling the Poisson radius we can create very natural-looking patterns. For example, in this article they use Perlin noise to control the radius of the Poisson disk sampling:
You would also want to adjust the brightness of the stars, but you can experiment with uniform random values or Perlin noise.
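For a few dozen stars, even a naive rejection ("dart throwing") version of Poisson disk sampling is plenty; Bridson's algorithm is the usual faster choice, but this sketch keeps it short (the radius and counts are placeholders to tune):

import random

def poisson_darts(width, height, radius, n_points, max_tries=10000):
    # Place up to n_points random points that are at least `radius` apart,
    # by rejecting candidates that fall too close to an existing point.
    points = []
    tries = 0
    while len(points) < n_points and tries < max_tries:
        tries += 1
        x, y = random.uniform(0, width), random.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= radius ** 2 for px, py in points):
            points.append((x, y))
    return points

stars = poisson_darts(800, 600, radius=60, n_points=50)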
For one game I once used a completely different approach. I took the real positions of the stars in a Cartesian coordinate system from the HYG database by David Nash and transformed them to my viewpoint. With this approach you can even create the exact view of the sky that can be seen from where you are on Earth.
I once showed this database to the girl I wanted to date, saying "I want to show you the stars… in cartesian coordinate system".
Upd. It’s been over seven years now and we are still together.
Just some ideas which might make your cover appear "more uniform". These approaches don't necessarily provide an efficient way to generate a truly uniform cover, but they might be good enough and are worth looking at in your case.
First, you can divide the original rectangle into 4 (or 10, or 100, as long as performance allows) subrectangles and cover those subrectangles separately with random points. By doing so you make sure that no subrectangle is left uncovered. You can generate the same number of points for each subrectangle, or you can vary the number of points from one subrectangle to another. For example, for each subrectangle you can first generate a random number num_points_in_subrectangle (drawn from a uniform distribution on some interval [lower, upper]) and then fill the subrectangle with that many random points. That way each subrectangle contains a random number of points and the result will probably look less "programmatically generated".
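A small Python sketch of that first idea (the cell counts and per-cell point range below are arbitrary placeholders):

import random

def jittered_points(width, height, cols, rows, lower, upper):
    # Split the rectangle into cols x rows subrectangles and drop a random
    # number of points (between lower and upper) into each one.
    cell_w, cell_h = width / cols, height / rows
    points = []
    for i in range(cols):
        for j in range(rows):
            for _ in range(random.randint(lower, upper)):
                x = random.uniform(i * cell_w, (i + 1) * cell_w)
                y = random.uniform(j * cell_h, (j + 1) * cell_h)
                points.append((x, y))
    return points

stars = jittered_points(800, 600, cols=5, rows=4, lower=2, upper=3)   # 40-60 points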
Another thing you can try is to generate random points inside the original rectangle and, for each candidate point, check whether there is already a point within some radius R. If there is such a point, reject the candidate and generate a new one. Again, you can vary the radius from one point to another by making R a random variable.
Finally, you can combine several approaches. Say you want n points in total. First, divide the original rectangle into subrectangles and cover them so that they contain n / 3 points in total. Then generate the next n / 3 points by picking random points inside the original rectangle without any restrictions. After this, generate the last n / 3 points randomly, with checks for neighbours within the radius.
With uniformly drawn x, y coordinates and 40 points, the probability that all of them land in one given half of the rectangle is (1/2)^40, roughly 9 x 10^-13 (about one in a trillion).
What I want to achieve is a transition between two image files. The pixels from image A move and rearrange themselves to form image B. Imagine a cloud of particles (made from image A's pixels) that forms into picture B.
So far I have thought of going through all the pixels in image A and comparing them to the pixels in image B; the pixels that are the most similar are taken out of the arrays (together with their x, y coordinates) and put into another array. So, in the end, I have pairs of similar pixels from both images. Then I only have to create the animation and possibly do some color balancing (obviously not all pairs will consist of identical pixels), which is fairly easy.
The problem is the algorithm that finds the pixel pairs. For a small 100px x 100px image it would take 50,005,000 comparisons; for larger images it would be impossible.
Divide the pictures into clusters? Any ideas will be appreciated.
I'd say that you're likely to achieve the best result by matching up pixels by hue first, then saturation, and finally luminance. If I'm right, then your best bet for optimization would be to convert to HSV first. Once there, you can just sort your pixels and binary-search the results to find your pairs.
You may additionally want to search a fixed window around the result you find, to match up pixels that are the least distance away from each other. That may make the resulting transition more coherent.
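A rough sketch of the sort-and-pair idea in Python, assuming both images have the same number of pixels and each pixel is given as (x, y, (r, g, b)); sorting both lists in HSV order and zipping them is an O(n log n) stand-in for the binary search:

import colorsys

def pair_pixels(pixels_a, pixels_b):
    # Pair the i-th pixel of A in HSV order with the i-th pixel of B in HSV order.
    def hsv_key(item):
        r, g, b = item[2]
        return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)  # (hue, sat, value)
    a_sorted = sorted(pixels_a, key=hsv_key)
    b_sorted = sorted(pixels_b, key=hsv_key)
    # Each pair is ((xa, ya, rgb_a), (xb, yb, rgb_b)): animate pixel a moving to pixel b.
    return list(zip(a_sorted, b_sorted))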
You may want to take a look at the Hungarian algorithm, which treats the 10,000 pixels of a 100x100 image as the elements to be matched and finds the optimal assignment in O(n^3) time. Basically, you give each pixel combination a "cost" based on similarity and then send the (inverted) cost matrix through the algorithm to get the optimal assignment of pixels from A to pixels from B.
But it still might be too much computation for too little gain, depending on whether you need real time. That is, this kind of work doesn't necessarily need an optimal match, just one that's good enough; still, it may serve as a starting point for finding less computationally intensive methods.
See the bottom of the linked article for implementations in various languages; it's not entirely trivial to implement.
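If you downsample the images enough for an n x n cost matrix to be feasible, SciPy's linear_sum_assignment solves the same assignment problem for you; a rough sketch (pixels_a and pixels_b are assumed to be (n, 3) arrays of RGB values):

import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_pairs(pixels_a, pixels_b):
    # Build the n x n color-distance matrix; since smaller distance = better match,
    # it can be used directly as the cost matrix (no inversion needed).
    diff = pixels_a[:, None, :].astype(float) - pixels_b[None, :, :].astype(float)
    cost = np.sqrt((diff ** 2).sum(axis=2))
    rows, cols = linear_sum_assignment(cost)   # pixel rows[i] of A is matched to cols[i] of B
    return list(zip(rows, cols))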
I have a set of points contained within a rectangle. I'd like to split the rectangle into subrectangles based on point density (specifying either a number of subrectangles or a desired density, whichever is easier).
The partitioning doesn't have to be exact (almost any approximation better than a regular grid will do), but the algorithm has to cope with a large number of points, approximately 200 million. The desired number of subrectangles, however, is substantially lower (around 1000).
Does anyone know any algorithm which may help me with this particular task?
Just to understand the problem.
The following is crude and performs badly, but I want to know if the result is what you want:
Assumption: the number of rectangles is even
Assumption: the point distribution is markedly 2D (no big accumulation along one line)
Procedure:
Bisect n/2 times in either axis, looping from one end to the other of each previously determined rectangle, counting "passed" points and storing the number of passed points at each iteration. Once counted, bisect the rectangle, choosing the split position by the points counted in each loop.
Is that what you want to achieve?
I think I'd start with the following, which is close to what @belisarius already proposed. If you have any additional requirements, such as preferring 'nearly square' rectangles to 'long and thin' ones, you'll need to modify this naive approach. I'll assume, for the sake of simplicity, that the points are approximately randomly distributed.
1. Split your initial rectangle in 2 with a line parallel to the short side of the rectangle and running exactly through the mid-point.
2. Count the number of points in both half-rectangles. If they are equal (enough), go to step 4. Otherwise, go to step 3.
3. Based on the distribution of points between the half-rectangles, move the line to even things up again. So if, perchance, the first cut split the points 1/3 vs 2/3, move the line halfway into the heavy half of the rectangle. Go to step 2. (Be careful not to get trapped here, moving the line in ever-decreasing steps first in one direction, then the other.)
4. Now, pass each of the half-rectangles into a recursive call to this function, at step 1 (a rough sketch of the whole procedure follows this list).
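Here is a rough Python sketch of those steps; instead of nudging the line iteratively as in step 3, it cuts directly at the median coordinate of the points in the rectangle, which achieves the same balance in one pass (fine as long as you can afford to sort, or sort only a subsample):

def split_rectangle(points, rect, depth):
    # Recursively split rect = (xmin, ymin, xmax, ymax) so that each leaf holds
    # roughly the same number of points; produces 2**depth rectangles.
    if depth == 0 or len(points) < 2:
        return [rect]
    xmin, ymin, xmax, ymax = rect
    if xmax - xmin >= ymax - ymin:            # cut parallel to the short side
        coords = sorted(p[0] for p in points)
        cut = coords[len(coords) // 2]
        left = [p for p in points if p[0] < cut]
        right = [p for p in points if p[0] >= cut]
        return (split_rectangle(left, (xmin, ymin, cut, ymax), depth - 1) +
                split_rectangle(right, (cut, ymin, xmax, ymax), depth - 1))
    else:
        coords = sorted(p[1] for p in points)
        cut = coords[len(coords) // 2]
        low = [p for p in points if p[1] < cut]
        high = [p for p in points if p[1] >= cut]
        return (split_rectangle(low, (xmin, ymin, xmax, cut), depth - 1) +
                split_rectangle(high, (xmin, cut, xmax, ymax), depth - 1))

# Example: 10 levels of splitting gives 1024 rectangles.
# leaves = split_rectangle(points, (0.0, 0.0, 1.0, 1.0), depth=10)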
I hope that outlines the proposal well enough. It has limitations: it will produce a number of rectangles equal to some power of 2, so adjust it if that's not good enough. I've phrased it recursively, but it's ideal for parallelisation. Each split creates two tasks, each of which splits a rectangle and creates two more tasks.
If you don't like that approach, perhaps you could start with a regular grid with some multiple (10 - 100 perhaps) of the number of rectangles you want. Count the number of points in each of these tiny rectangles. Then start gluing the tiny rectangles together until the less-tiny rectangle contains (approximately) the right number of points. Or, if it satisfies your requirements well enough, you could use this as a discretisation method and integrate it with my first approach, but only place the cutting lines along the boundaries of the tiny rectangles. This would probably be much quicker as you'd only have to count the points in each tiny rectangle once.
I haven't really thought about the running time of either of these; I have a preference for the former approach 'cos I do a fair amount of parallel programming and have oodles of processors.
You're after a standard Kd-tree or binary space partitioning tree, I think. (You can look it up on Wikipedia.)
Since you have very many points, you may wish to only approximately partition the first few levels. In this case, you should take a random sample of your 200M points--maybe 200k of them--and split the full data set at the midpoint of the subsample (along whichever axis is longer). If you actually choose the points at random, the probability that you'll miss a huge cluster of points that need to be subdivided will be approximately zero.
Now you have two problems of about 100M points each. Divide each along the longer axis. Repeat until the pieces are small enough that you can stop taking subsamples and split along the whole data set. After about ten breadth-first iterations you'll be done (2^10 = 1024 rectangles, close to the ~1000 you want).
If you have a different problem--you must provide tick marks along the X and Y axes and fill in a grid along those as best you can, rather than having the irregular decomposition of a Kd-tree--take your subsample of points and find the 0/32, 1/32, ..., 32/32 percentiles along each axis. Draw your grid lines there, then fill the resulting 1024-element grid with your points.
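A small NumPy sketch of that last variant, with the subsample size and the 32 bins mentioned above as tunable parameters:

import numpy as np

def percentile_grid(points, sample_size=200_000, bins=32):
    # points: (n, 2) array. Grid lines are placed at the 0/32 .. 32/32 percentiles
    # of a random subsample; every point is then bucketed into the bins x bins grid.
    idx = np.random.choice(len(points), size=min(sample_size, len(points)), replace=False)
    sample = points[idx]
    q = np.linspace(0, 100, bins + 1)
    x_edges = np.percentile(sample[:, 0], q)
    y_edges = np.percentile(sample[:, 1], q)
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[x_edges, y_edges])
    return x_edges, y_edges, counts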
R-tree
Good question.
I think the area you need to investigate is "computational geometry" and the "k-partitioning" problem. There's a link that might help get you started here
You might find that the problem itself is NP-hard which means a good approximation algorithm is the best you're going to get.
Would K-means clustering or a Voronoi diagram be a good fit for the problem you are trying to solve?
That looks like cluster analysis.
Would a QuadTree work?
A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are most often used to partition a two dimensional space by recursively subdividing it into four quadrants or regions. The regions may be square or rectangular, or may have arbitrary shapes. This data structure was named a quadtree by Raphael Finkel and J.L. Bentley in 1974. A similar partitioning is also known as a Q-tree. All forms of Quadtrees share some common features:
They decompose space into adaptable cells
Each cell (or bucket) has a maximum capacity. When maximum capacity is reached, the bucket splits
The tree directory follows the spatial decomposition of the Quadtree
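If you want to experiment with that, here is a minimal point-quadtree sketch (the bucket capacity is a tuning knob; note that cells always split into four equal quadrants, so you end up with roughly a power-of-four number of leaves rather than exactly 1000):

class QuadTree:
    # Each node covers the half-open box [x, x+w) x [y, y+h) and splits into
    # four children once it holds more than `capacity` points.
    def __init__(self, x, y, w, h, capacity=64):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.w and self.y <= py < self.y + self.h):
            return False                      # point lies outside this cell
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > self.capacity:
                self._split()
            return True
        return any(child.insert(px, py) for child in self.children)

    def _split(self):
        hw, hh = self.w / 2, self.h / 2
        self.children = [QuadTree(self.x, self.y, hw, hh, self.capacity),
                         QuadTree(self.x + hw, self.y, hw, hh, self.capacity),
                         QuadTree(self.x, self.y + hh, hw, hh, self.capacity),
                         QuadTree(self.x + hw, self.y + hh, hw, hh, self.capacity)]
        for px, py in self.points:            # push the buffered points down
            any(child.insert(px, py) for child in self.children)
        self.points = []

    def leaves(self):
        # Yield (x, y, w, h, points) for every leaf cell: these are the subrectangles.
        if self.children is None:
            yield (self.x, self.y, self.w, self.h, self.points)
        else:
            for child in self.children:
                yield from child.leaves()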
I am looking for an algorithm as follows:
Given a set of possibly overlapping rectangles (all of which are "not rotated" and can be uniformly represented as (left, top, right, bottom) tuples, etc.), it returns a minimal set of (non-rotated) non-overlapping rectangles that occupy the same area.
It seems simple enough at first glance, but it proves to be tricky (at least to do efficiently).
Are there some known methods for this/ideas/pointers?
Methods that produce not necessarily minimal but heuristically small sets are interesting as well, as are methods that produce any valid output set at all.
Something based on a line-sweep algorithm would work, I think:
Sort all of your rectangles' min and max x coordinates into an array, as "start-rectangle" and "end-rectangle" events
Step through the array, adding each new rectangle encountered (start-event) into a current set
Simultaneously, maintain a set of "non-overlapping rectangles" that will be your output set
Any time you encounter a new rectangle you can check whether it's completely contained already in the current / output set (simple comparisons of y-coordinates will suffice)
If it isn't, add a new rectangle to your output set, with y-coordinates set to the part of the new rectangle that isn't already covered.
Any time you hit a rectangle end-event, stop any rectangles in your output set that aren't covering anything anymore.
I'm not completely sure this covers everything, but I think with some tweaking it should get the job done. Or at least give you some ideas... :)
So, if I were trying to do this, the first thing I'd do is come up with a unified grid space. Find all unique x and y coordinates, and create a mapping to an index space. So if you have x values { -1, 1.5, 3.1 } then map those to { 0, 1, 2 }, and likewise for y. Then every rectangle can be exactly represented with these packed integer coordinates.
Then I'd allocate a bitvector or something that covers the entire grid, and rasterize your rectangles in the grid. The nice thing about having a grid is that it's really easy to work with, and by limiting it to unique x and y coordinates it's minimal and exact.
One way to come up with a pretty quick solution is to just dump every 'pixel' of your grid, run them back through your mapping, and you're done. If you're looking for a more optimal number of rectangles, then you've got some sort of search problem on your hands.
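As a sketch of that "unified grid, then dump every covered cell" version in Python (assuming left < right and top < bottom numerically; the output is valid but deliberately not minimal):

def decompose(rects):
    # Coordinate-compress, rasterize, and emit every covered grid cell
    # as its own (left, top, right, bottom) rectangle.
    xs = sorted({v for l, t, r, b in rects for v in (l, r)})
    ys = sorted({v for l, t, r, b in rects for v in (t, b)})
    xi = {v: i for i, v in enumerate(xs)}
    yi = {v: i for i, v in enumerate(ys)}

    covered = [[False] * (len(xs) - 1) for _ in range(len(ys) - 1)]
    for l, t, r, b in rects:
        for j in range(yi[t], yi[b]):
            for i in range(xi[l], xi[r]):
                covered[j][i] = True

    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
            for j in range(len(ys) - 1)
            for i in range(len(xs) - 1)
            if covered[j][i]]

print(decompose([(0, 0, 4, 4), (2, 2, 6, 6)]))   # 7 disjoint cells covering the union

Merging runs of adjacent cells in each row, or applying the concave-corner search described next, would then reduce the count.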
Let's look at 4 neighboring pixels, a little 2x2 square. When I write algorithms like these, typically I think in terms of verts, edges, and faces. So, these are the faces around a vert. If 3 of them are on and 1 is off, then you've got a concave corner. This is the only invalid case. For example, if I don't have any concave corners, I just grab the extents and dump the whole thing as a single rectangle. For each concave corner, you need to decide whether to split horizontally, vertically, or both. I think of the splitting as marking edges not to cross when finding extents. You could also do it as coloring into sets, whatever is easier for you.
The concave corners and their split directions are your search space.. you can use whatever optimization algorithm you'd like. Branch/bound might work well. I bet you could find a simple heuristic that performs well (for example, if there's another concave corner directly across from the one you're considering, always split in that direction. Otherwise, split in the shorter direction). You could just go greedy. Or you could just split every concave vert in both directions, which would generally give you fewer rectangles than outputting every 'pixel' as a rect, and would be pretty simple.
Reading over this I realize that there may be areas that are unclear. Let me know if you want me to clarify anything.