Algorithm to minimize vertex distances - Dwarf Fortress

I play the game Dwarf Fortress, and the main challenge for me is designing the layout of the fortress efficiently: each industry flow should be as dense as possible, to minimize travel distances.
An example could be the food industry. Each grey ellipse represents a single building; each white rectangle represents a product of that building.
My goal is to find an algorithm that distributes the buildings on a 2D grid so that the distance between connected buildings is minimal. That is, the fishery and the loom can be far apart, but the loom and the farmer's workshop should be as close as possible.
At the moment I am considering using some ready-made software to simulate the layout, but tips for an algorithm would be welcome.
Currently I'm leaning towards a force-directed algorithm, but I'm not sure how to satisfy the discrete-grid requirement.
Formalization of the question: Is there a force-directed graph drawing algorithm which works in discrete coordinates?
UPDATE: I have found an implementation of the force-directed algorithm in AS3 (the web contains a JS version too). I will try to convert it to a discrete version, but I have some doubts it will work...
UPDATE2: Some further restrictions were requested in comments. Here they are:
Each building occupies a single cell on a virtual grid. Buildings can be on adjacent cells, but they cannot stack or overlap.
(PS: In the game, each building has a defined size, usually 3x3, but I want to keep the problem more general, to allow for more approaches.)
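For reference, here is a minimal Python sketch of the grid-snapped force-directed pass mentioned in the first update. The node ids, edge list and grid size are hypothetical; forces act in continuous coordinates, and positions are only rounded to cells at the very end, nudging collisions apart. This is an illustration of the idea, not a tested layout engine:

import random

def discrete_force_layout(nodes, edges, size, iters=300):
    # Continuous force-directed layout, snapped to an integer grid at the end.
    # nodes: list of ids; edges: (a, b) pairs; size: side of the square grid.
    # Assumes size * size is comfortably larger than len(nodes).
    pos = {n: [random.uniform(0, size - 1), random.uniform(0, size - 1)]
           for n in nodes}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for i, a in enumerate(nodes):           # pairwise repulsion
            for b in nodes[i + 1:]:
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 0.01
                f = 4.0 / d2
                force[a][0] += f * dx; force[a][1] += f * dy
                force[b][0] -= f * dx; force[b][1] -= f * dy
        for a, b in edges:                      # spring attraction along edges
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += 0.05 * dx; force[a][1] += 0.05 * dy
            force[b][0] -= 0.05 * dx; force[b][1] -= 0.05 * dy
        for n in nodes:                         # apply forces, clamped to grid
            pos[n][0] = min(max(pos[n][0] + force[n][0], 0), size - 1)
            pos[n][1] = min(max(pos[n][1] + force[n][1], 0), size - 1)
    taken, grid = set(), {}
    for n in nodes:                             # snap, nudging collisions apart
        cell = (round(pos[n][0]), round(pos[n][1]))
        while cell in taken:
            cell = ((cell[0] + random.randint(-1, 1)) % size,
                    (cell[1] + random.randint(-1, 1)) % size)
        taken.add(cell)
        grid[n] = cell
    return grid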

You are pretty much trying to solve an instance of the floor-planning problem, where you are trying to minimize the total "connection" length. Most of these problems are instances of NP-hard problems, though some of them have pseudo-polynomial-time algorithms.
There is a special case you might be interested in that is actually solvable in polynomial time: when the relative positions of the "boxes" (buildings) you want to place are known ahead of time.
For full details on how to solve this particular case, refer to this tutorial on geometric programming from Stanford, chapter 6, section 6.1, the first example, entitled "Floor planning." Another website also includes MATLAB code that implements and solves the problem (under chapter 8, Geometric Programming).
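To give a feel for that special case, here is a toy geometric-programming sketch in Python using CVXPY (my choice of tool; the two-room instance and its minimum areas are made up, and the Stanford tutorial covers the real formulation). Two rooms sit side by side with their relative positions fixed in advance, and the bounding-box area is minimized:

import cvxpy as cp

# widths/heights of two rooms placed side by side; areas are arbitrary examples
w1, h1 = cp.Variable(pos=True), cp.Variable(pos=True)
w2, h2 = cp.Variable(pos=True), cp.Variable(pos=True)
W, H = cp.Variable(pos=True), cp.Variable(pos=True)

constraints = [
    w1 * h1 >= 4.0,   # room 1 must have at least this area
    w2 * h2 >= 9.0,   # room 2 likewise
    w1 + w2 <= W,     # rooms sit side by side inside the bounding box
    h1 <= H,
    h2 <= H,
]
problem = cp.Problem(cp.Minimize(W * H), constraints)
problem.solve(gp=True)   # solve as a geometric program
print(W.value, H.value)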

So I've managed to write some code which approximates a solution to this problem. It's not a top-class product, but it works. I plan to make some updates over time, but I don't have any time frame set.
The source code is here: https://github.com/sutr90/DF-Layout
My code uses a simulated annealing approach, where the cost function is based on total area, total edge length, and overlap. To measure distance I use the taxicab metric, but that is subject to change.
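For the curious, the core loop looks roughly like the following Python sketch. It is simplified to the edge-length term only, with overlap avoided outright by swapping cell occupants instead of penalizing it; the building ids, edge list and grid size are hypothetical:

import math
import random

def anneal_layout(buildings, edges, size, steps=20000):
    # Simulated annealing on a size x size grid, one building per cell.
    # Cost here is just total taxicab edge length; swaps keep cells unique.
    cells = random.sample([(x, y) for x in range(size) for y in range(size)],
                          len(buildings))
    pos = dict(zip(buildings, cells))

    def cost(p):
        return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1])
                   for a, b in edges)

    cur = cost(pos)
    for step in range(steps):
        t = max(1e-3, 1.0 - step / steps)        # linear cooling schedule
        a = random.choice(buildings)
        old = pos[a]
        new_cell = (random.randrange(size), random.randrange(size))
        occupant = next((b for b, c in pos.items() if c == new_cell), None)
        pos[a] = new_cell                        # move, swapping any occupant
        if occupant is not None and occupant != a:
            pos[occupant] = old
        new = cost(pos)
        if new > cur and random.random() >= math.exp((cur - new) / t):
            pos[a] = old                         # worse and unlucky: undo
            if occupant is not None and occupant != a:
                pos[occupant] = new_cell
        else:
            cur = new
    return pos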

Related

Finding correspondence of edges for image matching

I have a challenging problem to solve. The figure shows green lines that are derived from one image; the red lines are edges derived from another image. Both images are taken with the same camera, so the intrinsic parameters are the same; only the exterior parameters differ, i.e. there is a slight rotation and translation when taking the second image. As can be seen in the figure, the two sets of lines are pretty close. My task is to find the correspondence between the edges derived from the first image and the edges derived from the second image.
I have gone through a few sources that suggest matching each edge to the nearest line segment, by calculating Euclidean distances between the endpoints of an edge in image 1 and the edges of image 2. However, this method is not acceptable in my case: there are edges in image 1 that lie near non-corresponding edges in image 2, which would lead to a huge number of mismatches.
After a bit more research, a few sources referred to the Hausdorff distance. I believe this could really be a solution to my problem, and the paper Rucklidge, William J., "Efficiently locating objects using the Hausdorff distance," International Journal of Computer Vision 24.3 (1997): 251-270, seemed really interesting.
If I got it right, the paper formulates a function for calculating the translation of model edges onto image edges. However, when implementing it in MATLAB I'm completely lost as to where to begin. I would be much obliged if someone could direct me to pseudocode for the algorithm or a MATLAB implementation of it.
Additionally, I am aware of "Apply Hausdorff distance to tile image classification" (link) and "Hausdorff regression". However, I'm still unsure how to minimise the Hausdorff distance.
Note 1: Computational cost is not a concern for now, but a faster algorithm is preferred.
Note 2: I am open to other algorithms and methods to solve this, as long as there is pseudocode available or an open implementation.
Have you considered MATLAB's image registration tools?
With imregister (https://www.mathworks.com/help/images/ref/imregister.html), you can just insert both images, one as the reference and one as the "moving" image, and it will register them using an affine transform. The function call is just:
% default optimizer and metric for images from the same sensor
[optimizer, metric] = imregconfig('monomodal');
% align 'moving' onto 'fixed' using an affine transform
output_registered = imregister(moving, fixed, 'affine', optimizer, metric);
For better visualization, use the RegistrationEstimator command to open a GUI in which you can import the two images and play around with registering them. From there you can export code for future images.
Furthermore, if you wish to account for non-rigid transforms, there is imregdemons (https://www.mathworks.com/help/images/ref/imregdemons.html), which works in much the same way.
You can compute the Hausdorff distance using MATLAB's bwdist function: compute the distance transform of one image, evaluate it at the edge points of the other, and take the maximum value. (If you take the sum instead, it is called the chamfer distance.) For this problem you'll probably want the symmetric Hausdorff distance, so you would do the computation in both directions.
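If it helps, here is the same computation sketched in Python with SciPy, with distance_transform_edt playing the role of MATLAB's bwdist; edges_a and edges_b are assumed to be boolean edge maps of equal shape:

from scipy.ndimage import distance_transform_edt

def edge_distances(edges_a, edges_b):
    # Distance transform of one edge map, sampled at the other map's edge pixels.
    def directed(src, dst):
        dt = distance_transform_edt(~dst)    # distance to the nearest dst edge
        vals = dt[src]                       # evaluated at the src edge pixels
        return vals.max(), vals.sum()        # (Hausdorff, chamfer), one direction
    h_ab, c_ab = directed(edges_a, edges_b)
    h_ba, c_ba = directed(edges_b, edges_a)
    return max(h_ab, h_ba), c_ab + c_ba      # symmetric versions of both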
Both the Hausdorff and chamfer distances measure the match quality of a particular alignment. To find the best registration you'll need to try multiple alignment transformations and evaluate them all, looking for the best one. As suggested in another answer, you may find it easier to use existing registration tools than to write your own.

Algorithm for finding optimal zone coverage

I have a few points randomly distributed over a 2D-map. I also have a finite number of circles that I want to place so they cover as many of the points as possible, kind of like a turret-game AI that places turrets in a base to protect valuable buildings. Is there any good way to do this?
What you are describing sounds like a form of the maximum coverage problem. One simple way to approximate it is the greedy algorithm: place the first circle so that it covers as many points as possible, then place the second circle so that it covers as many of the remaining points as possible, and so on.
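A minimal Python sketch of that greedy loop, assuming circle centres are restricted to the input points themselves (the true optimum may place centres elsewhere, e.g. between two points, so this is a further simplification):

def greedy_circles(points, k, radius):
    # Greedy maximum coverage: each circle is centred on the candidate point
    # covering the most still-uncovered points.
    r2 = radius * radius
    uncovered = set(range(len(points)))
    centers = []
    for _ in range(k):
        best, best_cov = None, set()
        for cx, cy in points:
            cov = {i for i in uncovered
                   if (points[i][0] - cx) ** 2 + (points[i][1] - cy) ** 2 <= r2}
            if len(cov) > len(best_cov):
                best, best_cov = (cx, cy), cov
        if best is None:          # nothing left to cover
            break
        centers.append(best)
        uncovered -= best_cov
    return centers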

Reduce number of nodes in 3D A* pathfinding using (part of a) uniform grid representation

I am calculating paths inside a mesh around which I have built a uniform grid. The nodes (cells in the 3D grid) close to what I deem a "standable" surface are marked as accessible and are used in my pathfinding. To get a lot of detail (like being able to pathfind up small staircases), the number of accessible cells in my grid has grown quite large, several thousand in larger buildings (every grid cell is 0.5x0.5x0.5 m and the meshes are rooms with real-world dimensions). Even though I only use a fraction of the actual cells in my grid for pathfinding, the huge number slows the algorithm down. Other than that it works fine and finds the correct path through the mesh, using a weighted Manhattan distance heuristic.
Imagine the mesh sitting inside such a grid (there can be more or fewer cubes, but the grid is always cubical); the pathfinding is not calculated over all the small cubes, just the few marked as accessible (usually at the bottom of the grid, though that depends on how many floors the mesh has).
I am looking to reduce the search space for the pathfinding. I have looked at clustering, like how HPA* does it, and at other clustering algorithms like Markov clustering, but they all seem best suited to node graphs rather than grids. One obvious solution would be to increase the size of the small cubes building the grid, but then I would lose a lot of detail in the pathfinding and it would not be as robust. How could I cluster these small cubes? In a typical search space (blue cells accessible, green the path) there are a lot of cubes to search through, because the distance between them is quite small.
Never mind that the grid is a suboptimal representation for pathfinding for now.
Does anyone have an idea how to reduce the number of cubes in the grid I have to search through, and how would I access the neighbours after I reduce the space? :) Right now it only looks at the closest neighbours while expanding the search space.
A couple of possibilities come to mind.
Higher-level Pathfinding
The first is that your A* search may be searching the entire problem space. For example, you live in Austin, Texas, and want to get into a particular building somewhere in Alberta, Canada. A simple A* algorithm would search a lot of Mexico and the USA before finally searching Canada for the building.
Consider creating a second layer of A* to solve this problem. You'd first find out which states to travel between to get to Canada, then which provinces to reach Alberta, then Calgary, and then the Calgary Zoo, for example. In a sense, you start with an overview, then fill it in with more detailed paths.
If you have enormous levels, such as Skyrim's, you may need to add pathfinding layers between towns (multiple buildings), regions (multiple towns), and even countries (multiple regions). If you were making a GPS system, you might even need continents. If we become interstellar, our spaceships might contain pathfinding layers for planets, sectors, and even galaxies.
By using layers, you help narrow down your search area significantly, especially if different areas don't use the same coordinate system! (It's fairly hard to estimate distance for one A* pathfinder if one region needs latitude-longitude, another 3D Cartesian coordinates, and the next requires pathfinding through a time dimension.)
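To make the layering idea concrete, here is a rough two-level Python sketch for a 3D cell grid. The region size K and the set-based grid representation are assumptions, and if the fine search fails because a coarse region is internally blocked, the corridor would need to be widened:

import heapq

def astar(start, goal, neighbors, h):
    # Textbook A* over an implicit graph; neighbors(n) yields (next, cost).
    g, came = {start: 0.0}, {}
    open_heap = [(h(start), start)]
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came:
                node = came[node]
                path.append(node)
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g[node] + cost
            if ng < g.get(nxt, float("inf")):
                g[nxt], came[nxt] = ng, node
                heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None

DIRS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

def two_level_path(cells, start, goal, K=8):
    # Coarse A* over K x K x K regions, then fine A* restricted to the
    # corridor of regions lying on the coarse path.
    region = lambda c: (c[0] // K, c[1] // K, c[2] // K)
    regions = {region(c) for c in cells}
    manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

    def region_neighbors(r):
        for d in DIRS:
            nr = (r[0] + d[0], r[1] + d[1], r[2] + d[2])
            if nr in regions:
                yield nr, 1.0

    coarse = astar(region(start), region(goal), region_neighbors,
                   lambda r: manhattan(r, region(goal)))
    corridor = set(coarse) if coarse else regions   # fall back to everything

    def cell_neighbors(c):
        for d in DIRS:
            n = (c[0] + d[0], c[1] + d[1], c[2] + d[2])
            if n in cells and region(n) in corridor:
                yield n, 1.0

    return astar(start, goal, cell_neighbors, lambda c: manhattan(c, goal))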
More efficient algorithms
Finding efficient algorithms becomes more important in three dimensions because there are more nodes to expand while searching: a Dijkstra search that expands on the order of x^2 nodes in 2D expands on the order of x^3 in 3D, with x being the distance between the start and the goal. A 4D game would require yet more efficient pathfinding.
One of the benefits of grid-based pathfinding is that you can exploit topographical properties like path symmetry. If two paths consist of the same movements in a different order, you don't need to find both of them. This is where a very efficient algorithm called Jump Point Search comes into play.
Here is a side-by-side comparison of A* (left) and JPS (right). Expanded/searched nodes are shown in red with walls in black:
Notice that they both find the same path, but JPS easily searched less than a tenth of what A* did.
As of now, I haven't seen an official 3-dimensional implementation, but I've helped another user generalize the algorithm to multiple dimensions.
Simplified Meshes (Graphs)
Another way to get rid of nodes during the search is to remove them before the search. For example, do you really need nodes in wide-open areas where you can trust a much more stupid AI to find its way? If you are building levels that don't change, create a script that parses them into the simplest grid which only contains important nodes.
This is actually called 'offline pathfinding'; basically finding ways to calculate paths before you need to find them. If your level will remain the same, running the script for a few minutes each time you update the level will easily cut 90% of the time you pathfind. After all, you've done most of the work before it became urgent. It's like trying to find your way around a new city compared to one you grew up in; knowing the landmarks means you don't really need a map.
Similar approaches to the 'symmetry-breaking' that Jump Point Search uses were introduced by Daniel Harabor, the creator of the algorithm. They are mentioned in one of his lectures, and allow you to preprocess the level to store only jump-points in your pathfinding mesh.
Clever Heuristics
Many academic papers state A*'s cost function as f(x) = g(x) + h(x), which doesn't make it obvious that you may use other functions, weight the component functions, and even implement heatmaps of territory or recent deaths as cost functions. These may create sub-optimal paths, but they greatly improve the intelligence of your search. Who cares about the shortest path when your opponent has a choke point on it and has been easily dispatching anybody travelling through it? Better to be certain the AI can reach the goal safely than to let it be stupid.
For example, you may want to prevent the algorithm from letting enemies access secret areas, so that they avoid revealing them to the player and the AI seems to be unaware of them. All you need to achieve this is a uniformly high cost for any point within those "off-limits" regions. In a game like this, enemies would simply give up hunting the player once the path grew too costly. Another cool option is to "scent" regions the player has visited recently (by temporarily increasing the cost of unvisited locations, because many algorithms dislike negative costs).
If you know what places you won't need to search, but can't implement in your algorithm's logic, a simple increase to their cost will prevent unnecessary searching. There's a lot of ways to take advantage of heuristics to simplify and inform your pathfinding, but your biggest gains will come from Jump Point Search.
EDIT: Jump Point Search implicitly selects pathfinding direction using the same heuristics as A*, so you may be able to implement heuristics to a small degree, but their cost function won't be the cost of a node, but rather, the cost of traveling between the two nodes. (A* generally searches adjacent nodes, so the distinction between a node's cost and the cost of traveling to it tends to break down.)
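As a small illustration of the cost shaping described above (the danger heatmap and the weight values are made up; these would be plugged into an ordinary A* step cost and heuristic):

def step_cost(base_cost, danger, neighbor):
    # Cost of stepping into 'neighbor', inflated by a danger/choke-point heatmap.
    return base_cost[neighbor] + danger.get(neighbor, 0.0)

def weighted_manhattan(node, goal, w=1.5):
    # w > 1 speeds up the search at the price of slightly longer paths;
    # w = 1 keeps the heuristic admissible and A* optimal.
    return w * sum(abs(a - b) for a, b in zip(node, goal))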
Summary
Although octrees/quadtrees/B-trees can be useful in collision detection, they aren't as applicable to searches because they partition a graph based on its coordinates, not on its connections. Layering your graph (mesh, in your vocabulary) into super-graphs (regions) is a more effective solution.
Hopefully I've covered everything you'll find useful.
Good luck!

Offline algorithm for simplifying 2d maps

I have a polygonal 2D map of an environment and I want to simplify this map for some planning tests.
For example, I want to close off areas that cannot be reached by the robot model because the passage is too small.
The second problem is that when I have two segments which are nearly parallel, I want to make them exactly parallel.
Can anyone tell me some algorithm names for these tasks, so that I know where to search?
Thank you for your help.
Hunk
The first task is usually solved by applying the Minkowski difference to your map and robot. This implies you know the profile of your robot.
For the second task a common approach is to use 2D snap rounding. You can find a lot of papers on that topic at scholar.google.com. It may also be helpful to take a look at the Ramer–Douglas–Peucker algorithm for reducing the number of points in a curve. It will not solve your problem by itself, but it is useful to know about.
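For reference, Ramer–Douglas–Peucker is only a few lines; a recursive Python sketch, where epsilon is the distance tolerance:

import math

def rdp(points, epsilon):
    # Drop points that lie within epsilon of the chord between the endpoints.
    if len(points) < 3:
        return points
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0
    dists = [abs(dy * (x - x0) - dx * (y - y0)) / norm for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        # keep the farthest point and simplify both halves recursively
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [points[0], points[-1]]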
Most likely your work is connected to motion planning, so I highly recommend reading Computational Geometry by de Berg, van Kreveld, Overmars and Schwarzkopf. It is a classic of computational geometry, and there you can find a lot about visibility graphs and motion planning.
Similarly to Mikhail's answer, for your first problem you can convert the map polygon into a binary image and apply a morphological dilation that takes the size of the robot into account. Then the areas separated by narrow passages become disconnected components in the binary image.
Another approach is to divide the space into a grid and mark cells as empty or full depending on whether some map line traverses them. Then thicken the boundaries according to the robot size, and look for connected components from the cell where the robot is in order to find feasible paths.
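A compact Python/SciPy sketch of the dilation idea from the first paragraph (occupancy is assumed to be a boolean obstacle grid, the robot is approximated by a disk of the given cell radius, and robot_cell is assumed to lie in free space):

import numpy as np
from scipy.ndimage import binary_dilation, label

def reachable_free_space(occupancy, robot_radius_cells, robot_cell):
    r = robot_radius_cells
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = xx * xx + yy * yy <= r * r                      # robot footprint
    inflated = binary_dilation(occupancy, structure=disk)  # grow obstacles
    labels, _ = label(~inflated)                 # connected free-space regions
    return labels == labels[robot_cell]          # cells the robot can reach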
You can use a Delaunay triangulation of the map and traverse the edges of the mesh for the shortest route.

Fill arbitrary 2D shape with given set of rectangles

I have a set of rectangles and an arbitrary shape in 2D space. The shape is not necessarily a polygon (it may be a circle), and the rectangles have different widths and heights. The task is to approximate the shape with the rectangles as closely as possible. I can't change the rectangles' dimensions, but rotation is permitted.
It sounds very similar to the packing problem and the covering problem, but here the area to cover is not rectangular...
I guess it's an NP-hard problem, and I'm pretty sure there are papers with good heuristics for it, but I don't know what to google. Where should I start?
Update: One idea just came to my mind, but I'm not sure whether it's worth investigating. What if we consider the bounding shape as a physical mold filled with water, and each rectangle as a positively charged particle with a size? Now drop the smallest rectangle into it, then drop the next by size at a random point. If rectangles get too close, they repel each other. Keep adding rectangles until all are used. Could this method work?
I think you could look for packing and automatic layout generation algorithms. Automatic VLSI layout generation needs similar things, as do textile layout problems...
The paper Hegedüs, "Algorithms for covering polygons by rectangles", seems to address a similar problem, and since it is from 1982, it might be interesting to look at the papers which cite it. Additionally, this meeting seems to discuss related research problems, so it might be a starting point for keywords or names of people who work on this topic.
I don't know whether computational geometry research has algorithms for your specific problem, or whether those algorithms are easy or practical enough to implement. Here is how I would approach it if I had to do it without being able to look up previous work. This is just a direction, by far not a solution...
Formulate it as an optimization problem. You have discrete variables (whether each rectangle is chosen: yes or no) and continuous variables (the location and orientation of each rectangle). Now you can set up two optimizations: a discrete optimization which picks the rectangles, and a continuous one that optimizes location and orientation once the rectangles are given. Interleave the two. Of course the difficulty lies in formulating the optimizations and designing your error energy so that it does not get stuck in strange configurations (local minima). I'd try to cast the continuous part as a least-squares problem so that I can use standard optimization libraries.
I think this problem is suitable for solving with a genetic algorithm and/or an evolutionary strategy algorithm. I've solved a similar box-packing problem with the help of an evolutionary strategy of some kind; check it out on my blog.
So if you use such an approach, encode each box into the chromosome as:
x coordinate
y coordinate
angle
Then try to minimize the following fitness function:
y = w1 * box_intersection_area
  + w2 * box_area_out_of_shape
  + w3 * average_circle_radius_in_free_space
Choose the weights w1, w2, w3 to reflect the importance of each factor. When the genetic algorithm finds a partial solution, remove the boxes which still overlap or lie outside the shape, and you will have at least a legal (though not necessarily optimal) solution.
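If it helps, here is one way the first two terms of that fitness could be estimated in Python via Monte Carlo sampling. The shape_contains predicate, the sampling bounds, and the omission of the third (free-space) term are all simplifications of mine:

import math
import random

def in_box(px, py, size, gene):
    # Point-in-rotated-rectangle test; gene = (x, y, angle), (x, y) the centre.
    (w, h), (x, y, a) = size, gene
    c, s = math.cos(-a), math.sin(-a)
    rx = c * (px - x) - s * (py - y)   # rotate the point into the box frame
    ry = s * (px - x) + c * (py - y)
    return abs(rx) <= w / 2 and abs(ry) <= h / 2

def fitness(sizes, genes, shape_contains, bounds, n=5000, w1=1.0, w2=1.0):
    # y ~= w1 * box_intersection_area + w2 * box_area_out_of_shape
    xmin, ymin, xmax, ymax = bounds
    cell_area = (xmax - xmin) * (ymax - ymin) / n
    overlap = outside = 0
    for _ in range(n):
        px, py = random.uniform(xmin, xmax), random.uniform(ymin, ymax)
        hits = sum(in_box(px, py, s, g) for s, g in zip(sizes, genes))
        if hits >= 2:
            overlap += 1                       # covered by two or more boxes
        if hits >= 1 and not shape_contains(px, py):
            outside += 1                       # box area spilling out of shape
    return w1 * overlap * cell_area + w2 * outside * cell_area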
Good luck with this interesting problem!
It is indeed NP-hard, and since it has high-tech applications, reasonably efficient approximate strategies are not even in patents, let alone published papers.
The best you can do on a limited budget is to start by limiting the problem. Assume that all rectangles are exactly the same. Also allow all rectangles which are binary subdivisions of your standard rectangle, since you can efficiently pre-pack them to fit your core division. For extra points you can also form several fixed schemas for gluing core rectangles together, to cover a few larger shapes with substantially different proportions. Assume that you can change the dimensions of your standard rectangle/cell as long as the rest (pre-packing and gluing schema) remains the same; this gives you parameters for deciding the approximate size of the core rectangle based on the rectangles you are given.
Now you can play with aspect ratios to estimate the error such a limited system could guarantee. For the first iterations assume that it can have 50% error with a simple subdivision schema, then change the schema to reduce the error without increasing the asymptotic complexity of pre-packing. At the end of the day you are always just assigning the given rectangles to your pre-calculated, now fixed, grid and its binary subdivisions, meaning you never attempt a full layout or backtracking at all: you are always happy with the first approximate fit into the grid.
Work on defining classes of rectangles that pack well with your schema. That again keeps the whole process inverted: you never try to actually fit whatever you are given; you define what you would have to be given in order to fit it well, and then you write off the rest as error, since it is an approximation.
Then you can try to do a bit more, but not much more; any slip into backtracking or chasing arbitrarily small error, and it's exponential.
If you are at a research facility and can get some supercomputer time, run a set of exhaustive searches with pathological mixes, just to see what optimal packings look like and whether you can derive a few more subdivision schemas and/or classes of rectangle sets.
That should be enough for the first two years of research :-)
