I have been trying to solve a 2D variant of this problem: Box stacking problem
The catch is that unlike the original, multiple instances of the same box are not allowed. You can still rotate the 2D rectangles, of course.
There is also a height limit imposed, so the tower has to be less than or equal to this limit.
The base of a box below another box has to be larger than or equal to (not strictly larger than) the base of the box above it.
I've been trying to apply the LIS algorithm, and the other restrictions seem to be handled, but I cannot think of how to account for the no-duplicates rule.
So my main question is: how do you account for the no-duplicates rule when trying to maximise the height of the stack while keeping it at or below the limit?
Thanks
EDIT:
I realised that if you create the two possible rotations for each item, as you do for the 3D variant, this problem becomes very similar to the 0-1 knapsack problem: the optimal tower must be built from a subset of this sorted list taken in order, so we have to choose which ones to take. However, I still don't know how to make sure no duplicates are taken. Any help on resolving this?
Thanks
EDIT 2:
I found this link: http://courses.csail.mit.edu/6.006/fall10/handouts/recitation11-19.pdf
which on page 4 describes how to solve the single-instance 3D maximum-height version. However, I think this will not work for the height-limit version, since it returns the maximum height for each call. Maybe this can be modified to accommodate the height limit?
OK, so I found out the solution was just the 0-1 style, except with a boolean table, after realising that the order is not important, since any set of 2D rectangles can be sorted into a tower which abides by the restrictions.
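To make that concrete, here is a minimal sketch of the boolean-table idea, assuming integer side lengths; the function name max_tower_height and the (w, h) pair representation are just my own choices for illustration:

def max_tower_height(rects, limit):
    # rects: list of (w, h) integer side lengths; each rectangle may be used
    # at most once, in either rotation.  limit: maximum allowed total height.
    # Only the subset/rotation choice is tracked; the chosen rectangles are
    # then assumed to be sortable by base into a valid tower.
    reachable = [False] * (limit + 1)   # reachable[h]: some subset sums to height h
    reachable[0] = True
    for w, h in rects:
        new = reachable[:]
        for cur in range(limit + 1):
            if not reachable[cur]:
                continue
            for side in (h, w):         # use either side as the vertical one
                if cur + side <= limit:
                    new[cur + side] = True
        reachable = new
    return max(i for i, ok in enumerate(reachable) if ok)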
Any set of 2D rectangles cannot necessarily be sorted into a tower that abides by the height restriction. Even if it could, you would still need to decide which orientation to use for a particular box (rotating it so the base is largest, if it fits, would enable wider rectangles to be stacked on top, but the tower wouldn't be as tall).
The non-0/1 version, where multiple instances of the rectangles are allowed, is solved via dynamic programming: create two rectangles for each item, one per rotation; sort the array of rectangles (enforcing a partial order, so that if rectangle i, with a specified rotation, can fit on top of rectangle j, with a specified rotation, then i must be less than j); and then calculate, for i = 0...n, the maximum height that can be achieved by a tower that ends with the ith rectangle in its specified rotation.
The partial order is required. In the 0/1 case, where multiple instances of rectangles/boxes are not allowed, it seems that you have to generate every possible subset of rotated rectangles/boxes, sort each one, calculate the maximum height it can achieve without violating any conditions (e.g. via dynamic programming), and then keep track of the tallest stack height possible over all the subsets. (Note that there are an exponential number of possible subsets, which, as in the dynamic programming solution for the traveling salesman problem, is still much less than the factorial number of possible orderings of a set.) In particular, the solution at http://courses.csail.mit.edu/6.006/fall10/handouts/recitation11-19.pdf appears incorrect: if i < j but box i with orientation x can fit on top of box j with orientation y, then the tallest tower ending in box i with orientation x might contain box j with orientation y, which wouldn't have been considered when H(i, R) was calculated, contradicting the formula.
Note that the strategy of creating duplicates also seems to fail in the 0/1 case, because whether you can fit the jth box on top of the ith box depends not just on i but on whether a rotated version of the jth box is already underneath i in the stack. So it seems you need to store the highest stack that doesn't contain any given subset of boxes, in which case we are back to calculating the highest stack for every subset of boxes.
Related
How to solve this problem?
Given a group of boxes, you are asked to arrange these boxes on top of each other to reach the minimum possible height.
It is mandatory that a box “X” cannot be placed on top of another box “Y” unless the 2D base area of X is less than or equal to the 2D base area of Y. It is allowed to rotate any box to use any two of its sides as its base, and only one instance of each box may be used.
Find the minimum length side of every box, and the area formed by the other two sides, in linear time. Sort by these areas in O(n log n).
This isn't a DP problem.
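A small sketch of that idea (the helper name minimum_stack_height is mine, not from the answer): standing each box on its two largest sides gives the smallest per-box height and, since the rule only compares base areas, the chosen orientations can always be ordered into a valid stack.

def minimum_stack_height(boxes):
    # boxes: list of (a, b, c) side lengths.
    oriented = []
    for sides in boxes:
        a, b, c = sorted(sides)          # a is the shortest side
        oriented.append((b * c, a))      # (base area, height) with a as the height
    oriented.sort(reverse=True)          # largest base area at the bottom
    total_height = sum(h for _, h in oriented)
    return total_height, oriented        # oriented lists the boxes bottom-to-top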
I don't think we can solve this using DP. I am suggesting a backtracking approach (a rough code sketch follows after the link below):
Find all possible rotations of every box and maintain an array of these rotations.
Sort the array according to base area.
Process each rotation one by one, checking whether it can be added on top of the boxes already placed.
When all boxes have been added to the stack, check whether it gives the minimum height so far.
Backtrack to process the boxes in a different way.
Also, we will need a set to make sure only one instance of each box is added (before adding a box, check whether that box has already been added in this iteration).
Link to the famous backtracking problem you mentioned in the comments:
https://www.geeksforgeeks.org/box-stacking-problem-dp-22/
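Below is a rough sketch of the backtracking described above, assuming every box must be used exactly once; the function name and the (a, b, c) representation are my own.

def min_height_backtracking(boxes):
    # Generate the three rotations of every box as (base_area, height, box_id),
    # sort them so larger bases are tried first, and backtrack over which
    # rotation of which box goes next, keeping one instance per box.
    rotations = []
    for box_id, (a, b, c) in enumerate(boxes):
        for height, base in ((a, b * c), (b, a * c), (c, a * b)):
            rotations.append((base, height, box_id))
    rotations.sort(reverse=True)

    n = len(boxes)
    best = [float("inf")]

    def place(start, last_area, height_so_far, used):
        if len(used) == n:                    # every box placed once
            best[0] = min(best[0], height_so_far)
            return
        if height_so_far >= best[0]:          # prune: cannot improve
            return
        for i in range(start, len(rotations)):
            area, h, box_id = rotations[i]
            if box_id in used or area > last_area:
                continue                      # box already used, or it doesn't fit
            used.add(box_id)
            place(i + 1, area, height_so_far + h, used)
            used.remove(box_id)

    place(0, float("inf"), 0, set())
    return best[0]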
I'm looking for a general algorithm for creating an evenly spaced grid, and I've been surprised how difficult it is to find!
Is this a well solved problem whose name I don't know?
Or is this an unsolved problem that is best done by self organising map?
More specifically, I'm attempting to make a grid on a 2D Cartesian plane in which the Euclidean distance between each point and 4 bounding lines (or "walls" to make a bounding box) are equal or nearly equal.
For a square number, this is as simple as making a grid with sqrt(n) rows and sqrt(n) columns with equal spacing positioned in the center of the bounding box. For 5 points, the pattern would presumably either be circular or 4 points with a point in the middle.
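For that perfect-square case, a tiny sketch (the function name and the choice of equal gaps, including the gap to each wall, are my own assumptions):

import math

def square_grid(n, width, height):
    # n must be a perfect square; lay out sqrt(n) x sqrt(n) points with
    # equal spacing, centred in the width x height bounding box.
    k = math.isqrt(n)
    assert k * k == n, "only handles perfect squares"
    dx, dy = width / (k + 1), height / (k + 1)
    return [(dx * (i + 1), dy * (j + 1)) for j in range(k) for i in range(k)]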
I didn't find a very good solution, so I've sadly left the problem alone and settled for a quick function that produces a rough grid.
There is no simple general solution to this problem. A self-organizing map is probably one of the best choices.
Another way to approach this problem is to imagine the points as particles that repel each other and are also repelled by the walls. As an initial arrangement, you could evenly distribute the points up to the next smaller square number - for this you already have a solution. Then randomly add the remaining points.
Iteratively modify the locations to minimize the energy function based on the total force between the particles and walls. The result will of course depend on the force law, i.e. how the force depends on the distance.
To solve this, you can use numerical methods like FEM.
A simplified and less efficient method based on the same principle is to first set up an estimated minimal distance, based on the square-number case, which you can calculate. Then iterate through all points a number of times and for each one calculate the distance to its closest neighbor. If this is smaller than the estimated distance, move the point in the opposite direction by a certain fraction of the difference.
This method will generally not lead to a stable minimum but should find an acceptable solution after a number of iterations. You will have to experiment with the step size and the number of iterations.
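A minimal sketch of that iteration, assuming a [0, width] x [0, height] box; the function name and the step/iteration defaults are assumptions you would have to tune, and target_dist is the estimate from the square-number case.

import math

def relax_points(points, width, height, target_dist, step=0.3, iterations=200):
    # Each pass moves every point that is closer than target_dist to its
    # nearest neighbor away from that neighbor by a fraction of the
    # shortfall, then clamps it back into the bounding box.
    pts = [list(p) for p in points]
    if len(pts) < 2:
        return [tuple(p) for p in pts]
    for _ in range(iterations):
        for i, p in enumerate(pts):
            j = min((k for k in range(len(pts)) if k != i),
                    key=lambda k: math.dist(p, pts[k]))   # nearest neighbor
            d = math.dist(p, pts[j])
            if 0 < d < target_dist:
                scale = step * (target_dist - d) / d
                p[0] += (p[0] - pts[j][0]) * scale        # push away from neighbor
                p[1] += (p[1] - pts[j][1]) * scale
            p[0] = min(max(p[0], 0.0), width)             # the walls push back
            p[1] = min(max(p[1], 0.0), height)
    return [tuple(p) for p in pts]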
To summarize, you have three options:
FEM method: Efficient but difficult to implement.
Self organizing map: Slightly less efficient, medium complexity of implementation.
Iteration described in last section: Less efficient but easy to implement.
Unfortunately your problem is still not very clearly specified. You say you want the points to be "equidistant" yet in your example, some pairs of points are far apart (eg top left and bottom right) and the points are all different distances from the walls.
Perhaps you want the points to have an equal minimum distance? In that case a simple solution is to draw a cross shape, with one point in the centre and the remainder forming a vertical and a horizontal crossed line. The gaps between the walls and the points, and between the points in the lines, can all be equal, and this works with any number of points.
I have small rectangles of different dimensions (1cm x 2cm, 2cm x 3cm, 4cm x 6cm, etc.). The number of different rectangle types may vary depending on the case, and each type may come in a different count.
I need to create a big rectangle out of all these small rectangles, which can only be placed on the edges. No rotations. The final outer rectangle should ideally be similar to a square shape, i.e. X ~ Y. Not all edges need to be filled up; there can be gaps between the smaller rectangles. Picture example:
http://i.stack.imgur.com/GqI5z.png
I am trying to write a code that finds out the minimum possible area that can be formed.
I have an algorithm that loops through all possible placements to find the minimum area possible. But it takes a long time to run as the number of rectangle types and the number of rectangles increase, e.g. 2 types of rectangles with 100+ rectangles each means 8 nested for loops, which is roughly 100^8 iterations.
Any ideas on a better and faster algorithm to calculate the minimum possible area? The code is in Python, but any algorithmic concept is fine.
# minimum_area holds the best area found so far;
# calculate_minimum_area() (definition omitted) evaluates one particular split.
minimum_area = float("inf")
for rectange_1_top_count in range(0, all_rectangles[1]["count"] + 1):
    for rectange_1_bottom_count in range(0, all_rectangles[1]["count"] - rectange_1_top_count + 1):
        for rectange_1_left_count in range(0, all_rectangles[1]["count"] - rectange_1_top_count - rectange_1_bottom_count + 1):
            # whatever remains of type 1 goes to the right edge
            for rectange_1_right_count in [all_rectangles[1]["count"] - rectange_1_top_count - rectange_1_bottom_count - rectange_1_left_count]:
                for rectange_2_top_count in range(0, all_rectangles[2]["count"] + 1):
                    for rectange_2_bottom_count in range(0, all_rectangles[2]["count"] - rectange_2_top_count + 1):
                        for rectange_2_left_count in range(0, all_rectangles[2]["count"] - rectange_2_bottom_count - rectange_2_top_count + 1):
                            # whatever remains of type 2 goes to the right edge
                            for rectange_2_right_count in [all_rectangles[2]["count"] - rectange_2_bottom_count - rectange_2_left_count - rectange_2_top_count]:
                                area = calculate_minimum_area()
                                if area < minimum_area:
                                    minimum_area = area
This looks like an NP-hard problem, so there exists no simple and efficient algorithm. It doesn't mean that there is no good heuristic that you can use, but if you have many small rectangles, you won't find the optimal solution fast.
Why is it NP-hard? Let's assume all your rectangles have height 1 and you have one rectangle of height 2; then it would make sense to look for a solution with total height 2 (basically, you try to form two horizontal lines of height-1 rectangles with the same length). To figure out whether such a solution exists, you would have to form two subsets of your small rectangles, both adding up to the same total width. This is called the partition problem, and it is NP-complete. Even if there may be gaps and the total widths are not required to be the same, this is still an NP-hard problem: you can reduce the partition problem to your rectangle problem by converting the elements to partition into rectangles of height 1 as outlined above.
I'll wait for the answer to the questions I posted in the comments to your question and then think about it again.
Let's say I have n equally sized and equally rotated square boxes inside a limited area in a 2D coordinate system (floating-point coordinates). The boxes must not overlap.
Now I want to find a free space for one more box. I need some tips for an algorithm to solve this. Any ideas?
There ought to be a scan-line algorithm for this. You say the boxes are equally rotated, so you should be able to rotate the coordinate system, if necessary, so that the edges of the boxes are parallel to the x and y axes. I would then sort the boxes in order of y coordinate.
Now try placing a box in the lowest possible position. Read from the sorted boxes to find all the boxes low enough to interfere with your placement and create an ordered set (e.g. a red-black tree or similar container class) of these boxes. Scan along this set of boxes and see if there is a gap big enough to place a box.
If not, use the original sorted list of boxes to find and remove the lowest box, so you can consider putting the new box in just above that lowest box, where it cannot interfere. Add more boxes from the sorted list to cover all boxes high enough to interfere with this new possible height. Keep track of where you have removed boxes and check there to see whether a gap big enough to hold a box has opened up.
If not, repeat the exercise until you find a gap or run out of space at the top of the possible area.
This looks like O(N log N) for the initial sort, and then a cost of at most O(log N) per box to insert and delete boxes from the ordered set. Checking for gaps is no more expensive than this, because you only check for a gap in a location where you have just removed a box. So I think the total cost is O(N log N).
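A simplified sketch of that idea, using plain sorted lists instead of a balanced tree (so it is quadratic rather than O(N log N)); the helper name and the (x, y) lower-left-corner representation are assumptions.

def find_free_spot(boxes, s, width, height):
    # boxes: list of (x, y) lower-left corners of the existing s x s boxes.
    # Returns an (x, y) placement for a new box, or None if nothing fits.
    candidates = sorted({0.0} | {y + s for _, y in boxes})   # floor and box tops
    for cy in candidates:
        if cy + s > height:
            break
        # boxes that vertically overlap the strip [cy, cy + s)
        blocking = sorted(x for x, y in boxes if y < cy + s and y + s > cy)
        cursor = 0.0
        for bx in blocking:
            if bx - cursor >= s:
                return (cursor, cy)      # gap before this box
            cursor = max(cursor, bx + s)
        if width - cursor >= s:
            return (cursor, cy)          # gap after the last box
    return None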
I started toying with this idea some years ago when I wrote my university papers. The idea is this - the perfect color quantization algorithm would take an arbitrary true-color picture and reduce the number of colors to the minimum possible, while maintaining that the new image is completely indistinguishable from the original with a naked eye.
Basically the setting is simple - you have a set of points in the RGB cube (from 0 to 255 integer values on each axis). You have to replace each of these points with another point in such a way that:
The total number of points after the operation is as small as possible;
The distance from an original point to the replacement point is no larger than some predefined constants R, G and B on each of the red, green and blue axes (these are taken from the sensitivity of the human eye and are in general configurable by the user).
I know that there are many color quantization algorithms out there that work with different efficiencies, but they are mostly targeted at reducing colors to a certain number, not "the minimum possible without violating these constraints".
Also, I would like the algorithm to produce really absolute minimum possible, not just something that is "pretty close to minimum".
Is this possible without a time-consuming full search of all combinations (infeasible for any real picture)? My instincts tell me that this is an NP-complete problem or worse, but I cannot prove it.
Bonus setting: Change the limit from constants R,G,B to a function F(Rsource, Gsource, Bsource, Rtarget, Gtarget, Btarget) which returns TRUE if the mapping would be OK, and FALSE if it was out of range.
Given your definitions, the structure of the picture (i.e. how the pixels are organized) does not matter at all; the only thing that matters is the subset of RGB triplets that appear at least once in the picture as a pixel value. Let that subset be S. You then want to find another subset of RGB triplets E (the encoding) such that for every s in S there exists a counterpart e in E with diff(s, e) <= threshold, where threshold is the limit you impose on the acceptable difference and diff(...) reduces the triplet distance to a single number.
Additionally, you want E to be minimal in size, i.e. for any E' with |E'| < |E| there is at least one s in S with no e in E' satisfying the difference constraint.
This particular problem cannot be given an asymptotic complexity assessment, because it has only a finite set of instances. It can be solved in constant time (theoretically) by precalculating the minimum set E for every subset S. There is a huge, but still finite, number of subsets S, so the problem cannot be classified as, e.g., an NP-complete optimization problem. The actual run time of your algorithm for this particular problem hence depends entirely on the amount of preprocessing you are willing to tolerate. In order to get an asymptotic complexity assessment, you need to generalize the problem first so that the set of problem instances is strictly infinite.
Optimal quantization is an NP-hard problem (Son H. Nguyen, Andrzej Skowron — Quantization Of Real Value Attributes, 1995).
Predefined maximum distance doesn't make things easier when you have clusters of points which are larger than your sphere, but distances between points are less than sphere radius — then you have a lot of combinations (as each choice of placement of a sphere may displace all other spheres). And unfortunately this is going to happen quite often on real images with gradients (it's not unusual for entire histogram to be one huge cluster).
You can modify many quantization algorithms to choose the number of clusters adaptively, until a certain quality is satisfied; e.g. in Median Cut and Linde–Buzo–Gray you can simply stop subdividing space when you reach your quality limit. It won't be guaranteed to be the global minimum (that is NP-hard), but in LBG you'll at least know you're at a local minimum.
Here's an idea how I'd go about this - unfortunately this will probably need a lot of memory and be very slow:
You create a 256x256x256 cubic data structure that contains a counter and a "neighbors" list of colors. For every unique color that you find in your image you increase the counter of each cell which is within the radius of a sphere around that color. The radius of the sphere is the maximum acceptable distance that you have defined originally. You also add the color to the neighbors list of each cell.
Once you have added all unique colors you loop through the cube and find the cell with the maximum counter value. Add this color to your result list. Now loop through your cube again and remove this color and all colors that are in the neighbors list of that color from all cells and decrease each cell's counter whenever you remove a color. Then repeat searching for the maximum counter and removing until no more colors are in the cube.
Alternatively, one could also add the same color multiple times if it occurs more often in the image. I'm not sure whether that would improve the visual result.
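A condensed sketch of that greedy idea: instead of the full 256x256x256 cube it only considers colors that actually occur in the image as candidate representatives (an assumption that keeps memory small but can miss representatives lying between image colors); the names are mine.

def minimal_palette(pixels, max_diff):
    # pixels: iterable of (r, g, b) tuples; max_diff: per-channel limit.
    # Greedily pick the color that covers the most still-uncovered colors.
    def close(a, b):
        return all(abs(x - y) <= max_diff for x, y in zip(a, b))

    remaining = set(pixels)              # unique colors still uncovered
    palette = []
    while remaining:
        best = max(remaining,
                   key=lambda cand: sum(close(cand, c) for c in remaining))
        palette.append(best)
        remaining = {c for c in remaining if not close(best, c)}
    return palette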