Highest possible pyramid - algorithm

I tried everything, but I still can't solve this problem without brute force:
I get N blocks with a known height and width. I can rotate them (height becomes width and width becomes height), and I have to build the tallest possible pyramid from them (of course I can change the order of the blocks).
The problem is that you can't put a block of width X onto a block with width smaller than X.
EDIT:
The problem is, that you can't put a block onto a block of the same width.
Any ideas?

What I understand from your problem statement and comments is that you want to build the tallest pyramid, with widths decreasing from bottom to top.
If that is the case, we can simply do the following:
1. Loop over the blocks and swap width and height whenever width > height, so that each block's height is the larger of its two dimensions.
2. Sort the blocks in decreasing order of width; this is the stacking order from bottom to top in the pyramid.
3. The answer is the sum of all the heights.
Note: step 2 is only needed if you want to display the order of blocks from bottom to top in the pyramid.
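
Here is a minimal Python sketch of those steps (the function name and return format are my own; like the steps above, it ignores the equal-width restriction from the edit):

def tallest_pyramid(blocks):
    # blocks: list of (width, height) pairs
    # Step 1: orient each block so its height is the larger dimension.
    oriented = [(min(w, h), max(w, h)) for (w, h) in blocks]
    # Step 2: sort by decreasing width -- the stacking order from bottom to top.
    oriented.sort(key=lambda b: b[0], reverse=True)
    # Step 3: the pyramid's height is the sum of all oriented heights.
    return sum(h for (_, h) in oriented), oriented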

Related

Find the rectangle with the smallest area that can hold another rectangle

Assume that I have a set of rectangles (with different or same dimensions).
The task is to find (and remove) the rectangle from the set that is larger than or equal to a given rectangle.
It should also be the smallest rectangle in the set that can encompass the given rectangle.
This is easily solved in O(n) time by doing a linear search / update, but is it possible to achieve better results?
O(log n) would be optimal I'd assume.
Insert and removal must also be faster than O(n) for this to be of any use in my case.
Can any shortcuts be made by not finding the optimal rectangle, but rather relaxing the second restriction to:
"It should also be one of the smallest rectangles that can encompass the given rectangle"?
I was thinking along the lines of using a Z-order curve (of the width/height), using it as a one-dimensional index and combining that with a tree.
Would that work? Or would there be too much waste?
Another approach would be to use a tree on one axis, and then test the other axis linearly.
Anyone done something similar and can share their experience?
Here's an idea which is not fully elaborated yet:
Maybe you could use a four-branched tree with 2-tuple values (width and height), each node representing one rectangle.
One node (w, h) has 4 child-nodes:
(<w, <h) - contains rects which have smaller width and smaller height
(>=w, <h) - contains rects which have greater width and smaller height
(<w, >=h) - contains rects which have smaller width and greater height
(>=w, >=h) - contains rects which have greater width and greater height
When you descend into a (w, h) node to look for a container for your (w2, h2) rect, there are 4 different cases:
w2<w and h2<h - three options: (>=w, <h), (<w, >=h), (>=w, >=h)
w2>=w and h2<h - two options: (>=w, <h), (>=w, >=h)
w2<w and h2>=h - two options: (<w, >=h), (>=w, >=h)
w2>=w and h2>=h - one option: (>=w, >=h)
You would have to descend to all possible branches, which is still better than O(n).
Inserting is O(log n).
Not sure about deleting and balancing yet. But I am almost certain there is a solution for that as well.
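
Here is a rough Python sketch of such a node (names are mine; note that in the "both smaller" case it also descends into the (<w, <h) branch, since that branch can still hold rectangles at least as large as the query; deletion and balancing are left open, as noted above):

class Node:
    def __init__(self, w, h):
        self.w, self.h = w, h
        # four children, keyed by (width >= w, height >= h) relative to this node
        self.child = {}

    def insert(self, w, h):
        key = (w >= self.w, h >= self.h)
        if key in self.child:
            self.child[key].insert(w, h)
        else:
            self.child[key] = Node(w, h)

    def find_container(self, w2, h2, best=None):
        # this node itself is a candidate if it can hold the (w2, h2) rectangle
        if self.w >= w2 and self.h >= h2:
            if best is None or self.w * self.h < best.w * best.h:
                best = self
        for (ge_w, ge_h), child in self.child.items():
            # skip a "smaller" branch if the query is already at least as wide
            # (or as tall) as this node -- nothing in it can be a container
            if (ge_w or w2 < self.w) and (ge_h or h2 < self.h):
                best = child.find_container(w2, h2, best)
        return best

How much this prunes depends on how well the splits balance, so the worst case can still degenerate toward a linear scan.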

Locating "black rectangles" on an image - language independent

I am writing a small image analysis program just for fun. Image analysis has always fascinated me. I am trying to locate regions on a scanned document. These regions are going to be marked by clearly defined filled black rectangles (pre-printed on the page).
My problem is locating the rectangles. I know SIFT/SURF find "features", but I am trying to find something specific. Here is what I was thinking of doing; I am not sure if this is the "right" way or if there is a better idea.
First, using some library, I will turn the image into greyscale, perhaps a PGM since that is what I'm used to working with in school. For the analysis I plan to run the image through a state-of-the-art deskew algorithm in OpenCV or something else that I find. Once I have my deskewed image I will threshold it at some fairly high threshold; the rectangles are going to be solid black, hence the high threshold.
I will then experimentally determine a good-sized black rectangle to slide across the image. While sliding my rectangle across the image I will determine the areas where the greatest percentage of pixels are the same. I will have a cutoff, say 90%: if 90% of the pixels contained in my window are black, I must have found a rectangle. My reasoning is that a true black rectangle slid over something that is "pretty much" a black rectangle is most likely a black rectangle. Since I deskewed the image I can assume that the rectangles are straight up and down "enough". I can then track the (x, y) offsets where the rectangles are found in the image and mark them.
Would anyone suggest a better approach?
There are many approaches that might work. (One can easily come up with 10 or more approaches.)
Idea #1 - Canny edge detection; find rectangle fit to contours
cv::Canny
cv::findContours
cv::minAreaRect, or
cv::boundingRect might also work, if the deskewing works as advertised.
Idea #2 - Find all lines using the Hough transform; iterate through all regions created from line intersections.
Idea #3 - (Improvement on #2) Restrict the Hough transform to horizontal and vertical lines by pre-processing.
Idea #4 - Compute horizontal and vertical projection profiles for the entire image; find dips; iterate through all candidate regions (a rough sketch follows after the preprocessing notes below).
This idea is based on the assumption that the black rectangles are large enough that they leave a "depression" in both the horizontal and vertical projection profiles, which would be detectable despite other noise objects in the image.
cv::reduce
with dim = 0 or 1 to reduce to a single row or column, respectively, and with the CV_REDUCE_AVG flag.
Apply cv::threshold to the horizontal and vertical projection profiles, separately.
For each profile now thresholded into zero/non-zero, find runs of zeroes. These are the possible row ranges and column ranges that could contain the dark rectangles.
For each combination of candidate row range and column range, calculate the average pixel value to decide if it is a true dark rectangle.
Idea #5 - Use integral image (summed area table) to quickly calculate the average pixel value in arbitrary rectangles
cv::integral
To compute the sum (and average) of a rectangle from an integral image, see the Wikipedia article on Summed Area Table
Preprocessing idea - use morphological dilation (or erosion) to "erase" things that cannot be the large continuous black box.
Preprocessing idea - use pre-processing to enhance horizontal and vertical edges; suppress edges in other directions.
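
To make Idea #4 concrete, here is a rough Python sketch; the threshold is an illustrative guess, plain NumPy means stand in for cv::reduce with CV_REDUCE_AVG, and it assumes the rectangles are large enough to depress the profiles:

import numpy as np

def find_dark_rect_candidates(gray, dark_thresh=64):
    # horizontal and vertical projection profiles: average intensity per row / column
    row_profile = gray.mean(axis=1)
    col_profile = gray.mean(axis=0)

    def dark_runs(profile):
        # runs of consecutive "dark" entries after thresholding the profile
        runs, start = [], None
        for i, v in enumerate(profile):
            if v < dark_thresh and start is None:
                start = i
            elif v >= dark_thresh and start is not None:
                runs.append((start, i))
                start = None
        if start is not None:
            runs.append((start, len(profile)))
        return runs

    # every combination of a dark row range and a dark column range is a
    # candidate; verify it by its average pixel value (an integral image,
    # Idea #5, would make this verification O(1) per candidate)
    candidates = []
    for r0, r1 in dark_runs(row_profile):
        for c0, c1 in dark_runs(col_profile):
            if gray[r0:r1, c0:c1].mean() < dark_thresh:
                candidates.append((c0, r0, c1 - c0, r1 - r0))  # x, y, w, h
    return candidates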
I don't know if it is a better approach, but the first thing that came to mind would be a scan-line solution (assuming black or white pixels): I'd check each scanline from top to bottom, and in each scanline I'd check each pixel from left to right. A "first" black pixel would be a possible upper-left corner of a rect. If there were enough following contiguous black pixels on the line to meet my desired minimum width, keep the [left, width] in a list of possible rects. Find all possible rect starts and widths on the line.
For a rect to stay in the list and grow in height, the next scanline would have to have the same [left, width] occurrence, otherwise the rect is finished (if its height meets my desired minimum height) or discarded or ignored as too short in height.
You can easily add logic for situations like two rectangles too close to one another vertically or horizontally. Overlapping rectangles would be trickier but still possible to detect with added code.
Here's some pseudocode:
for s := 1 to scanlinecount do
begin
  pixel := 1
  while pixel <= scanlinewidth do
    if black(s, pixel) then // possible rect
    begin
      left := pixel
      repeat
        inc(pixel)
      until (pixel > scanlinewidth) or white(s, pixel)
      width := pixel - left
      if width >= MINWIDTH then // wide enough
        rememberrect(s, left, width) // bumps height if already in list
    end
    else
      inc(pixel)
end
Your list of found rects stores the starting scanline, leftmost pixel, width, and height for each rect found. The "rememberrect" routine checks each rect in the list:
rememberrect(currentline, left, width):
  for r := 1 to rectlist.count do
    if (rectlist[r].left = left)
       and (rectlist[r].width = width)
       and (rectlist[r].y + rectlist[r].height = currentline) then
    begin // found rect continuing on this scanline
      inc(rectlist[r].height)
      exit
    end
  inc(rectlist.count) // add new rect to list
  rectlist[rectlist.count].left := left
  rectlist[rectlist.count].width := width
  rectlist[rectlist.count].y := currentline
  rectlist[rectlist.count].height := 1
If the group of black pixels on the current scanline has the same leftmost pixel and width as a group on the previous scanline (you'll know they're vertically contiguous because the starting scanline of the rect in the list plus its height will equal the current scanline) then rememberrect bumps the height of the found and remembered rect by 1. Otherwise, remember the new rect with initial height 1.
After the last scanline you'll have a long list of rect candidates, many of them only 1 pixel high. Delete or ignore any rects in the list that aren't high enough. To avoid growing a long list of futile candidates: at the start of each scanline mark all rects found so far as "finished". If rememberrect grows an existing rect or adds a new rect, mark that rect as "grown". At the end of each scanline, any rect still marked as finished that isn't tall enough can be deleted from the list.

Minimum number of rectangles in shape made from rectangles?

I'm not sure if there's an algorithm that can solve this.
A given number of rectangles are placed side by side horizontally from left to right to form a shape. You are given the width and height of each.
How would you determine the minimum number of rectangles needed to cover the whole shape?
i.e How would you redraw this shape using as few rectangles as possible?
I can only think of trying to squeeze in as many big rectangles as I can, but that seems inefficient.
Any ideas?
Edit:
You are given a number n, and then n sizes:
2
1 3
2 5
The above would have two rectangles of sizes 1x3 and 2x5 next to each other.
I'm wondering how many rectangles, at minimum, I would need to recreate that shape, given that rectangles cannot overlap.
Since your rectangles are well aligned, it makes the problem easier. You can simply create rectangles from the bottom up. Each time you do that, it creates new shapes to check. The good thing is, all your new shapes will also be base-aligned, and you can just repeat as necessary.
First, you want to find the minimum-height rectangle. Make a rectangle of that height, with width equal to the total width of the shape. Cut that much off the bottom of the shape.
You'll be left with multiple shapes. For each one, do the same thing.
Finding the minimum-height rectangle should be O(n). Since you do that for each group, the worst case is all different heights, which totals out to O(n²).
For example:
In the image, the minimum for each shape is highlighted green. The resulting rectangle is blue, to the right. The total number of rectangles needed is the total number of blue ones in the image, 7.
Note that I'm explaining this as if these were physical rectangles. In code, you can completely do away with the width, since it doesn't matter in the least unless you want to output the rectangles rather than just counting how many it takes.
You can also reduce the "make a rectangle and cut it from the shape" step to simply subtracting that height from each rectangle that makes up the shape/subshape. Each contiguous section with positive height after doing so makes up a new subshape.
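
A short Python sketch of this bottom-up cutting procedure (it only counts rectangles; the input is the list of heights from left to right, and the names are mine):

def min_rectangles(heights):
    # heights: strictly positive heights of one contiguous, base-aligned shape
    if not heights:
        return 0
    cut = min(heights)                      # height of the minimum-height rectangle
    count = 1                               # one slab spanning the whole subshape
    remaining = [h - cut for h in heights]  # cut that slab off the bottom
    # every contiguous run of positive leftover heights is a new subshape
    run = []
    for h in remaining + [0]:               # trailing 0 flushes the last run
        if h > 0:
            run.append(h)
        elif run:
            count += min_rectangles(run)
            run = []
    return count

For instance, if the two example rectangles above have heights 3 and 5, min_rectangles([3, 5]) returns 2.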
If you look for an overview on algorithms for the general problem, Rectangular Decomposition of Binary Images (article by Tomas Suk, Cyril Höschl, and Jan Flusser) might be helpful. It compares different approaches: row methods, quadtree, largest inscribed block, transformation- and graph-based methods.
A juicy figure (from page 11) as an appetizer:
Figure 5: (a) The binary convolution kernel used in the experiment. (b) Its 10 blocks of GBD decomposition.

Partitioning of 192 items into packages (max. 12 per package) while minimizing total surface area

I'm trying to solve the following problem.
Given 192 items with specified length and width, I want to find a packaging order that minimizes the total surface area. The items all have the same height (which is not specified). Each package cannot contain more than 12 items, and due to the dimensions of the items it is not possible to store more than 1 item in the same layer. An item can only be stacked on top of another item if its width and length do not exceed the width and length of the lower item.
The goal is to minimize the total surface area, being the surface of the largest object (on the bottom).
I've found an extensive amount of literature on pallet and bin loading, but I can't figure out what I need exactly. Here's what I've come up with:
1) select the item i with the largest surface (width*length) and place it on the bottom of stack j.
2) select the item i with the second largest surface
a) if its width and length do not exceed the width and length of stack j's bottom item, place it on top of the bottom item in stack j = 1
b) if its width and length do exceed the width and length of stack j's bottom item, rotate the item. If it then fits, place it on top of the bottom item in stack j = 1.
c) if the rotated item's width and length still exceed the width and length of stack j's bottom item, place it on the bottom of a new stack j + 1 = 2
3) select the item with the third largest surface and repeat steps a, b and c
and so on...
Any remarks, or tips? I have no idea if this will yield an (optimal) solution.
Just a hint for thinking: the "can be stacked on top of" constraint defines a partial ordering of the items. The partial ordering can be represented as a graph (a DAG, which admits a topological ordering).
Now you can consider the paths starting at every item, with length not exceeding 12. Tentative solutions can be tried by iteratively removing those paths from the graph until the graph is exhausted. (When you remove a path, you will have to repair other paths having items in common with it.)
It might be that the problem expressed in terms of paths is easier to solve than when expressed in terms of nodes.
One issue has to be solved: when removing a path, can it always be of maximal length, or can shorter ones yield better global solutions?
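
As a small illustration of that hint, here is a Python sketch of the "can be stacked on top of" relation as a graph (assuming each item is a (length, width) pair and rotation is allowed; identical sizes fit both ways, so such ties would need breaking, e.g. by index, to keep the graph acyclic):

def fits_on(top, bottom):
    # True if `top` can be stacked on `bottom`, allowing a 90-degree rotation
    (tl, tw), (bl, bw) = top, bottom
    return (tl <= bl and tw <= bw) or (tw <= bl and tl <= bw)

def stacking_graph(items):
    # items: list of (length, width); edges[i] lists the items i may sit on
    edges = {i: [] for i in range(len(items))}
    for i, top in enumerate(items):
        for j, bottom in enumerate(items):
            if i != j and fits_on(top, bottom):
                edges[i].append(j)
    return edges

Paths of length at most 12 in this graph correspond to the candidate packages mentioned above.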

Packing rectangles for compact representation

I am looking for pointers to the solution of the following problem: I have a set of rectangles, whose heights are known and whose x-positions are fixed, and I want to pack them in the most compact form. With a little drawing (where all rectangles are of the same width, but the width may vary in real life), I would like, instead of:
-r1-
-r2--
-r3--
-r4-
-r5--
something like:
-r1- -r3--
-r2-- -r4-
-r5--
All hints will be appreciated. I am not necessarily looking for "the" best solution.
Your problem is a simpler variant, but you might get some tips from reading about the heuristics developed for the bin packing problem. There has been a lot written about this, but this page is a good start.
Topcoder held a competition to solve the 3D version of this problem. The winner discussed his approach here; it might be an interesting read for you.
Are the rectangles all of the same height? If they are, and the problem is just which row to put each rectangle in, then the problem boils down to a series of constraints over all pairs of rectangles (X,Y) of the form "rectangle X cannot be in the same row as rectangle Y" when rectangle X overlaps in the x-direction with rectangle Y.
A 'greedy' algorithm for this sorts the rectangles from left to right, then assigns each rectangle in turn to the lowest-numbered row in which it fits. Because the rectangles are being processed from left to right, one only needs to worry about whether the left hand edge of the current rectangle will overlap any other rectangles, which simplifies the overlap detection algorithm somewhat.
I can't prove that this gives the optimal solution, but on the other hand I can't think of any counterexamples offhand either. Anyone?
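
Here is a quick Python sketch of that greedy assignment, assuming equal heights so only the x-intervals matter (names are mine):

def assign_rows(rects):
    # rects: list of (x, width); returns a row index for each rectangle
    order = sorted(range(len(rects)), key=lambda i: rects[i][0])  # left to right
    row_right = []                 # rightmost occupied x-coordinate of each row
    rows = [0] * len(rects)
    for i in order:
        x, w = rects[i]
        for r, right in enumerate(row_right):
            if x >= right:         # fits in this row without overlapping
                rows[i] = r
                row_right[r] = x + w
                break
        else:
            rows[i] = len(row_right)   # open a new row
            row_right.append(x + w)
    return rows

Because rectangles are processed left to right, keeping only each row's rightmost edge is enough for the overlap test, as noted above.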
Something like this?
Sort your collection of rectangles by x-position.
Write a method that checks which rectangles are present in a certain interval of the x-axis:
Collection<Rectangle> overlaps (int startx, int endx, Collection<Rectangle> rects){
...
}
Loop over the collection of rectangles:
Collection<Rectangle> toDraw;   // the rectangles, sorted by x-position
Collection<Rectangle> drawn = new ArrayList<Rectangle>();
for (Rectangle r : toDraw) {
    // everything already placed that shares x-range with r ends up below it
    Collection<Rectangle> overlapping = overlaps(r.x, r.x + r.width, drawn);
    int y = 0;
    for (Rectangle overlapRect : overlapping) {
        y += overlapRect.height;
    }
    drawRectangle(y, r);
    drawn.add(r);
}
Put a Tetris-like game on your website. Generate the blocks that fall and the size of the play area based on your parameters. Award points to players based on the compactness (less free space = more points) of their design. Get your website visitors to perform the work for you.
I had worked on a problem like this before. The most intuitive picture is probably one where the large rectangles are on the bottom, and the smaller ones are on top, kinda like putting them all in a container and shaking it so the heavy ones fall to the bottom. So to accomplish this, first sort your array in order of decreasing area (or width) -- we will process the large items first and build the picture ground up.
Now the problem is to assign y-coordinates to a set of rectangles whose x-coordinates are given, if I understand you correctly.
Iterate over your array of rectangles. For each rectangle, initialize the rectangle's y-coordinate to 0. Then loop by increasing this rectangle's y-coordinate until it does not intersect with any of the previously placed rectangles (you need to keep track of which rectangles have been previously placed). Commit to the y-coordinate you just found, and continue on to process the next rectangle.
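
A compact Python sketch of this approach (grid-aligned steps for simplicity, with my own names):

def place(rects):
    # rects: list of (x, width, height); returns (x, y, width, height) placements
    placed = []
    # process large rectangles first so they end up at the bottom
    for x, w, h in sorted(rects, key=lambda r: r[1] * r[2], reverse=True):
        y = 0
        # raise the rectangle until it overlaps nothing already placed
        while any(x < px + pw and px < x + w and y < py + ph and py < y + h
                  for px, py, pw, ph in placed):
            y += 1
        placed.append((x, y, w, h))
    return placed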
