Problem description: Given a rectangular array of data, we want to display it in a table of given width W and minimal height H. Some data items take more space than others. I can live with the simplifying assumption that the space requirement of item (i, j) can be expressed as an area a_ij.
More formally: find positive column widths w_1, ..., w_m with w_1 + ... + w_m = W, minimizing the height H = h_1 + ... + h_n subject to a_ij <= h_i * w_j for all i, j.
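For what it's worth, here is a minimal numerical sketch, assuming we treat the widths as continuous. For fixed widths w, the cheapest feasible heights are h_i = max_j a_ij / w_j, so H(w) = sum_i max_j a_ij / w_j, which is convex for w > 0 and can be handed to a generic constrained minimizer (scipy here; minimal_height is a made-up name):

    # Sketch: H(w) = sum_i max_j a[i][j] / w[j] is convex on w > 0, so a
    # generic constrained minimizer should find the global optimum.
    import numpy as np
    from scipy.optimize import minimize

    def minimal_height(a, W):
        a = np.asarray(a, dtype=float)
        n, m = a.shape

        def H(w):
            return (a / w).max(axis=1).sum()   # h_i = max_j a_ij / w_j

        w0 = np.full(m, W / m)                 # start from equal widths
        res = minimize(H, w0,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - W}],
                       bounds=[(1e-9, W)] * m)
        return res.fun, res.x                  # minimal H and the widths w

    # minimal_height([[4, 1], [1, 4]], W=4.0) -> (4.0, array([2., 2.])) approximately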
This feels like a standard problem to me but I wasn't able to find much on it.
My question is:
Is this indeed a standard problem? I.e. does it have a name? Is it a hard problem? Are there textbook algorithms for solving it?
Related
I'm working on a problem that is similar to the box stacking problem, which can be solved with a dynamic programming algorithm. I have read posts here on SO about it, but I have a hard time understanding the DP approach and would like some explanation of how it works. Here's the problem at hand:
Given X objects, each with its own weight 'w' and strength 's', how
many can you stack on top of each other? An object can carry its own
weight and the sum of all weights on top of it as long as it does not
exceed its strength.
I understand that it has optimal substructure, but it's the overlapping-subproblems part that confuses me. I'm trying to draw a recursion tree to see where it would compute the same thing several times, but I can't figure out whether the function would take one or two parameters, for example.
The first step to solving this problem is proving that you can find an optimal stack with boxes ordered from highest to lowest strength.
Then you just have to sort the boxes by strength and figure out which ones are included in the optimal stack.
The recursive subproblem has two parameters: find the best stack you can put on top of a stack with X remaining strength, using boxes at positions >= Y in the list.
If a good DP solution exists, it takes two parameters:
the number of visited objects (equivalently, the number of unvisited objects)
the total weight of unvisited objects you can currently afford (the weight of already visited objects does not matter)
To make it work you have to find an ordering in which placing an object on top of a later object is never useful. That is, for any solution that violates this ordering, there is another solution that follows the ordering and is at least as good.
You have to prove that such an ordering exists and define it clearly. I don't think simply sorting by strength, as Matt Timmermans suggested, is enough, since weight also carries some meaning. But that's the proving part...
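For illustration, here is a minimal sketch of that two-parameter memoized recursion. Following the caveat above, it sorts by weight plus strength (a common exchange-argument ordering for constraints of this shape) rather than by strength alone; treat both the ordering and the code as an assumption to check, not a settled answer:

    # best(i, cap) = most boxes we can still stack using boxes[i:], given the
    # stack below us can carry `cap` more weight. Ordering by weight + strength
    # (descending, bottom of the stack first) is an assumption here.
    from functools import lru_cache

    def max_stack(boxes):
        # boxes: list of (weight, strength) pairs
        boxes = sorted(boxes, key=lambda b: b[0] + b[1], reverse=True)

        @lru_cache(maxsize=None)
        def best(i, cap):
            if i == len(boxes):
                return 0
            w, s = boxes[i]
            result = best(i + 1, cap)        # skip this box
            if w <= cap:
                # everything above this box may weigh at most min(cap - w, s)
                result = max(result, 1 + best(i + 1, min(cap - w, s)))
            return result

        return best(0, float("inf"))

    # max_stack([(10, 5), (3, 20), (4, 2)]) -> 3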
Suppose you have a rectangular surface and some round/rectangular objects of different sizes.
I want to write an algorithm that will tidy up those objects on the surface.
I have to fit as many objects as possible onto the surface.
I think I will have to place the biggest objects first and then the smallest.
Do you know if there is a specific algorithm to optimize this?
It is a kind of Tetris problem, except that I can choose the order of the pieces.
Thanks
Since you want to maximise the number of objects you place, a greedy algorithm might work well in most cases (a rough sketch follows the steps below):
Sort the boxes by length (ascending order).
Start from the smallest box:
for every box:
try to place it in an already occupied row
if not possible, place it in a new row
if that is not possible either, break; // anything bigger would not fit
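A minimal sketch of that greedy, assuming rectangular boxes given as (width, height) and simple row-based ("shelf") placement; note the early break mirrors the last step above and leans on the boxes being sorted:

    # Greedy shelf packing: fill rows of a W x H surface left to right,
    # opening a new row when no existing row has room.
    def shelf_pack(boxes, W, H):
        boxes = sorted(boxes)            # ascending by width, then height
        rows = []                        # each row: [x_used, row_height]
        y_used = 0
        placed = 0
        for w, h in boxes:
            for row in rows:             # try an already occupied row
                if row[0] + w <= W and h <= row[1]:
                    row[0] += w
                    placed += 1
                    break
            else:                        # otherwise open a new row
                if w <= W and y_used + h <= H:
                    rows.append([w, h])
                    y_used += h
                    placed += 1
                else:
                    break                # later boxes are at least as wide
        return placed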
If you are also considering height, this is called a packing problem.
You can check related algorithms here
This is related to the Knapsack problem.
EDIT:
More precisely, it's a relative of Knapsack: the Bin Packing Problem.
I posted this on the computer science site but no one replied :(. Any help would be greatly appreciated :).
There is a grid of size MxN, where M ~ 20000 and N ~ 10, so M is very large. One way to look at this is as N grid blocks of size M placed side by side. Next, assume there are K users, each with an MxN utility matrix where each element gives the utility that user obtains if assigned that grid element. The allocation needs to be done such that, for each assigned user, the total utility exceeds a certain threshold U in every grid block. Assume only one user can be assigned to a given grid element. What is the maximum number of users that can be assigned? (It's okay if some users are not assigned.)
Level 2: Now assume that for each user, at least n out of N blocks must exceed the utility threshold U. For this version, what is the maximum number of users that can be assigned?
Of course brute-force search is of no use here due to the K^(MN) complexity. I am guessing that some kind of dynamic programming approach may be possible.
To my understanding, the problem can be modelled as a maximum bipartite matching problem, which can be solved efficiently with the Hungarian algorithm. In the left partition L, create K nodes, one for each user. In the right partition R, create M*N nodes, one for each cell in the grid. Create an edge between each l in L and r in R with weight equal to the utility of assigning user l to grid cell r.
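As a rough illustration of this modelling only (it maximises total utility and does not enforce the per-block threshold U, which is the hard part, as the answer below argues), scipy's rectangular Hungarian solver can be applied; assign_users is a made-up name:

    # Match K users to M*N grid cells, maximising total assigned utility.
    # linear_sum_assignment is scipy's Hungarian-style solver; it minimises,
    # so utilities are negated. The threshold constraint is NOT enforced.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_users(utility):
        # utility: K x (M*N) array; utility[k, c] = value of cell c to user k
        cost = -np.asarray(utility, dtype=float)
        users, cells = linear_sum_assignment(cost)
        return list(zip(users, cells))   # (user, cell) pairs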
Using a different interpretation of your question than Codor, I am going to claim that (at least in theory, in the worst case) it is a hard problem.
Suppose that we can solve it in the special case when there is one block which must be shared between two users, who each have the same utility for each cell, and the threshold utility U is (half of the total utility for all the cells in the block), minus one.
This means that in order to solve the problem we must take a list of numbers and divide them up into two sets such that the sum of the numbers in each set is the same, and is exactly half of the total sum of the numbers available.
This is the Partition problem (http://en.wikipedia.org/wiki/Partition_problem), which is NP-complete, so if you could solve your problem as I have described it, you could solve a problem known to be hard.
(However, the Wikipedia entry does say that this is known as "the easiest hard problem", so even if I have described it correctly, there may be solutions that work well in practice.)
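To illustrate the "easiest hard problem" remark: Partition admits a pseudo-polynomial dynamic program that is fast whenever the numbers involved are not astronomically large. A minimal sketch:

    # Pseudo-polynomial DP for Partition: track all reachable subset sums.
    def can_partition(nums):
        total = sum(nums)
        if total % 2:
            return False                 # an odd total can never split evenly
        target = total // 2
        reachable = {0}                  # subset sums achievable so far
        for x in nums:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    # can_partition([3, 1, 1, 2, 2, 1]) -> True ({3, 2} vs {1, 1, 2, 1})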
For example, let's say we have a bounded 2D grid which we want to cover with square tiles of equal size. We have an unlimited number of tiles, which fall into a defined number of types. Each tile type specifies the letters printed on that tile. A letter is printed next to each edge, and only tiles with matching letters on their adjacent edges can be placed next to one another on the grid. Tiles may be rotated.
Given the size of the grid and tile type definitions, what is the fastest method of arranging the tiles such that the above constraint is met and the entire/majority of the grid is covered? Note that my use case is for large grids (~20 in each dimension) and medium-large number of solutions (unlike Eternity II).
So far, I've tried DFS starting in the center, picking the locations around the filled area that allow the fewest possibilities, and backtracking when no progress can be made. This only works for simple scenarios with one or two types. Any more, and too much backtracking ensues.
Here's a trivial example, showing the input and the final output.
This is a hard puzzle.
Eternity II was a puzzle of this form, played on a 16 by 16 square grid.
Despite a 2 million dollar prize, no one found a solution in several years.
The paper "Jigsaw Puzzles, Edge Matching, and Polyomino Packing: Connections and Complexity" by Erik D. Demaine and Martin L. Demaine shows that this type of puzzle is NP-complete.
Given a problem of this sort with a square grid, I would try a brute-force list of all possible columns, and then a dynamic programming solution across the columns. This will find the number of solutions and, with a little extra work, can be used to generate all of them.
However, if your columns are n long and there are m letters with k tile types, then the brute-force list of all possible columns has up to m^n possible right edges, with up to (4k)^n combinations/rotations of tiles needed to generate them. The transition from the right edge of one column to the right edge of the next then potentially has up to m^(2n) possibilities. Those numbers are usually not worst-case scenarios, but the sizes of those data structures give an upper bound on the feasibility of this technique.
Of course, if those data structures get too large to be feasible, then the recursive algorithm that you are describing will be too slow to enumerate all solutions. But if there are enough solutions, it still might run acceptably fast even where this approach is infeasible.
Of course if you have more rows than columns, you'll want to reverse the role of rows and columns in this technique.
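For concreteness, here is a toy sketch of that column-based DP for counting tilings, under simplifying assumptions: "matching" means equal letters, the grid border is unconstrained, and a tile is a (top, right, bottom, left) tuple of letters whose rotations are its cyclic shifts. It enumerates columns by brute force exactly as described, so it only scales to small inputs:

    # Count tilings of a rows x cols grid by DP over column right-edge profiles.
    from itertools import product
    from collections import defaultdict

    def rotations(tile):
        t = tuple(tile)
        return {t[i:] + t[:i] for i in range(4)}

    def count_tilings(tiles, rows, cols):
        oriented = {r for t in tiles for r in rotations(t)}
        columns = []                       # (left profile, right profile)
        for col in product(oriented, repeat=rows):
            # vertical consistency: bottom letter matches the next tile's top
            if all(col[i][2] == col[i + 1][0] for i in range(rows - 1)):
                columns.append((tuple(t[3] for t in col),
                                tuple(t[1] for t in col)))
        counts = defaultdict(int)          # right-edge profile -> number of ways
        for left, right in columns:
            counts[right] += 1
        for _ in range(cols - 1):
            nxt = defaultdict(int)
            for left, right in columns:
                nxt[right] += counts.get(left, 0)
            counts = nxt
        return sum(counts.values())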
I have been tasked with figuring out a state space for a problem based on the area of a rectangle. It seems that I have made my state space far too large and need some feedback.
So far I have an area that has a value of 600 on the y axis and 300 on the x axis. I determined the number of points to be
(600 x 300)! or 180,000!
Therefore my robot would need to inspect this many potential spaces, before I apply an algorithm.
This number seems quite high, and if it is right it would make my problem unsolvable before I die, especially if I implement the algorithm incorrectly. Any help would be greatly appreciated, especially if my math is off in determining the number of points.
EDIT
I was under the impression that to see how many pairs of points there are, you would have to take the Cartesian product of the total available points, which in turn would be (600x300)!. If this is incorrect please let me know.
First of all, the number of "points" (as defined in mathematics, the only relevant definition) in a rectangle of any non-zero area is infinite. Why? Because a point does not necessarily have integer coordinates: there can be a point at (0,0), (0,0.1), (0,0.001), (0,0.0001) and so on. I think what you mean by points in your question is points with integer coordinates (i.e. lattice points), or alternately "cells" in a rectangular grid (like cells on a chess board). Please let me know if I misunderstood your question.
There are 600 rows and 300 columns. This means there are 600 * 300 = 180,000 different cells. It follows that there are nCr(180,000, 2) = 16,199,910,000 unique pairs in the grid. I am assuming you consider the pairs ((1,1),(2,2)) and ((2,2),(1,1)) equivalent. Otherwise, there are 180,000 * 180,000 = 32,400,000,000 pairs.
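For reference, the counts are easy to verify (Python, just for the arithmetic):

    from math import comb

    cells = 600 * 300          # 180,000 cells
    print(comb(cells, 2))      # 16,199,910,000 unordered pairs
    print(cells * cells)       # 32,400,000,000 ordered pairs, self-pairs included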