Algorithm improvement for tidy up - algorithm

Suppose you have a rectangular surface and some round/rectangular objects of different sizes.
I want to write an algorithm that will tidy up those objects on the surface.
I have to fit as many objects as possible onto the same surface.
I think I will have to place the biggest objects first and then the smallest.
Do you know if there is a specific algorithm to optimize this?
It is a kind of Tetris resolution, but I can choose the order of the pieces.
Thanks

Since you want to maximise the number of objects you place, a greedy algorithm might work well in most cases:
Sort the boxes according to length (ascending order).
Start from the smallest box:
for every box:
    try to place it in an already occupied row;
    if not possible, place it in a new row;
    if not possible to place it at all, break (since anything bigger would not fit either).
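Here is a rough Python sketch of that greedy, under some assumptions of mine: each object is reduced to its (width, height) bounding box (a circle to the square around its diameter), and a row's height is fixed by the first object placed in it. The function name and example numbers are made up:

    def greedy_rows(objects, surface_w, surface_h):
        # Row-based greedy sketch. Objects are sorted ascending and packed
        # into rows (shelves) left to right, opening a new row when none of
        # the existing ones has room. Returns the objects that were placed.
        rows = []            # per row: [used_width, row_height]
        used_height = 0
        placed = []
        for w, h in sorted(objects):
            for row in rows:
                if row[0] + w <= surface_w and h <= row[1]:
                    row[0] += w             # fits in an already occupied row
                    placed.append((w, h))
                    break
            else:
                if used_height + h <= surface_h and w <= surface_w:
                    rows.append([w, h])     # open a new row for it
                    used_height += h
                    placed.append((w, h))
                else:
                    break                   # everything later is at least as big
        return placed

    print(greedy_rows([(2, 2), (3, 1), (1, 1), (4, 3)], 5, 4))
    # [(1, 1), (2, 2), (3, 1)] -- the 4x3 object no longer fits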
If you are considering height as well, this is called the packing problem.
You can check related algorithms here.

This is called the Knapsack problem.
EDIT:
It's actually a subtype of the Knapsack problem: the Bin Packing problem.

Related

Number of ways to fill an n*m matrix with L-shaped three-piece tiles using recursive programming

I'm looking for an approach to this problem where you have to fill an n*m (n, m <= 8) matrix with L-shaped three-piece tiles. The tiles can't be placed on top of each other in any way.
I'm not necessarily looking for the whole answer, just a hint on how to approach it.
Source: https://cses.fi/dt/task/336
I solved this graph problem using a recursive backtracking algorithm plus memoization. My solution is not particularly fast and takes a minute or so to solve a 9x12 grid, but it should be sufficient for the 8x8 grid in your question (it takes about a second on a 9x9). There are no solutions for 7x7 and 8x8 grids because they are not divisible by the triomino size, 3.
The strategy is to start in a corner of the grid and move through it cell by cell, trying to place each block whenever it is legal to do so and thereby exploring the solution space methodically.
If placement of a block is legal but creates an unfillable air pocket in the grid, remove the block; we know ahead of time there will be no solutions to this state and can abandon exploring its children. For example, on a 3x6 grid,
abb.c.
aabcc.
......
is hopelessly unsolvable.
Once a state is reached where all cells have been filled, we can report a count of 1 solution to its parent state. Here's an example of a solved 3x6 grid:
aaccee
abcdef
bbddff
If every possible block has been placed at a position, backtrack, reporting the solution count to parent states along the way and exploring any states that are as yet unexplored.
In terms of memoization, call any two grid states equivalent if there is some arrangement of tiles such that they cover the exact same coordinates. For example:
aacc..
abdc..
bbdd..
and
aacc..
bacd..
bbdd..
are considered to be equivalent even though the two states were reached through different tile placements. Both states have the same substructure, so counting the number of solutions to one state is enough; add this to the memo, and if we reach the state again, we can simply report the number of solutions from the memo rather than re-computing everything.
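Here's a minimal Python sketch of this approach; the bitmask grid encoding and names are my own illustration rather than the answerer's actual program, and the unfillable-pocket pruning is omitted:

    from functools import lru_cache

    def count_tilings(n, m):
        # Count tilings of an n x m grid with L-shaped triominoes by
        # backtracking over the first empty cell, memoised on the set of
        # occupied cells (encoded as a bitmask). The "unfillable pocket"
        # pruning described above is omitted here.
        if (n * m) % 3 != 0:
            return 0          # grid area not divisible by the triomino size
        FULL = (1 << (n * m)) - 1

        # The four orientations of an L-triomino, as offsets from the
        # placement cell, chosen so that cell is always the first cell of
        # the tile in row-major order.
        shapes = [
            ((0, 0), (1, 0), (1, 1)),
            ((0, 0), (0, 1), (1, 0)),
            ((0, 0), (0, 1), (1, 1)),
            ((0, 0), (1, 0), (1, -1)),
        ]

        @lru_cache(maxsize=None)
        def solve(occupied):
            if occupied == FULL:
                return 1      # every cell filled: one complete tiling
            idx = 0           # find the first empty cell in row-major order
            while occupied >> idx & 1:
                idx += 1
            r, c = divmod(idx, m)
            total = 0
            for shape in shapes:
                mask = 0
                for dr, dc in shape:
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < n and 0 <= cc < m):
                        break     # tile sticks out of the grid
                    mask |= 1 << (rr * m + cc)
                else:
                    if occupied & mask == 0:
                        total += solve(occupied | mask)
            return total

        return solve(0)

    print(count_tilings(3, 6))    # 8, matching the count reported below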
My program reports 8 solutions on a 3x6 grid.
As I mentioned, my Python solution isn't fast or optimized; it's possible to solve a 9x12 grid in less than a second. Large optimizations aside, there are basic things I neglected in my implementation. For example, I copied the entire grid for each tile placement; adding/removing tiles on a single grid would have been an easy improvement. I also did not check for unsolvable gaps in the grid.
After you solve the problem, be sure to hunt around for some of the mind-blowing solutions people have come up with. I don't want to give away much more than this!
There's a trick that's applicable to a lot of recursive enumeration problems. In whichever way you like, define a deterministic procedure for removing one piece from a nonempty partial solution. Then the recursive enumeration works in the opposite direction, building the possible solutions from the empty solution, but each time it places a piece, that same piece has to be the one that would be removed by the deterministic procedure.
If you verify that the board size is divisible by three before beginning the enumeration, you shouldn't have any problem with the time limit.
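For what it's worth, the backtracking sketch above follows this pattern implicitly: the deterministic removal rule is "remove the tile whose first cell in row-major order comes last", so the forward enumeration only ever places a tile covering the first empty cell, and each tiling is built in exactly one order.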

Optimal placement of objects wrt pairwise similarity weights

OK, this is an abstract algorithmic challenge, and it will remain abstract, since where I am going to use it is top secret.
Suppose we have a set of objects O = {o_1, ..., o_N} and a symmetric similarity matrix S where s_ij is the pairwise correlation of objects o_i and o_j.
Assume also that we have an one-dimensional space with discrete positions where objects may be put (like having N boxes in a row or chairs for people).
Given a certain placement, we may measure the cost of moving from the position of one object to that of another as the number of boxes we need to pass until we reach our target, multiplied by the two objects' pairwise similarity. Moving from a position to the box right after or right before it has zero cost.
Imagine an example where for three objects we have the following similarity matrix:
    | 1.0  0.5  0.8 |
S = | 0.5  1.0  0.1 |
    | 0.8  0.1  1.0 |
Then the best ordering of objects in the three boxes is obviously:
[o_3] [o_1] [o_2]
The cost of this ordering is the sum of costs (counting boxes) for moving from one object to all others. Here the only cost incurred is for the distance between o_2 and o_3, equal to 1 box * 0.1 sim = 0.1, the same as for the mirror ordering:
[o_2] [o_1] [o_3]
On the other hand:
[o_1] [o_2] [o_3]
would have cost = cost(o_1-->o_3) = 1box * 0.8sim = 0.8.
The target is to determine a placement of the N objects in the available positions in a way that we minimize the above mentioned overall cost for all possible pairs of objects!
An analogue is to imagine a table with chairs side by side in a single row (like the boxes), where you need to seat N people. These people have some relations, say, how probable it is that one of them will want to speak to another. Speaking to someone means standing up, passing by a number of chairs, and talking to the person there. When two people sit on adjacent chairs, they don't need to move in order to talk to each other.
So how can we seat these people so that the overall distance-cost between every two of them is minimized? This means that over the course of the night, the total distance walked by the guests is close to minimal.
Greedy search is... ok forget it!
I am interested in hearing if there is a standard formulation of such problem for which I could find some literature, and also different searching approaches (e.g. dynamic programming, tabu search, simulated annealing etc from combinatorial optimization field).
Looking forward to hearing your ideas.
PS. My question has something in common with this thread: Algorithm for ordering a list of Objects, but I think it is better posed as a problem here, and is probably slightly different.
That sounds like an instance of the Quadratic Assignment Problem (QAP). The special feature here is that the locations are placed on one line only, but I don't think this will make it easier to solve. The QAP in general is NP-hard, so unless I misinterpreted your problem, you can't find an optimal polynomial-time algorithm without proving P=NP at the same time.
If the instances are small, you can use exact methods such as branch and bound. You can also use tabu search or other metaheuristics if the problem is more difficult. We have an implementation of the QAP and some metaheuristics in HeuristicLab. You can configure the problem in the GUI; just paste the similarity and the distance matrix into the appropriate parameters. Try starting with Robust Taboo Search. It's an older, but still quite well-performing algorithm. Taillard also has the C code for it on his website if you want to implement it yourself. Our implementation is based on that code.
There have been a lot of publications on the QAP. More modern algorithms combine genetic search with local search heuristics (e.g. Genetic Local Search from Stützle, IIRC).
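To make the objective concrete, here is a small Python sketch (the names are mine) that scores an ordering under the cost defined in the question and brute-forces the optimum of the 3-object example, reproducing the 0.1 / 0.8 figures above:

    from itertools import permutations

    def cost(order, S):
        # order[p] is the object placed at position p. For every pair, the
        # cost is (number of boxes passed) * similarity; adjacent positions
        # pass zero boxes, hence the (distance - 1) factor.
        total = 0.0
        for a in range(len(order)):
            for b in range(a + 1, len(order)):
                total += (b - a - 1) * S[order[a]][order[b]]
        return total

    S = [[1.0, 0.5, 0.8],
         [0.5, 1.0, 0.1],
         [0.8, 0.1, 1.0]]

    # Exact optimum by brute force -- only feasible for tiny N, consistent
    # with the QAP being NP-hard in general.
    best = min(permutations(range(len(S))), key=lambda p: cost(p, S))
    print(best, cost(best, S))   # (1, 0, 2) i.e. [o_2][o_1][o_3], cost 0.1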
Here's a variation of the already posted method. I don't think this one is optimal, but it may be a start.
Create a list of all the pairs in descending cost order.
While the list is not empty:
    Pop the head item from the list.
    If neither element is in an existing group, create a new group containing the pair.
    If one element is in an existing group, add the other element to whichever end puts it closer to the group member.
    If both elements are in existing groups, combine the groups so as to minimize the distance between the pair.
Group combining may require reversing the order of a group, and the data structure should be designed to support that.
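A rough Python sketch of this greedy merge, with data-structure choices (plain lists, reversal by slicing) that are mine:

    def greedy_order(S):
        # Process pairs from most to least similar, building contiguous
        # groups so that similar objects end up adjacent.
        n = len(S)
        pairs = sorted(((S[i][j], i, j) for i in range(n)
                        for j in range(i + 1, n)), reverse=True)
        group_of = {}                      # object -> its group (shared list)
        groups = []
        for _, i, j in pairs:
            gi, gj = group_of.get(i), group_of.get(j)
            if gi is None and gj is None:  # neither grouped: new group
                g = [i, j]
                groups.append(g)
                group_of[i] = group_of[j] = g
            elif gi is None or gj is None: # one grouped: append at nearer end
                g, new = (gj, i) if gi is None else (gi, j)
                anchor = j if gi is None else i
                k = g.index(anchor)
                if k + 1 < len(g) - k:
                    g.insert(0, new)
                else:
                    g.append(new)
                group_of[new] = g
            elif gi is not gj:             # both grouped: merge the groups,
                best = None                # trying both orientations of each
                for a in (gi, gi[::-1]):
                    for b in (gj, gj[::-1]):
                        merged = a + b
                        d = abs(merged.index(i) - merged.index(j))
                        if best is None or d < best[0]:
                            best = (d, merged)
                groups.remove(gi)
                groups.remove(gj)
                groups.append(best[1])
                for x in best[1]:
                    group_of[x] = best[1]
        return [x for g in groups for x in g]

    print(greedy_order([[1.0, 0.5, 0.8],
                        [0.5, 1.0, 0.1],
                        [0.8, 0.1, 1.0]]))  # [1, 0, 2], i.e. [o_2][o_1][o_3]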
Let me help the thread (my own) with a simplistic ordering approach.
1. Order the upper half of the similarity matrix.
2. Start with the pair of objects having the highest similarity weight and place them in the center positions.
3. The next object may be put on the left or the right side of them. Each time, select the object that, when put to the left or the right, has the highest cost with respect to the pre-placed objects. Repeat Step 3 until all objects are placed.
The selection in Step 3 is made because if you left this object for later, its cost would again be the greatest of the remaining ones, and even larger (being farther from the pre-placed objects). So the costly placements should be done as early as possible.
This is too simple and of course does not discover a good solution.
Another approach is to
1. start with a complete ordering generated somehow (random or from another algorithm)
2. try to improve it using "swaps" of object pairs.
I believe local minima would be a huge deterrent.
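A sketch of the swap-improvement idea, reusing the cost() helper from the brute-force sketch above:

    import random

    def swap_improve(order, S, iters=10000):
        # Pairwise-swap hill climbing from a given starting ordering. As
        # noted, plain hill climbing like this gets stuck in local minima;
        # simulated annealing or tabu search would accept some bad swaps.
        order = list(order)
        best = cost(order, S)
        for _ in range(iters):
            a, b = random.sample(range(len(order)), 2)
            order[a], order[b] = order[b], order[a]
            c = cost(order, S)
            if c < best:
                best = c                                 # keep the swap
            else:
                order[a], order[b] = order[b], order[a]  # undo it
        return order, best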

Finding the optimal 3D box sizes for a group of 3D rectangular items

When I say box I am talking about shipping boxes.
I have a number of randomly sized, small items that I need to pack into as few boxes as possible.
I need to know what box sizes are optimal.
All items are rectangular prisms.
It's easy to exclude a box size for an item which is too large to fit.
I know the box sizes (they are the available box sizes which I have in-stock)
Items can be positioned horizontally or vertically, not diagonal.
As many boxes as required can be used. The goal is to use as few boxes as possible.
Multiple box sizes may be used to optimally fit the varying-sized items.
What algorithm exists that allows me to calculate the box sizes I need to use for optimal space usage, i.e. to fit the items into as few boxes as possible?
The available box sizes come from what I have available, in-stock. You can create a finite number of made up box sizes for example purposes.
This is a generalization of the Bin packing problem, meaning it is NP-Hard.
To see this, imagine all bins and packages have the same width and height, and additionally all bins (but not packages) have the same length. Then it is a one-dimensional problem: we have bins of size V, and packages of size a1, a2, ..., an. This simplified case is exactly the Bin-packing problem. Thus, a fast solution to your problem gives us a fast solution to bin-packing, so your problem is at least as hard; and since bin-packing is NP-Hard, so is your problem.
There are some approximation algorithms available though; for example, it's easy to show that the simple first-fit algorithm (put every item in the first bin that it fits into) will never do worse than 2x the optimal solution.
The similar "First Fit Decreasing" algorithm (sort the items in descending order, then put every item in the first bin it fits into) is even better, guaranteeing to be within about 25% of the optimal solution. There is also another, slightly more complicated algorithm called MFFD which guarantees to be within about 20%.
And of course, with only 7 boxes, you could always just brute-force the solution. This will require about 7^n steps (where n is the number of items), so this solution is infeasible with more than a dozen or so items.
You have described a variation of the knapsack problem. Check out the wikipedia article for more detail about approaches to the problem than could be given here.

Retrieve set of rectangles containing a specified point

I can't figure out how to implement this in a performant way, so I decided to ask you guys.
I have a list of rectangles (actually at the moment only squares, but I might have to migrate to rectangles later, so let's stick with rectangles and keep it a bit more general) in a 2-dimensional space. Each rectangle is specified by two points; rectangles can overlap, and I don't care all too much about setup time, because the rectangles are basically static and there's some room to precalculate any setup stuff (building trees, sorting, precalculating additional vectors, etc.). Oh, and I am developing in JavaScript, if that is of any concern.
To my actual question: given a point, how do I get a set of all rectangles that include that point?
Linear approaches do not perform well enough, so I'm looking for something that performs better than O(n). I've read some material, for example on Bounding Volume Hierarchies and similar things, but whatever I tried, the fact that rectangles can overlap (and I actually want to get all of them if the point lies within multiple rectangles) always seemed to get in my way.
Are there any suggestions? Have I missed something obvious? Are BVHs even applicable to possibly overlapping bounds? If so, how do I build such a possibly overlapping tree? If not, what else could I use? It is of no concern to me whether borders count as inside, outside, or undetermined.
If someone could come up with anything helpful, like a link or a rant on how stupid I am to use a BVH and not Some_Super_Cool_Structure_Perfectly_Suited_For_My_Problem, I'd really appreciate it!
Edit: OK, I played around a bit with R-trees, and they are exactly what I was looking for. In fact, I am currently using the RTree implementation http://stackulator.com/rtree/ as suggested by endy_c. It performs really well and fulfills my requirements entirely. Thanks a lot for your support, guys!
You could look at R-Trees.
There is Java code available, and there's also a wiki, but I can only post one link ;-)
You can divide the space into a grid, and for each grid cell keep a list of the rectangles (or rectangle identifiers) that exist at least partially in that cell. To answer a query, search for rectangles only in the cell containing the query point. The complexity should be O(sqrt(n)).
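A minimal sketch of the grid-bucket idea, in Python for brevity (the structure ports directly to JavaScript; the cell size and names are my own choices):

    from collections import defaultdict

    class RectGrid:
        # Uniform-grid index: each cell lists the rectangles overlapping
        # it, so a point query only inspects its own cell.
        def __init__(self, rects, cell):
            # rects: list of (x1, y1, x2, y2) with x1 <= x2, y1 <= y2
            self.cell = cell
            self.rects = rects
            self.buckets = defaultdict(list)
            for idx, (x1, y1, x2, y2) in enumerate(rects):
                for gx in range(int(x1 // cell), int(x2 // cell) + 1):
                    for gy in range(int(y1 // cell), int(y2 // cell) + 1):
                        self.buckets[(gx, gy)].append(idx)

        def query(self, px, py):
            key = (int(px // self.cell), int(py // self.cell))
            return [i for i in self.buckets[key]
                    if self.rects[i][0] <= px <= self.rects[i][2]
                    and self.rects[i][1] <= py <= self.rects[i][3]]

    grid = RectGrid([(0, 0, 4, 4), (2, 2, 9, 9), (5, 5, 6, 6)], cell=5)
    print(grid.query(3, 3))  # [0, 1]: the point lies in the first two rects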
Another approach is to maintain four sorted arrays of the x1, y1, x2, and y2 values, and binary-search your point within those four arrays. The result of each search is a set of rectangle candidates, and the final result is the intersection of those four sets. Depending on how the set intersection is implemented, this should be more efficient than O(n).
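And a sketch of the four-sorted-arrays variant using binary search (Python's bisect); as the answer notes, the intersection step is the part whose cost varies:

    import bisect

    def build(rects):
        # One sorted array of (value, rectangle-index) pairs per edge
        # coordinate: x1, y1, x2, y2.
        return [sorted((r[k], i) for i, r in enumerate(rects))
                for k in range(4)]

    def query(arrays, px, py):
        x1s, y1s, x2s, y2s = arrays
        # rectangles whose min-x / min-y edge is at or before the point
        lo_x = {i for _, i in x1s[:bisect.bisect_right(x1s, (px, float('inf')))]}
        lo_y = {i for _, i in y1s[:bisect.bisect_right(y1s, (py, float('inf')))]}
        # rectangles whose max-x / max-y edge is at or after the point
        # (-1 works as a sentinel because real indices are >= 0)
        hi_x = {i for _, i in x2s[bisect.bisect_left(x2s, (px, -1)):]}
        hi_y = {i for _, i in y2s[bisect.bisect_left(y2s, (py, -1)):]}
        return lo_x & hi_x & lo_y & hi_y

    rects = [(0, 0, 4, 4), (2, 2, 9, 9), (5, 5, 6, 6)]
    print(query(build(rects), 3, 3))   # {0, 1}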

Writing Simulated Annealing algorithm for 0-1 knapsack in C#

I'm in the process of learning about simulated annealing algorithms and have a few questions on how I would modify an example algorithm to solve a 0-1 knapsack problem.
I found this great code on CP:
http://www.codeproject.com/KB/recipes/simulatedAnnealingTSP.aspx
I'm pretty sure I understand how it all works now (except the whole Boltzmann condition, which as far as I'm concerned is black magic, though I understand it's about escaping local optima, and apparently it does exactly that). I'd like to redesign this to solve a 0-1 knapsack-"ish" problem. Basically I'm putting a selection of 5,000 objects into 10 sacks and need to optimize for the least unused space. The actual "score" I assign to a solution is a bit more complex, but not related to the algorithm.
This seems easy enough. The Anneal() function would be basically the same. I'd have to implement the GetNextArrangement() function to fit my needs. In the TSP problem, he just swaps two random nodes along the path (i.e., he makes a very small change each iteration).
For my problem, on the first iteration, I'd pick 10 random objects and look at the leftover space. For the next iteration, would I just pick 10 new random objects? Or am I better off only swapping out a few of the objects, like half of them, or even just one? Or maybe the number of objects I swap out should be relative to the temperature? Any of these seem doable to me; I'm just wondering if someone has advice on the best approach (though I can mess around with improvements once I have the code working).
Thanks!
Mike
With simulated annealing, you want to make neighbour states as close in energy as possible. If the neighbours have significantly greater energy, then the algorithm will just never jump to them without a very high temperature -- high enough that it will never make progress. On the other hand, if you can come up with heuristics that reach lower-energy states, then exploit them.
For the TSP, this means swapping adjacent cities. For your problem, I'd suggest a conditional neighbour selection algorithm as follows:
If there are objects that fit in the empty space, always put the biggest one in.
If no objects fit in the empty space, pick an object to swap out, but prefer to swap out objects of similar size.
That is, give objects a selection probability that is inversely related to the difference in their sizes. You might want to use something like roulette selection here, with the slice size being something like 1 / (size1 - size2)^2.
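A tiny sketch of that roulette selection (the +1 in the denominator is my own guard against division by zero for equal sizes, not part of the suggestion above):

    import random

    def pick_swap_partner(current_size, candidate_sizes):
        # Roulette-wheel selection: weight each candidate inversely to the
        # squared size difference, so similar-sized swaps are most likely.
        weights = [1.0 / ((current_size - s) ** 2 + 1) for s in candidate_sizes]
        return random.choices(candidate_sizes, weights=weights, k=1)[0]

    print(pick_swap_partner(10, [3, 9, 10, 14]))   # usually prints 10 or 9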
Ah, I think I found my answer on Wikipedia. It suggests moving to a "neighbour" state, which usually implies changing as little as possible (like swapping two cities in a TSP problem).
From: http://en.wikipedia.org/wiki/Simulated_annealing
"The neighbours of a state are new states of the problem that are produced after altering the given state in some particular way. For example, in the traveling salesman problem, each state is typically defined as a particular permutation of the cities to be visited. The neighbours of some particular permutation are the permutations that are produced for example by interchanging a pair of adjacent cities. The action taken to alter the solution in order to find neighbouring solutions is called "move" and different "moves" give different neighbours. These moves usually result in minimal alterations of the solution, as the previous example depicts, in order to help an algorithm to optimize the solution to the maximum extent and also to retain the already optimum parts of the solution and affect only the suboptimum parts. In the previous example, the parts of the solution are the parts of the tour."
So I believe my GetNextArrangement function would want to swap a random item in the set with a random item not yet used.
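As a Python illustration of that move (the CodeProject original is C#, so this only shows the idea):

    import random

    def get_next_arrangement(packed, unused):
        # Neighbour move: swap one random packed item with one random
        # unused item, so each step changes the solution as little as
        # possible.
        packed, unused = list(packed), list(unused)
        i = random.randrange(len(packed))
        j = random.randrange(len(unused))
        packed[i], unused[j] = unused[j], packed[i]
        return packed, unused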
