Bin Packing with Overflow - algorithm

I have N bins (homogeneous or heterogeneous in size, depending on the variant of the task) in which I am trying to fit M items (always heterogeneous in size). Items can be larger than a single bin and are allowed to overflow to the next bin(s) (but no wrap around from bin N-1 to 0).
The more bins an item spans, the higher its allocation cost.
I want to minimize the overall allocation cost of fitting all M into N bins.
Everything is static. It is guaranteed that all M fit in N.
I think I am looking for a variant of the Bin Packing algorithm. But any hints towards an existing solution/approximation/good heuristic are appreciated.
My current approach looks like this:
sort items by size
for i in items:
    for b in bins:
        try allocation of i starting at b
        if allocation valid:
            record cost
    do allocation of i in b with lowest recorded cost
    update all b fill level
So basically a greedy-by-size approach with O(M×N×C) runtime, where C ≈ "longest allocation across bins" (a single try-allocation takes C time).
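For concreteness, here is a rough Python sketch of that greedy pass (my own rendering, not the poster's code). It assumes a simple overflow model in which an item placed starting at bin b consumes that bin's free space and spills into b+1, b+2, ... as needed, and that the allocation cost is simply the number of bins touched:

def greedy_allocate(bin_caps, items):
    fill = [0] * len(bin_caps)                     # current fill level per bin
    placements = {}                                # item index -> (start bin, cost)
    # largest items first
    for i in sorted(range(len(items)), key=lambda k: items[k], reverse=True):
        best = None                                # (cost, start bin, per-bin amounts)
        for b in range(len(bin_caps)):
            remaining, used, cur = items[i], [], b
            while remaining > 0 and cur < len(bin_caps):
                take = min(remaining, bin_caps[cur] - fill[cur])
                if take > 0:
                    used.append((cur, take))
                remaining -= take
                cur += 1
            if remaining == 0:
                cost = len(used)                   # bins spanned ~ allocation cost
                if best is None or cost < best[0]:
                    best = (cost, b, used)
        if best is None:
            raise ValueError("item %d does not fit" % i)
        for cur, take in best[2]:
            fill[cur] += take
        placements[i] = (best[1], best[0])
    return placements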

I would suggest dynamic programming for the exact solution. To make it easier to visualize, assume each bin's size is the length of an array of cells. Since they are contiguous, you can visualize them as a contiguous set of arrays, e.g.
|xxx|xx|xxx|xxxx|
The delimiters of the arrays are the | characters and the positions in the arrays are given by x. So this has 4 arrays: arr_0 of size 3, arr_1 of size 2, and so on.
If you place an item at position i, it will occupy positions i to i+(h-1), where h is the size of the item. E.g. if items are of size h=5, and you place an item at position 1, you would get
|xbb|bb|bxx|xxxx|
One trick to use: if we introduce the additional constraint that the items need to be inserted "in order", i.e. the first inserted item is the leftmost inserted item, the second is the second-leftmost inserted item, etc., then this problem has the same optimal solution as the original one (since we can just take the optimal solution and insert it in order).
Define pos(k, i) as the optimal position at which to insert the k-th object, given that the (k-1)-th object ended at cell i-1, and define opt(k, i) as the optimal extra cost of inserting objects k…N-1 under that same condition (throughout, h denotes the size of the object currently being inserted).
Then pos(N-1, i) can be calculated easily by running through the cells and computing the extra cost of each candidate position (even more easily by noting that at the optimal position at least one border of the item should line up with an array border, but to make the analysis easier we will evaluate the extra cost at each of the positions i…NumCells-h). opt(N-1, i) equals this minimal extra cost.
Similarly,
pos(N-2,i) = argmin_x extra_cost(x,i, pos(N-1,x+h)) + opt(N-1,x+h)
where extra_cost(x, i, j) is the extra cost of inserting at x, given that the last inserted object ended at i-1 and the next inserted object will start at j.
And by substituting x = pos(N-2, i):
opt(N-2,i) = extra_cost(pos(N-2,i), i, pos(N-1, pos(N-2,i)+h)) + opt(N-1, pos(N-2,i)+h)
By induction, for all 0 <= W <= N-2:
pos(W,i) = argmin_x extra_cost(x,i, pos(W+1,x+h)) + opt(W+1,x+h)
And
opt(W,i) = extra_cost(pos(W,i), i, pos(W+1, pos(W,i)+h)) + opt(W+1, pos(W,i)+h)
And your final result is given by the minimum over i of opt(0, i).
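To make the recurrences concrete, here is a memoized Python sketch. It is only an illustration: it assumes the items are inserted in a fixed left-to-right order and uses a simple cost model in which a placement costs the number of bins it spans (the original question leaves the exact cost function open):

from functools import lru_cache

def dp_allocate(bin_sizes, item_sizes):
    total = sum(bin_sizes)
    owner = []                                  # owner[c] = index of the bin containing cell c
    for b, size in enumerate(bin_sizes):
        owner.extend([b] * size)

    def span_cost(x, h):
        # assumed cost model: number of bins touched by an item occupying cells x .. x+h-1
        return owner[x + h - 1] - owner[x] + 1

    @lru_cache(maxsize=None)
    def opt(k, i):
        # minimal cost of placing the remaining items k, k+1, ..., given the previous item ended at cell i-1
        if k == len(item_sizes):
            return 0
        h = item_sizes[k]
        return min((span_cost(x, h) + opt(k + 1, x + h)
                    for x in range(i, total - h + 1)),
                   default=float("inf"))

    return opt(0, 0)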

Related

Algorithm challenge to merge sets

Given n sets of numbers, where each set contains some numbers from 1 to 100, how do I select sets to merge into the longest set under a special rule: only two non-overlapping sets can merge. [1,2,3] can merge with [4,5] but not with [3,4]. What would be an efficient algorithm to merge into the longest possible set?
My first attempt is to form an n by n matrix. Each row/column represents a set. Entry (i,j) equals 1 if sets i and j overlap, and entry (i,i) stores the length of set i. Then the question becomes: can we perform row and column operations at the same time to create a diagonal sub-matrix in the top-left corner whose trace is as large as possible?
However, I got stuck on how to efficiently perform row and column operations to form such a diagonal sub-matrix in the top-left corner.
As already pointed out in the comments (maximum coverage problem), you have an NP-hard problem. Luckily, MATLAB offers solvers for integer linear programming.
So we try to reduce the problem to the form:
min f'*x subject to A*x <= b, 0 <= x
There are n sets, we can encode a set as a vector of 0s and 1s. For example (1,1,1,0,0,...) would represent {1,2,3} and (0,0,1,1,0,0...) - {3,4}.
Every column of A represents a set. A(i,j)=1 means that the i-th element is in the j-th set, A(i,j)=0 means that the i-th element is not in the j-th set.
Now, x represents the sets we select: if x_j = 1 then set j is selected; if x_j = 0, it is not selected.
As every element may be in at most one selected set, we choose b = (1, 1, 1, ..., 1): if we took two sets which both contain the i-th element, then the i-th element of (Ax) would be at least 2.
The only question left is what is f? We try to maximize the number of elements in the union, so we choose f_j=-|set_j| (minus owing to min<->max conversion), with |set_j| - number of elements in the j-th set.
Putting it all in matlab we get:
f=-sum(A)
xopt=intlinprog(f.',1:n,A,ones(m,1),[],[],zeros(n,1),ones(n,1))
f.' - cost function as column
1:n - all n elements of x are integers
A - encodes n sets
ones(m,1) - b=(1,1,1...), there are m=100 elements
[],[] - there are no constraints of the form Aeq*x=beq
zeros(n,1) - 0<=x must hold
ones(n,1) - x<=1 already follows from the other constraints, but maybe it will help the solver a little bit
You can represent sets as bit fields. A bitwise and operation yielding zero would indicate non-overlapping sets. Depending on the width of the underlying data type, you may need to perform multiple and operations. For example, with a 64 bit machine word size, I'd need two words to cover 1 to 100 as a bit field.
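As a tiny illustration of that test (Python integers are arbitrary-precision, so one value covers 1 to 100; in C you would split this across two 64-bit words as described):

def to_mask(s):
    m = 0
    for e in s:
        m |= 1 << e
    return m

a, b, c = to_mask({1, 2, 3}), to_mask({4, 5}), to_mask({3, 4})
print(a & b == 0)   # True: disjoint, so the two sets may be merged
print(a & c == 0)   # False: they share element 3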

A new Bin-packing?

I'm looking into a kind of bin-packing problem, but not quite the same.
The problem asks to put n items into minimum number of bins without total weight exceeding capacity of bins. (classical definition)
The difference is:
Each item has a weight and bound, and the capacity of the bin is dynamically determined by the minimum bound of items in that bin.
E.g.,
I have four items A[11,12], B[1,10], C[3,4], D[20,22] ([weight,bound]).
Now, if I put item A into a bin, call it b1, then the capacity of b1 becomes 12. Now I try to put item B into b1, but this fails because the total weight would be 11+1 = 12 while the capacity of b1 would become 10, which is smaller than the total weight. So, B is put into bin b2, whose capacity becomes 10. Now, put item C into b2; this works because the total weight is 1+3 = 4 and the capacity of b2 becomes 4.
I don't know whether this question has been solved in some areas with some name. Or it is a variant of bin-packing that has been discussed somewhere.
I don't know whether this is the right place to post the question; any help is appreciated!
Usually with algorithm design for NP-hard problems, it's necessary to reuse techniques rather than whole algorithms. Here, the algorithms for standard bin packing that use branch-and-bound with column generation carry over well.
The idea is that we formulate an enormous set cover instance where the sets are the sets of items that fit into a single bin. Integer programming is a good technique for normal set cover, but there are so many sets that we need to do something else, i.e., column generation. There is a one-to-one correspondence between sets and columns, so we rip out the part of the linear programming solver that uses brute force to find a good column to enter and replace it with a solver for what turns out to be the knapsack analog of this problem.
This modified knapsack problem is: given items with weights, profits, and bounds, find the most profitable set of items whose total weight is at most the minimum bound of the chosen items. The dynamic program for solving knapsack with small integer weights happily transfers over with no loss of efficiency. Just sort the items by descending bounds; then, when forming sets involving the most recent item, the weight limit is just that item's bound.
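To make the pricing subproblem concrete, here is a small Python sketch of that modified knapsack, kept separate from any LP machinery and assuming nonnegative integer weights and bounds. It sorts by descending bound, treats each item in turn as the smallest-bound member of the set, and keeps a "best profit with weight at most c" table over the items already seen:

def best_single_bin(items):
    # items: list of (weight, profit, bound)
    items = sorted(items, key=lambda it: it[2], reverse=True)   # descending bound
    max_cap = max(b for _, _, b in items)
    dp = [0.0] * (max_cap + 1)    # dp[c] = best profit using already-seen items with total weight <= c
    best = 0.0
    for w, p, b in items:
        if w <= b:
            # this item is the smallest-bound member; the rest of the set comes from dp
            best = max(best, p + dp[b - w])
        # make the item available to later (smaller-bound) items
        for c in range(max_cap, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
        for c in range(1, max_cap + 1):               # restore the "weight at most c" meaning
            dp[c] = max(dp[c], dp[c - 1])
    return best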
The following is based on Anony-mouse's answer. I am not an algorithm expert, so consider the following as "just my two cents", for what they are worth.
I think Anony-mouse is correct in starting with the smallest items (by bound). This is because a bin tends to get smaller in capacity the more items you add to it; a bin's maximum capacity is determined by the first item placed in it, and it can never get larger after that point.
So instead of starting with a large bin and having its capacity slowly reduced, and having to worry about taking out too-large items that previously fit, let's just try to keep bins' capacities as constant as possible. If we can keep the bins' capacities stable, we can use "standard" algorithms that know nothing about "bound".
So I'd suggest this:
Group all items by bound.
This will allow you to use a standard bin packing algorithm per group because if all items have the same bound (i.e. bound is constant), it can essentially be disregarded. All that the bound means now is that you know the resulting bins' capacity in advance.
Start with the group with the smallest bound and perform a standard bin packing for its items.
This will result in 1 or more bins that have a capacity equal to the bound of all items in them.
Proceed with the item group having the next-larger bound. See if there are any items that could still be put in an already existing bin (i.e. a bin produced by the previous steps).
Note that bound can again be ignored; since all pre-existing bins already have a smaller capacity than these additional items' bound, the bins' capacity cannot be affected; only weight is relevant, so you can use "standard" algorithms.
I suspect this step is an instance of the (multiple) knapsack problem, so look towards knapsack algorithms to determine how to distribute these items over and into the pre-existing, partially filled bins.
It's possible that the item group from the previous step has only been partially processed; there might be items left. These will go into one or more new bins: basically, repeat step 3.
Repeat the above steps (from 3 onwards) until no more items are left.
It can still be written as an ILP instance, like so:
Make a binary variable x_{i,j} signifying whether item j goes into bin i, helper variables y_i that signify whether bin i is used, and helper variables c_i that determine the capacity of bin i; there are constants s_j (size of item j), b_j (bound of item j) and M (a large enough constant). Now
minimize sum[i] y_i
subject to:
1: for all j:
(sum[i] x_{i,j}) = 1
2: for all i,j:
y_i ≥ x_{i,j}
3: for all i:
(sum[j] s_j * x_{i,j}) ≤ c_i
4: for all i,j:
c_i ≤ b_j + (M - M * x_{i,j})
5: x_{i,j} ∈ {0,1}
6: y_i ∈ {0,1}
The constraints mean
any item is in exactly one bin
if an item is in a bin, then that bin is used
the items in a bin do not exceed the capacity of that bin
the capacity of a bin is no more than the lowest bound of the items that are in it (the thing with the big M prevents items that are not in the bin from changing the capacity, provided you choose M no less than the highest bound)
and, for 5. and 6., the variables are binary.
But the integrality gap can be atrocious.
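For what it's worth, here is one way the model above could be written down in Python with PuLP (an arbitrary choice of modelling layer, not part of the answer itself); bins are indexed 0..n-1 since n bins always suffice for n items:

from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

def solve_bounded_binpack(sizes, bounds):
    n = len(sizes)
    M = max(bounds)                                   # "big M", no less than the highest bound
    prob = LpProblem("bounded_bin_packing", LpMinimize)
    x = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary) for i in range(n) for j in range(n)}
    y = {i: LpVariable(f"y_{i}", cat=LpBinary) for i in range(n)}
    c = {i: LpVariable(f"c_{i}", lowBound=0) for i in range(n)}
    prob += lpSum(y[i] for i in range(n))             # objective: number of used bins
    for j in range(n):                                # 1: every item goes into exactly one bin
        prob += lpSum(x[i, j] for i in range(n)) == 1
    for i in range(n):
        prob += lpSum(sizes[j] * x[i, j] for j in range(n)) <= c[i]       # 3: weight within capacity
        for j in range(n):
            prob += y[i] >= x[i, j]                                       # 2: used bins are marked
            prob += c[i] <= bounds[j] + M * (1 - x[i, j])                 # 4: capacity <= min bound
    prob.solve()
    return sum(int(y[i].value()) for i in range(n))

# e.g. solve_bounded_binpack([11, 1, 3, 20], [12, 10, 4, 22]) for the question's items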
First of all, I might be totally wrong and there might exist an algorithm that is even better than mine.
Bin packing is NP-hard and is usually handled with classic heuristics like First Fit etc. There are some improvements to these too, e.g. Korf's algorithm.
I aim to reduce this to normal bin packing by sorting the items by their bound. The steps are:
Sort items by bound: sorting the items by bound will help us in arranging the bins, as the limiting condition is the minimum of the bounds.
Insert the smallest item (by bound) into a bin.
Check whether the next item (sorted by bound) can coexist in this bin. If it can, then keep the item in the bin too. If not, then try putting it in another bin, or create another bin for it.
Repeat the procedure till all elements are arranged. The procedure is repeated in ascending order of bounds.
I think this pretty much solves the problem. Please inform me if it doesn't; I am trying to implement the same. And if there are any suggestions or improvements, let me know too. :) Thank you
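A minimal Python sketch of this sort-by-ascending-bound, first-fit idea (my own sketch; items are (weight, bound) pairs, every item is assumed to satisfy weight <= bound, and a bin's capacity is the smallest bound among its items):

def first_fit_by_bound(items):
    bins = []                                        # each bin: [total weight, current capacity]
    for w, b in sorted(items, key=lambda it: it[1]): # ascending bound
        for bin_ in bins:
            new_cap = min(bin_[1], b)                # capacity can only shrink
            if bin_[0] + w <= new_cap:
                bin_[0] += w
                bin_[1] = new_cap
                break
        else:
            bins.append([w, b])                      # open a new bin for this item
    return len(bins)

# With the question's items, first_fit_by_bound([(11, 12), (1, 10), (3, 4), (20, 22)]) uses 3 bins.
Because the items are processed in ascending order of bound, an item added to an existing bin never shrinks that bin's capacity, which is exactly what makes the "standard" first-fit reasoning applicable.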

Are there sorting algorithms that respect final position restrictions and run in O(n log n) time?

I'm looking for a sorting algorithm that honors a min and max range for each element1. The problem domain is a recommendations engine that combines a set of business rules (the restrictions) with a recommendation score (the value). If we have a recommendation we want to promote (e.g. a special product or deal) or an announcement we want to appear near the top of the list (e.g. "This is super important, remember to verify your email address to participate in an upcoming promotion!") or near the bottom of the list (e.g. "If you liked these recommendations, click here for more..."), they will be curated with certain position restriction in place. For example, this should always be the top position, these should be in the top 10, or middle 5 etc. This curation step is done ahead of time and remains fixed for a given time period and for business reasons must remain very flexible.
Please don't question the business purpose, UI or input validation. I'm just trying to implement the algorithm in the constraints I've been given. Please treat this as an academic question. I will endeavor to provide a rigorous problem statement, and feedback on all other aspects of the problem is very welcome.
So if we were sorting chars, our data would have a structure of
struct {
    char value;
    Integer minPosition;
    Integer maxPosition;
}
Where minPosition and maxPosition may be null (unrestricted). If this were called on an algorithm where all positions restrictions were null, or all minPositions were 0 or less and all maxPositions were equal to or greater than the size of the list, then the output would just be chars in ascending order.
This algorithm would only reorder two elements if the minPosition and maxPosition of both elements would not be violated by their new positions. An insertion-based algorithm which promotes items to the top of the list and reorders the rest has obvious problems in that every later element would have to be revalidated after each iteration; in my head, that rules out such algorithms as having O(n^3) complexity, but I won't rule out such algorithms without considering evidence to the contrary, if presented.
In the output list, certain elements will be out of order with regard to their value, if and only if the set of position constraints dictates it. These outputs are still valid.
A valid list is any list where all elements are in a position that does not conflict with their constraints.
An optimal list is a list which cannot be reordered to more closely match the natural order without violating one or more position constraint. An invalid list is never optimal. I don't have a strict definition I can spell out for 'more closely matching' between one ordering or another. However, I think it's fairly easy to let intuition guide you, or choose something similar to a distance metric.
Multiple optimal orderings may exist if multiple inputs have the same value. You could make an argument that the above paragraph is therefore incorrect, because either one can be reordered to the other without violating constraints and therefore neither can be optimal. However, any rigorous distance function would treat these lists as identical, with the same distance from the natural order and therefore reordering the identical elements is allowed (because it's a no-op).
I would call such outputs the correct, sorted order which respects the position constraints, but several commentators pointed out that we're not really returning a sorted list, so let's stick with 'optimal'.
For example, the following are input lists (in the form <char>(<minPosition>:<maxPosition>), where Z(1:1) indicates a Z that must be at the front of the list and M(-:-) indicates an M that may be in any position in the final list, and the natural order (sorted by value only) is A...M...Z) and their optimal orders.
Input order
A(1:1) D(-:-) C(-:-) E(-:-) B(-:-)
Optimal order
A B C D E
This is a trivial example to show that the natural order prevails in a list with no constraints.
Input order
E(1:1) D(2:2) C(3:3) B(4:4) A(5:5)
Optimal order
E D C B A
This example is to show that a fully constrained list is output in the same order it is given. The input is already a valid and optimal list. The algorithm should still run in O(n log n) time for such inputs. (Our initial solution is able to short-circuit any fully constrained list to run in linear time; I added the example both to drive home the definitions of optimal and valid and because some swap-based algorithms I considered handled this as the worst case.)
Input order
E(1:1) C(-:-) B(1:5) A(4:4) D(2:3)
Optimal Order
E B D A C
E is constrained to 1:1, so it is first in the list even though it has the lowest value. A is similarly constrained to 4:4, so it is also out of natural order. B has essentially identical constraints to C and may appear anywhere in the final list, but B will be before C because of value. D may be in positions 2 or 3, so it appears after B because of natural ordering but before C because of its constraints.
Note that the final order is correct despite being wildly different from the natural order (which is still A,B,C,D,E). As explained in the previous paragraph, nothing in this list can be reordered without violating the constraints of one or more items.
Input order
B(-:-) C(2:2) A(-:-) A(-:-)
Optimal order
A(-:-) C(2:2) A(-:-) B(-:-)
C remains unmoved because it is already in its only valid position. B is reordered to the end because its value is greater than both A's. In reality, there will be additional fields that differentiate the two A's, but from the standpoint of the algorithm, they are identical and preserving OR reversing their input ordering is an optimal solution.
Input order
A(1:1) B(1:1) C(3:4) D(3:4) E(3:4)
Undefined output
This input is invalid for two reasons: 1) A and B are both constrained to position 1 and 2) C, D, and E are constrained to a range that can only hold 2 elements. In other words, the ranges 1:1 and 3:4 are over-constrained. However, the consistency and legality of the constraints are enforced by UI validation, so it's officially not the algorithm's problem if they are incorrect, and the algorithm can return a best-effort ordering OR the original ordering in that case. Passing an input like this to the algorithm may be considered undefined behavior; anything can happen. So, for the rest of the question...
All input lists will have elements that are initially in valid positions.
The sorting algorithm itself can assume the constraints are valid and an optimal order exists.2
We've currently settled on a customized selection sort (with runtime complexity of O(n^2)) and reasonably proved that it works for all inputs whose position restrictions are valid and consistent (e.g. not overbooked for a given position or range of positions).
Is there a sorting algorithm that is guaranteed to return the optimal final order and run in better than O(n^2) time complexity?3
I feel that a library standard sorting algorithm could be modified to handle these constraints by providing a custom comparator that accepts the candidate destination position for each element. This would be equivalent to the current position of each element, so maybe modifying the value-holding class to include the current position of the element and doing the extra accounting in the comparison (.equals()) and swap methods would be sufficient.
However, the more I think about it, an algorithm that runs in O(n log n) time could not work correctly with these restrictions. Intuitively, such algorithms are based on running n comparisons log n times. The log n is achieved by leveraging a divide and conquer mechanism, which only compares certain candidates for certain positions.
In other words, input lists with valid position constraints (i.e. counterexamples) exist for any O(n log n) sorting algorithm where a candidate element would be compared with an element (or range in the case of Quicksort and variants) with/to which it could not be swapped, and therefore would never move to the correct final position. If that's too vague, I can come up with a counter example for mergesort and quicksort.
In contrast, an O(n^2) sorting algorithm makes exhaustive comparisons and can always move an element to its correct final position.
To ask an actual question: Is my intuition correct when I reason that an O(n log n) sort is not guaranteed to find a valid order? If so, can you provide more concrete proof? If not, why not? Is there other existing research on this class of problem?
1: I've not been able to find a set of search terms that points me in the direction of any concrete classification of such sorting algorithm or constraints; that's why I'm asking some basic questions about the complexity. If there is a term for this type of problem, please post it up.
2: Validation is a separate problem, worthy of its own investigation and algorithm. I'm pretty sure that the existence of a valid order can be proven in linear time:
Allocate array of tuples of length equal to your list. Each tuple is an integer counter k and a double value v for the relative assignment weight.
Walk the list, adding the fractional value of each element's position constraint to the corresponding range and incrementing its counter by 1 (e.g. range 2:5 on a list of 10 adds 0.4 to each of 2, 3, 4, and 5 on our tuple list, incrementing the counter of each as well)
Walk the tuple list and
If no entry has value v greater than the sum of the series from 1 to k of 1/k, a valid order exists.
If there is such a tuple, the position it is in is over-constrained; throw an exception, log an error, use the doubles array to correct the problem elements etc.
Edit: This validation algorithm itself is actually O(n^2). Worst case, every element has the constraints 1:n, and you end up walking your list of n tuples n times. This is still irrelevant to the scope of the question, because in the real problem domain, the constraints are enforced once and don't change.
Determining that a given list is in valid order is even easier. Just check each element's current position against its constraints.
3: This is admittedly a little bit premature optimization. Our initial use for this is for fairly small lists, but we're eyeing expansion to longer lists, so if we can optimize now we'd get small performance gains now and large performance gains later. And besides, my curiosity is piqued and if there is research out there on this topic, I would like to see it and (hopefully) learn from it.
On the existence of a solution: You can view this as a bipartite digraph with one set of vertices (U) being the k values, and the other set (V) the k ranks (1 to k), and an arc from each vertex in U to its valid ranks in V. Then the existence of a solution is equivalent to the maximum matching being a bijection. One way to check for this is to add a source vertex with an arc to each vertex in U, and a sink vertex with an arc from each vertex in V. Assign each edge a capacity of 1, then find the max flow. If it's k then there's a solution, otherwise not.
http://en.wikipedia.org/wiki/Maximum_flow_problem
--edit-- O(k^3) solution: First sort to find the sorted rank of each vertex (1-k). Next, consider your values and ranks as 2 sets of k vertices, U and V, with weighted edges from each vertex in U to all of its legal ranks in V. The weight to assign each edge is the distance from the vertex's rank in sorted order. E.g., if U is 10 to 20, then the natural rank of 10 is 1. An edge from value 10 to rank 1 would have a weight of zero, to rank 3 a weight of 2. Next, assume all missing edges exist and assign them infinite weight. Lastly, find the "MINIMUM WEIGHT PERFECT MATCHING" in O(k^3).
http://www-math.mit.edu/~goemans/18433S09/matching-notes.pdf
This does not take advantage of the fact that the legal ranks for each element in U are contiguous, which may help get the running time down to O(k^2).
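One concrete way to realize that minimum-weight perfect matching is SciPy's assignment solver (the Hungarian method, which matches the O(k^3) bound). The sketch below is only an illustration of the idea; a large constant stands in for the "infinite" weights:

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_ranks(values, ranges):
    # values: k comparable values; ranges[i] = (lo, hi) legal 1-based ranks for values[i]
    k = len(values)
    natural = {idx: r for r, idx in enumerate(sorted(range(k), key=lambda i: values[i]), 1)}
    BIG = 10**9                                   # stands in for "infinite" weight
    cost = np.full((k, k), BIG, dtype=np.int64)
    for i, (lo, hi) in enumerate(ranges):
        for rank in range(lo, hi + 1):
            cost[i, rank - 1] = abs(rank - natural[i])
    rows, cols = linear_sum_assignment(cost)
    if cost[rows, cols].max() >= BIG:             # some element was forced to an illegal rank
        return None                               # no valid order exists
    out = [None] * k
    for i, r in zip(rows, cols):
        out[r] = values[i]
    return out

# assign_ranks(list("ECBAD"), [(1, 1), (1, 5), (1, 5), (4, 4), (2, 3)])
# returns ['E', 'B', 'D', 'A', 'C'], the expected order for the question's third example.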
Here is what a coworker and I have come up with. I think it's an O(n^2) solution that returns a valid, optimal order if one exists, and a closest-possible effort if the initial ranges were over-constrained. I just tweaked a few things about the implementation and we're still writing tests, so there's a chance it doesn't work as advertised. This over-constrained condition is detected fairly easily when it occurs.
To start, things are simplified if you normalize your inputs to have all non-null constraints. In linear time, that is:
for each item in input
if an item doesn't have a minimum position, set it to 1
if an item doesn't have a maximum position, set it to the length of your list
The next goal is to construct a list of ranges, each containing all of the candidate elements that have that range, ordered by the remaining capacity of the range, ascending, so that ranges with the fewest remaining spots come first, then by start position of the range, then by end position of the range. This can be done by creating a set of such ranges, then sorting them in O(n log n) time with a simple comparator.
For the rest of this answer, a range will be a simple object like so
class Range<T> implements Collection<T> {
    int startPosition;
    int endPosition;
    Collection<T> items;

    public int remainingCapacity() {
        return endPosition - startPosition + 1 - items.size();
    }

    // implement Collection<T> methods, passing through to the items collection
    public boolean add(T item) {
        // Validity checking here exposes some simple cases of over-constraining.
        // We'll catch these cases with the tricky stuff later anyway, so don't choke.
        return items.add(item);
    }
}
If an element A has range 1:5, construct a range(1,5) object and add A to its elements. This range has remaining capacity of 5 - 1 + 1 - 1 (max - min + 1 - size) = 4. If an element B has range 1:5, add it to your existing range, which now has capacity 3.
Then it's a relatively simple matter of picking the best element that fits each position 1 => k in turn. Iterate your ranges in their sorted order, keeping track of the best eligible element, with the twist that you stop looking once you've reached a range whose remaining size can't fit into its remaining positions. This is equivalent to the simple check range.max - current position + 1 > range.size (which can probably be simplified, but I think it's most understandable in this form). Remove each element from its range as it is selected. Remove each range from your list as it is emptied (optional; iterating an empty range will yield no candidates). That's a poor explanation, so let's do one of our examples from the question. Note that C(-:-) has been updated to the sanitized C(1:5) as described above.
Input order
E(1:1) C(1:5) B(1:5) A(4:4) D(2:3)
Built ranges (min:max) <remaining capacity> [elements]
(1:1)0[E] (4:4)0[A] (2:3)1[D] (1:5)3[C,B]
Find best for 1
Consider (1:1), best element from its list is E
Consider further ranges?
range.max - current position + 1 > range.size ?
range.max = 1; current position = 1; range.size = 1;
1 - 1 + 1 > 1 = false; do not consider subsequent ranges
Remove E from range, add to output list
Find best for 2; current range list is:
(4:4)0[A] (2:3)1[D] (1:5)3[C,B]
Consider (4:4); skip it because it is not eligible for position 2
Consider (2:3); best element is D
Consider further ranges?
3 - 2 + 1 > 1 = true; check next range
Consider (1:5); best element is B
End of range list; remove B from range, add to output list
An added simplifying factor is that the capacities do not need to be updated or the ranges reordered. An item is only removed if the rest of the higher-sorted ranges would not be disturbed by doing so. The remaining capacity is never checked after the initial sort.
Find best for 3; output is now E, B; current range list is:
(4:4)0[A] (2:3)1[D] (1:5)3[C]
Consider (4:4); skip it because it is not eligible for position 3
Consider (2:3); best element is D
Consider further ranges?
same as previous check, but current position is now 3
3 - 3 + 1 > 1 = false; don't check next range
Remove D from range, add to output list
Find best for 4; output is now E, B, D; current range list is:
(4:4)0[A] (1:5)3[C]
Consider (4:4); best element is A
Consider further ranges?
4 - 4 + 1 > 1 = false; don't check next range
Remove A from range, add to output list
Output is now E, B, D, A and there is one element left to be checked, so it gets appended to the end. This is the output list we desired to have.
This build process is the longest part. At its core, it's a straightforward O(n^2) selection sorting algorithm. The range constraints only work to shorten the inner loop and there is no loopback or recursion; but the worst case (I think) is still the sum over i from 1 to n of (n - i), which is n^2/2 - n/2.
The detection step comes into play by not excluding a candidate range if the current position is beyond the end of that range's max position. You have to track the range your best candidate came from in order to remove it, so when you do the removal, just check if the position you're extracting the candidate for is greater than that range's endPosition.
I have several other counter-examples that foiled my earlier algorithms, including a nice example that shows several over-constraint detections on the same input list and also how the final output is closest to the optimal as the constraints will allow. In the meantime, please post any optimizations you can see and especially any counterexamples where this algorithm makes an objectively incorrect choice (i.e. arrives at an invalid or suboptimal output when a valid one exists).
I'm not going to accept this answer, because I specifically asked if it could be done in better than O(n^2). I haven't wrapped my head around the constraint satisfaction approach in #DaveGalvin's answer yet and I've never done a maximum flow problem, but I thought this might be helpful for others to look at.
Also, I discovered the best way to come up with valid test data is to start with a valid list and randomize it: for 0 -> i, create a random value and constraints such that min < i < max. (Again, posting it because it took me longer than it should have to come up with and others might find it helpful.)
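For readers who prefer code, here is a compact Python transcription of the procedure described above (my own rendering, so it shares the same caveats; behavior on over-constrained input is not handled):

def constrained_sort(items):
    # items: list of (value, min_pos, max_pos) with 1-based positions; None = unconstrained
    n = len(items)
    ranges = {}
    for value, lo, hi in items:                       # normalize missing constraints
        ranges.setdefault((lo or 1, hi or n), []).append(value)
    # sort ranges by remaining capacity, then start, then end
    order = sorted(ranges, key=lambda r: (r[1] - r[0] + 1 - len(ranges[r]), r[0], r[1]))
    out = []
    for pos in range(1, n + 1):
        best = None                                   # range holding the best eligible value so far
        for r in order:
            lo, hi = r
            if not ranges[r] or lo > pos or hi < pos:
                continue                              # empty, or not eligible for this position
            if best is None or min(ranges[r]) < min(ranges[best]):
                best = r
            if not (hi - pos + 1 > len(ranges[r])):
                break                                 # this range must start consuming positions now
        value = min(ranges[best])
        ranges[best].remove(value)
        out.append(value)
    return out

# constrained_sort([('E',1,1), ('C',None,None), ('B',1,5), ('A',4,4), ('D',2,3)])
# returns ['E', 'B', 'D', 'A', 'C'], matching the worked example above.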
Not likely*. I assume you mean an average run time of O(n log n), in-place, non-stable, off-line. Most sorting algorithms that improve on bubble sort's average run time of O(n^2), like Timsort, rely on the assumption that comparing 2 elements in a subset will produce the same result in the superset. A slower variant of Quicksort would be a good approach for your range constraints. The worst case won't change, but the average case will likely decrease, and the algorithm will have the extra constraint of a valid sort existing.
Is ... O(n log n) sort is not guaranteed to find a valid order?
All popular sort algorithms I am aware of are guaranteed to find an order so long as their constraints are met. Formal analysis (concrete proof) is on each sort algorithm's Wikipedia page.
Is there other existing research on this class of problem?
Yes; there are many journals like IJCSEA with sorting research.
*but that depends on your average data set.

Nesting algorithm

I have a problem with nesting of lengths; please suggest the best way of solving this. My problem is as follows.
I have some standard length, let's say Total length (this is the total length which we need to fill with some specific length blocks).
The input is a list of length blocks, e.g.: 5000, 4000, 3000,
and the gap between each block is a range, e.g. 200 to 500 (this gap can be adjusted within the range).
Now we have to fill the Total length with the above available length blocks, with a gap between each block, and that gap should be within the gap range given above.
Please suggest some way of solving this problem.
Thanks in advance...
Regards,
Anil
This problem is essentially the Subset sum problem with a small twist. You can use the same pseudo-polynomial time solution as subset sum: create an array of length "total length" (which we call n from now on) and, for each length k, add it to every existing length in the array (so if element m is populated, create a new entry at m + k (if m + k ≤ n), but leave the existing one at m as well), as well as creating a new array entry at location k representing the creation of a new set. You can build up a set of entries at array element i to represent the set of lists of length blocks totaling i. Each set entry should link back to the array entry it came from, which it can do by simply storing the last length that got you there. This is similar to a question I answered recently here, which you can adjust to allow duplicates as you see fit.
However, you need to modify the above approach to account for the gaps. Let's call the minimum gap between each element x and the maximum gap y. When adding an entry of length k, we include the minimum gap whenever adding it to another entry (so if m is populated, we actually create the entry at m + k + x). We continue to create the initial entry at k because we include the gaps between elements. When we create an entry, we can also determine if it fills the space. Suppose the new entry contains t elements and has total m. Then it fills the space iff m ≥ n - t ( y - x ). If it fills the space, we should add it to a solution list. Depending on how many solutions you want, you can terminate the algorithm as soon as enough solutions are found, or let it find all solutions. At the end, simply iterate through the solutions list.
Any total within the allowed range can realize its gaps in a number of different ways, but one way that works is to greedily allocate the "slack" - for example, if you are 1000 away from the new total length using your above example, you can pick the first three gaps to be 500 (which is 300 extra slack each, for 900 total extra) and then the fourth to be 300 (for the extra 100 slack, totaling 1000), and every additional gap should be the minimum (200).
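Here is a small Python sketch along those lines, under two explicit assumptions of mine: a gap is placed only between consecutive blocks (t blocks give t-1 gaps), and each block in the input list is used at most once:

def find_fillings(total, blocks, gap_min, gap_max):
    states = {(0, ())}                 # (length used with all gaps at their minimum, blocks chosen)
    solutions = []
    for k in blocks:
        for m, chosen in list(states):
            gap = gap_min if chosen else 0          # minimum gap before every block but the first
            m2 = m + gap + k
            if m2 > total:
                continue
            chosen2 = chosen + (k,)
            states.add((m2, chosen2))
            slack = (len(chosen2) - 1) * (gap_max - gap_min)
            if m2 <= total <= m2 + slack:           # widening the gaps can absorb the shortfall
                solutions.append(chosen2)
    return solutions

# find_fillings(12400, [5000, 4000, 3000], 200, 500) returns [(5000, 4000, 3000)]
# since 5000 + 200 + 4000 + 200 + 3000 = 12400 with both gaps at their minimum.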

Best item from list based on 3 variables

Say I have the following for a bunch of items.
item position
item size
item length
A smaller position is better, but a larger length and size are better.
I want to find the item that has the smallest position, largest length and size.
Can I simply calculate a value such as (total - position) * size * length for each item, and then find the item with the largest value? Would it be better to work off percentages?
Either add a fourth item, which is your calculated value of 'goodness', and sort by that, OR, if your language of choice allows, override the comparison operators for sorting to use your formula and then sort. Note that the latter approach means that the function to determine betterness will be applied multiple times per item in the list, but it has the advantage of making a procedural comparison easy (e.g. first look at the position, then if that is equal, look at size, then length) - although this could also be expressed as a formula resulting in a single number to sort by.
As for your proposed formula, note that each attribute carries the same numerical weight even though they are measured on completely unrelated scales. Furthermore, all items with either position = total, size = 0 or length = 0 evaluate to zero.
If what you want is that position is the most important thing, but given equal positions, size is the next most important thing, but given equal positions and sizes, then go by length, this can be formulated into a single number as follows:
(P-position)*(S*L) + size*L + length
where L is a magic number that is greater than the maximum possible length value, S is a number greater than the maximum possible size value, and P is a number greater than the maximum possible position value.
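If your language has tuple comparisons, the same lexicographic idea can be expressed without picking the magic numbers at all; a quick Python illustration (the field names are made up):

items = [
    {"name": "a", "position": 2, "size": 10, "length": 7},
    {"name": "b", "position": 1, "size": 3, "length": 9},
    {"name": "c", "position": 1, "size": 3, "length": 12},
]
# smaller position wins; ties broken by larger size, then larger length
best = min(items, key=lambda it: (it["position"], -it["size"], -it["length"]))
print(best["name"])   # "c": lowest position, then the longer of the two tied candidates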
If, on the other hand, what you want is some scale where the items are of whatever relative importances, one possible formula looks like this:
((P-position)/P)*pScale * (size/S)*sScale * (length/L)*lScale
In this version, P, S and L have much the same definitions as before - but it is very important that the values of P, S and L are meaningful in a compatible way, e.g. all very close to the expected maximum values. pScale, sScale and lScale are there so you can essentially specify the relative importance of each attribute. They could all be 1 if all attributes are equally important, in which case you could leave them out entirely.
As previously answered, though, there are also potentially infinitely many other ways you could choose to code this. As a random example, for large sizes, length could become less important; such possibilities would require much additional thought as to what is actually meant by such a vague statement.
