DCOS cluster resource allocation is NP-hard - cluster-computing

Here in the DCOS documents it is stated that
"Deciding where to run processes to best utilize cluster resources is
hard, NP-hard in-fact."
I don't deny that that sounds right, but is there a proof somewhere?

Best utilization of resources is a variation of the bin packing problem:
In the bin packing problem, objects of different volumes must be
packed into a finite number of bins or containers each of volume V in
a way that minimizes the number of bins used. In computational
complexity theory, it is a combinatorial NP-hard problem. The decision
problem (deciding if objects will fit into a specified number of bins)
is NP-complete.
We have an n-dimensional space where every dimension corresponds to one resource type. Each task to be scheduled has a specific volume defined by its required resources. Additionally, a task can have constraints that slightly change the original problem, but we can treat these constraints as an additional discrete dimension. The goal is to schedule tasks in a way that minimizes slack resources and so prevents fragmentation.
For example, Marathon uses the first-fit algorithm, which is an approximation algorithm but is not that bad:
This is a very straightforward greedy approximation algorithm. The algorithm processes the items in arbitrary order. For each item, it attempts to place the item in the first bin that can accommodate the item. If no bin is found, it opens a new bin and puts the item within the new bin.
It is rather simple to show this algorithm achieves an approximation factor of 2, that is, the number of bins used by this algorithm is no more than twice the optimal number of bins.
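To make the idea concrete, here is a minimal first-fit sketch in Python, extended to the multi-resource (vector) setting described above. The (cpu, mem) capacity vector and the task demands are illustrative assumptions, not anything from the DCOS docs:

def first_fit(tasks, capacity):
    # First fit for vector bin packing: each task is a tuple of resource
    # demands (cpu, mem, ...); every node (bin) shares one capacity vector.
    nodes = []  # each node is the list of tasks placed on it
    dims = range(len(capacity))
    def fits(node, task):
        used = [sum(t[d] for t in node) for d in dims]
        return all(used[d] + task[d] <= capacity[d] for d in dims)
    for task in tasks:
        for node in nodes:
            if fits(node, task):  # place in the first node with room
                node.append(task)
                break
        else:
            nodes.append([task])  # no open node can host it: open a new one
    return nodes

# Example: (cpu, mem) demands packed onto nodes with 4 CPUs and 8 GB each
print(first_fit([(2, 3), (1, 6), (3, 2), (2, 2)], capacity=(4, 8)))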

Related

bin packing with volume and weight and different bins

In our company, we regularly import merchandise by sea.
When we make an order, we have to distribute the items into containers.
We basically can choose between three types of containers,
and our goal is of course to distribute the items so that we use the minimum number of containers (and, if possible, the smallest containers, as they are cheaper).
We have two physical constraints:
- we cannot exceed the container maximum weight
- we cannot exceed the container maximum volume
We have the volume and weight of each item.
Currently we do the distribution manually, but it would be great if there were some kind of algorithm that could propose a distribution for us.
So I found the bin-packing problem, but the algorithms for it often treat only the weight, or only the volume, not both at the same time.
My question is: is there an existing algorithm for our problem (and if so what's its name and how would you use it), or is it something that remains to be created?
Actually I came across such an issue a few days ago. If I were you, I would use a Genetic Algorithm to improve the output of the bin-packing algorithm for weight or volume, using the following assumptions:
1- Each chromosome represents materials that can fit in one container.
2- A chromosome is valid only when its sums of weights and dimensions are within the limits.
3- The fitness function would be a combination of (occupied space / total space) and (materials weight / allowed weight).
4- Mutation should insert a new item that was not previously used.
My friend did such research as a kind of homework; it might not be that good, but if you wish I can send it to you.
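To illustrate assumptions 2 and 3, here is a minimal sketch of the fitness of a single chromosome in Python. The Item type and the 50/50 weighting of the two ratios are my own assumptions:

from collections import namedtuple

Item = namedtuple("Item", ["volume", "weight"])

def fitness(chromosome, max_volume, max_weight):
    # Score one chromosome (the items packed into one container).
    # Invalid chromosomes (assumption 2) get fitness 0; valid ones are
    # scored by how full the container is in both dimensions (assumption 3).
    vol = sum(item.volume for item in chromosome)
    wgt = sum(item.weight for item in chromosome)
    if vol > max_volume or wgt > max_weight:
        return 0.0
    return 0.5 * (vol / max_volume) + 0.5 * (wgt / max_weight)

# Example: two items in a container limited to 10 m^3 and 1000 kg
print(fitness([Item(4.0, 300.0), Item(5.0, 450.0)], 10.0, 1000.0))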

Exploration Algorithm

Massively edited this question to make it easier to understand.
Given an environment with arbitrary dimensions and arbitrary positioning of an arbitrary number of obstacles, I have an agent exploring the environment with a limited range of sight (obstacles don't block sight). It can move in the four cardinal directions of NSEW, one cell at a time, and the graph is unweighted (each step has a cost of 1). Linked below is a map representing the agent's (yellow guy) current belief of the environment at the instant of planning. Time does not pass in the simulation while the agent is planning.
http://imagizer.imageshack.us/a/img913/9274/qRsazT.jpg
What exploration algorithm can I use to maximise the cost-efficiency of utility, given that revisiting cells is allowed? Each cell holds a utility value. Ideally, I would seek to maximise the sum of utility of all cells SEEN (not visited) divided by the path length, although if that is too complex for any suitable algorithm then the number of cells seen will suffice. There is a maximum path length, but it is generally in the hundreds or higher. (The actual test environments used on my agent are at least 4x bigger; theoretically there is no upper bound on the dimensions that can be set, and the maximum path length would increase accordingly.)
I consider BFS and DFS to be intractable, A* to be non-optimal given a lack of suitable heuristics, and Dijkstra's inappropriate for generating a single unbroken path. Is there any algorithm you can think of? Also, I need help with loop detection, as I've never done that before; this is the first time I've allowed revisits.
One approach I have considered is to reduce the map into a spanning tree, except that instead of defining it as a tree that connects all cells, it is defined as a tree that can see all cells. My approach would result in the following:
http://imagizer.imageshack.us/a/img910/3050/HGu40d.jpg
In the resultant tree, the agent can go from a node to any adjacent nodes that are 0-1 turn away at intersections. This is as far as my thinking has gotten right now. A solution generated using this tree may not be optimal, but it should at least be near-optimal with much fewer cells being processed by the algorithm, so if that would make the algorithm more likely to be tractable, then I guess that is an acceptable trade-off. I'm still stuck with thinking how exactly to generate a path for this however.
Your problem is very similar to a canonical Reinforcement Learning (RL) problem, the Grid World. I would formalize it as a standard Markov Decision Process (MDP) and use any RL algorithm to solve it.
The formalization would be:
States s: your NxM discrete grid.
Actions a: UP, DOWN, LEFT, RIGHT.
Reward r: the value of the cells that the agent can see from the destination cell s', i.e. r(s,a,s') = sum(value(seen(s'))).
Transition function: P(s' | s, a) = 1 if s' is within the boundaries and not a black cell, 0 otherwise.
Since you are interested in the average reward, the discount factor is 1 and you have to normalize the cumulative reward by the number of steps. You also said that each step costs one, so you could subtract 1 from the immediate reward r at each time step, but this would not add anything, since you will already average by the number of steps.
Since the problem is discrete the policy could be a simple softmax (or Gibbs) distribution.
As a solving algorithm you can use Q-learning, which guarantees the optimality of the solution provided a sufficient number of samples. However, if your grid is too big (and you said that there is no upper bound) I would suggest policy search algorithms, such as policy gradient or relative entropy (although they guarantee convergence only to local optima). You can find material about Q-learning basically everywhere on the Internet. For a recent survey on policy search I suggest this.
The cool thing about these approaches is that they encode the exploration in the policy (e.g., the temperature in a softmax policy, the variance in a Gaussian distribution) and will try to maximize the cumulative long-term reward as described by your MDP. So usually you initialize your policy with high exploration (e.g., a completely random policy), and by trial and error the algorithm will make it deterministic and converge to the optimal one (however, sometimes a stochastic policy is also optimal).
The main difference between all the RL algorithms is how they update the policy at each iteration and manage the exploration-exploitation tradeoff (how much should I explore vs. how much should I exploit the information I already have).
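For concreteness, here is a minimal tabular Q-learning sketch with an epsilon-greedy policy. The step(s, a) environment function is a hypothetical stand-in for the grid dynamics and the seen-cells reward described above; this is generic Q-learning, not code tuned to your map:

import random

def q_learning(states, actions, step, episodes=500, alpha=0.1, eps=0.3):
    # Tabular Q-learning; eps controls the exploration-exploitation
    # tradeoff (probability of trying a random action instead of the
    # greedy one). step(s, a) must return (next_state, reward, done).
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s, done = random.choice(states), False
        while not done:
            if random.random() < eps:                    # explore
                a = random.choice(actions)
            else:                                        # exploit
                a = max(actions, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            # discount factor 1, as suggested above; normalize the return
            # by the episode length outside this loop for average reward
            target = r if done else r + max(Q[(s2, x)] for x in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q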
As suggested by Demplo, you could also use Genetic Algorithms (GA), but they are usually slower and require more tuning (elitism, crossover, mutation...).
I have also tried some policy search algorithms on your problem and they seem to work well, although I initialized the grid randomly and do not know the exact optimal solution. If you provide some additional details (a test grid, the maximum number of steps, and whether the initial position is fixed or random) I can test them more precisely.

Algorithm To Make Best Use of Multiple Linear Container Space

I have a set number of equal sized linear "containers." For this example say I have 10 containers that can hold up to a maximum value of 28. These containers are filled serially with incoming objects of varying values. The objects will have a known minimum and maximum value, for this example the minimum would be 3.5 and the maximum 15. The objects can be any size between this minimum and maximum. The items leave the containers in an unknown order. If there is not enough room in any of the containers for the next incoming object it is rejected.
I am looking for an algorithm that utilizes the container space the most efficiently and minimizes the amount of rejected objects.
The absolute best solution will depend on the actual sizes, distribution of incoming objects, and so on. I would strongly recommend setting up realistic distributions that you experience in the real world as test code, and trying out different algorithms against it.
The obvious heuristic that I would want to try is to always put each object in the fullest bin that it can fit in.
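A minimal sketch of that heuristic in Python, using the numbers from the question (10 containers of capacity 28, objects between 3.5 and 15):

def place(containers, capacity, obj):
    # Best-fit style heuristic: put the object in the fullest container
    # that still has room for it; reject it if no container can take it.
    candidates = [c for c in containers if sum(c) + obj <= capacity]
    if not candidates:
        return False  # rejected
    max(candidates, key=sum).append(obj)
    return True

# Objects arrive serially; each is placed or rejected on arrival
containers = [[] for _ in range(10)]
for obj in [15.0, 3.5, 12.2, 9.9, 7.1, 14.0]:
    accepted = place(containers, 28, obj)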

MBS' algorithm for the One Dimensional Bin Packing Problem (minimum bin slack)

I am working on solving the 1D bin packing problem, and as an initial population I am going to start with particles generated by MBS'.
I was looking on the net for the MBS' (minimum bin slack) algorithm and couldn't find it.
Please, could someone help me?
MBS' is an improvement on the MBS (Minimum Bin Slack) heuristic, which is based on the following steps:
At each step, an attempt is made to find a set of items (packing) that fits the bin capacity as much as possible.
In this sense, MBS is similar to Hoffmann’s algorithm for solving assembly line balancing problems.
At each stage, a list I’ of the n’ items not yet assigned to bins, sorted in decreasing order of size, is kept.
Each time a packing is determined, the items involved are placed in a bin and removed from I’, preserving the sort order.
The process begins with I’= I and it ends when the list I’ becomes empty.
Each packing is determined in a search procedure that tests all possible subsets of items on the list I’ that fit the bin capacity.
The subset that leaves the least slack is adopted; if the algorithm finds a subset that fills the bin up completely, the search is stopped, since no better packing is possible in this state.
The search is started from items of greater size, i.e., from the beginning of I’ because items of relatively big sizes are usually harder to pack in bins and, therefore, an attempt to pack them first should be undertaken.
[MBS Algorithm] http://i.stack.imgur.com/jUltR.png
MBS' :
It's identical to MBS except that it uses an initialisation procedure that speeds up the algorithm.
The following modification to MBS is proposed: before the one-packing search procedure is invoked, an item (the seed) is chosen and permanently fixed in the packing.
This can be done because every item must be placed in a bin anyway.
A good choice of seed is the item of the greatest size, i.e., the first on the list I’.
This will leave the least space in the bin to fill during the search, thereby shortening the time of processing.
Moreover, the solution process will be forced to use larger, and for that reason more trouble-causing, items first.
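Putting the description together, here is a sketch of MBS' in Python. The function names are mine, and the subset search is exponential in the worst case, exactly as the exhaustive description above implies:

def min_slack_packing(items, capacity):
    # One MBS-style packing search over items sorted in decreasing size.
    # Index 0 (the largest item) is fixed as the seed, per the MBS'
    # modification; returns the index subset leaving the least slack.
    best, best_slack = [0], capacity - items[0]
    def search(start, chosen, total):
        nonlocal best, best_slack
        slack = capacity - total
        if slack < best_slack:
            best, best_slack = list(chosen), slack
        if best_slack == 0:
            return True  # bin filled completely: no better packing exists
        for i in range(start, len(items)):
            if total + items[i] <= capacity:
                chosen.append(i)
                if search(i + 1, chosen, total + items[i]):
                    return True
                chosen.pop()
        return False
    search(1, [0], items[0])
    return best

def mbs_prime(items, capacity):
    # Repeat the packing search until the sorted list I' is empty.
    remaining = sorted(items, reverse=True)
    bins = []
    while remaining:
        packing = min_slack_packing(remaining, capacity)
        bins.append([remaining[i] for i in packing])
        remaining = [x for i, x in enumerate(remaining) if i not in packing]
    return bins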

How can I efficiently find the subset of activities that stay within a budget and maximizes utility?

I am trying to develop an algorithm to select a subset of activities from a larger list. If selected, each activity uses some amount of a fixed resource (i.e. the sum over the selected activities must stay under a total budget). There could be multiple feasible subsets, and the means of choosing from them will be based on calculating the opportunity cost of the activities not selected.
EDIT: There are two reasons this is not the 0-1 knapsack problem:
Knapsack requires integer values for the weights (i.e. resources consumed), whereas my resource consumption (i.e. mass in the knapsack parlance) is a continuous variable. (Obviously it's possible to pick some level of precision and quantize the required resources, but my bin size would have to be very small, and the dynamic-programming solution to Knapsack is pseudo-polynomial in W, so a tiny quantum makes W, and hence the running time, huge.)
I cannot calculate the opportunity cost a priori; that is, I can't evaluate the fitness of each activity independently, although I can evaluate the utility of a given set of selected activities, or the marginal utility of adding an additional task to an existing list.
The research I've done suggests a naive approach:
Define the powerset
For each element of the powerset, calculate its utility based on the items not in the set
Select the element with the highest utility
However, I know there are ways to speed up execution time and required memory. For example:
fully enumerating a powerset is O(2^n), but I don't need to fully enumerate the list, because once I've found a set of tasks that exceeds the budget I know that any set that adds more tasks is infeasible and can be rejected. That is, if {1,2,3,4} is infeasible, so is {1,2,3,4} U {n}, where n is any one of the tasks remaining in the larger list.
Since I'm just summing duty, the order of tasks doesn't matter (i.e. if {1,2,3} is feasible, so are {2,1,3}, {3,2,1}, etc.).
All I need in the end is the selected set, so I probably only need the best utility value found so far for comparison purposes.
I don't need to keep the list enumerations, as long as I can be sure I've looked at all the feasible ones. (Although I think keeping the duty sum for previously computed feasible sub-sets might speed run-time.)
I've convinced myself a good recursion algorithm will work, but I can't figure out how to define it, even in pseudo-code (which probably makes the most sense because it's going to be implemented in a couple of languages--probably Matlab for prototyping and then a compiled language later).
The knapsack problem is NP-complete, meaning that no efficient (polynomial-time) algorithm is known for it. However, there's a pseudo-polynomial time solution using dynamic programming. See the Wikipedia section on it for more details.
However, if the maximum utility is large, you should stick with an approximation algorithm. One such approximation scheme is to greedily select items that have the greatest utility/cost ratio. If the budget is large and the cost of each item is small, this can work out very well.
EDIT: Since you're defining the utility in terms of items not in the set, you can simply redefine your costs. Negate the cost and then shift everything so that all your values are positive.
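A sketch of that greedy scheme in Python; note it assumes each activity has a standalone utility, which the question says is not quite true here, so treat it as a baseline:

def greedy_select(activities, budget):
    # Pick activities by utility-per-cost ratio until the budget runs out.
    # activities is a list of (name, utility, cost) tuples.
    ranked = sorted(activities, key=lambda a: a[1] / a[2], reverse=True)
    chosen, spent = [], 0.0
    for name, utility, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(greedy_select([("a", 10, 2.5), ("b", 7, 1.0), ("c", 4, 3.3)], budget=4.0))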
As others have mentioned, you are trying to solve some instance of the Knapsack problem. While theoretically, you are doomed, in practice you may still do a lot to increase the performance of your algorithm. Here are some (wildly assorted) ideas:
Be aware of Backtracking. This corresponds to your observation that once you have crossed out {1, 2, 3, 4} as a solution, {1, 2, 3, 4} U {n} is not worth looking at.
Apply Dynamic Programming techniques.
Be clear about your actual requirements:
Maybe you don't need the best set? Will a good one do? I am not aware of an algorithm that provides a provably good solution in polynomial time, but there might well be one.
Maybe you don't need the best set all the time? Using randomized algorithms you can solve some NP problems in polynomial time, with a risk of failure in 1% (or whatever you deem "safe enough") of all executions.
(Remember: it's one thing to know that the halting problem is not solvable, but another to build a program that determines whether "hello world" implementations will run indefinitely.)
I think the following iterative algorithm will traverse the entire solution set and store the list of tasks, the total cost of performing them, and the opportunity cost of the tasks not performed.
It seems like it will execute in pseudo-polynomial time: polynomial in the number of activities and exponential in the number of activities that can fit within the budget.
ixCurrentSolution = 1
initialize empty set solution {
    oc(ixCurrentSolution) = opportunity cost of doing nothing
    tasklist(ixCurrentSolution) = empty set
    costTotal(ixCurrentSolution) = 0
}
for ixTask = 1:cActivities
    cSolutionsSoFar = ixCurrentSolution  % snapshot: only extend sets that existed before this task
    for ixSolution = 1:cSolutionsSoFar
        costCurrentSolution = costTotal(ixSolution) + cost(ixTask)
        if costCurrentSolution < costMax
            ixCurrentSolution++
            costTotal(ixCurrentSolution) = costCurrentSolution
            tasklist(ixCurrentSolution) = tasklist(ixSolution) U {ixTask}
            oc(ixCurrentSolution) = OC of tasks not in tasklist(ixCurrentSolution)
        endif
    endfor
endfor
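For reference, here is the same idea rendered as runnable Python, which may be easier to test than the pseudo-code. The opportunity_cost callback is a hypothetical stand-in for the OC computation, since the question says it can only be evaluated per set:

def enumerate_feasible(costs, cost_max, opportunity_cost):
    # Grow the list of feasible task sets one task at a time, pruning any
    # extension that would exceed the budget. opportunity_cost(tasks) is a
    # caller-supplied function scoring the tasks NOT in the set.
    solutions = [(frozenset(), 0.0)]          # (task set, total cost)
    for task, cost in enumerate(costs):
        for tasks, total in list(solutions):  # snapshot: don't extend sets
            if total + cost <= cost_max:      # created in this same pass
                solutions.append((tasks | {task}, total + cost))
    return min(solutions, key=lambda sol: opportunity_cost(sol[0]))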
