I'm having a bit of trouble figuring out how to create an algorithm that produces a schedule minimising the total time used. Here is the problem.
Consider a program with processes 1, 2, 3, 4, ..., n. All processes must be completed for the program to work, and some processes may depend on another process completing first (e.g. process 2 is dependent on process 1, meaning we must complete process 1 before starting process 2).
We are given an unlimited number of machines, so processes can be run in parallel.
All processes take the same time to complete.
Design an algorithm to use the lowest amount of time to run the program, given the lists of dependencies for all processes as input.
Thanks for any help!
The way I wanted to approach this was to think of the problem as a DAG (directed acyclic graph) and use a topological sort to create an order in which to complete the tasks. However, I am confused as to how to find the quickest schedule.
For example, let's say we have:
A is dependent on nothing
B is dependent on A
C is dependent on A
D is dependent on C
E is dependent on nothing
F is dependent on B, D, E
If each process takes 1 second to complete and we can use unlimited machines to process them in parallel, then we can complete the program in 4 seconds, right?
But how do I create an algorithm that works out that 4 seconds is the minimum?
Sorry if I haven't been able to explain the problem clearly.
Since we have an unlimited number of machines, as soon as it is possible to process a task, it is optimal to allocate a new machine to it. So we can build a greedy algorithm from this observation:
Let ToDo be an empty queue
Let time = 0
For every node u:
    If u has no dependency:
        Add (u, 1) at the end of ToDo
While ToDo is not empty:
    Pop (u, t) from the head of ToDo
    Let time = max(time, t)
    For all nodes v depending on u:
        If u was the only remaining dependency of v:
            Add (v, t+1) at the end of ToDo
    Remove u from the graph // (hence no node depends on u anymore)
Return time
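A minimal Java translation of this pseudocode might look as follows (the representation, adjacency lists of dependents plus an indegree array, is my assumption, not part of the question):

import java.util.*;

class LevelScheduler {
    // adj.get(u) = nodes that depend on u; indegree[u] = number of unmet dependencies of u
    static int minTime(List<List<Integer>> adj, int[] indegree) {
        Deque<int[]> todo = new ArrayDeque<>();            // entries are (node, finish second)
        for (int u = 0; u < indegree.length; u++)
            if (indegree[u] == 0) todo.add(new int[]{u, 1});
        int time = 0;
        while (!todo.isEmpty()) {
            int[] head = todo.poll();
            int u = head[0], t = head[1];
            time = Math.max(time, t);
            for (int v : adj.get(u))
                if (--indegree[v] == 0)                    // u was v's last remaining dependency
                    todo.add(new int[]{v, t + 1});
        }
        return time;
    }
}

On the example above it returns 4: A and E finish in second 1, B and C in second 2, D in second 3, and F in second 4.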
"Parallelism" is in quotes as I'm not actually referring to parallel programming (threads, forking, etc.).
I'm currently working on a dependency-graph problem: given a DAG (directed acyclic graph), each vertex represents a task that must be completed, and an edge from one vertex v to another vertex u means that v must be completed before u can be completed. Each task takes a certain amount of time to complete.
Example:
[image: a dependency graph]
(this is just an example; the program should be able to solve any DAG)
Using topological sorting I've found multiple orders in which to complete all the tasks one at a time. But I'm interested in introducing the idea that multiple tasks can be started and/or worked on at the same time. I'm assuming infinite "manpower", meaning any number of tasks can be worked on simultaneously. I want to find a way to complete all tasks in the project in the fastest time possible.
My task (vertex) class has the following variables (Java):
class Task {
    int id, time;
    String name;
    List<Task> outEdges;
    List<Task> inEdges;
    boolean isFinished;

    Task(int id, String name, int time) {
        this.id = id;
        this.name = name;
        this.time = time;
        isFinished = false;
        outEdges = new ArrayList<Task>();
        inEdges = new ArrayList<Task>();
        // Tasks and edges between them are generated while reading from a file.
    }
    ...
}
(As well as methods to get/manipulate them)
The graph itself is represented by an array of tasks.
What kinds of concepts/algorithms can I use to be able to do this?
Assuming you mean that if there is an arc in your graph from v to u, then v must be completed before u can be started.
This is simply finding the shortest project completion time when you think of the graph as a project precedence graph. Given node j (task) with activity time d_j >= 0, let s_j denote its start time.
Then the following linear program solves your problem:
Minimize s_N [i.e., minimize the starting time of the last activity]
such that s_j >= s_i + d_i for all arcs (i,j) in the graph [i.e., activity j can only start after activity i completes, whenever i precedes j]
such that all s_i >= 0 [i.e., all starting times are nonnegative; without these constraints the problem is unbounded].
Here, for convenience, 1 and N are dummy activities at the beginning and end of the project respectively, with 0 activity times. 1 is prior to all other nodes in the graph. N is posterior to all other nodes in the graph.
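Since the precedence graph is a DAG, this LP can also be solved combinatorially: the optimal s_j are exactly the longest-path distances into each node, computable in one pass over a topological order. A minimal Java sketch (array names are my assumptions):

import java.util.List;

// Computes the optimal s[j] of the LP above: the longest-path distance into node j.
// Assumes topo is a topological order, succ.get(i) = successors of i, d[i] = activity time.
static int[] earliestStarts(int[] topo, List<List<Integer>> succ, int[] d) {
    int[] s = new int[d.length];                 // s[j] starts at 0 (the dummy activity 1)
    for (int i : topo)
        for (int j : succ.get(i))
            s[j] = Math.max(s[j], s[i] + d[i]);  // tighten s_j >= s_i + d_i
    return s;                                    // s[N] is the minimum completion time
}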
This will probably depend on the structure of the specific graph.
If arriving at only one final node is the required result, an algorithm such as depth-first search can be used to provide the sequence of events. Once that sequence is available, there's no option for parallelization, since there is a strict sequential dependency.
If there is a need to arrive at more than one node (and the graph is traversable in a reasonable amount of time), an algorithm such as breadth-first search could be used to provide the required sequences. Say you want to arrive at n distinct vertices. A new semantic graph (let's call it g2) could then be defined by the parts of the paths that are common on the way to the required vertices; that is, each edge in the new graph would consist of the concatenation of the common path segments towards reaching those vertices. For g2, all edges could be launched in parallel.
Disclaimer: the above is not a rigorously proven algorithm, merely an implementation idea.
I assume that you are also given the number K of "workers" (processors) to complete the tasks; that is, at most K tasks can be worked on at the same time. Clearly, the time needed to complete all tasks will depend on K.
This problem is known as Precedence Constrained Scheduling.
If the number of workers K is greater than or equal to the number of tasks N, then the solution outlined by Tryer is correct: the weight of the heaviest path in your dependency DAG will give you the time required. (The heaviest path is sometimes called the critical path, and it can be computed in linear time using a single-pass shortest/longest-path algorithm for DAGs.)
If K = 1, as you already noticed, you just follow a topological order and the time required will be the sum of the tasks' times.
Unfortunately, in the interesting case where 1 < K < N, this is a hard (NP-hard) optimization problem, and all algorithms known to compute the optimal solution are inefficient. However, you can still get an approximately optimal solution by feeding the topological order to a so-called list scheduling algorithm. The idea of list scheduling is very simple. At each step of the algorithm, you cycle through all your K workers. To any worker that is idle, you assign an unassigned task that is available, i.e. whose dependencies have already been satisfied. Then you mark that task as assigned and continue. Once all workers are busy or cannot be assigned further tasks, you wait for some of the tasks to complete, and resume the algorithm as before.
It can be proven rigorously that the time T' at which the list scheduling algorithm will complete the last task is at most T* + P where T* is the optimal time and P is the weight of the critical path. Thus the algorithm will work well if the tasks on the critical path are a small fraction of the total.
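In code, list scheduling becomes a small event-driven simulation: a ready queue in list (topological) order plus a min-heap of running tasks keyed by finish time. A Java sketch, where the graph representation and names are my assumptions:

import java.util.*;

// k = number of workers; order = topological order (the priority list);
// adj.get(v) = tasks that depend on v; indegree[v] = unmet dependencies; dur[v] = task length.
static long listSchedule(int k, int[] order, List<List<Integer>> adj, int[] indegree, int[] dur) {
    Deque<Integer> ready = new ArrayDeque<>();
    for (int v : order) if (indegree[v] == 0) ready.add(v);
    PriorityQueue<long[]> running =                        // (finish time, task) pairs
        new PriorityQueue<>(Comparator.comparingLong((long[] a) -> a[0]));
    long now = 0;
    while (!ready.isEmpty() || !running.isEmpty()) {
        while (!ready.isEmpty() && running.size() < k) {   // hand tasks to idle workers
            int v = ready.poll();
            running.add(new long[]{now + dur[v], v});
        }
        long[] done = running.poll();                      // wait for the next completion
        now = done[0];
        for (int w : adj.get((int) done[1]))
            if (--indegree[w] == 0) ready.add(w);          // w's dependencies are now satisfied
    }
    return now;                                            // this is T'
}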
I know the algorithm exists, but I am having problems naming it and finding a suitable solution.
My problem is as follows:
I have a set of J jobs that need to be completed.
All jobs take different times to complete, but the time is known.
I have a set of R resources.
Each resource R may have anywhere from 1 to 100 instances.
A Job may need to use any number of resources R.
A job may need to use multiple instances of a resource R, but never more instances than the resource has. (If a resource only has 2 instances, a job will never need more than 2 instances.)
Once a job completes it returns all instances of all resources it used back into the pool for other jobs to use.
A job cannot be preempted once started.
As long as resources allow, there is no limit to the number of jobs that can simultaneously execute.
This is not a directed-graph problem; the jobs J may execute in any order as long as they can claim their resources.
My Goal:
To schedule the jobs so as to minimize run time and/or maximize resource utilization.
I'm not sure how good this idea is, but you could model this as an integer linear program, as follows (not tested):
Define some constants,
Use[j,i] = amount of resource i used by job j
Time[j] = length of job j
Capacity[i] = amount of resource i available
Define some variables,
x[j,t] = 1 if job j starts at time t, else 0
r[i,t] = amount of resource of type i used at time t
slot[t] = 1 if time slot t is used, else 0
The constraints are,
// every job must start exactly once
(1). for every j, sum[t](x[j,t]) = 1
// a resource can only be used up to its capacity
(2). r[i,t] <= Capacity[i]
// if a job is running, it uses resources
(3). r[i,t] = sum[j, s | s <= t < s + Time[j]] (x[j,s] * Use[j,i])
// if a job is running, then the time slot is used
(4). slot[t] >= x[j,s] for all j, s with s <= t < s + Time[j]
The third constraint means that if a job was started recently enough that it's still running, then its resource usage is added to the currently used resources. The fourth constraint means that if a job was started recently enough that it's still running, then this time slot is used.
The objective function is the weighted sum of slots, with higher weights for later slots, so that it prefers to fill the early slots. In theory the weights must increase exponentially to ensure using a later time slot is always worse than any configuration that uses only earlier time slots, but solvers don't like that and in practice you can probably get away with using slower growing weights.
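Written in the same notation as the constraints, the objective would be something along these lines (the exact weights w[t] are a modeling choice, not fixed by the problem):

minimize sum[t] (w[t] * slot[t]), with w[t] strictly increasing in t
// e.g. w[t] = 2^t guarantees later slots are always worse; w[t] = t + 1 often suffices in practice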
You will need enough slots that a solution exists, but preferably not too many more than you end up needing, so I suggest you start with a greedy solution to get a hopefully non-trivial upper bound on the number of time slots (there is always the trivial bound: the sum of the lengths of all tasks).
There are many ways to get a greedy solution; for example, just schedule the jobs one by one, each in the earliest time slot it will go. It may work better to order them by some measure of "hardness" and put the hard ones in first; for example, you could give each job a score based on how badly it uses up a resource (say, the sum of Use[j,i] / Capacity[i], or maybe the maximum? who knows, try some things) and then insert them in decreasing order of that score.
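A sketch of that first greedy variant in Java (not the ILP; all names are mine, and the job-order parameter is where a "hardness" ordering would go). It returns the makespan, i.e. an upper bound on the number of slots:

import java.util.*;

// use[j][i] = resource i needed by job j; time[j] = length of job j; cap[i] = capacity.
static int greedyMakespan(int[][] use, int[] time, int[] cap, Integer[] order) {
    int horizon = Arrays.stream(time).sum();          // trivial upper bound on slots
    int[][] used = new int[cap.length][horizon + 1];
    int makespan = 0;
    for (int j : order) {                             // e.g. hardest jobs first
        int start = 0;
        search:
        while (true) {
            for (int t = start; t < start + time[j]; t++)
                for (int i = 0; i < cap.length; i++)
                    if (used[i][t] + use[j][i] > cap[i]) {
                        start = t + 1;                // job j doesn't fit here; slide right
                        continue search;
                    }
            break;                                    // window [start, start + time[j]) fits
        }
        for (int t = start; t < start + time[j]; t++)
            for (int i = 0; i < cap.length; i++)
                used[i][t] += use[j][i];
        makespan = Math.max(makespan, start + time[j]);
    }
    return makespan;
}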
As a bonus, you may not always have to solve the full ILP problem (which is NP-hard, so it can sometimes take a while). If you solve just the linear relaxation (allowing the variables to take fractional values, not just 0 or 1), you get a lower bound, and the approximate greedy solutions give upper bounds. If they are sufficiently close, you can skip the costly integer phase and take a greedy solution. In some cases this can even prove the greedy solution optimal: if the rounded-up objective from the linear relaxation equals the objective of the greedy solution.
This might be a job for Dijkstra's algorithm. In your case, if you want to maximize resource utilization, then each node in the search space is the result of adding a job to the list of jobs you'll do at once, and the edges are the resources which are left after adding that job.
The goal, then, is to find the path to the node whose incoming edge has the smallest value.
An alternative, which is more straightforward, is to view this as a knapsack problem.
To construct this problem as an instance of The Knapsack Problem, I'd do the following:
Assuming I have J jobs, j_1, j_2, ..., j_n, and R resources, I want to find the subset of J such that, when that subset is scheduled, R is minimized (I'll call that subset J').
in pseudo-code:
def knapsack(J, R, J'):
    potential_solutions = []
    for j in J:
        if resources_used_by(j) <= R:
            potential_solutions.push( knapsack(J - {j}, R - resources_used_by(j), J' + {j}) )
    if potential_solutions is empty:
        return (J', R)
    return best_solution_of(potential_solutions)
There are N modules in the project. Each module i has a completion time, denoted in hours (H_i), and may depend on other modules. If module x depends on module y, then y must be completed before x. As project manager, you are asked to deliver the project as early as possible. Provide an estimate of the amount of time required to complete the project.
Input Format:
First line contains T, number of test cases.
For each test case: the first line contains N, the number of modules. Each of the next N lines contains: the module ID i, the number of hours H_i it takes to complete the module, and the set D of module ids that i depends on (integers delimited by spaces).
Output Format:
Output the minimum number of hours required to deliver the project.
Input:
1
5
1 5
2 6 1
3 3 2
4 2 3
5 1 3
Output: 16
I know the problem is related to topological sorting, but I can't work out how to find the total hours.
You are looking for the length of the critical path. This is the longest path through the network from start to finish in the digraph where the nodes are the tasks, an arrow from node A to node B represents a prerequisite relationship (A must be done before B begins), and the weight of an arrow is the time it takes to complete the source node's task. If there isn't a well-defined start and end node, it is common to create dummy nodes for that purpose: create a 0-cost arrow from the start node to all tasks with no prerequisites, and a 0-cost arrow from all nodes which aren't prerequisites to anything else to the end node. The start and end nodes themselves are just book-keeping devices; they shouldn't correspond to tasks which take any time to complete.
Topological sorting doesn't find it for you but is rather a form of pre-processing that allows you to find the critical path in a single pass. You use it to sort the nodes in such a way that the first node listed has no prerequisites and, when you come to a node in the sorted list, you are guaranteed that all prerequisite nodes have been processed. You process them by assigning a minimum start time for each task. The first node (the start node) in the sorted list has start time 0. When you get to a node for which all prerequisite nodes have been processed, the min start time of that node is
max{ m_i + t_i }
where i ranges over all prerequisite nodes, m_i is the min start time for node i, and t_i is the time it takes to do the task for node i. The point is that m_i + t_i is the minimum finish time for node i, and you take the max of such quantities because all prerequisite tasks must be finished before a given task can be begun. The minimum start time of the end node is the length of the critical path.
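Applied to the sample input above: module 1 starts at 0 and finishes at hour 5; module 2 starts at 5 and finishes at 11; module 3 starts at 11 and finishes at 14; modules 4 and 5 both start at 14 and finish at 16 and 15 respectively. The end node's minimum start time is max(16, 15) = 16 hours, matching the expected output.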
Create a directed graph G: if a depends on b, add a directed edge in G from b to a. Apply topological sort on G and store the result in an array called TOPO[]. Initialize time = H(0).
Now run a loop over the TOPO array starting from the second element.
If TOPO[i] depends on TOPO[i-1], they have to be performed one after the other, so add their task times:
time = time + H(i)
If TOPO[i] does not depend on TOPO[i-1], they can be performed together, so take the maximum of their task times:
time = max(time, H(i))
After the end of the loop, the variable time will have your answer.
Do this for every component separately and take the maximum over all of them.
I've been playing around with the forward-backward algorithm to find the most efficient path (determined by a cost function on how a current State differs from the next State) to go from State 1 to State N. In the picture below, a short version of the problem can be seen with just 3 States and 2 Nodes per State. I ran the forward-backward algorithm on that and found the best path as normal. The red bits in the picture are the paths checked during the forward-propagation part of the code.
Now the interesting bit, I now want to find the best 3-State Length path (as before) but now only Nodes in the first State are known. The other 4 are now free-floating and can be considered to be in any State (State 2 or State 3). I want to know if you guys have a good idea of how to do this.
Picture: http://i.imgur.com/JrQ2tul.jpg
Note: Bear in mind the original problem consists of around 25 States and 100 Nodes per State. So you'll know the State of around 100 Nodes in State 1, but the other 24*100 Nodes are Stateless. In this case, I want to find a 25-State-long path (with minimum cost).
Addendum: Someone pointed out that a better algorithm would be the Viterbi algorithm. So here is the problem with more variables thrown in. Can you guys explain how that would be implemented? The same rules apply: the path should start from one of the Nodes in State 1 (Node a or Node b). Also, the cost function using the norm doesn't make much sense in this case, since we only have one property (size of node), but in the actual problem I'm expecting a lot more properties.
A variation of Dijkstra's algorithm might be faster for your problem than the forward-backward algorithm, because it does not analyze all nodes at once. Dijkstra is a DP algorithm after all.
Let a node be specified by
Node:
    Predecessor : Node
    Total cost : Number
    Visited nodes : Set of nodes (e.g. a hash set or another performant set)
Initialize the algorithm with
    open set : set of nodes ordered by total cost = the set of possible start nodes, each with visitedNodes set to the one-element set containing itself
    ( = {a, b} in your example)
Then execute the algorithm:
do
    n := pop element from open set
    if n.visitedNodes.count == stepTarget
        we're done, backtrace the path from this node
    else
        for each n2 in available nodes
            if not n2 in n.visitedNodes
                push a copy of n2 to the open set (the same node might appear multiple times in the set):
                    .totalCost := n.totalCost + norm(n2 - n)
                    .visitedNodes := n.visitedNodes u { n2 }   // u = set union
                    .predecessor := n
        next
loop
If calculating the norm is expensive, you might want to calculate it on demand and store it in a map.
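In Java, the same search could be sketched like this (the state class, norm matrix, and method names are my inventions following the pseudocode above):

import java.util.*;

class SearchState {                         // one partial path
    SearchState predecessor;
    double totalCost;
    Set<Integer> visited;                   // ids of the nodes on this path
    int last;                               // most recently added node
}

static SearchState bestPath(int[] startNodes, int stepTarget, double[][] norm) {
    PriorityQueue<SearchState> open =
        new PriorityQueue<>(Comparator.comparingDouble((SearchState s) -> s.totalCost));
    for (int a : startNodes) {              // {a, b} in the example
        SearchState s = new SearchState();
        s.visited = new HashSet<>(Collections.singleton(a));
        s.last = a;
        open.add(s);
    }
    while (!open.isEmpty()) {
        SearchState n = open.poll();        // cheapest partial path so far
        if (n.visited.size() == stepTarget)
            return n;                       // done: backtrace via the predecessor links
        for (int n2 = 0; n2 < norm.length; n2++) {
            if (n.visited.contains(n2)) continue;
            SearchState c = new SearchState();   // copies of a node may coexist in the set
            c.predecessor = n;
            c.totalCost = n.totalCost + norm[n.last][n2];
            c.visited = new HashSet<>(n.visited);
            c.visited.add(n2);
            c.last = n2;
            open.add(c);
        }
    }
    return null;                            // no path of the requested length exists
}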
(Before anyone asks, this is not homework.)
I have a set of workers with interests, i.e.:
Bob: Java, XML, Ruby
Susan: Java, HTML, Python
Fred: Python, Ruby
Sam: Java, Ruby
etc.
(There are actually somewhere in the range of 10-25 "interests" for each worker, and I have around 40-50 workers)
At the same time, I have a very large set of tasks that need to be distributed among the workers. Each task has to be assigned to at least 3 workers, and each worker must match at least one of the task's interests:
Task 1: Ruby, XML
Task 2: XHTML, Python
and so on. So Bob, Fred, or Sam could get Task 1; Susan or Fred could get Task 2.
This is all stored in a database thusly:
Task
    id integer primary key
    name varchar
TaskInterests
    task_id integer
    interest_id integer
Workers
    id integer primary key
    name varchar
    max_assignments integer
WorkerInterests
    worker_id
    interest_id
Assignments
    task_id
    worker_id
    date_assigned
Each worker has a maximum number of assignments they will do, around 10. Some interests are rarer than others (i.e. only 1 or 2 workers have listed them as an interest), while some interests are more common (i.e. half of the workers list them).
The algorithm must:
Assign every task to 3 workers (it is assumed that at least 3 of the workers are interested in one of the interests of the task).
Assign every worker 1 or more tasks.
Ideally, the algorithm will:
Assign each worker a number of tasks proportional to their maximum assignments and the total number of tasks. For example, if Susan says she will do 20 tasks and most people will only do 10 tasks and there are 50 workers and 300 tasks, she should be assigned 12 tasks (20/10*(300/50)).
Assign a variety of tasks to each worker, so if Susan lists 4 interests she gets tasks that include 4 interests (rather than getting 10 tasks all with the same interest)
The most difficult aspect so far has been dealing with these issues:
tasks having interests with few corresponding workers
workers who have few interests, especially workers whose few interests have relatively few corresponding tasks
This problem can be modeled as a Maximum Flow Problem.
In a max-flow problem, you have a directed graph with two special nodes, the source and the sink. The edges in the graph have capacities, and your goal is to assign a flow through the graph from the source to the sink without exceeding any of the edge capacities.
With a (very) carefully crafted graph, we can find an assignment meeting your requirements from the maximum flow.
Let me number the requirements.
Required:
1. Workers are assigned no more than their maximum assignments.
2. Tasks can only be assigned to workers that match one of the task's interests.
3. Every task must be assigned to 3 workers.
4. Every worker must be assigned to at least 1 task.
Optional:
5. Each worker should be assigned a number of tasks proportional to that worker's maximum assignments
6. Each worker should be assigned a variety of tasks.
I will assume that the maximum flow is found using the Edmonds-Karp Algorithm.
Let's first find a graph that meets requirements 1-3.
Picture the graph as 4 columns of nodes, where edges only go from nodes in a column to nodes in the neighboring column to the right.
In the first column we have the source node. In the next column we will have nodes for each of the workers. From the source, there is an edge to each worker with capacity equal to that worker's maximum assignments. This will enforce requirement 1.
In the third column, there is a node for each task. From each worker in the second column there is an edge to each task that that worker is interested in with a capacity of 1 (a worker is interested in a task if the intersection of their interests is non-empty). This will enforce requirement 2. The capacity of 1 will ensure that each worker takes only 1 of the 3 slots for each task.
In the fourth column we have the sink. There is an edge from each task to the sink with capacity 3. This will enforce requirement 3.
Now, we find a maximum flow in this graph using the Edmonds-Karp Algorithm. If this maximum flow is less than 3 * (# of tasks), then there is no assignment meeting requirements 1-3. Otherwise, there is such an assignment, and we can find it by examining the final residual graph: if there is a residual edge from a task to a worker with capacity 1, then that worker is assigned to that task.
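To make the construction concrete, here is a sketch of building the capacity matrix for requirements 1-3 (the node numbering is mine, and maxFlow is assumed to be any Edmonds-Karp implementation, not a specific library):

// Nodes: 0 = source, 1..W = workers, W+1..W+T = tasks, W+T+1 = sink.
static int[][] buildCapacities(int W, int T, int[] maxAssignments, boolean[][] interested) {
    int sink = W + T + 1;
    int[][] cap = new int[sink + 1][sink + 1];
    for (int w = 0; w < W; w++)
        cap[0][1 + w] = maxAssignments[w];        // requirement 1
    for (int w = 0; w < W; w++)
        for (int t = 0; t < T; t++)
            if (interested[w][t])                 // worker w shares an interest with task t
                cap[1 + w][1 + W + t] = 1;        // requirement 2: one slot per worker per task
    for (int t = 0; t < T; t++)
        cap[1 + W + t][sink] = 3;                 // requirement 3
    return cap;
}
// An assignment meeting requirements 1-3 exists iff maxFlow(cap, 0, sink) == 3 * T.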
Now, we will modify our graph and algorithm to meet the rest of the requirements.
First, let's meet requirement 4. This requires a small change to the algorithm. Initially, set all the capacities from the source to the workers to 1. Find the max-flow in this graph. If the flow is not equal to the number of workers, then there is no assignment meeting requirement 4. Now, in your final residual graph, for each worker the edge from the source to that worker has capacity 0 and the reverse edge has capacity 1. Change these to that worker's maximum assignments - 1 and 0, respectively. Now continue the Edmonds-Karp algorithm as before. Basically, what we have done is first find an assignment in which each worker is assigned exactly one task, then delete the reverse edge for that worker so that the worker will always remain assigned to at least one task (though it may not be the one assigned in the first pass).
Now let's meet requirement 5. Strictly speaking, this requirement just means that we divide each worker's maximum assignments by (sum of all workers' maximum assignments / number of tasks). This will quite possibly not have a satisfying assignment. But that's ok. Initialize our graph with these new maximum assignments and run Edmonds-Karp. If it finds a flow that saturates the edges from tasks to sink, we are done. Otherwise, we can increment the capacities from the source to the workers in the residual graph and continue running Edmonds-Karp. Repeat until we saturate the edges into the sink. Don't increment the capacities so much that a worker is assigned too many tasks. Also, technically, the increment for each worker should be proportional to that worker's maximum assignments. Both of these are easy to do.
Finally, let's meet requirement 6. This one is a bit tricky. First, add a column between workers and tasks and remove all edges from workers to tasks. In this new column, for each worker add a node for each of that worker's interests. From each of these new nodes, add an edge with capacity 1 to each task with a matching interest. Add an edge from each worker to each of its interest nodes with capacity 1. Now, a flow in this graph enforces that if a worker is assigned to n tasks, then the intersection of the union of those tasks' interests with that worker's interests has size at least n. Again, it is possible that a satisfying assignment exists even though this graph does not admit one. We can handle this the same way as requirement 5: run Edmonds-Karp to completion; if there is no satisfying assignment, increment the capacities from workers to their interest nodes and repeat.
Note that in this modified graph we no longer satisfy requirement 3, as a single worker may be assigned to multiple/all slots of a task if the intersection of their interests has size greater than 1. We can fix that. Add a new column of nodes between the interest nodes and the task nodes and delete the edges between those nodes. For each worker, in the new column insert a node for each task (so each worker has its own node for each task). From each of these new nodes, add an edge with capacity 1 to the corresponding task on the right. From each worker's interest nodes, add an edge with capacity 1 to each of that worker's task nodes whose task matches the interest.
-
EDIT: Let me try to clarify this a little. Let -(n)-> be an edge with n capacity.
Previously we had worker-(1)->task for each worker-task pair with a matching interest. Now we have worker-(k)->local interest-(1)->local task-(1)->global task. Now, you can think of a task being matched to a worker-interest pair. The first edge says that for a worker, each of its interests can be matched to k tasks. The second edge says that each of a worker's interests can only be matched once to each job. The third edge says that each task can only be assigned once to each worker. Note that you could push multiple flow from the worker to a local task (equal to the size of the intersection of their interests) but only 1 flow from the worker to the global task node due to the third edge.
-
Also note that we can't really mix this incrementing with the one for requirement 5 correctly. However, we can run the whole algorithm once for each capacity {1,2,...,r} for worker->interest edges. We then need a way to rank the assignments. That is, as we relax requirement 5 we can better meet requirement 6 and vice versa. However, there is another approach that I prefer for relaxing these constraints.
A better approach to requirement relaxation (inspired-by/taken-from templatetypedef)
When we want to be able to relax multiple requirements (e.g. 5 and 6), we can model it as a min-cost max-flow problem. This may be simpler than the incremental search that I described above.
For example, for requirement 5, set all the edge costs to 0. We keep the initial edge from the source to each worker, with capacity equal to worker's maximum assignments / (sum of all workers' maximum assignments / number of tasks) and cost 0. Then you can add another edge with the remaining capacity for that worker and cost 1. Another possibility would be to use some sort of progressive cost, such that as you add tasks to a worker, the cost to add another task to that worker goes up. E.g. you could instead split a worker's remaining capacity into individual edges with costs 1, 2, 3, 4, ....
A similar thing could then be done between the worker nodes and the local-interest nodes for requirement 6. The weighting would need to be balanced to reflect the relative importance of the different requirements.
This method is also sufficient to enforce requirement 4. Also, the costs for requirement 5 should probably be made such that they are proportional to a worker's maximum assignments. Then assigning 1 extra task to a worker with max 100 would not cost as much as assigning an extra to a worker with max 2.
Complexity Analysis
Let n = # of employees, m = # of tasks, k = max interests for a single task/worker, l = # of interests, j = maximum of maximum assignments.
Requirement 3 implies that n = O(m). Let's also assume that l = O(m) and j = O(m).
In the smaller graph (before the change for req. 6), the graph has n + m + 2 = O(m) vertices and at most n + m + k*min(n, m) = O(km) edges.
After the change it has 2 + n + n * l + n * m + m = O(nm) vertices and n + k * n + k * m * n + n * m + m = O(kmn) edges (technically we may need j * n + j * l more nodes and edges so that there are not multiple edges from one node to another, but this wouldn't change the asymptotic bound). Also note that no edge need have capacity > j.
Using the min-cost max-flow formulation, we can find a solution in O(V * E * B * log V), where V = # vertices, E = # edges, and B = max capacity on a single edge. In our case this gives O(k * j * n^2 * m^2 * log(nm)).
For problems where finding a direct solution is difficult, it can be a good idea to use an approximation algorithm, an evaluation function, and a method to improve the solution. There are a variety of approaches, such as genetic algorithms and simulated annealing.
The basic idea is to use some sort of simple algorithm (such as a greedy algorithm) to get something that is vaguely usable and make random modifications, keeping those modifications that improve the evaluation score and discarding those that make it worse.
With genetic algorithms, a group of, for example, 100 random solutions is generated and scored. The best are kept and "bred" to produce a new generation of solutions with characteristics similar to the previous generations, but with some random mutations.
With simulated annealing, the probability of a slightly worse solution being accepted is high initially, but decreases over time. This reduces the risk of getting stuck in a local optimum early on.
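As an illustration, the core accept/reject loop of simulated annealing is only a few lines; everything problem-specific hides in the score and mutate functions (this skeleton assumes higher scores are better, and all names are placeholders):

import java.util.*;
import java.util.function.*;

static <S> S anneal(S current, ToDoubleFunction<S> score, UnaryOperator<S> mutate,
                    double temp, double cooling, int steps, Random rnd) {
    double cur = score.applyAsDouble(current);
    for (int i = 0; i < steps; i++) {
        S next = mutate.apply(current);           // one random modification
        double nxt = score.applyAsDouble(next);
        // always accept improvements; accept worse solutions with a probability
        // that starts high and shrinks as the temperature drops
        if (nxt >= cur || rnd.nextDouble() < Math.exp((nxt - cur) / temp)) {
            current = next;
            cur = nxt;
        }
        temp *= cooling;                          // e.g. cooling = 0.999
    }
    return current;
}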
Try mapping your task to the stable marriage problem. Tasks become prospective wives, and your staff become suitors.
You might want to add some extra algorithm for assigning each task's preferences over the staff, and vice versa: you could assign some ideal proficiency necessary for the components of each task and then allow your staff to rank each task, and you could assign a proficiency for each component that each staff member possesses and use that to derive each task's preference over staff members.
Once you have the preferences, run the algorithm and post the results, then allow people to apply to you in pairs to swap assignments. After all, this is a people problem, and people work better when they have a degree of control.
So I gave this problem some thought and I think that you can get a good solution (for some definition of "good") by reducing it to an instance of min-cost max-flow (see this, for example). The idea is as follows. Suppose you are given as input a set of jobs J, each of which has a set of skills necessary, along with a set of workers W, each of whom has a set of talents. You are also given for each worker a constant k_i saying how many jobs you'd like them to do, as well as a constant m_i saying the maximum number of jobs you can allocate to them. Your goal is to assign the jobs to the workers in such a way that each job is done by a worker who has the skills, no worker does more than m_i jobs, and the number of "excess" jobs done by the workers is minimized. For example, if there are four workers who each want to do four tasks and the load is balanced so that two workers do four jobs, one does three, and one does five, the total excess is one, since one worker did one more job than was expected.
The reduction is as follows. For now, we'll ignore the balancing requirement and just see how to reduce this to max-flow; we'll add load balancing at the end. Construct a graph G with a designated source node s and sink node t. Add to this graph a node for each job j and each worker w. There is an edge from s to each of the j nodes with cost zero and capacity one. There is also an edge from each w node to t with cost zero and capacity m_i. Finally, for each job j and worker w, if worker w has the talents necessary to complete job j, there is an edge from j to w with cost zero and capacity one.
The idea is that we want to push flow from s to t through the j and w nodes such that each flow path going through some j node to a w node means that job j should be given to worker w. The capacity restrictions on the edges from s to j nodes ensure that at most one unit of flow enters each j node, so the job is only assigned at most once. The capacity restrictions on the edges from the w nodes to the node t prevent each worker from being assigned too many times. Since all capacities are integral, an integral max flow exists from s to t, and so a max-flow in this graph corresponds to an assignment of jobs to workers that is legal and doesn't exceed any worker's maximum load. You can check whether all jobs are assigned by looking at the total flow in the graph; if it's equal to the number of jobs, they've all been assigned.
This above construction, however, does nothing to balance worker loads. To fix this, we'll modify the construction a bit. Rather than having an edge from each w node to t, instead, for each w node, add two nodes to the graph, c and e, and connect them as follows. There is an edge from w_i to c_i with capacity k_i and cost zero, and an identical edge from c_i to t. There is also an edge from w_i to e_i with cost 1 and capacity m_i - k_i. There is also an edge from e_i to t with equal capacity and zero cost.
Intuitively, we haven't changed the amount of flow that leaves any w node, but we have changed how much that flow costs. Flow shunted to t via the c node is free, and so the worker can take on k_i jobs without incurring cost. Any jobs after that have to be routed through e, which costs one for each unit of flow crossing it. Finding a max-flow in this new graph still determines an assignment, but finding the min-cost max-flow in the graph finds the assignment that minimizes the excess jobs divvied up to workers.
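As a sketch, the per-worker gadget can be emitted for any min-cost max-flow solver like this (addEdge(from, to, capacity, cost), newNode(), workerNode(i), and sink are assumed helpers, not a real API):

// Flow through c_i is free; flow through e_i costs 1 per excess job.
void addWorkerGadgets(int numWorkers, int[] k, int[] m, int sink) {
    for (int i = 0; i < numWorkers; i++) {
        int w = workerNode(i), c = newNode(), e = newNode();
        addEdge(w, c, k[i], 0);                   // the first k_i jobs incur no cost
        addEdge(c, sink, k[i], 0);
        addEdge(w, e, m[i] - k[i], 1);            // each job beyond k_i costs 1
        addEdge(e, sink, m[i] - k[i], 0);
    }
}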
Min-cost max flows can be solved in polynomial time with a few somewhat-well-known algorithms, so hopefully this is a useful answer!