I am self-studying max flow and came across this problem.
The original problem is:
Suppose we have a list of jobs
{J1, J2, ..., Jm}
and a list of people who have applied for them
{P1, P2, P3, ..., Pn}
Each person has different interests, and some of them have applied for multiple jobs (each person has a list of jobs they can do).
Nobody is allowed to do more than 3 jobs.
So this problem can be solved by finding a maximum flow in the graph below.
I understand this solution, but here is the harder version of the problem: what if these conditions are added?
The first 3 conditions of the easy version (the lists of jobs and people, and each person's list of interests or abilities) are still the same.
The company is employing only Vi persons for job Ji.
The company wants to employ as many people as possible.
There is no limitation on the number of jobs a person is allowed to do.
What difference should I make in the graph so that my solution satisfies those conditions as well? Or, if I need a different approach, please tell me.
Before anyone says anything, this is not homework. It's just self-study, but I am studying maximum flow and the problem was in that area, so the solution should use maximum flow.
For multiple persons for a single job:
The edge from Ji to t will have a capacity equal to the number of people allowed for that job. E.g. if job #1 can have three people, this translates to a capacity of three for the edge from J1 to t.
For the requirement of hiring as many people as possible:
I don't think this is possible with a single flow graph. Here is an algorithm for how it could be done:
1. Run the flow algorithm once.
2. For each person:
   1. Try to decrease the incoming capacity to one below the current flow rate.
   2. Run the flow algorithm again.
   3. While this does not decrease the total flow, repeat from (2.1.).
   4. Increase the capacity by one, to restore the maximum flow.
3. Until no further persons get added, repeat from (2.).
For no limitation on number of jobs:
The edges from s to Pi will have a capacity equal to the number of applicable jobs for that person.
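Here is a minimal NetworkX sketch of that modified graph (the names, application lists and capacities are made-up illustrative data; the iterative procedure above would re-run the final call with adjusted capacities):

import networkx as nx

# applications[p] = jobs person p applied for; slots[j] = Vi openings for job j
applications = {'P1': ['J1', 'J2'], 'P2': ['J1'], 'P3': ['J2', 'J1']}
slots = {'J1': 1, 'J2': 2}

G = nx.DiGraph()
for person, jobs in applications.items():
    G.add_edge('s', person, capacity=len(jobs))  # no limit on jobs per person
    for job in jobs:
        G.add_edge(person, job, capacity=1)      # each application fills at most one slot
for job, v in slots.items():
    G.add_edge(job, 't', capacity=v)             # only Vi people hired for job Ji

flow_value, flow = nx.maximum_flow(G, 's', 't')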
Assume I have 4 lists: jobs, workers, mechanisms and mechanism equipment.
Currently I am first looping through jobs; let's call this the jobLoop.
Inside jobLoop I'm looping through workerLoop, checking if the worker is available and has the required competences to do the job.
If the worker is OK, I'm looping through mechanismLoop, checking if the worker can use the mechanism and if the mechanism is available. If no mechanisms are available, I fall back to workerLoop, looking for another suitable worker.
If the mechanism is OK, I loop through mechEquipmentLoop, checking if the worker can use the equipment and if the equipment is available. If no equipment is available, I fall back to mechanismLoop, looking for another suitable mechanism.
If the mechanism equipment is finally OK, the algorithm is done. If not, the algorithm reports that the items cannot be matched.
This is a simplified version; at each step there are many more checks, like whether the worker is allowed on the site where the job is done, and so on.
I'm trying to think of a more efficient way to do this. Currently the time complexity of this algorithm should be roughly O(n^4), right? I'm not looking for code, just guidance on how to approach this.
IMHO this algorithm is O(j*w*m*e) rather than O(n^4), where j = number of jobs, w = number of workers, m = number of mechanisms, and e = number of mechanism equipment items.
If these lists don't change and the answer is needed only once, this is the best you can do: you need to visit all the inputs at least once.
If the lists do change and the same algorithm needs to run for a given job multiple times, you can do the following.
Store the workers in a BST (or a self-balancing tree like an AVL tree) keyed by job competency. If a worker has multiple competencies, his data appears under each of them. Building the tree is O(w log w), where w here is the number of unique competency-worker combinations, not the number of workers alone. Deletion, insertion and search are then O(log w). We are assuming that the competency-worker distribution is reasonable; if all workers have only one competency, search degrades to O(w) again.
The same applies to mechanisms and equipment. This makes the search at each level O(log m) and O(log e).
So for every job the best-case cost of an allocation is O(log w * log m * log e), with a one-time overhead of O(w log w + m log m + e log e) to build the trees. For all jobs it is O(j * log w * log m * log e).
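A rough Python sketch of the indexing idea (using a hash map rather than a BST, so lookups are O(1) on average instead of O(log w); the data layout is an assumption):

from collections import defaultdict

# Index workers by competency once, so a job lookup no longer scans
# every worker. The same pattern applies to mechanisms and equipment.
workers_by_competency = defaultdict(set)

def add_worker(worker, competencies):
    for c in competencies:
        workers_by_competency[c].add(worker)

def remove_worker(worker, competencies):
    for c in competencies:
        workers_by_competency[c].discard(worker)

def candidates_for(job_competency):
    return workers_by_competency[job_competency]  # direct lookup, no scan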
I have been playing around with algorithms and ILP for the single depot vehicle scheduling problem (SDVSP) and now want to extend my knowledge towards the multiple depot vehicle scheduling problem (MDVSP), as I would like to use this knowledge in a project of mine.
As for the question: I've found and implemented several algorithms for the MDVSP. However, one thing I am very curious about is how to determine the number of needed depots (and, to an extent, their locations). Sadly, I haven't been able to find any resources that do not assume/require that the depots are fixed. Thus my question is: how can I approach an MDVSP in which I can determine the number and locations of the depots?
(Edit) To clarify:
Assume we are given a set of trips T1, T2, ..., Tn, as usual in an SDVSP or MDVSP. Multiple trips can be driven in succession before returning to a depot. Leaving and returning to depots usually only happens at the start and end of a day. But as an extension of the normal problems, we can now determine the number and locations of our depots, as opposed to having fixed depots.
The objective is to find a solution in which all trips are driven at minimal cost. The cost consists of the amount of deadheading (the distance the car has to travel between trips, and from and to the depots), a fixed cost K per car, and a fixed cost C per depot.
I hope this clears up the question somewhat.
The standard approach involves adding |V| binary variables to the ILP, one for each node, where x_i = 1 if v_i is a depot and 0 otherwise.
However, the way the question is currently articulated, all x_i values will come out to be zero, since there is no "advantage" to making a node a depot and the total cost = (other cost factors) + sum_i x_i * FIXED_COST_PER_DEPOT.
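As an illustrative fragment (using PuLP; the node set, names and cost value are assumptions, and the routing part of the model is omitted):

import pulp

nodes = range(10)              # candidate depot locations (made up)
FIXED_COST_PER_DEPOT = 1000

prob = pulp.LpProblem("mdvsp_depot_selection", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"depot_{i}", cat="Binary") for i in nodes}

# Objective: (other cost factors, omitted) + depot opening costs.
prob += pulp.lpSum(FIXED_COST_PER_DEPOT * x[i] for i in nodes)

# Linking constraints (omitted) must force x[i] = 1 whenever a vehicle
# starts or ends at node i; without them, every x[i] stays 0, as noted.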
Perhaps the question needs to be updated with some other constraint about the range of the car, for example: a car can only go so many miles before returning to a depot.
I have a problem finding an algorithm for sorting a dataset of people. I'll try to explain in as much detail as possible.
The story starts with a survey. A bunch of people, let's say 600, can choose between 20-25 projects. They make a #1 wish, #2 wish and #3 wish, where #1 is the project they most want to take part in and #3 is the "not-perfect-but-most-acceptable" choice.
These projects are limited in their number of participants. Each project can take around 30 people (based on the number of people and the number of projects).
The algorithm puts the people into the different projects and should find the best possible combination.
The problem is that you can't just put all the people whose #1 wish is project X into that project and stuff everyone else who also wished for X first into their #2 wish, because that would not be the "happiest" situation for everybody.
You can see what I mean if you imagine that for everybody who gets their #1 wish you get 100 points, for everybody who gets their #2 wish 60 points, for a #3 wish 30 points, and for somebody who gets none of their wishes 0 points. You want to score as many points as possible.
I hope you get my problem. This is for a school project day.
Is there something out there that could help me? Do you have any ideas? I would be thankful for every tip!
Kind regards
You can solve this optimally by formulating it as a min-cost network flow problem.
Add a node for each person, and one for each project.
Set the cost of a flow between a person and a project according to their preferences.
(As NetworkX provides a min cost flow, but not a max cost flow, I have set the costs to be negative.)
For example, using NetworkX and Python:
import networkx as nx

G = nx.DiGraph()
prefs = {'Tom':   ['Project1', 'Project2', 'Project3'],
         'Dick':  ['Project2', 'Project1', 'Project3'],
         'Harry': ['Project1', 'Project3', 'Project2']}
capacities = {'Project1': 2, 'Project2': 10, 'Project3': 4}
num_persons = len(prefs)
G.add_node('dest', demand=num_persons)  # all persons must end up somewhere
for person, projectlist in prefs.items():
    G.add_node(person, demand=-1)       # each person supplies one unit of flow
    for i, project in enumerate(projectlist):
        if i == 0:
            cost = -100  # happy to assign first choices
        elif i == 1:
            cost = -60   # slightly unhappy to assign second choices
        else:
            cost = -30   # very unhappy to assign third choices
        G.add_edge(person, project, capacity=1, weight=cost)  # edge taken if person does this project
for project, c in capacities.items():
    G.add_edge(project, 'dest', capacity=c, weight=0)
flowdict = nx.min_cost_flow(G)
for person in prefs:
    for project, flow in flowdict[person].items():
        if flow:
            print(person, 'joins', project)
In this code Tom's number 1 choice is Project1, followed by Project2, then Project3.
The capacities dictionary specifies the upper limit on how many people can join each project.
My algorithm would be something like this:
mainloop
    wishlevel = 1
    loop
        Distribute people into all projects according to wishlevel wish
        loop through projects, counting population
            If population exceeds maximum
                Distribute excess non-redistributed people into their wishlevel+1 projects that are under-populated
                tag distributed people as 'redistributed' to avoid moving again
            endif
        endloop
        wishlevel = wishlevel + 1
    loop until wishlevel == 3
mainloop until no project exceeds max population
This should make several passes through the data set until everything is evened out. Note that forbidding redistribution of already-redistributed people may result in an endless loop if a project fills up with such people as the algorithm progresses, so you might also try it without that restriction.
I am studying greedy algorithms and I am wondering about the solution for a different case.
For the interval selection problem we want to pick the maximum number of activities that do not clash with each other, and selecting the job with the earliest finishing time works.
Another example: we have n jobs given and we want to buy as few resources as possible. Here, we can sort all the job endpoints from left to right; when we encounter a start point we increment a counter, and when we encounter an endpoint we decrement the counter. The largest value this counter reaches is the number of resources we need to buy.
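For instance, a minimal sketch of that counting sweep (assuming jobs are given as (start, end) pairs):

def min_resources(intervals):
    # Sweep the sorted endpoints; the peak of the counter is the
    # number of resources needed to run every job.
    events = []
    for start, end in intervals:
        events.append((start, 1))   # job starts: one more resource in use
        events.append((end, -1))    # job ends: one resource freed
    events.sort()                   # at equal times, ends (-1) sort before starts (+1)
    count = peak = 0
    for _, delta in events:
        count += delta
        peak = max(peak, count)
    return peak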
But what if we have n tasks and only k resources, i.e. we cannot afford more than k resources? What would a greedy solution look like that removes as few tasks as possible to satisfy this?
Also, if there is a specific name for this last problem, I would be happy to hear it.
This looks like a general case of the version where we have only one resource.
Intuitively, it makes sense to still sort the jobs by end time and take them one by one in that order. Now, instead of the ending time of the last job, we keep track of the ending times of the last k jobs accepted onto our resources. For each job, we check whether its start time is greater than the ending time of the last job on any one of our resources. If no such resource is found, we skip that job and move on. If one resource is found, we assign the job to that resource and update its ending time. If more than one resource can take the job, it makes sense to assign it to the resource with the latest end time.
I don't really have a proof of this greedy strategy, so it may well be wrong. But I cannot think of a case where changing the choice might enable us to fit more jobs.
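A rough sketch of that strategy in Python (jobs as (start, end) pairs; assumes non-negative start times and that a job may begin exactly when another ends):

import bisect

def max_tasks(intervals, k):
    intervals = sorted(intervals, key=lambda iv: iv[1])  # by finishing time
    ends = [0] * k   # current end time of each resource, kept sorted
    kept = 0
    for start, end in intervals:
        # rightmost entry <= start: the compatible resource with the latest end time
        i = bisect.bisect_right(ends, start) - 1
        if i >= 0:
            ends.pop(i)
            bisect.insort(ends, end)
            kept += 1
    return kept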
Let's pretend I have two buildings in which I can build different units.
A building can only build one unit at a time, but has a FIFO queue of max 5 units, which will be built in sequence.
Every unit has a build time.
I need to know the fastest way to get my units built, considering the units already in the build queues of my buildings.
"Famous" algorithms like round-robin don't work here, I think.
Are there any algorithms which can solve this problem?
This reminds me a bit of starcraft :D
I would just add an integer to each building's queue which represents the time it is busy.
Of course you have to update this variable once per time unit. (Time units are seconds, "s", here.)
So let's say we have a building and we submit 3 units that each take 5s to complete, which sums up to 15s total. We are at time = 0.
Then we have another building where we submit 2 units that need 6s each to complete.
So we can have a table like this:
Time 0
Building 1: 3 units, 15s to complete.
Building 2: 2 units, 12s to complete.
Time 1
Building 1: 3 units, 14s to complete.
Building 2: 2 units, 11s to complete.
Now we want to add another unit that takes 2s. We can simply loop through the buildings and pick the one with the lowest time to complete, in this case Building 2. This leads to Time 2...
Time 2
Building 1: 3 units, 13s to complete.
Building 2: 3 units, 10s+2s=12s to complete.
...
Time 5
Building 1: 2 units, 10s to complete (5s are over, the first unit pops out).
Building 2: 3 units, 9s to complete.
And so on.
Of course you have to take care of the upper bounds of your production facilities: if a building's queue already holds 5 elements, don't assign anything to it; pick the building with the next lowest time to complete.
I don't know if you can implement this easily in your engine, or if it even supports some kind of time unit.
This just means updating all production facilities once per time unit, which is O(n) where n is the number of buildings that can produce something. Submitting a unit takes O(1) if you keep the buildings in sorted order, lowest time to complete first, so it is just a first-element lookup. In that case you have to re-sort the list after manipulating the queues, e.g. after cancelling or adding units.
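A small sketch of the submission step, using a heap instead of a sorted list (MAX_QUEUE and the tuple layout are assumptions):

import heapq

MAX_QUEUE = 5

def submit_unit(buildings, build_time):
    # buildings: min-heap of (time_to_complete, queue_length, building_id)
    skipped = []
    assigned = None
    while buildings:
        t, qlen, bid = heapq.heappop(buildings)
        if qlen < MAX_QUEUE:             # lowest time-to-complete with space
            heapq.heappush(buildings, (t + build_time, qlen + 1, bid))
            assigned = bid
            break
        skipped.append((t, qlen, bid))   # queue full, try the next building
    for entry in skipped:                # put the full buildings back
        heapq.heappush(buildings, entry)
    return assigned                      # None if every queue is full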
Otherwise, amit's answer seems to be possible, too.
This is an NP-complete problem (proof at the end of the answer), so your best hope of finding the ideal solution is to try all possibilities (that is 2^n possibilities, where n is the number of tasks).
A possible heuristic was suggested in a comment (and improved in the comments by AShelly): sort the tasks from biggest to smallest, put them in one queue, and let each building take the next element from the queue when it is done.
This is of course not always optimal, but I think it will give good results in most cases.
Proof that the problem is NP-complete:
Let S = {u | u is a unit that needs to be produced} (S is the set containing all 'tasks').
Claim: if a perfect split is possible (both queues finish at the same time), it is optimal. Call this finishing time HalfTime. This is true because in any different solution at least one of the queues would have to finish at some t > HalfTime, and thus that solution would not be optimal.
Proof: assume we had an algorithm A that produces the best solution in polynomial time. Then we could solve the partition problem in polynomial time by the following algorithm: given a partition instance, create one unit per number, with that number as its build time, and then:
1. run A on this input;
2. if the 2 queues finish exactly at HalfTime, return True;
3. else, return False.
This solves the partition problem because of the claim: if a perfect partition exists, it will be found by A, since it is optimal. All steps run in polynomial time (step 1 by assumption; steps 2 and 3 are trivial). So the suggested algorithm solves the partition problem in polynomial time, and thus our problem is NP-complete.
Q.E.D.
Here's a simple scheme:
Let U be the list of units you want to build, and F be the set of factories that can build them. For each factory, track the total time-til-complete, i.e. how long until its queue is completely empty.
Sort U by decreasing time-to-build. Maintain the sort order when inserting new items.
At the start, or at the end of any time tick in which a factory completes a unit or runs out of work:
Make a ready list of all the factories with space in their queue.
Sort the ready list by increasing time-til-complete.
Get the factory that will be done soonest.
Take the first item from U and add it to that factory.
Repeat until U is empty or all queues are full.
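A compact sketch of this scheme in Python (queue limits omitted; the names are made up):

import heapq

def lpt_schedule(build_times, num_factories):
    # "Longest Processing Time": hand each unit, longest first,
    # to the factory that will be done soonest.
    loads = [(0, f) for f in range(num_factories)]   # (time-til-complete, id)
    heapq.heapify(loads)
    assignment = {f: [] for f in range(num_factories)}
    for t in sorted(build_times, reverse=True):
        load, f = heapq.heappop(loads)
        assignment[f].append(t)
        heapq.heappush(loads, (load + t, f))
    return assignment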
Googling "minimum makespan" may give you some leads into other solutions. This CMU lecture has a nice overview.
It turns out that if you know the set of work ahead of time, this problem is exactly Multiprocessor_scheduling, which is NP-Complete. Apparently the algorithm I suggested is called "Longest Processing Time", and it will always give a result no longer than 4/3 of the optimal time.
If you don't know the jobs ahead of time, it is a case of online Job-Shop Scheduling
The paper "The Power of Reordering for Online Minimum Makespan Scheduling" says:
"for many problems, including minimum makespan scheduling, it is reasonable to not only provide a lookahead to a certain number of future jobs, but additionally to allow the algorithm to choose one of these jobs for processing next and, therefore, to reorder the input sequence."
Because you have a FIFO on each of your factories, you essentially do have the ability to buffer the incoming jobs: you can hold them back until a factory is completely idle, instead of trying to keep all the FIFOs full at all times.
If I understand the paper correctly, the upshot of the scheme is:
Keep a fixed-size buffer of incoming jobs. In general, the bigger the buffer, the closer to ideal scheduling you get.
Assign a weight w to each factory according to a given formula, which depends on the buffer size. In the case where buffer size = number of factories + 1, use weights of (2/3, 1/3) for 2 factories and (5/11, 4/11, 2/11) for 3.
Once the buffer is full, whenever a new job arrives, remove the job with the least time-to-build and assign it to a factory whose time-to-complete is < w*T, where T is the total time-to-complete across all factories.
If there are no more incoming jobs, schedule the remainder of the jobs in U using the first algorithm I gave.
The main problem in applying this to your situation is that you don't know when (if ever) there will be no more incoming jobs. But perhaps replacing that condition with "if any factory is completely idle" and then restarting will give decent results.