Multiple depot vehicle scheduling - algorithm

I have been playing around with algorithms and ILP for the single depot vehicle scheduling problem (SDVSP) and now want to extend my knowledge towards the multiple depot vehicle scheduling problem (MDVSP), as I would like to use this knowledge in a project of mine.
As for the question: I've found and implemented several algorithms for the MDVSP. However, one thing I am very curious about is how to determine the number of depots needed (and, to an extent, their locations). Sadly, I haven't been able to find any resources that do not assume/require that the depots are fixed. Thus my question would be: how would I approach an MDVSP in which I can determine the number and locations of the depots?
(Edit) To clarify:
Assume we are given a set of trips T1, T2, ..., Tn as usual in an SDVSP or MDVSP. Multiple trips can be driven in succession before returning to a depot. Leaving and returning to depots usually only happens at the start and end of a day. But as an extension to the normal problems, we can now determine the number and locations of our depots, as opposed to having fixed depots.
The objective is to find a solution in which all trips are driven at minimal cost. The cost consists of the amount of deadhead (the distance the car has to travel between trips, and from and to the depots), a fixed cost K per car, and a fixed cost C per depot.
I hope this clears up the question somewhat.

The standard approach involves adding |V| binary variables to the ILP, one for each node, where x_i = 1 if v_i is a depot and 0 otherwise.
However, the way the question is currently articulated, all x_i values will come out to be zero, since there is no "advantage" to making a node a depot and the total cost = (other cost factors) + sum_i (x_i) * FIXED_COST_PER_DEPOT.
Perhaps the question needs to be updated with some other constraint about the range of the car. For example, a car can only travel so many miles before it must return to a depot.
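To make the cost trade-off concrete, here is a brute-force sketch of the depot-selection decision (all numbers are invented, and it assumes one car per block of trips, each served from its cheapest open depot — a simplification, not the full MDVSP):

```python
from itertools import combinations

# Toy instance: pick which candidate depot sites to open so that
# total cost = deadhead + K per car + C per depot is minimal.
K, C = 10.0, 25.0                               # fixed cost per car / per depot (invented)
blocks = [(0.0, 2.0), (8.0, 9.0), (1.0, 3.0)]   # (start, end) location of each car's day, on a line
sites = [0.0, 5.0, 10.0]                        # candidate depot locations (invented)

def total_cost(open_depots):
    # deadhead: each car drives from its depot to its first trip start
    # and back from its last trip end, using the cheapest open depot
    deadhead = sum(min(abs(s - d) + abs(e - d) for d in open_depots)
                   for s, e in blocks)
    return deadhead + K * len(blocks) + C * len(open_depots)

# enumerate every non-empty subset of candidate sites and keep the cheapest
best = min((subset
            for r in range(1, len(sites) + 1)
            for subset in combinations(sites, r)),
           key=total_cost)
print(best, total_cost(best))
```

Note how the depot cost C and the deadhead pull in opposite directions: opening more depots shortens the pull-in/pull-out legs but adds C each time, so the enumeration (or the ILP with the x_i variables) finds the balance point. In a real instance you would replace the brute-force subset loop with the ILP formulation above.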


Can two groups of N people find each other around a circle? [closed]

Closed 6 years ago.
This is an algorithmic problem and I'm not sure it has a solution. I think it's a specific case of a more generic computer science problem that has no solution, but I'd rather not disclose which one to avoid planting biases. It came up from a real-life situation in which our mobile phones were out of credit and thus we didn't have long-range communications.
Two groups of people, each with 2 people (but it might be true for N people) arranged to meet at the center of a park but at the time of meeting, the park is closed. Now, they'll have to meet somewhere else around the park. Is there an algorithm each and every single individual could follow to converge all in one point?
For example, if each group splits in two and goes around the park, and whoever finds another person keeps going with that person, they would all converge on the other side of the park. But if the other group does the same, neither group would be able to take the found members of the other group with them. So this is not a possible solution.
I'm not sure if I explained well enough. I can try to draw a diagram.
Deterministic Solution for N > 1, K > 1
For N groups of K people each.
Since the problem is based on people whose mobile phones are out of credit, let's assume that each person in each group has their own phone. If that's not acceptable, then substitute the phone with a credit card, social security, driver's license, or any other item with numerical identification that is guaranteed to be unique.
In each group, each person must remember the highest number among that group, and the person with the highest number (labeled leader) must travel clockwise around the perimeter while the rest of the group stays put.
After the leader of each group meets the next group, they compare their number with that group's previous leader's number.
If the leader's number is higher than the group's previous leader's number, then the leader and the group all continue along the perimeter of the park. If the group's previous leader's number is higher, then they all stay put.
Eventually the leader with the highest number will continue around the entire perimeter exactly 1 rotation, collecting the entire group.
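The collection phase can be sketched as a discrete simulation on a ring of cells (ring size, starting cells and the sequential ids are all invented; the ids stand in for the unique phone numbers above):

```python
# Leaders walk clockwise one cell per tick; followers wait at their start cell,
# remembering their leader's number; merges and comparisons happen as described.
def simulate(start_positions, members_per_group, circle_len):
    total = len(start_positions) * members_per_group
    ids = iter(range(total))
    movers, cells = [], {}
    for pos in start_positions:
        group = [next(ids) for _ in range(members_per_group)]
        leader = max(group)
        movers.append({'pos': pos, 'members': [leader], 'num': leader})
        cells[pos] = {'members': [i for i in group if i != leader], 'num': leader}
    for _ in range(3 * circle_len):                # enough ticks for a full collection lap
        for mv in list(movers):
            mv['pos'] = (mv['pos'] + 1) % circle_len
            cell = cells.get(mv['pos'])
            if cell is None:
                continue
            if mv['num'] > cell['num']:            # arriving leader outranks this group
                mv['members'] += cell['members']
                del cells[mv['pos']]
            else:                                  # outranked (or home again): stay put
                cell['members'] += mv['members']
                movers.remove(mv)
            for g in movers + list(cells.values()):
                if len(g['members']) == total:     # everyone gathered in one place
                    return mv['pos'], sorted(g['members'])
    return None

print(simulate([0, 4], 2, 8))
```

With two groups of two on an 8-cell ring, the highest-numbered leader absorbs the opposing followers on the far side and finishes the lap at its own start cell, where everyone else has already piled up.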
Deterministic solution for N > 1, K = 1 (with one reasonable assumption of knowledge ahead-of-time)
In this case, each group only contains one person. Let's assume that the number used is a phone number, because it is then reasonable to also assume that at least one pair of people will know each other's numbers and so one of them will stay put.
For N = 2, this becomes trivially reduced to one person staying put and the other person going around clockwise.
For other cases, the fact that at least two people initially know each other's numbers effectively increases the maximum K to at least 2 (because the person or people who stay put will continue to stay put if the person they know has a higher number than the leader who shows up to meet them). However, we still have to introduce one more step to the algorithm to make sure it terminates.
The extra step is that if a leader has continued around the perimeter for exactly one rotation without adding anyone to the group, then the leader must leave their group behind and start over for one more rotation around the perimeter. This means that a leader with no group will continue indefinitely until they find someone else, which is good.
With this extra step, it is easy to see why we have to assume that at least one pair of people need to know each other's phone numbers ahead of time, because then we can guarantee that the person who stays put will eventually accumulate the entire group.
Feel free to leave comments or suggestions to improve the algorithm I've laid out or challenge me if you think I missed an edge case. If not, then I hope you liked my answer.
Update
For fun, I decided to write a visual demo of my solutions to the problem using d3. Feel free to play around with the parameters and restart the simulation with any initial state. Here's the link:
https://jsfiddle.net/patrob10114/c3d478ty/show/
Key
black - leader
white - follower
when clicked
blue - selected person
green - known by selected person
red - unknown by selected person
Note that collaboration occurs at the start of every step, so if two groups just combined in the current step, most people won't know the people from the opposite group until after the next step is invoked.
They should move towards the northernmost point of the park.
I'd send both groups in a random direction. If they go half a circle without meeting the other group, re-randomize the directions. This will make them meet within a few rounds most of the time; however, there is a vanishingly small chance that they never meet.
It is not possible with a deterministic algorithm if
• we have to meet at some point on the perimeter,
• we are unable to distinguish points on the perimeter (or the algorithm is not allowed to use such a distinction),
• we are unable to distinguish individuals in the groups (or the algorithm is not allowed to use such a distinction),
• the perimeter is circular (see below for a more general case),
• we all follow the same algorithm, and
• the initial points may be anywhere on the perimeter.
Proof: With a deterministic algorithm we can deduce the final positions from the initial positions, but the groups could start evenly spaced around the perimeter, in which case the problem has rotational symmetry and so the solution will be unchanged by a 1/n rotation, which however has no fixed point on the perimeter.
Status of assumptions
Dropping various assumptions leads, as others have observed, to various solutions:
Non-deterministic: As others have observed, various non-deterministic algorithms do provide a solution whose probability of termination tends to certainty as time tends to infinity; I suspect almost any random walk would do. (Many answers)
Points indistinguishable: Agree on a fixed point at which to meet if needed: flyx’s answer.
Individuals indistinguishable: If there is a perfect hash algorithm, choose those with the lowest hash to collect others: Patrick Roberts’s solution.
Same algorithm: Choose one in advance to collect the others (adapting Patrick Roberts’s solution).
Other assumptions can be weakened:
Non-circular perimeter: The condition that the perimeter be circular is rather artificial, but if the perimeter is topologically equivalent to a circle, this equivalence can be used to convert any solution to a solution to the circle problem.
Unrestricted initial points: Even if the initial points cannot be evenly spaced, as long as some points are distinct, a topological equivalence (as for a non-circular perimeter) maps the problem back to the evenly spaced circular case, showing that no solution can exist here either.
I think this question really belongs on Computer Science Stack Exchange.
This question depends heavily on what operations are available and what the environment looks like. I asked these questions with no reply, so here is my interpretation:
The park is a 2D space, the 2 groups are located randomly, and each group has the same notion of right/left (both are facing the park). Both are programmed to do absolutely the same things (nothing like "I go right and you go left", because that makes the problem trivial). So the operations are: go right/left/stop for x units of time. They can also detect that they have passed through their original position (the one in which they started). And they can be programmed in a loop.
If you have the ability to use randomness, everything is simple, and you can come up with many solutions. For example: with probability 0.5, each of them decides either to take 3 steps right and wait, or to take 1 step right and wait. If you repeat this operation in a loop and they select different options, then clearly they will meet (one is faster than the other, so the faster one catches the slower one). If they both select the same option, they make a full circle and both reach their starting positions again; in this case, roll the dice one more time. After n circles, the probability that they have met is 1 - 0.5^n (which approaches 1 very fast).
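A quick Monte-Carlo check of that last claim (the 10,000-trial count and the seed are arbitrary choices): each round both walkers independently pick the 3-step or 1-step option, they meet as soon as their picks differ, and the chance they are still apart after n rounds is 0.5**n.

```python
import random

def rounds_until_met(rng):
    # count the rounds until the two walkers pick different options
    n = 0
    while True:
        n += 1
        a = rng.random() < 0.5        # first walker's choice this round
        b = rng.random() < 0.5        # second walker's choice
        if a != b:                    # different speeds -> the faster catches the slower
            return n

rng = random.Random(42)
trials = [rounds_until_met(rng) for _ in range(10_000)]
still_apart_after_3 = sum(n > 3 for n in trials) / len(trials)
print(still_apart_after_3)            # should be close to 0.5**3 = 0.125
```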
Surprisingly, there is a way to do it! But first we have to define our terms and assumptions.
We have N=2 "teams" of K=2 "agents" apiece. Each "agent" is running the same program. They can't tell north from south, but they can tell clockwise from counterclockwise. Agents in the same place can talk to each other; agents in different places can't.
Your suggested partial answer was: "If each group splits in two and goes around and when they find another person keep on going with that person, they would all converge on the other side of the park..." This implies that our agents have some (magic, axiomatic) face-to-face decision protocol, such that if Alice and Bob are on the same team and wake up at the same point on the circle, they can (magically, axiomatically) decide amongst themselves that Alice will head clockwise and Bob will head counterclockwise (as opposed to Alice and Bob always heading in exactly the same direction because by definition they react exactly the same way to the situation they're identically in).
One way to implement this magic decision protocol is to give each agent a personal random number generator. Whenever 2 or more agents are gathered at a certain point, they all roll a million-sided die, and whichever one rolls highest is acknowledged as the leader. So in your partial solution, Alice and Bob could each roll: whoever rolls higher (the "leader") goes clockwise and sends the other agent (the "follower") counterclockwise.
Okay, having solved the "how do our agents make decisions" issue, let's solve the actual puzzle!
Suppose our teams are (Alice and Bob) and (Carl and Dave). Alice and Carl are the initially elected leaders.
Step 1: Each team rolls a million-sided die to generate a random number. The semantics of this number are "The team with the higher number is the Master Team," but of course neither team knows right now who's got the higher number. But Alice and Bob both know that their number is let's say 424202, and Carl and Dave both know that their number is 373287.
Step 2: Each team sends its leader around the circle clockwise, while the follower stays stationary. Each leader stops moving when he gets to where the other team's follower is waiting. So now at one point on the circle we have Alice and Dave, and at the other point we have Carl and Bob.
Step 3: Alice and Dave compare numbers and realize that Alice's team is the Master Team. Likewise, Bob and Carl compare numbers and realize that Bob's team is the Master Team.
Step 4: Alice being the leader of the Master Team, she takes Dave with her clockwise around the circle. Bob and Carl (being a follower and a leader of a non-master team respectively) just stay put. When Alice and Dave reach Bob and Carl, the problem is solved!
Notice that Step 1 requires that both teams roll a million-sided die in isolation; if during Step 3 everyone realizes that there was a tie, they'll just have to backtrack and try again. Therefore this solution is still probabilistic... but you can make its expected time arbitrarily small by just replacing everyone's million-sided dice with trillion-sided, quintillion-sided, bazillion-sided... dice.
The general strategy here is to impose a pecking order on all N×K agents, and then bounce them around the circle until everyone is aware of the pecking order; then the top pecker can just sweep around the circle and pick everyone up.
Imposing a pecking order can be done by using the agents' personal random number generators.
The protocol for K>2 agents per team is identical to the K=2 case: you just glom all the followers together in Step 1. Alice (the leader) goes clockwise while Bob and the rest of the followers stay still; and so on.
The protocol for K=1 agents per team is... well, it's impossible, because no matter what you do, you can't deterministically ensure that anyone will ever encounter another agent. You need a way for the agents to ensure, without communicating at all, that they won't all just circle clockwise around the park forever.
One thing that would help with (but not technically solve) the K=1 case would be to consider the relative speeds of the agents. You might be familiar with Floyd's "Tortoise and Hare" algorithm for finding a loop in a linked list. Well, if the agents are allowed to move at non-identical speeds, then you could certainly do a "continuous, multi-hare" version of that algorithm:
Step 1: Each agent rolls a million-sided die to generate a random number S, and starts running clockwise around the park at speed S.
Step 2: Whenever one agent catches up to another, both agents glom together and start running clockwise at a new random speed.
Step 3: Eventually, assuming that nobody picked exactly the same random speeds, everyone will have met up.
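Under the stated assumption that no two groups ever hold the same speed, the catch-up process can be sketched event by event (the positions, the speed range and the seed below are all invented):

```python
import random

def meetup_time(agents, seed=0):
    # agents: list of (position, speed) on a unit circle, all running clockwise
    # with pairwise-distinct speeds; merged groups reroll a new random speed.
    rng = random.Random(seed)
    groups = [list(a) for a in agents]
    t = 0.0
    while len(groups) > 1:
        # find the earliest catch-up event between a faster group i and a slower group j
        best = None
        for i, gi in enumerate(groups):
            for j, gj in enumerate(groups):
                if i == j or gi[1] <= gj[1]:
                    continue
                gap = (gj[0] - gi[0]) % 1.0          # clockwise distance i -> j
                dt = gap / (gi[1] - gj[1])
                if best is None or dt < best[0]:
                    best = (dt, i, j)
        dt, i, j = best
        t += dt
        for g in groups:                             # advance everyone to the event time
            g[0] = (g[0] + g[1] * dt) % 1.0
        merged_pos = groups[i][0]
        groups = [g for k, g in enumerate(groups) if k not in (i, j)]
        groups.append([merged_pos, rng.uniform(0.5, 1.5)])   # glom and pick a new speed
    return t

print(meetup_time([(0.0, 1.0), (0.5, 2.0)]))
```

With two agents half a circle apart and a relative speed of 1, the single catch-up takes 0.5 time units; with more agents the loop simply processes one merge event at a time.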
This protocol requires that Alice and Carl not roll identical numbers on their million-sided dice even when they are across the park from each other. IMHO, this is a very different assumption from the other protocol's assuming that Alice and Bob could roll different numbers on their million-sided dice when they were in the same place. With K=1, we're never guaranteed that two agents will ever be in the same place.
Anyway, I hope this helps. The solution for N>2 teams is left as an exercise for the reader, but my intuition is that it'll be easy to reduce the N>2 case to the N=2 case.
Each group sends out a scout while the remaining group members stay stationary. Each group remembers the name of its scout. The scouts circle around clockwise, and whenever a scout meets a group, they compare the names of their scouts:
If the scout's name is earlier alphabetically: the group follows him.
If the scout's name is later: he joins the group and gives up his initial group identity.
By the time the alphabetically first scout makes it back to his starting location, everyone who hasn't stopped at his initial location should be following him.
There are some solutions here that to me are unsatisfactory since they require the two teams to agree a strategy in advance and all follow the same deterministic or probabilistic rules. If you had the opportunity to agree in advance what rules you're all going to follow, then as flyx points out you could just have agreed a backup meeting point. Restrictions that prevent the advance choice of a particular place or a particular leader are standard in the context of some problems with computer networks but distinctly un-natural for four friends planning to meet up. Therefore I will frame a strategy from the POV of only one team, assuming that there has been no prior discussion of the scenario between the two teams.
Note that it is not possible to be robust in the face of any strategy from the other team. The other team can always force a stalemate simply by adopting some pattern of movement that ensures those two will never meet again.
One of you sets out walking around the park. The other stands still, let us say at position X. This ensures that: (a) you will meet each other periodically at X, let us say every T seconds; and (b) for each member of the other team, no matter how they move around the perimeter of the park they must encounter at least one of your team at least every T seconds.
Now you have communication among all members of both groups, and (given sufficient time and passing-on of messages from one person to another) the problem resolves to the same problem as if your mobile phones were working. Choosing a leader by random number is one way to solve it as others have suggested. Note that there are still two issues: the first is a two-generals problem with communication, and I suppose you might feel that a mobile phone conversation allows for the generation of common knowledge whereas these relayed notes do not. The second is the possibility that the other team refuses to co-operate and you cannot agree a meeting point no matter what.
Notwithstanding the above problems, the question supposes that if they had communication that the groups would be able to agree a meeting-point. You have communication: agree a meeting point!
As for how to agree a meeting point, I think it requires some appeal to reason or good intention on the part of the other team. If they are due to meet again, then they will be very reluctant to take any action that results in them breaking their commitment to their partner. Therefore suggest to them both that after their next meeting, when all commitments can be forgiven, they proceed together to X by the shortest route. Listen to their counter-proposal and try to find some common solution.
To help reach a solution, you could pre-agree with your team-mate some variations you'd be willing to make to your plan, provided that they remain within some restrictions that ensure you will meet your team-mate again. For example, if the stationary team-mate agrees that they could be persuaded to set out clockwise, and the moving team-mate sets out anti-clockwise and agrees that they can be persuaded to do something different but not to cross point X in a clockwise direction, then you're guaranteed to meet again and so you can accept certain suggestions from the other team.
Just as an example, if a team following this strategy meets a team (unwisely) following your strategy, then one member of my team will agree to go along with the member of your team they meet, and the other will refuse (since it would require them to make the forbidden movement above). This means that when your team meets together, they'll have one of my team with them, for a group of three. The loose member of my team is on a collision course with that group of three, provided your team doesn't do anything perverse.
I think forming any group of three is a win, so each member should do anything they can to attend a meeting of the other team, subject to the constraints they agreed to guarantee they'll meet up with their own team member again. A group of 3, once formed, should follow whatever agreement is in place to meet the loose member (and if the team of two contained within that 3 refuses to do this then they're saboteurs, there is no good reason for them to refuse). Within these restrictions, any kind of symmetry-breaking will allow the team following these principles to persuade/follow the other team into a 3-way and then a 4-way meeting.
In general some symmetry-breaking is required, if only because both teams might be following my strategy and therefore both have a stationary member at different points.
Assume the park is a circle. (for the sake of clarity)
Group A
Person A.1
Person A.2
Group B
Person B.1
Person B.2
We (group A) are currently at the bottom of the circle (90 degrees). We agree to go towards 0 degrees in opposite directions. I'm person A.1 and I go clockwise. I send Person A.2. counterclockwise.
In any possible scenario (B splits, B doesn't split, B has the same scheme, B has some elaborate scheme), each group might have conflicting information. So unless Group A has a gun to force Group B into submission, the new groups might make conflicting choices upon meeting.
Say, for instance, A.1 meets B.1, and A.2 meets B.2. What do we (A.1 and B.1) do if B has the same scheme? Since the new groups can't know what the other group decides (whether to go with A's scheme or B's scheme), each group might make a different decision.
And we'll end up where we started... (i.e. two people at 0 degrees, and two people at 90 degrees). Let's call this checkpoint "First Iteration".
We might account for this and say that we'll come up with a scheme for the "Second Iteration". But then the same thing happens again. And for the third iteration, fourth iteration, ad infinitum.
Each iteration has a 50% chance of not working out.
Which means that after x iterations, your chances of still not having met at a common point are at most 0.5^x (i.e. the chance of having met is at least 1 - 0.5^x).
N.B. I thought about a bunch of scenarios, such as Group A agreeing to come back to their initial point and communicating with each other what Group B plans to do. But no cigar; it turns out that even with very clever schemes, the conflicting-information issue always arises.
An interesting problem indeed. I'd like to suggest my version of the solution:
0: Every group picks a leader.
1: Leaders and followers go in opposite directions.
2: They meet other groups' leaders or followers.
3: They keep going in the same direction as before, for another 90 degrees.
4: By this time, all groups have made a half-circle around the perimeter, and invariably have met leaders again, whether their own or others'.
5: All leaders change their next step's direction to that of the followers around them, and order them to follow.
6: Units from all groups meet at one point.
Refer to the attached file for an in-depth explanation. You will need MS Office PowerPoint 2007 or newer to view it. In case you don't have it, use the free PowerPoint Viewer as an alternative.
Animated Solution (.pptx)
EDIT: I made a typo in the first slide. It reads "Yellow and red are selected", while it must be "Blue and red" instead.
Each group will split in two parts, and each part will go around the circle in the opposite direction (clockwise and counterclockwise).
Before they start, they choose some kind of random number, in a range large enough that there is no realistic possibility of two groups picking the same number (or a GUID in computer science: a globally unique identifier). So, one unique number per group.
If people of the same group meet first (the two parts meet), they are alone, so probably the other groups (if any) gave up.
If two groups meet: they follow the rule that says the biggest number leads the way. So when they meet, they continue in the direction of the people with the biggest number.
At the end, the direction of the biggest number will lead them all to one point.
If they have no computer to choose this number, each group could use the full names of the people of the group merged together.
Edit: sorry, I just saw that this is very close to Patrick Roberts' solution.
Another edit : what if each group has its own deterministic strategy ?
In the solution above, all works well if all the groups have the same strategy. But in a real-life problem this is not the case (as they can't communicate beforehand).
If one group has a deterministic strategy and the others have none, they can agree to follow the deterministic approach and all is OK.
But what if two groups have different deterministic approaches? For instance, the same as above, but one group follows the biggest number and the other group follows the lowest number.
Is there a solution to that?

How to run MCTS on a highly non-deterministic system?

I'm trying to implement an MCTS algorithm for the AI of a small game. The game is an RPG simulation. The AI should decide which moves to play in battle. It's a turn-based battle (FF6/FF7 style). There is no movement involved.
I won't go into details, but we can safely assume that we know with certainty which move the player will choose in any given situation when it is their turn to play.
Games end when one party has no unit alive (4v4). A game can take any number of turns (and may also never end). There is a lot of RNG in the damage computation & skill processing (attacks can hit or miss, crit or not; there are lots of procs that may or may not trigger; buffs can have a % chance to apply, etc.).
Units have around 6 skills each, to give an idea of the branching factor.
I've built a preliminary version of the MCTS that gives poor results for now. I'm having trouble with a few things:
One of my main issue is how to handle the non-deterministic states of my moves. I've read a few papers about this but I'm still in the dark.
Some suggest determinizing the game information and running MCTS on that, repeating the process N times to cover a broad range of possible game states, and using that information to make the final decision. In the end, this multiplies our computing time by a huge factor, since we have to compute N MCTS trees instead of one. I cannot rely on that, since over the course of a fight I have thousands of RNG elements: computing 2^1000 MCTS trees when I already struggle with one is not an option :)
I had the idea of adding X children for the same move, but it does not seem to lead to a good answer either. It smooths the RNG curve a bit, but can shift it in the opposite direction if the value of X is too big/small compared to the percentage of a particular RNG element. And since I have multiple RNG elements per move (hit chance, crit chance, percentage to proc something, etc.) I cannot find a single value of X that satisfies every case. More of a band-aid than anything else.
Likewise, adding 1 node per RNG tuple {hit or miss, crit or not, proc1 or not, proc2 or not, etc.} for each move would cover every possible situation, but has some heavy drawbacks: with only 5 RNG mechanisms, that already means 2^5 nodes to consider for each move, which is way too much to compute. If we managed to create them all, we could assign each a probability (derived from the probability of each RNG element in the node's tuple) and use that probability during the selection phase. This should work overall, but would be really hard on the CPU :/
I also cannot "merge" them into one single node, since I have no way of accurately averaging the player's/monsters' stat values across two different game states, and averaging the move's result during the move processing itself is doable but requires a lot of simplifications that are a pain to code and will hurt accuracy really fast anyway.
Do you have any ideas how to approach this problem ?
Some other aspects of the algorithm are eluding me:
I cannot do a full playout until an end state because (a) it would take a lot of my computing time and (b) some battles may never end (by design). I've got 2 solutions (that I can mix):
- Do a random playout for X turns
- Use an evaluation function to try to score the situation.
Even if I consider only health points in the evaluation, I'm failing to find a good evaluation function that returns a reliable value for a given situation (between 1 and 4 units for the player and the same for the monsters; I know their current/max HP values). What bothers me is that fights can vary greatly in length and in disparity of power. That means that sometimes a 0.01% change in HP matters (in a long boss fight, for example) and sometimes it is just insignificant (when the player farms a zone that is low-level compared to him).
The disparity of power and HP variance between fights means that the bias parameter in my UCB selection process is hard to fix. I'm currently using something very low, like 0.03. With anything > 0.1, the exploration factor is so high that my tree is constructed depth by depth :/
For now I'm also using a biased way to choose moves during my simulation phase: it selects the move that the player would choose in the situation, and random ones for the AI, leading to a simulation biased in favor of the player. I've tried using a purely random one for both, but it seems to give worse results. Do you think having a biased simulation phase works against the purpose of the algorithm? I'm inclined to think it would just give a pessimistic view to the AI and would not impact the end result too much. Maybe I'm wrong though.
Any help is welcome :)
I think this question is way too broad for StackOverflow, but I'll give you some thoughts:
Using stochastic outcomes or probability in tree searches is usually called expectimax search. You can find a good summary and pseudo-code for Expectimax Approximation with Monte-Carlo Tree Search in chapter 4, but I would recommend using a normal minimax tree search with the expectimax extension. There are a few modifications like Star1, Star2 and Star2.5 for a better runtime (similar to alpha-beta pruning).
It boils down to not only having decision nodes, but also chance nodes. The probability of each possible outcome should be known, and each outcome's value is multiplied by its probability to obtain the node's expected value.
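A minimal sketch of that idea over a hand-built tree (the moves and all the numbers are invented): decision nodes take the best child, chance nodes take the probability-weighted average of their children.

```python
def expectimax(node):
    kind = node[0]
    if kind == 'leaf':                                   # ('leaf', value)
        return node[1]
    if kind == 'max':                                    # ('max', [children]) - decision node
        return max(expectimax(c) for c in node[1])
    if kind == 'chance':                                 # ('chance', [(prob, child), ...])
        return sum(p * expectimax(c) for p, c in node[1])

# one risky attack (80% chance to deal 10, else 0) versus a guaranteed 7-damage attack
tree = ('max', [
    ('chance', [(0.8, ('leaf', 10)), (0.2, ('leaf', 0))]),
    ('leaf', 7),
])
value = expectimax(tree)
print(value)   # the risky attack wins with expected value 8.0
```

In a real battle tree the chance node's children would be the RNG tuples described in the question (hit/crit/proc combinations), each with its combined probability.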
2^5 nodes per move is high, but not impossibly high, especially for a low number of moves and a shallow search. Even a 1-3 depth search should give you some results. In my Tetris AI, there are ~30 different possible moves to consider and I calculate the result of the three following pieces (for each possible move) to select my move. This is done in 2 seconds. I'm sure you have much more time for calculation, since you're waiting for user input.
If you know which move the player will choose, shouldn't your AI be able to predict it just as easily?
You don't need to consider a single value (HP); you can have several factors that are weighted differently and combined to calculate the expected value. Coming back to my Tetris AI: there are 7 factors (bumpiness, highest piece, number of holes, ...) that are calculated, weighted and added together. To find the weights, you could use different methods; I used a genetic algorithm to find the combination of weights that resulted in the most lines cleared.
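For the battle setting, a multi-factor evaluation in the same spirit could look like this (the two factors and the weights are invented placeholders; in practice they would be tuned, e.g. with a genetic algorithm as above):

```python
# weights for average HP ratio and fraction of units still alive (made-up values)
W_HP, W_ALIVE = 0.7, 0.3

def evaluate(player_units, monster_units):
    # each unit is (current_hp, max_hp); a unit with current_hp == 0 is dead
    def side_score(units):
        hp_ratio = sum(cur / mx for cur, mx in units) / len(units)
        alive_ratio = sum(1 for cur, _ in units if cur > 0) / len(units)
        return W_HP * hp_ratio + W_ALIVE * alive_ratio
    # result in [-1, 1]: positive favours the player, negative the monsters
    return side_score(player_units) - side_score(monster_units)

score = evaluate([(50, 100), (100, 100)], [(0, 80), (40, 80)])
print(score)   # positive: the player is ahead
```

Normalizing each factor to [0, 1] before weighting also addresses the "0.01% HP sometimes matters" issue somewhat, since the score stays comparable across fights of very different absolute HP scales.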

Assigning jobs to people using Maximum Flow, harder version

I am self-studying max flow and came across this problem:
the original problem is
Suppose we have a list of jobs
{J1, J2, ..., Jm}
and a list of people who have applied for them
{P1, P2, P3, ..., Pn}
Each person has different interests and some of them have applied for multiple jobs (each person has a list of jobs they can do).
Nobody is allowed to do more than 3 jobs.
so, this problem can be solved by finding a maximum flow in the graph below
I understand this solution. But for the harder version of the problem, what if these conditions are added?
the first 3 conditions of the easy version (the lists of jobs and persons, and each person's list of interests or abilities) are still the same
the company is employing only Vi persons for job Ji
the company wants to employ as many people as possible
there is no limitation on the number of jobs a person is allowed to do.
What difference should I make in the graph so that my solution can satisfy those conditions as well? Or, if I need a different approach, please tell me.
Before anyone says anything: this is not homework. It's just self-study, but I am studying maximum flow and the problem was in that area, so the solution should use maximum flow.
For multiple persons for a single job:
The edge from Ji to t will have a capacity equal to the number of people wanted for that job. For example, if job #1 can have three people, this translates to a capacity of three for the edge from J1 to t.
For the requirement of hiring as many people as possible:
I don't think this is possible with a single flow graph. Here is an algorithm for how it could be done:
1. Run the flow algorithm once.
2. For each person:
   1. Try to decrease that person's incoming capacity to one below the current flow rate.
   2. Run the flow algorithm again.
   3. While this does not decrease the total flow, repeat from (2.1).
   4. Increase the capacity by one, to restore the maximum flow.
3. Until no further persons get added, repeat from (2).
For no limitation on number of jobs:
The edge from s to Pi will have a capacity equal to the number of applicable jobs for that person.
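Putting the construction together, here is a self-contained sketch in Python using Edmonds-Karp for the max-flow step. The node names s, Pi, Ji, t and the capacities follow the scheme above; the `applications` and `vacancies` data in the usage example are invented:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths
    in the residual graph (cap is mutated to hold residual capacities)."""
    flow = 0
    while True:
        # BFS for a shortest s-t path with positive residual capacity.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck, then update residual capacities.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

def build_graph(applications, vacancies, job_limit_per_person=None):
    """s -> person (capacity = job limit, or #applications if unlimited),
    person -> job (capacity 1), job -> t (capacity = vacancies Vi)."""
    cap = defaultdict(lambda: defaultdict(int))
    for person, jobs in applications.items():
        limit = job_limit_per_person or len(jobs)
        cap["s"][person] = limit
        for job in jobs:
            cap[person][job] = 1
    for job, v in vacancies.items():
        cap[job]["t"] = v
    return cap

# Invented example data: P1 applied for two jobs, P2 for one.
applications = {"P1": ["J1", "J2"], "P2": ["J1"]}
vacancies = {"J1": 1, "J2": 1}
total = max_flow(build_graph(applications, vacancies), "s", "t")
```

For the easy version of the problem you would instead pass `job_limit_per_person=3` so nobody is assigned more than 3 jobs.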

Coming up with factors for a weighted algorithm?

I'm trying to come up with a weighted algorithm for an application. In the application, there is a limited amount of space available for different elements. Once all the space is occupied, the algorithm should choose the best element(s) to remove in order to make space for new elements.
There are different attributes which should affect this decision. For example:
T: Time since last accessed. (It's best to replace something that hasn't been accessed in a while.)
N: Number of times accessed. (It's best to replace something which hasn't been accessed many times.)
R: Number of elements which need to be removed in order to make space for the new element. (It's best to replace the fewest elements. Ideally this should also take into consideration the T and N attributes of each element being replaced.)
I have 2 problems:
Figuring out how much weight to give each of these attributes.
Figuring out how to calculate the weight for an element.
(1) I realize that coming up with the weights for something like this is very subjective, but I was hoping that there's a standard method or something that can help me decide how much weight to give each attribute. For example, one method might be to come up with pairs of sample elements and then manually compare the two and decide which one should ultimately be chosen. Here's an example:
Element A: N = 5, T = 2 hours ago.
Element B: N = 4, T = 10 minutes ago.
In this example, I would probably want A to be the element chosen for replacement since, although it was accessed one more time, it hasn't been accessed in a long time compared with B. This method seems like it would take a lot of time and would involve making a lot of tough, subjective decisions. Additionally, it may not be trivial to come up with the resulting weights at the end.
Another method I came up with was to just arbitrarily choose weights for the different attributes and then use the application for a while. If I notice anything obviously wrong with the algorithm, I could then go in and slightly modify the weights. This is basically a "guess and check" method.
Neither of these methods seems that great, and I'm hoping there's a better solution.
(2) Once I do figure out the weights, I'm not sure which way is best to calculate an element's weight. Should I just add everything? (In these examples, I'm assuming that whichever element has the highest replacementWeight should be the one that's replaced.)
replacementWeight = .4*T - .1*N - 2*R
or multiply everything?
replacementWeight = (T) * (.5*N) * (.1*R)
What about not using constants for the weights? For example, sure "Time" (T) may be important, but once a specific amount of time has passed, it starts not making that much of a difference. Essentially I would lump it all in an "a lot of time has passed" bin. (e.g. even though 8 hours and 7 hours have an hour difference between the two, this difference might not be as significant as the difference between 1 minute and 5 minutes since these two are much more recent.) (Or another example: replacing (R) 1 or 2 elements is fine, but when I start needing to replace 5 or 6, that should be heavily weighted down... therefore it shouldn't be linear.)
replacementWeight = 1/T + sqrt(N) - R*R
Obviously (1) and (2) are closely related, which is why I'm hoping that there's a better way to come up with this sort of algorithm.
What you are describing is the classic problem of choosing a cache replacement policy. Which policy is best for you, depends on your data, but the following usually works well:
First, always store a new object in the cache, evicting the R worst one(s). There is no way to know a priori if an object should be stored or not. If the object is not useful, it will fall out of the cache again soon.
The popular squid cache implements the following cache replacement algorithms:
Least Recently Used (LRU):
replacementKey = -T
Least Frequently Used with Dynamic Aging (LFUDA):
replacementKey = N + C
Greedy-Dual-Size-Frequency (GDSF):
replacementKey = (N/R) + C
C refers to a cache age factor here. C is basically the replacementKey of the item that was evicted last (or zero).
NOTE: The replacementKey is calculated when an object is inserted or accessed, and stored alongside the object. The object with the smallest replacementKey is evicted.
LRU is simple and often good enough. The bigger your cache, the better it performs.
LFUDA and GDSF are both tradeoffs. LFUDA prefers to keep large objects even if they are less popular, under the assumption that one hit to a large object makes up for many hits to smaller objects. GDSF basically makes the opposite tradeoff, keeping many smaller objects over fewer large objects. From what you write, the latter might be a good fit.
If none of these meet your needs, you can calculate optimal values for T, N and R (and compare different formulas for combining them) by minimizing regret, the difference in performance between your formula and the optimal algorithm, using, for example, linear regression.
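For illustration, the three keys above can be sketched as follows. This is a toy sketch, not squid's actual implementation: `size` stands in for the question's R, and for simplicity the key is recomputed at eviction time, with T taken as the question's "time since last accessed":

```python
def replacement_key(item, now, C, policy):
    """Compute the eviction key; the item with the smallest key is evicted.
    item is a dict with hit count N, last-access timestamp, and size."""
    if policy == "LRU":
        return -(now - item["last_access"])      # -T
    if policy == "LFUDA":
        return item["N"] + C                     # N + C
    return item["N"] / item["size"] + C          # GDSF: N/R + C

def pick_victim(cache, now, C, policy):
    """Return the name of the item with the smallest replacement key,
    plus that key (which becomes the new cache age factor C)."""
    victim = min(cache, key=lambda n: replacement_key(cache[n], now, C, policy))
    return victim, replacement_key(cache[victim], now, C, policy)
```

With the question's Element A (N = 5, accessed 2 hours ago) and Element B (N = 4, accessed 10 minutes ago), LRU picks A for eviction, exactly the judgement the question asks for, while GDSF (with equal sizes) would pick B.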
This is a completely subjective issue -- as you yourself point out. And there is a distinct possibility that if your test cases consist of pairs (A, B) where you prefer A to B, you might find that you prefer A to B and B to C, but also C to A -- i.e. your preferences don't form an ordering.
If you are not careful, your function might not exist!
If you can define a scalar function of your input variables, with various parameters for coefficients and exponents, you might be able to estimate said parameters by using regression, but you will need an awful lot of data if you have many parameters.
This is the classical statistician's approach of first reviewing the data to IDENTIFY a model, and then using that model to ESTIMATE a particular realisation of the model. There are large books on this subject.
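As one concrete stand-in for that estimation step (an illustrative sketch, not a recommendation; the feature vectors and judgement pairs are invented), a simple perceptron on feature differences can fit linear weights from pairwise "replace A rather than B" judgements:

```python
def fit_weights(preference_pairs, epochs=100, lr=0.1):
    """Perceptron on feature differences: find weights w such that
    dot(w, f_a - f_b) > 0 for every pair where element a should be
    replaced in preference to element b."""
    dim = len(preference_pairs[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for fa, fb in preference_pairs:
            diff = [a - b for a, b in zip(fa, fb)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:
                # Misranked pair: nudge w toward the difference vector.
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Invented judgements; features are (N, T in hours). The first pair encodes
# the Element A vs Element B comparison from the question.
pairs = [((5.0, 2.0), (4.0, 1 / 6)),
         ((2.0, 8.0), (3.0, 0.5))]
w = fit_weights(pairs)
```

If your judgements are not linearly separable in the raw attributes (the non-linear effects described above), the same idea applies to transformed features such as 1/T or sqrt(N).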

Time Aware Social Graph DS/Queries

Classic social networks can be represented as a graph/matrix.
With a graph/matrix one can easily compute
shortest path between 2 participants
reachability from A -> B
general statistics (reciprocity, avg connectivity, etc)
etc
Is there an ideal data structure (or a modification to graph/matrix) that enables easy computation of the above while being time aware?
For example,
Input
t = 0...100
A <-> B (while t = 0...10)
B <-> C (while t = 5...100)
C <-> A (while t = 50...100)
Sample Queries
Is A associated with B at any time? (yes)
Is A associated with B while B is associated with C? (yes. #t = 5...10)
Is C ever reachable from A (yes. # t=5 )
What you're looking for is an explicitly persistent data structure. There's a fair body of literature on this, but it's not that well known. Chris Okasaki wrote a pretty substantial book on the topic. Have a look at my answer to this question.
Given a full implementation of something like Driscoll et al.'s node-splitting structure, there are a few different ways to set up your queries. If you want to know about stuff true in a particular time range, you would only examine nodes containing data about that time range. If you wanted to know what time range something was true, you would start searching, and progressively tighten your bounds as you explore each new node. Just remember that your results might not always be contiguous -- consider two people who start dating, break up, and get back together.
I would guess that there's probably at least one publication worth of unexplored territory in how to do interesting queries over persistent graphs, if not much more.
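As a small illustration of the interval-labelled approach (a sketch, not the node-splitting structure itself), each edge can carry the interval during which it is active, and the sample queries reduce to BFS over a time-restricted snapshot. Note this answers reachability within one snapshot at time t; time-respecting journeys that use edges at different times need more machinery:

```python
from collections import defaultdict, deque

# Edges labelled with the interval during which they are active,
# matching the example input above.
edges = [("A", "B", 0, 10), ("B", "C", 5, 100), ("C", "A", 50, 100)]

def active_graph(edges, t):
    """Undirected adjacency sets restricted to edges active at time t."""
    g = defaultdict(set)
    for u, v, lo, hi in edges:
        if lo <= t <= hi:
            g[u].add(v)
            g[v].add(u)
    return g

def reachable_at(edges, a, b, t):
    """BFS in the snapshot graph at time t."""
    g, seen, q = active_graph(edges, t), {a}, deque([a])
    while q:
        u = q.popleft()
        if u == b:
            return True
        for v in g[u] - seen:
            seen.add(v)
            q.append(v)
    return False

def ever_reachable(edges, a, b):
    """Within a snapshot, reachability only changes when an edge appears
    or disappears, so checking the interval endpoints suffices."""
    times = {t for _, _, lo, hi in edges for t in (lo, hi)}
    return any(reachable_at(edges, a, b, t) for t in sorted(times))
```

For the sample queries: C is reachable from A at t = 5 (via B), but not at t = 20, when the A-B edge has expired.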

Resources