Schedule shifts for people working in pairs - algorithm

I would like to schedule shifts so that people work in pairs.
1. The principal constraint is that each person should not work with the person they worked with in their previous shift.
2. There are no constraints on the times of the shifts; I just need to match a pair with a day.
For example, if {A, B, ..., F} represents the people, then
Day 1: A-B
Day 2: C-D
Day 3: E-F
Day 4: A-B <--- Wrong, because constraint (1) is violated
My solution is:
Define a fitness function T, where T(A) = 0 if A worked today and T(A) = c if A last worked c days ago.
The steps would be:
Initialize pairs randomly (respecting the constraints, obviously)
Create a new pair (i, j): take the person i with the highest fitness and the person j with the next-highest fitness, provided that T(j) != T(i) (so the same pair is not formed every time)
Update
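To make these steps concrete, here is a rough sketch of the greedy idea (an illustration only; the helper name next_pair and the driver loop are invented for this sketch):
import random

def next_pair(people, T, last_partner):
    # T[p] = 0 if p worked today, otherwise days since p last worked
    ranked = sorted(people, key=lambda p: -T[p])
    i = ranked[0]                                  # highest fitness
    for j in ranked[1:]:                           # next-highest fitness with T(j) != T(i)
        if T[j] != T[i] and last_partner.get(i) != j and last_partner.get(j) != i:
            return i, j
    return None                                    # no pair satisfies the rules

people = list("ABCDEF")
T = dict.fromkeys(people, 0)
last_partner = {}
for day in range(1, 8):
    pair = next_pair(people, T, last_partner)
    while pair is None:                            # e.g. day 1: all T equal, so pick randomly
        i, j = random.sample(people, 2)
        if last_partner.get(i) != j and last_partner.get(j) != i:
            pair = (i, j)
    i, j = pair
    print(f"Day {day}: {i}-{j}")
    for p in people:                               # the "update" step
        T[p] = 0 if p in pair else T[p] + 1
    last_partner[i], last_partner[j] = j, i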
Is there a better way to solve this problem?
Is there something similar in the literature that I could consult, e.g. similar problems?
Is the nurse scheduling problem similar to this?
Thank you.

Your problem is underspecified, leading to trivial solutions. For example, pick any three people, {A, B, C}, and schedule AB, AC, BC (repeating).
If you want a fairer solution: keep picking a random pair each day until you find a viable pair. There are at most N non-viable pairs out of N(N-1)/2 possible pairs.
Here's one way to do it:
import random
from itertools import islice

def schedule(folk):
    excluded = dict((p, None) for p in folk)   # each person's most recent partner
    while True:
        while True:                            # rejection-sample until a viable pair turns up
            a, b = random.sample(folk, 2)
            if excluded[a] != b and excluded[b] != a:
                break
        excluded[a], excluded[b] = b, a
        yield a, b

# print the first 20 pairs (the generator itself never stops)
for a, b in islice(schedule('ABCDE'), 20):
    print('%s%s' % (a, b), end=' ')

Related

How do I solve this dynamic programming problem?

I am using the C++ language.
Mark has N days. Initially he is at position (h1, 0) on the X-axis. On each day he moves from his current position by +a, +b or +c along the X-axis, choosing any one of the three.
At the N-th day, he has to reach the position (h2,0).
Count the number of ways in which Mark can reach (h2,0) in N days.
The values of N, h1, h2, a, b, c are large (coordinates and the values of a, b, c can be negative as well; in some cases a=b or b=c or c=a or a=b=c).
My approach is: on each day, I store the positions he can reach on that particular day together with the count (number of ways) of reaching each position. I am using a map to do this, and this approach is not efficient.
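For reference, this is roughly what that map-based counting looks like (a quick sketch in Python rather than the C++ I'm using; it is correct but far too slow and memory-hungry for the constraints below):
from collections import Counter

def count_ways_bruteforce(n, h1, h2, steps):
    ways = Counter({h1: 1})                     # position -> number of ways to reach it
    for _ in range(n):                          # advance one day at a time
        nxt = Counter()
        for pos, cnt in ways.items():
            for s in steps:                     # each of the three labelled choices
                nxt[pos + s] += cnt
        ways = nxt
    return ways[h2]

print(count_ways_bruteforce(3, 0, 6, [1, 2, 3]))   # 7, matching the example below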
Can somebody share a much more efficient approach ?
My second approach, which should work, is a variation of the coin-change problem :-)
Example:-
N=3,h1=0,h2=6,a=1,b=2,c=3
Answer: 7 (number of ways)
1st way:-(1+2+3)
2nd way:-(1+3+2)
3rd way:-(2+1+3)
4th way:-(2+3+1)
5th way:-(3+1+2)
6th way:-(3+2+1)
7th way:-(2+2+2)
Format:-(Choice on 1st day+Choice on 2nd day+Choice on 3rd day)
Constraints:-
1 <= N <= 10^5
-10^9 <= h1, h2, a, b, c <= 10^9
If you know how many times a, b and c are used, then you can easily compute the result.
For example, if we use a x times, b y times and c z times to get from h1 to h2, then the number of orderings of those choices is
factorial(x+y+z) / (factorial(x) * factorial(y) * factorial(z)).
Now, how can we find the values of x, y, z? There can be many triplets (x, y, z).
We can try every value of x from 0 to n. For each such x:
x*a + y*b + z*c = h2 - h1
y + z = n - x
Since we know the values of x, a, b, c, h1 and h2, we can rewrite the first equation as
y*b + z*c = h2 - h1 - x*a
y*b + z*c = k, where k = h2 - h1 - x*a
So for each x we have to solve the system:
y*b + z*c = k
y + z = n - x
This system may or may not have a solution with y and z non-negative integers.
If it does have one, we add the number of permutations for this (x, y, z), namely
factorial(x+y+z) / (factorial(x) * factorial(y) * factorial(z)),
to the answer. If there is no solution, we skip the current x.
Compute y and z this way for each x from 0 to n and sum the contributions.
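To make this concrete, here is a rough Python sketch of the approach (my own illustration; it assumes a, b and c are pairwise distinct, since b = c would make the division below ill-defined, and the equal-value cases mentioned in the question need separate handling):
from math import factorial

def count_ways(n, h1, h2, a, b, c):
    total = 0
    for x in range(n + 1):                  # number of days on which 'a' is chosen
        m = n - x                           # remaining days, split between b and c
        k = h2 - h1 - x * a                 # distance those remaining days must cover
        # solve y + z = m and y*b + z*c = k  =>  y*(b - c) = k - m*c
        num = k - m * c
        if num % (b - c) != 0:
            continue                        # y would not be an integer
        y = num // (b - c)
        z = m - y
        if y < 0 or z < 0:
            continue                        # counts must be non-negative
        total += factorial(n) // (factorial(x) * factorial(y) * factorial(z))
    return total

print(count_ways(3, 0, 6, 1, 2, 3))         # 7, matching the example above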

How do I randomly equalize unequal values?

Say I have multiple unequal values a, b, c, d, e. Is it possible to turn these unequal values into equal values just by using random number generation?
Example: a=100, b=140, c=200, d=2, e=1000. I want the algorithm to randomly target these values such that the largest value is targeted most often and the smallest value is left alone for the most part.
Areas where I've run into problems: if I just use non-unique random number generation, then value e will end up going below the other values. If I use unique number generation, then the ratio between the values doesn't change even if their absolute values do. I've tried using sets where a certain range of numbers has to be hit a certain number of times before the value changes. I haven't tried using a mix of unique/non-unique random numbers yet.
I want the ratio between the values to gradually approach 1 as the algorithm runs.
Another way to think about the problem: say these values a, b, c, d, e, are all equal. If we randomly choose one, each is as likely to be chosen as any other. After we choose one, we add 1 to that value. Then we run this process again. This time, the value that was picked last time is 1-larger than any other value so it's more likely to be picked than any one other value. This creates a snowball effect where the value picked first is likely to keep getting picked and achieve runaway growth. I'm looking for the opposite of this algorithm where we start after these originally-equal values have diverged and we bring them back to the originally-equal state.
I think this process is impossible because of entropy and the inherent one-way nature of existence.
Well, there is a technique called Inverse Weights, where you sample items with probability inversely proportional to their previous appearances. Each time we sample a, b, c, d or e, we update its appearance count and recalculate the probabilities. Simple Python code below: I sample numbers [0...4] as a, b, c, d, e and start with the appearance counts you listed. After 100,000 samples they look to be equidistributed.
import numpy as np

n = np.array([100, 140, 200, 2, 1000])
for k in range(1, 100000):
    p = 1.0 / n                      # make probabilities inverse to weights
    p /= np.sum(p)                   # normalization
    a = np.random.choice(5, p=p)     # sample a number in the range [0...5)
    n[a] += 1                        # update weights
print(n)
Output
[20260 20194 20290 20305 20392]

Optimization - distribute participants far from each other

This is my first question. I tried to find an answer for 2 days but I couldn't find what I was looking for.
Question: how can I minimize the number of matches between students from the same school?
I have a very practical case: I need to arrange a competition (tournament bracket),
but some of the participants might come from the same school.
Those from the same school should be placed as far apart as possible;
for example: {A A A B B C} => {A B}, {A C}, {A B}
If more than half of the participants come from one school, then there is no other way but to pair up two participants from the same school;
for example: {A A A A B C} => {A B}, {A C}, {A A}
I don't expect to get code; just some keywords or some pseudo code describing what you think would be a good approach would be of great help!
I tried digging into constraint resolution algorithms and tournament bracket algorithms, but they don't consider minimizing the number of matches between students from the same school.
Well, thank you so much in advance!
A simple algorithm (EDIT 2)
From the comments below: you have a single elimination tournament. You must choose the places of the players in the tournament bracket. If you look at your bracket, you see players, but also pairs of players (players that play match 1 against each other), pairs of pairs of players (the winner of pair 1 plays the winner of pair 2 in match 2), and so on.
The idea
Sort the students by school, the schools with more students before the ones with fewer students, e.g. A B B B B C C -> B B B B C C A.
Distribute the students into two groups A and B as in a war card game: 1st student in A, 2nd student in B, 3rd student in A, 4th student in B, ...
Continue recursively with groups A and B.
You have a recursion: the position of a player in the level k-1 (k=n-1 to 0) is ((pos at level k) % 2) * 2^k + (pos at level k) // 2 (every even goes to the left, every odd goes to the right)
Python code
Sort the array by the number of students per school:
import collections
import math

n = int(math.log2(len(players)))  # n is the number of rounds
assert 2**n == len(players)       # the bracket must hold a power of two players
c = collections.Counter(p.school for p in players)
players_sorted_by_school_count = sorted(players, key=lambda p: -c[p.school])
Find the final position of every player:
players_sorted_for_tournament = [-1] * 2**n
for j, player in enumerate(players_sorted_by_school_count):
    pos = 0
    for e in range(n-1, -1, -1):
        if j % 2 == 1:
            pos += 2**e  # to the right
        j = j // 2
    players_sorted_for_tournament[pos] = player
This should give groups that are diverse enough, but I'm not sure whether it's optimal or not. Waiting for comments.
First version: how to make pairs from students of different schools
Just put the students from the same school into a stack. You have as many stacks as schools. Now sort your stacks by number of students. In your first example, {A A A B B C}, you get:
A
A B
A B C
Now, take the top elements of the first two stacks. The stack sizes have changed: if needed, reorder the stacks and continue. When you have only one stack left, make pairs from that stack (a small code sketch follows the example below).
The idea is to keep as many "school stacks" as possible for as long as possible: you spare the students of the small stacks until you have no choice but to take them.
Steps with your second example, {A A A A B C}:
A
A
A
A B C => output A, B
A
A
A C => output A, C
A
A => output A A
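A minimal Python sketch of this stack idea (my own code, not a reference implementation): always pair students from the two schools with the most students left, and fall back to same-school pairs only when a single school remains.
from collections import Counter

def make_pairs(students):
    remaining = Counter(students)           # school -> current stack size
    pairs = []
    while sum(remaining.values()) >= 2:
        top = remaining.most_common(2)      # the two largest stacks
        if len(top) >= 2:
            s1, s2 = top[0][0], top[1][0]   # one student from each of the two largest
        else:
            s1 = s2 = top[0][0]             # only one school left: pair within it
        pairs.append((s1, s2))
        remaining.subtract((s1, s2))        # pop one student from each chosen stack
        remaining = +remaining              # drop schools with no students left
    return pairs

print(make_pairs(list("AAABBC")))           # -> [('A', 'B'), ('A', 'B'), ('A', 'C')]
print(make_pairs(list("AAAABC")))           # -> [('A', 'B'), ('A', 'C'), ('A', 'A')]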
It's a matching problem (EDIT 1)
I elaborate on the comments below. You have a single elimination tournament. You must choose the places of the players in the tournament bracket. If you look at your bracket, you see: players, but also pairs of players (players that play the match 1 against each other), pairs of pairs of players (winner of pair 1 against winner of pair 2 for the match 2), and so on.
Your solution is to start with the set of all players and split it into two sets that are as diverse as possible. "Diverse" means here: the maximum number of different schools. To do so, you check all possible combinations of elements that split the set into two subsets of equal size. Then you perform the same operation recursively on those sets, until you arrive at the player level.
Another idea is to start with players and try to make pairs with players from other schools. Let's define a distance: 1 if two players are in the same school, 0 if they are in different schools. You want to make pairs with the minimum global distance.
This distance may be generalized for the pairs of players: take the number of common schools. That is: A B A B -> 2 (A & B), A B A C -> 1 (A), A B C D -> 0. You can imagine the distance between two sets (players, pairs, pairs of pairs, ...): the number of common schools. Now you can see this as a graph whose vertices are the sets (players, pairs, pairs of pairs, ...) and whose edges connect every pair of vertices with a weight that is the distance defined above. You are looking for a perfect matching (all vertices are matched) with a minimum weight.
The blossom algorithm or some of its variants seems to fit your needs, but it's probably overkill if the number of players is limited.
Create a two-dimensional array, where the first dimension is for each school and the second dimension is for each participant from that school.
Load them and you'll have everything you need linearly.
For example:
School 1 ------- School 2 -------- School 3
A ------------ B ------------- C
A ------------ B ------------- C
A ------------ B ------------- C
A ------------ B
A ------------ B
A
A
In the example above, we will have 3 schools (first dimension), with school 1 having 7 participants (second dimension), school 2 having 5 participants and school 3 having 3 participants.
You can also create a second array containing the resulting combinations and, for each chosen pair, delete that pair from the initial array in a loop, until the initial array is completely empty and the result array is completely full.
I think the algorithm in this answer could help.
Basically: group the students by school, and use the error tracking idea behind Bresenham's Algorithm to distribute the schools as far apart as possible. Then you pull out pairs from the list.
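One possible reading of that idea, as a rough Python sketch (my own code, not the linked answer's): keep a per-school error accumulator, always emit the school that is most "overdue", then pull out consecutive pairs.
from collections import Counter

def interleave(students):
    groups = Counter(students)              # school -> number of students
    remaining = dict(groups)
    error = {s: 0.0 for s in groups}        # per-school error accumulator
    result = []
    for _ in range(len(students)):
        for s in remaining:
            error[s] += remaining[s]        # schools with more students left accumulate faster
        pick = max(remaining, key=lambda s: error[s])
        error[pick] -= sum(remaining.values())
        result.append(pick)
        remaining[pick] -= 1
        if remaining[pick] == 0:
            del remaining[pick]
    return result

order = interleave(list("AAAABC"))          # -> ['A', 'B', 'A', 'C', 'A', 'A']
print(list(zip(order[0::2], order[1::2])))  # -> [('A', 'B'), ('A', 'C'), ('A', 'A')]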

Neo4J - Traveling Salesman

I'm trying to solve an augmented TSP problem using a graph database, but I'm struggling. I'm great with SQL, but a total noob with Cypher. I've created a simple graph with cities (nodes) and flights (relationships).
THE SETUP: travel to 8 different cities (1 city per week, no duplicates) with the lowest total flight cost. I'm trying to find an optimal path that minimizes the total cost of the flights, whose prices change each week.
Here is a file on pastebin containing my nodes & relationships. Just run it against Neo4JShell to insert the data.
I started off using this article as a basis but it doesn't handle the changing distances (or in my case flight costs)
I know this is syntactically terrible/non-executable, but here's what I've done so far to get just two flights:
MATCH (a:CITY)-[F1:FLIGHT{week:1}]->(b:CITY) -[F2:FLIGHT{week:2}]->(c:CITY)
RETURN a,b,c;
But that doesn't run.
Next, I thought I'd just try to find all the cities and flights from week one, but that's not working right either, as I get flights where week <> 1 as well as week = 1:
MATCH (n) WHERE (n)-[:FLIGHT { week:1 }]->() RETURN n
Can anyone help out?
PS - I'm not married to using a graph DB to solve this; I've just read about them and thought this problem would be a good fit, plus it gave me a reason to work with them. But so far I'm not having much (or any) success.
Maybe this Cypher query will give you some ideas.
MATCH (from:Node {name: "Source node" })
MATCH path = (from)-[:CONNECTED_TO*6]->()
WHERE ALL(n in nodes(path) WHERE 1 = length(filter(m in nodes(path) WHERE m = n)))
AND length(nodes(path)) = 7
RETURN path,
reduce(distance = 0, edge in relationships(path) | distance + edge.distance)
AS totalDistance
ORDER BY totalDistance ASC
LIMIT 1
It generates all permutations of the available routes whose length equals the number of nodes (7 in this example), computes the total distance of each such path, and returns the shortest one.
neo4j may be a fine piece of software, but I wouldn't expect it to be of much help in solving this NP-hard problem. Instead, I would point you to an integer program solver (this one, perhaps, but I can't vouch for it) and suggest that you formulate this problem as an integer program as follows.
For each flight f, we create a 0-1 variable x(f) that is 1 if flight f is taken and 0 if flight f is not taken. The objective is to minimize the total cost of the flights (I'm going to assume that each purchase is an independent decision; if not, then you have some more work to do).
minimize sum_{flights f} cost(f) x(f)
Now we need some constraints. Each week, we purchase exactly one flight.
for all weeks i, sum_{flights f in week i} x(f) = 1
We can be in only one place at a time, so if we fly into city v for week i, then we fly out of city v for week i+1. We express this constraint with a strange but idiomatic linear equation.
for all weeks i, for all cities v,
sum_{flights f in week i to city v} x(f) -
sum_{flights f in week i+1 from city v} x(f) = 0
We can fly into each city at most once. We can fly out of each city at most once. This is how we enforce the constraint of visiting only once.
for all cities v,
sum_{flights f to city v} x(f) <= 1
for all cities v,
sum_{flights f from city v} x(f) <= 1
We're almost done. I'm going to assume at this point that the journey begins and ends in a home city u known ahead of time. For the first week, delete all flights not departing from u. For the last week, delete all flights not arriving at u. The flexibility of integer programming, however, means that it's easy to make other arrangements.
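For illustration, here is a rough sketch of this integer program using the PuLP modeller in Python (an assumption on my part: the question doesn't mention PuLP, and the flight tuples below are made-up sample data in which week 1 departs from, and the last week returns to, HOME):
import pulp

# (week, from_city, to_city, cost) -- one tuple per purchasable flight
flights = [
    (1, "HOME", "NYC", 120), (1, "HOME", "CHI", 90),
    (2, "NYC", "CHI", 80),   (2, "CHI", "NYC", 85),
    (3, "NYC", "HOME", 70),  (3, "CHI", "HOME", 95),
]
weeks = sorted({f[0] for f in flights})
cities = {c for _, a, b, _ in flights for c in (a, b)}

prob = pulp.LpProblem("tour", pulp.LpMinimize)
x = {f: pulp.LpVariable(f"x_{i}", cat="Binary") for i, f in enumerate(flights)}

# objective: minimize the total cost of the purchased flights
prob += pulp.lpSum(f[3] * x[f] for f in flights)

# exactly one flight per week
for w in weeks:
    prob += pulp.lpSum(x[f] for f in flights if f[0] == w) == 1

# flow conservation: fly into v in week w  =>  fly out of v in week w+1
for w in weeks[:-1]:
    for v in cities:
        prob += (pulp.lpSum(x[f] for f in flights if f[0] == w and f[2] == v)
                 == pulp.lpSum(x[f] for f in flights if f[0] == w + 1 and f[1] == v))

# fly into and out of each city at most once
for v in cities:
    prob += pulp.lpSum(x[f] for f in flights if f[2] == v) <= 1
    prob += pulp.lpSum(x[f] for f in flights if f[1] == v) <= 1

prob.solve()
print([f for f in flights if x[f].value() > 0.5])   # the chosen flights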

Algorithm to separate items of the same type

I have a list of elements, each one identified with a type, and I need to reorder the list to maximize the minimum distance between elements of the same type.
The set is small (10 to 30 items), so performance is not really important.
There's no limit on the number of items per type or the number of types; the data can be considered random.
For example, if I have a list of:
5 items of A
3 items of B
2 items of C
2 items of D
1 item of E
1 item of F
I would like to produce something like:
A, B, C, A, D, F, B, A, E, C, A, D, B, A
A has at least 2 items between occurrences
B has at least 4 items between occurrences
C has 6 items between occurrences
D has 6 items between occurrences
Is there an algorithm to achieve this?
-Update-
After exchanging some comments, I came to a definition of a secondary goal:
main goal: maximize the minimum distance between elements of the same type, considering only the type(s) with the smallest such distance.
secondary goal: maximize the minimum distance for every type, i.e. if a combination increases the minimum distance of a certain type without decreasing another's, then choose it.
-Update 2-
About the answers.
There were a lot of useful answers, although none is a solution to both goals, especially the second one, which is tricky.
Some thoughts about the answers:
PengOne: sounds good, although it doesn't provide a concrete implementation and doesn't always lead to the best result according to the second goal.
Evgeny Kluev: provides a concrete implementation of the main goal, but it doesn't lead to the best result according to the secondary goal.
tobias_k: I liked the random approach; it doesn't always lead to the best result, but it's a good approximation and cost effective.
I tried a combination of Evgeny Kluev's approach, backtracking, and tobias_k's formula, but it needed too much time to get the result.
Finally, at least for my problem, I considered tobias_k's to be the most adequate algorithm, for its simplicity and good results in a timely fashion. It could probably be improved using simulated annealing.
First, you don't have a well-defined optimization problem yet. If you want to maximize the minimum distance between two items of the same type, that's well defined. If you want to maximize the minimum distance between two A's and between two B's and ... and between two Z's, then that's not well defined. How would you compare two solutions:
A's are at least 4 apart, B's at least 4 apart, and C's at least 2 apart
A's at least 3 apart, B's at least 3 apart, and C's at least 4 apart
You need a well-defined measure of "good" (or, more accurately, "better"). I'll assume for now that the measure is: maximize the minimum distance between any two of the same item.
Here's an algorithm that achieves a minimum distance of ceiling(N/n(A)) where N is the total number of items and n(A) is the number of items of instance A, assuming that A is the most numerous.
Order the item types A1, A2, ... , Ak where n(Ai) >= n(A{i+1}).
Initialize the list L to be empty.
For j from k to 1, distribute the items of type Aj as uniformly as possible in L.
Example: Given the distribution in the question, the algorithm produces:
F
E, F
D, E, D, F
D, C, E, D, C, F
B, D, C, E, B, D, C, F, B
A, B, D, A, C, E, A, B, D, A, C, F, A, B
This sounded like an interesting problem, so I just gave it a try. Here's my super-simplistic randomized approach, done in Python:
import random

def optimize(items, quality_function, stop=1000):
    no_improvement = 0
    best = 0
    while no_improvement < stop:
        i = random.randint(0, len(items)-1)
        j = random.randint(0, len(items)-1)
        copy = items[::]
        copy[i], copy[j] = copy[j], copy[i]   # swap two random positions
        q = quality_function(copy)
        if q > best:
            items, best = copy, q             # keep the improvement
            no_improvement = 0
        else:
            no_improvement += 1
    return items
As already discussed in the comments, the really tricky part is the quality function, passed as a parameter to the optimizer. After some trying I came up with one that almost always yields optimal results. Thanks to pmoleri for pointing out how to make this a whole lot more efficient.
def quality_maxmindist(items):
    s = 0
    for item in set(items):
        indcs = [i for i in range(len(items)) if items[i] == item]
        if len(indcs) > 1:
            s += sum(1./(indcs[i+1] - indcs[i]) for i in range(len(indcs)-1))
    return 1./s
And here some random result:
>>> print optimize(items, quality_maxmindist)
['A', 'B', 'C', 'A', 'D', 'E', 'A', 'B', 'F', 'C', 'A', 'D', 'B', 'A']
Note that, passing another quality function, the same optimizer could be used for different list-rearrangement tasks, e.g. as a (rather silly) randomized sorter.
Here is an algorithm that only maximizes the minimum distance between elements of the same type and does nothing beyond that. The following list is used as an example:
AAAAA BBBBB CCCC DDDD EEEE FFF GG
Sort the element sets by the number of elements of each type, in descending order. Actually only the largest sets (A & B) need to be placed at the head of the list, along with the element sets that have one element less (C & D & E); other sets may remain unsorted.
Reserve the R last positions in the array for one element from each of the largest sets, and divide the remaining array evenly between the S-1 remaining elements of the largest sets. This gives the optimal distance: K = (N - R) / (S - 1). Represent the target array as a 2D matrix with K columns and L = N / K full rows (and possibly one partial row with N % K elements). For the example sets we have R = 2, S = 5, N = 27, K = 6, L = 4.
If the matrix has S - 1 full rows, fill the first R columns of this matrix with elements of the largest sets (A & B); otherwise sequentially fill all columns, starting from the last one.
For our example this gives:
AB....
AB....
AB....
AB....
AB.
If we try to fill the remaining columns with other sets in the same order, there is a problem:
ABCDE.
ABCDE.
ABCDE.
ABCE..
ABD
The last 'E' is only 5 positions apart from the first 'E'.
Sequentially fill all columns, starting from last one.
For our example this gives:
ABFEDC
ABFEDC
ABFEDC
ABGEDC
ABG
Returning to linear array we have:
ABFEDCABFEDCABFEDCABGEDCABG
Here is an attempt to use simulated annealing for this problem (C sources): http://ideone.com/OGkkc.
I believe you could see your problem as a bunch of particles that physically repel each other. You could iterate towards a 'stable' situation.
Basic pseudo-code:
force(x, y)               = 0 if x.type == y.type
                            1/distance(x, y) otherwise
nextposition(x, force)    = coined?(x) => same
                            else => x + force
notconverged(row, newrow) = // simplistically
                            row != newrow
row = [a,b,a,b,b,b,a,e];
newrow = nextposition(row);
while( notconverged(row, newrow) )
    row = newrow;
    newrow = nextposition(row);
I don't know if it converges, but it's an idea :)
I'm sure there may be a more efficient solution, but here is one possibility for you:
First, note that it is very easy to find an ordering which produces a minimum-distance-between-items-of-same-type of 1. Just use any random ordering, and the MDBIOST will be at least 1, if not more.
So, start off with the assumption that the MDBIOST will be 2. Do a recursive search of the space of possible orderings, based on the assumption that MDBIOST will be 2. There are a number of conditions you can use to prune branches from this search. Terminate the search if you find an ordering which works.
If you found one that works, try again, under the assumption that MDBIOST will be 3. Then 4... and so on, until the search fails.
UPDATE: It would actually be better to start with a high number, because that will constrain the possible choices more. Then gradually reduce the number, until you find an ordering which works.
Here's another approach.
If every item must be kept at least k places from every other item of the same type, then write down items from left to right, keeping track of the number of items left of each type. At each point put down an item with the largest number left that you can legally put down.
This will work for N items if there are no more than ceil(N / k) items of the same type, as it preserves this property: after putting down k items we have k fewer items, and we have put down at least one of each type that started with ceil(N / k) items of that type.
Given a clutch of mixed items you could work out the largest k you can support and then lay out the items to solve for this k.
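Here is a small sketch of that greedy rule (my own code; "at least k places apart" is taken to mean an index gap of at least k, and the largest workable k is found simply by trying k from large to small):
from collections import Counter

def greedy_layout(items, k):
    remaining = Counter(items)
    last_pos = {}                                   # type -> index of its last placement
    result = []
    for pos in range(len(items)):
        legal = [t for t in remaining
                 if remaining[t] > 0 and pos - last_pos.get(t, -k) >= k]
        if not legal:
            return None                             # greedy got stuck: k is too large
        t = max(legal, key=lambda s: remaining[s])  # largest remaining count wins
        result.append(t)
        remaining[t] -= 1
        last_pos[t] = pos
    return result

items = list("AAAAABBBCCDDEF")                      # the multiset from the question
for k in range(len(items), 0, -1):
    layout = greedy_layout(items, k)
    if layout is not None:
        print(k, layout)
        break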

Resources