I want to write an algorithm to find the sequential reward points.
The inviter gets (1/2)^k points for each confirmed invitation, where k is the level of the invitation: level 0
(people directly invited) yields 1 point, level 1 (people invited by someone invited by the original customer)
gives 1/2 points, level 2 invitations (people invited by someone on level 1) awards 1/4 points and so on.
Only the first invitation counts: additional invites sent to the same person don't produce any further points, even if they come from different inviters.
For instance:
Input:
A recommends B
B accepts
B recommends C
C accepts
C recommends D
B recommends D
D accepts
would calculate as:
A receives 1 Point from the recommendation of B, 0.5 Point from the recommendation of C by B, and another 0.25 Point from the recommendation of D by C. A gets a total score of 1.75 Points.
B receives 1 Point from the recommendation of C and 0.5 Point from the recommendation of D by C. B receives no Points from the recommendation of D because D was invited by C before. B gets a total score of 1.5 Points.
C receives 1 Point from the recommendation of D. C gets a total score of 1 Point.
Output:
{ "A": 1.75, "B": 1.5, "C": 1 }
What should the algorithm for that be? I think dynamic programming has to be used here.
This is simply an ancestor search in a tree. By keeping track of the depth, you know how many points to award.
Pseudocode
def add_points(accepter):
    depth = 0
    while accepter has an inviter:
        accepter.inviter.points += (0.5)^depth
        accepter = accepter.inviter
        depth += 1
This algorithm is O(number of ancestors), and since you need to visit every ancestor to update its points, you can't do any better complexity-wise.
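Here is a minimal runnable sketch of this ancestor walk, assuming a simple Person class with inviter and points attributes (the class and attribute names are illustrative, not part of the question):

class Person:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter
        self.points = 0.0

def add_points(accepter):
    # walk up the chain of inviters, halving the reward at each level
    depth = 0
    while accepter.inviter is not None:
        accepter.inviter.points += 0.5 ** depth
        accepter = accepter.inviter
        depth += 1

# The example from the question: A -> B -> C -> D. B's second invite to D is
# ignored because C's invitation to D was confirmed first.
a = Person("A")
b = Person("B", inviter=a); add_points(b)
c = Person("C", inviter=b); add_points(c)
d = Person("D", inviter=c); add_points(d)
print(a.points, b.points, c.points)   # 1.75 1.5 1.0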
I have the following problem:
A group of n players wants to play a set of matches. Each match has m
participants. I want to find a schedule with a minimum number of games
where every player meets every other player at least once, with maximum
variety of opponents.
After some research I found that the "social golfer problem" seems to be a similar problem, but I could not find a solution I could adapt, nor could I come up with my own solution.
Pseudocode (assuming there are flags inside the players):
function make_game(x, y):              # x: array of players, y: players per game
    z = []                             # players in this game
    for each player t in x:
        if z.length == y: break out of the loop
        if t's flag is not set and z.length < y:
            set t's flag and add t to z
    if z.length < 2:
        change the flags of the players in z back to false
    return z if z.length == y else false
Start with player A; (assume players A to F, 3 players per game)
By going through from top to bottom, we can eliminate possibilities. Start with each person playing all the other players they have not already played, in groups of 3; for example, B skips C because B already played with C in ABC. We can write a function to do this (see the pseudocode at the top).
A B C (save this game to a list of games, or increment a counter or something)
A D E
A F -missing (returned false so we did not save this)
B D E
B F -missing
C D E
C F -missing
D E F
Now, almost all players have played each other, if you only count the groups of 3. This is 5 games so far. Remove the games we've already counted, resulting in
A F -missing
B F -missing
C F -missing
What is in common here? They all have F. That means that F must play everyone in this list, so all we need to do is put F in the front.
We can now do F A B, and then C F + any random player. This is the minimum 7 games.
Basically, you can run the pseudocode over and over until it returns false twice in a row; at that point you know that all flags have been set.
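For reference, here is a self-contained Python sketch in the spirit of this walkthrough (the function name and structure are mine, not part of the answer). It tracks which opponents each player has not met yet, instead of per-player flags, and anchors each game on the player with the most unmet opponents. It is a heuristic rather than a guaranteed minimum, but for the A to F example it also finds 7 games:

from itertools import combinations

def greedy_schedule(players, m):
    unmet = {p: set(players) - {p} for p in players}   # opponents not met yet
    games = []
    while any(unmet.values()):
        # anchor the game on the player with the most unmet opponents
        anchor = max(players, key=lambda p: len(unmet[p]))
        game = [anchor]
        while len(game) < m:
            # prefer the player unmet by as many current participants as possible
            candidates = [p for p in players if p not in game]
            game.append(max(candidates,
                            key=lambda p: sum(p in unmet[q] for q in game)))
        for a, b in combinations(game, 2):
            unmet[a].discard(b)
            unmet[b].discard(a)
        games.append(game)
    return games

print(greedy_schedule(list("ABCDEF"), 3))   # 7 games for this example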
This may not be a complete solution, but... Consider a graph with n nodes. A match with m players can be represented by laying m-1 edges into the graph per round. The requirement that each player meet each other player at least once means that you will after some number of rounds have a complete graph.
For round (match) 1, lay an arbitrary set of m-1 edges. For each next round, lay m-1 edges that are not currently connecting two nodes. Repeat until the graph is complete.
Edit: Edges would need to be laid connected to ensure only m players are in a match for m-1 edges, which would make this a little more difficult. If you lay each round in a walk of the complete graph, the problem is then the same as finding the shortest walk of the complete graph. This answer to a different question may be relevant, and suggests the Floyd-Warshall algorithm.
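One way to make this concrete, ignoring the connected-walk refinement and treating a round simply as a choice of m players, is a greedy that each round lays the group of edges covering the most still-missing pairs (the function name is mine; this is brute force over the m-subsets, so it is only practical for small n):

from itertools import combinations

def rounds_until_complete(players, m):
    # every unmet pair is a missing edge; stop once the pair graph is complete
    missing = set(combinations(sorted(players), 2))
    rounds = []
    while missing:
        group = max(combinations(sorted(players), m),
                    key=lambda g: len(set(combinations(g, 2)) & missing))
        rounds.append(group)
        missing -= set(combinations(group, 2))
    return rounds

print(len(rounds_until_complete("ABCDEF", 3)))   # 7 rounds for 6 players, 3 per match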
There are N people at a party. Each one has some preferences of food and drinks. Given all the types of foods and drinks that a particular person prefers, find the maximum number of people that can be assigned a drink and a food of their choice.
A person may have several choices for both food and drinks, for example, a person may like Foods A,B,C and Drinks X,Y,Z. If we assign (A,Z) to the person, we consider the person to have been correctly assigned.
How do we solve this problem, considering that there are two constraints we need to handle?
Let F be the set of all foods, D the set of all drinks, and P the set of all people.
Build two bipartite graphs G and G': in G the first partite set is P and the second is F; in G' the first partite set is P and the second is D. Find a maximum matching on G and on G' separately. Call M the maximum matching on G and M' the maximum matching on G'. M is a list of vertex pairs (p1, f1), (p2, f2), ... where pi and fi are people and foods respectively. M' is also a list of vertex pairs: (p1, d1), (p3, d3), ...
Now, merge M and M' by merging the pair with the same person: (p1,f1) + (p1,d1) = (p1,f1,d1) and that is the food-drink combo for p1. Say if p2 has a matching with f2 but p2 has no matching in G' (no drink), then ignore it.
A good algorithm for bipartite graph matching is the Hopcroft-Karp algorithm: http://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm
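A sketch of this approach in Python, using networkx's Hopcroft-Karp implementation. The dicts food_prefs and drink_prefs are illustrative inputs mapping each person to the foods/drinks they like, and food/drink nodes are tagged with a tuple so their names cannot collide with people's names:

import networkx as nx
from networkx.algorithms import bipartite

def assign(food_prefs, drink_prefs):
    G, Gp = nx.Graph(), nx.Graph()
    G.add_nodes_from(food_prefs)         # include people with no preferences
    Gp.add_nodes_from(drink_prefs)
    G.add_edges_from((p, ('food', f)) for p, fs in food_prefs.items() for f in fs)
    Gp.add_edges_from((p, ('drink', d)) for p, ds in drink_prefs.items() for d in ds)
    M = bipartite.hopcroft_karp_matching(G, top_nodes=set(food_prefs))
    Mp = bipartite.hopcroft_karp_matching(Gp, top_nodes=set(drink_prefs))
    # merge: keep only the people matched in both M and M'
    return {p: (M[p][1], Mp[p][1]) for p in food_prefs if p in M and p in Mp}

print(assign({'p1': ['A', 'B'], 'p2': ['A']}, {'p1': ['X'], 'p2': ['X', 'Y']}))
# {'p1': ('B', 'X'), 'p2': ('A', 'Y')}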
I would like to schedule shifts so that people work in pairs.
1. The principal constraint is that a person should not work with the person they worked with in their previous shift.
2. There are no constraints on the times of the shifts; I just need to match a pair with a day.
For example if {A,B.. F} representing people then
Day 1: A-B
Day 2: C-D
Day 3: E-F
Day 4: A-B <--- Wrong, because constraint (1.) is violated
My solution is:
Define a fitness function T, where T(A) = 0 if A worked today, and T(A) = c if A last worked c days ago.
The steps would be:
Initialize pairs randomly (respecting the constraints, obviously)
Create a new pair (i, j): take the person i with the highest fitness and the person j with the second-highest fitness, provided that T(j) != T(i) (to ensure a different pair each time)
Update the fitness values
Is there a better way to solve this problem?
Is there something similar in the literature I can consult, such as related problems?
Is the nurse shift scheduling problem similar to this?
Thank you.
Your problem is underspecified, leading to trivial solutions. For example, pick any three people, {A, B, C}, and schedule AB, AC, BC (repeating).
If you want a fairer solution: keep picking a random pair each day until you find a viable pair. There's at most N non-viable pairs and N(N-1)/2 possible pairs.
Here's one way to do it:
import random

def schedule(folk):
    # remember each person's most recent partner; a pair is viable only if
    # neither member worked with the other on their previous shift
    excluded = dict((p, None) for p in folk)
    while True:
        while True:
            a, b = random.sample(folk, 2)
            if excluded[a] != b and excluded[b] != a:
                break
        excluded[a], excluded[b] = b, a
        yield a, b

# schedule() is an infinite generator, so this prints pairs forever
for a, b in schedule('ABCDE'):
    print('%s%s' % (a, b), end=' ')
I have a list of elements, each one identified with a type. I need to reorder the list to maximize the minimum distance between elements of the same type.
The set is small (10 to 30 items), so performance is not really important.
There's no limit on the number of items per type or the number of types, and the data can be considered random.
For example, if I have a list of:
5 items of A
3 items of B
2 items of C
2 items of D
1 item of E
1 item of F
I would like to produce something like:
A, B, C, A, D, F, B, A, E, C, A, D, B, A
A has at least 2 items between occurrences
B has at least 4 items between occurrences
C has 6 items between occurrences
D has 6 items between occurrences
Is there an algorithm to achieve this?
-Update-
After exchanging some comments, I came to a definition of a secondary goal:
main goal: maximize the minimum distance between elements of the same type, considering only the type(s) with the smallest distance.
secondary goal: maximize the minimum distance between elements of every type, i.e. if a combination increases the minimum distance of a certain type without decreasing another, then choose it.
-Update 2-
About the answers.
There were a lot of useful answers, although none is a solution to both goals, especially the second one, which is tricky.
Some thoughts about the answers:
PengOne: Sounds good, although it doesn't provide a concrete implementation, and not always leads to the best result according to the second goal.
Evgeny Kluev: Provides a concrete implementation to the main goal, but it doesn't lead to the best result according to the secondary goal.
tobias_k: I liked the random approach, it doesn't always lead to the best result, but it's a good approximation and cost effective.
I tried a combination of Evgeny Kluev's approach, backtracking, and tobias_k's formula, but it needed too much time to get the result.
Finally, at least for my problem, I considered tobias_k's to be the most suitable algorithm, for its simplicity and good results in a timely fashion. It could probably be improved using simulated annealing.
First, you don't have a well-defined optimization problem yet. If you want to maximize the minimum distance between two items of the same type, that's well defined. If you want to maximize the minimum distance between two A's and between two B's and ... and between two Z's, then that's not well defined. How would you compare two solutions:
A's are at least 4 apart, B's at least 4 apart, and C's at least 2 apart
A's at least 3 apart, B's at least 3 apart, and C's at least 4 apart
You need a well-defined measure of "good" (or, more accurately, "better"). I'll assume for now that the measure is: maximize the minimum distance between any two of the same item.
Here's an algorithm that achieves a minimum distance of ceiling(N/n(A)) where N is the total number of items and n(A) is the number of items of instance A, assuming that A is the most numerous.
Order the item types A1, A2, ... , Ak where n(Ai) >= n(A{i+1}).
Initialize the list L to be empty.
For j from k down to 1, distribute the items of type Aj as uniformly as possible in L.
Example: Given the distribution in the question, the algorithm produces:
F
E, F
D, E, D, F
D, C, E, D, C, F
B, D, C, E, B, D, C, F, B
A, B, D, A, C, E, A, B, D, A, C, F, A, B
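Here is a naive Python sketch of step 3 under one reading of "as uniformly as possible": insert each type's items at evenly spaced positions, least frequent type first (the function name is mine). The ordering it produces differs from the hand-worked example above, but for this input it also keeps the A's ceiling(N/n(A)) = 3 positions apart:

from collections import Counter

def spread(items):
    result = []
    # least frequent type first, i.e. "for j from k down to 1"
    for t, n in sorted(Counter(items).items(), key=lambda kv: kv[1]):
        new_len = len(result) + n
        gap = (new_len - 1) // (n - 1) if n > 1 else new_len
        slots = {i * gap for i in range(n)}     # evenly spaced target positions
        old = iter(result)
        result = [t if i in slots else next(old) for i in range(new_len)]
    return result

print(spread(list("AAAAABBBCCDDEF")))
# ['A', 'B', 'D', 'A', 'C', 'F', 'A', 'B', 'E', 'A', 'C', 'D', 'A', 'B']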
This sounded like an interesting problem, so I just gave it a try. Here's my super-simplistic randomized approach, done in Python:
import random

def optimize(items, quality_function, stop=1000):
    no_improvement = 0
    best = quality_function(items)          # quality of the initial arrangement
    while no_improvement < stop:
        # swap two random positions and keep the copy only if it improves
        i = random.randint(0, len(items) - 1)
        j = random.randint(0, len(items) - 1)
        copy = items[::]
        copy[i], copy[j] = copy[j], copy[i]
        q = quality_function(copy)
        if q > best:
            items, best = copy, q
            no_improvement = 0
        else:
            no_improvement += 1
    return items
As already discussed in the comments, the really tricky part is the quality function, passed as a parameter to the optimizer. After some trying I came up with one that almost always yields optimal results. Thanks to pmoleri for pointing out how to make this a whole lot more efficient.
def quality_maxmindist(items):
    # sum of the reciprocals of the gaps between consecutive occurrences of
    # each type: the smaller the sum, the better the items are spread out
    s = 0
    for item in set(items):
        indcs = [i for i in range(len(items)) if items[i] == item]
        if len(indcs) > 1:
            s += sum(1. / (indcs[i + 1] - indcs[i]) for i in range(len(indcs) - 1))
    return 1. / s
And here is a sample result:
>>> print(optimize(items, quality_maxmindist))
['A', 'B', 'C', 'A', 'D', 'E', 'A', 'B', 'F', 'C', 'A', 'D', 'B', 'A']
Note that, passing another quality function, the same optimizer could be used for different list-rearrangement tasks, e.g. as a (rather silly) randomized sorter.
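For instance, a toy quality function along the following lines (purely illustrative) rewards pairs that are already in order, which turns the optimize function above into exactly such a randomized sorter:

def quality_sorted(items):
    # counts pairs that are in order; maximal exactly when the list is sorted
    return sum(1 for i in range(len(items))
               for j in range(i + 1, len(items)) if items[i] <= items[j])

print(optimize(list("BDACE"), quality_sorted))   # typically ['A', 'B', 'C', 'D', 'E']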
Here is an algorithm that only maximizes the minimum distance between elements of the same type and does nothing beyond that. The following list is used as an example:
AAAAA BBBBB CCCC DDDD EEEE FFF GG
Sort the element sets by the number of elements of each type, in descending order. Actually only the largest sets (A & B) need to be placed at the head of the list, along with those sets that have one element less (C & D & E); the other sets may be left unsorted.
Reserve the R last positions in the array for one element from each of the largest sets, and divide the remaining array evenly between the S - 1 remaining elements of the largest sets (S being the size of the largest sets). This gives the optimal distance: K = (N - R) / (S - 1). Represent the target array as a 2D matrix with K columns and L = N / K full rows (and possibly one partial row with N % K elements). For the example sets we have R = 2, S = 5, N = 27, K = 6, L = 4.
If the matrix has S - 1 full rows, fill the first R columns of this matrix with elements of the largest sets (A & B); otherwise sequentially fill all columns, starting from the last one.
For our example this gives:
AB....
AB....
AB....
AB....
AB.
If we try to fill the remaining columns with other sets in the same order, there is a problem:
ABCDE.
ABCDE.
ABCDE.
ABCE..
ABD
The last 'E' is only 5 positions apart from the previous 'E'.
Instead, sequentially fill the remaining columns, starting from the last one.
For our example this gives:
ABFEDC
ABFEDC
ABFEDC
ABGEDC
ABG
Returning to linear array we have:
ABFEDCABFEDCABFEDCABGEDCABG
Here is an attempt to use simulated annealing for this problem (C sources): http://ideone.com/OGkkc.
I believe you could view your problem as a bunch of particles that physically repel each other. You could iterate to a 'stable' configuration.
Basic pseudo-code:
force(x, y)            = 0 if x.type == y.type
                         1 / distance(x, y) otherwise
nextposition(x, force) = same if coined?(x)
                         x + force otherwise
notconverged(row, newrow) = row != newrow   // simplistically

row = [a, b, a, b, b, b, a, e]
newrow = nextposition(row)
while notconverged(row, newrow):
    row = newrow
    newrow = nextposition(row)
I don't know if it converges, but it's an idea :)
I'm sure there may be a more efficient solution, but here is one possibility for you:
First, note that it is very easy to find an ordering which produces a minimum distance between items of the same type (MDBIOST) of 1. Just use any random ordering, and the MDBIOST will be at least 1, if not more.
So, start off with the assumption that the MDBIOST will be 2. Do a recursive search of the space of possible orderings, based on the assumption that MDBIOST will be 2. There are a number of conditions you can use to prune branches from this search. Terminate the search if you find an ordering which works.
If you found one that works, try again, under the assumption that MDBIOST will be 3. Then 4... and so on, until the search fails.
UPDATE: It would actually be better to start with a high number, because that will constrain the possible choices more. Then gradually reduce the number, until you find an ordering which works.
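To make the search concrete, here is a small backtracking sketch, assuming "distance" means difference of positions (the function names are mine, and none of the pruning conditions mentioned above are included; that is fine for 10 to 30 items, though the worst case is exponential). The driver starts from a high k and reduces it, as suggested in the UPDATE:

from collections import Counter

def order_with_min_gap(items, k):
    # build an ordering where items of the same type are at least k positions
    # apart; return None if no such ordering exists
    counts = Counter(items)
    n = len(items)
    result, last = [], {}           # last: type -> index of its latest placement

    def place(i):
        if i == n:
            return True
        for t in counts:
            if counts[t] and (t not in last or i - last[t] >= k):
                counts[t] -= 1
                prev = last.get(t)
                last[t] = i
                result.append(t)
                if place(i + 1):
                    return True
                result.pop()        # undo and try the next type
                counts[t] += 1
                if prev is None:
                    del last[t]
                else:
                    last[t] = prev
        return False

    return list(result) if place(0) else None

def best_order(items):
    # start with a high k and reduce it until an ordering is found
    for k in range(len(items), 0, -1):
        ordering = order_with_min_gap(items, k)
        if ordering is not None:
            return k, ordering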
Here's another approach.
If every item must be kept at least k places from every other item of the same type, then write down items from left to right, keeping track of the number of items left of each type. At each point put down an item with the largest number left that you can legally put down.
This will work for N items if there are no more than ceil(N / k) items of the same type, as it preserves this property: after putting down k items we have k fewer items, and we have put down at least one of each type that started with ceil(N / k) items of that type.
Given a clutch of mixed items you could work out the largest k you can support and then lay out the items to solve for this k.
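A sketch of this greedy, assuming "k places apart" means a difference of at least k in position (the function name is mine): a heap keyed on remaining count picks the most numerous legal type, and a small cooldown queue keeps a type unavailable for the next k - 1 slots. The driver works out the largest k it can support, as suggested above:

import heapq
from collections import Counter, deque

def lay_out(items, k):
    # always place the type with the most copies left whose previous copy is
    # at least k positions back; return None if the greedy gets stuck
    heap = [(-n, t) for t, n in Counter(items).items()]
    heapq.heapify(heap)
    result = []
    cooling = deque()                      # (index when legal again, -remaining, type)
    while heap or cooling:
        if cooling and cooling[0][0] <= len(result):
            _, neg, t = cooling.popleft()  # this type is legal again
            heapq.heappush(heap, (neg, t))
        if not heap:
            return None                    # copies remain but none may be placed yet
        neg, t = heapq.heappop(heap)
        result.append(t)
        if neg + 1 < 0:                    # copies of t still left: cool it down
            cooling.append((len(result) - 1 + k, neg + 1, t))
    return result

items = list("AAAAABBBCCDDEF")
for k in range(len(items), 0, -1):
    ordering = lay_out(items, k)
    if ordering is not None:
        print(k, "".join(ordering))        # prints 3 and an ordering where equal
        break                              # letters are at least 3 apart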