How to optimize the algorithm to find the max_depth_contact_series in a time varying graph? - algorithm

Assume there is a time-varying graph with N nodes named a1, a2, ..., an, and a contact series of entries t node1 node2, meaning node1 contacts node2 at time t.
Assume node a1 carries a message (there is only one copy of the message in the graph). Starting from time 0, how many nodes can the message contact at most within time T? The message can be transferred to another node freely at any time. For example, a1 can choose to transfer it to a2 at time 2, or keep the message until a1 contacts a3 and then transfer it to a3.
Here is an example to make it more clear. For a graph with 6 nodes and contact series:
1 a1 a2
2 a1 a3
3 a1 a4
4 a3 a5
6 a3 a6
10 a4 a3
During time 0~10 the message can contact 4 nodes at most: a2, a3, a5, a6, with the message transferred from a1 to a3 at time 2.
Keep the time series in mind. Here a1 carries the message but transfers it to a3 at time 2. Then at time 3 node a1 no longer has the message, so the message cannot contact a4. If a1 keeps the message at time 2 instead of transferring it to a3, the best the message can do is the contact list a2, a3, a4, a3 (handing the message to a4 at time 3, which then meets a3 again at time 10); the contact set is then {a2, a3, a4} with size 3, which is smaller than 4.
How can I get the largest contact set? Or just its size?
At present I compute it with a recursive algorithm, but the cost is unbearable when T is large.
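For concreteness, a brute-force recursion along the lines described above might look like the following Python sketch. It treats the window as [0, T), which matches the example's count of 4, and it branches on keep versus transfer at every contact involving the current carrier, so it is exponential in the number of contacts, which is exactly the blow-up described.

def max_contacts(contacts, start, T):
    # contacts: iterable of (t, u, v); only contacts strictly before T are considered
    events = sorted(c for c in contacts if c[0] < T)

    def go(i, carrier, met):
        if i == len(events):
            return len(met)
        t, u, v = events[i]
        if carrier not in (u, v):
            return go(i + 1, carrier, met)
        other = v if carrier == u else u
        met = met | {other}
        # branch: keep the message, or transfer it to the node just contacted
        return max(go(i + 1, carrier, met), go(i + 1, other, met))

    return go(0, start, frozenset())

series = [(1, 'a1', 'a2'), (2, 'a1', 'a3'), (3, 'a1', 'a4'),
          (4, 'a3', 'a5'), (6, 'a3', 'a6'), (10, 'a4', 'a3')]
print(max_contacts(series, 'a1', 10))  # 4 -> {a2, a3, a5, a6}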

Related

DAX running total + starting value

I am fairly new to the DAX universe, but scrolling around I managed to successfully implement a cumulative (running) total, with a measure defined along this structure:
Running_Total_QTY := CALCULATE(SUM(Reporting[QTY]), FILTER(ALL(Reporting[DATE_R]), Reporting[DATE_R] <= MAX(Reporting[DATE_R])))
For a table that looks like this:
ID DATE_R QTY
A1 5/11/2018 9:00 5
A1 5/11/2018 9:01 10
A1 5/11/2018 9:01 -5
A1 5/11/2018 9:02 50
A1 5/11/2018 9:05 -20
B1 5/11/2018 9:00 3
B1 5/11/2018 9:01 -20
B1 5/11/2018 9:01 4
B1 5/11/2018 9:02 20
B1 5/11/2018 9:03 10
The problem is that I would need to add a starting quantity (QTY_INIT) to this running total, which I receive from another table that looks like this:
ID1 QTY_INIT
A1 100
B1 200
By trial and error I have succeeded by creating a second measure that calculates the average (of 1 item!) defined like this:
Average_starting_quantity:=CALCULATE(AVERAGE(Starting_Quantity[QTY_INIT]),FILTER(ALL(Starting_Quantity[ID1]),Starting_Quantity[ID1]=LASTNONBLANK(Reporting[ID],TRUE())))
And then just adding the two measures together.
Running_plus_total:=[Running_Total_QTY]+[Average_starting_quantity]
This method works, but is very inefficient and very slow (the data set is quite big).
How can I add QTY_INIT from the second table directly, without using a "fake" average (or max, min, etc.)? How can I optimize the measure for faster performance?
Thanks in advance for any help.
Regards
How about this instead of your Average_starting_quantity?
StartingQty = LOOKUPVALUE(Starting_Quantity[QTY_INIT], Starting_Quantity[ID1], MAX(Reporting[ID]))
If your tables are related on ID and ID1 with cross filter direction going both ways,
then you can just use
StartingQty = MAX(Starting_Quantity[QTY_INIT])
since the filter context on ID will flow through to ID1.

Uniqueness in Permutation and Combination

I am trying to create some pseudocode to generate possible outcomes for this scenario:
There is a tournament taking place where, in each round, every player in the tournament is placed in a group with players from other teams.
Given x teams, each with exactly n players, what are the possible groupings of size r such that a group may contain only one player from each team AND no player may have already played with any of the other players in a previous round?
Example: 4 teams (A-D), 4 players each team, 4 players each grouping.
Possible groupings are: (correct team constraint)
A1, B1, C1, D1
A1, B3, C1, D2
But not: (violates same team constraint)
A1, A3, C2, D2
B3, C2, D4, B1
However, the uniqueness constraint comes into play in these groupings:
A1, B1, C1, D1
A1, B3, C1, D2
While this does follow the constraint of playing with different teams, it breaks the rule of uniqueness of playing with different players: in this case A1 is grouped with C1 twice.
At the end of the day, the pseudocode should be able to create something like the following:
Round 1 Round 2 Round 3 Round 4
a1 b1 a1 d4 a1 c2 a1 c4
c1 d1 b2 c3 b4 d3 d2 b3
a2 b2 a2 d1 a2 c3 a2 c1
c2 d2 b3 c4 b1 d4 d3 b4
a3 b3 a3 d2 a3 c4 a3 c2
c3 d3 b4 c1 b2 d1 d4 b1
a4 b4 a4 d3 a4 c1 a4 c3
c4 d4 b1 c2 b3 d2 d1 b2
In the example you can see that in each round no player is grouped with a player they were already grouped with in a previous round.
If the number of players on a team is a prime power (2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, etc.), then here's an algorithm that creates a schedule with the maximum number of rounds, based on a finite affine plane.
We work in the finite field GF(n), where n is the number of players on a team. GF(n) has its own notion of multiplication: when n is a prime, it's multiplication mod n, and when n is a higher power of a prime, it's multiplication of univariate polynomials mod some irreducible polynomial of the appropriate degree. Each team is identified by an element of GF(n); let the set of team identifiers be T. Each team member is identified by a pair in T×GF(n). For each element r of GF(n), the groups for round r are
{{(t, r*t + c) | t in T} | c in GF(n)},
where * and + denote multiplication and addition respectively in GF(n).
Implementation in Python 3
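That link aside, here is a rough sketch of the construction for the simplest case, where n is prime and GF(n) arithmetic is just arithmetic mod n (the prime-power case needs polynomial arithmetic instead); the loop at the end checks that no cross-team pair of players ever meets twice:

from itertools import combinations

def schedule(n):
    # round r (slope), group c (intercept): player (t, r*t + c mod n) for each team t
    return [[[(t, (r * t + c) % n) for t in range(n)] for c in range(n)]
            for r in range(n)]

# sanity check for n = 5: no cross-team pair of players ever meets twice
seen = set()
for groups in schedule(5):
    for group in groups:
        for pair in combinations(group, 2):
            assert pair not in seen, "pair met twice"
            seen.add(pair)
print(len(seen))  # 250 distinct pairs, i.e. every cross-team pair met exactly once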
This problem is very closely related to the Social Golfer Problem. The Social Golfer Problem asks, given n players who each play once a day in g groups of size s (n = g×s), how many days can they be scheduled such that no player plays with any other player more than once?
The algorithms for finding solutions to instances of Social Golfer problems are a patchwork of constraint solvers and mathematical constructions, which together don't address very many cases satisfactorily. If the number of players on a team is equal to the group size, then solutions to this problem can be derived by interpreting the first day's schedule as the team assignments and then using the rest of the schedule. There may be other constructions.

Find the best set among many sets based on its items' cost

I have items in sets, as in the example below. Each item has a particular cost.
I have a maximum budget. I need to form combinations in such a way that each combination contains at least one item from each set and the sum of the costs equals my budget.
Example
A = [a1, a2, a3, a4, ... , a10]
B = [b1, b2, b3, b4, ... , b10]
C = [c1, c2, c3, c4, ... , c10] (and so on, possibly up to set G)
Max budget = 10
cost of a1 = 2
a2 = 8
b1 = 1
b2 = 7
c1 = 3
c2 = 1
etc
Output can be
[a1, b2, c2], i.e. 2+7+1 = 10
[a2, b1, c2], i.e. 8+1+1 = 10
[a1, b1, c1], i.e. 2+1+3 = 6, eliminated (since 6 != 10)
and so on
I can have at most 7 sets with 10 items in each, so the maximum number of combinations is 10^7. Is there an algorithm to achieve this efficiently? I followed a brute force method and it is too expensive.
Thank you.
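For concreteness, the brute force described above might look like the following sketch (Python assumed; the two-item sets are just the example costs). It picks one item from each set and keeps the combinations whose costs sum to the budget, which with 7 sets of 10 items is exactly the 10^7-combination search that becomes too expensive.

from itertools import product

sets = {'A': {'a1': 2, 'a2': 8}, 'B': {'b1': 1, 'b2': 7}, 'C': {'c1': 3, 'c2': 1}}
budget = 10

for combo in product(*(sets[name] for name in sets)):
    cost = sum(sets[name][item] for name, item in zip(sets, combo))
    if cost == budget:
        print(list(combo), cost)  # ['a1', 'b2', 'c2'] 10 and ['a2', 'b1', 'c2'] 10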

scheduling algorithm shortest job first

I am trying to understand how the shortest job first algorithm works. Am I doing this the right way? Please help.
Proc Burst1 Burst2
+------+---------+--------+
| A | 10 | 5 |
| B | 3 | 9 |
| C | 8 | 11 |
+------+---------+--------+
B1->3->C1->11->B2->20->A1->30->A2->35->C2->46
"Shortest job first" is not really an algorithm, but a strategy: among the jobs ready to execute always choose the job with the shortest execution time. Your sequence looks ok. In the beginning the following jobs are ready for execution (with execution time in parenthesis):
A1(10), B1(3), C1(8)
So B1 is chosen, after which also job B2 is ready to execute, so here is the updated list of ready jobs:
A1(10), B2(9), C1(8)
Now C1 is chosen, and so on.
There are variants of the strategy "shortest job first", where the total time over all bursts, i.e. A1 + A2, B1 + B2, ..., is taken into account. Then the chosen sequence would be:
B1, B2, A1, A2, C1, C2
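One way to simulate the per-burst rule described above might be the following sketch (Python assumed): each process contributes a queue of bursts, a process's next burst becomes ready only when its previous burst finishes, and the shortest ready burst always runs next.

from collections import deque

def sjf(bursts):
    # bursts: {process: [burst1, burst2, ...]}
    queues = {p: deque(b) for p, b in bursts.items()}
    ready = {p: q.popleft() for p, q in queues.items()}  # first burst of each process
    t, order = 0, []
    while ready:
        p = min(ready, key=ready.get)   # pick the shortest ready burst
        t += ready.pop(p)
        order.append((p, t))
        if queues[p]:                   # the process's next burst becomes ready
            ready[p] = queues[p].popleft()
    return order

print(sjf({'A': [10, 5], 'B': [3, 9], 'C': [8, 11]}))
# [('B', 3), ('C', 11), ('B', 20), ('A', 30), ('A', 35), ('C', 46)]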

What is the best way to distribute n forms in c categories between u users? [closed]

I have asked this question in cstheory too
I have a form distribution problem. There are n forms in c categories (each form is in exactly one category), and there are u users, each of whom can receive forms from at least one category (but possibly more than one).
The goal is to distribute the forms among the users so that each user receives the same number of forms. I would also prefer to use the categories equally.
For example:
If categories are:
C1 : 20 forms
C2 : 3 forms
C3 : 8 forms
C4 : 2 forms
And users are:
U1 with access to C1 and C2
U2 with access to C2
U3 with access to C3
U4 with access to C1 and C3
U5 with access to C2 and C4
The answer should be:
U1: 1 x C1 + 1 x C2 | 2 x C1 (preferred)
U2: 2 x C2
U3: 2 x C3
U4: 1 x C1 + 1 x C3 | 2 x C1 (preferred) | 2 x C3
U5: 2 x C4
And 23 forms remain.
Do you have any suggestion on how can I write such algorithm?
There is also a second question: some categories may have a SHOULD CONTRIBUTE option. If it is set, all remaining forms in that category are distributed among the users who have access to it. For example, if C1 has this option enabled, the answer should be:
U1: 1 x C1 + 1 x C2 + 9 C1
U2: 2 x C2
U3: 2 x C3
U4: 2 x C3 (to minimize remaining forms in C3 category) + 10 C1
U5: 2 x C4
and remaining forms would be 0 in C1, 0 in C2, 4 in C3 and 0 in C4.
I think it's something like a bin packing problem, but I am not sure, and I don't know how to solve it! :(
Note: the answers above are not necessarily the best answers; they are just what I think!
It seems to me that if you fix a number N of forms per user and ask the question "can we give N forms to each user?", then you can turn this into a maximum flow problem (http://en.wikipedia.org/wiki/Maximum_flow_problem), where each user can receive flow/forms from their subset of categories, and there is an outflow of capacity N from each user. Also, if you can solve this problem for N, you can solve it for all lesser values of N.
So you could solve the first problem by running max-flow lg (maximum N) times, using a binary chop to find out what the best possible value of N is. Since you can solve it by max flow, you can also solve it by linear programming. Doing it this way, perhaps just for the critical value of N, might allow you to favour some assignments over others, or perhaps to see where there are neighbouring feasible solutions, and then see if you can mix them to use categories equally.
Example - Create a source, and link it to each of the categories Ci, with the capacity of the link being the number of forms available in that category, so C1 gets a link from the source of capacity 20. Create links with their source's capacity between users and categories, where the user has access to the category, so U1 gets links to C1 and C2, but U2 only gets a link to C2. Now create links of capacity N from each user to a single sink.
If there is an assignment of forms to users that gives every user N forms, then this will produce a maximum flow that fills every link from user to sink, and you can look at the flows between users and categories to see how to assign forms.
You could start off with N = 3, because user 2 only has access to a total of 3 forms, so the answer can't be greater than that. That won't work because you have said the right answer has N = 2, so the max flow won't fill all the N=3 capacity links. So your program tries again at 3/2 = 1, and finds a solution - you have provided a solution for N = 2, so there must be one for N = 1. Now the program knows there is a solution for N = 1 but not one for N = 3, so it tries one halfway between at N = (1 + 3) / 2 = 2, and finds your solution. There is one for N = 2 but not for N = 3, so the N = 2 solution is the best you can do.
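A sketch of that feasibility check plus the binary chop, using the question's example and networkx for the max-flow computation (the library choice is an assumption; any max-flow routine would do):

import networkx as nx

categories = {'C1': 20, 'C2': 3, 'C3': 8, 'C4': 2}
access = {'U1': ['C1', 'C2'], 'U2': ['C2'], 'U3': ['C3'],
          'U4': ['C1', 'C3'], 'U5': ['C2', 'C4']}

def feasible(n):
    # can every user receive exactly n forms?
    g = nx.DiGraph()
    for c, forms in categories.items():
        g.add_edge('source', c, capacity=forms)
    for u, cats in access.items():
        for c in cats:
            g.add_edge(c, u, capacity=categories[c])
        g.add_edge(u, 'sink', capacity=n)
    value, _ = nx.maximum_flow(g, 'source', 'sink')
    return value == n * len(access)

# binary chop on the per-user amount N; U2 caps the upper bound at 3 as noted above
lo, hi = 0, min(sum(categories[c] for c in cats) for cats in access.values())
while lo < hi:
    mid = (lo + hi + 1) // 2
    if feasible(mid):
        lo = mid
    else:
        hi = mid - 1
print(lo)  # 2 for this example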
