Problem
I have a set of events, some of them connected, and these connections define an order: the events must be held in that order. A connection may carry a min and/or a max requirement for the distance between the connected events. Let the distance be in days.
I use a directed acyclic graph to represent my model.
I need to place these events on the given number of days, respecting the defined order and the min/max requirements. The distribution should tend to be even, and the events should be stretched over the whole given range of days.
What approaches or algorithms would you suggest for solving this problem? I tried to find a solution with topological sorting or constraint ordering, but with little to no success.
Example
We have a set of events a, b, c with the following connections: a -> b, b -> c, a -> c.
The given number of days for distribution is 7.
a. Without any requirements for the distance between connected events.
Then the best solution would be
1 2 3 4 5 6 7
a     b     c
b. With a requirement that the distance in days between events (a, b) is in [1, 2].
Then the best solution would be
1 2 3 4 5 6 7
a   b       c
c. With requirements that the distance in days between events (a, b) is in [1, 2] and between events (a, c) is <= 4.
Then the best solution would be
1 2 3 4 5 6 7
a   b   c
d. With requirements that the distance in days between events (a, b) is in [1, 2], between events (a, c) is <= 4, and between events (b, c) is >= 3.
Then the best solution would be
1 2 3 4 5 6 7
a b     c
EDIT:
There may be multiple events per day:
if the number of events is greater than the number of days for distribution;
if we have a max = 0 requirement.
If there are several suitable solutions, the best one is where the distance between each event and its neighbours is approximately the same. We aim for the distance between events to be DAYS_FOR_DISTRIBUTION / NUMBER_OF_EVENTS, where DAYS_FOR_DISTRIBUTION > NUMBER_OF_EVENTS.
If there are several suitable solutions with the same distances between events, then the best is the left-shifted solution.
(Examples of connected events were attached as images in the original post.)
Find R = all events that have no in-edges
LOOP while R contains one or more events
    SELECT N from R with the largest number of out-edges
    IF first time through
        Place N on day 1
    ELSE
        Place N in the middle of the largest gap between events
    LOOP M over descendants of N in required order
        Place M as far from other events as possible, within M's allowed range
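Here is a minimal Python sketch of that greedy outline (my own illustration, not the poster's code: the graph encoding {event: [(successor, min_gap, max_gap), ...]} is an assumption, unconstrained edges get a (0, days) window, and only the edge being relaxed bounds a placement, where a full version would intersect the windows of all placed predecessors):

def greedy_place(graph, days):
    """graph: {event: [(successor, min_gap, max_gap), ...]} (hypothetical encoding)."""
    indeg = {e: 0 for e in graph}
    for succs in graph.values():
        for s, _, _ in succs:
            indeg[s] += 1
    day = {}

    def dist_to_nearest(d):
        # Distance from candidate day d to the closest already placed event.
        return min((abs(d - x) for x in day.values()), default=days)

    pending = sorted((e for e in graph if indeg[e] == 0),
                     key=lambda e: -len(graph[e]))  # roots, most out-edges first
    first = True
    while pending:
        n = pending.pop(0)
        if n not in day:
            # First root on day 1; later roots go to the middle of the largest gap.
            day[n] = 1 if first else max(range(1, days + 1), key=dist_to_nearest)
            first = False
        for m, lo, hi in graph[n]:
            indeg[m] -= 1
            if indeg[m] == 0 and m not in day:
                # Day farthest from the other events, within this edge's window
                # (ties go to the earlier day, i.e. the left-shifted choice);
                # assumes the constraints leave the window non-empty.
                window = range(day[n] + lo, min(days, day[n] + hi) + 1)
                day[m] = max(window, key=dist_to_nearest)
                pending.append(m)
    return day

# Example (b): a -> b -> c, a -> c, with 1 <= b - a <= 2 and no other bounds.
print(greedy_place({'a': [('b', 1, 2), ('c', 0, 6)],
                    'b': [('c', 0, 6)], 'c': []}, days=7))
# -> {'a': 1, 'b': 3, 'c': 7}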
Method
Uses a constraint library to generate all event orderings that satisfy the event constraints, without regard to (un)evenness, then iteratively finds the constrained solution with minimal unevenness.
Evenness?
If unevenness is defined by looking at the differences between successive event days and returning the max minus the min difference, then a need for clarification with your answer of:
1 2 3 4 5 6 7
a b     c
is that it has the values off to the left w.r.t. the days.
A better answer might be:
1 2 3 4 5 6 7
  a b     c
With the one shift to the right, there is less of a stretch from day 7 to any event day.
If you think the above is equivalent, then you need to better define evenness - is it evenness over the extent of the event days perhaps?
STOP PRESS! Evenness has been edited by the author and is now better described.
Code
The source is set to run your example if you just hit return on each prompt.
# -*- coding: utf-8 -*-
"""
Even distribution of directed acyclic graph with respect to edge length
https://stackoverflow.com/questions/71532362/even-distribution-of-directed-acyclic-graph-with-respect-to-edge-length
Created on Sat Mar 26 08:53:19 2022
@author: paddy
"""
# https://pypi.org/project/python-constraint/
from constraint import Problem
from itertools import product
#%% Inputs
days = input("Days: int = ")
try:
    days = int(days.strip())
except ValueError:
    days = 7
print(f"Using {days=}")

events = input("events: space_separated = ")
events = events.strip().split()
if not events:
    events = list("abc")
print(f"Using {events=}")
constraint_funcs = []
while True:
    constr = input("constraint: string_expression (. to end) = ").strip()
    if not constr:
        constraint_funcs = ["1 <= (b - a) <= 2",
                            "(c - a) <= 4",
                            "(c - b) >= 3"]
        break
    if constr == '.':
        break
    constraint_funcs.append(constr)
print(f"\nUsing {constraint_funcs=}")
#%% Constraint Setup
print()
problem = Problem()
problem.addVariables(events, range(1, days + 1))
for constr in constraint_funcs:
    # Pull the event names out of the expression via its compiled code object.
    constr_events = sorted(set(events) &
                           set(compile(constr, '<input>', mode='eval').co_names))
    expr = f"lambda {', '.join(constr_events)}: {constr}"
    print(f" Add Constraint {expr!r}, On {constr_events}")
    func = eval(expr)
    problem.addConstraint(func, constr_events)
#%% Solution optimisation for "evenness"
print()

def unevenness(event_days: list[int], all_days: int):
    "(Max - min diff between ordered events, leftmost event)"
    maxdiff, mindiff = -1, all_days + 1
    for event, next_event in zip(event_days, event_days[1:]):
        diff = next_event - event
        maxdiff = max(maxdiff, diff)
        mindiff = min(mindiff, diff)
    return maxdiff - mindiff, event_days[0]

def printit(solution, all_days):
    drange = range(1, all_days + 1)
    print(' '.join(str(i)[0] for i in drange))  # single-character day labels
    for event, day in sorted(solution.items(), key=lambda kv: kv[::-1]):
        print(' ' * 2 * (day - 1) + event)      # two columns per day label
    print()

current_best = None, 9e99
for ans in problem.getSolutionIter():
    unev = unevenness(sorted(ans.values()), days)
    if current_best[0] is None or unev < current_best[1]:
        current_best = ans, unev
        print("Best so far:")
        printit(ans, days)
The unevenness function has been updated. A better function might minimise the standard deviation of the gaps between successive events, but for this example this works.
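A sketch of such a variant (mine, not part of the code above): minimise the population standard deviation of the successive gaps, keeping the same leftmost-day tie-break. It can be used as a drop-in replacement for unevenness:

from statistics import pstdev

def unevenness_sd(event_days: list[int], all_days: int):
    "(Std. deviation of gaps between successive events, leftmost event)"
    gaps = [nxt - prev for prev, nxt in zip(event_days, event_days[1:])]
    return (pstdev(gaps) if len(gaps) > 1 else 0.0, event_days[0])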
Output
Sample run using your constraints, but longer event names.
Days: int = 7
Using days=7
events: space_separated = arch belt card
Using events=['arch', 'belt', 'card']
constraint: string_expression (. to end) = 1 <= (belt - arch) <= 2
constraint: string_expression (. to end) = (card - arch) <= 4
constraint: string_expression (. to end) = (card - belt) >= 3
constraint: string_expression (. to end) = .
Using constraint_funcs=['1 <= (belt - arch) <= 2', '(card - arch) <= 4', '(card - belt) >= 3']
Add Constraint 'lambda arch, belt: 1 <= (belt - arch) <= 2', On ['arch', 'belt']
Add Constraint 'lambda arch, card: (card - arch) <= 4', On ['arch', 'card']
Add Constraint 'lambda belt, card: (card - belt) >= 3', On ['belt', 'card']
Best so far:
1 2 3 4 5 6 7
    arch
      belt
            card

Best so far:
1 2 3 4 5 6 7
  arch
    belt
          card

Best so far:
1 2 3 4 5 6 7
arch
  belt
        card
Note: the first letter of each event name aligns with its day column.
The second item in the tuple returned by the unevenness function is the day of the earliest (left-most) event. If the spreads of the events are equal, minimising this will tend to favour the solution further to the left.
Related
I've been staring at this problem for hours and I'm still as lost as I was at the beginning. It's been a while since I took discrete math or statistics, so I tried watching some videos on YouTube, but I couldn't find anything that would help me solve the problem in less than what seems to be exponential time. Any tips on how to approach the problem below would be very much appreciated!
A certain species of fern thrives in lush rainy regions, where it typically rains almost every day.
However, a drought is expected over the next n days, and a team of botanists is concerned about
the survival of the species through the drought. Specifically, the team is convinced of the following
hypothesis: the fern population will survive if and only if it rains on at least n/2 days during the
n-day drought. In other words, for the species to survive there must be at least as many rainy days
as non-rainy days.
Local weather experts predict that the probability that it rains on a day i ∈ {1, . . . , n} is
pi ∈ [0, 1], and that these n random events are independent. Assuming both the botanists and
weather experts are correct, show how to compute the probability that the ferns survive the drought.
Your algorithm should run in time O(n²).
Have an n×(n + 1) matrix such that C[i][j] denotes the probability that after the ith day there will have been j rainy days (i runs from 1 to n, j runs from 0 to n). Initialize:
C[1][0] = 1 - p[1]
C[1][1] = p[1]
C[1][j] = 0 for j > 1
Now loop over the days and set the values of the matrix like this:
C[i][0] = (1 - p[i]) * C[i-1][0]
C[i][j] = (1 - p[i]) * C[i-1][j] + p[i] * C[i - 1][j - 1] for j > 0
Finally, sum the values from C[n][n/2] to C[n][n] to get the probability of fern survival.
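A direct bottom-up translation of that table (my sketch: the function name is hypothetical, and days are 0-indexed so row i corresponds to day i+1):

def survival_probability(p):
    n = len(p)
    # C[i][j]: probability of exactly j rainy days among days 1..i+1.
    C = [[0.0] * (n + 1) for _ in range(n)]
    C[0][0], C[0][1] = 1 - p[0], p[0]
    for i in range(1, n):
        C[i][0] = (1 - p[i]) * C[i - 1][0]
        for j in range(1, n + 1):
            C[i][j] = (1 - p[i]) * C[i - 1][j] + p[i] * C[i - 1][j - 1]
    # Survival needs at least ceil(n/2) rainy days.
    return sum(C[n - 1][j] for j in range((n + 1) // 2, n + 1))

print(survival_probability([0.2, 0.4, 0.6, 0.8]))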
Dynamic programming problems can be solved in a top down or bottom up fashion.
You've already had the bottom up version described. To do the top-down version, write a recursive function, then add a caching layer so you don't recompute any results that you already computed. In pseudo-code:
cache = {}
function whatever(args)
if args not in cache
compute result
cache[args] = result
return cache[args]
This process is called "memoization" and many languages have ways of automatically memoizing things.
Here is a Python implementation of this specific example:
def prob_survival(daily_probabilities):
    days = len(daily_probabilities)
    days_needed = days / 2

    # An inner function to do the calculation.
    cached_odds = {}
    def prob_survival(day, rained):
        if days_needed <= rained:
            return 1.0
        elif days <= day:
            return 0.0
        elif (day, rained) not in cached_odds:
            p = daily_probabilities[day]
            p_a = p * prob_survival(day+1, rained+1)
            p_b = (1 - p) * prob_survival(day+1, rained)
            cached_odds[(day, rained)] = p_a + p_b
        return cached_odds[(day, rained)]

    return prob_survival(0, 0)
And then you would call it as follows:
print(prob_survival([0.2, 0.4, 0.6, 0.8]))
So I have a brain teaser I read on one of the algorithm and puzzle meetups we have on our uni that goes like this:
There's a school that awards students that, during a given period, are
never late more than once and who don't ever happen to be absent for
three or more consecutive days. How many possible permutations with repetitions of
presence (or lack thereof) can we build for a given timeframe that
grant the student an award? Assume that each day is just a state
On-time, Late or Absent for the whole day, don't worry about specific
classes. Example: for three day timeframes, we can create 19 such
permutations with repetitions that grant an award.
I've already posted it on math.SE yesterday because I was interested in whether there was some ready-baked formula we could derive to solve it, but it turns out there isn't, and all the transformations really are rather complex.
Thus, I'm asking here - how would you approach such a problem with an algorithm? I tried narrowing down the possibility space, but after a while taking all the possible permutations with repetitions became far too much, and the algorithm started becoming really complex, while I believe there should be some easy-to-implement way to solve it, especially since most of the puzzles we exchange at the meetup are rather like that.
Here is a simplified version of Python 3 code implementing the recursion in the answer by @ProgrammerPerson:
from functools import lru_cache

def count_variants(max_late, base_absent, period_length):
    """
    max_late – maximum allowed number of days the student can be late;
    base_absent – the number of consecutive days the student can be absent;
    period_length – days in a period."""
    @lru_cache(max_late * base_absent * period_length)
    def count(late, absent, days):
        if late < 0: return 0
        if absent < 0: return 0
        if days == 0: return 1
        return (count(late, base_absent, days-1) +    # Student is on time. Absent reset.
                count(late-1, base_absent, days-1) +  # Student is late. Absent reset.
                count(late, absent-1, days-1))        # Student is absent.
    return count(max_late, base_absent, period_length)
Run example:
In [2]: count_variants(1, 2, 3)
Out[2]: 19
This screams recursion (and/or dynamic programming)!
Suppose we try and solve a slightly general problem:
We give an award if a student is late no more than L times, and isn't
absent for A or more consecutive days.
Now we want to compute the number of possibilities for an n days time frame.
Call this method P(L, A, n)
Now try to build up a recursion based on three cases for the first day of the period.
1) If the student is on-time for the first day, then the number is simply
P(L, A, n-1)
2) If the student is late the first day, then the number is
P(L-1, A, n-1)
3) If the student is absent the first day, then the number is
P(L, A-1, n-1)
This gives us the recursion:
P(L, A, n) = P(L, A, n-1) + P(L-1, A, n-1) + P(L, A-1, n-1)
(Note that after an on-time or late day a fresh run of absences is allowed again, which is why the Python version above passes base_absent back in for the first two cases.)
You can either memoize the recursion, or just have tables that you look up.
Be careful about the base cases which are
P(0, *, *), P(*, 0, *) and P(*, *, 0) and can be computed by easy mathematical formulae.
Here is quick Python code, with memoization + recursion, to demonstrate:
import math

def binom(n, r):
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

# The memoization table.
table = {}

def P(L, A, n):
    if L == 0:
        # Only on-time or absent.
        if A > n:
            # The period is shorter than a disqualifying run of absences.
            return 2**n
        # 2^n total possibilities; of those, n - A + 1 are non-rewarding.
        return 2**n - (n - A + 1)
    if A == 0:
        # Only late or on-time; need fewer than L+1 lates.
        # This is n choose 0 + n choose 1 + ...
        total = 0
        for l in range(0, min(L, n)):
            total += binom(n, l)
        return total
    if n == 0:
        return 1
    if (L, A, n) in table:
        return table[(L, A, n)]
    result = P(L, A, n-1) + P(L-1, A, n-1) + P(L, A-1, n-1)
    table[(L, A, n)] = result
    return result

print(P(1, 3, 3))
Output is 19.
Let S(n) be the number of strings of length n without 3 repeated 1s.
Any such string (with length at least 3) ends in "0", "01" or "011" (and after removing the suffix, any string without three consecutive 1s can appear).
Then for n > 2, S(n) = S(n-1) + S(n-2) + S(n-3), and S(0)=1, S(1)=2, S(2)=4.
If you have a late day on day i (counting from 0), then you have S(i) ways of arranging absent days before, and S(n-i-1) ways of arranging absent days after.
Thus, the solution to the original problem is S(n) + sum(S(i)*S(n-i-1) | i = 0...n-1)
We can compute solutions iteratively like this:
def ways(n):
    S = [1, 2, 4] + [0] * (n - 2)
    for i in range(3, n + 1):
        S[i] = S[i-1] + S[i-2] + S[i-3]
    return S[n] + sum(S[i] * S[n-i-1] for i in range(n))

for i in range(1, 20):
    print(i, ways(i))
Output:
1 3
2 8
3 19
4 43
5 94
6 200
7 418
8 861
9 1753
10 3536
11 7077
12 14071
13 27820
14 54736
15 107236
16 209305
17 407167
18 789720
19 1527607
I recently encountered a much more difficult variation of this problem, but realized I couldn't generate a solution for this very simple case. I searched Stack Overflow but couldn't find a resource that previously answered this.
You are given a triangle ABC, and you must compute the number of paths of a certain length that start and end at 'A'. Say our function f(3) is called; it must return the number of paths of length 3 that start and end at A: 2 (ABA, ACA).
I'm having trouble formulating an elegant solution. Right now, I've written a solution that generates all possible paths, but for larger lengths, the program is just too slow. I know there must be a nice dynamic programming solution that reuses sequences that we've previously computed but I can't quite figure it out. All help greatly appreciated.
My dumb code:
def paths(n, sequence):
    t = ['A', 'B', 'C']
    if len(sequence) < n:
        for node in set(t) - set(sequence[-1]):
            paths(n, sequence + node)
    else:
        if sequence[0] == 'A' and sequence[-1] == 'A':
            print(sequence)

paths(3, 'A')  # prints ABA and ACA
Let PA(n) be the number of paths from A back to A in exactly n steps.
Let P!A(n) be the number of paths from B (or C) to A in exactly n steps.
Then:
PA(1) = 1
PA(n) = 2 * P!A(n - 1)
P!A(1) = 0
P!A(2) = 1
P!A(n) = P!A(n - 1) + PA(n - 1)
= P!A(n - 1) + 2 * P!A(n - 2) (for n > 2) (substituting for PA(n-1))
We can solve the difference equations for P!A analytically, as we do for Fibonacci, by noting that (-1)^n and 2^n are both solutions of the difference equation, and then finding coefficients a, b such that P!A(n) = a*2^n + b*(-1)^n.
We end up with the equation P!A(n) = 2^n/6 + (-1)^n/3, and PA(n) being 2^(n-1)/3 - 2(-1)^n/3.
This gives us code:
def PA(n):
return (pow(2, n-1) + 2*pow(-1, n-1)) / 3
for n in xrange(1, 30):
print n, PA(n)
Which gives output:
1 1
2 0
3 2
4 2
5 6
6 10
7 22
8 42
9 86
10 170
11 342
12 682
13 1366
14 2730
15 5462
16 10922
17 21846
18 43690
19 87382
20 174762
21 349526
22 699050
23 1398102
24 2796202
25 5592406
26 11184810
27 22369622
28 44739242
29 89478486
The trick is not to try to generate all possible sequences. The number of them increases exponentially so the memory required would be too great.
Instead, let f(n) be the number of sequences of length n beginning and ending with A, and let g(n) be the number of sequences of length n beginning with A but ending with B. To get things started, clearly f(1) = 1 and g(1) = 0. For n > 1 we have f(n) = 2g(n - 1), because the penultimate letter will be B or C and there are equal numbers of each. We also have g(n) = f(n - 1) + g(n - 1), because if a sequence begins with A and ends with B, the penultimate letter is either A or C.
These rules allow you to compute the numbers really quickly using memoization.
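A minimal memoized sketch of those two rules (mine; f and g named as in the text):

from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):  # sequences of length n beginning and ending with A
    return 1 if n == 1 else 2 * g(n - 1)

@lru_cache(maxsize=None)
def g(n):  # sequences of length n beginning with A, ending with B
    return 0 if n == 1 else f(n - 1) + g(n - 1)

print(f(3))  # 2 (ABA, ACA)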
My method is like this:
Define DP(l, end) = # of paths that end at node end and have length l.
Then DP(l, 'A') = DP(l-1, 'B') + DP(l-1, 'C'), and similarly for DP(l, 'B') and DP(l, 'C').
For the base case, i.e. l = 1, I check whether end is 'A': if not, return 0, otherwise return 1, so that all bigger states only count paths that start at 'A'.
The answer is simply DP(n, 'A'), where n is the length.
Below is sample code in C++. You can call it with 3, which gives 2 as the answer, or with 5, which gives 6:
ABCBA, ACBCA, ABABA, ACACA, ABACA, ACABA
#include <bits/stdc++.h>
using namespace std;

int dp[500][500], n;

int DP(int l, int end){
    if(l <= 0) return 0;
    if(l == 1){
        if(end != 'A') return 0;
        return 1;
    }
    if(dp[l][end] != -1) return dp[l][end];
    if(end == 'A') return dp[l][end] = DP(l-1, 'B') + DP(l-1, 'C');
    else if(end == 'B') return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'C');
    else return dp[l][end] = DP(l-1, 'A') + DP(l-1, 'B');
}

int main() {
    memset(dp, -1, sizeof(dp));
    scanf("%d", &n);
    printf("%d\n", DP(n, 'A'));
    return 0;
}
EDITED
To answer OP's comment below:
Firstly, DP (dynamic programming) is always about states.
Remember here our state is DP(l, end), representing the # of paths having length l and ending at end. To implement states in a program we usually use an array, so dp[500][500] is nothing special, just the space to store the states DP(l, end) for all possible l and end (that's why I said if you need a bigger length, change the size of the array).
But then you may ask: I understand the first dimension, which is for l; 500 means l can be as large as 500; but what about the second dimension? I only need 'A', 'B', 'C', so why 500?
Here is another trick (of C/C++): the char type can indeed be used as an int type by default, with value equal to its ASCII code. I do not remember the whole ASCII table of course, but I know that around 300 is enough to cover all the ASCII characters, including A (65), B (66), C (67).
So I just declared the second dimension large enough to cover 'A', 'B', 'C' (actually 100 would be more than enough, but I just declared 500 as they are almost the same, in terms of order).
So you asked what dp[3][1] means: it means nothing, as I never need or calculate the second dimension when it is 1 (one can think of the state dp(3, 1) as having no physical meaning in our problem).
In fact, I am always using 65, 66, 67.
So dp[3][65] means the # of paths of length 3 ending at char(65) = 'A'.
You can do better than the dynamic programming/recursion solution others have posted, for the given triangle and more general graphs. Whenever you are trying to compute the number of walks in a (possibly directed) graph, you can express this in terms of the entries of powers of a transfer matrix. Let M be a matrix whose entry m[i][j] is the number of paths of length 1 from vertex i to vertex j. For a triangle, the transfer matrix is
0 1 1
1 0 1
1 1 0
Then M^n is a matrix whose i,j entry is the number of paths of length n from vertex i to vertex j. If A corresponds to vertex 1, you want the 1,1 entry of M^n.
Dynamic programming and recursion for the counts of paths of length n in terms of the paths of length n-1 are equivalent to computing M^n with n multiplications, M * M * M * ... * M, which can be fast enough. However, if you want to compute M^100, instead of doing 100 multiplies, you can use repeated squaring: Compute M, M^2, M^4, M^8, M^16, M^32, M^64, and then M^64 * M^32 * M^4. For larger exponents, the number of multiplies is about c log_2(exponent).
Instead of using that a path of length n is made up of a path of length n-1 and then a step of length 1, this uses that a path of length n is made up of a path of length k and then a path of length n-k.
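For example, a small Python sketch of the transfer-matrix approach with repeated squaring (plain nested lists; the helper names are mine):

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(M, n):
    # Repeated squaring: about log2(n) multiplications.
    R = [[int(i == j) for j in range(len(M))] for i in range(len(M))]
    while n:
        if n & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        n >>= 1
    return R

M = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # transfer matrix of the triangle
print(mat_pow(M, 2)[0][0])  # 1,1 entry of M^2: walks of two steps A->A (ABA, ACA) = 2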
We can solve this with a for loop, although Anonymous described a closed form for it.
function f(n){
    var as = 0, abcs = 1;
    for (n = n - 3; n > 0; n--){
        as = abcs - as;
        abcs *= 2;
    }
    return 2 * (abcs - as);
}
Here's why:
Look at one strand of the decision tree (the other one is symmetrical):

A
B            (the C strand is symmetrical)
A C
B C A B
A C A B B C A C
B C A B B C A C A C A B B C A B

Num A's      Num ABC's (starting with first B on the left)
0            1
1 (1-0)      2
1 (2-1)      4
3 (4-1)      8
5 (8-3)      16
11 (16-5)    32
Clearly, we can't use the strands that end with the A's...
You can write a recursive brute force solution and then memoize it (aka top down dynamic programming). Recursive solutions are more intuitive and easy to come up with. Here is my version:
from functools import cache

# Search space: a triangle with nodes A, B, C. Track the position as a step
# count around the triangle, so we are back at "A" whenever steps % 3 == 0.
n = 3

@cache  # memoize!
def recurse(length, steps):
    # If the length of the path is n and the last node is "A", then it's
    # a valid path and we can count it.
    if length == n and steps % 3 == 0:
        return 1
    # We don't want paths having len > n.
    if length > n:
        return 0
    # From each position, we have two possibilities: either go to the next
    # node or the previous node. Total paths will be the sum of both the
    # possibilities. We do this recursively.
    return recurse(length + 1, steps + 1) + recurse(length + 1, steps - 1)

print(recurse(1, 0))  # 2 -> ABA, ACA
Given: a set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
Asked: find the sum of the least common multiples (LCM) of all subsets of A of size at least 2.
The LCM of a set B = {b0, b1, ..., bk-1} is defined as the minimum positive integer Bmin such that bi | Bmin for all 0 ≤ i < k.
Example:
Let N = 3 and A = {2, 6, 7}, then:
LCM({2, 6}) = 6
LCM({2, 7}) = 14
LCM({6, 7}) = 42
LCM({2, 6, 7}) = 42
----------------------- +
answer 104
The naive approach would be to simply calculate the LCM for all O(2^N) subsets, which is not feasible for reasonably large N.
Solution sketch:
The problem is obtained from a competition*, which also provided a solution sketch. This is where my problem comes in: I do not understand the hinted approach.
The solution reads (modulo some small fixed grammar issues):
The solution is a bit tricky. If we observe carefully, we see that the integers are between 2 and 500. So, if we prime factorize the numbers, we get the following maximum powers:

prime   max power
2       8
3       5
5       3
7       3
11      2
13      2
17      2
19      2

Other than these, all primes have power 1. So we can easily calculate all possible states using these integers, leaving 9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 states, which is nearly 70000. For the other integers we can make a dp like the following: dp[70000][i], where i can be 0 to 100. However, as dp[i] depends only on dp[i-1], dp[70000][2] is enough. This leaves the complexity at n * 70000, which is feasible.
I have the following concrete questions:
What is meant by these states?
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
How is dp[i] computed from dp[i-1]?
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
*The original problem description can be found from this source (problem F). This question is a simplified version of that description.
Discussion
After reading the actual contest description (page 10 or 11) and the solution sketch, I have to conclude the author of the solution sketch is quite imprecise in their writing.
The high-level problem is to calculate an expected lifetime if components are chosen randomly by fair coin toss. This is what leads to computing the LCM of all subsets -- all subsets effectively represent the sample space. You could end up with any possible set of components. The failure time for the device is based on the LCM of the set. The expected lifetime is therefore the average of the LCMs of all sets.
Note that this ought to include the LCM of sets with only one item (in which case we'd take the LCM to be the element itself). The solution sketch seems to skip over these, perhaps because they handled them in a less elegant manner.
What is meant by these states?
The sketch author only uses the word state twice, but apparently manages to switch meanings. In the first use of the word state it appears they're talking about a possible selection of components. In the second use they're likely talking about possible failure times. They could be muddling this terminology because their dynamic programming solution initializes values from one use of the word and the recurrence relation stems from the other.
Does dp stand for dynamic programming?
I would say either it does or it's a coincidence as the solution sketch seems to heavily imply dynamic programming.
If so, what recurrence relation is being solved? How is dp[i] computed from dp[i-1]?
All I can think is that in their solution, state i represents a time to failure, T(i), with dp[i] the number of times this time to failure has been counted. The resulting answer would be the sum over i of dp[i] * T(i).
dp[i][0] would then be the failure times counted for only the first component, dp[i][1] the failure times counted for the first and second components, dp[i][2] for the first, second, and third, etc.
Initialize dp[i][0] with zeroes, except for dp[T(c)][0] (where c is the first component considered), which should be 1, since this component's failure time has been counted once so far.
To populate dp[i][n] from dp[i][n-1] for each component c:
For each i, copy dp[i][n-1] into dp[i][n].
Add 1 to dp[T(c)][n].
For each i, add dp[i][n-1] to dp[LCM(T(i), T(c))][n].
What is this doing? Suppose you knew that you had a time to failure of j, but you added a component with a time to failure of k. Regardless of what components you had before, your new time to failure is LCM(j, k). This follows from the fact that for two sets A and B, LCM(A ∪ B) = LCM(LCM(A), LCM(B)).
Similarly, if we're considering a time to failure of T(i) and our new component's time to failure of T(c), the resultant time to failure is LCM(T(i), T(c)). Note that we recorded this time to failure for dp[i][n-1] configurations, so we should record that many new times to failure once the new component is introduced.
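For concreteness, here is a small Python sketch of that update, keyed directly by failure time (a dict in place of the ~70000-state encoding; the function name is mine, math.lcm needs Python 3.9+, and singletons are dropped at the end to match the size >= 2 requirement):

from math import lcm

def sum_of_subset_lcms(A):
    counts = {}                        # LCM value -> number of subsets with it
    for a in A:
        new = dict(counts)             # subsets that skip component a
        new[a] = new.get(a, 0) + 1     # the singleton {a}
        for v, c in counts.items():    # extend every earlier subset with a
            w = lcm(v, a)
            new[w] = new.get(w, 0) + c
        counts = new
    # Every nonempty subset is counted once; drop singletons for size >= 2.
    return sum(v * c for v, c in counts.items()) - sum(A)

print(sum_of_subset_lcms([2, 6, 7]))   # 104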
Why do the big primes not contribute to the number of states?
Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
You're right, of course. However, the solution sketch states that numbers with large primes are handled in another (unspecified) fashion.
What would happen if we did include them? The number of states we would need to represent would explode into an impractical number, hence the author accounts for such numbers differently. Note that if a number less than or equal to 500 includes a prime factor larger than 19, the remaining factors multiply to 21 or less. This makes such numbers amenable to brute force, no tables necessary.
The first part of the editorial seems useful, but the second part is rather vague (and perhaps unhelpful; I'd rather finish this answer than figure it out).
Let's suppose for the moment that the input consists of pairwise distinct primes, e.g., 2, 3, 5, and 7. Then the answer (for summing all sets, where the LCM of 0 integers is 1) is
(1 + 2) (1 + 3) (1 + 5) (1 + 7),
because the LCM of a subset is exactly equal to the product here, so just multiply it out.
Let's relax the restriction that the primes be pairwise distinct. If we have an input like 2, 2, 3, 3, 3, and 5, then the multiplication looks like
(1 + (2^2 - 1) 2) (1 + (2^3 - 1) 3) (1 + (2^1 - 1) 5),
because 2 appears with multiplicity 2, and 3 appears with multiplicity 3, and 5 appears with multiplicity 1. With respect to, e.g., just the set of 3s, there are 2^3 - 1 ways to choose a subset that includes a 3, and 1 way to choose the empty set.
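This is easy to check by brute force for small inputs (my own sketch; math.lcm needs Python 3.9+):

from math import lcm
from itertools import chain, combinations

def sum_lcm_all_subsets(xs):
    subsets = chain.from_iterable(combinations(xs, k)
                                  for k in range(len(xs) + 1))
    return sum(lcm(*s) if s else 1 for s in subsets)  # LCM of {} taken as 1

assert sum_lcm_all_subsets([2, 2, 3, 3, 3, 5]) == \
       (1 + (2**2 - 1) * 2) * (1 + (2**3 - 1) * 3) * (1 + (2**1 - 1) * 5)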
Call a prime small if it's 19 or less and large otherwise. Note that integers 500 or less are divisible by at most one large prime (with multiplicity). The small primes are more problematic. What we're going to do is to compute, for each possible small portion of the prime factorization of the LCM (i.e., one of the ~70,000 states), the sum of LCMs for the problem derived by discarding the integers that could not divide such an LCM and leaving only the large prime factor (or 1) for the other integers.
For example, if the input is 2, 30, 41, 46, and 51, and the state is 2, then we retain 2 as 1, discard 30 (= 2 * 3 * 5; 3 and 5 are small), retain 41 as 41 (41 is large), retain 46 as 23 (= 2 * 23; 23 is large), and discard 51 (= 3 * 17; 3 and 17 are small). Now, we compute the sum of LCMs using the previously described technique. Use inclusion-exclusion to get rid of the subsets whose LCM's small portion properly divides the state instead of being exactly equal. Maybe I'll work a complete example later.
What is meant by these states?
I think here, states refer to whether a number appears in the set B = {b0, b1, ..., bk-1} of LCMs of subsets of A.
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
dp in the solution sketch stands for dynamic programming, I believe.
How is dp[i] computed from dp[i-1]?
We can figure out the states of the next group of LCMs from the previous states, so we only need an array of 2 and can toggle back and forth.
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
We can use Prime Factorization and exponents only to present the number.
Here is one example.
6 = (2^1)(3^1)(5^0) -> state "1 1 0" to represent 6
18 = (2^1)(3^2)(5^0) -> state "1 2 0" to represent 18
Here is how we can get LMC of 6 and 18 using Prime Factorization
LCM(6, 18) = (2^max(1,1))(3^max(1,2))(5^max(0,0)) = (2^1)(3^2)(5^0) = 18
2^9 > 500, 3^6 > 500, 5^4 > 500, 7^4>500, 11^3 > 500, 13^3 > 500, 17^3 > 500, 19^3 > 500
we can use just the exponents of the primes 2, 3, 5, 7, 11, 13, 17, 19 to represent the LCMs in the set B = {b0, b1, ..., bk-1}
for the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 <= 70000, so we only need two copies of dp[9][6][4][4][3][3][3][3] to keep track of all the LCM states. So dp[70000][2] is enough.
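For illustration, here is a sketch of that exponent encoding in Python (the helper names are mine):

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def state_of(x):
    """Small-prime exponent tuple of x (any larger factor is ignored)."""
    exps = []
    for p in SMALL_PRIMES:
        e = 0
        while x % p == 0:
            x //= p
            e += 1
        exps.append(e)
    return tuple(exps)

def lcm_state(s, t):
    """LCM acts coordinate-wise as max on exponent tuples."""
    return tuple(max(a, b) for a, b in zip(s, t))

print(lcm_state(state_of(6), state_of(18)) == state_of(18))  # True: LCM(6,18) = 18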
I put together a small C++ program to illustrate how we can get the sum of LCMs of the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500. As in the solution sketch, we need to loop through at most ~70000 possible LCMs.
#include <iostream>
#include <cstring>
using namespace std;

int gcd(int a, int b) {
    int remainder = 0;
    do {
        remainder = a % b;
        a = b;
        b = remainder;
    } while (b != 0);
    return a;
}

int lcm(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return (a * b) / gcd(a, b);
}

int sum_of_lcm(int A[], int N) {
    // Get the max LCM of the whole array; every subset LCM divides it.
    int max = A[0];
    for (int i = 1; i < N; i++) {
        max = lcm(max, A[i]);
    }
    max++;

    int dp[max][2];   // variable-length array: relies on a GCC/Clang extension
    memset(dp, 0, sizeof(dp));
    int pri = 0;
    int cur = 1;

    // loop through N x (number of possible LCM values)
    for (int i = 0; i < N; i++) {
        for (int v = 1; v < max; v++) {
            int x = A[i];
            if (dp[v][pri] > 0) {
                x = lcm(A[i], v);
                dp[v][cur] = (dp[v][cur] == 0) ? dp[v][pri] : dp[v][cur];
                if (x % A[i] != 0) {
                    dp[x][cur] += dp[v][pri] + dp[A[i]][pri];
                } else {
                    dp[x][cur] += (x == v) ? (dp[v][pri] + dp[v][pri]) : (dp[v][pri]);
                }
            }
        }
        dp[A[i]][cur]++;
        pri = cur;
        cur = (pri + 1) % 2;
    }

    // Subtract the singleton subsets: the problem wants subsets of size >= 2.
    for (int i = 0; i < N; i++) {
        dp[A[i]][pri] -= 1;
    }

    long total = 0;
    for (int j = 0; j < max; j++) {
        if (dp[j][pri] > 0) {
            total += dp[j][pri] * j;
        }
    }
    cout << "total:" << total << endl;
    return total;
}

int main() {
    int a[] = {2, 6, 7};
    int n = sizeof(a) / sizeof(a[0]);
    sum_of_lcm(a, n);
    return 0;
}
Output
total:104
The states are one more than the maximum powers of the primes. You have numbers up to 2^8 (since 2^8 ≤ 500 < 2^9), so the power of 2 is in [0..8], which is 9 states. Similarly for the other primes.
"dp" could well stand for dynamic programming, I'm not sure.
The recurrence relation is the heart of the problem, so you will learn more by solving it yourself. Start with some small, simple examples.
For the large primes, try solving a reduced problem without using them (or their equivalents) and then add them back in to see their effect on the final result.
This problem is taken from interviewstreet.com
Given array of integers Y=y1,...,yn, we have n line segments such that
endpoints of segment i are (i, 0) and (i, yi). Imagine that from the
top of each segment a horizontal ray is shot to the left, and this ray
stops when it touches another segment or it hits the y-axis. We
construct an array of n integers, v1, ..., vn, where vi is equal to
length of ray shot from the top of segment i. We define V(y1, ..., yn)
= v1 + ... + vn.
For example, if we have Y=[3,2,5,3,3,4,1,2], then v1, ..., v8 =
[1,1,3,1,1,3,1,2], as shown in the picture accompanying the original problem.
For each permutation p of [1,...,n], we can calculate V(yp1, ...,
ypn). If we choose a uniformly random permutation p of [1,...,n], what
is the expected value of V(yp1, ..., ypn)?
Input Format
First line of input contains a single integer T (1 <= T <= 100). T
test cases follow.
First line of each test-case is a single integer N (1 <= N <= 50).
Next line contains positive integer numbers y1, ..., yN separated by a
single space (0 < yi <= 1000).
Output Format
For each test-case output expected value of V(yp1, ..., ypn), rounded
to two digits after the decimal point.
Sample Input
6
3
1 2 3
3
3 3 3
3
2 2 3
4
10 2 4 4
5
10 10 10 5 10
6
1 2 3 4 5 6
Sample Output
4.33
3.00
4.00
6.00
5.80
11.15
Explanation
Case 1: We have V(1,2,3) = 1+2+3 = 6, V(1,3,2) = 1+2+1 = 4, V(2,1,3) =
1+1+3 = 5, V(2,3,1) = 1+2+1 = 4, V(3,1,2) = 1+1+2 = 4, V(3,2,1) =
1+1+1 = 3. Average of these values is 4.33.
Case 2: No matter what the permutation is, V(yp1, yp2, yp3) = 1+1+1 =
3, so the answer is 3.00.
Case 3: V(y1, y2, y3) = V(y2, y1, y3) = 5, V(y1, y3, y2) = V(y2, y3, y1) = 4,
V(y3, y1, y2) = V(y3, y2, y1) = 3, and the average of these values is
4.00.
A naive solution to the problem will run forever for N=50. I believe that the problem can be solved by independently calculating a value for each stick. I still need to know if there is any other efficient approach for this problem. On what basis can we independently calculate the value for each stick?
We can solve this problem by figuring out:
if the k-th stick is put in the i-th position, what is the expected ray length of this stick?
Then the problem can be solved by adding up the expected lengths for all sticks in all positions.
Let expected[k][i] be the expected ray length of the k-th stick put in the i-th position, and let num[k][i][length] be the number of permutations with the k-th stick in the i-th position and ray length equal to length. Then
expected[k][i] = sum( num[k][i][length] * length ) / N!
How to compute num[k][i][length]? For example, for length=3, consider the following picture:
...GxxxI...
where I is the position, the 3 'x's are sticks strictly lower than I, and G is a stick at least as high as I.
Let s_i be the number of sticks that are strictly lower than the k-th stick, and g_i the number of sticks that are greater than or equal to it (excluding the k-th stick itself). We can choose any one of the g_i sticks for the G position, and any length of the s_i lower sticks, in order, for the x positions, so we have:
num[k][i][length] = P(s_i, length) * g_i * (n - length - 2)!
In the case where all the positions before I hold sticks smaller than I, we don't need a greater stick at G, i.e. xxxI...., and we have:
num[k][i][length] = P(s_i, length) * (n - length - 1)!
And here's a piece of Python code that can solve this problem:
from math import comb, factorial

def solve(n, ys):
    ret = 0
    for y_i in ys:
        s_i = sum(1 for x in ys if x < y_i)
        g_i = sum(1 for x in ys if x >= y_i) - 1   # exclude the stick itself
        for i in range(n):
            for length in range(1, i + 1):
                if length == i:
                    # All sticks before this position are lower: ray hits the y-axis.
                    t_ret = comb(s_i, length) * factorial(length) * factorial(n - length - 1)
                else:
                    # A blocking stick, then `length` lower sticks, then this stick.
                    t_ret = comb(s_i, length) * factorial(length) * g_i * factorial(n - length - 2)
                ret += t_ret * length
    return ret * 1.0 / factorial(n) + n

print(solve(3, [1, 2, 3]))  # 4.333...
This is the same question as https://cs.stackexchange.com/questions/1076/how-to-approach-vertical-sticks-challenge and my answer there (which is a little simpler than those given earlier here) was:
Imagine a different problem: if you had to place k sticks of equal heights in n slots then the expected distance between sticks (and the expected distance between the first stick and a notional slot 0, and the expected distance between the last stick and a notional slot n+1) is (n+1)/(k+1) since there are k+1 gaps to fit in a length n+1.
Returning to this problem, a particular stick is interested in how many sticks (including itself) are as high or higher. If this number is k, then the expected gap before it is also (n+1)/(k+1).
So the algorithm is simply to find this value for each stick and add up the expectation. For example, starting with heights of 3,2,5,3,3,4,1,2, the number of sticks with a greater or equal height is 5,7,1,5,5,2,8,7 so the expectation is 9/6+9/8+9/2+9/6+9/6+9/3+9/9+9/8 = 15.25.
This is easy to program: for example a single line in R
V <- function(Y){(length(Y) + 1) * sum(1 / (rowSums(outer(Y, Y, "<=")) + 1) )}
gives the values in the sample output in the original problem
> V(c(1,2,3))
[1] 4.333333
> V(c(3,3,3))
[1] 3
> V(c(2,2,3))
[1] 4
> V(c(10,2,4,4))
[1] 6
> V(c(10,10,10,5,10))
[1] 5.8
> V(c(1,2,3,4,5,6))
[1] 11.15
As you correctly noted, we can solve the problem independently for each stick.
Let F(i, len) be the number of permutations in which the ray from stick i is exactly len.
Then the answer is
(Sum over i and len of F(i, len) * len) / n!
All that is left is to count F(i, len). Let a(i) be the number of sticks j with y_j < y_i, and b(i) the number of sticks with y_j >= y_i (excluding stick i itself).
In order to get a ray of length len, we need a situation like this:
B, l...l, O
   (len-1 times)
where O is stick #i, B is a stick at least as high as it (or the beginning of the row), and each l is a stick lower than the i-th.
This gives us 2 cases:
1) B is the beginning: this can be achieved in P(a(i), len-1) * (b(i)+a(i)-(len-1))! ways.
2) B is a higher stick: this can be achieved in P(a(i), len-1) * b(i) * (b(i)+a(i)-len)! * (n-len) ways.
Edit: corrected b(i) as the second factor in case 2 (in place of a(i)).
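As a quick sanity check of these two formulas, here is a small brute-force comparison (my own sketch, not part of the answer: math.perm needs Python 3.8+, and it uses the convention, confirmed by sample case 3, that a stick of equal height blocks the ray):

from itertools import permutations
from math import perm, factorial

def V(y):
    total = 0
    for j in range(len(y)):
        k = j - 1
        while k >= 0 and y[k] < y[j]:   # ray passes strictly lower sticks
            k -= 1
        total += j - k                  # stops at stick k, or the wall (k = -1)
    return total

def expected_by_formula(y):
    n, total = len(y), 0
    for yi in y:
        a = sum(1 for x in y if x < yi)       # a(i): strictly lower sticks
        b = n - 1 - a                         # b(i): blockers, excluding i itself
        for L in range(1, n + 1):
            cnt = perm(a, L - 1) * factorial(n - L)               # case 1: wall
            if L < n:                                             # case 2: B blocks
                cnt += perm(a, L - 1) * b * (n - L) * factorial(n - 1 - L)
            total += cnt * L
    return total / factorial(n)

y = [3, 2, 5, 3]
brute = sum(V(list(p)) for p in permutations(y)) / factorial(len(y))
print(brute, expected_by_formula(y))    # the two values agree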