Mathematica: "15 digits of Sqrt[x] yield 42+ million digits of x" - wolfram-mathematica

I did this in Mathematica to compute Sqrt[5]:
a[0] = 2
a[n_] := a[n] = a[n-1] - (a[n-1]^2-5)/2/a[n-1]
How close is a[25] to Sqrt[5]?
N[Sqrt[5]-a[25]] // FortranForm
4.440892098500626e-16
And how close is a[25]^2 to 5?
N[a[25]^2-5] // FortranForm
8.305767102358763e-42074769
This seems odd to me. My estimate: if x is within 10^-n of Sqrt[5],
then x^2 is within 10^(-2*n) of 5, give or take. No? In fact:
a[25]^2 = (Sqrt[5]-4.440892098500626e-16)^2 ~ 5 - 2*5*4.440892098500626e-16
(expanding (a-b)^2), so the accuracy should be only about 14 digits
(or n digits in general).
Of course, Newton's method yielding only 15 accurate digits in 25
iterations also seems odd.
Am I losing precision too early in the calculations above? Note that:
N[Log[Sqrt[5]-a[25]]] // FortranForm
-35.35050620855721
agrees w/ the 15 digit precision above, even though I do N[] after
taking the Log (so it should be accurate).

The problem is how Mma is calculating your sequence.
The a[n] are Rational numbers. Let's look at the order of magnitude of the numerators, on a log-log scale:
a[0] = 2
a[n_] := a[n] = a[n-1] - (a[n-1]^2-5)/2/a[n-1]
ListPlot@Table[Log[10, Log[10, Numerator[a[i]]]], {i, 1, 25}]
So your numerators are growing double-exponentially.
The 10^-16 precision is achieved well before a[25]:
For[i = 1, i < 5, i++,
Print["dif[", i, "]= ", N[a[i] - Sqrt[5], 16]]
]
dif[1]= 0.01393202250021030
dif[2]= 0.00004313361132141470
dif[3]= 4.160143063513508*10^-10
dif[4]= 3.869915959583412*10^-20
After that, you have to start controlling the precision of the division, as the numerator of a[5] already has 20 digits.
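To see the same double-exponential growth outside Mathematica, here is a small Python sketch (my own illustration, not part of the original answer) that runs the identical Newton iteration on exact rationals and prints the digit count of the numerator:
from fractions import Fraction

a = Fraction(2)                        # a[0] = 2
for i in range(1, 8):
    a = a - (a*a - 5) / (2*a)          # one exact Newton step for Sqrt[5]
    print(i, len(str(a.numerator)))    # the digit count roughly doubles each step
Since the error roughly squares at each step (quadratic convergence), dif[4] above is already good to about 20 digits and the exact rational a[25] is good to tens of millions of digits; the 15-digit figure in the question is what you get when N[] evaluates the difference at machine precision.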

Ruby's digits method performance

I'm solving some Project Euler problems using Ruby, and specifically here I'm talking about problem 25 (What is the index of the first term in the Fibonacci sequence to contain 1000 digits?).
At first, I was using Ruby 2.2.3 and I coded the problem as such:
number = 3
a = 1
b = 2
while b.to_s.length < 1000
a, b = b, a + b
number += 1
end
puts number
But then I found out that version 2.4.2 has a method called digits which is exactly what I needed. I transformed the code to:
while b.digits.length < 1000
And when I compared the two methods, digits was much slower.
Time
./025/problem025.rb 0.13s user 0.02s system 80% cpu 0.190 total
./025/problem025.rb 2.19s user 0.03s system 97% cpu 2.275 total
Does anyone have an idea why?
Ruby's digits
... is implemented in rb_int_digits.
Which for non-tiny numbers (i.e., most of your numbers) uses rb_int_digits_bigbase.
Which extracts digit after digit naively with division/modulo by base.
So it should take quadratic time (at least with a small base such as 10).
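To make the quadratic behaviour concrete, here is a rough Python model of that digit-by-digit extraction (my own sketch, not Ruby's actual code). Each divmod on a d-digit bignum costs O(d) work and is performed about d times, so the whole extraction is roughly O(d^2):
def digits_naive(num, base=10):
    # peel off one digit at a time, least significant first (like Integer#digits)
    out = []
    while num > 0:
        num, r = divmod(num, base)
        out.append(r)
    return out

print(digits_naive(12200160415121876738))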
Ruby's to_s
... is implemented in int_to_s.
Which uses rb_int2str.
Which for non-tiny numbers uses rb_big2str.
Which uses rb_big2str1.
Which might use big2str_gmp if available (which sounds/looks like it uses the fast GMP library) or ...
... uses big2str_generic.
Which uses big2str_karatsuba (sweet, I recognize that name!).
Which looks like it has something to do with ...
... Karatsuba's algorithm, which is a fast multiplication algorithm. If you multiply two n-digit numbers the naive way you learned in school, you take n^2 single-digit products. Karatsuba on the other hand only needs about n^1.585, which is quite a lot better. And I didn't read into this further, but I suspect what Ruby does here is also this efficient. Eric Lippert's answer with a base conversion algorithm uses Karatsuba multiplication and says "this [base conversion] algorithm is utterly dominated by the cost of the multiplication".
Comparing quadratic to n^1.585 over the number lengths from 1 digit to 1000 digits gives factor 15:
(1..1000).sum { |i| i**2 } / (1..1000).sum { |i| i**1.585 }
=> 15.150583254950678
Which is roughly the factor you observed as well. Of course that's a rather naive comparison, but, well, why not.
GMP by the way apparently uses/used a "near O(n * log(n)) FFT-based multiplication algorithm".
Thanks to @Drenmi's answer for motivating me to dig into the source after all. I hope I did this right, no guarantees, I'm a Ruby beginner. But that's why I left all the links there for you to check for yourself :-P
Integer#digits doesn't just "split" the number. From the documentation:
Returns the array including the digits extracted by place-value
notation with radix base of int.
This extraction is done even if a base argument is omitted. The relevant source:
# ruby/numeric.c:4809
while (!FIXNUM_P(num) || FIX2LONG(num) > 0) {
VALUE qr = rb_int_divmod(num, base);
rb_ary_push(digits, RARRAY_AREF(qr, 1));
num = RARRAY_AREF(qr, 0);
}
As you can see, this process includes repeated modulo arithmetics, which likely accounts for the additional runtime.
Many Ruby methods create objects (strings, arrays, etc.).
In Ruby, object creation is "expensive".
For instance to_s creates a string and digits creates an array every time the while condition is evaluated.
If you want to optimize your example, you can do the following:
# create the smallest possible 1000 digits number
max = 10**999
number = 3
a = 1
b = 2
# do not create objects in while condition
while b < max
a, b = b, a + b
number += 1
end
puts number
I have not answered your question, but wish to suggest an improved algorithm for the problem you have addressed. For a given number of decimal digits, n, I have implemented the following algorithm.
estimate the number f of Fibonacci numbers ("FNs") that have n or fewer decimal digits.
compute the fth and (f-1)st FNs, and the number of digits m in the fth FN.
if m >= n, back down from the (f-1)st FN until the (f-1)st FN has fewer than n decimal digits, at which time the fth FN is the smallest FN to have n decimal digits.
if m < n, move up from the fth FN until it has n decimal digits, at which time it is the smallest FN to have n decimal digits.
The key is to compute a close estimate f in the first step.
Code
AVG_FNs_PER_DIGIT = 4.784971966781667
def first_fibonacci_with_n_digits(n)
return [1, 1] if n == 1
idx = (n * AVG_FNs_PER_DIGIT).round
fn, prev_fn = fib(idx)
fn.to_s.size >= n ? fib_down(n, fn, prev_fn, idx) : fib_up(n, fn, prev_fn, idx)
end
def fib(idx)
a = 1
b = 2
(idx - 2).times {a, b = b, a + b }
[b, a]
end
def fib_up(n, b, a, idx)
loop do
a, b = b, a + b
idx += 1
break [idx, b] if b.to_s.size == n
end
end
def fib_down(n, b, a, idx)
loop do
a, b = b - a, a
break [idx, b] if a.to_s.size == n - 1
idx -= 1
end
end
Benchmarks
In computing each Fibonacci number two operations are typically performed:
compute the number of digits in the last-computed Fibonacci number and if that number is equal to the target number of digits, terminate (for reasons made clear in the Explanation section below, it cannot be larger than the target number); else
compute the next number in the Fibonacci sequence.
By contrast, the method I have proposed performs the first step a relatively small number of times.
How important is the first step relative to the second and how does the use of n.digits.size compare with that of n.to_s.size in the first step? Let's run some benchmarks to find out.
def use_to_s(ndigits)
case ndigits
when 1
[1, 1]
else
a = 1
b = 2
idx = 3
loop do
break [idx, b] if b.to_s.length == ndigits
a, b = b, a + b
idx += 1
end
end
end
def use_digits(ndigits)
case ndigits
when 1
[1, 1]
else
a = 1
b = 2
idx = 3
loop do
break [idx, b] if b.digits.size == ndigits
a, b = b, a + b
idx += 1
end
end
end
require 'fruity'
def test(ndigits)
nfibs, last_fib = use_to_s(ndigits)
puts "\nndigits = #{ndigits}, nfibs=#{nfibs}, last_fib=#{last_fib}"
compare do
try_use_to_s { use_to_s(ndigits) }
try_use_digits { use_digits(ndigits) }
try_estimate { first_fibonacci_with_n_digits(ndigits) }
end
end
test 20
ndigits = 20, nfibs=93, last_fib=12200160415121876738
Running each test 128 times. Test will take about 1 second.
try_estimate is faster than try_use_to_s by 2x ± 0.1
try_use_to_s is faster than try_use_digits by 80.0% ± 10.0%
test 100
ndigits = 100, nfibs=476, last_fib=13447...37757 (90 digits omitted)
Running each test 16 times. Test will take about 4 seconds.
try_estimate is faster than try_use_to_s by 5x ± 0.1
try_use_to_s is faster than try_use_digits by 10x ± 1.0
test 500
ndigits = 500, nfibs=2390, last_fib=13519...63145 (490 digits omitted)
Running each test 2 times. Test will take about 27 seconds.
try_estimate is faster than try_use_to_s by 9x ± 0.1
try_use_to_s is faster than try_use_digits by 60x ± 1.0
test 1000
ndigits = 1000, nfibs=4782, last_fib=10700...27816 (990 digits omitted)
Running each test once. Test will take about 1 minute.
try_estimate is faster than try_use_to_s by 12x ± 10.0
try_use_to_s is faster than try_use_digits by 120x ± 100.0
There are two main take-aways from these results:
"try_estimate" is the fastest because it performs the first step relatively few times; and
the use of to_s is much faster than that of digits.
Further to the first of these observations note that the initial estimates of the index of the first FN having a given number of digits, compared to the actual index, are as follows:
for 20 digits: 96 est. vs 93 actual
for 100 digits: 479 est. vs 476 actual
for 500 digits: 2392 est. vs 2390 actual
for 1000 digits: 4785 est. vs 4782 actual
The deviation was at most 3, meaning numbers of digits had to be calculated for at most 3 FNs to obtain the desired result.
Explanation
The only explanation of the methods given in the section Code above is the derivation of the constant AVG_FNs_PER_DIGIT, which is used to calculate an estimate of the index of the first FN having the specified number of digits.
This constant derives from the question and selected answer given here. (The Wikipedia article on Fibonacci numbers provides a good overview of their mathematical properties.)
It is known that the first 7 FNs (including zero) have one digit; thereafter the FNs gain an additional digit every 4 or 5 FNs (i.e., sometimes 4, else 5). Therefore, as a very crude calculation, we see that to calculate the first FN with n digits, n >= 2, it will not be less than the 4*nth FN. For n = 1000, that would be 4,000. (In fact, the 4,782nd is the smallest to have 1,000 digits.) In other words, we don't need to calculate the number of digits in the first 4,000 FNs. We can improve on this estimate, however.
As n approaches infinity, the ratio of ranges 10**n...10**(n+1) (n-digit intervals) that contain 5 FNs to those that contain 4 FNs can be computed as follows.
LOG_10 = Math.log(10)
#=> 2.302585092994046
GR = (1 + Math.sqrt(5))/2
#=> 1.618033988749895
LOG_GR = Math.log(GR)
#=> 0.48121182505960347
RATIO_5to4 = (LOG_10 - 4*LOG_GR)/(5*LOG_GR - LOG_10)
#=> 3.6505564183095474
where GR is the Golden Ratio.
Over a large number of n-digit intervals let n4 be the number of those intervals containing 4 FNs and n5 be the number containing 5 FNs. The average number of FNs per interval is therefore (n4*4 + n5*5)/(n4 + n5). Since n5/n4 converges to RATIO_5to4, n5 approaches RATIO_5to4 * n4 in the limit (discarding roundoff error). If we substitute out n5, and let
b = 1/(1 + RATIO_5to4)
#=> 0.21502803321833364
we find the average number of FNs per n-digit interval converges to
avg = b * 4 + (1-b) *5
#=> 4.784971966781667
If fn is the first FN to have n decimal digits, the number of FNs in the sequence up to and including fn can therefore be approximated as
n * avg
For example, the estimate of the index of the first FN to have 1000 decimal digits would be (1000 * 4.784971966781667).round #=> 4785.
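As a quick sanity check (my own sketch, not part of the answer), here is a small Python script that compares the rounded estimate n * 4.784971966781667 with the actual index found by brute force:
def first_fib_index_with_n_digits(n):
    # indexing as in the answer: F(1) = F(2) = 1, F(3) = 2, ...
    a, b, idx = 1, 1, 2
    while len(str(b)) < n:
        a, b = b, a + b
        idx += 1
    return idx

for n in (20, 100, 500, 1000):
    print(n, round(n * 4.784971966781667), first_fib_index_with_n_digits(n))
The estimate stays within a few indices of the true answer, which is what makes the fib_up/fib_down correction loops so cheap.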

Number of ways to reach N from 0 using only 2 or 3?

I am solving this problem where we need to reach from X=0 to X=N. We can only take a step of 2 or 3 at a time.
For each step of 2 we have a probability of 0.2, and for each step of 3 we have a probability of 0.8. How can we find the total probability of reaching N?
e.g. for reaching 5,
2+3 with probability =0.2 * 0.8=0.16
3+2 with probability =0.8 * 0.2=0.16 total = 0.32.
My initial thoughts:
Number of ways can be found out by simple Fibonacci sequence.
f(n)=f(n-3)+f(n-2);
But how do we remember the numbers so that we can multiply them to find the probability?
This can be solved using Dynamic programming.
Let's call F(N) the probability of reaching 0 using only 2 and 3 when the starting number is N.
F(N) = 0.2*F(N-2) + 0.8*F(N-3)
Base case:
F(0) = 1 and F(k)= 0 where k< 0
So the DP code would be something like this:
double F[N + 1];
F[0] = 1;
for (int i = 1; i <= N; i++) {
    if (i >= 3)
        F[i] = 0.2*F[i-2] + 0.8*F[i-3];
    else if (i >= 2)
        F[i] = 0.2*F[i-2];
    else
        F[i] = 0;
}
return F[N];
This algorithm runs in O(N).
Some clarifications about this solution: I assume the only allowed operation for generating the number from 2s and 3s is addition (your definition would allow subtraction as well) and that the input numbers are always valid (2 <= input). Definition: a unique row of numbers means no other row with the same number of 3s and 2s in another order is in scope.
We can reduce the problem into multiple smaller problems:
Problem A: finding all sequences of numbers that can sum up to the given number. (Unique rows of numbers only)
Start by finding the minimum-number of 3s required to build the given number, which is simply input % 2. The maximum-number of 3s that can be used to build the input can be calculated this way:
int max_3 = (int) (input / 3);
if(input - max_3 * 3 == 1)
--max_3;
Now all sequences of numbers that sum up to input must hold between input % 2 and max_3 3s. The 2s can be easily calculated from a given number of 3s.
Problem B: calculating the probability for a given list and its permutations to be the result
For each unique row of numbers, we can easily derive all permutations. Since these consist of the same numbers, they have the same likeliness to appear and produce the same sum. The likeliness can be calculated easily from the row: 0.8 ^ number_of_3s * 0.2 ^ number_of_2s. The next step is to calculate the number of different permutations. Calculating all distinct sets with a specific number of 2s and 3s can be done this way: calculate all possible distributions of the 2s in the set: (number_of_2s + number_of_3s)! / (number_of_3s! * number_of_2s!). Basically just the number of possible distinct permutations.
Now, from theory to practice
Since the math is given, the rest is pretty straightforward:
define prob:
input: int num
output: double
double result = 0.0
int min_3s = (num % 2)
int max_3s = (int) (num / 3)
if(num - max_3s * 3 == 1)
--max_3s
for int c3s in [min_3s , max_3s]
int c2s = (num - (c3s * 3)) / 2
double p = 0.8 ^ c3s * 0.2 ^ c2s
p *= (c3s + c2s)! / (c3s! * c2s!)
result += p
return result
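Here is a minimal Python version of that pseudocode (my own sketch). One detail the pseudocode glosses over: the count of 3s must leave an even remainder, otherwise it cannot be filled with 2s, so the loop skips those counts:
from math import comb

def prob(num):
    result = 0.0
    min_3s = num % 2                  # parity forces at least this many 3s
    max_3s = num // 3
    if num - max_3s * 3 == 1:         # a remainder of 1 cannot be built from 2s
        max_3s -= 1
    for c3s in range(min_3s, max_3s + 1):
        if (num - c3s * 3) % 2:       # wrong parity: the rest cannot be made of 2s
            continue
        c2s = (num - c3s * 3) // 2
        p = 0.8 ** c3s * 0.2 ** c2s
        p *= comb(c3s + c2s, c3s)     # number of distinct orderings
        result += p
    return result

print(prob(5))   # 0.32, matching the example in the question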
Instead of jumping into the programming, you can use math.
Let p(n) be the probability that you reach the location that is n steps away.
Base cases:
p(0)=1
p(1)=0
p(2)=0.2
Linear recurrence relation
p(n+3)=0.2 p(n+1) + 0.8 p(n)
You can solve this in closed form by finding the exponential solutions to the linear recurrence relation.
c^3 = 0.2 c + 0.8
c = 1, (-5 +- sqrt(55)i)/10
Although this was cubic, c=1 will always be a solution in this type of problem since there is a constant nonzero solution.
Because the roots are distinct, all solutions are of the form a1(1)^n + a2((-5+sqrt(55)i) / 10)^n + a3((-5-sqrt(55)i)/10)^n. You can solve for a1, a2, and a3 using the initial conditions:
a1=5/14
a2=(99+sqrt(55)i)/308
a3=(99-sqrt(55)i)/308
This gives you a nonrecursive formula for p(n):
p(n)=5/14+(99+sqrt(55)i)/308 ((-5+sqrt(55)i)/10)^n+(99-sqrt(55)i)/308 ((-5-sqrt(55)i)/10)^n
One nice property of the non-recursive formula is that you can read off the asymptotic value of 5/14, but that's also clear because the average value of a jump is 2(1/5)+ 3(4/5) = 14/5, and you almost surely hit a set with density 1/(14/5) of the integers. You can use the magnitudes of the other roots, 2/sqrt(5)~0.894, to see how rapidly the probabilities approach the asymptotics.
5/14 - (|a2|+|a3|) 0.894^n < p(n) < 5/14 + (|a2|+|a3|) 0.894^n
|5/14 - p(n)| < (|a2|+|a3|) 0.894^n
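A quick numerical check (my own sketch, using the coefficient pairing as corrected above) that the closed form agrees with the recurrence:
r2 = complex(-5, 55 ** 0.5) / 10
a2 = complex(99, 55 ** 0.5) / 308

def p_closed(n):
    # a3 and r3 are the conjugates of a2 and r2, so their term is the conjugate of a2 * r2**n
    return 5 / 14 + 2 * (a2 * r2 ** n).real

def p_recurrence(n):
    p = [1.0, 0.0, 0.2]
    for k in range(3, n + 1):
        p.append(0.2 * p[k - 2] + 0.8 * p[k - 3])
    return p[n]

for n in range(8):
    print(n, round(p_recurrence(n), 10), round(p_closed(n), 10))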
f(n, p) = f(n-3, p*.8) + f(n -2, p*.2)
Start p at 1.
If n=0 return p, if n <0 return 0.
Instead of using the (terribly inefficient) recursive algorithm, start from the start and calculate in how many ways you can reach subsequent steps, i.e. using 'dynamic programming'. This way, you can easily calculate the probabilities and also have a complexity of only O(n) to calculate everything up to step n.
For each step, memorize the possible ways of reaching that step, if any (no matter how), and the probability of reaching that step. For the zeroth step (the start) this is (1, 1.0).
steps = [(1, 1.0)]
Now, for each consecutive step n, get the previously computed possible ways poss and probability prob to reach steps n-2 and n-3 (or (0, 0.0) in case of n < 2 or n < 3 respectively), add those to the combined possibilities and probability to reach that new step, and add them to the list.
for n in range(1, 10):
    poss2, prob2 = steps[n-2] if n >= 2 else (0, 0.0)
    poss3, prob3 = steps[n-3] if n >= 3 else (0, 0.0)
    steps.append( (poss2 + poss3, prob2 * 0.2 + prob3 * 0.8) )
Now you can just get the numbers from that list:
>>> for n, (poss, prob) in enumerate(steps):
... print "%s\t%s\t%s" % (n, poss, prob)
0 1 1.0
1 0 0.0
2 1 0.2
3 1 0.8
4 1 0.04
5 2 0.32 <-- 2 ways to get to 5 with combined prob. of 0.32
6 2 0.648
7 3 0.096
8 4 0.3856
9 5 0.5376
(Code is in Python)
Note that this will get you both the number of possible ways of reaching a certain step (e.g. "first 2, then 3" or "first 3, then 2" for 5), and the probability to reach that step in one go. Of course, if you need only the probability, you can just use single numbers instead of tuples.

Better Algorithm to find the maximum number whose square divides K

Given a number K which is a product of two different numbers (A, B), find the maximum number (<= A and <= B) whose square divides K.
E.g. K = 54 (6*9). Both numbers are available, i.e. 6 and 9.
My approach is fairly trivial:
Take the smaller of the two (6 in this case); let's say A.
Square the number and divide K; if it divides evenly, that's the number.
Else A = A-1, until A = 1.
For the given example, 3*3 = 9 divides K, and hence 3 is the answer.
Looking for a better algorithm, than the trivial solution.
Note: there are thousands of test cases, so the best possible approach is needed.
I am sure someone else will come up with a nice answer involving modulus arithmetic. Here is a naive approach...
Each of the factors can themselves be factored (though it might be an expensive operation).
Given the factors, you can then look for groups of repeated factors.
For instance, using your example:
Prime factors of 9: 3, 3
Prime factors of 6: 2, 3
All prime factors: 2, 3, 3, 3
The 3 appears at least twice, so you have your answer (the square of 3 divides 54).
Second example of 36 x 9 = 324
Prime factors of 36: 2, 2, 3, 3
Prime factors of 9: 3, 3
All prime factors: 2, 2, 3, 3, 3, 3
So you have two 2s and four 3s, which means 2x3x3 is repeated. 2x3x3 = 18, so the square of 18 divides 324.
Edit: python prototype
import math

def factors(num, fdict):
    """ This finds the factors of a number recursively.
    It is not the most efficient algorithm, and I
    have not tested it a lot. You should probably
    use another one. fdict is a dictionary which looks
    like {factor: occurrences, factor: occurrences, ...}
    It must contain at least {2: 0} but need not have
    any other pre-populated elements. Factors will be added
    to this dictionary as they are found.
    """
    while num % 2 == 0:
        num //= 2
        fdict[2] += 1
    if num == 1:          # nothing left after stripping the 2s
        return 0
    i = 3
    found = False
    while not found and i <= int(math.sqrt(num)):
        if num % i == 0:
            found = True
            factors(i, fdict)
            factors(num // i, fdict)
        else:
            i += 2
    if not found:
        # num is prime at this point
        if num in fdict:
            fdict[num] += 1
        else:
            fdict[num] = 1
    return 0

# MAIN ROUTINE IS HERE
n1 = 37  # first number (6 in your example)
n2 = 41  # second number (9 in your example)
fdict = {2: 0}        # initialise factors (start with "no factors of 2")
factors(n1, fdict)    # find the factors of n1 and add them to the dictionary
factors(n2, fdict)    # find the factors of n2 and add them to the dictionary
sqfac = 1
# now find all factors repeated twice and multiply them together
for k in fdict.keys():
    fdict[k] //= 2
    sqfac *= k ** fdict[k]
# here is the result
print(sqfac)
Answer in C++
#include <cmath>
#include <iostream>
using namespace std;

void func(int i, int j)
{
    int k = 54;                       // K is hardcoded here
    float result = pow(i, 2) / k;
    if (static_cast<int>(result) == result)
    {
        if (i < j)
        {
            func(j, i);
        }
        else
        {
            cout << "Number is correct: " << i << endl;
        }
    }
    else
    {
        cout << "Number is wrong" << endl;
        func(j, i);
    }
}
Explanation:
First recurse, then test whether the result is a whole number. If it is, check whether the other factor is smaller or greater; if it is greater, the recursive call tries the other factor, otherwise the number is correct. If the result is not a whole number, print "Number is wrong" and make another recursive call to test j.
If I got the problem correctly, you have a rectangle of length A, width B, and area K,
and you want to convert it to a square while losing the minimum possible area.
If this is the case, then the problem with your algorithm is not the cost of running through multiple iterations until it gets the output.
Rather, the problem is that your algorithm depends heavily on the length A and width B of the input rectangle,
while it should depend only on the area K.
For example:
Assume A = 1, B = 25.
Then K = 25 (the rectangle's area).
Your algorithm will take the minimum value, which is A, and accept it as the answer after a single iteration. That is fast, but it leads to the wrong answer: it results in a square of area 1 and wastes the remaining 24 (whatever the unit, cm or m).
The correct answer here should be 5, which will never be reached by your algorithm.
So, in my solution I assume a single input K.
My idea is as follows:
x = sqrt(K)
if(x is int) .. x is the answer
else loop from x-1 till 1, x--
if K/x^2 is int, x is the answer
This might take extra iterations but will guarantee an accurate answer.
Also, there might be some concern about the cost of sqrt(K),
but it is called just once, and it avoids being misled by the length and width of the input.
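A minimal Python sketch of that idea (my own illustration), looping down from the integer square root of K:
from math import isqrt

def largest_square_root_divisor(k):
    # largest x such that x*x divides k
    x = isqrt(k)
    while x > 1:
        if k % (x * x) == 0:
            return x
        x -= 1
    return 1

print(largest_square_root_divisor(54))   # 3
print(largest_square_root_divisor(324))  # 18
In the worst case this still performs on the order of sqrt(K) divisions per query, so for thousands of test cases the factorization-based answer above should scale better.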

Find the sum of least common multiples of all subsets of a given set

Given: set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
Asked: Find the sum of all least common multiples (LCM) of all subsets of A of size at least 2.
The LCM of a set B = {b0, b1, ..., bk-1} is defined as the minimum integer Bmin such that bi | Bmin, for all 0 ≤ i < k.
Example:
Let N = 3 and A = {2, 6, 7}, then:
LCM({2, 6}) = 6
LCM({2, 7}) = 14
LCM({6, 7}) = 42
LCM({2, 6, 7}) = 42
----------------------- +
answer 104
The naive approach would be to simply calculate the LCM for all O(2^N) subsets, which is not feasible for reasonably large N.
Solution sketch:
The problem is obtained from a competition*, which also provided a solution sketch. This is where my problem comes in: I do not understand the hinted approach.
The solution reads (modulo some small fixed grammar issues):
The solution is a bit tricky. If we observe carefully we see that the integers are between 2 and 500. So, if we prime factorize the numbers, we get the following maximum powers:
prime   max power
2       8
3       5
5       3
7       3
11      2
13      2
17      2
19      2
Other than this, all primes have power 1. So, we can easily calculate all possible states, using these integers, leaving 9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 states, which is nearly 70000. For other integers we can make a dp like the following: dp[70000][i], where i can be 0 to 100. However, as dp[i] depends only on dp[i-1], dp[70000][2] is enough. This leaves the complexity at n * 70000, which is feasible.
I have the following concrete questions:
What is meant by these states?
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
How is dp[i] computed from dp[i-1]?
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
*The original problem description can be found from this source (problem F). This question is a simplified version of that description.
Discussion
After reading the actual contest description (page 10 or 11) and the solution sketch, I have to conclude the author of the solution sketch is quite imprecise in their writing.
The high level problem is to calculate an expected lifetime if components are chosen randomly by fair coin toss. This is what's leading to computing the LCM of all subsets -- all subsets effectively represent the sample space. You could end up with any possible set of components. The failure time for the device is based on the LCM of the set. The expected lifetime is therefore the average of the LCM of all sets.
Note that this ought to include the LCM of sets with only one item (in which case we'd assume the LCM to be the element itself). The solution sketch seems to gloss over this, perhaps because they handled it in a less elegant manner.
What is meant by these states?
The sketch author only uses the word state twice, but apparently manages to switch meanings. In the first use of the word state it appears they're talking about a possible selection of components. In the second use they're likely talking about possible failure times. They could be muddling this terminology because their dynamic programming solution initializes values from one use of the word and the recurrence relation stems from the other.
Does dp stand for dynamic programming?
I would say either it does or it's a coincidence as the solution sketch seems to heavily imply dynamic programming.
If so, what recurrence relation is being solved? How is dp[i] computed from dp[i-1]?
All I can think is that in their solution, state i represents a time to failure, T(i), with the number of times this time to failure has been counted, dp[i]. The resulting sum would be the sum of all dp[i] * T(i).
dp[i][0] would then be the failure times counted for only the first component. dp[i][1] would then be the failure times counted for the first and second component. dp[i][2] would be for the first, second, and third. Etc..
Initialize dp[i][0] with zeroes except for dp[T(c)][0] (where c is the first component considered) which should be 1 (since this component's failure time has been counted once so far).
To populate dp[i][n] from dp[i][n-1] for each component c:
For each i, copy dp[i][n-1] into dp[i][n].
Add 1 to dp[T(c)][n].
For each i, add dp[i][n-1] to dp[LCM(T(i), T(c))][n].
What is this doing? Suppose you knew that you had a time to failure of j, but you added a component with a time to failure of k. Regardless of what components you had before, your new time to fail is LCM(j, k). This follows from the fact that for two sets A and B, LCM(A union B) = LCM(LCM(A), LCM(B)).
Similarly, if we're considering a time to failure of T(i) and our new component's time to failure of T(c), the resultant time to failure is LCM(T(i), T(c)). Note that we recorded this time to failure for dp[i][n-1] configurations, so we should record that many new times to failure once the new component is introduced.
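For concreteness, here is a small Python sketch of this bookkeeping (mine, not the contest author's), using a dictionary keyed by the LCM value rather than the 70000-entry state table:
from math import gcd

def sum_of_subset_lcms(a):
    counts = {}                                   # LCM value -> number of nonempty subsets with that LCM
    for x in a:
        new_counts = dict(counts)
        new_counts[x] = new_counts.get(x, 0) + 1  # the new singleton subset {x}
        for l, c in counts.items():
            m = l * x // gcd(l, x)                # LCM of (previous subset plus x)
            new_counts[m] = new_counts.get(m, 0) + c
        counts = new_counts
    # drop the N singletons to keep only subsets of size >= 2, as the problem asks
    return sum(l * c for l, c in counts.items()) - sum(a)

print(sum_of_subset_lcms([2, 6, 7]))   # 104, matching the example in the question
The dictionary can hold one entry per distinct LCM value, which can blow up once many different large primes are involved; that is exactly what the fixed 70000-state encoding (plus the separate handling of large primes) is meant to avoid.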
Why do the big primes not contribute to the number of states?
Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
You're right, of course. However, the solution sketch states that numbers with large primes are handled in another (unspecified) fashion.
What would happen if we did include them? The number of states we would need to represent would explode into an impractical number. Hence the author accounts for such numbers differently. Note that if a number less than or equal to 500 includes a prime larger than 19 the other factors multiply to 21 or less. This makes such numbers amenable for brute forcing, no tables necessary.
The first part of the editorial seems useful, but the second part is rather vague (and perhaps unhelpful; I'd rather finish this answer than figure it out).
Let's suppose for the moment that the input consists of pairwise distinct primes, e.g., 2, 3, 5, and 7. Then the answer (for summing all sets, where the LCM of 0 integers is 1) is
(1 + 2) (1 + 3) (1 + 5) (1 + 7),
because the LCM of a subset is exactly equal to the product here, so just multiply it out.
Let's relax the restriction that the primes be pairwise distinct. If we have an input like 2, 2, 3, 3, 3, and 5, then the multiplication looks like
(1 + (2^2 - 1) 2) (1 + (2^3 - 1) 3) (1 + (2^1 - 1) 5),
because 2 appears with multiplicity 2, and 3 appears with multiplicity 3, and 5 appears with multiplicity 1. With respect to, e.g., just the set of 3s, there are 2^3 - 1 ways to choose a subset that includes a 3, and 1 way to choose the empty set.
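A tiny Python check of this product formula (my own illustration), summing the LCMs over all subsets with the empty set contributing 1:
from collections import Counter

def sum_lcms_all_subsets_of_primes(primes):
    total = 1
    for p, m in Counter(primes).items():
        total *= 1 + (2**m - 1) * p   # either pick none of the p's, or at least one
    return total

print(sum_lcms_all_subsets_of_primes([2, 3, 5, 7]))        # (1+2)(1+3)(1+5)(1+7) = 576
print(sum_lcms_all_subsets_of_primes([2, 2, 3, 3, 3, 5]))  # (1+3*2)(1+7*3)(1+1*5) = 924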
Call a prime small if it's 19 or less and large otherwise. Note that integers 500 or less are divisible by at most one large prime (with multiplicity). The small primes are more problematic. What we're going to do is to compute, for each possible small portion of the prime factorization of the LCM (i.e., one of the ~70,000 states), the sum of LCMs for the problem derived by discarding the integers that could not divide such an LCM and leaving only the large prime factor (or 1) for the other integers.
For example, if the input is 2, 30, 41, 46, and 51, and the state is 2, then we retain 2 as 1, discard 30 (= 2 * 3 * 5; 3 and 5 are small), retain 41 as 41 (41 is large), retain 46 as 23 (= 2 * 23; 23 is large), and discard 51 (= 3 * 17; 3 and 17 are small). Now, we compute the sum of LCMs using the previously described technique. Use inclusion-exclusion to get rid of the subsets whose LCM's small portion properly divides the state instead of being exactly equal to it. Maybe I'll work a complete example later.
What is meant by these states?
I think here, states refer to whether a number is in the set B = {b0, b1, ..., bk-1} of LCMs of set A.
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
dp in the solution sketch stands for dynamic programming, I believe.
How is dp[i] computed from dp[i-1]?
It's feasible that we can figure out the states of the next group of LCMs from the previous states. So, we only need an array of 2 and toggle back and forth.
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
We can use Prime Factorization and exponents only to present the number.
Here is one example.
6 = (2^1)(3^1)(5^0) -> state "1 1 0" to represent 6
18 = (2^1)(3^2)(5^0) -> state "1 2 0" to represent 18
Here is how we can get the LCM of 6 and 18 using prime factorization:
LCM(6,18) = (2^max(1,1))(3^max(1,2))(5^max(0,0)) = (2^1)(3^2)(5^0) = 18
2^9 > 500, 3^6 > 500, 5^4 > 500, 7^4>500, 11^3 > 500, 13^3 > 500, 17^3 > 500, 19^3 > 500
so we can use just the exponents of the primes 2, 3, 5, 7, 11, 13, 17, and 19 to represent the LCMs in the set B = {b0, b1, ..., bk-1}
for the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 <= 70000, so we only need two copies of dp[9][6][4][4][3][3][3][3] to keep track of all the LCMs' states. So, dp[70000][2] is enough.
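As a concrete illustration of the state encoding (my own Python sketch, not from the answer), every number up to 500 reduces to a vector of exponents over the eight small primes plus at most one leftover large prime:
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def small_state(n):
    exps = []
    for p in SMALL_PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps, n               # n is now 1 or a single large prime

print(small_state(6))    # ([1, 1, 0, 0, 0, 0, 0, 0], 1)
print(small_state(18))   # ([1, 2, 0, 0, 0, 0, 0, 0], 1)
print(small_state(46))   # ([1, 0, 0, 0, 0, 0, 0, 0], 23)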
I put together a small C++ program to illustrate how we can get the sum of LCMs of the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500. As in the solution sketch, we need to loop through at most 70000 possible LCMs.
#include <iostream>
#include <cstring>
using namespace std;

int gcd(int a, int b) {
int remainder = 0;
do {
remainder = a % b;
a = b;
b = remainder;
} while (b != 0);
return a;
}
int lcm(int a, int b) {
if (a == 0 || b == 0) {
return 0;
}
return (a * b) / gcd(a, b);
}
int sum_of_lcm(int A[], int N) {
// get the max LCM from the array
int max = A[0];
for (int i = 1; i < N; i++) {
max = lcm(max, A[i]);
}
max++;
//
int dp[max][2];
memset(dp, 0, sizeof(dp));
int pri = 0;
int cur = 1;
// loop through n x 70000
for (int i = 0; i < N; i++) {
for (int v = 1; v < max; v++) {
int x = A[i];
if (dp[v][pri] > 0) {
x = lcm(A[i], v);
dp[v][cur] = (dp[v][cur] == 0) ? dp[v][pri] : dp[v][cur];
if ( x % A[i] != 0 ) {
dp[x][cur] += dp[v][pri] + dp[A[i]][pri];
} else {
dp[x][cur] += ( x==v ) ? ( dp[v][pri] + dp[v][pri] ) : ( dp[v][pri] ) ;
}
}
}
dp[A[i]][cur]++;
pri = cur;
cur = (pri + 1) % 2;
}
for (int i = 0; i < N; i++) {
dp[A[i]][pri] -= 1;
}
long total = 0;
for (int j = 0; j < max; j++) {
if (dp[j][pri] > 0) {
total += dp[j][pri] * j;
}
}
cout << "total:" << total << endl;
return total;
}
int test() {
int a[] = {2, 6, 7 };
int n = sizeof(a)/sizeof(a[0]);
int total = sum_of_lcm(a, n);
return 0;
}
Output
total:104
The states are one more than the maximum powers of the primes. Your numbers go up to 500, and the largest power of 2 that fits is 2^8, so the exponent of 2 lies in [0..8], which is 9 states. Similarly for the other primes.
"dp" could well stand for dynamic programming, I'm not sure.
The recurrence relation is the heart of the problem, so you will learn more by solving it yourself. Start with some small, simple examples.
For the large primes, try solving a reduced problem without using them (or their equivalents) and then add them back in to see their effect on the final result.

What is the correct algorithm for a logarithmic distribution curve between two points?

I've read a bunch of tutorials about the proper way to generate a logarithmic distribution of tagcloud weights. Most of them group the tags into steps. This seems somewhat silly to me, so I developed my own algorithm based on what I've read so that it dynamically distributes the tag's count along the logarithmic curve between the threshold and the maximum. Here's the essence of it in python:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
    countdist = []
    # mincount is either the threshold or the minimum if it's over the threshold
    mincount = threshold<min(count) and min(count) or threshold
    maxcount = max(count)
    spread = maxcount - mincount
    # the slope of the line (rise over run) between (mincount, minsize) and (maxcount, maxsize)
    delta = (maxsize - minsize) / float(spread)
    for c in count:
        logcount = log(c - (mincount - 1)) * (spread + 1) / log(spread + 1)
        size = delta * logcount - (delta - minsize)
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
Basically, without the logarithmic calculation of the individual count, it would generate a straight line between the points, (mincount, minsize) and (maxcount, maxsize).
The algorithm does a good approximation of the curve between the two points, but suffers from one drawback. The mincount is a special case, and the logarithm of it produces zero. This means the size of the mincount would be less than minsize. I've tried cooking up numbers to try to solve this special case, but can't seem to get it right. Currently I just treat the mincount as a special case and add "or 1" to the logcount line.
Is there a more correct algorithm to draw a curve between the two points?
Update Mar 3: If I'm not mistaken, I am taking the log of the count and then plugging it into a linear equation. To put the description of the special case in other words, in y=lnx at x=1, y=0. This is what happens at the mincount. But the mincount can't be zero, the tag has not been used 0 times.
Try the code and plug in your own numbers to test. Treating the mincount as a special case is fine by me, I have a feeling it would be easier than whatever the actual solution to this problem is. I just feel like there must be a solution to this and that someone has probably come up with a solution.
UPDATE Apr 6: A simple google search turns up many of the tutorials I've read, but this is probably the most complete example of stepped tag clouds.
UPDATE Apr 28: In response to antti.huima's solution: When graphed, the curve that your algorithm creates lies below the line between the two points. I've been trying to juggle the numbers around but still can't seem to come up with a way to flip that curve to the other side of the line. I'm guessing that if the function was changed to some form of logarithm instead of an exponent it would do exactly what I'd need. Is that correct? If so, can anyone explain how to achieve this?
Thanks to antti.huima's help, I re-thought out what I was trying to do.
Taking his method of solving the problem, I want an equation where the logarithm of the mincount is equal to the linear equation between the two points.
weight(MIN) = ln(MIN-(MIN-1)) + min_weight
min_weight = ln(1) + min_weight
While this gives me a good starting point, I need to make it pass through the point (MAX, max_weight). It's going to need a constant:
weight(x) = ln(x-(MIN-1))/K + min_weight
Solving for K we get:
K = ln(MAX-(MIN-1))/(max_weight - min_weight)
So, to put this all back into some python code:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, threshold=0, maxsize=1.75, minsize=.75):
    countdist = []
    # mincount is either the threshold or the minimum if it's over the threshold
    mincount = threshold<min(count) and min(count) or threshold
    maxcount = max(count)
    constant = log(maxcount - (mincount - 1)) / (maxsize - minsize)
    for c in count:
        size = log(c - (mincount - 1)) / constant + minsize
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
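For example (a quick check of the code above, not from the original post), the extreme counts land exactly on minsize and maxsize:
for d in logdist([1, 3, 5, 4, 7, 5, 10, 6]):
    print(d)
# {'count': 1, 'size': 0.75}    <- the minimum count maps exactly to minsize
# ...
# {'count': 10, 'size': 1.75}   <- the maximum count maps exactly to maxsize
# (remaining entries omitted)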
Let's begin with your mapping from the logged count to the size. That's the linear mapping you mentioned:
size
|
max |_____
| /
| /|
| / |
min |/ |
| |
/| |
0 /_|___|____
0 a
where min and max are the min and max sizes, and a = log(maxcount) - b. The line is of the form y = mx + c, where x = log(count) - b.
From the graph, we can see that the gradient, m, is (maxsize-minsize)/a.
We need x=0 at y=minsize, so log(mincount)-b=0 -> b=log(mincount)
This leaves us with the following python:
mincount = min(count)
maxcount = max(count)
xoffset = log(mincount)
gradient = (maxsize-minsize)/(log(maxcount)-log(mincount))
for c in count:
    x = log(c)-xoffset
    size = gradient * x + minsize
If you want to make sure that the minimum count is always at least 1, replace the first line with:
mincount = min(count+[1])
which appends 1 to the count list before doing the min. The same goes for making sure the maxcount is always at least 1. Thus your final code per above is:
from math import log
count = [1, 3, 5, 4, 7, 5, 10, 6]
def logdist(count, maxsize=1.75, minsize=.75):
    countdist = []
    mincount = min(count+[1])
    maxcount = max(count+[1])
    xoffset = log(mincount)
    gradient = (maxsize-minsize)/(log(maxcount)-log(mincount))
    for c in count:
        x = log(c)-xoffset
        size = gradient * x + minsize
        countdist.append({'count': c, 'size': round(size, 3)})
    return countdist
What you have is tags whose counts run from MIN to MAX; the threshold issue can be ignored here because it amounts to setting every count below the threshold to the threshold value and taking the minimum and maximum only afterwards.
You want to map the tag counts to "weights" but in a "logarithmic fashion", which basically means (as I understand it) the following. First, the tags with count MAX get max_weight weight (in your example, 1.75):
weight(MAX) = max_weight
Secondly, the tags with the count MIN get min_weight weight (in your example, 0.75):
weight(MIN) = min_weight
Finally, it holds that when your count decreases by 1, the weight is multiplied with a constant K < 1, which indicates the steepness of the curve:
weight(x) = weight(x + 1) * K
Solving this, we get:
weight(x) = weight_max * (K ^ (MAX - x))
Note that with x = MAX, the exponent is zero and the multiplicand on the right becomes 1.
Now we have the extra requirement that weight(MIN) = min_weight, and we can solve:
weight_min = weight_max * (K ^ (MAX - MIN))
from which we get
K ^ (MAX - MIN) = weight_min / weight_max
and taking logarithm on both sides
(MAX - MIN) ln K = ln weight_min - ln weight_max
i.e.
ln K = (ln weight_min - ln weight_max) / (MAX - MIN)
The right hand side is negative as desired, because K < 1. Then
K = exp((ln weight_min - ln weight_max) / (MAX - MIN))
So now you have the formula to calculate K. After this you just apply for any count x between MIN and MAX:
weight(x) = max_weight * (K ^ (MAX - x))
And you are done.
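Here is a short Python sketch of that exponential weighting (my own rendering of the formulas above; the names are mine):
from math import exp, log

def expdist(count, maxsize=1.75, minsize=0.75):
    mn, mx = min(count), max(count)
    if mx == mn:                 # degenerate case: every tag has the same count
        return [{'count': c, 'size': maxsize} for c in count]
    k = exp((log(minsize) - log(maxsize)) / (mx - mn))     # the constant K < 1
    return [{'count': c, 'size': round(maxsize * k ** (mx - c), 3)} for c in count]

print(expdist([1, 3, 5, 4, 7, 5, 10, 6]))
As the asker's April 28 update notes, this curve bows below the straight line between the two endpoints, which is what eventually led to the logarithmic form in the accepted code further up.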
On a log scale, you just plot the log of the numbers linearly (in other words, pretend you're plotting linearly, but take the log of the numbers to be plotted first).
The zero problem can't be solved analytically--you have to pick a minimum order of magnitude for your scale, and no matter what you can't ever reach zero. If you want to plot something at zero, your choices are to arbitrarily give it the minimum order of magnitude of the scale, or to omit it.
I don't have the exact answer, but I think you want to look up "linearizing exponential data". Start by calculating the equation of the line passing through the points and take the log of both sides of that equation.
