Skiena, The Algorithm Design Manual - Geometric Series Clarification

Picture taken from book.
That is an explanation of a geometric series from the book, which I do not understand.
The constant ratio is a, right?
So let's take the LHS (just the sum) for n = 5 and constant ratio a = 2.
So we will have this:
2^0 + 2^1 + 2^2 + 2^3 + 2^4 + 2^5 = 1 + 2 + 4 + 8 + 16 + 32 = 63
Now if I use the RHS,
a(a^(n+1) - 1)/(a - 1),
it gives 2(2^(5+1) - 1)/(2 - 1), which for n = 5 is 126.
How can they be equal ?
Also it says later on: 'when a > 1 the sum grows rapidly with each new term.' Is he talking about space complexity?
Because I do not get the big-theta notation. So for n = 5 and a = 2 it will take Big-Theta(64), i.e. 64 (2^6) steps?
Here is some Ruby code:
n = 5
a = 2
sum = 0
for i in 0..n do
  sum = sum + a**i
end
puts sum # prints 63
I can see n+1 steps.
Any help understanding this please?

The formula in the book is wrong; there is an extra factor of a (n = 0 should yield 1, not a). The correct closed form is (a^(n+1) - 1)/(a - 1).
"The sum grows rapidly" is just about the values of the sum; it does not describe the complexity of computing it.

Related

Cost analysis for implementing a stack as an array?

Please refer to answer 2 of the material above. I can follow the text up to that point. I always seem to lose the thread when there is no illustration, maybe because I'm new to math notation.
I understand the cost of the expensive operations (doubling the array when the stack is full):
1 + 2 + 4 + 8 + ... + 2^i, where i is the index in that sequence. So index 0 = 1, index 1 = 2, index 2 = 4 and index 3 = 8.
I can see the sequence for the costly operations, but I get confused by the following explanation:
Now, in any sequence of n operations, the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n (if all operations are pushes then 2^i will be the largest power of 2 less than n). This sum is at most 2n − 1. Adding in the additional cost of n for inserting/removing, we get a total cost < 3n, and so our amortised cost per operation is < 3.
I don't understand that explanation.
the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n
What does "for some 2^i < n" mean?
Does it say that the number of operations n will always be larger than 2^i? And does n stand for the number of operations or the length of the array?
And the following I just don't follow:
if all operations are pushes then 2^i will be the largest power of 2 less than n. This sum is at most 2n − 1.
Could someone illustrate this please?
n is the largest stack size; the internal array size at that moment is the least power of two with 2^(i+1) >= n, so the last expansion takes 2^i < n time.
For example, if the stack reaches size n = 11, the last expansion grows the array from 8 to 16, moving 8 items.
About the second question: it is the sum of a geometric progression,
1 + 2 + 4 + 8 + ... + 2^i = 2^(i+1) - 1,
which is at most 2n - 1 because 2^i < n.
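To make the bound concrete, here is a rough Ruby simulation (an illustration, not from the original answer) that counts one unit per push plus one unit per item copied during a resize:

# Simulate n pushes into a doubling array and count the work.
def push_costs(n)
  capacity = 1
  size = 0
  resize_cost = 0
  n.times do
    if size == capacity
      resize_cost += size   # copy every existing item into the bigger array
      capacity *= 2
    end
    size += 1               # the push itself costs 1
  end
  [resize_cost, n + resize_cost]
end

resize, total = push_costs(11)
puts resize           # => 15, i.e. 1 + 2 + 4 + 8 = 2^(i+1) - 1 < 2n
puts total.to_f / 11  # amortised cost per push, below 3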

formula for the sum of n+n/2+n/3+...+n/n

So I've got this algorithm whose time complexity I need to calculate. It goes like this:
for i = 1 to n do
    k = i
    while (k <= n) do
        FLIP(A[k])
        k = k + i
where A is an array of booleans, and FLIP simply flips the current value, so it's O(1).
Now I understand that the inner while loop should run about
n/1 + n/2 + n/3 + ... + n/n
times in total, if I'm correct, but is there a formula for such a sum?
I'm pretty confused here.
The more exact computation is T(n) = Σ_{i=1}^{n} (n - i)/i (because k starts from i). Hence the sum is n + n/2 + ... + n/n - n = n(1 + 1/2 + ... + 1/n) - n, approximately. We know that 1 + 1/2 + ... + 1/n = H(n) and H(n) = Θ(log n). Hence T(n) = Θ(n log n). The -n term has no effect on the asymptotic computational cost, as n = o(n log n).
Let's say we want to calculate the sum
n + n/2 + n/3 + ... + n/n
=> n (1 + 1/2 + 1/3 + ... + 1/n)
The bracketed part (1 + 1/2 + 1/3 + ... + 1/n) is the well-known harmonic series, and I'm afraid there is no simple closed-form formula for it.
The given problem therefore boils down to calculating the sum of the harmonic series.
Although this sum has no simple closed form, you can still find an asymptotic bound for it, which is O(log(n)).
Hence the answer to the above problem is O(n log(n)).
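As a sanity check, here is a Ruby translation of the pseudo-code (names are illustrative) that counts the inner-loop iterations and compares them with n * H(n):

# Count how often the inner while loop runs and compare with n * H(n).
def flip_iterations(n)
  a = Array.new(n + 1, false)  # A is an array of booleans, 1-indexed here
  count = 0
  (1..n).each do |i|
    k = i
    while k <= n
      a[k] = !a[k]             # FLIP(A[k]) is O(1)
      count += 1
      k += i
    end
  end
  count
end

n = 1000
harmonic = (1..n).sum { |i| 1.0 / i }
puts flip_iterations(n)    # actual iteration count
puts (n * harmonic).round  # ≈ n * H(n), same Θ(n log n) order as the count above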

Calculating Time Complexity (2 Simple Algorithms)

Here are two algorithms (pseudo-code):
Alg1(n)
1. int x = n
2. int a = 0
3. while (x > 1) do
3.1.     for i = 1 to x do
3.1.1        a = a + 1
3.2      x = x - n/5

Alg2(n)
1. int x = n
2. int a = 0
3. while (x > 1) do
3.1.     for i = 1 to x do
3.1.1        a = a + 1
3.2      x = x/5
The difference is on line 3.2.
Time complexity:
Alg1: c + c + n*n*2c = 2c + 2cn² = 2c(n² + 1) = O(n²)
Alg2: c + c + n*n*2c = 2c + 2cn² = 2c(n² + 1) = O(n²)
I wanted to know if the calculation is correct.
Thanks!
No, I'm afraid you aren't correct.
In the first algorithm, the line:
x = x - n/5
makes the while loop O(1): it will run five times, however large n is. The for loop is O(N), so it's O(N) overall.
In algorithm 2, by contrast, x decreases as
x = x/5
As x = n to start with, this while loop runs O(log N) times. However, the work of the inner for loop also shrinks by a factor of 5 each time. Therefore you are carrying out n + n/5 + n/25 + ... operations, which is O(N) again.
Another way to see the order of growth of both your algorithms:
Algorithm 1: you decrease the value of x by n/5 each time, so the total work is n + 4n/5 + 3n/5 + 2n/5 + n/5 = 3n, which is O(N).
Algorithm 2: you divide x by 5 each time, so the total work is n + n/5 + n/25 + ..., which is also O(N).
See Wikipedia for big-O {O()} notation.
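For a quick empirical check, here is a direct Ruby translation of both pieces of pseudo-code, counting only the inner-loop iterations (and assuming n is a multiple of 5 so the integer division matches the pseudo-code):

# Count the total number of inner-loop iterations for both algorithms.
def alg1_steps(n)
  x = n
  steps = 0
  while x > 1
    steps += x    # the for loop runs x times
    x -= n / 5    # x drops by n/5, so the while loop runs about 5 times
  end
  steps
end

def alg2_steps(n)
  x = n
  steps = 0
  while x > 1
    steps += x    # the for loop runs x times
    x /= 5        # x shrinks geometrically: n + n/5 + n/25 + ...
  end
  steps
end

[10_000, 20_000].each do |n|
  puts "#{alg1_steps(n)} #{alg2_steps(n)}"  # both columns grow linearly with n
end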

Calculating T(n) Time Complexity of an Algorithm

I am looking for some clarification on working out the time efficiency of an algorithm, specifically T(n). The algorithm below is not as efficient as it could be, but I believe it's a good example to learn from. I would appreciate a line-by-line confirmation of the sum of operations in the code:
Pseudo-code
1. Input: array X of size n
2. Let A = an empty array of size n
3. For i = 0 to n-1
4. Let s = x[0]
5. For j = 0 to i
6. Let sum = sum + x[j]
7. End For
8. Let A[i] = sum / (i+1)
9. End For
10. Output: Array A
My attempt at calculating T(n)
1. 1
2. n
3. n
4. n(2)
5. n(n-1)
6. n(5n)
7. -
8. n(6)
9. -
10. 1
T(n) = 1 + n + n + 2n + n^2 - n + 5n^2 + 6n + 1
= 6n^2 + 9n + 2
So, T(n) = 6n^2 + 9n + 2 is what I arrive at, from this I derive Big-O of O(n^2).
What errors, if any, have I made in my calculation...
Edit: ...in counting the primitive operations to derive T(n)?
Your result O(n^2) is correct and follows from the two nested loops. I would prefer a derivation like
0 + 1 + 2 + ... + (n-1) = (n-1)n/2 = O(n^2)
which follows directly from observing the nested loops.
I'm not really sure about your methodology, but O(n^2) does seem to be correct. At each iteration through the outer loop you do a sub-loop over the previous elements. Therefore you're looking at 1 iteration the first time, 2 the second, then 3, ..., then n the final time. This is equivalent to the sum from 1 to n, which gives a complexity of n^2.
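Here is a Ruby sketch of what the pseudo-code appears to compute (prefix averages), with a counter for the inner-loop iterations; it assumes line 4 is meant to reset the running sum to 0, since "Let s = x[0]" looks like a slip:

# Prefix averages with a counter: the inner loop runs 1 + 2 + ... + n
# = n(n + 1)/2 times in total, hence O(n^2).
def prefix_averages(x)
  n = x.length
  a = Array.new(n)
  inner_steps = 0
  (0...n).each do |i|
    sum = 0                  # assumed initialisation (see note above)
    (0..i).each do |j|
      sum += x[j]
      inner_steps += 1
    end
    a[i] = sum.to_f / (i + 1)
  end
  [a, inner_steps]
end

averages, steps = prefix_averages([2, 4, 6, 8])
p averages  # => [2.0, 3.0, 4.0, 5.0]
puts steps  # => 10, which is 4 * 5 / 2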

Algorithm Recurrence formula

I am reading Algorithms in C++ by Robert Sedgewick. In the basic recurrences section it is mentioned that this recurrence arises for a recursive program that loops through the input to eliminate one item:
C_N = C_(N-1) + N, for N >= 2, with C_1 = 1.
C_N is about N^2/2. Evaluating the sum 1 + 2 + ... + N is elementary. In addition to this, the following statement is made:
" This result - twice the value sought - consists of N terms, each of which sums to N +1
I need help in understanding abouve statement what are N terms here and how each sums to
N +1, aslo what does "twice the value sought" means.
Thanks for your help
I think he is referring to the basic mathematical trick for calculating that sum, although it's difficult to conclude anything from such a short passage.
Let's assume N = 100. E.g., the sum is 1 + 2 + 3 + .. + 99 + 100.
Now, let's group pairs of elements with sum 101: 1 + 100, 2 + 99, 3 + 98, ..., 50 + 51. That gives us 50 (N/2) pairs with sum 101 (N + 1) in each: thus the overall sum is 50*101.
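A two-line Ruby check of that pairing for N = 100:

puts (1..100).sum  # => 5050
puts 50 * 101      # => 5050, i.e. (N/2) * (N + 1)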
Anyway, could you provide a bit more context to that quote?
The recurrence formula means:
C1 = 1
C2 = C1 + 2 = 1 + 2 = 3
C3 = C2 + 3 = 3 + 3 = 6
C4 = C3 + 4 = 6 + 4 = 10
C5 = C4 + 5 = 10 + 5 = 15
etc.
But you can also write it directly:
C5 = 1 + 2 + 3 + 4 + 5 = 15
And then use the old trick:
  1 +  2  +  3  + ... + N
+ N + N-1 + N-2 + ... + 1
-------------------------
(N+1) + (N+1) + ... + (N+1)    (N terms)
= (N+1) * N
From there we get: 1 + 2 + ... + N = N * (N+1) / 2.
As an anecdote, the above formula was found by the great mathematician Carl Friedrich Gauss when he was at school.
From there we can deduce that such a recursive algorithm is O(N^2), which is probably what Robert Sedgewick is showing.
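And a small Ruby sketch that evaluates the recurrence and checks it against the closed form N(N + 1)/2:

# C_N = C_(N-1) + N with C_1 = 1, compared against N(N + 1)/2.
def c(n)
  n == 1 ? 1 : c(n - 1) + n
end

(1..6).each do |n|
  puts "#{c(n)} #{n * (n + 1) / 2}"  # the two values always agree
end
# C_N is about N^2 / 2, so the corresponding algorithm is O(N^2)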
