Prove Number of Multiplications in Exponentiation by Squaring Algorithm - algorithm

I am trying to find the number of multiplications required by an algorithm that uses exponentiation by squaring, which I was reading about on Wikipedia. The Computational Complexity section mentions that the algorithm requires at most floor(log_2 n) squarings and at most floor(log_2 n) multiplications. How could I go about proving this?
I have this pseudocode:
expo(a, n)
    if n == 0
        return 1
    if n == 1
        return a
    if n is even
        b = expo(a, n/2)
        return b * b
    return a * expo(a, n - 1)
With this, I also have the following relation
Let T(n) be the number of multiplications. From the pseudocode I get the recurrence
T(n) = 0 if n < 2; T(n) = T(n/2) + 1 if n is even (the squaring b*b); T(n) = T(n - 1) + 1 if n is odd (the multiplication a * expo(a, n-1)).
I've attempted using the bit-string representation of the exponent, n, and noting the binary operations which need to be completed, e.g. 5 = 101_2. Every 1-bit requires inverting and then bit-shifting to the right; every 0-bit simply requires bit-shifting to the right. These operations can then represent multiplications, as described by this chart I produced:
exponent n                 0  1  2  3  4  5  6  7  8
bits in n                  1  1  2  2  3  3  3  3  4
0-bits in n                1  0  1  0  2  1  1  0  3
1-bits in n                0  1  1  2  1  2  2  1  1
binary operations for a^n  0  0  1  2  2  3  3  4  3
multiplications for a^n    0  0  1  2  2  3  3  4  3
Edit
As pointed out by Henry in the comments below, the number of multiplications can be found as floor(log_2 n) plus the number of 1-bits in the binary representation of n, minus 1. To avoid getting lost in the math, I will write the number of 1-bits as a function b(n). Then T(n) = floor(log_2 n) + b(n) - 1.
Proving for n = 2:
2_10 = 10_2 -> b(2) = 1
-> T(2) = floor(log_2 2) + b(2) - 1 = 1 + 1 - 1 = 1
This agrees with the observation table above.
Assume true for k.
Prove for k+1:
T(k+1) = floor(log_2 (k+1)) + b(k+1) - 1
Beyond writing the formula in terms of k + 1, I am not sure how to proceed. I would appreciate any insight.
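As a sanity check (not a proof), here is a small Ruby sketch that instruments the recursion above with a counter and compares the count against floor(log_2 n) + b(n) - 1; the counter hash is scaffolding I added for illustration, not part of the original pseudocode.

# Count the multiplications performed by expo and compare them with
# floor(log2 n) + (number of 1-bits in n) - 1.
def expo(a, n, counter)
  return 1 if n == 0
  return a if n == 1
  if n.even?
    b = expo(a, n / 2, counter)
    counter[:mults] += 1          # the squaring b * b
    return b * b
  end
  counter[:mults] += 1            # the multiplication a * a^(n-1)
  a * expo(a, n - 1, counter)
end

(2..8).each do |n|
  counter = { mults: 0 }
  expo(3, n, counter)
  predicted = Math.log2(n).floor + n.to_s(2).count("1") - 1
  puts "n=#{n}  counted=#{counter[:mults]}  predicted=#{predicted}"
end

The counted and predicted values should agree with the multiplications row of the table above.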

Related

Cost analysis for implementing a stack as an array?

Please refer to answer 2 of the material above. I can follow the text up to that point, but I always seem to lose the concept when there is no illustration, maybe because I am new to math notation.
I understand the cost of the expensive operations (doubling the array when the stack is full):
1 + 2 + 4 + 8 + ... + 2^i, where i is the index in that sequence, so index 0 gives 1, index 1 gives 2, index 2 gives 4, and index 3 gives 8.
I can see the sequence for costly operations but I get confused with the following explanation.
Now, in any sequence of n operations, the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n (if all operations are pushes then 2^i will be the largest power of 2 less than n). This sum is at most 2n − 1. Adding in the additional cost of n for inserting/removing, we get a total cost < 3n, and so our amortised cost per operation is < 3.
I don't understand that explanation?
the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n
What does "for some 2^i < n" mean?
Does it say that the number of operations n will always be larger than 2^i? And does n stand for the number of operations or for the length of the array?
And the following I just don't follow:
if all operations are pushes then 2^i will be the largest power of 2 less than n. This sum is at most 2n − 1.
Could someone illustrate this please?
Here n is the final stack size; the underlying array size at that moment is the least power of two with 2^(i+1) >= n, so the last expansion takes 2^i < n time.
For example, if the array reaches size n = 11, the last expansion grows it from 8 to 16, moving 8 items.
About the second question: it is the sum of a geometric progression,
1 + 2 + 4 + 8 + ... + 2^i = 2^(i+1) - 1,
and since 2^i < n, this is at most 2n - 1.
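To make the 2n and 3n bounds concrete, here is a small Ruby simulation (illustrative only, not from the original answer) of n pushes into an array that doubles when full, counting the element copies done by each resize:

# Simulate n pushes into a doubling array and total the resize copies.
n = 100
capacity = 1
size = 0
copy_cost = 0
n.times do
  if size == capacity
    copy_cost += size   # copy every existing element into the new array
    capacity *= 2
  end
  size += 1
end
total_cost = copy_cost + n   # resize copies plus one unit of work per push
puts "copy cost = #{copy_cost} (bound 2n - 1 = #{2 * n - 1}), total = #{total_cost} (bound 3n = #{3 * n})"

For n = 100 the resizes copy 1 + 2 + 4 + ... + 64 = 127 elements, which is below 2n − 1 = 199, and the total cost 227 is below 3n = 300.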

Find out the Complexity of algorithm?

Find the complexity, measured by the number of print statements executed, of an algorithm that takes a positive integer n and prints 1 one time, 2 two times, 3 three times, ..., and n n times.
That is
1
2 2
3 3 3
……………
……………
n n n n ……..n (n times)
Assuming the problem is to find the algorithmic complexity of an algorithm which, when given a number n, prints every number from 1 to n, printing 1 once, 2 twice, 3 thrice, and so on, the complexity has an upper bound of O(n²).
This is because for the value n alone there are n prints. If you want the exact count, it is (n² + n)/2, i.e. about n²/2, because you are summing the sequence 1 + 2 + ... + n.
For n = 5, you print 1 + 2 + 3 + 4 + 5 times... which is 15.
For n = 6, you print 1 + 2 + 3 + 4 + 5 + 6 times... which is 21.
For n = 10, you print 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 times... which is 55 times.
Since the actual number of prints is (n² + n)/2, the largest order of magnitude is n². You are better off stating the complexity as O(n²), because the n² term quickly outgrows the n term for a large enough input size.
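A quick Ruby sketch (my own, just to illustrate the count) that prints the pattern and tallies the prints, confirming the n(n + 1)/2 total:

n = 5
prints = 0
(1..n).each do |i|
  i.times { print "#{i} "; prints += 1 }
  puts
end
puts "total prints = #{prints}, n(n + 1)/2 = #{n * (n + 1) / 2}"

For n = 5 this reports 15 prints, matching the sum 1 + 2 + 3 + 4 + 5.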

Computing Predecessor and Successor

I came across an interesting question and I want to discuss it to see how different people would approach it:
Let n be a natural number, the task is to implement a function f so that
f(n) = n + 1 if 2 divides n
f(n) = n - 1 if 2 does not divide n
Condition: The implementation must not use conditional constructs
My Answer is f(n) = n xor 1
You could do:
f(n) = n + 1 - 2 * (n % 2)
because
(n % 2) == 0 if 2 divides n and therefore f(n) = n + 1 - 0 and
(n % 2) == 1 if 2 does not divide n and therefore f(n) = n + 1 - 2 = n - 1
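A minimal Ruby sketch (the function names f_xor and f_arith are my own) checking both branch-free formulas on a few values:

def f_xor(n)
  n ^ 1                  # flips the lowest bit: even n -> n + 1, odd n -> n - 1
end

def f_arith(n)
  n + 1 - 2 * (n % 2)    # even: n + 1 - 0, odd: n + 1 - 2 = n - 1
end

(0..7).each { |n| puts "f(#{n}) = #{f_xor(n)} = #{f_arith(n)}" }

Both produce the same result for every n, with no conditional constructs.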

Understanding Big Oh Careercup Cracking Coding Interview

There is an example question in the CareerCup book Cracking the Coding Interview (CCIS).
Print all positive integer solutions to the equation
a^3 + b^3 = c^3 + d^3
where a, b, c, and d are integers between 1 and 1000.
They gave three solutions, two of which I will show here.
Example 1
n = 1000
for a from 1 to n
    for b from 1 to n
        for c from 1 to n
            for d from 1 to n
                if a^3 + b^3 == c^3 + d^3
                    print a, b, c, d
Example 2
n = 1000
for a from 1 to n
    for b from 1 to n
        for c from 1 to n
            d = pow(a^3 + b^3 - c^3, 1/3)   // will round to int
            if a^3 + b^3 == c^3 + d^3       // validate that the value works
                print a, b, c, d
The book states that the first solution is O(n^4) and the second one is O(n^3). My question is: why are they ignoring the complexity of pow?
You can say that they are not ignoring it, but assuming that the complexity is O(1). The justification can be the following:
You need a function that calculates the cube root (as an integer) of some number between 0 and 1000^3. How would you implement it? An easy way is binary search (better ways exist, such as numerical methods). How many iterations will it take? About log2(1000^3), which is approximately 30. So it is effectively O(1).
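For illustration, here is a Ruby sketch of that binary search (the helper name icbrt is my own, not from the book); for inputs up to 1000^3 it takes about 30 iterations:

# Integer cube root of x by binary search over [0, x].
def icbrt(x)
  lo, hi = 0, x
  iterations = 0
  while lo < hi
    mid = (lo + hi + 1) / 2
    iterations += 1
    if mid**3 <= x
      lo = mid
    else
      hi = mid - 1
    end
  end
  [lo, iterations]
end

root, iterations = icbrt(1000**3)
puts "cube root = #{root}, iterations = #{iterations}"   # 1000, about 30

Since the iteration count is bounded by a small constant over this input range, treating the cube root as O(1) is reasonable here.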
Big O expresses how the cost grows with n. The pow call, especially with the second argument being 1/3, does not grow with n; that is to say, pow is O(1). You can think of O(1) as an additive identity: O(n) + O(1) = O(n), just like 2 + 0 = 2.

Skiena the Algorithm Design Manual - Geometric Series Clarification

Picture taken from book.
That is an explanation of a geometric series from the book, which I do not understand.
The constant ratio is a, right?
So let's take the left-hand side (just the sum), for n = 5 and constant ratio a = 2.
So we will have this:
2^0 + 2^1 + 2^2 + 2^3 + 2^4 + 2^5 = 1 + 2 + 4 + 8 + 16 + 32 = 63
Now if I use the RHS,
a(a^(n+1) - 1)/(a - 1),
it gives 2(2^(5+1) - 1)/(2 - 1) = 126 for n = 5.
How can they be equal?
Also it says later on: 'when a > 1 the sum grows rapidly with each new term...' Is he talking about space complexity?
Because I do not get the big-theta notation. So for n = 5 and a = 2 it will take Big-Theta(2^6) = Big-Theta(64), i.e. 64 steps?
Here is some Ruby code:
n = 5
a = 2
sum = 0
for i in 0..n do
  sum = sum + a**i
end
puts sum # prints 63
I can see n+1 steps.
Any help understanding this please?
The formula in the book is wrong: there is an extra factor of a (for n = 0 the sum should be 1, not a).
"The sum grows rapidly" is just about the values of the sum; it does not describe the complexity of computing it.
