What is the time complexity to calculate 2^5000?

What is the time complexity to calculate 2^5000?
I approached it by recursion, but that leads to O(N), where N is the exponent. Is there any way to reduce this time complexity?

I think you are interested in the general approach, not only in this specific example.
You can calculate the N-th integer power using O(log N) multiplications with the exponentiation-by-squaring approach.
But note that the number 2^N consists of about N binary digits (bits), so simply writing it to memory is already an O(N) operation.
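A minimal sketch of exponentiation by squaring in Python (illustrative only; Python's built-in `pow` already uses this technique internally):

```python
def power(base, exp):
    """Compute base**exp using O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp & 1:       # low bit set: fold the current square into the result
            result *= base
        base *= base      # square the base for the next bit of the exponent
        exp >>= 1
    return result

print(power(2, 5000) == 2**5000)  # True
```

Note that the O(log N) count refers to multiplications; each multiplication on numbers with ~N bits is itself not a constant-time operation, which is why the overall cost is still at least O(N).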

Related

Algorithm and Complexity

What does it mean when we say that an algorithm X is asymptotically more efficient than Y?
We consider the growth of the algorithm in terms of input size, but I am not getting the concept properly.
Saying that X is asymptotically more efficient than Y means that X's cost grows more slowly than Y's as the input size grows, so X wins for all sufficiently large inputs.
The growth of an algorithm shows up when we use containers such as arrays, stacks, queues, and other data structures. If an array's size N is taken from the user, the array takes O(N) space.
In terms of time complexity, if there is a loop in the program that runs n times, it contributes O(n) time complexity.
Space and time are the two main attributes used when judging the growth of any algorithm.

What would be the big O notation for this function?

What would be the worst-case time complexity, in big O notation, for the following pseudocode? (Assume the function call is O(1).) I'm very new to big O notation so I'm unsure of an answer, but I was thinking O(log(n)) because the while loop variable is multiplied by 2 each time. Or would that just be O(log(log(n)))? Or am I wrong on both counts? Any input/help is appreciated; I'm trying to grasp the concept of big O notation for worst-case time complexity, which I just started learning. Thanks!
i ← 1
while (i < n)
    doSomething(...)
    i ← i * 2
done
If i is doubling every time, then the number of times the loop will execute is the number of times you can double i before reaching n. Or to write it mathematically, if x is the number of times the loop will execute we have 2^x <= n. Solving for x gives x <= log_2(n). Therefore the number of times the loop will execute is O(log(n))
i grows exponentially, so the loop finishes in logarithmic time, O(log(n)).
O(log(n)) is correct when you want to state the time complexity of that algorithm in terms of the number n. However in computer science complexity is often stated in the size of the input, i.e. the number of bits. Then your algorithm would be linear, i.e. in O(k) where k is the input size.
Typically, other operations like addition are also said to be linear not logarithmic. A logarithmic complexity usually means that an algorithm does not have to consider the complete input. (E.g. binary search).
If this is part of an exercise or you want to discuss complexity of algorithms in a computer science context this difference is important.
Also, if one wants to be really pedantic: comparison on large integers is not a constant-time operation, and if you are considering the usual fixed-width integer types, the algorithm is basically constant time, as it only needs up to 32 or 64 iterations.
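A quick Python sketch (names are illustrative) that counts the loop's iterations and compares the count with ceil(log2(n)):

```python
import math

def count_iterations(n):
    """Count how many times the doubling loop body runs for a given n."""
    i, count = 1, 0
    while i < n:
        count += 1
        i *= 2        # i takes the values 1, 2, 4, 8, ...
    return count

# The iteration count matches ceil(log2(n)) for n >= 2.
for n in [2, 16, 1000, 10**6]:
    print(n, count_iterations(n), math.ceil(math.log2(n)))
```

Running this shows the count grows with log2(n), confirming the O(log(n)) analysis above.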

Amortized Runtime Cost for an algorithm alternating between O(n^2) & O(n^4)

If I implement an algorithm that runs in O(n^4) at the current timestep and then O(n^2) at the next,
is the complexity still max[O(n^4), O(n^2)]?
Is there a way to get a polynomial in the range [2, 4) for the complexity, i.e., something like O(n^2.83) on average?
How would I calculate the average runtime cost amortized from t=0...inf ? Is it just [O(n^2) + O(n^4)] / 2 ?
O(n^2) is negligible compared to O(n^4), since the quotient of the first over the second has limit zero as n grows indefinitely.
So your algorithm is just O(n^4).
Read the Wikipedia page on Big O notation and any good textbook on limits of polynomials.
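A quick numeric check in Python (illustrative): averaging the two alternating per-step costs and dividing by n^4 gives a ratio that approaches the constant 1/2, so the amortized cost per step is still Θ(n^4), not some intermediate exponent:

```python
def avg_cost(n):
    """Average per-step cost when steps alternate between n**2 and n**4 work."""
    return (n**2 + n**4) / 2

# The ratio avg_cost(n) / n**4 tends to 1/2 as n grows, i.e. a constant,
# so the average is Theta(n**4).
for n in [10, 100, 1000]:
    print(n, avg_cost(n) / n**4)
```

This is why amortization cannot produce an exponent strictly between 2 and 4 here: the n^4 term dominates the average.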

Time complexity in Big-O

What would be the BigO time of this algorithm
Input: Array A storing n >= 1 integers
Output: The sum of the elements at even cells in A
s = A[0]
for i = 2 to n-1 by increments of 2
{
    s = s + A[i]
}
return s
I think the operation count for this algorithm is F(n) = n*ceiling(n/2), but how do you convert that to big-O?
The time complexity for that algorithm would be O(n), since the amount of work it does grows linearly with the size of the input. The other way to look at it is that it loops over the input once (ignore the fact that it only looks at half of the values; that doesn't matter for big-O complexity).
The number of operations is not proportional to n*ceiling(n/2), but rather to n/2, which is O(n). Because of the meaning of big-O (which absorbs any constant coefficient), O(n) and O(n/2) are absolutely equivalent, so it is always written as O(n).
This is an O(n) algorithm since you look at ~n/2 elements.
Your algorithm will do N/2 iterations given that there are N elements in the array. Each iteration requires constant time to complete. This gives us O(N) complexity, and here's why.
Generally, the running time of an algorithm is a function f(x) of the size of the data. Saying that f(x) is O(g(x)) means that there exists some constant c such that f(x) <= c*g(x) for all sufficiently large values of x. It is easy to see how this applies in our case: if we assume that each iteration takes a unit of time, then f(N) <= (1/2)N, so c = 1/2 works.
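The pseudocode above translates directly to the following Python sketch; it performs about n/2 additions, which is O(n):

```python
def sum_even_cells(A):
    """Sum the elements at even indices of A (~n/2 iterations, O(n) time)."""
    s = A[0]
    for i in range(2, len(A), 2):   # i = 2, 4, 6, ... mirrors the pseudocode loop
        s += A[i]
    return s

print(sum_even_cells([1, 2, 3, 4, 5]))  # 1 + 3 + 5 = 9
```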

Integer multiplication algorithm using a divide and conquer approach?

As homework, I should implement integer multiplication on numbers of 1000 digits using a divide-and-conquer approach that works below O(n). What algorithm should I look into?
The Schönhage–Strassen algorithm is one of the fastest multiplication algorithms known. It takes O(n log n log log n) time.
Fürer's algorithm is the fastest large-number multiplication algorithm known so far and takes O(n · log n · 2^(O(log* n))) time.
I don't think any multiplication algorithm could take less than or even equal to O(n), since just reading the input and writing the ~n-digit output already takes that long. That's simply not possible.
Take a look at the Karatsuba algorithm. It involves a recursion step which you can easily model with divide-and-conquer.
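A minimal Karatsuba sketch in Python (illustrative; handles nonnegative integers only). It replaces the four recursive multiplications of the naive divide-and-conquer scheme with three, which brings the cost down to O(n^1.585):

```python
def karatsuba(x, y):
    """Multiply x and y using 3 recursive multiplications instead of 4."""
    if x < 10 or y < 10:              # base case: a single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)               # split x = a*p + b
    c, d = divmod(y, p)               # split y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd   # equals a*d + b*c
    return ac * p * p + mid * p + bd

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```

The key observation is that (a+b)(c+d) - ac - bd = ad + bc, so the middle term is recovered from a single extra product rather than two.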
