What is the time complexity of binary-to-decimal conversion?
I think that if there are K bits in a binary number, the time complexity would be O(K); and since we conventionally call the number of bits N, that makes it O(N).
Here is how I got this: I asked how many digits a decimal number needs in order to represent an N-bit binary number.
Setting 10^K - 1 = 2^N - 1 gives K = N log10(2), which gives TC = O(N).
Can someone clarify this?
Also, is there any chance to reduce this time complexity?
10^K - 1 = 2^N - 1
This is the starting point: the largest K-digit decimal number equals the largest N-bit binary number.
K = N log10(2)
The -1 was present on both left and right, so we add 1 to both sides, and we have:
10^K = 2^N now.
To get K out of 10^K, we take the base-10 logarithm of both sides. Since a logarithm is the power you need to raise the base to in order to get the argument, log10(10^K) yields K; on the right-hand side, the same logarithm yields N log10(2).
Since K, the number of decimal digits produced, is the quantity we are measuring, the complexity is directly related to N log10(2), which is N multiplied by a positive constant. As N grows without bound, the constant makes no difference analytically, so we can simplify N log10(2) to O(N).
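As a quick sanity check (a Python sketch, relying on arbitrary-precision integers), the decimal digit count of the largest N-bit number does track N log10(2):

import math

# Compare the actual decimal-digit count of the largest N-bit number,
# 2^N - 1, with the predicted K = N * log10(2).
for N in (8, 64, 1000):
    actual = len(str(2**N - 1))            # decimal digits of 2^N - 1
    predicted = N * math.log10(2)          # K from the derivation above
    print(N, actual, round(predicted, 1))  # e.g. N=1000: 302 vs 301.0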
A decimal digit can represent log(10)/log(2) binary digits, approximately 3.32. But since you want binary to decimal, assume you need 4 bits per decimal digit. In any event, your output size is O(N), where N is your input size.
Now, that tells you very little about the time complexity of the unstated algorithm.
For small N, we can just do a table lookup in O(1), but what about when N is large, e.g. 10^6?
A naive algorithm will process one bit of input at a time, doing a multiply/add on the output decimal. That's O(N) on the input, with each step costing O(N) because that's the cost to shift/add on the output. So O(N^2) is your time complexity.
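A sketch of that naive algorithm (a hypothetical helper; the decimal accumulator is kept as an explicit digit list so the O(N) cost of each step is visible):

def binary_to_decimal(bits):
    # For each input bit (MSB first), double the decimal accumulator
    # and add the bit. The accumulator is a list of decimal digits
    # (least significant first), so every double/add pass costs O(N);
    # over N input bits that is O(N^2) total.
    digits = [0]
    for bit in bits:
        carry = int(bit)
        for i in range(len(digits)):       # O(N) pass over the output
            carry, digits[i] = divmod(2 * digits[i] + carry, 10)
        if carry:                          # at most one new digit per pass
            digits.append(carry)
    return "".join(map(str, reversed(digits)))

print(binary_to_decimal("101101"))         # prints 45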
Related
import math

def dividing(n):
    while n != 1:
        n = math.floor(n / 2)
    return True
There's that code; my initial thought was that it just has a complexity of O(n), since n is just the input, no squares or anything. But when researching n/2, I found out that its big-O complexity is logarithmic, so now I'm confused: is the big-O complexity of this really O(log n)? If yes, why is that so?
Let's assume we count everything within the scope of the while loop as a basic operation.
while(invariant) { basic-operation }
Then, to find an upper asymptotic bound for the number of basic operations of the dividing function, we need an upper bound on, given an input n to the function, how many times the while loop executes.
We may simply reverse the loop, and then it becomes quite apparent (ignoring the flooring):
// The value of 'n' until termination of
// the while loop, in reverse (n here == n_start)
1 + 2 + 4 + ... + n
= 2^0 + 2^1 + 2^2 + ... + 2^(log2(n))
= sum_{i=0}^{i=log2(n_start)} 2^i
The sum expression runs i, which in our context is the loop variable, from 0 to log2(n) by steps of 1, meaning the while loop runs (ignoring flooring) log2(n) + 1 times, in turn meaning that O(log2(n)) provides an upper asymptotic bound for the time complexity of your function.
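An instrumented version of the loop (a sketch) confirms the bound empirically, since the iteration count equals floor(log2(n)):

import math

def dividing_count(n):
    # Same halving loop as above, but counting iterations.
    steps = 0
    while n != 1:
        n = n // 2                 # integer halving, same as math.floor(n/2)
        steps += 1
    return steps

for n in (2, 16, 1000, 10**6):
    print(n, dividing_count(n), math.floor(math.log2(n)))
    # the two counts match: 1, 4, 9, 19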
In this kind of situation, where the input is a single number, we cannot assume that arithmetic operations take constant time. Formally, the "input size" of the algorithm should be measured in bits, and it takes more time to divide a number that takes more bits to represent.
Your code actually works with floating-point numbers, which means the actual magnitude of the number is not directly related to the number of bits required to represent it (and we will have to ignore that real floating-point numbers have a fixed size in bits, otherwise the concepts of "input size" and "time complexity" simply make no sense). It is simpler to analyse a similar algorithm which works on integers; let's instead say the algorithm takes an integer and does n = n // 2 or n = n >> 1 instead of n = math.floor(n / 2). An integer n takes O(log n) bits to represent.
The actual amount of time depends on the algorithm used to perform the division; we can say that the time complexity is O(D(n) log n) where D(n) is the time complexity of the division algorithm. There are various division algorithms with different time complexities, which also depend on the time complexity of the multiplication algorithm used. On the other hand, since dividing by 2 is equivalent to a right-shift by 1 bit, if we write the algorithm to use a bit-shift (or if the division algorithm optimises it to a bit-shift in this special case), we will have D(n) = O(log n) because bit-shifting takes linear time in the number of bits. In that case, the original algorithm's time complexity would be O(log^2 n).
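For illustration, a sketch of that integer variant with the division written as a bit shift:

def dividing_int(n):
    # Integer version of the loop above: each step is a 1-bit right
    # shift, which costs O(log n) time on a (log n)-bit integer, so the
    # whole loop is O(log^2 n) once we charge for bit operations.
    while n != 1:
        n >>= 1                    # same as n // 2 for positive integers
    return True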
I'm taking an algorithms class and I repeatedly have trouble when I'm asked to analyze the runtime of code when there is a line with multiplication or division. How can I find big-theta of multiplying an n digit number with an m digit number (where n>m)? Is it the same as multiplying two n digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from n*n/100? or n*n/n?
You can always look it up here: Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), since 100 is a constant and length(count) is at most 3.
In general, multiplication of two numbers of n and m digits takes O(nm), and the same time is required for division. Here I assume we are talking about long division; there are many sophisticated algorithms which beat this complexity.
To make things clearer, I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find the complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result is a number D of n+m digits. Now consider D / C: here the complexity is O((n+m)p), and the overall complexity is the sum of the two, O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C; the complexity is O(mp) and we get an m-digit number E. Now we calculate A * E, with complexity O(nm). Again the overall complexity is the sum: O(mp + nm) = O(m(n+p)).
From the analysis you can see that it is beneficial to divide first. Of course, in a real-life situation you would have to account for numerical stability as well.
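A small sketch of that cost model (it only evaluates the two operation-count estimates from the digit lengths n, m, p, not the arithmetic itself):

def multiply_first_cost(n, m, p):
    # A*B costs n*m and yields an (n+m)-digit D; D/C then costs (n+m)*p.
    return n * m + (n + m) * p

def divide_first_cost(n, m, p):
    # B/C costs m*p and yields an (at most) m-digit E; A*E costs n*m.
    return m * p + n * m

# e.g. a 50-digit A, a 20-digit B, a 10-digit C:
print(multiply_first_cost(50, 20, 10))   # 1700
print(divide_first_cost(50, 20, 10))     # 1200: dividing first wins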
From Modern Computer Arithmetic:

Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.

When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).

Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).
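A sketch of that trivial cutting strategy, in decimal (a hypothetical helper; each piece * b line stands for one balanced n-by-n multiplication):

def mul_unbalanced(a, b, n):
    # Multiply a (about k*n digits) by b (n digits) by cutting a into
    # n-digit pieces: k balanced n-by-n products plus O(kn) shifts and
    # additions, i.e. M(kn, n) = k*M(n) + O(kn).
    base = 10 ** n
    result, weight = 0, 1
    while a:
        a, piece = divmod(a, base)       # peel off the low n digits of a
        result += piece * b * weight     # one balanced n-by-n multiply
        weight *= base                   # shift the next piece into place
    return result

print(mul_unbalanced(123456789012, 345, 3) == 123456789012 * 345)  # True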
I always thought the complexity of:
1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).
But today I read in a textbook: "by the formula for the sum of the first n integers, this is n(n+1)/2", which is (1/2)n^2 + (1/2)n, and thus O(n^2).
What am I missing here?
The big O notation can be used to determine the growth rate of any function.
In this case, it seems the book is not talking about the time complexity of computing the value, but about the value itself. And n(n+1)/2 is O(n^2).
You are confusing complexity of runtime and the size (complexity) of the result.
The running time of summing, one after the other, the first n consecutive numbers is indeed O(n).1
But the complexity of the result, that is, the size of "sum from 1 to n" = n(n + 1) / 2, is O(n^2).
1 But for arbitrarily large numbers this is simplistic since adding large numbers takes longer than adding small numbers. For a precise runtime analysis, you indeed have to consider the size of the result. However, this isn’t usually relevant in programming, nor even in purely theoretical computer science. In both domains, summing numbers is usually considered an O(1) operation unless explicitly required otherwise by the domain (i.e. when implementing an operation for a bignum library).
n(n+1)/2 is the quick way to sum a consecutive sequence of N integers (starting from 1). I think you're confusing an algorithm with big-oh notation!
If you thought of it as a function, then the big-oh complexity of this function is O(1):
public int sum_of_first_n_integers(int n) {
    return (n * (n + 1)) / 2;
}
The naive implementation would have big-oh complexity of O(n).
public int sum_of_first_n_integers(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
Even just looking at each cell of a single n-by-n matrix is O(n^2), since the matrix has n^2 cells.
There really isn't a complexity of a problem, but rather a complexity of an algorithm.
In your case, if you choose to iterate through all the numbers, the complexity is indeed O(n).
But that's not the most efficient algorithm. A more efficient one is to apply the formula - n*(n+1)/2, which is constant, and thus the complexity is O(1).
So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:
On each concatenation, a new copy of the string is created, and the
two strings are copied over, character by character. The first
iteration requires us to copy x characters. The second iteration
requires copying 2x characters. The third iteration requires 3x, and
so on. The total time therefore is O(x + 2x + ... + nx). This reduces
to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... + n equals n(n+1)/2, or O(n²).)
For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n, or in other words that n² is happening, and that dominates. This is why ultimately O(xn²) is just O(n²) -- the x is sort of a red herring.
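A sketch of that analysis in code, counting characters copied rather than timing anything (x is a hypothetical chunk size):

def chars_copied_by_naive_concat(n, x):
    # Model the quoted analysis: on iteration i, the whole (i*x)-char
    # string is recopied, so the total is x + 2x + ... + nx
    # = x * n(n+1)/2, i.e. O(x * n^2).
    total, length = 0, 0
    for _ in range(n):
        length += x           # the new string is x chars longer...
        total += length       # ...and all of it gets copied again
    return total

print(chars_copied_by_naive_concat(10, 5))   # 275 == 5 * 10 * 11 / 2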
You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).
If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.
None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
There's a difference between summing N arbitrary integers and summing N integers that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that the terms can be divided into pairs with a common sum: 1+N = 2+(N-1) = 3+(N-2) = ... = N+1. So that's N+1, taken N/2 times. (If there's an odd number of terms, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)
That is not O(N^2), though. It's just a formula that happens to contain N^2; evaluating it is actually O(1). O(N^2) would mean (roughly) that the number of steps to calculate it grows like N^2 for large N. In this case, the number of steps is the same regardless of N.
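A quick check of the pairing formula against one-by-one addition (a Python sketch):

def gauss_sum(n):
    # (N + 1), taken N/2 times: constant work no matter how big n is.
    return n * (n + 1) // 2

for n in (5, 6, 100):                            # odd and even cases
    assert gauss_sum(n) == sum(range(1, n + 1))  # matches the O(n) loop
print(gauss_sum(100))                            # 5050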
Adding the first n numbers:
Consider the algorithm:
Series_Add(n)
    return n * (n + 1) / 2
This algorithm indeed runs in O(|n|^2), where |n| is the length (in bits) of n and not its magnitude, simply because multiplying two numbers, one of k bits and the other of l bits, runs in O(k*l) time.
Careful
Considering this algorithm:
Series_Add_pseudo(n):
    sum = 0
    for i = 1 to n:
        sum += i
    return sum
which is the naive approach, you might assume that this algorithm runs in linear time, or generally in polynomial time. This is not the case.
The input representation (length) of n is O(log n) bits (in any base except unary), and although the algorithm runs linearly in the magnitude of n, it runs exponentially (2^(log n)) in the length of the input.
This is actually the pseudo-polynomial algorithm case. It appears to be polynomial but it is not.
You can even try it in Python (or any programming language) with a medium-length number of, say, 200 bits.
With the first algorithm the result comes in a split second; with the second, you would have to wait a century...
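You can see the gap directly with a sketch like this (the loop is only run at a size where it still finishes; at 200 bits it never would):

import time

def series_add_formula(n):
    return n * (n + 1) // 2       # polynomial in the bit-length of n

def series_add_loop(n):
    total = 0
    for i in range(1, n + 1):     # runs n = 2^(bits) times:
        total += i                # exponential in the input length
    return total

n = 2**22                         # a mere 23-bit input
t0 = time.perf_counter(); series_add_formula(n); t1 = time.perf_counter()
series_add_loop(n);               t2 = time.perf_counter()
print(t1 - t0, t2 - t1)           # the formula is near-instant; the loop
                                  # already lags, and a 200-bit n would
                                  # never finish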
1+2+3+...+n is always less than n+n+n+...+n (n times), and you can rewrite n+n+...+n as n*n.
f(n) = O(g(n)) if there exists a positive integer n0 and a positive
constant c, such that f(n) ≤ c * g(n) ∀ n ≥ n0
Big-Oh represents an upper bound on the function; here, f(n) is the sum of the natural numbers up to n.
Now, talking about time complexity: for small numbers, an addition is a constant amount of work, but n could be humongous; you can't rule that out.
Adding such integers can take a linear amount of time (in their length) when n is really large. So you can say that an addition is an O(n) operation and you're adding n items, and that alone would make it O(n^2). Of course it will not always take n^2 time, but it's the worst case when n is really large (upper bound, remember?).
Now, let's say you try to compute the sum directly using n(n+1)/2. Just one multiplication and one division; that should be a constant-time operation, no?
No.
Using a natural size metric of the number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. (Wikipedia)
That again leaves us at O(n^2).
It's O(n^2) because n(n+1)/2 equals (n^2 + n)/2, and in big-O you ignore constant factors; so even though the n^2 term is divided by 2, you still have quadratic growth.
Think about O(n) versus O(n/2): we similarly don't distinguish the two. O(n/2) is just O(n) scaled down, and the growth rate is still linear.
What that means is that as n increases, if you were to plot the number of operations on a graph, you would see an n^2 curve appear.
You can see that already:
when n = 2 you get 3
when n = 3 you get 6
when n = 4 you get 10
when n = 5 you get 15
when n = 6 you get 21
And if you plot these values, you see that the curve is similar to that of n^2: at each point the value is smaller, but the shape is the same. Thus we say the order of magnitude is the same, because the time complexity grows similarly to n^2 as n gets bigger.
The sum of the series of the first n natural numbers can be found in two ways. The first way is by adding all the numbers in a loop; in this case the algorithm is linear and the code looks like this:
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
return sum;
This is analogous to 1+2+3+4+...+n. In this case, the complexity of the algorithm is counted as the number of times the addition operation is performed, which is O(n).
The second way of finding the sum of the first n natural numbers is the direct formula n*(n+1)/2. This formula uses multiplication instead of repetitive addition, and multiplication does not have constant time complexity on arbitrarily long numbers. There are various algorithms for multiplication with time complexities ranging from about O(N^1.46) to O(N^2), so in practice the cost of multiplication depends on the algorithm and the processor's architecture; for analysis purposes, the time complexity of schoolbook multiplication is taken as O(N^2). Therefore, when one uses the second way to find the sum, the time complexity will be O(N^2) in the number of digits.
Multiplication is not the same operation as addition here. Anyone with knowledge of computer organisation can easily understand the internal workings of the multiplication and addition operations: a multiplication circuit is more complex than an adder circuit and requires much more time to compute its result. So the time complexity of summing the series this way can't be constant.
I'm starting to study computational complexity, big-O notation and the like, and I was tasked to write an integer factorization algorithm and determine its complexity. I've written the algorithm and it is working, but I'm having trouble calculating the complexity. The pseudocode is as follows:
DEF fact (INT n)
BEGIN
    INT i
    FOR (i -> 2 TO i <= n / i STEP 1)
    DO
        WHILE ((n MOD i) = 0)
        DO
            PRINT("%int X", i)
            n -> n / i
        DONE
    DONE
    IF (n > 1)
    THEN
        PRINT("%int", n)
END
What I attempted to do, I think, is extremely wrong:
f(x) = n-1 + n-1 + 1 + 1 = 2n
so
f(n) = O(n)
which I think is wrong, because factorization algorithms are supposed to be computationally hard; they can't even be polynomial. So what do you suggest? Maybe I'm just too tired at this time of night and I'm screwing this all up :(
Thank you in advance.
This phenomenon is called pseudopolynomiality: a complexity that seems to be polynomial, but really isn't. If you ask whether a certain complexity (here, n) is polynomial or not, you must look at how the complexity relates to the size of the input. In most cases, such as sorting (which e.g. merge sort can solve in O(n lg n)), n describes the size of the input (the number of elements). In this case, however, n does not describe the size of the input; it is the input value. What, then, is the size of n? A natural choice would be the number of bits in n, which is approximately lg n. So let w = lg n be the size of n. Now we see that O(n) = O(2^(lg n)) = O(2^w) - in other words, exponential in the input size w.
(Note that O(n) = O(2^(lg n)) = O(2^w) is always true; the question is whether the input size is described by n or by w = lg n. Also, if n describes the number of elements in a list, one should strictly speaking count the bits of every single element in the list in order to get the total input size; however, one usually assumes that in lists, all numbers are bounded in size (to e.g. 32 bits)).
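To make the pseudopolynomiality concrete, here is a sketch that counts the trial divisions your algorithm performs on prime inputs of growing bit-length w (primes chosen ad hoc; the counts grow roughly like 2^(w/2)):

def trial_division_steps(n):
    # Count loop iterations of the trial-division factoring above.
    steps, i = 0, 2
    while i <= n // i:            # i.e. i <= sqrt(n)
        while n % i == 0:
            n //= i
        steps += 1
        i += 1
    return steps

# primes of about 10, 20 and 30 bits:
for p in (1009, 1000003, 1000000007):
    print(p.bit_length(), trial_division_steps(p))
    # 10 bits -> 30 steps, 20 -> 999, 30 -> 31621: about x32 per 10 bits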
Note that your algorithm can be viewed recursively: if f(x) is the number of operations taken to factor x, and n is the first factor found, then f(x) = (n-1) + f(x/n). The worst case for any trial-division algorithm is a prime number; since your loop only runs while i <= n/i, i.e. up to √n, factoring a prime costs O(√n) arithmetic operations, which is still exponential in the number of bits of n.
Factoring algorithms are 'hard' mainly because they are used on obscenely large numbers.
In big-O notation, n is the size of input, not the input itself (as in your case). The size of the input is lg(n) bits. So basically your algorithm is exponential.
What's the complexity of a recursive program to find the factorial of a number n? My hunch is that it might be O(n).
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).
You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.
If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
Assuming you're talking about the most naive factorial algorithm ever:
factorial(n):
    if (n = 0) then return 1
    otherwise return n * factorial(n-1)
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once every time it decrements the value n, and it decrements the value n until it reaches 0, meaning the function is called recursively n times. This is assuming, of course, that both decrementation and multiplication are constant operations.
Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.
However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.
If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
If you want to analyze the complexity of the easy algorithm for computing N!
factorial(N)
    f = 1
    for i = 2 to N
        f = f * i
    return f
then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:
sum_{k=1}^{N} k (log k)^2  <=  (log N)^2 * sum_{k=1}^{N} k  =  O(N^2 (log N)^2)
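A quick numeric check of those size estimates (a sketch using Python's math.factorial):

import math

# Check the size estimate: k! takes on the order of k log k bits.
for k in (10, 100, 1000):
    bits = math.factorial(k).bit_length()   # actual size of k!
    estimate = k * math.log2(k)             # the O(k log k) bound
    print(k, bits, round(estimate))         # same order of growth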
You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen which takes time O(n*log(n)*log(log(n))) for 2 n-bit numbers.
The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
The time-complexity of recursive factorial would be:
factorial(n) {
    if (n = 0)
        return 1
    else
        return n * factorial(n-1)
}
So the time complexity satisfies the recurrence:

T(n) = T(n-1) + 3    (3 for the constant operations in each call: the
                      multiplication, the subtraction, and the check on
                      the value of n)
     = T(n-2) + 6    (second recursive call)
     = T(n-3) + 9    (third recursive call)
     .
     .
     .
     = T(n-k) + 3k

until k = n. Then:

T(n) = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n
In Big-Oh notation, T(n) is directly proportional to n; therefore, the time complexity of recursive factorial is O(n).
Since each recursive call occupies a stack frame until the recursion bottoms out, and the calls nest n levels deep (with no other extra space used), the space complexity is also O(n).
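A sketch that exercises the recursion depth makes that O(n) space cost concrete (CPython's default recursion limit is about 1000, so deeper inputs need it raised):

import sys

def factorial(n):
    # Recursive factorial: each call adds one stack frame, and the
    # recursion goes n levels deep, hence O(n) space.
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))                    # 3628800
sys.setrecursionlimit(10_000)           # without this, factorial(5000)
print(factorial(5000).bit_length())     # raises RecursionError; the
                                        # result is roughly 54000 bits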