Simple while loop Big-O complexity

int a = 3;
while (a <= n) {
    a = a * a;
}
My guess is that its complexity is O(a-th root of n), as pictured here: http://www.mmoprophet.com/stuff/big-o.jpg
Is there such a thing?

That's not right. a can't be a part of the big-O formula since it's just a temporary variable.
Off the top of my head: if we take multiplication to be a constant-time operation, then the number of multiplications performed will be O(log log n). If you were multiplying by a constant every iteration, it would be O(log n); because you're multiplying by a growing number each iteration, there's another log.
Think of it as the number of digits doubling each iteration. How many times can you double the number of digits before you exceed the limit? The number of digits of n is log n, and you can double the starting digit count about log2 log n times.
As for the other aspect of the question, yes, O(a-th root of n) could be a valid complexity class for some constant a. It would more familiarly be written as O(n^(1/a)).
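As a quick sanity check of the O(log log n) claim (my sketch, not part of the original answer), count the squarings for a few values of n, using long to dodge 32-bit overflow. Doubling the number of digits of n should add roughly one iteration:

```java
public class SquaringCount {
    // Count how many times a = a * a runs before a exceeds n.
    static int iterations(long n) {
        long a = 3;
        int count = 0;
        while (a <= n) {
            a = a * a;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations(100L));         // n has 3 digits
        System.out.println(iterations(10_000L));      // 5 digits
        System.out.println(iterations(100_000_000L)); // 9 digits
    }
}
```

Going from 3 to 5 to 9 digits (roughly doubling each time) adds only one iteration per step, which is the doubling-of-digits picture from the answer above.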

Well, you could actually go into an infinite loop!
Assume 32-bit integers:
Try this:
int a = 3;
int n = 2099150850;
while (a <= n) {
    a = a * a;
}
But assuming no integer overflow occurs, the other posters are right: it is O(log log n) if you assume O(1) multiplication.
An easy way to see this is:
x_{n+1} = x_n^2.
Take x_1 = a.
Taking logs, let
t_n = log x_n.
Then
t_{n+1} = 2 t_n.
I will leave the rest to you.
It becomes more interesting if you consider the complexity of multiplying two k-digit numbers.
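Filling in the step left as an exercise (my working, not the original answer's; I write N for the loop bound to keep it distinct from the index n):

```latex
t_{n+1} = 2\,t_n,\quad t_1 = \log a
\;\Longrightarrow\; t_n = 2^{\,n-1}\log a .
% The loop stops once x_n > N, i.e. t_n > \log N:
2^{\,n-1}\log a > \log N
\;\Longleftrightarrow\;
n > 1 + \log_2\frac{\log N}{\log a},
% so the number of iterations is O(\log\log N).
```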

The number of loop iterations is O(log log n). The loop body itself does an assignment (which we can consider constant time) and a multiplication. The best known multiplication algorithm so far has a step complexity of O(n log n · 2^(O(log* n))), so, all together, the complexity is something like:
O(log log n · n log n · 2^(O(log* n)))

After the i-th iteration (i = 0, 1, ...) the value of a is 3^(2^i). There will be O(log log n) iterations, and assuming arithmetic operations take O(1), this is the time complexity.

Related

Running time of this algorithm for calculating square root

I have this algorithm
int f(int n) {
    int k = 0;
    while (true) {
        if (k == n * n) return k;
        k++;
    }
}
My friend says that it cost O(2^n). I don’t understand why.
The input is n; the while loop iterates n*n times, which is n^2, hence the complexity is O(n^2).
This is based on your source code, not on the title.
For the title, this link may help: complexity of finding the square root.
From the answer by Emil Jeřábek I quote:
The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton's iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n · 2^(O(log* n)), provided by Fürer's algorithm.
You may look at the interesting entry for sqrt on Wikipedia.
In my opinion the time cost is O(n^2): this function returns k = n^2 after n^2 iterations of the while loop.
I'm Manuel's friend.
What you don't consider is that the input n has length log(n). The time complexity would be n^2 if we considered the input length to be n, but it's not.
So let x = log(n) (the length of the input); then n = 2^x.
Now, if we calculate the cost as a function of n we get n^2, but n equals 2^x and we need to calculate the cost as a function of x (because time complexity is measured in the length of the input, not in its value), so:
O(f) = n^2 = (2^x)^2 = 2^(2x), which is exponential in the input length x.
"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)
here's another explanation where the algorithm in question is the primality test : Why naive primality test algorithm is not polynomial
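To see the pseudo-polynomial blow-up concretely, here is a small sketch (mine, not from the answers) that counts the loop iterations of f for a few inputs; the count grows like n^2 = (2^x)^2 = 4^x, exponential in the bit length x of n:

```java
public class PseudoPoly {
    // Count iterations of the k == n*n loop: n*n + 1 checks in total.
    static long iterations(int n) {
        long count = 0;
        int k = 0;
        while (true) {
            count++;
            if (k == n * n) return count;
            k++;
        }
    }

    public static void main(String[] args) {
        for (int n : new int[]{15, 255, 4095}) {
            int x = 32 - Integer.numberOfLeadingZeros(n); // bit length of n
            // iterations grows like n^2 = 4^x: exponential in x
            System.out.println("x=" + x + " iterations=" + iterations(n));
        }
    }
}
```

Each additional 4 bits of input length multiplies the iteration count by roughly 4^4 = 256, even though the count looks "only quadratic" in the value n.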

what is the time complexity of this method which checks if a number k can be represented as n^p

Time complexity of the method below? I'm calculating it as log(n) * log(n) = log(n)
public int isPower(int A) {
    if (A == 1)
        return 1;
    for (int i = (int) Math.sqrt(A); i > 1; i--) {
        int p = A;
        while (p % i == 0) {
            p = p / i;
        }
        if (p == 1)
            return 1;
    }
    return 0;
}
Worst-case complexity:
for(..) runs sqrt(A) times.
Then while(..) depends on the prime factorization of A = p_1^e_1 * p_2^e_2 * ... * p_n^e_n, so it is Max(e_1, e_2, ..., e_n) in the worst case, or roughly Max(log_p_1(A), log_p_2(A), ...).
At most, while(..) will execute roughly log(A) times.
So the total rough worst-case complexity = sqrt(A) * log(A), leaving out constant factors.
Worst-case complexity happens for numbers A which are products of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ...
Average-case complexity:
Given that numbers which are products of different integers are more numerous, in a given range, than numbers which are powers of a single integer, a number chosen at random is more likely to be a product of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ... Thus the average-case complexity is roughly the same as the worst-case complexity, i.e. sqrt(A) * log(A).
Best-case complexity:
Best-case complexity happens when the number A is indeed a power of a single integer/prime ie A = n^e. Then the algorithm in this case takes less time. I leave it as an exercise to compute best-case complexity.
PS. Another way to see this: to check whether a number is a power of a single prime/integer, one effectively has to compute its prime factorization (which is what this algorithm does), and that is of the same complexity (see for example the complexity of factoring by trial division).
SO should have mathjax support as cs.stackexchange has :p !
You iterate from sqrt(A) down to 2 and then try to factorize. For a prime number your code iterates sqrt(A) times; that is its best case. If the number is 2^30 then your code executes
sqrt(2^30) * 30, which is sqrt(n) * log(n) times.
So your code's complexity is sqrt(n) * log(n).
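As a rough empirical check (my sketch, not the answerers'), one can instrument the method to count trial divisions and compare the count against the sqrt(A) * log2(A) bound derived above:

```java
public class IsPowerCount {
    // Same algorithm as in the question, but counting trial divisions.
    static long countOps(int A) {
        long ops = 0;
        if (A == 1) return ops;
        for (int i = (int) Math.sqrt(A); i > 1; i--) {
            int p = A;
            while (p % i == 0) {
                p = p / i;
                ops++;
            }
            ops++; // the failed % test that ends the while loop
            if (p == 1) return ops;
        }
        return ops;
    }

    public static void main(String[] args) {
        int A = 1 << 20; // 2^20: a perfect power, found at i = 1024 immediately
        double bound = Math.sqrt(A) * (Math.log(A) / Math.log(2));
        System.out.println(countOps(A) + " ops, bound ~" + (long) bound);
        System.out.println(countOps(7919) + " ops for a prime"); // full sqrt scan
    }
}
```

Both counts stay well under sqrt(A) * log2(A), consistent with the worst-case analysis.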

Big O, what is the complexity of summing a series of n numbers?

I always thought the complexity of:
1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).
But today I read in a textbook: "by the formula for the sum of the first n integers, this is n(n+1)/2", and then: (1/2)n^2 + (1/2)n, and thus O(n^2).
What am I missing here?
The big O notation can be used to determine the growth rate of any function.
In this case, it seems the book is not talking about the time complexity of computing the value, but about the value itself. And n(n+1)/2 is O(n^2).
You are confusing complexity of runtime and the size (complexity) of the result.
The running time of summing, one after the other, the first n consecutive numbers is indeed O(n).¹
But the complexity of the result, that is, the size of "sum from 1 to n" = n(n + 1)/2, is O(n^2).
¹ But for arbitrarily large numbers this is simplistic, since adding large numbers takes longer than adding small numbers. For a precise runtime analysis, you indeed have to consider the size of the result. However, this isn't usually relevant in programming, nor even in purely theoretical computer science. In both domains, summing numbers is usually considered an O(1) operation unless explicitly required otherwise by the domain (i.e. when implementing an operation for a bignum library).
n(n+1)/2 is the quick way to sum a consecutive sequence of N integers (starting from 1). I think you're confusing an algorithm with big-oh notation!
If you thought of it as a function, then the big-oh complexity of this function is O(1):
public int sum_of_first_n_integers(int n) {
    return (n * (n + 1)) / 2;
}
The naive implementation would have big-oh complexity of O(n).
public int sum_of_first_n_integers(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    return sum;
}
Even just looking at each cell of a single n-by-n matrix is O(n^2), since the matrix has n^2 cells.
There really isn't a complexity of a problem, but rather a complexity of an algorithm.
In your case, if you choose to iterate through all the numbers, the complexity is, indeed, O(n).
But that's not the most efficient algorithm. A more efficient one is to apply the formula - n*(n+1)/2, which is constant, and thus the complexity is O(1).
So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:
On each concatenation, a new copy of the string is created, and the
two strings are copied over, character by character. The first
iteration requires us to copy x characters. The second iteration
requires copying 2x characters. The third iteration requires 3x, and
so on. The total time therefore is O(x + 2x + ... + nx). This reduces
to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... n equals n(n+1)/2
or, O(n²).)
For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n, or in other words that n² is happening, and that dominates. This is why ultimately O(xn²) is just O(n²) -- the x is sort of a red herring.
You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).
If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.
None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
There's a difference between summing N arbitrary integers and summing N that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that they can be divided into pairs with a common sum, e.g. 1+N = 2+(N-1) = 3+(N-2) = ... = N + 1. So that's N+1, N/2 times. (If there's an odd number, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)
That is not O(N^2), though. It's just a formula that uses N^2, actually O(1). O(N^2) would mean (roughly) that the number of steps to calculate it grows like N^2, for large N. In this case, the number of steps is the same regardless of N.
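The pairing identity in that answer can be verified directly; here is a small illustration of mine (the loop below is of course O(N) itself; only the closed form N(N+1)/2 evaluates in O(1)):

```java
public class PairSum {
    // Sum 1..n by pairing 1+n, 2+(n-1), ...: each pair sums to n + 1.
    static long pairSum(int n) {
        long total = 0;
        int lo = 1, hi = n;
        while (lo < hi) {
            total += lo + hi; // each pair contributes n + 1
            lo++;
            hi--;
        }
        if (lo == hi) total += lo; // odd n leaves an unpaired middle element
        return total;
    }

    public static void main(String[] args) {
        System.out.println(pairSum(100)); // same as 100 * 101 / 2 = 5050
        System.out.println(pairSum(7));   // odd case: same as 7 * 8 / 2 = 28
    }
}
```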
Adding the first n numbers:
Consider the algorithm:
Series_Add(n)
return n*(n+1)/2
This algorithm actually runs in O(|n|^2), where |n| is the length (in bits) of n and not its magnitude, simply because multiplying two numbers, one of k bits and the other of l bits, takes O(k*l) time.
Careful
Considering this algorithm:
Series_Add_pseudo(n):
sum=0
for i= 1 to n:
sum += i
return sum
which is the naive approach, you might assume that this algorithm runs in linear time, or generally in polynomial time. This is not the case.
The representation (length) of the input n is O(log n) bits (in any base except unary), and although the algorithm runs linearly in the magnitude of n, it runs exponentially (2^(log n)) in the length of the input.
This is actually the pseudo-polynomial algorithm case. It appears to be polynomial but it is not.
You could even try it in python (or any programming language), for a medium length number like 200 bits.
Applying the first algorithm, the result comes in a split second; applying the second, you have to wait a century...
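A quick way to convince yourself (my sketch, using Java's BigInteger rather than Python): the closed form matches the loop for small n, while for a 200-bit n only the closed form is feasible:

```java
import java.math.BigInteger;

public class SeriesAdd {
    // Closed form n * (n + 1) / 2: polynomial in the bit length of n.
    static BigInteger closedForm(BigInteger n) {
        return n.multiply(n.add(BigInteger.ONE)).divide(BigInteger.valueOf(2));
    }

    // Naive loop: linear in the *value* of n, exponential in its bit length.
    static BigInteger naive(long n) {
        BigInteger sum = BigInteger.ZERO;
        for (long i = 1; i <= n; i++) {
            sum = sum.add(BigInteger.valueOf(i));
        }
        return sum;
    }

    public static void main(String[] args) {
        // Small n: both agree.
        System.out.println(closedForm(BigInteger.valueOf(1000)).equals(naive(1000)));
        // 200-bit n: the closed form is instant; the loop would never finish.
        BigInteger big = BigInteger.ONE.shiftLeft(200);
        System.out.println(closedForm(big).bitLength() + "-bit result");
    }
}
```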
1 + 2 + 3 + ... + n is always less than n + n + ... + n (n times), and you can rewrite n + n + ... + n as n*n.
f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c·g(n) for all n ≥ n0,
since Big-Oh represents an upper bound of the function, where the function f(n) here is the sum of the natural numbers up to n.
Now, talking about time complexity: for small numbers, the addition is a constant amount of work. But n could be humongous; you can't deny that possibility.
Adding integers can take a linear amount of time when n is really large. So you can say that addition is an O(n) operation and you're adding n items, which alone would make it O(n^2). Of course it will not always take n^2 time, but it's the worst case when n is really large (upper bound, remember?).
Now, let's say you directly try to achieve it using n(n+1)/2. Just one multiplication and one division, this should be a constant operation, no?
No.
"Using a natural size metric of the number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive." (Wikipedia)
That again leaves us with O(n^2).
It's equivalent to O(n^2), because it is equivalent to (n^2 + n)/2, and in Big-O you ignore constant factors; so even though the squared n is divided by 2, you still have quadratic growth.
Think about O(n) and O(n/2): we similarly don't distinguish the two. O(n/2) is just O(n) for a smaller n, but the growth rate is still linear.
What that means is that as n increases, if you were to plot the number of operations on a graph, you would see an n^2 curve appear.
You can see that already:
when n = 2 you get 3
when n = 3 you get 6
when n = 4 you get 10
when n = 5 you get 15
when n = 6 you get 21
And if you plot those values, you see that the curve is similar to that of n^2: you will have a smaller number at each y, but the curve grows like it. Thus we say the magnitude is the same, because the time complexity will grow like n^2 as n grows bigger.
The sum of a series of n natural numbers can be found in two ways. The first way is by adding all the numbers in a loop; in this case the algorithm is linear and the code will be like this:
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
return sum;
It is analogous to 1 + 2 + 3 + 4 + ... + n. In this case the complexity of the algorithm is measured as the number of addition operations performed, which is O(n).
The second way of finding the sum of a series of n natural numbers is the direct formula n*(n+1)/2. This formula uses multiplication instead of repeated addition. Multiplication does not have linear time complexity: there are various algorithms for multiplication with time complexity ranging from O(N^1.45) to O(N^2) in the number of digits N, so in practice the cost of multiplication depends on the processor's architecture. For analysis purposes, though, the time complexity of (schoolbook) multiplication is considered to be O(N^2). Therefore, when one uses the second way to find the sum, the time complexity will be O(N^2).
Here the multiplication operation is not the same as the addition operation. Anyone with knowledge of computer organisation can easily understand the internal working of multiplication and addition: the multiplier circuit is more complex than the adder circuit and requires much more time to compute its result, so the time to compute the sum this way can't be treated as constant.

This algorithm does not have a quadratic run time right?

I recently had an interview and was given a small problem that I was to code up.
The problem was basically: find the duplicate in an array of length n, using constant space and O(n) time. Each element is in the range 1 to (n-1), and one value is guaranteed to be duplicated. This is what I came up with:
public int findDuplicate(int[] vals) {
    int indexSum = 0;
    int valSum = 0;
    for (int i = 0; i < vals.length; i++) {
        indexSum += i;
        valSum += vals[i];
    }
    return valSum - indexSum;
}
Then we got into a discussion about the runtime of this algorithm. The sum of the series from 0 to n is (n^2 + n)/2, which is quadratic. However, isn't the algorithm O(n) time? The number of operations is bounded by the length of the array, right?
What am I missing? Is this algorithm O(n^2)?
The fact that the sum of the integers from 0 to n is O(n^2) is irrelevant here.
Yes, you run through the loop exactly n times, which is O(n).
The big question is, what order of complexity are you assuming on addition?
If O(1) then yeah, this is linear. Most people will assume that addition is O(1).
But what if addition is actually O(b) (where b is the number of bits, and in our case b = log n)? If you are going to assume this, then this algorithm is actually O(n * log n) (adding n numbers, each needing log n bits to represent).
Again, most people assume that addition is O(1).
Algorithms researchers have standardized on the unit-cost RAM model, where words are Theta(log n) bits and operations on words are Theta(1) time. An alternative model where operations on words are Theta(log n) time is not used any more because it's ridiculous to have a RAM that can't recognize palindromes in linear time.
Your algorithm clearly runs in time O(n) and extra space O(1), since convention is for the default unit of space to be the word. Your interviewer may have been worried about overflow, but your algorithm works fine if addition and subtraction are performed modulo any number M ≥ n, as would be the case for two's complement.
tl;dr Whatever your interviewer's problem was is imaginary or rooted in an improper understanding of the conventions of theoretical computer science.
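A quick sketch of mine confirming the algorithm on a concrete input; note that valSum - indexSum remains correct even if the intermediate sums wrap around, since int arithmetic is modulo 2^32 and the true answer fits in an int:

```java
public class FindDuplicate {
    // Sum of values minus sum of indices isolates the duplicated element:
    // indexSum = 0+1+...+(n-1) equals the sum 1+...+(n-1) of the distinct values.
    static int findDuplicate(int[] vals) {
        int indexSum = 0;
        int valSum = 0;
        for (int i = 0; i < vals.length; i++) {
            indexSum += i;
            valSum += vals[i];
        }
        return valSum - indexSum;
    }

    public static void main(String[] args) {
        // Length 6, values 1..5 with 3 duplicated.
        System.out.println(findDuplicate(new int[]{1, 3, 4, 2, 5, 3})); // 3
    }
}
```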
You work on each on n cells one time each. Linear time.
Yes the algorithm is linear*. The result of valSum doesn't affect the running time. Take it to extreme, the function
int f(int[] vals) {
    return vals.length * vals.length;
}
gives n^2 in 1 multiplication. Obviously this doesn't mean f is O(n^2) ;)
(*: assuming addition is O(1))
The sum of i from i=0 to n is n*(n+1)/2 which is bounded by n^2 but that has nothing to do with running time... that's just the closed form of the summation.
The running time of your algorithm is linear, O(n), where n is the number of elements in your array (assuming the addition operation is a constant time operation, O(1)).
I hope this helps.
Hristo

Determining complexity of an integer factorization algorithm

I'm starting to study computational complexity, Big-O notation and the like, and I was tasked to write an integer factorization algorithm and determine its complexity. I've written the algorithm and it is working, but I'm having trouble calculating the complexity. The pseudocode is as follows:
DEF fact (INT n)
BEGIN
INT i
FOR (i -> 2 TO i <= n / i STEP 1)
DO
WHILE ((n MOD i) = 0)
DO
PRINT("%int X", i)
n -> n / i
DONE
DONE
IF (n > 1)
THEN
PRINT("%int", n)
END
What I attempted to do, I think, is extremely wrong:
f(x) = n-1 + n-1 + 1 + 1 = 2n
so
f(n) = O(n)
Which I think it's wrong because factorization algorithms are supposed to be computationally hard, they can't even be polynomial. So what do you suggest to help me? Maybe I'm just too tired at this time of the night and I'm screwing this all up :(
Thank you in advance.
This phenomenon is called pseudopolynomiality: a complexity that seems to be polynomial, but really isn't. If you ask whether a certain complexity (here, n) is polynomial or not, you must look at how the complexity relates to the size of the input. In most cases, such as sorting (which e.g. merge sort can solve in O(n lg n)), n describes the size of the input (the number of elements). In this case, however, n does not describe the size of the input; it is the input value. What, then, is the size of n? A natural choice would be the number of bits in n, which is approximately lg n. So let w = lg n be the size of n. Now we see that O(n) = O(2^(lg n)) = O(2^w) - in other words, exponential in the input size w.
(Note that O(n) = O(2^(lg n)) = O(2^w) is always true; the question is whether the input size is described by n or by w = lg n. Also, if n describes the number of elements in a list, one should strictly speaking count the bits of every single element in the list in order to get the total input size; however, one usually assumes that in lists, all numbers are bounded in size (to e.g. 32 bits)).
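For reference, the question's pseudocode translated into Java (my translation; it collects the factors into a list instead of printing them, so the result can be checked):

```java
import java.util.ArrayList;
import java.util.List;

public class Factorize {
    // Trial division, as in the question's pseudocode: O(sqrt(n)) divisions
    // in the worst case, which is exponential in the bit length of n.
    static List<Integer> factor(int n) {
        List<Integer> factors = new ArrayList<>();
        for (int i = 2; i <= n / i; i++) {
            while (n % i == 0) {
                factors.add(i);
                n = n / i;
            }
        }
        if (n > 1) factors.add(n); // remaining prime factor
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(factor(360));  // [2, 2, 2, 3, 3, 5]
        System.out.println(factor(7919)); // [7919] -- prime, the worst case
    }
}
```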
Use the fact that your algorithm is effectively recursive: if f(x) is the number of operations taken to factor x, and n is the first factor that is found, then f(x) = (n-1) + f(x/n). The worst case for any factoring algorithm is a prime number; since your loop only runs while i <= n/i, for a prime the complexity of your algorithm is O(sqrt(n)).
Factoring algorithms are 'hard' mainly because they are used on obscenely large numbers.
In big-O notation, n should be the size of the input, not the input value itself (as in your case). The size of the input here is lg(n) bits, so your algorithm is exponential in the input size.
