I have an algorithm and I am trying to figure out the best-case scenario (asymptotic notation) for this algorithm that solves the maximum subarray sum problem:
-- Pseudocode --
// Input: An n-element array A of numbers, indexed from 1 to n.
// Output: The maximum subarray sum of Array A.
Algorithm MaxSubSlow(A):
    m = 0
    for j = 1 to n do:
        for k = j to n do:
            s = 0
            for i = j to k do:
                s = s + A[i]
            if s > m then:
                m = s
    return m
Looking at the algorithm, it is easy to determine the worst-case scenario (each loop runs, in its worst case, n times), so the worst-case complexity class is O(N^3).
However, my textbook states that this algorithm also runs in Big-Omega(N^3) time; that is, the lower bound is equal to its upper bound. It does not offer an explanation as to why, though.
How would you formally calculate and prove this? Do you have to prove that for the algorithm, there is a subset of numbers (i, j, k) such that each loop in the algorithm will run at least n times? If so, how do you do that?
Intuitively, even if you change the input while keeping the same size n, the algorithm performs the same number of operations: the operation count depends only on the input size, not on the specific input. So the best-case and worst-case scenarios are the same.
The computation of Ω() is therefore exactly the same as that of O() in this case.
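To make this concrete, here is a small sketch (my own instrumentation, not from the original post) that counts the additions `s = s + A[i]` performed by MaxSubSlow for several inputs of the same size. The count is always n(n+1)(n+2)/6 regardless of the contents, which is exactly the Ω(n^3) lower bound.

```python
# Count the inner-loop additions of MaxSubSlow for a given input A.
def max_sub_slow_ops(A):
    n = len(A)
    m, ops = 0, 0
    for j in range(n):                 # j = 1..n in the pseudocode
        for k in range(j, n):          # k = j..n
            s = 0
            for i in range(j, k + 1):  # i = j..k
                s += A[i]
                ops += 1               # one addition per innermost step
            if s > m:
                m = s
    return m, ops

# Three different inputs of size n = 4: the operation count is 20 each
# time, which equals n(n+1)(n+2)/6 = 4*5*6/6.
for A in ([1, 2, 3, 4], [-5, 0, 7, -2], [0, 0, 0, 0]):
    print(max_sub_slow_ops(A))
```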
Related
I have this algorithm
int f(int n) {
    int k = 0;
    while (true) {
        if (k == n * n) return k;
        k++;
    }
}
My friend says that it costs O(2^n). I don't understand why.
The input is n; the while loop iterates n*n times, which is n^2, hence the complexity is O(n^2).
This is based on your source code, not on the title.
For the title, this link may help: complexity of finding the square root.
From the answer by Emil Jeřábek I quote:
The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton's iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n 2^{O(log* n)}, provided by Fürer's algorithm.
You may also look at the interesting entry for sqrt on Wikipedia.
In my opinion the time cost is O(n^2).
This function will return the value k = n^2 after n^2 iterations of the while loop.
I'm Manuel's friend,
what you don't consider is that the input n has length log(n)... the time complexity would be n^2 if we considered the input length to be n, but it is not.
So let x = log(n) be the length of the input; then n = 2^x, and so far all is correct.
Now, if we calculate the cost as a function of n, we get n^2. But n is equal to 2^x, and we need to calculate the cost as a function of x (because time complexity is measured in the length of the input, not its value), so:
O(f) = n^2 = (2^x)^2 = 2^(2x), which is exponential in the input length x.
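A quick sketch (my own, hypothetical instrumentation) makes the growth visible: count how many times the loop body of f runs as the bit-length x of the input grows.

```python
# The loop body runs n*n times; with n ~ 2^x this is 4^x, i.e.
# exponential in the bit-length x of the input.
def f(n):
    k = 0
    while True:
        if k == n * n:
            return k
        k += 1

for x in range(1, 6):
    n = 1 << x                  # n = 2^x, so n has x+1 bits
    print(x, f(n))              # the count quadruples each time x grows by 1
```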
"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)
Here's another explanation where the algorithm in question is the primality test: Why naive primality test algorithm is not polynomial
I am confused about pseudopolynomial time as compared to polynomial time:
input(n);
for (int i = 0; i < n; i++) {
    doStuff;
}
The runtime would be O(n) but writing out the number n takes x=O(log n) bits.
So, if we let x be the number of bits required to write out the input n, the runtime of this algorithm is actually O(2^x), which is not a polynomial in x.
Is this conclusion correct?
Edit: Look at this simple primality test:
function isPrime(n):
    for i from 2 to n - 1:
        if (n mod i) = 0, return false
    return true
The runtime would be O(n). But remember, the formal definition of time complexity talks about the complexity of the algorithm as a function of the number of bits of input.
Therefore, if we let x be the number of bits required to write out the input n, the runtime of this algorithm is actually O(2^x), which is not a polynomial in x.
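As a sketch (a Python transcription of the pseudocode above, with my own trial counter added), one can watch the number of trial divisions grow with the value of n, i.e. exponentially in its bit-length:

```python
# Naive primality test; also returns how many trial divisions were made.
def is_prime(n):
    trials = 0
    for i in range(2, n):       # i from 2 to n - 1
        trials += 1
        if n % i == 0:
            return False, trials
    return True, trials

# For a prime input the loop makes n - 2 trials: roughly 2^x for an
# x-bit number.
for n in (251, 65521):          # primes of 8 and 16 bits
    prime, trials = is_prime(n)
    print(n.bit_length(), trials)
```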
EDIT 2: I get all your points, but look at the knapsack problem.
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)
for j from 0 to W do:
    m[0, j] := 0
for i from 1 to n do:
    for j from 0 to W do:
        if w[i] > j then:
            m[i, j] := m[i-1, j]
        else:
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])
If you guys are right, it would mean that the knapsack problem has runtime O(nW), and therefore it runs in polynomial time!
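For reference, here is a runnable Python sketch of the DP above (the index-0 padding and the example values are my own additions). The two nested loops fill an (n+1) × (W+1) table, so the work is Θ(nW); but W is exponential in the number of bits needed to write it down, which is why this is pseudopolynomial rather than polynomial.

```python
# 0/1 knapsack dynamic program, transcribed from the pseudocode above.
def knapsack(v, w, W):
    n = len(v) - 1                      # items are v[1..n], w[1..n]
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if w[i] > j:
                m[i][j] = m[i - 1][j]
            else:
                m[i][j] = max(m[i - 1][j], m[i - 1][j - w[i]] + v[i])
    return m[n][W]

# Index 0 is padding so the arrays are 1-indexed like the pseudocode.
print(knapsack([0, 60, 100, 120], [0, 10, 20, 30], 50))  # -> 220
```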
Alex does 64 push-ups everyday.
and
Alex does 2^6 push-ups everyday.
If the above two lines mean the same to you, then the difference between O(n) and O(2^x) doesn't matter :)
O(2^x)
=> O(2^log_2(n))
=> n [as we know x^log_x(y) = y]
The formal definition of time complexity talks about the complexity of
the algorithm as a function of the number of bits of input.
Yes, you're right. But the idea of Big-O analysis is the growth rate of the algorithm as the input grows, not a precise count of exactly how many times my loop iterates.
For example, when n = 32 the complexity is O(2^5), and as n grows, say to n = 1048576, the complexity becomes O(2^20). So the complexity increases as the input increases.
n and 2^(log_2(n)) are just two ways of writing the same numeric amount. As long as the growth rate of the algorithm is linearly proportional to the growth rate of the input, the algorithm is linear, no matter whether we represent the input n as e^x or log(y).
Edit
Quoted from Wikipedia
The O(nW) complexity does not contradict the fact that the knapsack
problem is NP-complete, since W, unlike n, is not polynomial in
the length of the input to the problem. The length of the W input to
the problem is proportional to the number of bits in W, log W, not
to W itself.
Your first two snippets were about n, which obviously has polynomial growth.
Since x = ceil(log_2(n)), 2^x becomes 2^(log_2(n)), which is nothing but n (using a^(log_a(b)) = b).
Remember to analyse the runtime of an algorithm only in terms of your input variables, and not to do something fancy like counting the bits it would require, since (in this case, for example) the number of bits is itself a logarithm of the number!
Say I have following algorithm:
for (int i = 1; i < N; i *= 3) {
    sum++;
}
I need to calculate the complexity using tilde-notation, which basically means that I have to find a tilde-function so that when I divide the complexity of the algorithm by this tilde-function, the limit in infinity has to be 1.
I don't think there's any need to calculate the exact complexity, we can ignore the constants and then we have a tilde-complexity.
By looking at the growth of the index, I assume that this algorithm is
~ log N
But rather than having a binary logarithmic function, the base in this case is 3.
Does this matter for the exact notation? Is the order of growth exactly the same and thus can we ignore the base when using Tilde-notation? Do I approach this correctly?
You are right, the for loop executes ceil(log_3 N) times, where log_3 N denotes the base-3 logarithm of N.
No, you cannot ignore the base when using the tilde notation.
Here's how we can derive the time complexity.
We will assume that each iteration of the for loop costs C, for some constant C>0.
Let T(N) denote the number of executions of the for-loop. Since after j iterations the value of i is 3^j, the number of iterations we make is the smallest j for which 3^j >= N. Taking base-3 logarithms of both sides we get j >= log_3 N. Because j is an integer, j = ceil(log_3 N). Thus T(N) ~ ceil(log_3 N).
Let S(N) denote the time complexity of the for-loop. The "total" time complexity is thus C * T(N), because each of the T(N) iterations costs C; in tilde notation we can write this as S(N) ~ C * ceil(log_3 N).
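As a quick sanity check (my own sketch, not part of the answer), counting the iterations empirically matches ceil(log_3 N):

```python
# Count how many times the loop `for (i = 1; i < N; i *= 3)` executes.
def count(N):
    i, steps = 1, 0
    while i < N:
        i *= 3
        steps += 1
    return steps

for N in (2, 10, 28, 1000):
    print(N, count(N))   # 1, 3, 4, 7 -- i.e. ceil(log_3 N)
```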
I always thought the complexity of:
1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).
But today I read in a textbook: "by the formula for the sum of the first n integers, this is n(n+1)/2", which is (1/2)n^2 + (1/2)n, and thus O(n^2).
What am I missing here?
The big O notation can be used to determine the growth rate of any function.
In this case, it seems the book is not talking about the time complexity of computing the value, but about the value itself. And n(n+1)/2 is O(n^2).
You are confusing complexity of runtime and the size (complexity) of the result.
The running time of summing, one after the other, the first n consecutive numbers is indeed O(n).1
But the complexity of the result, that is the size of "sum from 1 to n" = n(n + 1) / 2, is O(n ^ 2).
1 But for arbitrarily large numbers this is simplistic since adding large numbers takes longer than adding small numbers. For a precise runtime analysis, you indeed have to consider the size of the result. However, this isn’t usually relevant in programming, nor even in purely theoretical computer science. In both domains, summing numbers is usually considered an O(1) operation unless explicitly required otherwise by the domain (i.e. when implementing an operation for a bignum library).
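To illustrate the footnote (a small sketch of my own): the value n(n+1)/2 is Θ(n^2), so its binary representation is about twice as long as that of n.

```python
# The sum of 1..n, computed by the closed formula.
def triangular(n):
    return n * (n + 1) // 2

n = 10 ** 9
s = triangular(n)
# The result needs roughly twice as many bits as the input value n.
print(n.bit_length(), s.bit_length())  # -> 30 59
```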
n(n+1)/2 is the quick way to sum a consecutive sequence of N integers (starting from 1). I think you're confusing an algorithm with big-oh notation!
If you thought of it as a function, then the big-oh complexity of this function is O(1):
public int sum_of_first_n_integers(int n) {
return (n * (n+1))/2;
}
The naive implementation would have big-oh complexity of O(n).
public int sum_of_first_n_integers(int n) {
int sum = 0;
for (int i = 1; i <= n; i++) {
sum += i;
}
return sum;
}
Even just looking at each cell of a single n-by-n matrix is O(n^2), since the matrix has n^2 cells.
There really isn't a complexity of a problem, but rather a complexity of an algorithm.
In your case, if you choose to iterate through all the numbers, the complexity is, indeed, O(n).
But that's not the most efficient algorithm. A more efficient one is to apply the formula - n*(n+1)/2, which is constant, and thus the complexity is O(1).
So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:
On each concatenation, a new copy of the string is created, and the
two strings are copied over, character by character. The first
iteration requires us to copy x characters. The second iteration
requires copying 2x characters. The third iteration requires 3x, and
so on. The total time therefore is O(x + 2x + ... + nx). This reduces
to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... n equals n(n+1)/2
or, O(n²).)
For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n, or in other words that n² is happening, and that dominates. This is why ultimately O(xn²) is just O(n²) -- the x is sort of a red herring.
You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).
If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.
None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
There's a difference between summing N arbitrary integers and summing N that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that they can be divided into pairs with a common sum, e.g. 1+N = 2+(N-1) = 3+(N-2) = ... = N + 1. So that's N+1, N/2 times. (If there's an odd number, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)
That is not O(N^2), though. It's just a formula that uses N^2, actually O(1). O(N^2) would mean (roughly) that the number of steps to calculate it grows like N^2, for large N. In this case, the number of steps is the same regardless of N.
Adding the first n numbers:
Consider the algorithm:
Series_Add(n):
    return n*(n+1)/2
this algorithm indeed runs in O(|n|^2), where |n| is the length (the bits) of n and not the magnitude, simply because multiplication of 2 numbers, one of k bits and the other of l bits runs in O(k*l) time.
Careful
Considering this algorithm:
Series_Add_pseudo(n):
    sum = 0
    for i = 1 to n:
        sum += i
    return sum
which is the naive approach. You might assume that this algorithm runs in linear time, or generally in polynomial time. It does not.
The input representation (length) of n is O(log n) bits (in any base except unary), and although the algorithm runs linearly in the magnitude of n, it runs exponentially (2^(log n)) in the length of the input.
This is the pseudo-polynomial algorithm case: it appears to be polynomial, but it is not.
You can even try it in Python (or any programming language) with a medium-length number of about 200 bits.
With the first algorithm the result comes in a split second; with the second, you would have to wait a century...
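A rough sketch of that experiment (my own code; the loop is run only on a small stand-in value, since iterating up to a 200-bit number is infeasible):

```python
import time

n = 2 ** 200                            # a ~200-bit number

start = time.perf_counter()
s = n * (n + 1) // 2                    # closed formula: a few bignum ops
print("formula took", time.perf_counter() - start, "seconds")

# The naive loop would need ~2**200 additions; we can only run it for a
# small n and note that its cost grows with the *value* of the input.
small_n = 10 ** 6
start = time.perf_counter()
loop_sum = sum(range(1, small_n + 1))
print("loop over 1..10^6 took", time.perf_counter() - start, "seconds")

assert loop_sum == small_n * (small_n + 1) // 2  # both methods agree
```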
1 + 2 + 3 + ... + n is always less than n + n + ... + n, n times; and you can rewrite n + n + ... + n as n*n.
f(n) = O(g(n)) if there exists a positive integer n0 and a positive
constant c, such that f(n) ≤ c * g(n) ∀ n ≥ n0
since Big-Oh represents an upper bound of the function, where the function f(n) is the sum of the natural numbers up to n.
Now, talking about time complexity: for small numbers, the addition is a constant amount of work. But n could be humongous; you can't deny that possibility.
Adding integers can take a linear amount of time when n is really large. So you can say that addition is an O(n) operation and you're adding n items, and that alone would make it O(n^2). Of course, it will not always take n^2 time, but it's the worst case when n is really large (upper bound, remember?).
Now, say you try to compute it directly using n(n+1)/2. Just one multiplication and one division; that should be a constant operation, no?
No.
Using a natural size metric of number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. (Wikipedia)
That again leaves us to O(n^2).
It's equivalent to BigO(n^2), because it is equivalent to (n^2 + n) / 2, and in BigO you ignore constant factors; so even though the n^2 term is divided by 2, you still have quadratic growth.
Think about O(n) and O(n/2): we don't distinguish those two either; O(n/2) is just O(n) with a smaller constant, and the growth rate is still linear.
What that means is that as n increases, if you plot the number of operations on a graph, you will see an n^2 curve appear.
You can see that already:
when n = 2 you get 3
when n = 3 you get 6
when n = 4 you get 10
when n = 5 you get 15
when n = 6 you get 21
If you plot these values, you will see that the curve is similar to that of n^2: the number at each point is smaller, but the shape is the same. That is why we say the magnitude is the same: the time complexity grows like n^2 as n gets bigger.
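The values listed above are the triangular numbers n(n+1)/2; a short sketch (mine) shows that their ratio to n^2 settles near 1/2, which is exactly why the growth is quadratic:

```python
# Triangular numbers and their ratio to n^2.
for n in (2, 3, 4, 5, 6, 100, 10000):
    t = n * (n + 1) // 2
    print(n, t, t / n ** 2)   # the ratio tends to 0.5
```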
The sum of the series of the first n natural numbers can be found in two ways. The first way is to add all the numbers in a loop; in that case the algorithm is linear and the code looks like this:
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += i;
}
return sum;
This is analogous to 1 + 2 + 3 + 4 + ... + n. In this case the complexity of the algorithm is calculated as the number of addition operations performed, which is O(n).
The second way to find the sum of the first n natural numbers is the direct formula n*(n+1)/2. This formula uses multiplication instead of repeated addition, and multiplication does not have linear time complexity: the available algorithms range from about O(N^1.45) to O(N^2) in the number of digits, so the actual cost depends on the processor's architecture. For analysis purposes, however, the time complexity of multiplication is commonly taken as O(N^2); with that convention, using the second way to find the sum costs O(N^2).
Multiplication is not the same operation as addition here. Anyone familiar with computer organisation can see why: the multiplication circuit is more complex than the adder circuit and needs much more time to compute its result. So the time to compute the sum of the series cannot be treated as constant.
I'm starting to study computational complexity, BigOh notation and the likes, and I was tasked to do an integer factorization algorithm and determine its complexity. I've written the algorithm and it is working, but I'm having trouble calculating the complexity. The pseudo code is as follows:
DEF fact(INT n)
BEGIN
    INT i
    FOR (i -> 2 TO i <= n / i STEP 1)
    DO
        WHILE ((n MOD i) = 0)
        DO
            PRINT("%int X", i)
            n -> n / i
        DONE
    DONE
    IF (n > 1)
    THEN
        PRINT("%int", n)
END
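A runnable Python transcription of the pseudocode (my own; returning a list of factors instead of printing them is an assumption for testability):

```python
# Trial-division factorization, following the pseudocode's loop bound
# i <= n / i (i.e. i*i <= n).
def fact(n):
    factors = []
    i = 2
    while i <= n // i:
        while n % i == 0:
            factors.append(i)
            n //= i
        i += 1
    if n > 1:
        factors.append(n)
    return factors

print(fact(360))   # -> [2, 2, 2, 3, 3, 5]
```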
What I attempted to do, I think, is extremely wrong:
f(x) = n-1 + n-1 + 1 + 1 = 2n
so
f(n) = O(n)
I think that's wrong, because factorization algorithms are supposed to be computationally hard; they can't even be polynomial. So what do you suggest to help me? Maybe I'm just too tired at this time of night and I'm screwing this all up :(
Thank you in advance.
This phenomenon is called pseudopolynomiality: a complexity that seems to be polynomial, but really isn't. If you ask whether a certain complexity (here, n) is polynomial or not, you must look at how the complexity relates to the size of the input. In most cases, such as sorting (which e.g. merge sort can solve in O(n lg n)), n describes the size of the input (the number of elements). In this case, however, n does not describe the size of the input; it is the input value. What, then, is the size of n? A natural choice would be the number of bits in n, which is approximately lg n. So let w = lg n be the size of n. Now we see that O(n) = O(2^(lg n)) = O(2^w) - in other words, exponential in the input size w.
(Note that O(n) = O(2^(lg n)) = O(2^w) is always true; the question is whether the input size is described by n or by w = lg n. Also, if n describes the number of elements in a list, one should strictly speaking count the bits of every single element in the list in order to get the total input size; however, one usually assumes that in lists, all numbers are bounded in size (to e.g. 32 bits)).
Use the fact that your algorithm is effectively recursive: if f(x) is the number of operations taken to factor x and p is the first factor found, then f(x) = (p - 1) + f(x/p). The worst case for any factoring algorithm is a prime number; since your loop runs while i <= n/i, i.e. up to √n, the complexity of your algorithm for a prime input is O(√n).
Factoring algorithms are 'hard' mainly because they are used on obscenely large numbers.
In big-O notation, n should be the size of the input, not the input itself (as in your case). The size of the input is lg(n) bits, so your algorithm is in fact exponential.