What is the time complexity of for (int i = 0; i < n % 10; ++i) - big-O

I think the answer to this question is O(1), since the loop only iterates from 0 to 9. But when discussing this same question with my friend, he told me it's O(n), since according to him "the number of iterations is directly proportional to n."
Which one of us is correct here? Is it O(1) or O(n)?

Neither of you is entirely correct, as you aren't properly specifying the input size.
The number of iterations is bounded by the constant 10, but you need to know how long it takes to compute n % 10 in order to discover that constant. n % 10 is not proportional to n, but neither is it independent of n.
After you get n % 10, your loop has O(1) iterations. The total complexity depends on how long it takes to find n % 10, as well as how long each iteration takes.
Keep in mind that n is not the size of your input. The input size is the number of bits you need to represent the value n, which is N = O(log n).
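As a sanity check, here is a minimal Java sketch (countIterations and the sample values are mine, purely for illustration) showing that the iteration count never exceeds 9, no matter how large n gets:
// Counts how many times the loop body runs for a given n.
static int countIterations(int n) {
    int count = 0;
    for (int i = 0; i < n % 10; ++i) {
        count++; // stand-in for whatever the real loop body does
    }
    return count;
}
// countIterations(7)             -> 7
// countIterations(1_000_003)     -> 3
// countIterations(2_000_000_009) -> 9: the count depends only on n % 10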

That's actually a very interesting and definition-dependent question.
n % 10 is not proportional to n asymptotically, but it can't be treated as a constant either, because it depends on n.
The only not-incorrect solution is to make n % 10 a constant, since it can be statically (in the sense of exhaustively, in advance) written as
int f = n % 10; // % itself is O(1)
int iterations;
switch (f) {
    case 0:
        iterations = 0;
        break;
    case 1:
        iterations = 1;
        break;
    case 2:
        iterations = 2;
        break;
    // ...
    case 9:
        iterations = 9;
        break;
    default:
        iterations = 0;
        break;
}
for (int i = 0; i < iterations; i++) {
    // ...
}
Which is obviously in O(1).
Saying your for loop is O(n) would directly suggest that its cost grows linearly with n, which is only the case if the modulo operator itself is O(n), which it is not on most CPUs.

Related

Time complexity of an algorithm that runs 1+2+...+n times

To start off, I found this Stack Overflow question that references the time complexity as O(n^2), but it doesn't explain why O(n^2) is the time complexity; instead it asks for an example of such an algorithm. From my understanding, an algorithm that runs 1+2+3+...+n times would be less than O(n^2). For example, take this function
function fn(n: number): number {
    let sum = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < i + 1; j++) {
            sum += 1;
        }
    }
    return sum;
}
Here are some input and return values:
num | sum
--- | ---
  1 |   1
  2 |   3
  3 |   6
  4 |  10
  5 |  15
  6 |  21
  7 |  28
From this table you can see that this algorithm runs in less than O(n^2) but more than O(n). I also realize that an algorithm that runs 1+(1+2)+(1+2+3)+...+(1+2+3+...+n) times is true O(n^2) time complexity. For the algorithm stated in the problem, do we just say it runs in O(n^2) because it runs more than O(log n) times?
It's known that 1 + 2 + ... + n has a short form of n * (n + 1) / 2. Even if you didn't know that, you have to consider that, when i gets to n, the inner loop runs at most n times. So you have exactly n times (for outer loop i), each running at most n times (for inner loop j), so the O(n^2) becomes more apparent.
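For completeness, here is the short derivation using that closed form:
1 + 2 + ... + n = n * (n + 1) / 2 = n^2 / 2 + n / 2
and for n >= 1 we have n^2 / 2 <= n * (n + 1) / 2 <= n^2, so the sum is sandwiched between two constant multiples of n^2, which is exactly what Θ(n^2) (and hence O(n^2)) means.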
I agree that the complexity would be exactly n^2 if the inner loop also ran from 0 to n, so you have your reasons to think that a loop i from 0 to n containing another loop j from 0 to i has to perform better, and that's true. But with big-O notation you're measuring the algorithm's rate of growth, not the exact number of operations.
p.s. O(log n) is usually achieved when you split the main problem into sub-problems.
I think you should interpret the table differently. O(N^2) complexity says that if you double the input N, the runtime should quadruple (take 4 times as long). In this case, fn(n) returns a number mirroring its runtime; I use f(N) as shorthand for it.
So say N goes from 1 to 2, which means the input has doubled (2/1 = 2). The runtime then has gone from f(1) to f(2), which means it has increased f(2)/f(1) = 3/1 = 3 times. That is not 4 times, but the Big-O complexity measure is asymptotic, dealing with the situation where N approaches infinity. If we test another input doubling from the table, we have f(6)/f(3) = 21/6 = 3.5. It is already closer to 4.
Let us now stray outside the table and try more doublings with bigger N. For example we have f(200)/f(100) = 20100/5050 = 3.980 and f(5000)/f(2500) = 12502500/3126250 = 3.999. The trend is clear. As N approaches infinity, a doubled input tends toward a quadrupled runtime. And that is the hallmark of O(N^2).
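If you want to extend that table without actually running the nested loops, here is a small Java sketch using the closed form (the helper f is my shorthand, mirroring the answer above):
// Closed form for 1 + 2 + ... + n, so even huge N is evaluated instantly.
static long f(long n) {
    return n * (n + 1) / 2;
}
// The ratio f(2N) / f(N) approaches 4 as N grows: the O(N^2) signature.
// f(2) / f(1)                 = 3.0
// f(200) / f(100)             ≈ 3.980
// f(5000) / f(2500)           ≈ 3.999
// f(2_000_000) / f(1_000_000) ≈ 3.999998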

Better understanding of (log n) time complexity

I'm a bit confused on O(log n). Given this code:
public static boolean IsPalindrome(String s) {
    char[] chars = s.toCharArray();
    for (int i = 0; i < (chars.length / 2); i++) {
        if (chars[i] != chars[chars.length - i - 1])
            return false;
    }
    return true;
}
I am looping n/2 times. So, as the length n increases, my runtime increases at half the rate of n. I thought that's exactly what log n was? But the person who wrote this code said it is still O(N).
In what case of a loop, can something be (log n)? For example this code:
1. for (int i = 0; i < (n * .8); i++)
Is this log n? I'm looping 80% of n times.
What about this one?
2. for (int i = 1; i < n; i += (i * 1.2))
Is that log n? If so, why?
1. for (int i = 0; i < (n * .8); i++)
In the first case, you can basically replace 0.8 * n with another variable; let's call it m.
for (int i = 0; i < m; i++)
You're looping m times, increasing i by one unit in each iteration. Since m is just a constant fraction of n, the Big-O complexity of the above loop is O(n).
2. for (int i = 0; i < n; i += (i * 1.2))
If the loop started at i = 0 like this, i would never actually change: 0 + 0 * 1.2 is still 0, so this is a classic infinite loop.
What you're looking for is: 2. for (int i = 1; i <= n; i += (i * 1.2)). Here, you're growing the value of i geometrically, which makes the iteration count logarithmic (though not to base 2).
Consider for (int i = 1; i <= n; i += i). The value of i doubles after every iteration: 1, 2, 4, 8, 16, 32, 64, ... Say n is 64: the loop terminates after 7 iterations, which is log(64) to base 2, plus 1 (the +1 because we start the loop from 1). Hence it is a logarithmic number of operations.
2. for (int i = 1; i <= n; i += (i * 1.2)). In your case as well the solution is logarithmic, just not to base 2: the base of your logarithmic operation is 2.2. But in big-O notation that still boils down to O(log n).
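A quick way to convince yourself is to count the iterations, here sketched in Java (note that the compound assignment i += (i * 1.2) truncates back to int, so the effective base is slightly below 2.2):
int n = 1_000_000;
int iterations = 0;
for (int i = 1; i <= n; i += (i * 1.2)) {
    iterations++; // i grows roughly geometrically: 1, 2, 4, 8, 17, 37, 81, ...
}
// iterations comes out around 18, close to log(1_000_000) / log(2.2) ≈ 17.5:
// O(log n), with the base absorbed into the hidden constant.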
I think you're missing what time complexity is and how big-O notation works.
Big-O notation is used to describe the asymptotic behavior of the algorithm as the size of the problem grows (to infinity). Particular coefficients do not matter.
As a simple intuition, if when you increase n by a factor of 2, the number of steps you need to perform also increases by about 2 times, it is a linear time complexity or what is called O(n).
So let's get back to your examples #1 and #2:
- Yes, you only do chars.length / 2 loop iterations, but if the length of s is doubled, you also double the number of iterations. That is exactly linear time complexity.
- Similarly to the previous case, you do 0.8 * n iterations, but if n is doubled, you do twice as many iterations. Again, this is linear.
The last example is different. The coefficient 1.2 doesn't really matter. What matters is that you add i to itself. Let's rewrite that statement a bit:
i += (i * 1.2)
is the same as
i = i + (i * 1.2)
which is the same as
i = 2.2 * i
Now you can clearly see that each iteration more than doubles i. So if you double n, you'll need at most one more iteration (or even the same number). This is the sign of fundamentally sub-linear time complexity. And yes, this is an example of O(log n), because for a big n you need only about log(n, base=2.2) iterations, and it is true that
log(n, base=a) = log(n, base=b) / log(a, base=b) = constant * log(n, base=b)
where constant is 1 / log(a, base=b)
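As a concrete numeric check of that identity with a = 2.2 and b = 2 (values rounded):
log(1000, base=2.2) = log(1000, base=2) / log(2.2, base=2) ≈ 9.97 / 1.14 ≈ 8.76
i.e. the two logarithms differ only by the constant factor 1 / log(2.2, base=2) ≈ 0.88, which big-O notation ignores.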

Presumably simple runtime analysis, need explanation

According to today's lecture, the first loop has a runtime of order O(n), while the second loop has a runtime of order O(log(n)).
for (int i = 0; i < n; i++) { // O(n)
    stuff();                  // O(1)
}
for (int i = 1; i < n; i *= 4) { // O(log(n))
    stuff();                     // O(1)
}
Could someone please elaborate on why?
The first loop will do a constant time operation exactly n times. Therefore it is O(n).
The second loop (starting from i = 1 not i = 0, you had a typo that I fixed) executes its body for i set to 1, 4, 16, 64, ... that is, 4^0, 4^1, 4^2, 4^3, ... up until n.
4^k < n when k < log_4(n). Therefore the body of the second loop executes O(log(n)) times, because log base 4 and log base e differ by only a constant coefficient.
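A quick empirical check in Java (the iteration counter is my addition):
int n = 1_000_000;
int count = 0;
for (int i = 1; i < n; i *= 4) {
    count++; // i takes the values 4^0, 4^1, ..., 4^9
}
// count is 10, and ceil(log_4(1_000_000)) = 10, matching the O(log n) analysis.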
Time complexity is calculated in terms of how many times the unit-time statements in the code execute, as a function of n (the input size).
The input size is n. The first loop runs O(n) times and stuff() is O(1), hence overall O(n) * O(1) = O(n).
The second loop runs while 4^(k-1) < n, where k is the number of iterations. Taking the log of both sides of the inequality gives (k - 1) * log 4 < log n, so k < log n / log 4 + 1, i.e. k = O(log n), because log 4 is a constant. stuff() is O(1), hence overall O(log n) * O(1) = O(log n).

Time complexity of this primality testing algorithm?

I have the following code which determines whether a number is prime:
public static boolean isPrime(int n) {
    boolean answer = (n > 1) ? true : false;
    for (int i = 2; i * i <= n; ++i) {
        System.out.printf("%d\n", i);
        if (n % i == 0) {
            answer = false;
            break;
        }
    }
    return answer;
}
How can I determine the big-O time complexity of this function? What is the size of the input in this case?
Think about the worst-case runtime of this function, which happens if the number is indeed prime. In that case, the inner loop will execute as many times as possible. Since each iteration of the loop does a constant amount of work, the total work done will therefore be O(number of loop iterations).
So how many loop iterations will there be? Let's look at the loop bounds:
for(int i = 2; i*i <= n; ++i)
Notice that this loop will keep executing as long as i^2 ≤ n. Therefore, the loop will terminate as soon as i exceeds √n. Consequently, the loop runs O(√n) times, so the worst-case time complexity of the function is O(√n).
As to your second question - what is the size of the input? - typically, when looking at primality-testing algorithms (or other algorithms that work on large numbers), the size of the input is defined to be the number of bits required to write out the input. In your case, since you're given a number n, the number of bits required to write out n is Θ(log n). This means that "polynomial time" here would be something like O(log^k n). Your runtime, O(√n), is not considered polynomial time, because O(√n) = O((2^(log n))^(1/2)) = O(2^((log n)/2)), which is exponentially larger than the number of bits required to write out the input.
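Spelled out with the bit length N = log n:
√n = √(2^N) = 2^(N/2)
so an O(√n) runtime is O(2^(N/2)): exponential in the input size N, even though it looks tame as a function of the value n.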
Hope this helps!

Big Oh nested while

I am having some challenges with big-oh problems. These are NOT homework problems. I am writing these problems to better understand the concept here.
function func(n)
{
    int k = 0, i = 0;
    while (k < n) {      // <--- Guessing this outer loop is O(n/2)
        k = n + 2;
        i = 0;
        while (i < k) {  // <--- Not sure what this is?
            i++;
            i = i * i;
        }
    }
}
I would really appreciate it if you could explain what is going on in the inner loop and how you arrive at the big-O notation you end up with.
The outer loop, with its test (k < n) and its step, k = n + 2, will run one time, providing an O(1) factor of complexity.
The inner loop has the test (i < k), which is to say (i < n + 2), and the steps i++; i = i * i;. At the end,
i = (...(((1+1)^2+1)^2+1)^2+ ... )^2 > n+2
which makes the value of i grow doubly exponentially: after p passes, i is roughly 2^(2^p). The number of passes is therefore O(log log n). This is a tighter bound than the previously-mentioned O(log n), which is also an upper bound but not as tight.
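To see why, write i_p for the value of i after p passes (so i_1 = 1, i_2 = 4, i_3 = 25, ...). Each pass squares i after incrementing it:
i_(p+1) = (i_p + 1)^2 >= (i_p)^2, hence log(i_(p+1)) >= 2 * log(i_p)
The logarithm of i at least doubles on every pass, so i reaches n + 2 after roughly log(log(n)) passes.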
While @alestanis has provided what looks to me like a much more accurate analysis of this problem than those in the comments, I still don't think it's quite right.
Let's create a small test program that prints out the values of i produced by the inner loop:
#include <iostream>

void inner(double k) {
    double i = 0.0;
    while (i < k) {
        i++;
        i = i * i;
        std::cout << i << "\n";
    }
}

int main() {
    inner(1e200);
    return 0;
}
When I run this, the result I get is:
1
4
25
676
458329
2.10066e+011
4.41279e+022
1.94727e+045
3.79186e+090
1.43782e+181
1.#INF
If the number of iterations were logarithmic, then the number of iterations to reach a particular number should be proportional to the number of digits in the limit. For example, if it were logarithmic, it should take around 180 iterations to reach 1e181, give or take some (fairly small) constant factor. That's clearly not the case here at all -- as is easily visible by looking at the exponents of the results in scientific notation, this is approximately doubling the number of digits every iteration, where logarithmic would mean it was roughly adding one digit every iteration.
I'm not absolutely certain, but I believe that puts the inner loop at something like O(log log N) instead of just O(log N). I think it's pretty easy to agree that the outer loop is probably intended to be O(N) (though it's currently written to execute only once), putting the overall complexity at O(N log log N).
I feel obliged to add that from a pragmatic viewpoint, O(log log N) can often be treated as essentially constant -- as shown above, the highest limit you can specify with a typical double precision floating point number is reached in only 11 iterations. As such, for most practical purposes, the overall complexity can be treated as O(N).
[Oops -- didn't notice he'd answered as I was writing this, but it looks like @jwpat7 has reached about the same conclusion I did. Kudos to him/her.]
The second loop squares the value of i until it reaches k. If we ignore the constant term, this loop runs in O(log k) time.
Why? Because if you solve i^m = k you get m = constant * log(k).
The outer loop, as you said, runs in O(n) time.
As bigger values of k depend on n, you can say the inner loop runs in O(log n) which gives you an overall complexity of O(n log n).
