void Ordena(TipoItem *A, int n)
{
    TipoItem x;
    int i, j;
    for (i = 1; i < n; i++)
    {
        for (j = n; j > i; j--)
        {
            if (A[j].chave < A[j - 1].chave)
            {
                x = A[j];
                A[j] = A[j - 1];
                A[j - 1] = x;
            }
        }
    }
}
I believe the worst case is when the array is in descending order, am I right?
About the asymptotic cost in terms of number of movements, is it O(n²) or O(2n²) ?
I've just started learning about asymptotic cost (as you can tell).
As you said, the worst-case scenario here is the one where the array is in descending order, because the body of the if statement executes every single time. However, since we are talking about asymptotic notation, whether or not the swap executes is actually irrelevant: the cost of those three instructions is constant (i.e. O(1)).

Therefore, the important thing is how many times you loop through the elements of the array, and if you do the math, that happens exactly n^2/2 - n/2 times (the inner loop runs n-1 times on the first pass, then n-2, and so on down to 1). So the computational complexity is O(n^2), because the dominant term is n^2/2, and asymptotic notation does not take the multiplicative factor 1/2 into account, even though such factors can influence the actual execution time.
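One way to check the iteration count is to run the same loop bounds with a counter (a small Java sketch added for illustration; `ComparisonCount` and `countComparisons` are made-up names, not from the question):

```java
public class ComparisonCount {
    // Count how many times the inner comparison runs for the loop
    // bounds in the question: i from 1 to n-1, j from n down to i+1.
    static long countComparisons(int n) {
        long count = 0;
        for (int i = 1; i < n; i++) {
            for (int j = n; j > i; j--) {
                count++; // one key comparison per inner iteration
            }
        }
        // the sum is (n-1) + (n-2) + ... + 1 = n*(n-1)/2
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            long expected = (long) n * (n - 1) / 2;
            System.out.println(n + ": " + countComparisons(n) + " (formula: " + expected + ")");
        }
    }
}
```

For n = 10 this prints 45, which matches 10*9/2: the quadratic term dominates, hence O(n^2).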
I'm a bit confused on (log n). Given this code
public static boolean IsPalindrome(String s) {
    char[] chars = s.toCharArray();
    for (int i = 0; i < (chars.length / 2); i++) {
        if (chars[i] != chars[(chars.length - i - 1)])
            return false;
    }
    return true;
}
I am looping n/2 times, so as the length n increases, my time increases at half the rate of n. I thought that was exactly what log n is? But the person who wrote this code said this is still O(N).
In what case of a loop, can something be (log n)? For example this code:
1. for (int i = 0; i < (n * .8); i++)
Is this log n? I'm looping 80% of n length.
What about this one?
2. for (int i = 1; i < n; i += (i * 1.2))
Is that log n? If so, why.
1. for (int i = 0; i < (n * .8); i++)
In the first case you can basically replace 0.8n with another variable; let's call it m.
for (int i = 0; i < m; i++)
You're looping m times, increasing the value of i by one unit each iteration. Since m is just n times a constant factor, and Big-O ignores constant factors, the complexity of the loop is O(n).
2. for (int i = 1; i < n; i += (i * 1.2))
Note first that if the loop had started at i = 0, it would never terminate: 0 + 0 * 1.2 is still 0, a classic infinite loop. Starting from i = 1, however, you're growing i multiplicatively, which gives a logarithmic number of iterations (just not to the base 2).
Consider for (int i = 1; i <= n; i += i). The value of i doubles after every iteration: 1, 2, 4, 8, 16, 32, 64, ... Say n is 64: the loop terminates after 7 iterations, which is (log(64) to the base 2) + 1 (+1 because we start the loop from 1). Hence it is a logarithmic operation.
In your case the same reasoning applies, only the base is different: each iteration sets i to i + 1.2 * i = 2.2 * i, so the base of the logarithm is 2.2. In Big-O notation that still boils down to O(log(n)).
I think you're missing what time complexity is and how Big-O notation works.
Big-O notation is used to describe the asymptotic behavior of an algorithm as the size of the problem grows (to infinity). Particular coefficients do not matter.
As a simple intuition, if when you increase n by a factor of 2, the number of steps you need to perform also increases by about 2 times, it is a linear time complexity or what is called O(n).
So let's get back to your examples #1 and #2:
yes, you do only chars.length/2 loop iterations, but if the length of s is doubled, you also double the number of iterations. That is exactly linear time complexity
similarly to the previous case, you do 0.8*n iterations, but if n is doubled, you do twice as many iterations. Again, this is linear
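To see the doubling behavior concretely, here is an instrumented version of the IsPalindrome loop that counts its comparisons (a sketch added for illustration; `PalindromeCount` and `comparisons` are made-up names):

```java
public class PalindromeCount {
    // Same loop as IsPalindrome, but returns how many character
    // comparisons it performed instead of a boolean.
    static int comparisons(String s) {
        char[] chars = s.toCharArray();
        int count = 0;
        for (int i = 0; i < chars.length / 2; i++) {
            count++;
            if (chars[i] != chars[chars.length - i - 1])
                break; // a real palindrome never breaks early
        }
        return count;
    }

    public static void main(String[] args) {
        String a = "a".repeat(1000); // palindrome of length 1000
        String b = "a".repeat(2000); // twice as long
        System.out.println(comparisons(a)); // 500
        System.out.println(comparisons(b)); // 1000
    }
}
```

Doubling the input length doubles the count, which is the signature of O(n); the constant factor 1/2 never changes that ratio.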
The last example is different. The coefficient 1.2 doesn't really matter. What matters is that you add i to itself. Let's re-write that statement a bit
i += (i * 1.2)
is the same as
i = i + (i * 1.2)
which is the same as
i = 2.2 * i
Now you clearly see that each iteration you more than double i. So if you double n you'll only need one more iteration (or even the same). This is a sign of a fundamentally sub-linear time complexity. And yes this is an example of O(log(n)) because for a big n you need only about log(n, base=2.2) iterations and it is true that
log(n, base=a) = log(n, base=b) / log(a, base=b) = constant * log(n, base=b)
where the constant is 1/log(a, base=b)
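You can check both the iteration count and the change-of-base formula numerically (a sketch added for illustration; `LogBaseDemo` and `iterations` are made-up names; i is tracked as a double to match the math rather than Java's truncating int arithmetic):

```java
public class LogBaseDemo {
    // Count the iterations of: for (i = 1; i <= n; i += i * 1.2),
    // with the update written multiplicatively as i = 2.2 * i.
    static int iterations(long n) {
        int count = 0;
        for (double i = 1; i <= n; i = i * 2.2) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        long n = 1_000_000L;
        // change of base: log(n, base=2.2) = ln(n) / ln(2.2), about 17.5 here
        double predicted = Math.log(n) / Math.log(2.2);
        System.out.println(iterations(n) + " iterations, predicted about " + predicted);
    }
}
```

Squaring n (from a thousand to a million) only doubles the iteration count, the characteristic behavior of O(log n).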
I am learning about Big-O and although I started to understand things, I still am not able to correctly measure the Big-O of an algorithm.
I've got a code:
int n = 10;
int count = 0;
int k = 0;
for (int i = 1; i < n; i++)
{
    for (int p = 200; p > 2 * i; p--)
    {
        int j = i;
        while (j < n)
        {
            do
            {
                count++;
                k = count * j;
            } while (k > j);
            j++;
        }
    }
}
which I have to measure the Big-O and Exact Runtime.
Let me start: the first for loop is O(n) because it depends on the variable n.
The second for loop is nested, therefore it makes the Big-O so far O(n^2).
But how do we account for the while (j < n) loop (three nested loops so far), and for the do ... while (k > j) loop, which makes four nested loops, as in this case?
A comprehensive explanation would be really helpful.
Thank you.
Unless I'm much mistaken, this program has an infinite loop, and therefore its time complexity cannot usefully be analyzed.
In particular
do
{
    count++;
    k = count * j;
} while (k > j);
As soon as this loop is entered for the second time, with count = 2, k will be set greater than j and will remain so indefinitely (ignoring integer overflow, which will happen pretty quickly).
I understand that you're learning Big-Oh notation, but creating toy examples like this probably isn't the best way to understand Big-Oh. I would recommend reading a well-known algorithms textbook where they walk you through new algorithms, explaining and analyzing the time and space complexity as they do so.
I am assuming that the while loop should be:
while (k < j)
Now, in this case, the first for loop would take O(n) time. The second loop would take O(p) time.
Now, for the third loop,
int j = i;
while (j < n) {
    ...
    j++;
}
could be rewritten as
for (j = i; j < n; j++)
meaning it shall take O(n) time.
For the last loop, the value of k increases exponentially.
Consider it to be the same as
for (k = count * j; k < j; j++, count++)
Hence it takes O(log n) time.
The total time complexity is O(n^2 * p * log n).
I have some doubts about the time complexities of these algorithms:
These are all the possible options for these algorithms
Algorithm 1
int i = 0, j = 0, sum = 0;
while (i * i < n) {
    while (j * j < n) {
        sum = sum + i * j;
        j = j + 2;
    }
    i = i + 5;
}
//I don't really know what this one might be, but my poor logic suggests O(sqrt(n))
Algorithm 2
sum = 0;
for (i = 0; i <= n; i++) {
    j = n;
    while (j > i) {
        sum = sum + j - i;
        j = j - 1;
    }
}
//While I know this algorithm has some nested loops, I don't know if the actual answer is O(n^2)
Algorithm 3
for (int i = 0; i <= n; i++)
    for (int j = 0; j <= n; j++) {
        k = 0;
        while (k < n) {
            c = c + 1;
            k = k + 100;
        }
    }
//I believe this one has a time complexity of O(n^3) since it has 3 nested loops
Algorithm 4
Algorithm int f(int n) {
    if n == 0 or n == 1
        return n
    else
        return f(n-2) + f(n-1)
}
//I know for a fact this algorithm is O(2^n) since it is the poor implementation of fibonacci
I think I have an idea of what the answers might be but I would like to get a second opinion on them. Thanks in advance.
Okay, so this is what I think; my answers are below.
Algorithm 1
I think this is the most interesting algorithm. Let's build it up from a simple case
Let's temporarily assume the checks were just i < n and j < n, not against the squares of those values. Then the fact that i and j get incremented by 5 and 2 respectively is inconsequential, and the complexity would simply have been O(n^2).
Now factor in that the checks are really i*i < n and j*j < n. This means each loop runs only while its iterator is below sqrt(n). With a single loop the complexity would have been O(sqrt(n)); with 2 nested loops it becomes O(sqrt(n) * sqrt(n)), which is O(n). (Note this assumes j is reset to 0 on each pass of the outer loop; in the code as written, j keeps its value, so the inner loop does real work only on the first pass.)
Algorithm 2
Your reasoning is right. It is O(n^2)
Algorithm 3
Your reasoning is right. It is O(n^3)
Algorithm 4
This actually is the naive Fibonacci implementation (f(0) = 0, f(1) = 1, f(n) = f(n-2) + f(n-1)), and your time complexity guess is correct. The argument n is the index into the sequence, not a value from it, so calling it with something like 10 is perfectly fine even though 10 is not itself a Fibonacci number.
A nice way to think of this is as a binary tree. Every node, in your case every call to the function with an argument other than 0 or 1, spawns 2 children. Each call reduces the argument by at least 1 from the initial value n, so the tree has at most n levels. The maximum number of nodes in a binary tree of depth n is 2^n - 1. Therefore O(2^n) is right.
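You can watch the call tree explode by counting the calls directly (a sketch added for illustration; `FibCalls` is a made-up name):

```java
public class FibCalls {
    static long calls = 0;

    // The recursion from the question: f(0) = 0, f(1) = 1, f(n) = f(n-2) + f(n-1)
    static long f(int n) {
        calls++;
        if (n == 0 || n == 1) return n;
        return f(n - 2) + f(n - 1);
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 15, 20}) {
            calls = 0;
            long value = f(n);
            // the call count grows roughly geometrically, bounded above by 2^n - 1
            System.out.println("f(" + n + ") = " + value + " took " + calls + " calls");
        }
    }
}
```

For n = 10 the function is called 177 times, well under the 2^10 - 1 = 1023 bound, but each step up in n multiplies the count by roughly the golden ratio.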
Here is problem in which we have to calculate the time complexity of given function
f(i) = 2*f(i+1) + 3*f(i+2)
for (int i = 0; i < n; i++)
    f[i] = 2*f[i+1]
What I think is that the complexity of this algorithm is O(2^n) + O(n), which ultimately is O(2^n).
Please correct me if I am wrong.
Firstly, all the information you need to work these out in future is here.
To answer your question: because you have not provided a definition of f(i) in terms of i itself, it is impossible to determine the actual complexity from what you have written above. However, in general, a loop like
for (i = 0; i < N; i++) {
    sequence of statements
}
executes N times, so the sequence of statements also executes N times. If we assume the statements are O(1), the total time for the for loop is N * O(1), which is O(N) overall. In your case above, if I take the liberty of re-writing it as
f(0) = 0;
f(1) = 1;
f(i+2) = 2*f(i+1) + 3*f(i)
for (int i = 0; i < n; i++)
    f[i] = 2*f[i+2]
Then we have a well defined sequence of operations and it should be clear that the complexity for the n operations is, like the example I have given above, n * O(1), which is O(n).
I hope this helps.
The complexity given for the following problem is O(n). Shouldn't it be O(n^2)? The outer loop is O(n) and the inner is also O(n), therefore n * n = O(n^2)?
The answer sheet for this question states that the answer is O(n). How is that possible?
public static void q1d(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n; j++) {
            count++;
        }
    }
}
The complexity given for the following problem is O(n^2); how do you obtain that? Can someone please elaborate?
public static void q1E(int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;
        for (int j = 0; j < n/2; j++) {
            count++;
        }
    }
}
Thanks
The first example is O(n^2), so it seems they've made a mistake. To calculate (informally) the second example, we can do n * (n/2) = (n^2)/2 = O(n^2). If this doesn't make sense, you need to go and brush up on what it means for something to be O(n^k).
The complexity of both pieces of code is O(n*n)
FIRST
The outer loop runs n times, and on every pass the inner loop runs n times
so
total = n + n + n + ... + n (n times over)
which is n * n, i.e. O(n*n)
SECOND
The outer loop runs n times, and on every pass the inner loop runs n/2 times
so
total = n/2 + n/2 + n/2 + ... + n/2 (n times over)
which is n * n / 2, and dropping the constant factor 1/2 this is also O(n*n)
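Counting the increments directly confirms both totals (a sketch added for illustration; `NestedLoopCounts` is a made-up name; count is returned instead of discarded so we can inspect it):

```java
public class NestedLoopCounts {
    // Instrumented q1d: outer count++ runs n times, inner runs n*n times.
    static int q1d(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n; j++) count++;
        }
        return count; // n + n*n
    }

    // Instrumented q1E: outer count++ runs n times, inner runs n*(n/2) times.
    static int q1E(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            count++;
            for (int j = 0; j < n / 2; j++) count++;
        }
        return count; // n + n*(n/2)
    }

    public static void main(String[] args) {
        System.out.println(q1d(100)); // 10100 = 100 + 100*100
        System.out.println(q1E(100)); // 5100  = 100 + 100*50
    }
}
```

Both counts are dominated by their n^2 term; the factor 1/2 in the second only changes the constant, not the growth rate.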
First case is definitely O(n^2)
The second is O(n^2) as well, because you omit constants when calculating big O
Your answer sheet is wrong, the first algorithm is clearly O(n^2).
When calculating a Big-Oh value, we generally ignore multiplications and divisions by constants.
That being said, your second example is also O(n^2) because, although the inner loop runs "only" n/2 times, n is still the clear bounding factor. In practice the second algorithm performs fewer operations, but Big-Oh is an asymptotic (maximal bounding) measurement, so the exact number of operations is ignored in favor of how the algorithm behaves as n approaches infinity.
Both are O(n^2). Your answer sheet is wrong, or you may have copied the question down incorrectly.