complexity/runtime of pseudocode using big-O notation - pseudocode

I need a little help with a problem. I just started reading about O-notation but I'm still new when it comes to analysing code.
So here's the problem:
The following pseudocode is given, where A is an array of numbers whose elements can be accessed via the indices 1 to length(A). i is an integer variable, so the result of the division is rounded down. What is the complexity of the function SkipPrint?
1: procedure SkipPrint(A)
2:   i <- length(A)
3:   do
4:     print(A[i])
5:     i <- i/2
6:   while i > 0
So I think the complexity is O(n), since the function needs to go through the array, but only once, right? (line 2) Every other line is of lesser magnitude, so it stays O(n)?
Thanks in advance. Your help is appreciated.
Salute

This would not be O(n). The single loop does iterate in one direction (i always gets smaller), but i shrinks by a factor of 2 each time rather than by 1, so only about lg n elements are ever printed. The complexity is O(lg n), as the answer below works out in detail.

Let's say n = length(A) and assume first that we are in the simpler case where length(A) = 2^m, for some integer m.
Then i, being halved at every step, would have the values:
2^m, 2^{m-1}, 2^{m-2}, ..., 2, 1, 0
which shows that the loop body runs m + 1 times before i reaches 0. Since n = 2^m, this means that m = lg n and therefore the complexity is O(lg n).
In the general case, define m := floor(lg n). The same analysis shows that the loop body runs m + 1 times before i becomes 0. Thus, the complexity is O(floor(lg n) + 1), which is the same as O(lg n).
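To sanity-check this, here is a minimal Python sketch (the function name skip_print_iterations and the test values are my own) that simulates the loop on an array of length n and compares the iteration count with floor(lg n) + 1:

import math

def skip_print_iterations(n):
    # Simulate SkipPrint's do-while loop on an array of length n (n >= 1)
    # and count how many times the body (the print) executes.
    i = n
    count = 0
    while True:        # do-while: the body runs at least once
        count += 1     # stands in for print(A[i])
        i //= 2        # integer division, rounded down
        if i <= 0:
            break
    return count

for n in [1, 2, 7, 8, 1000, 2**20]:
    print(n, skip_print_iterations(n), math.floor(math.log2(n)) + 1)

For every n the two counts agree, matching the O(lg n) bound derived above.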

Related

Time complexity of recursive function dividing the input value by 2/3 everytime

I know that the time complexity of a recursive function dividing its input by 2 is log n base 2. I have come across some interesting scenarios in this answer:
https://stackoverflow.com/a/42038565/8169857
Kindly help me to understand the logic behind the scenarios in the answer regarding the derivation of the formula
It's back to the recursion tree. Why is dividing by 2 O(log2(n))? Because if n = 2^k, you need to divide k times to reach 1. Hence, the number of computations is k = log2(n) comparisons at most. Now suppose the remaining fraction is (c-1)/c each time. Then, if n = (c/(c-1))^k, we need k = log_{c/(c-1)}(n) operations to reach 1.
Now, since for any constant c > 1 the limit of log2(n) / log_{c/(c-1)}(n) as n goes to infinity is a constant greater than zero, we have log_{c/(c-1)}(n) = Theta(log2(n)). Indeed, you can say this for any constants a, b > 1: log_a(n) = Theta(log_b(n)). This completes the proof.
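Here is a small Python sketch (the helper name steps_to_one and the test values are my own) that counts how many steps it takes to shrink n to 1 when a fraction (c-1)/c of the input remains after each step, and compares the count with log_{c/(c-1)}(n):

import math

def steps_to_one(n, c):
    # Count the steps needed to shrink n below 1 when a fraction
    # (c-1)/c of the input remains after each step.
    steps = 0
    x = float(n)
    while x > 1:
        x *= (c - 1) / c
        steps += 1
    return steps

n = 10**6
for c in [2, 3, 10]:
    s = steps_to_one(n, c)
    expected = math.log(n, c / (c - 1))   # log base c/(c-1) of n
    print(c, s, round(expected, 2))       # the step count tracks this logarithm

For each fixed c the step count stays within a constant factor of log2(n), which is exactly the Theta(log n) claim above.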

Big O Recursive Method

I have a method called binary sum
Algorithm BinarySum(A, i, n):
  Input: An array A and integers i and n
  Output: The sum of the n integers in A starting at index i
  if n = 1 then
    return A[i]
  return BinarySum(A, i, n/2) + BinarySum(A, i + n/2, n/2)
Ignoring the fact that this makes a simple problem complicated, I have been asked to find the Big O. Here is my thought process: for an array of size N I will be making 1 + 2 + 4 + ... + N recursive calls. This is close to half the sum from 1 to N, so I will say it is about N(N + 1)/4. After making this many calls I now need to add the results together, so once again I need to perform about N(N + 1)/4 additions. Adding them together, we are left with N^2 as the dominant term.
So would the big O of this algorithm be O(N^2)? Or am I doing something wrong? It feels strange to have binary recursion and not have a 2^n or log n term in the final answer.
There are in fact 2^n and log n terms in the final result... sort of.
For each call on a sub-array of length n, two recursive calls are made on the two halves of this array, plus a constant amount of work (if-statement, addition, pushing onto the call stack etc.). Thus the recurrence relation is given by:
T(n) = 2 T(n/2) + c,  with T(1) = c
At this point we could just use the Master theorem to directly arrive at the final result - O(n). But let's instead derive it by repeated expansion:
T(n) = 2 T(n/2) + c
     = 4 T(n/4) + 2c + c
     = ...
     = 2^m T(n/2^m) + (2^{m-1} + ... + 2 + 1) c
     = 2^m T(n/2^m) + (2^m - 1) c        (*)
The stopping condition n = 1 gives the maximum value of m (ignoring rounding):
n / 2^m = 1  =>  2^m = n  =>  m = log2(n)
so that
T(n) = n T(1) + (n - 1) c = (2n - 1) c = O(n)
In step (*) we used the standard formula for geometric series. So as you can see the answer does involve log n and 2^n terms in a sense, but they "cancel" out to give a simple linear term, which is the same as for a simple loop.
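To see the linear number of calls concretely, here is a hedged Python sketch of BinarySum (the names binary_sum and counter are mine, and it assumes n is a power of two, matching the "ignoring rounding" analysis above):

def binary_sum(A, i, n, counter):
    # Sum the n elements of A starting at index i, counting recursive calls.
    counter[0] += 1
    if n == 1:
        return A[i]
    half = n // 2              # assumes n is a power of two
    return (binary_sum(A, i, half, counter) +
            binary_sum(A, i + half, half, counter))

A = list(range(16))
counter = [0]
total = binary_sum(A, 0, len(A), counter)
print(total, sum(A), counter[0])   # 120 120 31, i.e. 2*n - 1 calls = O(n)

The call count 2n - 1 is linear, just as the expansion predicts.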

What is the time complexity of the code?

Is the time complexity of the following code O(NV^2)?
for i from 1 to N:
    for j from 1 to V:
        for k from 1 to A[i]:   // max(A) = V
            z = z + k
Yeah, whenever we talk about O-notation, we always think about the upper bound (or the worst case).
So, the complexity for this code will be equal to
O(N * V * maximum_value_of_A)
= O(N * V * V)   // since the maximum value of A is V, the third loop can iterate at most V times
= O(N * V^2).
For sure it is O(NV^2), as that means the code is never slower than that. Because max(A) = V, you can say the worst case is when every index of A holds V. If so, then the complexity is bounded by O(N*V*V).
You can calculate, very roughly, that the cost of the for-k loop is about avg(A) per iteration of the j loop. This allows us to say that the whole function is Omega(N*V*avg(A)), where avg(A) <= V.
Theta notation (meaning asymptotic complexity) could be stated as Theta(N*V*O(V)), with O(V) representing the cost of a function which never grows faster than V, but is not constant.
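As a quick illustration (the function name and test values are my own), here is a Python sketch that counts how many times the innermost statement z = z + k executes; in the worst case, where every A[i] equals V, the count is exactly N * V * V:

def count_inner_operations(N, V, A):
    # Count how many times the innermost statement (z = z + k) runs.
    ops = 0
    for i in range(N):        # i from 1 to N
        for j in range(V):    # j from 1 to V
            ops += A[i]       # the k-loop body runs A[i] times
    return ops

N, V = 100, 50
A_worst = [V] * N             # worst case: every entry of A equals V = max(A)
print(count_inner_operations(N, V, A_worst), N * V * V)   # both print 250000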

Find pairs with given difference

Given n, k, and a list of n integers, how would you find the pairs of integers whose difference is k?
There is an O(n log n) solution, but I cannot figure it out.
You can do it like this:
Sort the array
For each item data[i], determine its two target pairs, i.e. data[i]+k and data[i]-k
Run a binary search on the sorted array for these two targets; if found, add both data[i] and data[targetPos] to the output.
Sorting is done in O(n*log n). Each of the n search steps takes 2 * log n time to look for the two targets, for an overall time of O(n*log n).
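Here is a rough Python sketch of this approach (the function name is mine, it uses the standard bisect module, and, following the addendum in the next answer, it only searches for data[i] + k, assuming k > 0):

import bisect

def pairs_with_difference(data, k):
    # Find pairs (a, b) with b - a == k via sort + binary search.
    data = sorted(data)                        # O(n log n)
    pairs = []
    for a in data:                             # n binary searches, O(log n) each
        target = a + k
        pos = bisect.bisect_left(data, target)
        if pos < len(data) and data[pos] == target:
            pairs.append((a, target))
    return pairs

print(pairs_with_difference([1, 7, 5, 9, 2, 12, 3], 2))
# [(1, 3), (3, 5), (5, 7), (7, 9)]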
A linear solution exists for this problem! Just ask yourself one question: if you have a, what number should be in the array? Of course a+k or a-k (a special case: k = 0 requires alternative handling). So, what now?
Create a hash set (for example unordered_set in C++11) with all values from the array. Insertion is O(1) on average per element, so building it is O(n).
Iterate through the array and check for each element x whether (x+k) or (x-k) is present in the set. Each lookup is O(1) on average, and you check each element once, so this step is also linear, O(n).
If you find an x whose pair (x+k or x-k) is present, that is what you are looking for.
So the whole thing is linear, O(n). If you really want O(n lg n), use a tree-based set instead, where a membership check takes O(lg n); then you have an O(n lg n) algorithm.
Addendum: there is no need to check both x+k and x-k; just x+k is sufficient, because if a and b are a good pair then:
if a < b then
    a + k == b
else
    b + k == a
Improvement: if you know the range of the values, you can guarantee linear worst-case complexity by using a boolean table instead of a hash set (set_tab[i] == true when i is in the array).
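A hedged Python sketch of the hash-set idea (Python's built-in set stands in for unordered_set, the function name is mine, and k > 0 is assumed so the k = 0 special case is ignored):

def pairs_with_difference_hash(data, k):
    # Find pairs with difference k in expected O(n) time using a hash set.
    values = set(data)                 # building the set is O(n) on average
    pairs = []
    for x in data:                     # each membership test is O(1) on average
        if x + k in values:            # checking x + k alone is enough (see above)
            pairs.append((x, x + k))
    return pairs

print(pairs_with_difference_hash([1, 7, 5, 9, 2, 12, 3], 2))
# [(1, 3), (7, 9), (5, 7), (3, 5)]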
A solution similar to the one above:
1. Sort the array
2. Set variables i = 0 and j = 1
3. Check the difference between array[i] and array[j]
4. If the difference is too small, increase j; if the difference is too big, increase i; if the difference is the one you're looking for, add the pair to the results and increase j
5. Repeat steps 3 and 4 until the end of the array
Sorting is O(n*lg n); the sweep, if I'm correct, is O(n) (at most 2*n comparisons), so the whole algorithm is O(n*lg n).
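A minimal Python sketch of this two-index sweep (the function name is mine; it assumes k > 0, and the i == j guard keeps an element from being paired with itself):

def pairs_with_difference_two_pointers(data, k):
    # Find pairs with difference k by sorting and sweeping two indices.
    data = sorted(data)                    # O(n log n)
    pairs = []
    i, j = 0, 1
    while j < len(data):                   # every step advances i or j: O(n)
        diff = data[j] - data[i]
        if diff < k or i == j:             # difference too small: move j right
            j += 1
        elif diff > k:                     # difference too big: move i right
            i += 1
        else:                              # found a pair; keep scanning with j
            pairs.append((data[i], data[j]))
            j += 1
    return pairs

print(pairs_with_difference_two_pointers([1, 7, 5, 9, 2, 12, 3], 2))
# [(1, 3), (3, 5), (5, 7), (7, 9)]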

Exponential complexity of a program

I have a program which depends on two inputs of sizes m and n respectively. If T(m,n) is the running time of the program, it satisfies:
T(m,n)=T(m-1,n-1)+T(m-1,n)+T(m,n-1)+C
for a given constant C.
I could prove that the time complexity is in Omega(2^min(m,n)). However, it seems that it is actually Omega(2^max(m,n)) (this was just confirmed to me), but I can't find a formal proof. Does anyone have a trick?
Thanks in advance!
Off the top of my head:
I assume the recursion of T(m,n) stops when you reach T(0,x) or T(x,0).
You have 3 factors contributing to the complexity:
Factor 1: T(m-1,n-1) decreases both m and n, so its length until m or n becomes 0 is min(m,n) steps (see the note below).
Factor 2: T(m-1,n) decreases m only, so its length until m=0 is m steps.
Factor 3: T(m,n-1) same as above but until n=0 is n steps.
The overall complexity is the greater of all complexities, so it must be related to maximum of
max( min(m,n) steps, m steps, n steps) = max(m,n)
rather than min(m,n).
I guess you can fill in the details. The constant C does not contribute, or more precisely contributes with O(1), which is the lowest of all complexities here.
Note about Factor 1: this factor also branches into m-1 down to 0 and n-1 down to 0, so strictly speaking its complexity will also be max(m,n).
First, you should define that, for all values of x, T(0, x) = T(x, 0) = 0 to have your recursion stop - or at least T(0, x) = T(x, 0) = C.
Second, "time complexity is in Omega(2^min(m,n))" must obviously be wrong. Set m=10000 and n=1. Now try to prove to me that the complexity is the same as m=1 and n=1. The T(m-1, n-1) and T(m, n-1) parts disappear quickly, but you still have to walk the T(m-1, n) part.
Third, this observation directly leads to 2^max(m, n). Try to find out the number of recursion steps for a few low values of m and n. Then try to make up a formula for the number of steps depending on m and n. (Hint: Fibonacci). When you have that formula, you're finished.
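If you want to experiment with the hint, here is a small memoized Python sketch (the function name calls and the convention of counting every expansion, including the base cases, as one step are my own assumptions) that counts the number of recursion steps for small m and n:

from functools import lru_cache

@lru_cache(maxsize=None)
def calls(m, n):
    # Number of nodes in the recursion tree of T(m, n),
    # assuming the recursion stops at T(0, x) and T(x, 0).
    if m == 0 or n == 0:
        return 1
    return 1 + calls(m - 1, n - 1) + calls(m - 1, n) + calls(m, n - 1)

for m, n in [(1, 1), (2, 2), (3, 3), (1, 10), (10, 1), (5, 10)]:
    print(m, n, calls(m, n))

Tabulating these counts for a few values of m and n should help in guessing the general formula the hint refers to.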
