I have come up with the following algorithm for a task. It is primarily the for-loop in the helper function N that makes me unsure.
function TOP-DOWN-N(C, n, p)
    let N[1...C, 1...n] be a new array initialized to -1 for all indices
    return N(C, n)

function N(C, n)
    if C = 0 then
        return 1
    elif C < 0 or n < 1 then
        return 0
    elif N[C, n] >= 0 then
        return N[C, n]
    else
        r = 0
        for i = 0 to n - 1 do
            r += N(C - p[n - i], n - (i + 1))
        N[C, n] = r
        return r
Let's ignore the fact that this algorithm is implemented recursively. In general, if a dynamic programming algorithm is building an array of N results, and computing each result requires using the values of k other results from that array, then its time complexity is in Ω(Nk), where Ω indicates a lower bound. This should be clear: it takes Ω(k) time to use k values to compute a result, and you have to do that N times.
From the other side, if the computation doesn't do anything asymptotically more time-consuming than reading k values from the array, then O(Nk) is also an upper bound, so the time complexity is Θ(Nk).
So, by that logic we should expect that the time complexity of your algorithm is Θ(n²C), because it builds an array of size nC, computing each result uses Θ(n) other results from that array, and that computation is not dominated by something else.
However, your algorithm has an advantage over an iterative implementation because it doesn't necessarily compute every result in the array. For example, if the number 1 isn't in the array p then your algorithm won't compute N(C-1, n') for any n'; and if the numbers in p are all greater than or equal to C, then the loop is only executed once and the running time is dominated by having to initialize the array of size nC.
It follows that Θ(n²C) is the worst-case time complexity, and the best-case time complexity is Θ(nC).
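For concreteness, here is a minimal Python sketch of the same memoized recurrence (my own translation, not the original pseudocode: it uses a dictionary instead of the preallocated array, and a 0-indexed p). It appears to count the subsets of p that sum to exactly C:

def top_down_n(C, n, p):
    # Count subsets of p[0:n] that sum to exactly C, with memoization.
    memo = {}
    def N(c, m):
        # Base cases first, so out-of-range states never touch the memo.
        if c == 0:
            return 1
        if c < 0 or m < 1:
            return 0
        if (c, m) in memo:
            return memo[(c, m)]
        # Sum over the choice of the highest-indexed element taken.
        r = 0
        for j in range(m, 0, -1):
            r += N(c - p[j - 1], j - 1)  # p is 0-indexed here
        memo[(c, m)] = r
        return r
    return N(C, n)

print(top_down_n(3, 2, [1, 2]))  # 1, since only {1, 2} sums to 3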
Does manipulating n have any impact on the O of an algorithm?
recursive code for example:
public void Foo(int n)
{
    n -= 1;
    if (n <= 0) return;
    n -= 1;
    if (n <= 0) return;
    Foo(n);
}
Does the reassignment of n impact O(N)? Sounds intuitive to me...
Does this algorithm have O(N) by dropping the constant? Technically, since it's decrementing n by 2, it would not have the same mathematical effect as this:
public void Foo(int n) // O(log n)
{
    if (n <= 0) return;
    Console.WriteLine(n);
    Foo(n / 2);
}
But wouldn't the halving of n also matter for the O(N) analysis, since you are only touching half of n? To be clear, I am learning O notation and its subtleties. I have been looking for cases like the first example, but I am having a hard time finding such a specific answer.
The reassignment of n itself is not really what matters when talking about O notation. As an example consider a simple for-loop:
for i in range(n):
    do_something()
In this algorithm, we do something n times. This would be equivalent to the following algorithm:
while n > 0:
    do_something()
    n -= 1
And it is equivalent to the first recursive function you presented. So what really matters is how many computations are done relative to the input size, which is the original value of n.
For this reason, all three of these algorithms are O(n) algorithms, since all three decrease the 'input size' by one each time. Even if they had decreased it by 2, they would still be O(n) algorithms, since constants don't matter in O notation. Thus the following algorithm is also an O(n) algorithm:
while n > 0:
    do_something()
    n -= 2
or
while n > 0:
    do_something()
    n -= 100000
However, the second recursive function you presented is an O(log n) algorithm, as you've written in the comments. Intuitively, what happens is that when you halve the input size every time, the number of steps corresponds exactly to the logarithm in base two of the original input number. Consider the following:
n = 32. The algorithm halves every time: 32 -> 16 -> 8 -> 4 -> 2 -> 1.
In total, we did 5 computations. Equivalently log2(32) = 5.
So to recap, what matters is the original input size and how many computations are done relative to that input size. Whatever constant may affect the computations does not matter.
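If it helps to see the difference concretely, here is a small sketch of mine (the function names are made up) that counts the steps of both variants:

def calls_linear(n):
    # Decrement by 2 per step: about n/2 steps, still O(n).
    count = 0
    while n > 0:
        n -= 2
        count += 1
    return count

def calls_log(n):
    # Halve per step: about log2(n) steps, O(log n).
    count = 0
    while n > 0:
        n //= 2
        count += 1
    return count

print(calls_linear(1024))  # 512: grows linearly with n
print(calls_log(1024))     # 11: grows logarithmically with n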
If I misunderstood your question, or you have follow-up questions, feel free to comment on this answer.
O(n) time is simply one loop; O(n²) is a loop inside a loop, where both loops run about kn times (k being a constant). The pattern continues. For any finite integer k, an O(nᵏ) algorithm can be constructed by hand by simply nesting k loops inside one another, but what about O(nⁿ), where n is an arbitrary value that grows without bound?
I was thinking a while loop would work here, but how do we set up the break condition? Additionally, I believe O(nⁿ) complexity can be implemented using recursion, but how would that look in pseudocode?
How do you construct a piece of algorithm that runs in O(nⁿ) using only loops or recursion?
A very simple iterative solution would be to calculate nⁿ and then count up to it:
total = 1
for i in range(n):
    total *= n           # after this loop, total == n**n
for i in range(total):
    pass                 # do something that does O(1) work
This could also be written recursively:
def calc_nn(n, k):
    # Computes n**k recursively.
    if k == 0:
        return 1
    return n * calc_nn(n, k - 1)

def count_to(k):
    # Performs k units of O(1) work via recursion.
    if k != 0:
        count_to(k - 1)

def recursive_version(n):
    count_to(calc_nn(n, n))
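As a quick sanity check (my own instrumentation, not part of the answer above), you can count the O(1) steps for a small n. Note that count_to recurses nⁿ levels deep, so only tiny inputs are practical before hitting the recursion limit:

steps = 0

def count_to(k):
    global steps
    if k != 0:
        steps += 1       # one O(1) unit of work per call
        count_to(k - 1)

count_to(3 ** 3)
print(steps)             # 27, i.e. 3**3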
I am learning about calculating the time complexity of an algorithm, and there are two examples where I can't get my head around why the time complexity is different from what I calculated.
After doing the reading, I learned that a for-loop whose counter increases once per iteration has time complexity O(n), and that a nested for-loop with a different iteration condition is O(n*m).
This is the first question, where I said the time complexity was O(n), but the solution says it is O(1):
function PrintColours():
    colours = { "Red", "Green", "Blue", "Grey" }
    foreach colour in colours:
        print(colour)
This is the second one, where I said the time complexity was O(n^2), but the solution says it is O(n):
function CalculateAverageFromTable(values, total_rows, total_columns):
    sum = 0
    n = 0
    for y from 0 to total_rows:
        for x from 0 to total_columns:
            sum += values[y][x]
            n += 1
    return sum / n
What am I getting wrong with these two questions?
There are several ways of denoting the runtime of an algorithm. One of the most widely used notations is Big-O notation.
Link to Wikipedia: https://en.wikipedia.org/wiki/Big_O_notation
big O notation is used to classify algorithms according to how their
run time or space requirements grow as the input size grows.
Now, while the mathematical definition of the notation might be daunting, you can think of it as a polynomial function of the input size where you strip away all the constants and lower-degree terms.
For example: ax^2 + bx + c in Big-O would be O(x^2) (we stripped away the constants a, b and c, and the lower-degree term bx).
Now, let's consider your examples. But before doing so, let's assume each operation takes a constant time c.
First example:
The input is colours = { "Red", "Green", "Blue", "Grey" }, and you are looping through these elements in your for loop. As the input size is fixed at four, the runtime is 4 * c. That is a constant runtime, and constant runtime is written as O(1) in Big-O.
Second example:
The inner for loop runs total_columns times, and it has two operations:
for x from 0 to total_columns:
    sum += values[y][x]
    n += 1
So it takes 2c * total_columns time. The outer for loop runs total_rows times, giving a total time of total_rows * (2c * total_columns) = 2c * total_rows * total_columns. In Big-O this is written as O(total_rows * total_columns) (we stripped away the constant).
When you get out of the outer loop, n, which was initially set to 0, will have become total_rows * total_columns, and that's why they stated the answer as O(n).
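To make the counting concrete, here is a small Python sketch of mine (not from the original question) that tallies how often the loop body runs:

def average_with_op_count(values):
    total_rows = len(values)
    total_columns = len(values[0])
    s = 0
    ops = 0
    for y in range(total_rows):
        for x in range(total_columns):
            s += values[y][x]
            ops += 1      # one unit of work per cell
    return s / ops, ops

# A 3 x 4 table: the loop body runs 12 times, i.e. rows * columns.
print(average_with_op_count([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]))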
One good definition of time complexity is:
"It is the number of operations an algorithm performs to complete its
task with respect to the input size".
If we consider the following question, the input size can be defined as X = total_rows * total_columns. Then, what is the number of operations? It is X again, because there will be X additions from the operation sum += values[y][x] (neglecting the increment n += 1 for simplicity). Now suppose we double the array size from X to 2*X. How many operations will there be? 2*X again. As you can see, the increase in the number of operations is linear in the input size, which makes the time complexity O(N).
function CalculateAverageFromTable(values, total_rows, total_columns):
    sum = 0
    n = 0
    for y from 0 to total_rows:
        for x from 0 to total_columns:
            sum += values[y][x]
            n += 1
    return sum / n
For your first question, note that colours = { "Red", "Green", "Blue", "Grey" } is a hard-coded collection of fixed size (in Python, {} defines a set). The loop always iterates over exactly four elements, regardless of any input, so the running time does not grow with anything and is therefore O(1).
for(i=1;i<=n;i=pow(2,i)) { print i }
What will be the time complexity of this?
The approximate kth value of i will be pow(2, pow(2, pow(2, ...))), nested k times (a power tower of 2s of height k).
How can this value, say the kth value of i with i < n, be solved for k?
What you have is similar to tetration(2,n), but it is not quite that, because the ending condition is different.
The complexity greatly depends on the domain and the implementation. From your sample code I infer the real domain and integers.
This function grows really fast, so after 5 iterations you need bigints, where even +, -, *, /, <<, >> are not O(1). The implementations of pow and print also have a great impact.
For small n < tetration(2,4) you can assume the complexity is O(1), as there is no asymptotic behaviour to speak of for such small n.
Beware that pow is a floating-point function in most languages, and raising 2 to the power i can be translated into a simple bit shift, so let us assume this:
for (i=1;i<=n;i=1<<i) print(i);
We could use the previous state of i to compute 1<<i incrementally, like this:
i_new = i << (i - i0); i0 = i; i = i_new;
but there is no speedup on such big numbers.
Now the complexity of a decadic (base-10) print(i) is one of the following:
O(log(i))              // power-of-10 datawords (like 1000000000 for 32 bit)
O((log(i))^2)          // power-of-2 datawords, naive print implementation
O(log(i)·log(log(i)))  // power-of-2 datawords, subdivision or FFT-based print implementation
The complexity of the bit shift 1<<i and the comparison i<=n is:
O(log(i))              // power-of-2 datawords
So choosing the best print implementation for power-of-2 datawords leads to a per-iteration cost of:
O(log(i)·log(log(i)) + log(i) + log(i)) -> O(log(i)·log(log(i)))
At first glance one would think we need to know the number of iterations k as a function of n:
n = tetration(2,k)
k = slog2(n)
or, in Knuth's up-arrow notation, which is directly related to the Ackermann function:
n = 2↑↑k
k = 2↓↓n
but the number of iterations is so small compared to the complexity of the work inside the loop, and successive iterations grow so fast that each previous iteration is a negligible fraction of the next one, that we can ignore all but the last term/iteration...
After all these assumptions, I get the final complexity:
O(log(n)·log(log(n)))
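For intuition, here is a small Python sketch of mine (Python integers are arbitrary precision, so the bit shift is exact) showing how few iterations the loop ever performs:

n = 10 ** 100
i = 1
iterations = 0
while i <= n:
    iterations += 1
    print(i)             # 1, 2, 4, 16, 65536
    i = 1 << i           # i = 2**i, the next value in the tower
print("iterations:", iterations)  # only 5, even for n = 10**100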
Is the time complexity of the following code O(NV^2)?
for i from 1 to N:
    for j from 1 to V:
        for k from 1 to A[i]:   // max(A) = V
            z = z + k
Yeah, whenever we talk about O-notation, we always think about the upper bound (or the worst case).
So the complexity of this code will be equal to
O(N * V * maximum_value_of_A)
= O(N * V * V)   // since the maximum value of A is V, the third loop can iterate at most from 1 to V, i.e. V times
= O(N * V^2).
For sure it is O(NV^2), as that means the code is never slower than that. Because max(A) = V, you can say the worst case is when every index of A holds V. If so, the complexity is bounded by O(N * V * V).
You can reason, very roughly, that the cost of the for-k loop is O(avg(A)) on average. This allows us to say that the whole function is Omega(N * V * avg(A)), where avg(A) <= V.
Theta notation (meaning tight asymptotic complexity) could be stated as Theta(N * V * f(V)), where f is some function that never grows faster than V but is not constant.
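A tiny counting sketch of mine (not from the question) for the worst case where every A[i] equals V:

def inner_steps(N, V, A):
    steps = 0
    for i in range(N):
        for j in range(V):
            for k in range(A[i]):   # max(A) == V in the worst case
                steps += 1
    return steps

N, V = 4, 5
print(inner_steps(N, V, [V] * N))   # 100, i.e. N * V * V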