Does manipulating n have any impact on the O of an algorithm?
Recursive code, for example:
public void Foo(int n)
{
    n -= 1;
    if (n <= 0) return;
    n -= 1;
    if (n <= 0) return;
    Foo(n);
}
Does the reassignment of n impact O(N)? Sounds intuitive to me...
Is this algorithm O(N) once the constant is dropped? Technically, since it decrements n by 2, it does not have the same mathematical effect as this:
public void Foo(int n) // O(log n)
{
    if (n <= 0) return;
    Console.WriteLine(n);
    Foo(n / 2);
}
But wouldn't halving n also count toward O(N), since you are only touching half of n? To be clear, I am learning big-O notation and its subtleties. I have been looking for cases like the first example, but I am having a hard time finding such a specific answer.
The reassignment of n itself is not really what matters when talking about O notation. As an example consider a simple for-loop:
for i in range(n):
    do_something()
In this algorithm, we do something n times. This is equivalent to the following algorithm:
while n > 0:
    do_something()
    n -= 1
and it is essentially equivalent (up to a constant factor) to the first recursive function you presented. So what really matters is how many computations are done relative to the input size, which is the original value of n.
For this reason, all three of these algorithms are O(n), since each of them decreases the 'input size' by a constant each time. Even if they decreased it by 2 each time, they would still be O(n) algorithms, since constants don't matter in O notation. Thus the following algorithm is also an O(n) algorithm:
while n > 0:
    do_something()
    n -= 2
or
while n > 0:
    do_something()
    n -= 100000
However, the second recursive function you presented is an O(log n) algorithm, as you've noted in the code comment. Intuitively, halving the input size on every call corresponds exactly to taking the base-2 logarithm of the original input. Consider the following:
n = 32. The algorithm halves every time: 32 -> 16 -> 8 -> 4 -> 2 -> 1.
In total, we did 5 computations. Equivalently log2(32) = 5.
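If you want to check this empirically, a tiny counting sketch (my own illustration, not part of the original code) confirms the arithmetic:
import math

def count_halvings(n):
    # Count how many times n can be halved before reaching 1.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(count_halvings(32), math.log2(32))  # prints: 5 5.0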
So to recap, what matters is the original input size and how many computations are done relative to that input size. Whatever constant affects the computations does not matter.
If I misunderstood your question, or you have follow-up questions, feel free to comment on this answer.
Link to problem: Ugly Numbers
How would you find the big O of the brute-force (Simple Method Approach) solution for Ugly Numbers?
I see that for this part of the code:
/* Function to check if a number is ugly or not */
int isUgly(int no)
{
    no = maxDivide(no, 2);
    no = maxDivide(no, 3);
    no = maxDivide(no, 5);
    return (no == 1) ? 1 : 0;
}
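For context, maxDivide is not shown in the snippet above; judging from how it is used, it presumably keeps dividing no by the given factor for as long as it divides evenly. A rough Python sketch of that assumption (not the original C code) would be:
def max_divide(a, b):
    # Divide a by b for as long as b divides it evenly (assumed behaviour of maxDivide).
    while a % b == 0:
        a //= b
    return a

def is_ugly(no):
    # A number is ugly if removing all factors of 2, 3 and 5 leaves 1.
    no = max_divide(no, 2)
    no = max_divide(no, 3)
    no = max_divide(no, 5)
    return no == 1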
This call takes about log_2(x) + log_3(x) + log_5(x) steps, where x = no.
So this would mean the runtime is (log_2(x) + log_3(x) + log_5(x)) * n, where x is the resulting output. However, the result of an algorithm can't be part of the big O notation, right? If it can't, this would reduce to c*n, right, where c > the result? What is the proper method of proof for this?
Ugly numbers are also known as regular numbers. As you can see from the Wikipedia article, it is known that the number of regular numbers up to m is
(ln(m*sqrt(30)))^3 / (6*ln(2)*ln(3)*ln(5)) + O(ln(m))
In other words, your getNthUglyNo will call isUgly to get the n-th regular number
~ 1/sqrt(30) * exp((n*6*ln(2)*ln(3)*ln(5))^(1/3))
times.
The probability that a uniform random integer x between 0 and M is divisible by 2^y is asymptotically 1/2^y, and so the mean number of times that the loop in the call maxDivide(no, 2) iterates is O(1), and equivalently for maxDivide(no, 3) and maxDivide(no, 5).
Consequently your algorithm is
Theta( exp((n*6*ln(2)*ln(3)*ln(5))^(1/3)) )
which is approximately
Theta( exp(1.9446 * n^(1/3)) )
Also note that plugging n = 500 into the asymptotic number of iterations mentioned above gives 921498, which is pretty close to the number of iterations #sowrov found in their answer (937500).
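For reference, here is a small check of that estimate (my own snippet, just plugging n = 500 into the formula above):
import math

def approx_iterations(n):
    # Asymptotic position of the n-th regular (ugly) number:
    # ~ 1/sqrt(30) * exp((6*n*ln2*ln3*ln5)^(1/3))
    c = 6 * math.log(2) * math.log(3) * math.log(5)
    return math.exp((n * c) ** (1 / 3)) / math.sqrt(30)

print(round(approx_iterations(500)))  # roughly 9.2 * 10^5, in line with the count quoted above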
The complexity of the isUgly method is O(log N), where N is the input, because the complexity of maxDivide is O(log N) and calling that function a fixed number of times (3 in this case) does not change the complexity.
However, the result of an algorithm can't be a part of the Big O notation right?
Yes, the result of a function is irrelevant while calculating the complexity of that function.
The time complexity of getNthUglyNo is unknown or ~infinite! For N=500 it runs 937500 times!
I'm studying algorithms - time complexity and recursion.
I'm actually OK with solving recurrences, since it's simple math, but the code part is the problem.
For example, this is the problem I've brought:
https://brilliant.org/practice/big-o-notation/?problem=complexityrun-time-analysis-2-2
public int P(int x, int n) {
    if (n == 0) {
        return 1;
    }
    if (n % 2 == 1) {
        int y = P(x, (n - 1) / 2);
        return x * y * y;
    }
    else {
        int y = P(x, n / 2);
        return y * y;
    }
}
It is a simple power function. T(n) = O(g(n)) is the running time of this function for large n, and I have to find g(n).
The solution says,
“When the power is odd an extra multiplication operation is performed. To work out time complexity, let us first look at the worst scenario, meaning let us assume that one additional multiplication operation is needed.”
However, I do not understand the next part; the solution says:
Recursion relation is
T(n) = T(n/2) + 3, T(1)=1
1) Why is the constant part 3?
if (n % 2 == 1) {
    int y = P(x, (n - 1) / 2);
    return x * y * y;
}
2) I also don't get exactly why T(1) = 1.
I'm puzzled about which operations we should consider while calculating time complexity.
For example, the T(1)=1 part must be related to
if (n == 0) {
    return 1;
}
if (n % 2 == 1) {
    int y = P(x, (n - 1) / 2);
    return x * y * y;
}
this part, and I want to ask whether T(1)=1 comes from the if statement, the assignment, or the return statement.
I understand everything afterwards, solving the recurrence above, but I'm stuck on the recurrence relation itself.
Please help me, algo gurus.
which operations should we consider while calculating time complexity?
The answer will disappoint you a bit: it doesn't matter which operations you count. That's why we use big-O when analysing algorithms and expressing their time/memory requirements. It is an asymptotic notation that describes what happens to the algorithm for large values of n. By the definition of big-O, we can say that both (1/2)n^2 and 10n^2 + 6n + 100 are O(n^2), even though they are not the same function. Counting all the operations would only change some constant factors, and that's why it doesn't really matter which ones you count.
By the above, the constants are simply O(1). This disregards details, since both 10 and 10000 are O(1), for example.
One could argue that specifying the exact number of operations in the expression T(n) = T(n/2) + 3 is not quite correct, since there is no precise definition of what an operation is; moreover, the same operation might take a different amount of time on different computers, so exactly counting the number of operations is a bit meaningless at best and simply wrong at worst. A better way of putting it is T(n) = T(n/2) + O(1).
T(1)=1 represents the base case, which is solved in constant time (read: a constant number of operations each time). Again, a better (more formal) way of saying that is T(1) = O(1).
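To see what the recurrence is really counting, you can instrument the function and count the multiplications it performs. Here is a rough Python port of the Java method above (my sketch, not from the original solution):
import math

def power(x, n, counter):
    # Fast exponentiation; counter[0] records how many multiplications are performed.
    if n == 0:
        return 1
    if n % 2 == 1:
        y = power(x, (n - 1) // 2, counter)
        counter[0] += 2   # x * y * y: two multiplications in the odd case
        return x * y * y
    else:
        y = power(x, n // 2, counter)
        counter[0] += 1   # y * y: one multiplication in the even case
        return y * y

for n in (15, 16, 1000, 10**6):
    c = [0]
    power(2, n, c)
    print(n, c[0], round(math.log2(n), 1))  # the count grows like log2(n), not like n
Whether you call the work per level 1, 2 or 3 operations only changes the constant; the number of levels is what gives the log n.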
How do I prove that this algorithm is O(log log n)?
i <-- 2
while i < n
    i <-- i*i
Well, I believe we should first start with n / 2^k < 1, but that will yield O(log n). Any ideas?
I want to look at this in a simple way: what happens after one iteration, after two iterations, and after k iterations? I think this way I'll be able to understand better how to compute this correctly. What do you think about this approach? I'm new to this, so excuse me.
Let us use the name A for the presented algorithm. Let us further assume that the input variable is n.
Then, strictly speaking, A is not in the runtime complexity class O(log log n). A must be in Omega(n), i.e. in terms of runtime complexity it is at least linear. Why? There is i*i, a multiplication whose operand i depends on n. A naive multiplication approach might require quadratic runtime complexity. More sophisticated approaches will reduce the exponent, but not below linear in terms of n.
For the sake of completeness, the comparison < is also a linear operation.
For the purpose of the question, we can assume that multiplication and comparison are done in constant time. Then we can formulate the question: how often do we have to apply the constant-time operations < and * until A terminates for a given n?
Simply speaking, the multiplication reduces the effort logarithmically, and applying it iteratively leads to a further logarithmic reduction. How can we show this? Thanks to the simple structure of A, we can transform A into an equation that we can solve directly.
A repeatedly squares i, so after k iterations it has computed 2^(2^k). When is 2^(2^k) = n? To solve this for k, we apply the logarithm (base 2) twice, i.e., ignoring the bases, we get k = log log n. The < can be ignored due to the O notation.
To answer the very last part of the original question, we can also look at an example for each iteration. We note the value of i at the end of the while-loop body for each iteration of the while loop:
1: i = 4 = 2^2 = 2^(2^1)
2: i = 16 = 4*4 = (2^2)*(2^2) = 2^(2^2)
3: i = 256 = 16*16 = (4*4)*(4*4) = (2^2)*(2^2)*(2^2)*(2^2) = 2^(2^3)
4: i = 65536 = 256*256 = 16*16*16*16 = ... = 2^(2^4)
...
k: i = ... = 2^(2^k)
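The same argument can be checked empirically; a short counting sketch (mine, not part of the original answer):
import math

def count_squarings(n):
    # Run the algorithm from the question and count the loop iterations.
    i, steps = 2, 0
    while i < n:
        i *= i
        steps += 1
    return steps

for n in (10, 1000, 10**6, 10**18):
    print(n, count_squarings(n), math.log2(math.log2(n)))  # the step count tracks log2(log2(n))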
So I can picture what an algorithm with complexity n^c looks like - it is just c nested for loops.
for (var i = 0; i < dataset.len; i++) {
    for (var j = 0; j < dataset.len; j++) {
        // do stuff with i and j
    }
}
Log n is something that splits the data set in half every time; binary search does this (I'm not entirely sure what the code for it looks like).
But what is a simple example of an algorithm that is c^n, or more specifically 2^n? Is O(2^n) based on loops through data? Or on how the data is split? Or something else entirely?
Algorithms with running time O(2^N) are often recursive algorithms that solve a problem of size N by recursively solving two smaller problems of size N-1.
This program, for instance, prints out in pseudocode all the moves necessary to solve the famous "Towers of Hanoi" problem for N disks:
void solve_hanoi(int N, string from_peg, string to_peg, string spare_peg)
{
    if (N < 1) {
        return;
    }
    if (N > 1) {
        solve_hanoi(N - 1, from_peg, spare_peg, to_peg);
    }
    print "move from " + from_peg + " to " + to_peg;
    if (N > 1) {
        solve_hanoi(N - 1, spare_peg, to_peg, from_peg);
    }
}
Let T(N) be the time it takes for N disks.
We have:
T(1) = O(1)
and
T(N) = O(1) + 2*T(N-1) when N>1
If you repeatedly expand the last term, you get:
T(N) = 3*O(1) + 4*T(N-2)
T(N) = 7*O(1) + 8*T(N-3)
...
T(N) = (2^(N-1)-1)*O(1) + (2^(N-1))*T(1)
T(N) = (2^N - 1)*O(1)
T(N) = O(2^N)
To actually figure this out, you just have to know that certain patterns in the recurrence relation lead to exponential results. Generally, T(N) = ... + C*T(N-1) with C > 1 means O(C^N). See:
https://en.wikipedia.org/wiki/Recurrence_relation
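If you would rather verify the closed form than derive it, you can count the moves instead of printing them; a small sketch (mine, adapted from the pseudocode above):
def hanoi_moves(n):
    # Number of moves solve_hanoi would print for n disks.
    if n < 1:
        return 0
    return hanoi_moves(n - 1) + 1 + hanoi_moves(n - 1)

for n in range(1, 8):
    print(n, hanoi_moves(n), 2**n - 1)  # the last two columns always match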
Think about, e.g., iterating over all possible subsets of a set. This kind of algorithm is used, for instance, for a generalized knapsack problem.
If you find it hard to understand how iterating over subsets translates to O(2^n), imagine a set of n switches, each of them corresponding to one element of the set. Now, each of the switches can be turned on or off. Think of "on" as the element being in the subset. Note how many combinations are possible: 2^n.
If you want to see an example in code, it's usually easier to think about recursion here.
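A minimal recursive sketch of iterating over all subsets (my illustration, assuming a plain Python list called items) could look like this:
def subsets(items):
    # Each element is either left out or included, which doubles the count at every step.
    if not items:
        return [[]]
    head, rest = items[0], items[1:]
    without_head = subsets(rest)
    with_head = [[head] + s for s in without_head]
    return without_head + with_head

print(len(subsets(['a', 'b', 'c'])))  # 8 == 2**3
Each element doubles the number of subsets, which is exactly where the 2^n comes from.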
Consider that you want to guess the PIN of a smartphone; the PIN is a 4-digit integer. You know that the maximum number of bits needed to hold a 4-digit number is 14 bits. So you will have to guess the correct 14-bit combination of this PIN out of 2^14 = 16384 possible values!
The only way is brute force. So, for simplicity, consider a simple 2-bit word that you want to guess; each bit has 2 possible values, 0 or 1. So all the possibilities are:
00
01
10
11
We know that an n-bit word has 2^n possible combinations. So 2^2 is 4 possible combinations, as we saw earlier.
The same applies to the 14-bit integer PIN: guessing the PIN requires you to solve a puzzle with 2^14 possible outcomes, hence an algorithm of time complexity O(2^n).
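As a tiny illustration of that brute force (my sketch, using a 4-bit 'PIN' instead of 14 bits so the search space stays small):
def brute_force_pin(secret, n_bits):
    # Try every one of the 2**n_bits combinations until the secret is found.
    for guess in range(2 ** n_bits):
        if guess == secret:
            return guess
    return None

print(brute_force_pin(secret=0b1011, n_bits=4))  # at most 2^4 = 16 guesses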
So those types of problems, where you have to try all possible combinations of the elements of a set S, have this O(2^n) time complexity. But the base of the exponent does not have to be 2. In the example above it is 2 because each element, each bit, has two possible values, which will not be the case in other problems.
Another good example of an O(2^n) algorithm is the recursive knapsack, where you have to try different combinations to maximize the value, and each element in the set has two possibilities: either we take it or we don't.
The Edit Distance problem has O(3^n) time complexity, since for each of the n characters of the string you have 3 choices: deletion, insertion, or replacement.
int Fibonacci(int number)
{
    if (number <= 1) return number;
    return Fibonacci(number - 2) + Fibonacci(number - 1);
}
Growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential - starting off very shallow, then rising meteorically.
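You can make that growth visible by counting the recursive calls; a quick Python sketch mirroring the method above (mine, not from the answer):
calls = 0

def fibonacci(number):
    # Naive recursive Fibonacci; the global counter records how often it is invoked.
    global calls
    calls += 1
    if number <= 1:
        return number
    return fibonacci(number - 2) + fibonacci(number - 1)

for n in (10, 20, 30):
    calls = 0
    fibonacci(n)
    print(n, calls)  # the call count grows exponentially with n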
My example of big O(2^n), and a much better one, is this:
public void solve(int n, String start, String auxiliary, String end) {
    if (n == 1) {
        System.out.println(start + " -> " + end);
    } else {
        solve(n - 1, start, end, auxiliary);
        System.out.println(start + " -> " + end);
        solve(n - 1, auxiliary, start, end);
    }
}
This method prints all the moves needed to solve the "Tower of Hanoi" problem.
Both examples use recursion to solve the problem and have O(2^n) running time.
c^N = all combinations of N elements from a c-sized alphabet.
More specifically, 2^N is all the numbers representable with N bits.
The common cases are implemented recursively, something like:
vector<int> bits;
int N;

void find_solution(int pos) {
    if (pos == N) {
        check_solution();
        return;
    }
    bits[pos] = 0;
    find_solution(pos + 1);
    bits[pos] = 1;
    find_solution(pos + 1);
}
Here is a code clip that computes the value sum of every combination of values in a goods array (values is a global array variable):
fun boom(idx: Int, pre: Int, include: Boolean) {
    if (idx < 0) return
    boom(idx - 1, pre + (if (include) values[idx] else 0), true)
    boom(idx - 1, pre + (if (include) values[idx] else 0), false)
    println(pre + (if (include) values[idx] else 0))
}
As you can see, it's recursive. We can nest loops to get polynomial complexity, and use recursion to get exponential complexity.
Here are two simple examples in Python with big O / Landau notation O(2^N):
# fibonacci
def fib(num):
    if num == 0 or num == 1:
        return num
    else:
        return fib(num - 1) + fib(num - 2)

num = 10
for i in range(0, num):
    print(fib(i))
# tower of Hanoi
def move(disk, from_rod, to_rod, aux_rod):
    if disk >= 1:
        # move the top disk-1 disks from the source rod to the auxiliary rod
        move(disk - 1, from_rod, aux_rod, to_rod)
        print("Move disk", disk, "from rod", from_rod, "to rod", to_rod)
        move(disk - 1, aux_rod, to_rod, from_rod)

n = 3
move(n, 'A', 'B', 'C')
Assuming that a set is a subset of itself, there are 2ⁿ possible subsets of a set with n elements.
Think of it this way: to make a subset, let's take one element. This element has two possibilities in the subset you're creating: present or absent. The same applies to all the other elements in the set. Multiplying all these possibilities, you arrive at 2ⁿ.
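A quick sanity check of that multiplication argument (a throwaway snippet, not part of the answer):
from itertools import combinations

def count_subsets(items):
    # Count subsets of every size, including the empty set and the full set.
    return sum(1 for k in range(len(items) + 1) for _ in combinations(items, k))

for n in range(6):
    print(n, count_subsets(list(range(n))), 2 ** n)  # both counts agree: 2^n subsets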