What's the complexity of func1, if func2 is O(n) and func3 is O(n^2)?
```c
void func1(int n) {
    int i, j, k;
    for (k = 0; k < n; k++)          // O(n)
        printf("%d", k);
    i = 0;
    while (i < 2*n) {                // O(n)
        for (j = i; j > 1; j--)      // Not executed
            for (k = 15; k > 0; k--)
                func2(n);
        for (j = n; j > 1; j--)      // O(n)
            func3(n);                // O(n^2)
        i++;
    }
}
```
So that's O(n^2)*O(n)*O(n) + O(n) = max(O(n^4), O(n)) = O(n^4)?
Thanks!
```c
void func1(int n) {
    int i, j, k;
    for (k = 0; k < n; k++)
        printf("%d", k);
    i = 0;
    while (i < 2*n) {                // O(2*n) = O(n)
        for (j = i; j > 1; j--)      // O(n)
            for (k = 15; k > 0; k--) // O(15) = O(1)
                func2(n);            // O(n)
        for (j = n; j > 1; j--)      // O(n)
            func3(n);                // O(n^2)
        i++;
    }
}
```
For sequences, find the maximum of the steps.
As a rule of thumb, nested loops multiply, but if the loop ranges are not independent you need to examine them carefully to make sure multiplication still applies (see Paul's comment for an example).
If a function's execution grows with the input but only up to a fixed limit, is it considered O(n) or O(1)?
for example:
```c
void func(int n)
{
    if (n > 1000)
    {
        for (int i = 0; i < 1000; i++)
        {
            // do thing
        }
    }
    else
    {
        for (int i = 0; i < n; i++)
        {
            // do same thing
        }
    }
}
```
is this function O(n) or O(1)?
It is O(1), not O(n). Once n exceeds 1000, the loop body runs exactly 1000 times no matter how large n gets, so the running time is bounded by a constant.
Big-O analysis is asymptotic: it intentionally ignores an arbitrarily large initial section of the performance function, in order to accurately describe the large-scale behavior.
Question 1.
My Answer: 1 + n^2 * n = n^3
Correct Answer: O(n)
```c
int f(int n) {
    if (n < 1000000)
        return 0;
    for (int i = 0; i < (n*n); i++)
        return f(n-1);
}
```
Question 2.
My Answer: n * n * logn = n^2*logn
Correct Answer: O(n^3)
```c
int f(int n) {
    for (int i = 1; i < (n/2); i++)
        for (double j = 0; j < 100; j += (100.0/n))
            doSomething(n/2);  // doSomething(m) has time complexity of O(m)
    return 0;
}
```
Question 3.
My Answer: 1 + n * (logn + 1) = O(nlogn)
Correct Answer: O(logn)
```c
void f(int n) {
    if (n < 1) return;
    g(n-1);
}

void g(int n) {
    f(n/2);
    doOne();  // doOne has time complexity O(1)
}
```
Question 1
```c
int f(int n) {
    if (n < 1000000)
        return 0;
    for (int i = 0; i < (n*n); i++)
        return f(n-1);
}
```
The for loop does not actually loop, because its body is a return statement, so there is at most one iteration. This means you can simplify the code to:
```c
int f(int n) {
    if (n <= 0)
        return 0;
    return f(n-1);
}
```
(simplified for the purposes of the O(n) analysis)
Here you can see why it is O(n): the function counts down until it hits the recursion stop condition. The fact that there is a "high" value check for n < 1000000 doesn't matter when you call it with something like f(5*10^300).
Question 2
```c
int f(int n) {
    for (int i = 1; i < (n/2); i++)
        for (double j = 0; j < 100; j += (100.0/n))
            doSomething(n/2);  // doSomething(m) has time complexity of O(m)
    return 0;
}
```
For the purposes of the O(n) analysis you can simplify some lines:
for (int i=1; i< (n/2); i++)
This can be simplified to:
for (int i=1; i<n; i++)
Therefore it's O(n) as already identified by you.
for(double j = 0; j < 100; j += (100.0/n))
This can be simplified as:
for(double j = 0; j < 1; j += (1.0/n))   (divided by 100)
for(double j = 0; j < n; j += 1.0)   (multiplied by n)
And again, a simple O(n) loop.
doSomething (n/2);
That is by definition an O(n) statement.
So in total you have O(n)*O(n)*O(n), which is O(n^3).
Question 3
```c
void f(int n) {
    if (n < 1) return;
    g(n-1);
}

void g(int n) {
    f(n/2);
    doOne();  // doOne has time complexity O(1)
}
```
Not sure how you got the O(n*log(n)) here, because not every value of n is visited. You start at n and the values are n/2, n/4, n/8, n/16, n/32, n/64, n/128, ... until the termination condition if (n < 1) is reached. This is a simple O(log(n)) recursion.
So far I have worked out the code below, which has time complexity O(n^2). Can anyone help me increase the efficiency to O(n log n) or below?
```c
bool checkDuplicates(int array[], int n)
{
    int i, j;
    for (i = 0; i < n; i++)
    {
        for (j = i+1; j < n; j++)
        {
            if (array[i] == array[j])
                return true;
        }
    }
    return false;
}
```
You can quicksort the array (O(n log n)); then finding duplicates becomes linear (you only need to check consecutive elements).
Use a sorting algorithm (e.g. mergesort which has O(n log n)) and then check for consecutive elements that are equal.
Or modify the merge step of mergesort to detect equal elements.
Also, use size_t for indices, not int.
Assuming you can alter the array, sort it first (O(n log n), provided you're using a good algorithm). Next, walk the array and compare sequential elements (O(n)).
If you use C++, then this is a way:
```cpp
#include <cstddef>
#include <set>

bool hasDuplicates(int array[], int n)
{
    std::set<int> no_duplicates(array, array + n);
    return no_duplicates.size() != static_cast<std::size_t>(n);
}
```
You can use Sorting, Set or even Unordered Map to achieve your task.
If you can alter the contents of the array, prefer sorting
If you can't alter the contents of the array, set is a good choice
If you can't alter the contents of the array & want to achieve O(N) complexity prefer unordered map
Using sorting
```cpp
// Checking duplicates with sorting, complexity: O(n log(n))
bool checkDuplicatesWithSorting(int array[], int n)
{
    sort(array, array + n);
    for (int i = 1; i < n; i++)
    {
        if (array[i] == array[i-1])
            return true;
    }
    return false;
}
```
Using Set
```cpp
// Checking duplicates with a set, complexity: O(n log(n))
bool checkDuplicatesWithSet(int array[], int n)
{
    set<int> s(array, array + n);
    return s.size() != n;
}
```
Using Unordered Map
```cpp
// Checking duplicates with an unordered map, complexity: O(n) (average case)
bool checkDuplicatesWithHashMap(int array[], int n)
{
    unordered_map<int, bool> uo_map;
    for (int i = 0; i < n; i++) {
        if (uo_map.find(array[i]) == uo_map.end())
            uo_map[array[i]] = true;
        else
            return true;
    }
    return false;
}
```
When I was reading about the space complexity of merge sort, I learned that it is O(n + log n); the O(log n) part is the stack space used by the recursive calls.
But heapsort also uses a recursive procedure, heapify. Why is the space complexity of heapsort O(1)?
The code I am reading is:
```java
public class HeapSort {
    public void buildheap(int array[]) {
        int length = array.length;
        int heapsize = length;
        int nonleaf = length / 2 - 1;
        for (int i = nonleaf; i >= 0; i--) {
            heapify(array, i, heapsize);
        }
    }

    public void heapify(int array[], int i, int heapsize) {
        int smallest = i;
        int left = 2*i + 1;
        int right = 2*i + 2;
        if (left < heapsize) {
            if (array[i] > array[left]) {
                smallest = left;
            }
            else smallest = i;
        }
        if (right < heapsize) {
            if (array[smallest] > array[right]) {
                smallest = right;
            }
        }
        if (smallest != i) {
            int temp;
            temp = array[i];
            array[i] = array[smallest];
            array[smallest] = temp;
            heapify(array, smallest, heapsize);
        }
    }

    public void heapsort(int array[]) {
        int heapsize = array.length;
        buildheap(array);
        for (int i = 0; i < array.length - 1; i++) {
            // swap the first and the last
            int temp;
            temp = array[0];
            array[0] = array[heapsize-1];
            array[heapsize-1] = temp;
            // heapify the array
            heapsize = heapsize - 1;
            heapify(array, 0, heapsize);
        }
    }
}
```
The space complexity of the Java code you posted is not O(1) because it consumes a non-constant amount of stack space.
However, this is not the usual way to implement heapsort. The recursion in the heapify method can easily be replaced by a simple while loop (without introducing any additional data structures like a stack). If you do that, it will run in O(1) space.
The heapify() function can be implemented tail-recursively. Many functional languages guarantee that tail-recursive functions use a constant amount of stack space.
I'm still learning about complexity measurement using Big-O notation, and was wondering if I'm correct to say that the following method's complexity is O(n*log_4(n)), where the 4 is a subscript.
```java
public static void f(int n)
{
    for (int i = n; i > 0; i--)
    {
        int j = n;
        while (j > 0)
            j = j / 4;
    }
}
```
Yes, you are correct: the complexity of the function is O(n*log_4(n)).
log_4(n) = ln(n) / ln(4), and ln(4) is a constant, so if your function has complexity O(n*log_4(n)), it is also true that it has complexity O(n*ln(n)).
Did you mean
```java
public static void f(int n)
{
    for (int i = n; i > 0; i--)
    {
        int j = i;  // Not j = n.
        while (j > 0)
            j = j / 4;
    }
}
```
?
In that case too, you are correct: it is O(n log n). Using the 4 as a subscript is correct as well, but it only makes the notation more cumbersome to write.
Even with the j = n line, it is O(n log n).
In fact, to be more precise, you can say it is Theta(n log n).
Yes, you are right; the complexity is n*log_4(n).