Can someone explain how I can calculate the worst-case time complexity of h3 here, given this code:
int g3(int n) {
    if (n <= 1)
        return 2;
    int goo = g3(n / 2);
    return goo * goo; // I have trouble with this line
}

int h3(int n) {
    return g3(g3(n)); // and trouble with this one too
}
I've tried to calculate the complexity; based on my calculation it's O(n log n), however that's wrong...
Is there a quick, systematic method to solve these kinds of problems fast and correctly?
(I usually use the recursion-tree method to calculate time complexity.)
g3 has O(log n) complexity: n is divided by 2 at every recursive call, so the number of calls is a logarithmic function of n. h3 has O(log n) too, and this is because the complexity depends on the function g3; the composition in the return value, g3(g3(n)), and the value of n don't matter, the key is g3's complexity. To get an n·log n complexity, the algorithm would need to visit all n values and do something logarithmic for each one, which is not the case here.
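If you want a quick, mechanical way to double-check answers like this, one option (a sketch of my own, not code from the question) is to instrument g3 with a call counter and simply measure how many calls h3(n) triggers; it complements the recursion-tree method nicely.

public class CountCalls {
    static int calls = 0;

    static int g3(int n) {
        calls++;
        if (n <= 1)
            return 2;
        int goo = g3(n / 2);
        return goo * goo;
    }

    static int h3(int n) {
        return g3(g3(n));
    }

    public static void main(String[] args) {
        // Keep n <= 16 so g3(n) itself fits in an int (the products inside the
        // outer call still overflow, but that doesn't change how many calls are
        // made, since each call just halves its argument).
        for (int n = 2; n <= 16; n *= 2) {
            calls = 0;
            h3(n);
            System.out.println("n = " + n + ", total g3 calls = " + calls);
        }
    }
}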
Related
I have this algorithm:
int f(int n) {
    int k = 0;
    while (true) {
        if (k == n * n) return k;
        k++;
    }
}
My friend says that it costs O(2^n). I don't understand why.
The input is n, and the while loop iterates n*n = n^2 times, hence the complexity is O(n^2).
This is based on your source code, not on the title.
For the title, this link may help: complexity of finding the square root.
From the answer by Emil Jeřábek, I quote:
The square root of an n-digit number can be computed in time O(M(n)) using e.g. Newton's iteration, where M(n) is the time needed to multiply two n-digit integers. The current best bound on M(n) is n log n 2^{O(log* n)}, provided by Fürer's algorithm.
You may also look at the interesting entry for sqrt on Wikipedia.
In my opinion the time cost is O(n^2).
This function returns the value k = n^2 after n^2 iterations of the while loop.
I'm Manuel's friend.
What you're not considering is that the input n has length log(n)... the time complexity would be n^2 if we treated the input length as n itself, but it isn't.
So let's take x = log(n) (the length of the input); then n = 2^x = 2^(log n) = n, and so far everything is consistent.
Now, if we calculate the cost as a function of the value n we get n^2, but n equals 2^x and we need the cost as a function of x (because time complexity is measured against the length of the input, not its value), so:
O(f) = n^2 = (2^x)^2 = 2^(2x) = 4^x = 2^(O(x)), i.e. exponential in the input length x
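To make that concrete, here is a small sketch of mine that tabulates the loop's iteration count against the input length x for n = 2^x:

public class GrowthTable {
    public static void main(String[] args) {
        // For n = 2^x the loop runs n * n = 4^x times:
        // polynomial in the value n, but exponential in the input length x.
        for (int x = 1; x <= 10; x++) {
            long n = 1L << x;
            long iterations = n * n;
            System.out.println("x = " + x + "  n = " + n + "  iterations = " + iterations);
        }
    }
}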
[image: calculation with Excel]
"In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows." (https://en.wikipedia.org/wiki/Big_O_notation)
Here's another explanation where the algorithm in question is a primality test: Why naive primality test algorithm is not polynomial
Time complexity of the method below? I'm calculating it as log(n) * log(n) = log(n).
public int isPower(int A) {
    if (A == 1)
        return 1;
    for (int i = (int) Math.sqrt(A); i > 1; i--) {
        int p = A;
        while (p % i == 0) {
            p = p / i;
        }
        if (p == 1)
            return 1;
    }
    return 0;
}
Worst-case complexity:
The for(..) loop runs sqrt(A) times.
The while(..) loop depends on the prime factorisation A = p_1^e_1 * p_2^e_2 * ... * p_n^e_n, so in the worst case it runs max(e_1, e_2, ..., e_n) times, which is roughly max(log_p_1(A), log_p_2(A), ...).
At most, the while(..) loop executes roughly log(A) times.
So the total rough worst-case complexity is sqrt(A) * log(A), leaving out constant factors.
The worst case happens for numbers A which are products of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ...
Average-case complexity:
Given that numbers which are products of different integers are more numerous, in a given range, than numbers which are simply powers of a single integer, a number chosen at random is more likely to be a product of different integers, i.e. A = n_1^e_1 * n_2^e_2 * ... Thus the average-case complexity is roughly the same as the worst-case complexity, i.e. sqrt(A) * log(A).
Best-case complexity:
The best case happens when the number A is indeed a power of a single integer/prime, i.e. A = n^e. The algorithm then takes less time. I leave computing the best-case complexity as an exercise.
PS. Another way to see this: to check whether a number is a power of a single prime/integer, one effectively has to factor the number into its prime factorisation (which is what this algorithm does), and that is essentially of the same complexity (see, for example, the complexity of factoring by trial division).
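If you want to see those bounds in actual numbers, here is a sketch (an instrumented copy of the method with an ops counter I added myself) that tallies the trial divisions and compares them with sqrt(A) * log2(A):

public class IsPowerCount {
    static long ops = 0;

    static int isPower(int A) {
        if (A == 1)
            return 1;
        for (int i = (int) Math.sqrt(A); i > 1; i--) {
            int p = A;
            while (p % i == 0) {
                ops++;      // one successful trial division
                p = p / i;
            }
            ops++;          // the final, failing divisibility test for this i
            if (p == 1)
                return 1;
        }
        return 0;
    }

    public static void main(String[] args) {
        // 1048576 = 2^20 (a prime power), 999983 is prime, 720720 is highly composite
        int[] samples = { 1 << 20, 999983, 720720 };
        for (int A : samples) {
            ops = 0;
            isPower(A);
            long bound = (long) (Math.sqrt(A) * (Math.log(A) / Math.log(2)));
            System.out.println("A = " + A + "  ops = " + ops + "  sqrt(A)*log2(A) ~ " + bound);
        }
    }
}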
SO should have MathJax support like cs.stackexchange has :p!
You iterate from sqrt(A) down to 2 and try to factorise. For a prime number your code iterates sqrt(A) times; that's its best case. If the number is 2^30, your code executes about
sqrt(2^30) * 30, i.e. sqrt(n) * log(n) times.
So your code's complexity is sqrt(n) * log(n).
Given the following code, what is the complexity of 3., and how would I represent simple algorithms with the following complexities?
O(n²+n)
O(n²+2n)
O(logn)
O(nlogn)
var c1 = new[] { 1, 2, 3 };
var c2 = new[] { 1, 2, 3 };

// 1.
// O(n)
foreach (var i in c1)
{
}

// 2.
// O(n²)
foreach (var i in c1)
{
    foreach (var j in c1)
    {
    }
}

// 3.
// O(nⁿ⁺ᵒ)?
foreach (var i in c1)
{
    foreach (var j in c2)
    {
    }
}
3 is O(n*m), or O(n^2) if the two collections are the same size.
O(n^2+n) is pointless because n is smaller than n^2. Just write O(n^2).
Most decent comparison sort algorithms run at O(n*log(n)). If you don't know any, look on Wikipedia.
A binary search is O(log(n)).
The outer foreach is executed n = |c1| times (where |x| denotes the size of x), while the inner foreach is executed m = |c2| times. That's O(n * m) in total.
how would I represent simple algorithms with the following complexities?
O(n²+n)
This is the same as O(n^2). Something that takes O(n^2) time would be everyone at a party drinking a toast with every other person, assuming each toast involves exactly two people and only one toast happens at a time.
O(n²+2n)
Same as above; the O(n^2) term dominates. Another example of an O(n^2) effort is planting trees in a square garden of length n, assuming it takes constant time to plant each tree, and that once you plant a tree other trees are excluded from its vicinity.
O(logn)
An example of this would be finding a word in a dictionary by repeatedly picking the midpoint of the region of pages you need to search next. (In other words, a binary search.)
O(nlogn)
Use the above algorithm, but now you have to find every word in the dictionary.
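To make the dictionary idea concrete, here is a minimal binary-search sketch of my own (not code from the question); each comparison halves the region that can still contain the word:

public class DictionaryLookup {
    // Each step halves the region that can still contain the word,
    // so at most about log2(n) comparisons are needed: O(log n).
    static int find(String[] sortedWords, String word) {
        int lo = 0, hi = sortedWords.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = word.compareTo(sortedWords[mid]);
            if (cmp == 0) return mid;
            if (cmp < 0) hi = mid - 1;
            else lo = mid + 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        String[] dict = { "apple", "banana", "cherry", "date", "elderberry", "fig", "grape" };
        System.out.println(find(dict, "date")); // 3
        System.out.println(find(dict, "kiwi")); // -1
        // Doing one such lookup for each of the n words is n * O(log n) = O(n log n).
    }
}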
There is no O(n²+n) or O(n²+2n). Leaving aside most of the mathematical foundations of algorithmic complexity, you at least need to know that it is "asymptotic": as n approaches infinity, the value of n² + n is dominated by the n² term, so that is the asymptotic complexity of n² + n.
3's complexity is O(I * J), where I and J are the sizes of the inputs c1 and c2.
Truth be told O(n²+n) & O(n²+2n) are the same.
Complexity of 3 is O(m*n).
There is no complexity O(n²+n) or O(n²+2n). It is just O(n²). This is because n is o(n²).
An example of O(log n) is binary search.
An example of O(n log n) is merge sort.
What is the time and space complexity of:
int superFactorial4(int n, int m)
{
    if (n <= 1)
    {
        if (m <= 1)
            return 1;
        else
            n = m -= 1;
    }
    return n * superFactorial4(n - 1, m);
}
It runs recursively, decreasing the value of n by 1 until it reaches 1, and then it either decreases the value of m by 1 or returns 1 when m equals 1.
I think the complexity depends on both n and m, so maybe it's O(n*m).
Actually, it looks closer to O(n + m^2) to me; n is only used for the first "cycle".
Also, in any language that doesn't do tail-call optimization the space complexity is likely to be "fails" (a stack overflow). In languages that support the optimization, the space complexity is more like O(1).
The time complexity is O(n + m^2); the space complexity is the same.
Reasoning: with a fixed value of m, the function makes n recursive calls to itself, each doing constant work, so the cost of the phase with fixed m is n. Now, when n reaches 1, both n and m become m-1. So the next fixed-m phase takes m-1 calls, the next m-2, and so on. You get the sum (m-1)+(m-2)+...+1, which is O(m^2).
The space complexity is equal, because each recursive call takes constant space, you never return from the recursion except at the very end, and there is no tail recursion.
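To sanity-check the O(n + m^2) claim, here is a hedged sketch of mine that counts the recursive calls and compares the count with n + m(m-1)/2 (the returned product overflows for larger inputs, but the call count depends only on the arguments):

public class SuperFactorialCount {
    static int calls = 0;

    static long superFactorial4(long n, long m) {
        calls++;
        if (n <= 1) {
            if (m <= 1)
                return 1;
            else
                n = m -= 1;
        }
        return n * superFactorial4(n - 1, m); // value may overflow; we only care about the calls
    }

    public static void main(String[] args) {
        int[][] inputs = { { 5, 4 }, { 10, 6 }, { 3, 12 } };
        for (int[] in : inputs) {
            calls = 0;
            superFactorial4(in[0], in[1]);
            int estimate = in[0] + in[1] * (in[1] - 1) / 2; // roughly n + m^2/2
            System.out.println("n=" + in[0] + " m=" + in[1]
                    + "  calls=" + calls + "  n + m(m-1)/2 = " + estimate);
        }
    }
}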
The time complexity of a factorial function using recursion.
Pseudocode:
int fact(int n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}
Time complexity:
Let T(n) be the number of steps taken to compute fact(n).
We know that at each step F(n) = n*F(n-1) + c
F(n-1) = (n-1)*F(n-2) + c
Substituting this into F(n), we get
F(n) = n*(n-1)*F(n-2) + (n+1)c
Using big-O notation we can now say that
F(n) >= n*F(n-1)
F(n) >= n*(n-1)*F(n-2)
...
F(n) >= n!*F(n-k)
T(n) >= n!*T(n-k)
With n-k = 1, i.e. k = n-1:
T(n) >= n!*T(n-(n-1))
T(n) >= n!*T(1)
Since T(1) = 1,
T(n) >= 1*n!
This is now in the form
F(n) >= c*(g(n))
so we can say that the time complexity of factorial using recursion is
T(n) = O(n!)
What is the worst-case time complexity T(n)?
I'm reading this book about algorithms, and as an example of
how to get T(n) for ... say, the selection sort algorithm.
Say I'm dealing with selectionSort(A[0..n-1]):
// sorts a given array by selection sort
// input: an array A[0..n-1] of orderable elements
// output: array A[0..n-1] sorted in ascending order
Let me write the pseudocode:
for i <- 0 to n-2 do
    min <- i
    for j <- i+1 to n-1 do
        if A[j] < A[min] then min <- j
    swap A[i] and A[min]
--------I will write it in C# too---------------
private int[] a = new int[100];

// number of elements in array
private int x;

// Selection Sort Algorithm
public void sortArray()
{
    int i, j;
    int min, temp;
    for (i = 0; i < x - 1; i++)
    {
        min = i;
        for (j = i + 1; j < x; j++)
        {
            if (a[j] < a[min])
            {
                min = j;
            }
        }
        temp = a[i];
        a[i] = a[min];
        a[min] = temp;
    }
}
==================
Now, how do I get T(n), or as it's known, the worst-case time complexity?
That would be O(n^2).
The reason is that you have a single for loop nested in another for loop. The run time of the inner for loop, O(n), happens for each iteration of the outer for loop, which is again O(n). The reason each of these is individually O(n) is that they take a linear amount of time given the size of the input: the larger the input, the longer it takes, on a linear scale, n.
To work out the math, which in this case is trivial, just multiply the complexity of the inner loop by the complexity of the outer loop: n * n = n^2. Remember, for each n in the outer loop, you must again do n in the inner. To clarify: n times for each n.
O(n * n).
O(n^2)
By the way, you shouldn't mix up complexity (denoted by big-O) and the T function. The T function is the number of steps the algorithm has to go through for a given input.
So, the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity. By the conventional abuse of notation, T(n) = O( f(n) ) means that the function T(n) is of at most the same complexity as another function f(n), which will usually be the simplest possible function of its complexity class.
This is useful because it allows us to focus on the big picture: We can now easily compare two algorithms that may have very different-looking T(n) functions by looking at how they perform "in the long run".
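To make that distinction concrete for selection sort, here is a small sketch of my own (in Java rather than the C# above) that counts the A[j] < A[min] comparisons; the exact count is T(n) = n(n-1)/2, and that function is O(n²):

public class SelectionSortCount {
    // Count the key comparisons, the "step" usually counted for sorting algorithms.
    static long sortAndCount(int[] a) {
        long comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                comparisons++;
                if (a[j] < a[min]) min = j;
            }
            int temp = a[i];
            a[i] = a[min];
            a[min] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        for (int n : new int[] { 10, 100, 1000 }) {
            int[] a = new java.util.Random(42).ints(n).toArray();
            System.out.println("n=" + n + "  T(n)=" + sortAndCount(a)
                    + "  n(n-1)/2=" + (long) n * (n - 1) / 2);
        }
    }
}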
#sara jons
In the slide set that you've referenced, and the algorithm therein, the complexity is being measured for each primitive/atomic operation in the for loop:
for (j = 0; j < n; j++)
{
    // ...
}
The slides rate this loop as 2n+2 for the following reasons:
The initial assignment j = 0 (+1 op)
The comparison j < n (n ops)
The increment j++ (n ops)
The final check of the condition j < n (+1 op)
Secondly, the comparison within the for loop:
if (STudID == A[j])
    return true;
This is rated as n ops. Thus, adding up +1 op, n ops, n ops, +1 op, and n ops gives 3n+2. So T(n) = 3n+2.
Recognize that T(n) is not the same as O(n).
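Here is the same counting scheme written out as runnable code (my own annotated tally, not code from the slides), so you can see where the 3n + 2 comes from:

public class OpCount {
    // Tally the primitive operations for:  for (j = 0; j < n; j++) { if (STudID == A[j]) ... }
    static long opsForLoop(int n) {
        long ops = 1;           // the initial assignment j = 0
        for (int j = 0; j < n; j++) {
            ops++;              // the test j < n (succeeds n times)
            ops++;              // the body comparison STudID == A[j]
            ops++;              // the increment j++
        }
        ops++;                  // the final test j < n that fails and ends the loop
        return ops;             // = 3n + 2
    }

    public static void main(String[] args) {
        int n = 50;
        System.out.println(opsForLoop(n) + " vs 3n + 2 = " + (3 * n + 2));
    }
}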
Another doctoral-comp flashback here.
First, the T function is simply the amount of time (usually in some number of steps, about which more below) an algorithm takes to perform a task. What a "step" is, is somewhat defined by the use; for example, it's conventional to count the number of comparisons in sorting algorithms, but the number of elements searched in search algorithms.
When we talk about the worst-case time of an algorithm, we usually express that with "big-O notation". Thus, for example, you hear that bubble sort takes O(n²) time. When we use big O notation, what we're really saying is that the growth of some function -- in this case T -- is no faster than the growth of some other function times a constant. That is
T(n) = O(n²)
means that for any n, no matter how large, there is a constant k for which T(n) ≤ kn². A point of some confusion here is that we're using the "=" sign in an overloaded fashion: it doesn't mean the two are equal in the numerical sense, just that we are saying that T(n) is bounded by kn².
In the example in your extended question, it looks like they're counting the number of comparisons in the for loop and in the test; it would help to be able to see the context and the question they're answering. In any case, though, it shows why we like big-O notation: W(n) here is O(n). (Proof: there exists a constant k, namely 5, for which W(n) = 3n + 2 ≤ kn for all n ≥ 1. It follows by the definition of O(n).)
If you want to learn more about this, consult any good algorithms text, eg, Introduction to Algorithms, by Cormen et al.
Write pseudocode to search, insert, and remove student information from the hash table. Calculate the best- and worst-case time complexities.
3n + 2 is the correct answer as far as the loop is concerned. At each step of the loop, 3 atomic operations are done: j++ is actually two operations, not one, and the test j < n is the third.