I need some help finding the complexity order of this function:
int cerca_ciclos(int vet[], int nivel)
{
    int i, j, cont = 0;
    if (vet[nivel-2] == vet[nivel-1]) {
        return 1;
    }
    for (i = 2; i <= nivel/2; i++)
    {
        for (j = 0; j < i; j++)
        {
            if (vet[nivel-j-1] == vet[nivel-1-i]) {
                cont++;
            }
        }
        if (cont == i) {
            return 1;
        }
    }
    return 0;
}
The variable nivel will never exceed the predefined limit I set, which is 35.
You can reason about it with sigma notation:
The algorithm runs in O(nivel²) time.
The outer for-loop runs i from 2 up to nivel/2, and the inner loop runs i times, where i is at most nivel/2. So the number of steps is
total = 2 + 3 + 4 + 5 + ... + nivel/2
By Gauss's formula this is roughly 1/2 · (nivel/2) · (nivel/2 + 1), so it is quadratic in nivel.
But since nivel is bounded by 35 (or any other constant), the complexity is O(1), because you can find a constant value that is larger than the worst-case number of steps the algorithm has to compute.
I'm not sure if it's suitable to ask such a basic question, but I was wondering how to calculate the complexity of a function that has another function inside.
From my understanding of Big O notation, multiple non-nested for loops give O(n), and each level of nesting multiplies in another factor of n.
But how about a function like the one below, where a helper function is defined inside a function along with a while loop and a for loop?
function solution(nums) {
    let container = [];
    let zero = 0;
    let one = 1;
    let two = 2;
    let answer = 0;
    function isPrime(n) {
        for (let i = 2; i < n; i++) {
            if (n % i === 0) {
                return false;
            }
        }
        return true;
    }
    while (nums) {
        container.push(nums[zero] + nums[one] + nums[two]);
        two++;
        if (two === nums.length) (one++, two = one + 1);
        if (one === nums.length - 1) (zero++, one = zero + 1, two = one + 1);
        if (zero === nums.length - 2) break;
    }
    for (let i = 0; i < container.length; i++) {
        if (isPrime(container[i]) === true) answer++;
    }
    return answer;
}
I've read a few articles about it, and I'm guessing the above function is O(n log n)? I'm sorry for asking this question. I'm a beginner and I don't have a community or anybody to ask. I've always heard this place is "the" place to get programming questions answered.
Thanks in advance!!
The while loop enumerates every triple of indices zero < one < two, so it runs C(n, 3) = Θ(n³) times and fills container with Θ(n³) sums.
The inner function isPrime(v) does O(v) trial divisions, where v is the value of its argument (a three-element sum), not the length of the array.
The final loop then calls isPrime once per container element, so it makes Θ(n³) calls, each costing O(m), where m is the largest three-element sum in nums.
In total, solution() is O(n³ · m); if the values in nums are bounded by a constant, this simplifies to Θ(n³).
When I submit to LeetCode it fails on test case 500/502, with the input 1808548329. But when I run it on my own Mac, it gives the same answer as the accepted solution.
My code:
int trailingZeroes(int n) {
    int count = 0;
    int tmp = 0;
    for (int i = 1; i <= n; i++) {  // check every number in [1, n]
        tmp = i;
        while (tmp % 5 == 0) {
            count++;
            tmp /= 5;
        }
    }
    return count;
}
and the accepted answer:
int trailingZeroes2(int n) {
    return n == 0 ? 0 : n / 5 + trailingZeroes2(n / 5);
}
They produce the same result on my Mac:
std::cout << trailingZeroes(1808548329) << std::endl;  // 452137076
std::cout << trailingZeroes2(1808548329) << std::endl; // 452137076
Is the reason the first solution was not accepted its time complexity?
(Since, running it on my own Mac, it gives the same answer as the accepted one.)
How can I calculate the time complexity of the first solution? Is it O(N log N)? I'm not sure. Can you do me a favor? :-)
Your solution is O(n), and with n as large as 1808548329 an O(n) loop exceeds the time limit; that is why it fails even though the answer it computes is correct.
The inner loop runs at least once for every 5 values of i,
at least twice for every 25 values,
...
and at least k times for every 5^k values.
Summing this up gives the total number of inner-loop iterations:
n/5 + n/25 + n/125 + ... = n · (1/5 + 1/25 + 1/125 + ...) <= n/4
This is a geometric series, so the inner loop contributes O(n) in total.
In addition, the outer loop itself has O(n) iterations, each with constant cost if we ignore the inner loop, so the overall complexity remains O(n).
The alternative solution, however, runs in O(log n), which is significantly more efficient.
For the following code fragment, what is the order of growth in terms of N?
int sum = 0;
for (int i = 1; i <= N; i = i*2)
    for (int j = 1; j <= N; j = j*2)
        for (int k = 1; k <= i; k++)
            sum++;
I have figured out that there is a lg N factor, but I am stuck evaluating this part: lg N · (1 + 2 + 4 + 8 + ...). What will the last term of the sequence be? I need the last term to calculate the sum.
You have a geometric progression in your outer loops, so there is a closed form for the sum you want:
1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1
To be precise, your sum is
1 + 2 + 4 + ... + 2^(floor(ld(N)))
with ld denoting the logarithm to base 2.
The outer two loops are independent of each other, while the innermost loop depends only on i. There is a single operation (the increment) in the innermost loop, so the number of visits to the innermost loop equals the summation result.
Writing i = 2^a, with a running from 0 to floor(ld(N)):
\sum_a=0..( floor(ld(N)) ) {
    \sum_j=1..( floor(ld(N)) + 1 ) {
        \sum_k=1..2^a { 1 }
    }
}
// resolve innermost summation
= \sum_a=0..( floor(ld(N)) ) {
    \sum_j=1..( floor(ld(N)) + 1 ) {
        2^a
    }
}
// the middle summation does not depend on a, so it is just a factor
= ( floor(ld(N)) + 1 ) * \sum_a=0..( floor(ld(N)) ) { 2^a }
// resolve the remaining geometric summation
= ( floor(ld(N)) + 1 ) * ( 2^(floor(ld(N)) + 1) - 1 )
This amounts to O(N log N) in Big-Oh notation, since 2^(floor(ld(N)) + 1) lies between N and 2N.
To my understanding, the outer loop takes log N steps, the next loop also takes log N steps, and the innermost loop takes at most N steps (although this is a very rough bound). In total, the loop has a runtime complexity of at most N·(log N)², which can probably be improved.
I'm reading this Big O article (and some other book references) trying to figure out which changes affect my algorithm's complexity.
So, given the following O(N^2) code:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i == j) // Don't compare with self
            {
                continue;
            }
            if (strings[i] == strings[j])
            {
                return true;
            }
        }
    }
    return false;
}
I made the following change:
bool ContainsDuplicates(String[] strings)
{
    for (int i = 0; i < strings.Length; i++)
    {
        for (int j = 0; j < strings.Length; j++)
        {
            if (i != j) // Don't compare with self
            {
                if (strings[i] == strings[j])
                {
                    return true;
                }
            }
        }
    }
    return false;
}
Now both ifs are nested and the continue is removed. Did this algorithm really become O(N^2 + 1)? And why?
As far as I can see, the if check was there before regardless, so I initially thought it would still be O(N^2).
Big O describes how execution time grows as the chosen parameter becomes large.
In your example, if we wanted to be exact, the formula would be:
Time taken = Time(start) + Time(outer loop) * N + Time(continue) * N + Time(no continue) * N^2
which can be rewritten as
Time taken = a + b * N + c * N^2
Now, as N becomes larger and larger, this is clearly shaped like a parabola: the order-zero and order-one terms become irrelevant as N grows to infinity.
Time taken (large N) ~= c * N^2
Finally, since we are interested in a qualitative rather than quantitative description, we simply describe the algorithm as N^2. There is no class "O(N^2 + 1)": constant additive terms are absorbed, so both of your versions are O(N^2).
O(N^2) means that the algorithm will behave approximately like c * N^2 for large values of N.
It is a similar concept to o(x) in calculus (with the difference that little-o is used for parameters going to zero).
I'm trying to compute the big-O time complexity of this selection sort implementation:
void selectionsort(int a[], int n)
{
    int i, j, minimum, index;
    for (i = 0; i < (n-1); i++)
    {
        minimum = a[n-1];
        index = (n-1);
        for (j = i; j < (n-1); j++)
        {
            if (a[j] < minimum)
            {
                minimum = a[j];
                index = j;
            }
        }
        if (i != index)
        {
            a[index] = a[i];
            a[i] = minimum;
        }
    }
}
How might I go about doing this?
Formally, you can obtain the exact number of iterations, and from it the order of growth, using the methodology below:
after executing the following fragment (a synthetic version of the original code), sum will equal the closed form of T(n).
sum = 0;
for (i = 0; i < (n - 1); i++) {
    for (j = i; j < (n - 1); j++) {
        sum++;
    }
}
Let's begin by looking at the inside of the outer loop. It does O(1) work with the initial assignments, then has a loop that runs n - 1 - i times, then does O(1) more work at the end to perform the swap. Therefore, each outer iteration costs Θ(n - i).
If we sum that up for i going from 0 to n - 2, we get the following:
(n - 1) + (n - 2) + ... + 1
This famous sum works out to Θ(n^2), so the runtime is Θ(n^2), matching the known runtime of this algorithm.
Hope this helps!