Factorial trailing zeroes Big-O problem - algorithm

When I submit to LeetCode, it fails on test case 500/502, with the input 1808548329. But when I run it on my own Mac, it gives the same answer as the accepted solution.
My code:
int trailingZeroes(int n) {
    int count = 0;
    int tmp = 0; // check every number in [1, n]
    for (int i = 1; i <= n; i++) {
        tmp = i;
        while (tmp % 5 == 0) {
            count++;
            tmp /= 5;
        }
    }
    return count;
}
And the accepted answer:
int trailingZeroes2(int n) {
    return n == 0 ? 0 : n / 5 + trailingZeroes2(n / 5);
}
They give the same result on my Mac:
std::cout << trailingZeroes(1808548329) << std::endl;  // 452137076
std::cout << trailingZeroes2(1808548329) << std::endl; // 452137076
Is the reason the first solution was not accepted its time complexity?
(Since I ran it on my own Mac, and it gives the same answer as the accepted one.)
How can I calculate the time complexity of the first solution?
Is it O(N log N)? I am not sure. Can you do me a favor? :-)

Your solution is O(n).
The inner loop repeats at least once every 5 items.
The inner loop repeats at least twice every 25 items.
...
The inner loop repeats at least k times every 5^k items.
Summing it together gives you that the inner loop body runs:
n/5 + n/25 + n/125 + ... =
n (1/5 + 1/25 + 1/125 + ...) <= n/4
This is the sum of a geometric series, which is in O(n).
In addition, the outer loop itself has O(n) iterations, each with constant cost if we ignore the inner loop, so the total remains O(n).
The alternative solution, however, runs in O(log n), which is significantly more efficient.

Related

Confusion about complexity of two dependent loops?

for (i = 1; i < n; i++) {
    for (j = 1; j <= i; j++) {
        statement1;
    }
}
outer loop = O(N)
inner loop = N(N-1)/2
Total = N * N(N-1)/2 = N^3
It seems N^3 is the complexity of these nested loops, but according to the books the complexity is N^2, from N(N-1)/2.
The only interesting thing to count is how often statement1 will be executed.
Therefore, note that something like
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 3; j++)
        statement1;
triggers 2 * 3 = 6 executions. So you count how often the inner loop gets executed per outer loop iteration.
However, in your example you made a mistake and multiplied the iterations of the outer loop by the total iterations of the inner loop, not by the number of inner iterations per outer-loop iteration.
In the example above that would be like 2 * 6 = 12 instead of only 2 * 3 = 6.
Let's take a closer look at what happens in your specific example. The outer loop runs n - 1 times, each run triggering the inner loop. The inner loop first yields 1 iteration; on the next iteration of the outer loop it yields 2 iterations, then 3, and so on.
In total you will thus receive 1 + 2 + ... + (n - 1) = (n^2 - n)/2 iterations of the inner loop. Again, note the 'in total'. So statement1 will in total be executed (n^2 - n)/2 times. The outer loop's iterations are already accounted for in the inner loop's total, so there is no additional multiplication.
(n^2 - n)/2 is obviously in O(n^2) due to its asymptotic complexity. Intuitively only the biggest factor plays a role, we can drop other stuff by estimating with <=.
(n^2 - n)/2
<= n^2 - n
<= n^2 in O(n^2)
for (i = 1; i < n; i++) {
    for (j = 1; j <= i; j++) {
        statement1;
    }
}
In order to simplify the problem, let's assume that n is 5 here.
Line 1 will execute 5 times, since it checks and increments i five times.
Line 2 will execute (5 - 1) = 4 times, because for i = 5 line 1's check runs but the body is not entered.
Line 3 will execute 1 time, then 2 times, then 3 times, and so on, each time i is incremented.
Counting line 3's executions, you'll find it runs 1 + 2 + 3 + 4 = 10 times. That's simply the sum of the numbers from 1 to 4, or n(n+1)/2 with n = 4.
We can ignore the cost of lines 1 and 2, since they are constant per iteration, and in asymptotic notation the complexity is O(n^2).
You can think of the two nested loops as visiting all the cells on and below the diagonal of an N x N matrix.
So you'll always do a number of operations close to N^2/2, meaning the total number of operations is N^2 times a constant. By the definition of Big-O notation, that means your code's runtime complexity is O(N^2).
Here is a simple code to help you understand my explanation.
#include <vector>
#include <iostream>
using std::vector;
using std::cout;
using std::endl;
// This function counts the number of times that your code will execute statement1.
int count(int N) {
    int total = 0;
    for (int l = 0; l < N; ++l) {
        for (int r = 0; r <= l; ++r) {
            total++;
        }
    }
    return total;
}

// This function shows the cells of the matrix that you are "executing".
void showMatrix(int N) {
    vector<vector<bool> > mat(N, vector<bool>(N, false));
    for (int l = 0; l < N; ++l) {
        for (int r = 0; r <= l; ++r) {
            mat[l][r] = true;
        }
    }
    for (int line = 0; line < N; ++line) {
        for (int column = 0; column < N; ++column) {
            cout << (mat[line][column] ? "1 " : "0 ");
        }
        cout << endl;
    }
}

int main() {
    showMatrix(10);
    cout << count(10) << endl;
    return 0;
}

How to calculate time complexity of a recursive function?

What is the time complexity of the following code?
My guess:
The for loop runs a constant number of times, i.e. 3 iterations. The function then calls itself with n/3, so n shrinks by a factor of 3 each time, and the time complexity is O(log3 n)?
void function(int n) {
    if (n == 1)
        return;
    for (int i = 0; i < 3; i++) {
        cout << "Hello";
    }
    function(n / 3);
}
Yes, it's O(log3N). Call the amount of work done by the loop C. The first few calls will go:
f(n) = C + f(n/3) = C + C + f(n/9) = C + ... + C + f(1)
The number of times C appears will be the number of times you can divide n by 3 before it gets to 1, which is exactly log3n. So the total work is C*log3n, or O(log3N).

Running time of algorithms

In this algorithm:
int j = 1;
while (j <= n/2) {
    int i = 1;
    while (i <= j) {
        cout << j << " " << i << endl;
        i++;
    }
    cout << endl;
    j++;
}
Would the running time of this algorithm be T(n) = (n^2/2) + n + 4?
for (int i = 2; i <= n; i++) {
    for (int j = 0; j <= n;) {
        cout << i << " " << j << endl;
        j = j + (n/4);
    }
    cout << endl;
}
And would the running time of this one be T(n) = (n-2)^2 + 2?
For the first one, T(n) = the sum of the numbers from 1 to n/2, because the outer loop will be entered n/2 times, and on those passes the inner loop runs 1 time on the first turn, 2 times on the second turn, 3 times on the third turn, and so on.
T(n) = ((n/2)/2) * ((n/2) + 1) = (n/4) * ((n/2) + 1) = (n/4) * ((n + 2)/2)
Maybe you can simplify it more by carrying out the multiplication.
The 2nd one. T(n) = (n+1) * (n/4) because the outer loop will enter n+1 times and for each of those times the inner loop will enter n/4 times.
T(n) = (n+1) * (n/4)
In the 2nd one, the loop increment is proportional to the loop end point. The number of iterations doesn't increase with n, only the range of values. Starting from j=0, it takes at most 5 increments of j+=(n/4) for j <= n to become false. (Only 4 if n is a multiple of 4). Either way, this is O(1).
So the 2nd version's inner loop does ~5 operations, and there's a cout << endl outside the loop, so each iteration of the outer loop does ~6 print operations. (If they're to a terminal, it will only be line-buffered, so count cost by number of lines printed = number of system calls. If it's going to a file, it'll be block buffered by default, so cost ~= number of operations ~= number of 4k blocks of data printed.)
The 2nd version's outer loop runs from i=2 .. n, so it runs n-1 times, printing 6 lines each time. (or 5 if n%4 == 0). T(loop2) = 6n.
Hassan's analysis looks ok for the first loop.

Complexity of double-nested loop

What is the order of growth of the worst case running time of the following code fragment as a function of N?
int cnt = 0;
for (int i = 1; i <= N; i = i*4)
    for (int j = 0; j < i; j++)
        { cnt++; }
I know, for example, that the first loop executes ~log4(N) times and the second loop executes ~N times on its last pass. But how do I combine this knowledge to find the answer?
What is the general way to find this kind of complexity?
Maybe we need to know how many times the body of the inner loop is executed?
For example: 1 + 4 + 16 + 64 + ... + N.
This is a geometric progression, with sum (x^(n+1) - 1)/(x - 1) where x = 4 and n = log4(N), so the result is
(4 * 4^(log4(N)) - 1) / (4 - 1) = (4N - 1)/3
Let N belong to the interval [4^k, 4^(k+1)); then we have the sum:
sum of 4^i for i = 0..k = (4^(k+1) - 1)/3 = O(N)

Identify and state the running time using Big-O

For each of the following algorithms, identify and state the running time using Big-O.
// i
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;

// ii
for (int i = 0; i < n; i++) {
    cout << i << endl;
    int k = n;
    while (k > 0) {
        k /= 2;
        cout << k << endl;
    }
}

// iii
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
I've calculated the loop counts for the first snippet using n = 1 and n = 2. The loop in (i) will run n^2 - 1 times. Please help and guide me to identify the Big-O notation.
(i)
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;
The loop will run while sqrt(i) < N, i.e. while i < N^2. Thus the running time will be O(N^2), i.e. quadratic.
(ii)
for (int i = 0; i < n; i++) {
    cout << i << endl;
    int k = n;
    while (k > 0) {
        k /= 2;
        cout << k << endl;
    }
}
The outer loop will run for N iterations. The inner loop will run for log N iterations (because k takes the values N, N/2, N/4, ..., down to 1, which is about log N steps). Thus the running time will be O(N log N), i.e. linearithmic.
(iii)
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
The value of k after the first loop will be 2^n, as k is multiplied by 2 n times. The second loop runs k times, so it will run for 2^n iterations. The running time is O(2^N), i.e. exponential.
For the first question, you loop until Math.sqrt(i) >= n, which means you stop when i >= n*n; thus the first program runs in O(n^2).
For the second question, the outer loop executes n times, and the inner loop repeatedly halves k (initially equal to n), so the inner loop executes log n times; the total time complexity is O(n log n).
For the third question, the first loop executes n times, doubling the value of k (initially 1) on each iteration. After the loop terminates you have k = 2^n, and the second loop executes k times, so the total complexity is O(2^n).
A couple of hints may allow you to solve most running-time complexity problems in CS tests/homework.
If something decreases by a factor of 2 on each iteration, that's a log(N). In your second case the inner loop index is halved each time.
Geometric series,
a r^0 + a r^1 + a r^2 ... = a (r^n - 1) / (r - 1).
Write out third problem:
2 + 4 + 8 + 16 ... = 2^1 + 2^2 + 2^3 + 2^4 + ...
and use the closed form formula.
Generally it helps to look for log2 patterns and to write out a few terms to see if there is a repeating pattern.
Other common questions require you to know factorials and their approximation (Stirling's approximation).
Using Sigma notation, you can formally obtain the same results: (i) O(N^2), (ii) O(N log N), (iii) O(2^N).
