In this algorithm:

int j = 1;
while (j <= n/2) {
    int i = 1;
    while (i <= j) {
        cout << j << " " << i << endl;
        i++;
    }
    cout << endl;
    j++;
}
Would the running time of this algorithm be T(n) = (n^2/2) + n + 4? And for this second algorithm:
for (int i = 2; i <= n; i++) {
    for (int j = 0; j <= n;) {
        cout << i << " " << j << endl;
        j = j + (n/4);
    }
    cout << endl;
}
would it be T(n) = (n-2)^2 + 2?
For the first one, T(n) = the sum of the numbers from 1 to n/2, because the outer loop will be entered n/2 times, and across those n/2 passes the inner loop runs 1 time on the first pass, 2 times on the second, 3 times on the third, and so on.
T(n) = ((n/2)/2) * ((n/2)+1) = n/4 * (n/2+1) = n/4 * ((n+2)/2)
Maybe you can simplify it more by doing the multiplication.
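Carrying out that multiplication gives a closed form (for even n): T(n) = (n/4) * ((n/2) + 1) = n^2/8 + n/4, which grows quadratically in n.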
For the 2nd one, T(n) = (n+1) * (n/4), because the outer loop will be entered n+1 times and for each of those passes the inner loop will be entered n/4 times.
T(n) = (n+1) * (n/4)
In the 2nd one, the loop increment is proportional to the loop end point, so the number of inner iterations doesn't increase with n; only the range of values does. Starting from j=0, the condition j <= n holds for only a handful of values of j before j += (n/4) pushes it past n (exactly 5 of them, j = 0, n/4, 2(n/4), 3(n/4), 4(n/4), once n >= 16; a few more for very small n). Either way, this is O(1).
So the 2nd version's inner loop does ~5 operations, and there's a cout << endl outside the loop, so each iteration of the outer loop does ~6 print operations. (If they're to a terminal, it will only be line-buffered, so count cost by number of lines printed = number of system calls. If it's going to a file, it'll be block buffered by default, so cost ~= number of operations ~= number of 4k blocks of data printed.)
The 2nd version's outer loop runs from i=2 .. n, so it runs n-1 times, printing 6 lines each time (5 from the inner loop plus the blank line). T(loop2) ≈ 6n.
Hassan's analysis looks ok for the first loop.
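If you want to check this empirically, here is a small counting harness (purely illustrative; the loop bounds mirror the two snippets above). It shows the first snippet's total growing quadratically while the second snippet's inner loop contributes only about 5 iterations per outer pass, regardless of n:

#include <iostream>

int main() {
    for (int n = 8; n <= 2048; n *= 4) {
        long long first = 0, second = 0;
        // first snippet: inner loop runs 1 + 2 + ... + n/2 times in total
        for (int j = 1; j <= n / 2; ++j)
            for (int i = 1; i <= j; ++i)
                ++first;
        // second snippet: the inner loop runs ~5 times per outer iteration
        for (int i = 2; i <= n; ++i)
            for (int j = 0; j <= n; j += n / 4)
                ++second;
        std::cout << "n=" << n << "  first=" << first
                  << "  second=" << second << "\n";
    }
    return 0;
}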
You and your friend successfully robbed a jewelry store full of diamond rings. Now you want to distribute the loot between yourselves, but you both got a little greedy, so you decided to try your luck while distributing the rings. Your bag contains N rings. You take turns one by one and pick some rings until there are none left in the bag. Since you are the mastermind of the plan, you take the first turn. On each turn, a player does one of the following:
Take one ring from the bag.
Take half of the available rings. This rule can only be applied if the number of rings in the bag is even.
Both robbers try to maximize the number of rings they have. Find the maximum number of rings you can get at the end of the distribution if both you and your opponent play optimally.
Input
The first line contains a single integer N (1 ≤ N ≤ 1000000), the number of rings in the bag.
Output
Maximum number of rings you can get.
Sample Input
6
Sample Output
4
I'm stuck on this question; can anybody help me? Any language works, preferably Python.
Here is an O(log n) solution. Let us call f(n) the answer for n rings.
It is easy to check that:
- if (n odd) f(n) = n - f(n-1)
- if (n even) f(n) = max(n - f(n-1), n - f(n/2))
Explanation: if n is odd, the only possibility is to pick one ring; if n is even, we have the choice of picking one ring or half of the bag. In both cases, whatever is left is then played out by the opponent, so we end up with n minus the opponent's total, hence the n - f(...) form.
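As a quick check against the sample, the recurrence reproduces the expected answer for N = 6: f(1) = 1, f(2) = 1, f(3) = 2, f(4) = max(4 - f(3), 4 - f(2)) = 3, f(5) = 5 - f(4) = 2, and f(6) = max(6 - f(5), 6 - f(3)) = max(4, 4) = 4, which matches the sample output.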
These relations allow a simple iterative solution, O(n) complexity.
However, a recursive solution with O(log n) complexity is obtained by noting the following:
if n == 4k+2 -> it is better to take half of the rings; in the next step, the opponent can only take one ring (n/2 is odd)
if n == 4k (and n > 4) -> it is better to take one ring only; the opponent can then only take one as well, and we arrive at the previous, favorable case
Here is a simple program to illustrate this algorithm.
It compares the results of the O(n) iterative solution and the O(log n) recursive solution.
#include <iostream>
#include <algorithm>
#include <vector>

// Iterative O(n) solution
int rings_iterative (int n) {
    std::vector<int> f (n+1);
    f[0] = 0;
    for (int i = 1; i <= n; ++i) {
        if (i%2) {
            f[i] = i - f[i-1];
        } else {
            f[i] = std::max(i - f[i/2], i - f[i-1]);
        }
    }
    return f[n];
}

// Recursive O(logn) solution
int rings_recursive (int n) {
    //std::cout << "n = " << n << std::endl;
    if (n == 0) return 0;
    if (n%2) return n - rings_recursive (n-1);
    if (((n/2) % 2) || (n <= 4)) {   // n = 4k+2 -> divide by 2
        return n - rings_recursive(n/2);
    } else {                         // n = 4k -> take one ring only
        return 1 + rings_recursive(n-2);
    }
    return -1;
}

int main () {
    int n = 999876;
    int ring1, ring2;
    ring1 = rings_iterative (n);
    ring2 = rings_recursive (n);
    std::cout << "n = " << n << " -> " << ring1 << " and " << ring2 << std::endl;
    int n_min = 643831;
    int n_max = 653875;
    for (n = n_min; n <= n_max; ++n) {
        ring1 = rings_iterative (n);
        ring2 = rings_recursive (n);
        if (ring1 != ring2) {
            std::cout << "n = " << n << " -> " << ring1 << " and " << ring2 << std::endl;
            return 1;
        }
    }
    std::cout << "No difference\n";
    return 0;
}
When I submit to LeetCode, it runs 500/502 cases but fails; the reported input is 1808548329. But when I run it on my own Mac, it gives the same answer as the accepted solution.
My code:
int trailingZeroes(int n) {
    int count = 0;
    int tmp = 0;                    // check every number in [1, n]
    for (int i = 1; i <= n; i++) {
        tmp = i;
        while (tmp % 5 == 0) {
            count++;
            tmp /= 5;
        }
    }
    return count;
}
And the accepted answer:
int trailingZeroes2(int n) {
    // counts the factors of 5 by summing n/5 + n/25 + n/125 + ... recursively
    return n == 0 ? 0 : n / 5 + trailingZeroes2(n / 5);
}
They give the same result on my Mac:
std::cout << trailingZeroes(1808548329) << std::endl; //452137076
std::cout << trailingZeroes2(1808548329) << std::endl; //452137076
Is the reason the first solution was not accepted its time complexity? (Because when I run it on my own Mac, it gives the same answer as the accepted one.)
How can I calculate the time complexity of the first solution? Is it O(N log N)? I am not sure; can you help me out? :-)
Your solution is O(n).
The inner loop repeats at least once every 5 items
The inner loop repeats at least twice every 25 items
...
The inner loop repeats at least k times every 5^k items.
Summing these together, the inner loop runs a total of
n/5 + n/25 + n/125 + ... + 1 = n (1/5 + 1/25 + 1/125 + ... + 1/n)
times. The parenthesised part is a geometric series whose sum is less than 1/4, so the total is in O(n).
In addition, the outer loop itself has O(n) iterations with constant cost each (ignoring the inner loop), so the overall complexity remains O(n).
The alternative solution, however, runs in O(log n), which is significantly more efficient.
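For reference, the same O(log n) counting idea can also be written iteratively; this is only a sketch equivalent to the accepted recursive answer (the function name is illustrative):

int trailingZeroesIterative(int n) {
    int count = 0;
    // the n/5 multiples of 5 contribute one factor each, the n/25 multiples of 25 one more, ...
    while (n > 0) {
        n /= 5;
        count += n;
    }
    return count;
}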
for(i=1; i < n; i++){
    for(j=1; j <= i; j++){
        statement1;
    }
}
Outer loop = O(N)
Inner loop = N(N-1)/2
Total = N * N(N-1)/2 = N^3
It seems n^3 is the complexity of these nested loops, but according to books, its complexity is n^2, from N(N-1)/2.
The only interesting thing to count is how often statement1 will be executed.
Therefore, note that something like
for (int i = 0; i < 2; i++)
    for (int j = 0; j < 3; j++)
        statement1;
triggers 2 * 3 = 6 executions. So you count how often the inner loop gets executed per outer loop iteration.
However, in your example you made a mistake and multiplied the iterations of the outer loop with the total iterations of the inner loop, not with the number of iterations per outer loop iteration.
In the example above that would be like 2 * 6 = 12 instead of only 2 * 3 = 6.
Let's take a closer look at what happens in your specific example. The outer loop runs n - 1 times (i goes from 1 to n - 1). The inner loop first yields 1 iteration; on the next pass of the outer loop it yields 2 iterations, then 3, and so on.
In total you will thus get 1 + 2 + ... + (n - 1) = (n^2 - n)/2 iterations of the inner loop. Again, note the 'in total': statement1 will in total be executed (n^2 - n)/2 times. The outer loop's iterations are already accounted for in that total, so there is no additional multiplication.
(n^2 - n)/2 is obviously in O(n^2) due to its asymptotic growth. Intuitively, only the biggest term plays a role; we can drop the rest by estimating with <=.
(n^2 - n)/2
<= n^2 - n
<= n^2 in O(n^2)
for(i=1; i < n; i++){
    for(j=1; j <= i; j++){
        statement1;
    }
}
In order to simplify the problem, let's assume that n is 5 here.
So line 1 will execute 5 times, since the loop condition on i is checked for i = 1, 2, 3, 4, 5.
Line 2 will be entered (5-1) = 4 times, because for i = 5 the outer condition fails and the inner loop is never reached, even though line 1 is still evaluated for i = 5.
Line 3 will execute 1 time, then 2 times, then 3 times, and so on, each time i is incremented.
Looking at line 3 in total, it executes 1 + 2 + 3 + 4 = 10 times. That is simply the sum of the numbers from 1 to 4, or n(n+1)/2 with n = 4.
We can ignore the cost of lines 1 and 2, since they contribute only lower-order (linear) terms, and in asymptotic notation the complexity is O(n^2).
You can think about the 2 nested loops as visiting all the cells on and below the diagonal of an N x N matrix.
So you'll always do a number of operations close to N^2 / 2, i.e. N^2 times a constant. By the definition of Big-O notation, that means your code's runtime complexity is O(N^2).
Here is a simple program to help you understand my explanation.
#include <vector>
#include <iostream>
using std::vector;
using std::cout;
using std::endl;
// This function counts the number of times that your code will execute statement1
int count(int N){
    int total = 0;
    for(int l = 0; l < N; ++l){
        for(int r = 0; r <= l; ++r){
            total++;
        }
    }
    return total;
}

// This function shows the cells of the matrix that you are "executing"
void showMatrix(int N){
    vector<vector<bool> > mat(N, vector<bool>(N, false) );
    for(int l = 0; l < N; ++l){
        for(int r = 0; r <= l; ++r){
            mat[l][r] = true;
        }
    }
    for(int line = 0; line < N; ++line){
        for(int column = 0; column < N; ++column){
            cout << (mat[line][column] == true ? "1 " : "0 ");
        }
        cout << endl;
    }
}

int main(){
    showMatrix(10);
    cout << count(10) << endl;
    return 0;
}
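For N = 10, showMatrix prints a lower-triangular pattern of 1s (the diagonal and everything below it), and count(10) returns 55 = 10*11/2, the number of cells in that triangle.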
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < i*i; j++)
    {
        cout << j << endl;
        result++;
    }
}
Running this code for, say, n = 5, the inner body runs a total of 30 times. I know the outer loop runs N times. The inner loop, though, is giving me some trouble, since its bound is not n*n but i*i, and I haven't seen one like this before while trying to figure out T(n) and Big-O.
This algorithm is O(n^3): To realise this we have to figure out how often the inner code
cout << j << endl;
result++;
is executed. Since the outer loop runs for i = 0, 1, ..., n-1, we need to sum up 0^2 + 1^2 + ... + (n-1)^2 = (n-1)n(2n-1)/6 = n^3/3 - n^2/2 + n/6, using the well-known closed form for the sum of squares (see e.g. Sum of the Squares of the First n Natural Numbers); for n = 5 this gives the 30 executions you counted. Thus O(T(n)) = O(0^2 + 1^2 + ... + (n-1)^2) = O(n^3), and the (time) complexity of the algorithm is therefore O(n^3).
Edit: If you're wondering why this is sufficient (see also Example 4 in Time complexity with examples) it is helpful to rewrite your code as a single loop so we can see that the loops add a constant amount of instructions (for each run of the inner code):
int i = 0;
int j = 0;
while (i < n) {
    if (j < i * i) {       // we are still inside the inner loop
        cout << j << endl;
        result++;
        j++;
    } else {               // start the next iteration of the outer loop
        j = 0;
        i++;
    }
}
Thus the two loops only 'add' the comparisons and the if-statement, which simply make the conditional jumps and their effects more explicit.
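As a quick sanity check (a small, self-contained sketch, not part of the original answer), you can compare the nested loops against the closed form (n-1)n(2n-1)/6; for n = 5 both give the 30 executions observed in the question:

#include <iostream>

int main() {
    for (int n = 1; n <= 10; ++n) {
        long long result = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < i * i; ++j)
                ++result;                     // same body as in the question
        long long formula = 1LL * (n - 1) * n * (2 * n - 1) / 6;   // sum of 0^2 .. (n-1)^2
        std::cout << "n=" << n << "  loops=" << result
                  << "  formula=" << formula << "\n";
    }
    return 0;
}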
For each of the following algorithms, identify and state the running time using Big-O.
//i
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;

//ii
for (int i = 0; i < n; i++){
    cout << i << endl;
    int k = n;
    while (k > 0)
    {
        k /= 2;
        cout << k << endl;
    } // while
}

//iii
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
I've calculated the loop counts for the first question using n = 1 and n = 2; the loop over i will run n^2 - 1 times. Please help and guide me to identify the Big-O notation.
(i)
for (int i = 0; Math.sqrt(i) < n; i++)
    cout << i << endl;
The loop will run as long as sqrt(i) < N, i.e. as long as i < N^2. Thus the running time will be O(N^2), i.e. quadratic.
(ii)
for (int i = 0; i < n; i++){
    cout << i << endl;
    int k = n;
    while (k > 0)
    {
        k /= 2;
        cout << k << endl;
    } // while
}
The outer loop will run for N iterations. The inner loop will run for about log N iterations (because k takes the values N, N/2, N/2^2, N/2^3, ..., i.e. roughly log N values). Thus the running time will be O(N log N), i.e. linearithmic.
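For instance, with n = 8 the inner loop prints k = 4, 2, 1, 0 and then stops: 4 iterations, i.e. log2(8) + 1, and this happens once for each of the 8 outer iterations.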
(iii)
int k = 1;
for (int i = 0; i < n; i++)
    k = k * 2;
for (int j = 0; j < k; j++)
    cout << j << endl;
The value of k after the execution of the first loop will be 2^n, since k is multiplied by 2 n times. The second loop runs k times, so it will run for 2^n iterations. The running time is O(2^N), i.e. exponential.
For the first question, the loop runs until Math.sqrt(i) >= n, which means it stops when i >= n*n; thus the first program runs in O(n^2).
For the second question, the outer loop will execute n times, and the inner loop keeps repeatedly halving k (which is initially equal to n). So the inner loop executes log n times, thus the total time complexity is O(n log n).
For the third question, the first loop executes n times, and on each iteration you double the value of k which is initially 1. After the loop terminates, you will have k = 2^n, and the second loop executes k times, so the total complexity will be O(2^n)
A couple of hints may allow you to solve most running-time complexity problems in CS tests/homework.
If something decreases by a factor of 2 on each iteration, that's a log(N). In your second case the inner loop variable is halved each time.
Geometric series:
a r^0 + a r^1 + ... + a r^(n-1) = a (r^n - 1) / (r - 1).
Write out third problem:
2 + 4 + 8 + 16 ... = 2^1 + 2^2 + 2^3 + 2^4 + ...
and use the closed form formula.
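Applying the closed form with a = 2 and r = 2 gives 2^1 + 2^2 + ... + 2^n = 2 (2^n - 1) / (2 - 1) = 2^(n+1) - 2, which is O(2^n), consistent with the earlier analysis.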
Generally it helps to look for halving/doubling (log base 2) and to write out a few terms to see if there is a repeating pattern.
Other common questions require you to know factorials and their approximation (Stirling's approximation).
Using Sigma Notation, you can formally obtain the following results:
(i) sum_{i=0}^{n^2 - 1} 1 = n^2, i.e. O(n^2)
(ii) sum_{i=0}^{n-1} (1 + floor(log2 n) + 1) ≈ n log2 n, i.e. O(n log n)
(iii) sum_{i=0}^{n-1} 1 + sum_{j=0}^{2^n - 1} 1 = n + 2^n, i.e. O(2^n)