How to obtain expected execution time and worst case for algorithms with an infinite loop?

I have a doubt about the expected execution time and the worst-case execution time of algorithms. I was thinking about a simple algorithm, but it contains an infinite loop and I don't know how to reason about its expected execution time. I have always been able to determine the running time of algorithms with explicit stop conditions.
This is my example (all I know is that decision returns true for more than n/2 of the indices):
while (true)
{
    int i = random(0, n-1);
    bool e = decision(i); // Θ(n)
    if (e == true)
        return i;
}

The expected execution time is O(n).
With probability p >= 1/2, the first i will give decision(i) == true, so the loop will terminate after one call to decision.
Let q = 1 - p be the probability that it did not happen.
Then, with probability q * p, the second i will give decision(i) == true, so the loop will terminate after two calls to decision.
Similarly, with probability q^2 * p, the third i will give decision(i) == true, so the loop will terminate after three calls to decision, and so on.
By taking the sum (the expected number of calls X satisfies E[X] = sum over k >= 1 of P(X >= k), and here P(X >= k) = q^(k-1)), we have the expected number of calls to decision as
1 + q + q^2 + q^3 + ...
As q <= 1/2, the sum is at most 1 + 1/2 + 1/4 + 1/8 + ... which has an upper limit of 2.
So, the expected number of calls is limited by 2.
In total, as each call to decision takes O(n) time, the expected time to run is O(2*n) which is still O(n) since 2 is a constant.
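A quick way to sanity-check this bound is to simulate the loop and average the number of calls to decision. The following is a minimal sketch, not the original code: decision here is a stand-in predicate that succeeds on roughly half of the indices and runs in O(1) rather than Θ(n), so only the call count (not the running time) is meaningful:
function decision(i, n) {
    // Stand-in predicate: true for slightly more than half of the indices.
    return i <= Math.floor(n / 2);
}

function callsUntilTrue(n) {
    let calls = 0;
    while (true) {
        const i = Math.floor(Math.random() * n);   // random(0, n-1)
        calls++;
        if (decision(i, n)) return calls;
    }
}

const n = 1000, trials = 100000;
let total = 0;
for (let t = 0; t < trials; t++) total += callsUntilTrue(n);
console.log(total / trials);   // ≈ 2, consistent with the bound of at most 2 calls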

It seems dependent on the probability that decision ends up being true.
In this case decision takes n steps, so one iteration of the loop costs O(n).
Now take the probability of decision being true; assuming it is 50%, on average we need 2n steps in total (sum(prob^x * n), x = 0..infinity, prob = 0.5).
Even though the constant factor grows as the probability of decision being true shrinks, for any fixed probability that factor does not depend on n, and therefore the expected running time is still O(n).

Related

Can we use probabilities to determine whether an algorithm is non-terminating?

My question isn't about some specific code but rather theoretical, so my apologies in advance if this isn't the right place to post it.
Given a certain problem - for the sake of discussion, let it be a permutation problem - with an input of size N, and P the constant time needed to compute one permutation, we need about T ≈ P * N! = O(N!) time to produce all results. If by any chance our algorithm takes much longer than that expected time, it may be reasonable to suspect it does not terminate.
For example, for P = 0.5 secs and N = 8, T = 0.5 * 8! = 20,160 secs. Any value above T is 'suspicious' time.
My question is: how can one introduce a probability function which asymptotically approaches 1 as the running time increases without bound?
Our 'evaluation' shall depend on a constant time P, the size N of our input, and the time complexity Q of the problem at hand, so it may have the form f(P, N, Q) = ..., where 0 ≤ f ≤ 1, f is increasing, and f indicates the probability of having 'fallen' into a non-terminating state.
If this approach isn't enough, how can we make sure that after a certain amount of time our program is probably running endlessly?
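One arbitrary, purely illustrative way to shape such a function (not a standard criterion; the formula and the name suspicion are made up for this sketch) is to take the expected completion time T = P * Q(N) and let f climb from 0 toward 1 once the elapsed time t exceeds T:
// T = P * Q(N) is the expected running time; f is 0 within that budget and
// approaches 1 as the elapsed time t grows without bound.
function suspicion(t, P, N, Q) {
    const T = P * Q(N);
    if (t <= T) return 0;
    return 1 - T / t;            // increasing in t, tends to 1
}

// Example from above: P = 0.5 s per permutation, N = 8, Q = N!
const factorial = m => (m <= 1 ? 1 : m * factorial(m - 1));
console.log(suspicion(20160, 0.5, 8, factorial));   // 0   (exactly the expected time)
console.log(suspicion(40320, 0.5, 8, factorial));   // 0.5 (twice the expected time)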

Time complexity of recursive function dividing the input value by 2/3 every time

I know that the time complexity of a recursive function that divides its input by 2 is log n base 2. I have come across some interesting scenarios on
https://stackoverflow.com/a/42038565/8169857
Kindly help me understand the logic behind the scenarios in that answer regarding the derivation of the formula.
It's back to the recursion tree. Why is it O(log2(n)) for 1/2? Because if n = 2^k, you have to divide k times to reach 1. Hence, the number of computations is k = log2(n) comparisons at most. Now suppose the remaining fraction is (c-1)/c. Then, if n = (c/(c-1))^k, we need log_{c/(c-1)}(n) operations to reach 1.
Now, since for any constant c > 1 the limit of log2(n) / log_{c/(c-1)}(n) as n → ∞ is a constant greater than zero, log_{c/(c-1)}(n) = Θ(log2(n)). Indeed, you can say this for any constants a, b > 1: log_a(n) = Θ(log_b(n)). This completes the proof.
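As a quick check of this claim, here is a small sketch (iterative rather than recursive, just to count steps) that keeps the fraction 1/2 or 2/3 of n each step and compares the step count with the corresponding logarithm; the values and the helper name are made up for illustration:
function steps(n, keepFraction) {
    let count = 0;
    while (n > 1) {
        n *= keepFraction;        // e.g. 1/2 or 2/3 of the value remains
        count++;
    }
    return count;
}

const n = 1e9;
console.log(steps(n, 1 / 2), Math.log2(n));                     // ~30 vs ~29.9
console.log(steps(n, 2 / 3), Math.log(n) / Math.log(3 / 2));    // ~52 vs ~51.1
// Both counts are Θ(log n); only the constant factor differs.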

Find the number of instructions of an algorithm

Given this algorithm (a > 0, b > 0):
while (a >= b) {
    k = 1;
    while (a >= k*b) {
        a = a - k*b;
        k++;
    }
}
My question: I have to find the time complexity of this algorithm, and to do so I must find the number of instructions, but I couldn't find it. Is there a way to find this number, and if not, how can I find its time complexity?
What I have done: first of all I tried to find the number of iterations of the first loop and I found a pattern: a_i = a - (i(i+1)/2)*b, where i is the number of iterations. I've spent hours doing manipulations on it but I couldn't find anything relevant (I've found weird results like q² <= a/b < q²+q, where q is the number of iterations).
You have correctly calculated that the value of a after the i-th iteration of the inner loop is:
a_i = a_j0 - (i(i+1)/2) * b
where a_j0 is the value of a at the start of the j-th outer loop. The stopping condition for the inner loop is:
a_j0 - (i(i+1)/2) * b < (i+1) * b
which can be solved as a quadratic inequality:
i >= sqrt(2 * a_j0 / b + 1/4) - 3/2  ≈  sqrt(2 * a_j0 / b)
Therefore the inner loop is approximately O(sqrt(a_j0 / b)). The next starting value of a satisfies:
a_(j+1)0 < (i+1) * b
Scaling roughly as sqrt(2b * a_j0). It would be quite tedious to compute the time complexity exactly, so let's apply the above approximations from here on:
a_n ≈ sqrt(2b * a_(n-1)),   t_n ≈ sqrt(2 * a_(n-1) / b)
Where a_n replaces a_j0, and t_n is the run-time of the inner loop – and of course the total time complexity is just the sum of t_n. Note that the first term is given by n = 1, and that the input value of a is defined to be a_0.
Before directly solving this recurrence, note that since the second term t_2 is already proportional to the square root of the first t_1, the latter dominates all other terms in the sum.
The total time complexity is therefore just O(sqrt(a / b)).
Update: numerical tests.
Note that, since all changes in the value of a are proportional to b, and all loop conditions are also proportional to b, the function can be "normalized" by setting b = 1 and only varying a.
Javascript test function, which measures the number of times that the inner loop executes:
function T(n)
{
    let t = 0, k = 0;
    while (n >= 1) {
        k = 1;
        while (n >= k) {
            n -= k;
            k++; t++;
        }
    }
    return t;
}
Plotting sqrt(n) against T(n) gives a convincing straight line, which confirms that the time complexity is indeed a half-power.
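Without reproducing the plot, a rough check is to print T(n) / sqrt(n) for growing n; under the analysis above the ratio should stay roughly constant rather than grow:
for (const n of [1e3, 1e4, 1e5, 1e6, 1e7]) {
    console.log(n, (T(n) / Math.sqrt(n)).toFixed(3));   // ratio stays roughly constant
}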

Theta Notation and Worst Case Running time nested loops

This is the code I need to analyse:
i = 1
while i < n do
    j = 0;
    while j <= i do
        j = j + 1
    i = 2i
So, the outer loop should run log(2,n) times and the innermost loop should run log(2,n) * (i + 1) times, but I'm pretty sure that's wrong.
How do I use a theta notation to prove it?
An intuitive way to think about this is to see how much work your inner loop does for a fixed value of the outer loop variable i. It's clearly as much as i itself. Thus, if the value of i is 256, then you will do j = j + 1 that many times.
Thus, the total work done is the sum of the values that i takes during the outer loop's execution. That variable increases rapidly to catch up with n. Its values, as given by i = 2i (it should be i = 2*i), are going to be like: 2, 4, 8, 16, ..., because we start with 2 iterations of the inner loop when i = 1. This is a geometric series a, ar, ar^2, ... with a = 2 and r = 2. The last term, as you figured out, will be n, and there will be log2 n terms in the series. And that is a simple summation of a geometric series.
It doesn't make much sense to have a worst case or a best case for this algorithm, because there are no different permutations of the input, which is just a number n in this case. Best case or worst case are relevant when a particular input (e.g. a particular sequence of numbers) affects the running time of the algorithm.
The running time then is the sum of the geometric series (a * (r^num_terms - 1) / (r - 1)):
T(n) = 2 + 4 + ... + 2^(log2 n)
     = 2 * (2^(log2 n) - 1)
     = 2 * (n - 1)
     ⩽ 3n = O(n)
Thus, you can't be doing work that is more than some constant multiple of n. Hence, the running time of this algorithm is O(n).
You can't be doing work that is less than some (other) constant multiple of n either, since you have to go through the increments in the inner loop as shown above. Thus, the running time of this algorithm is also ≥ c.n, i.e. it is Ω(n).
Together, this means that running time of this algorithm is Θ(n).
You can't use i in your final expression; only n.
You can easily see that the inner loop executes i times each time it is reached. And it sounds like you've figured out the different values that i can have. So add up those values, and you have the total amount of work.
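To see the Θ(n) bound empirically, one can count how many times the inner increment j = j + 1 executes; the counter and the function name below are additions for measurement only, not part of the original code:
function innerIncrements(n) {
    let count = 0;
    let i = 1;
    while (i < n) {
        let j = 0;
        while (j <= i) {
            j = j + 1;
            count++;             // one unit of inner-loop work
        }
        i = 2 * i;
    }
    return count;
}

for (const n of [1e3, 1e4, 1e5, 1e6]) {
    console.log(n, innerIncrements(n) / n);   // stays bounded by a small constant
}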

What is the time complexity of the code?

Is the time complexity of the following code O(NV^2)?
for i from 1 to N:
    for j from 1 to V:
        for k from 1 to A[i]:   // max(A) = V
            z = z + k
Yeah, whenever we talk about O-notation, we always think about the upper bound (or the worst case).
So, the complexity of this code will be equal to
O(N * V * maximum_value_of_A)
= O(N * V * V)   // since the maximum value of A is V, the third loop can iterate at most from 1 to V, i.e. V times
= O(N * V^2).
For sure it is O(NV^2), as that means the code is never slower than that. Because max(A) = V, you can say the worst case is when every index of A holds V; in that case the complexity is bounded by O(N * V * V).
Very roughly, the complexity of the for k loop is O(avg(A)) per execution. This allows us to say that the whole function is Omega(N * V * avg(A)), where avg(A) <= V.
A Theta bound (meaning the asymptotic complexity) could be stated as Theta(N * V * O(V)), with O(V) representing the complexity of a function which never grows faster than V but is not constant.
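A small counting sketch for the worst case discussed above, with every entry of A equal to V (the array, sizes, and function name are made up for illustration):
function innermostIterations(N, V, A) {
    let ops = 0;
    for (let i = 0; i < N; i++) {
        for (let j = 0; j < V; j++) {
            for (let k = 1; k <= A[i]; k++) {
                ops++;                        // stands in for z = z + k
            }
        }
    }
    return ops;
}

const N = 50, V = 40;
const worstA = new Array(N).fill(V);          // max(A) = V at every index
console.log(innermostIterations(N, V, worstA), N * V * V);   // both print 80000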
