What is the time complexity of this algorithm?

Write a program that takes an integer and prints out all ways to multiply smaller integers that equal the original number, without repeating sets of factors. In other words, if your output contains 4 * 3, you should not print 3 * 4 again, as that would be a repeated set. Note that this is not asking for prime factorization only. Also, you can assume that the input integers are reasonable in size; correctness is more important than efficiency. For example, PrintFactors(12) should print:
12 * 1
6 * 2
4 * 3
3 * 2 * 2
public void printFactors(int number) {
    printFactors("", number, number);
}

public void printFactors(String expression, int dividend, int previous) {
    // Top-level call only: print the trivial factorization "N * 1".
    if (expression.isEmpty())
        System.out.println(previous + " * 1");
    for (int factor = dividend - 1; factor >= 2; --factor) {
        if (dividend % factor == 0 && factor <= previous) {
            int next = dividend / factor;
            if (next <= factor && next <= previous)
                System.out.println(expression + factor + " * " + next);
            printFactors(expression + factor + " * ", next, factor);
        }
    }
}
I think it is as follows:
If the given number is N and the number of prime factors of N is d, then the time complexity is O(N^d), because the recursion depth goes up to the number of prime factors. But that is not a tight bound. Any suggestions?

Two ideas:
The algorithm is output-sensitive. Outputting one factorization uses at most O(N) iterations of the loop, so overall we have O(N * number_of_factorizations).
Alternatively, via the Master theorem, the recurrence is F(N) = d * F(N/2) + O(N), so overall we have O(N^(log_2 d)).
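As an illustrative instance (choosing d = 3 purely for concreteness, not a value from the question): F(N) = 3 * F(N/2) + O(N) solves by the Master theorem to O(N^(log_2 3)) ≈ O(N^1.585).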

The time complexity should be:
number of iterations * (number of sub-calls)^depth
There are O(log N) sub-calls instead of O(N), since the number of divisors of N is O(log N).
The depth of the recursion is also O(log N), and the number of iterations in every recursive call is less than N/(2^depth), so the overall time complexity is O(N * ((log N)/2)^(log N)).

The time complexity is calculated as: total number of iterations multiplied by (number of sub-iterations)^depth. So the overall complexity is Y(n) = O(no. of dividends) * O(number of factorizations) + O(no. of (factors - 2) in the loop).
Example: PrintFactors(12) gives 12 * 1, 6 * 2, 4 * 3, 3 * 2 * 2
O(no. of dividends) = 12
O(number of factorizations) = 3
O(no. of factors - 2) (in the case of 3 * 2 * 2) = 1 extra
Overall: O(N^(log_2(no. of dividends)))
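As a rough sanity check on these bounds, one can instrument the method with an iteration counter and run it on a few inputs (a minimal sketch; the class name, the static counter, and the test values are assumptions for illustration only):

public class FactorCount {
    static long iterations = 0;

    static void printFactors(int number) {
        printFactors("", number, number);
    }

    static void printFactors(String expression, int dividend, int previous) {
        if (expression.isEmpty())
            System.out.println(previous + " * 1");
        for (int factor = dividend - 1; factor >= 2; --factor) {
            iterations++;  // count every loop iteration, to compare against the bounds above
            if (dividend % factor == 0 && factor <= previous) {
                int next = dividend / factor;
                if (next <= factor && next <= previous)
                    System.out.println(expression + factor + " * " + next);
                printFactors(expression + factor + " * ", next, factor);
            }
        }
    }

    public static void main(String[] args) {
        for (int n : new int[]{12, 60, 360}) {
            iterations = 0;
            printFactors(n);
            System.out.println("N = " + n + " -> " + iterations + " loop iterations");
        }
    }
}

For N = 12 this prints the four factorizations from the question, plus the total iteration count, so the growth of the counter can be compared with the number of factorizations as N increases.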

Related

What is the time complexity of the isPascal() method?

static int isPascal(int n) {
    int sum = 0;
    int nthVal = 1;
    while (sum < n) {
        sum = sum + nthVal;
        nthVal++;
    }
    return sum == n ? 1 : 0;
}
Here the function checks whether a given number is a Pascal number or not. A Pascal number is a number that is the sum of the integers from 1 to i for some i.
For example, 6 is a Pascal number because 6 = 1 + 2 + 3.
What will be the time complexity of this function? Will it be O(log n)? If so, what will be the base of the log here?
If you consider calculating the square root an O(1) operation, you can do this check in O(1) with the help of the formula for the sum of the first i natural numbers:
sum(i) = (i^2 + i)/2
Now in your case you don't know i, but you know sum(i), because that's the n you want to test. So you have
n = (i^2 + i)/2
or
i^2 + i - 2n = 0
Solving this quadratic equation with the respective formula gives
i = -1/2 + sqrt(2*n + 1/4)
You can discard the second solution of the equation, because i must be > 0 to be valid. If the resulting i is an integer, n is a Pascal number; otherwise it isn't.
From that formula it also follows that your iterative solution runs in O(sqrt(n)).
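A minimal sketch of that constant-time check, in Java to match the question's code (the method name is an assumption; rounding back to an integer and re-checking avoids floating-point edge cases):

static boolean isPascalConstantTime(int n) {
    if (n < 1) return false;
    // i = -1/2 + sqrt(2n + 1/4); n is a Pascal (triangular) number iff this i is a positive integer
    long i = Math.round(-0.5 + Math.sqrt(2.0 * n + 0.25));
    return i * (i + 1) / 2 == n;
}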

Time complexity of this power function

I have this code to calculate the power of a certain number:
func power(x, n int) int {
    if n == 0 {
        return 1
    }
    if n == 1 {
        return x
    }
    if n%2 == 0 {
        return power(x, n/2) * power(x, n/2)
    } else {
        return power(x, n/2) * power(x, n/2) * x
    }
}
So, the total number of executions is 1 + 2 + 4 + ... + 2^k,
and according to the formula for the sum of a geometric progression,
a(1 - r^n) / (1 - r)
the sum of the execution counts is 2^(k+1) - 1, i.e. on the order of 2^k, where k is the height of the recursion tree.
Hence the time complexity is 2^(log n).
Am I correct? Thanks :)
Yes.
Another way of thinking about the complexity of recursive functions is (number of calls per level)^(height of the recursion tree).
In each call you make two calls, each of which divides n by two, so the height of the tree is log n and the time complexity is 2^(log n), which is O(n).
See a much more formal proof here:
https://cs.stackexchange.com/questions/69539/time-complexity-of-recursive-power-code
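To see the 2^(log n) = O(n) call count concretely, the function can be instrumented with a counter (a sketch; written in Java here rather than Go, and the class name and test values are assumptions for illustration):

public class PowerCallCount {
    static long calls = 0;

    static long power(long x, long n) {
        calls++;
        if (n == 0) return 1;
        if (n == 1) return x;
        if (n % 2 == 0)
            return power(x, n / 2) * power(x, n / 2);
        return power(x, n / 2) * power(x, n / 2) * x;
    }

    public static void main(String[] args) {
        for (long n : new long[]{16, 64, 1024}) {
            calls = 0;
            power(2, n);  // the result overflows for large n, but only the call count matters here
            System.out.println("n = " + n + " -> " + calls + " calls (2n - 1 = " + (2 * n - 1) + ")");
        }
    }
}

For powers of two the count is exactly 2n - 1, matching the geometric-series sum above.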
Every time you divide n by 2, until n <= 1. So think about how many times you can reduce n to 1 just by dividing by 2. Let's see,
n = 26
n1 = 13
n2 = 6 (take floor of 13/2)
n3 = 3
n4 = 1 (take floor of 3/2)
Let's say the x-th power of 2 is the first one greater than or equal to n. Then,
2^x >= n
or, log2(2^x) >= log2(n)
or, x >= log2(n), with the smallest such x being about log2(n)
That is how you find the time complexity of your algorithm to be log2(n).

Time complexity: theory vs reality

I'm currently doing an assignment that requires us to discuss time complexities of different algorithms.
Specifically sum1 and sum2
def sum1(a):
    """Return the sum of the elements in the list a."""
    n = len(a)
    if n == 0:
        return 0
    if n == 1:
        return a[0]
    return sum1(a[:n/2]) + sum1(a[n/2:])

def sum2(a):
    """Return the sum of the elements in the list a."""
    return _sum(a, 0, len(a)-1)

def _sum(a, i, j):
    """Return the sum of the elements from a[i] to a[j]."""
    if i > j:
        return 0
    if i == j:
        return a[i]
    mid = (i+j)/2
    return _sum(a, i, mid) + _sum(a, mid+1, j)
Using the Master theorem, my best guess for both of these is
T(n) = 2*T(n/2)
which according to Wikipedia should equate to O(n), if I haven't made any mistakes in my assumptions. However, when I benchmark with arrays of different lengths N filled with random integers in the range 1 to 100, I get the following result.
I've run the benchmark multiple times and I get the same result each time: sum2 seems to be twice as fast as sum1, which baffles me since they should perform the same number of operations (?).
My question is: are these algorithms both linear, and if so, why do their run times differ?
If it matters, I'm running these tests on Python 2.7.14.
sum1 looks like O(n) on the surface, but for sum1 T(n) is actually 2T(n/2) + 2*n/2. This is because of the list slicing operations which themselves are O(n). Using the master theorem, the complexity becomes O(n log n) which causes the difference.
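Spelled out as recurrences (a sketch of the same reasoning; c and c' are just illustrative constants):
sum1: T(n) = 2*T(n/2) + c*n   (the slices a[:n/2] and a[n/2:] each copy about n/2 elements)  =>  O(n log n)
sum2: T(n) = 2*T(n/2) + c'    (only index arithmetic, no copying)  =>  O(n)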
Thus, for sum1, the time taken t1 = k1 * n log n. For sum2, time taken t2 = k2 * n.
Since you are plotting a time vs log n graph, let x = log n. Then,
t1 = k1 * x * 10^x
t2 = k2 * 10^x
With suitable values for k1 and k2, you get a graph very similar to yours. From your data, when x = 6, 0.6 ~ k1 * 6 * 10^6 or k1 ~ 10^(-7) and 0.3 ~ k2 * 10^6 or k2 = 3 * 10^(-7).
Your graph has log10(N) on the x-axis, which means that the right-most data points are for an N value that's ten times the previous ones. And, indeed, they take roughly ten times as long. So that's a linear progression, as you expect.

Calculate complexity of recursion algorithm

static void func3(int n) {
    for (int i = 1; i < n; i++)
        System.out.println("*");
    if (n <= 1) {
        System.out.println("*");
        return;
    }
    if (n % 2 != 0)    // check if odd
        func3(n - 1);
    func3(n / 2);
    return;
}
I need to calculate the complexity of this algorithm. How can I do it when there is a for loop in the code and two calls to func3?
The bit pattern of n is very helpful in illustrating this problem.
Integer division of n by 2 is equivalent to right-shifting the bit pattern by one place, discarding the least significant bit (LSB). e.g.:
                         binary
------------------------------------
n = 15                 | 0000 1111
n / 2 = 7 (round down) | 0000 0111 (1) <- discard
An odd number's LSB is always 1.
A power of two has only 1 bit set.
Therefore:
The best case is when n is a power of 2: n stays even all the way down, so func3(n - 1) is never called before the n = 1 base case is reached. In this case the time complexity is:
T(n) = T(n/2) + O(n) = O(n)
What is the worst case, when func3(n - 1) is always called once in each call to func3? The result of n / 2 must always be odd, hence:
All significant bits of n must be 1.
This corresponds to n equal to a power-of-two minus one, e.g. 3, 7, 15, 31, 63, ...
In this case, the call to func3(n - 1) will not take the odd branch again, since if n is odd then n - 1 must be even. Also, n / 2 = (n - 1) / 2 (for integer division). Therefore the recurrence relation is:
T(n) = 2 * T(n/2) + O(n) = O(n log n)
One can obtain these results via the Master theorem.
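A small experiment makes the best-case/worst-case gap visible by counting loop iterations instead of printing (a sketch; the class name, the counter, and the chosen values of n are assumptions, and the print in the base case is dropped since it doesn't affect the asymptotics):

public class Func3Count {
    static long iterations = 0;

    static void func3(int n) {
        for (int i = 1; i < n; i++)
            iterations++;          // stands in for the println("*") in the loop
        if (n <= 1)
            return;
        if (n % 2 != 0)            // odd
            func3(n - 1);
        func3(n / 2);
    }

    public static void main(String[] args) {
        int n = 1 << 16;           // 65536, a power of two: best case, roughly 2n iterations
        func3(n);
        System.out.println("n = 2^16     -> " + iterations + " iterations");
        iterations = 0;
        func3(n - 1);              // 65535 = 2^16 - 1: worst case, roughly n log n iterations
        System.out.println("n = 2^16 - 1 -> " + iterations + " iterations");
    }
}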

How do I evaluate and explain the running time for this algorithm?

Pseudocode:
s <- 0
for i = 1 to n do
    if A[i] = 1 then
        for j = 1 to n do
            {constant number of elementary operations}
        endfor
    else
        s <- s + A[i]
    endif
endfor
where A is an array of n integers, each of which is a random value between 1 and 6.
I'm at a loss here... Picking from my notes and some other sources online, I get
T(n) = C1(N) + C2 + C3(N) + C4(N) + C5
where C1(N) and C3(N) are the for loops, and C4(N) is the constant number of elementary operations. Though I have a strong feeling that I'm wrong.
You are looping from 1..n, and each loop ALSO loops from 1..n (in the worst case). O(n^2) right?
Put another way: You shouldn't be adding C3(N). That would be the case if you had two independent for loops from 1..n. But you have a nested loop. You will run the inner loop N times, and the outer loop N times. N*N = O(n^2).
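For concreteness, here is a direct Java translation of the pseudocode (a sketch; the method name and the empty inner-loop body standing in for the constant work are assumptions):

static long run(int[] A) {
    long s = 0;
    int n = A.length;
    for (int i = 0; i < n; i++) {
        if (A[i] == 1) {
            // when every A[i] == 1, this inner loop runs n times for each of the
            // n outer iterations, giving the O(n^2) worst case described above
            for (int j = 0; j < n; j++) {
                // constant number of elementary operations
            }
        } else {
            s += A[i];
        }
    }
    return s;
}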
Let's think for a second about what this algorithm does.
You are basically looping through every element in the array at least once (the outer for loop, the one with i). Whenever the value A[i] is 1, you also loop through the whole array again with the inner j loop.
In your worst case scenario, you are running against an array of all 1's.
In that case, your complexity is:
time to init s
n * (time to test a[i] == 1 + n * time of {constant ...})
Which means: T = T(s) + n * (T(test) + n * T(const)) = n^2 * T(const) + n*T(test) + T(s).
Asymptotically, this is a O(n^2).
But this was a worst-case analysis (the one you should perform most of the time). What if we want an average-case analysis?
In that case, assuming a uniform distribution of values in A, you are going to enter the inner j loop, on average, 1/6 of the time.
So you'd get:
- time to init s
- n * (time to test a[i] == 1 + 1/6 * n * time of {constant ...} + 5/6 * T(increment s))
Which means: T = T(s) + n * (T(test) + 1/6 * n * T(const) + 5/6 * T(inc s)) = 1/6* n^2 * T(const) + n * (5/6 * T(inc s) + T(test)) + T(s).
Again, asymptotically this is still O(n^2), but according to the value of T(inc s) this could be larger or lower than the other case.
Fun exercise: can you estimate the expected average run time for a generic distribution of values in A, instead of a uniform one?
