I've pasted a screenshot of the coding question, and my solution is written below. How can I calculate the time complexity of this kind of algorithm (one containing a while loop and if-else statements)? Also, if the time complexity is not O(1), is there a method to improve it to O(1)? Thank you
def solution(N, K):
    glass_minimum = 0
    while K != 0:
        if N == 0 and K != 0:
            return -1
        elif K > N:
            K -= N
            N -= 1
            glass_minimum += 1
        else:  # K < N or K == N
            K = 0
            glass_minimum += 1
    return glass_minimum
I need to calculate the multiplicative order to solve a discrete logarithm problem. I've tried to use the algorithm below, but it doesn't work with big numbers.
from math import gcd

def multiplicativeOrder(A, N):
    if gcd(A, N) != 1:
        return -1
    result = 1
    K = 1
    while K < N:
        result = (result * A) % N
        if result == 1:
            return K
        K = K + 1
    return -1
There are faster ways of doing this, based on factorizing n and then applying a lot of math. As just a baseline improvement, however, here is one that goes from O(n) to O(sqrt(n)) using the baby-step giant-step idea. It's also fairly simple compared to the alternative.
from math import gcd

def multiplicative_order2(a, n):
    if gcd(a, n) != 1:
        return -1
    visited = {}                   # maps a^k % n -> k + 1
    count = slow = fast = 1
    while fast not in visited:
        visited[slow] = count
        count += 1
        slow = (slow * a) % n      # slow = a^(count - 1), one step at a time
        fast = (fast * slow) % n   # fast = a^(1 + 2 + ... + (count - 1))
    # at exit, fast = a^T with T = count*(count-1)/2, and it collided with the
    # stored power a^(visited[fast] - 1); the exponent difference is the order
    return count * (count - 1) // 2 - visited[fast] + 1
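As a quick sanity check, here is a small test harness (my own addition, not part of the original answer; brute_force_order is a hypothetical reference name) comparing the fast version against the O(n) approach on small inputs:

from math import gcd

def brute_force_order(a, n):
    # O(n) reference: step through powers of a until we hit 1
    if gcd(a, n) != 1:
        return -1
    result, k = a % n, 1
    while result != 1:
        result = (result * a) % n
        k += 1
    return k

for n in range(2, 200):
    for a in range(1, n):
        assert multiplicative_order2(a, n) == brute_force_order(a, n), (a, n)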
I am looking for an efficient algorithm for the following problem: for any N, find all i and j such that N = i^j.
I can solve it in O(N^2) as follows:
for i = 1 to N
{
    for j = 1 to N
    {
        if (Power(i, j) == N)
            print(i, j)
    }
}
I am looking for a better algorithm (or a program in any language) if possible.
Given that i^j=N, you can solve the equation for j by taking the log of both sides:
j log(i) = log(N) or j = log(N) / log(i). So the algorithm becomes
for i = 2 to N
{
    j = log(N) / log(i)
    if (Power(i, j) == N)
        print(i, j)
}
Note that due to rounding errors with floating-point calculations, you might want to check j-1 and j+1 as well, but even so, this is an O(N) solution.
Also, you need to skip i=1, since log(1) = 0 and that would result in a divide-by-zero error. In other words, N=1 needs to be treated as a special case, or not allowed, since the solution for N=1 is i=1 and j = any value.
As M Oehm pointed out in the comments, an even better solution is to iterate over j and compute i with pow(N, 1.0/j). That reduces the time complexity to O(log N), since the maximum value of j is log2(N).
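A minimal Python sketch of that idea (my own illustration; the ±1 guard follows the rounding caveat above):

from math import log2

def perfect_power_pairs(N):
    # iterate over the exponent j; for each j there are only a couple of
    # integer candidates for i = N^(1/j), so this is O(log N) iterations
    pairs = []
    for j in range(2, int(log2(N)) + 1):
        i = round(N ** (1.0 / j))
        # guard against floating-point rounding by also checking neighbors
        for cand in (i - 1, i, i + 1):
            if cand >= 2 and cand ** j == N:
                pairs.append((cand, j))
    return pairs

print(perfect_power_pairs(64))   # [(8, 2), (4, 3), (2, 6)]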
Here is a method you can use.
Let's say you have to solve the equation:
a^b = n  // b and n are known
You can find a using binary search on a. If you reach a condition where
x^b < n and (x+1)^b > n
then no a exists such that a^b = n for that b.
If you apply this method for each b in the range 1..log(n), you get all possible pairs.
So the complexity of this method is O(log n * log n).
Follow these steps:
function ifPower(n, b)
    min = 1, max = n
    while (min <= max)
        mid = min + (max - min) / 2
        k = mid^b, l = (mid + 1)^b
        if (k == n)
            return mid
        if (l == n)
            return mid + 1
        if (k < n && l > n)
            return -1    // n lies strictly between two consecutive b-th powers
        if (k > n)
            max = mid - 1
        else
            min = mid + 2    // +2 as we are already checking mid + 1
    return -1

function findAll(n)
    s = log2(n)
    for i in range 2 to s    // starting from 2 to ignore base cases, powers 0 and 1; you can handle them if required
        p = ifPower(n, i)
        if (p != -1)
            print(p, i)
Here, in the algorithm, a^b means a raised to the power of b, not a xor b (it's obvious, but just saying).
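For reference, here is a direct Python translation of the pseudocode (my own sketch; Python's arbitrary-precision integers keep mid ** b exact):

from math import log2

def if_power(n, b):
    # binary search for an integer a with a^b == n; returns -1 if none exists
    lo, hi = 1, n
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        k, l = mid ** b, (mid + 1) ** b
        if k == n:
            return mid
        if l == n:
            return mid + 1
        if k < n < l:
            return -1        # n sits strictly between consecutive b-th powers
        if k > n:
            hi = mid - 1
        else:
            lo = mid + 2     # mid + 1 has already been checked
    return -1

def find_all(n):
    for b in range(2, int(log2(n)) + 1):
        a = if_power(n, b)
        if a != -1:
            print(a, b)

find_all(64)   # prints: 8 2, then 4 3, then 2 6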
What is the Big-Oh formula for the following code fragment:
k = 0
for i in range(1, 100):
    for j in range(i, 100):
        k = k + 1
I think it's n^2? Is this right? Also, does it have to have the variable n in it?
The algorithmic complexity of code isn't determined by how long it takes to run. It's determined by how much extra work the computer has to do as the input grows.
This code is O(1) because, regardless of any input you give it, it does the exact same thing in the same amount of time:
k = 0
for i in range(100):
    for j in range(100):
        k += 1
This code, however, would have O(N) time if N is a number given as input, because each increment of N makes it do one more pass through the inner loop:
k = 0
for i in range(N):
    for j in range(100):
        k += 1
And this code would have O(N^2) time if N is a number given as input, because each increment of N makes the computer do on the order of N more things:
k = 0
for i in range(N):
    for j in range(N):
        k += 1
So your code is O(1), as it does the same amount of work regardless of any input.
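To make the growth concrete, here is a small counting sketch (my own addition, not part of the original answer); count_ops mirrors the O(N^2) fragment above:

def count_ops(N):
    # k ends up equal to the number of increments performed
    k = 0
    for i in range(N):
        for j in range(N):
            k += 1
    return k

print([count_ops(N) for N in (10, 20, 40)])   # [100, 400, 1600]; doubling N quadruples the work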
This is the algorithm. I think its time complexity is O(n^2) because of the loop within a loop. How can I explain that?
FindSum(array, n, t)
    i := 0
    found := 0
    array := quick_sort(array, 0, n - 1)
    while i < n - 2
        j = i + 1
        k = n - 1
        while k > j
            sum = array[i] + array[j] + array[k]
            if sum == t
                found += 1
                k -= 1
                j += 1
            else if sum > t
                k -= 1
            else
                j += 1
        i += 1
Yes, the complexity is indeed O(n^2).
For a given i, the inner loop runs between (k-j)/2 = (n-i-2)/2 and k-j = n-i-2 iterations, since each iteration closes the gap between j and k by one or two.
Summing the upper bound over all possible values of i from 0 to n-2 gives:
T = (n-0-2) + (n-1-2) + (n-2-2) + ... + (n-(n-2)-2)
  = (n-2) + (n-3) + ... + 0
This is the sum of an arithmetic progression, which comes to (n-1)(n-2)/2, and that is quadratic. Note that the extra factor of 2 in the best case of the inner loop does not change the time complexity in terms of big-O notation.
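As a quick empirical check (my own sketch, not part of the original answer), you can count the worst-case inner-loop iterations and confirm they match the closed form:

def inner_iterations(n):
    # worst-case inner-loop steps: for each i, the gap k - j starts at
    # n - i - 2 and closes by at least one per iteration
    return sum(n - i - 2 for i in range(n - 2))

for n in (10, 100, 1000):
    print(n, inner_iterations(n), (n - 1) * (n - 2) // 2)   # both columns agree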
Suppose we have an array w that contains n integers. Given the following definition and the following pseudocode, what is the time complexity of the algorithm with respect to n?
idx[i] = max(j) : 1 <= j < i and w[j] < w[i]
alg:
Data: w : array of integers, 1-based
Result: idx : for each i, a pointer to the maximum index j < i holding a smaller value

idx[1] = -1
for i := 2 to n do
    j = i - 1
    while j <> -1 and w[j] > w[i]
    {
        j = idx[j]
    }
    idx[i] = j
end
You have 2 loops here:
The first loop (the for loop) runs n-1 times, hence O(n).
The second loop (the while loop) runs at most i-1 times for a given i, hence O(n) per outer iteration in the worst case.
That combines to an O(n^2) algorithm. The other operations are either constant time or O(n) time, hence neglected.
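For completeness, here is a runnable Python translation of the pseudocode (my own sketch, 0-based with -1 as the sentinel; I use >= in the loop test so that idx matches the strict w[j] < w[i] definition):

def build_idx(w):
    # idx[i] = largest j < i with w[j] < w[i], or -1 if no such j exists
    n = len(w)
    idx = [-1] * n
    for i in range(1, n):
        j = i - 1
        # follow the idx chain past elements that are not strictly smaller
        while j != -1 and w[j] >= w[i]:
            j = idx[j]
        idx[i] = j
    return idx

print(build_idx([3, 1, 4, 1, 5, 9, 2, 6]))   # [-1, -1, 1, -1, 3, 4, 3, 6]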