I have developed the following algorithm to compute a ciphertext:
c = s(rp + m_i) mod q, where p and q are random primes and s, r are random numbers drawn from Z_p. I searched for the running time of the mod operation; one reference gives it as O(n) and another as O(n^2). Meanwhile, I found the complexity of modular exponentiation, i.e. s^d (rp + m_i) mod p, given as O(log n), which is smaller than the plain mod operation. Please advise: are these values correct?
Related
I'm taking an algorithms class and I repeatedly have trouble analyzing the runtime of code when a line involves multiplication or division. How can I find the big-theta of multiplying an n-digit number by an m-digit number (where n > m)? Is it the same as multiplying two n-digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from n*n/100? Or from n*n/n?
You can always look things up at Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), since 100 is a constant and length(count) is at most 3.
In general, multiplying two numbers of n and m digits takes O(nm), and the same holds for division (here I assume we are talking about long division). There are more sophisticated algorithms that beat this complexity.
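As a quick illustration of where the O(nm) bound comes from, here is a minimal schoolbook-multiplication sketch on digit lists (the digit-list, least-significant-first base-10 representation is my own choice for the example): every digit of x meets every digit of y exactly once, giving nm digit products.

def schoolbook_multiply(x, y):
    # x and y are lists of decimal digits, least-significant digit first.
    result = [0] * (len(x) + len(y))
    for i, xi in enumerate(x):          # n outer iterations
        carry = 0
        for j, yj in enumerate(y):      # m inner iterations
            total = result[i + j] + xi * yj + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(y)] += carry
    return result

print(schoolbook_multiply([3, 2, 1], [5, 4]))   # 123 * 45 = 5535 -> [5, 3, 5, 5, 0]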
To make things clearer, I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result D has n+m digits. Now consider D / C: its complexity is O((n+m)p), so the overall complexity is the sum of the two, O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C; the complexity is O(mp), and we get a number E of at most m digits. Now we calculate A * E, with complexity O(nm). The overall complexity is O(mp + nm) = O(m(n+p)).
From the analysis you can see that it is beneficial to divide first. Of course, in a real-life situation you would also account for numerical stability.
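To make the intermediate operand sizes concrete, here is a small Python check (the particular values of A, B, C are arbitrary picks; note that with integer division the two orderings generally give different results, so this only compares sizes, not values):

def digits(x):
    return len(str(abs(x)))

# Arbitrary example: A has ~51 digits (n), B ~21 (m), C ~6 (p).
A, B, C = 10**50 + 7, 10**20 + 9, 10**5 + 3

print(digits(A * B))    # ~71 digits: the multiply-first intermediate has about n+m digits
print(digits(B // C))   # ~16 digits: the divide-first intermediate has at most about m digits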
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.

When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).

Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).
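As a toy sketch of the "cut the larger operand into k pieces" strategy, here is a Python version that splits the larger operand into fixed-size bit chunks (chunk_bits is an assumed tuning parameter for illustration; in the book's setting the chunk size would match the size of the smaller operand):

def unbalanced_multiply(a, b, chunk_bits=64):
    if a < b:
        a, b = b, a                           # make a the larger operand
    result, shift = 0, 0
    while a:
        piece = a & ((1 << chunk_bits) - 1)   # next chunk of the larger operand
        result += (piece * b) << shift        # one of the k smaller products
        a >>= chunk_bits
        shift += chunk_bits
    return result

print(unbalanced_multiply(2**1000 + 17, 2**200 + 5) == (2**1000 + 17) * (2**200 + 5))   # True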
I'm trying to find the time complexity / big-theta of the following:
def f(n):
    i = 2
    while i < n:
        print(i)
        i = i * i
The only approach I know for solving this is to find a general formula for i_k and then solve i_k >= n; however, I end up with log(log n / log 2) / log 2 as my k value, which seems awfully wrong to me, and I'm not sure how I would translate that into a big-theta expression. Any help would be appreciated!
That answer looks good, actually! If you rewrite log x / log 2 as log_2 x (or lg x, for short), what you have is that the number of iterations is lg lg n. Since the value of i in iteration k of the loop is 2^(2^k), this means that the loop stops when i reaches the value 2^(2^(lg lg n)) = 2^(lg n) = n, which matches the loop bound.
More generally, the number of times you can square a value before it exceeds n is Θ(log log n), and similarly the number of square roots you can take before you drop a number n down to a constant is Θ(log log n), so your answer is pretty much what you’d expect.
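A quick empirical check of the Θ(log log n) iteration count (the helper name squaring_steps is mine):

import math

def squaring_steps(n):
    i, steps = 2, 0
    while i < n:          # same loop as in the question
        steps += 1
        i = i * i
    return steps

for n in (10, 1000, 10**6, 10**12, 10**100):
    # the iteration count tracks lg lg n up to a small additive constant
    print(n, squaring_steps(n), math.log2(math.log2(n)))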
I am new to asymptotic notation and here is the algorithm. What is the worst-case tight bound on its time complexity, and why?
F(A, B) { // A and B are positive
    while A > 0
        print(A mod B)
        A = A div B
}
The time complexity of this:
F(A, B) { // A and B are positive
    while A > 0
        A = A / B
}
is equal to the number of times the loop executes; let's call that l. It equals how many times B has to divide A before "A > 0" becomes false.
From this question, we know that:
Algorithm D in 4.3.1 of Knuth's book "The Art of Computer Programming" (Volume 2) performs any long division in O(m) steps, where m is the number of digits of A, so we have an upper bound.
Thus the time complexity is: O(l * m)
Now this:
print(A mod B)
assuming that I/O is constant (which, of course, is not true in the real world), you need the complexity of the modulo operation itself, which, from this, we know is:
O(log A log B)
and will run l times.
As a result, we have:
O(l * (m + log A log B))
It's not a very good question. First, if B is 1, the algorithm will never complete; presumably B must be 2 or greater. So we have O(log A) steps. But the question now is whether the division itself counts as an "operation" or not. If A and B are unbounded, then it must inherently be logarithmic as well. But normally the code will be running on a processor that implements all divisions in 32 or 64 bits and cannot divide a number out of that range, so generally we say that divisions are "operations".
If we say division is logarithmic and B is small, then we're at O((log A)^2).
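A small sketch to confirm the iteration count: the loop runs once per base-B digit of A, i.e. ⌊log_B A⌋ + 1 times, which is the l from the answer above and the O(log A) step count here (the helper below is only an illustration, assuming A ≥ 1 and B ≥ 2):

def loop_iterations(A, B):
    count = 0
    while A > 0:
        _ = A % B       # the value that would be printed (a base-B digit of A)
        A //= B
        count += 1
    return count

print(loop_iterations(10**6, 10))   # 7 = floor(log10(10**6)) + 1
print(loop_iterations(2**32, 2))    # 33 = floor(log2(2**32)) + 1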
Given a positive integer m, find four integers a, b, c, d such that a^2 + b^2 + c^2 + d^2 = m in O(m^2 log m). Extra space can be used.
I can think of an O(m^3) solution, but I am confused about the O(m^2 log m) solution.
First hint:
What is the complexity of sorting the squared elements from 1 to m^2?
Second hint:
Have a look at this post for some help :
Break time, find any triple which matches pythagoras equation in O(n^2)
Third Hint:
If you need more help (from yi_H's response on the previous post):
I guess O(n^2 log n) would be to sort the numbers, take any two pairs (O(n^2)) and see whether there is c in the number for which c^2 = a^2 + b^2. You can do the lookup for c with binary search, that's O(log(n)).
author: yi_H
Now compare n and sqrt(m)
Hope you can figure out a solution with this.
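Putting the hints together, here is a hedged sketch of one way the search could look in Python (I use math.isqrt for the perfect-square test in place of a binary search over a sorted list of squares; the work is roughly O(m^1.5) checks, comfortably inside the O(m^2 log m) budget):

import math

def four_squares(m):
    limit = math.isqrt(m)
    for a in range(limit + 1):
        for b in range(a, limit + 1):
            r = m - a * a - b * b
            if r < 0:
                break
            for c in range(math.isqrt(r) + 1):
                d2 = r - c * c
                d = math.isqrt(d2)
                if d * d == d2:          # is the remainder a perfect square?
                    return a, b, c, d
    return None   # unreachable: Lagrange's theorem guarantees a solution

print(four_squares(310))   # e.g. (0, 2, 9, 15), since 0 + 4 + 81 + 225 = 310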
There is a classical theorem of Lagrange that says that every natural number is the sum of four squares.
The Wikipedia page on this topic mentions that there is a randomized algorithm for computing the representation that runs in O(lg^2 m) time. (All the suggestions above are polynomial in m, i.e., they are exponential in the size of the problem instance, since the number m can be encoded in lg m bits.)
As an aside, Lagrange's theorem proves the undecidability of the integers with plus and times (since the naturals are undecidable, and can be defined in the integers with plus and times, by virtue of the theorem).
What's the complexity of a recursive program to find factorial of a number n? My hunch is that it might be O(n).
If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware -- as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)).
You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with it. x, the "length" or "number of digits" (in whatever base; it doesn't matter for big-O anyway) of N, grows as O(log N), of course.
If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
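For reference, here is a minimal Karatsuba sketch in Python, the algorithm behind the O(x ** 1.585) figure above (the decimal splitting and the small base case are simplifications for illustration, not a production implementation):

def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y                       # single-digit base case
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    a = karatsuba(x_hi, y_hi)
    c = karatsuba(x_lo, y_lo)
    b = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - c   # only three recursive products
    return a * base * base + b * base + c

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)   # True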
Assuming you're talking about the most naive factorial algorithm ever:
factorial(n):
    if (n = 0) then return 1
    otherwise return n * factorial(n-1)
Yes, the algorithm is linear, running in O(n) time. This is the case because it executes once each time it decrements the value n, and it decrements n until it reaches 0, meaning the function is called recursively n times. This assumes, of course, that both decrementing and multiplying are constant-time operations.
Of course, if you implement factorial some other way (for example, using addition recursively instead of multiplication), you can end up with a much more time-complex algorithm. I wouldn't advise using such an algorithm, though.
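As a sanity check on the linear call count, here is a direct Python transcription together with a small counting wrapper (the count_calls helper is my own addition):

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

def count_calls(n):
    calls = 0
    def counted(k):
        nonlocal calls
        calls += 1
        if k == 0:
            return 1
        return k * counted(k - 1)
    counted(n)
    return calls

print(factorial(5))       # 120
print(count_calls(5))     # 6 calls: one for each of 5, 4, 3, 2, 1, 0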
When you express the complexity of an algorithm, it is always as a function of the input size. It is only valid to assume that multiplication is an O(1) operation if the numbers that you are multiplying are of fixed size. For example, if you wanted to determine the complexity of an algorithm that computes matrix products, you might assume that the individual components of the matrices were of fixed size. Then it would be valid to assume that multiplication of two individual matrix components was O(1), and you would compute the complexity according to the number of entries in each matrix.
However, when you want to figure out the complexity of an algorithm to compute N! you have to assume that N can be arbitrarily large, so it is not valid to assume that multiplication is an O(1) operation.
If you want to multiply an n-bit number with an m-bit number the naive algorithm (the kind you do by hand) takes time O(mn), but there are faster algorithms.
If you want to analyze the complexity of the easy algorithm for computing N!
factorial(N)
    f = 1
    for i = 2 to N
        f = f * i
    return f
then at the k-th step in the for loop, you are multiplying (k-1)! by k. The number of bits used to represent (k-1)! is O(k log k) and the number of bits used to represent k is O(log k). So the time required to multiply (k-1)! and k is O(k (log k)^2) (assuming you use the naive multiplication algorithm). Then the total amount of time taken by the algorithm is the sum of the time taken at each step:
sum_{k=1}^{N} [k (log k)^2] <= (log N)^2 * sum_{k=1}^{N} [k] = O(N^2 (log N)^2)
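You can watch the Θ(k log k) bit growth of the partial products directly with Python's arbitrary-precision integers (a rough empirical check, not a proof):

import math

# Each step of the loop multiplies the partial product (k-1)!, which has
# Theta(k log k) bits, by k, which has Theta(log k) bits.
for k in (10, 100, 1000, 10000):
    print(k, math.factorial(k - 1).bit_length(), round(k * math.log2(k)))

The two right-hand columns agree up to a constant factor, as expected.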
You could improve this performance by using a faster multiplication algorithm, like Schönhage-Strassen, which takes time O(n log(n) log(log(n))) for two n-bit numbers.
The other way to improve performance is to use a better algorithm to compute N!. The fastest one that I know of first computes the prime factorization of N! and then multiplies all the prime factors.
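Here is a hedged sketch of that prime-factorization idea: the exponent of each prime p in N! comes from Legendre's formula, the sum over i of floor(N / p^i). Real implementations add binary splitting and fast multiplication on top; this only shows the structure.

import math

def factorial_by_primes(n):
    if n < 2:
        return 1
    # Sieve of Eratosthenes for the primes up to n.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    result = 1
    for p in range(2, n + 1):
        if not is_prime[p]:
            continue
        exponent, power = 0, p
        while power <= n:                 # Legendre's formula
            exponent += n // power
            power *= p
        result *= p ** exponent
    return result

print(factorial_by_primes(20) == math.factorial(20))   # True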
The time-complexity of recursive factorial would be:
factorial(n) {
    if (n = 0)
        return 1
    else
        return n * factorial(n-1)
}
So, unrolling the recurrence for the running time:

T(n) = T(n-1) + 3    (the 3 accounts for the three constant-time operations in each call: the multiplication, the subtraction, and the check on n)
     = T(n-2) + 6    (second recursive call)
     = T(n-3) + 9    (third recursive call)
     ...
     = T(n-k) + 3k

until k = n. Then,

     = T(n-n) + 3n
     = T(0) + 3n
     = 1 + 3n
Expressed in big-O notation: T(n) is directly proportional to n, therefore the time complexity of recursive factorial is O(n).
No extra space is allocated inside the calls, but the recursion stack holds up to n frames at once, so the space complexity is O(n).