Consider an array multiplier for multiplying two n bit numbers. If
each gate in the circuit has a unit delay, the total delay of the
multiplier is ?
Θ(1)
Θ(logn)
Θ(n)
Θ(n^2)
If you look at the image above, you will notice that the delay propagates along the diagonal of the array.
So the delay is approximately sqrt(2)*(2n-1),
which is Θ(n).
The number of gates on the critical path of an n-bit (n×n) array multiplier is about 2n-1.
So, if every single gate has unit delay, the total delay is
O(2n-1) = O(n).
It is of linear order.
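For intuition only, here is a minimal Python sketch (not a circuit simulation; the function name is just for illustration) that models the critical path as roughly 2n - 1 unit gate delays and shows that it grows linearly with the operand width:

# A minimal sketch: model the critical path of an n x n array multiplier
# as roughly 2n - 1 unit gate delays (carry rippling along the diagonal).
def array_multiplier_delay(n_bits: int) -> int:
    """Approximate critical-path length, in unit gate delays."""
    return 2 * n_bits - 1

for n in (4, 8, 16, 32, 64):
    print(n, array_multiplier_delay(n))  # delay doubles as n doubles -> Theta(n)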
I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
Prime(n):
    for i in range(2, n-1)
        if n % i == 0
            return False
    return True
This pseudocode runs in half the time of the previous example, but I'm struggling to understand whether the time complexity is still O(2^n) or not:
Prime(n):
    for i in range(2, (n/2+1))
        if n % i == 0
            return False
    return True
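For reference, here is a runnable Python version of both snippets (a sketch; the step counters are added only so the number of loop iterations can be compared):

# Runnable versions of the two pseudocode snippets, with iteration counters.
def prime_full(n):
    steps = 0
    for i in range(2, n - 1):
        steps += 1
        if n % i == 0:
            return False, steps
    return True, steps

def prime_half(n):
    steps = 0
    for i in range(2, n // 2 + 1):
        steps += 1
        if n % i == 0:
            return False, steps
    return True, steps

print(prime_full(101))   # (True, 98)  - about n iterations
print(prime_half(101))   # (True, 49)  - about n/2 iterations, same big-O class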
As a simple intuition, big-O and big-Θ (big-Theta) describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), often written interchangeably (though not accurately) as O(n); the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds.
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long with just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem by just 1, you need about a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second one is more general. Both lie inside exponential time, but neither of the two covers all of it; see the Wikipedia article on time complexity for more details.)
Note that these definitions depend significantly on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Fixed-width numbers (for example, 32-bit numbers). In such a context, an obvious measure of the size of the prime-testing problem is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm has to work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers, like 512 bits, 1024 bits, and so on. Also, in those contexts the security is measured in the number of those bits rather than in a particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm goes from linear Θ(N) to exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
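A small sketch of this point (the helper largest_prime_below is hypothetical, written only for the demo): for an m-bit prime, trial division does on the order of 2^m iterations, so each extra input bit roughly doubles the work.

# Count loop iterations of the naive Prime(n) from the question.
def trial_division_steps(n):
    steps = 0
    for i in range(2, n - 1):
        steps += 1
        if n % i == 0:
            return steps
    return steps

# Hypothetical helper, only for this demo: largest prime below a limit.
def largest_prime_below(limit):
    for n in range(limit - 1, 1, -1):
        if all(n % i for i in range(2, int(n ** 0.5) + 1)):
            return n

for m in range(8, 13):
    p = largest_prime_below(1 << m)       # an m-bit prime, roughly 2^m
    print(m, p, trial_division_steps(p))  # steps roughly double with each extra bit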
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same big O class, Theta(n). This is exponential if the input has lg n bits, and linear if it has n.
I hope this will explain why they are in fact linear.
Suppose you call the function and count how many times each line is executed:
Prime(n):                       # 1 time
    for i in range(2, n-1)      # about n times (exactly n - 3 iterations)
        if n % i == 0           # 1 time per iteration
            return False        # at most 1 time
    return True                 # at most 1 time
# overall -> about n
Prime(n):                       # 1 time
    for i in range(2, (n/2+1))  # about n/2 times (exactly n/2 - 1 iterations)
        if n % i == 0           # 1 time per iteration
            return False        # at most 1 time
    return True                 # at most 1 time
# overall -> about n/2 times -> still O(n)
This shows that Prime is a linear-time function.
The O(n^2) might come from the block of code in which this function is called.
function multiply(x, y)
Input: Two n-bit integers x and y, where y ≥ 0
Output: Their product

    if y = 0: return 0
    z = multiply(x, ⌊y/2⌋)
    if y is even:
        return 2z
    else:
        return x + 2z
As stated in my question, why is this function O(n^2)? This is the explanation from the book the example above comes from:
It must terminate after n recursive calls, because at each call y is halved—that is, its number of bits is decreased by one. And each recursive call requires these operations: a division by 2 (right shift); a test for odd/even (looking up the last bit); a multiplication by 2 (left shift); and possibly one addition, a total of O(n) bit operations. The total time taken is thus O(n^2), just as before.
Because of the left shifts and right shifts used for the multiplication and division by 2, I thought it would be bigger than O(n^2), maybe n^3.
Each of the operations (right shift, parity test, left shift, addition) takes a fixed amount of time per bit, so each recursive call does O(n) work. Remember that O(3n) is still O(n).
Since the function recurses once per bit of y (y is halved at every call), those O(n) steps are carried out O(n) times, making for a total complexity of O(n^2).
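For concreteness, here is a Python sketch of the recursive multiply above, instrumented with a rough bit-operation counter; the cost model (charging about one unit per operand bit per call) is an assumption that follows the book's accounting, not an exact count:

# Recursive multiplication by repeated halving of y, as in the pseudocode,
# with a rough counter for the shift/test/add work done per call.
def multiply(x, y, counter):
    if y == 0:
        return 0
    z = multiply(x, y >> 1, counter)                       # y halved: one fewer bit
    counter["bit_ops"] += x.bit_length() + y.bit_length()  # O(n) work per call
    if y % 2 == 0:
        return 2 * z                                       # left shift
    else:
        return x + 2 * z                                   # left shift plus addition

counter = {"bit_ops": 0}
print(multiply(12345, 6789, counter), 12345 * 6789)        # both print 83810205
print(counter["bit_ops"])                                  # ~n calls x O(n) work -> O(n^2)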
Is this a comp-sci textbook or a VLSI textbook? Because the answer depends on the complexity of the operations y==0, y/2, 2z, x+2z, and "y is even".
As an applications developer, I consider those to be constant time operations so they are all O(1). The Multiply function is then either O(log(Y)) or O(N) where N is the number of bits in Y. Same thing. Therefore, I conclude that this entire function is O(N).
Now, a computer engineer might argue that y/2 requires shifting N bits, and thus it is an O(N) operation. There's probably some CPU out there that works that way. Permit me to be absurd for a moment and argue that I could create an implementation for y==0 that takes O(N^47), thus this function is O(N^48). :-)
In reality, any modern-day N-bit processor will do bit shifts of an N-bit number in parallel, so they really are O(1). Maybe back on an 8088 this wasn't the case, but for any modern design it is true. So in practice, I argue this is O(N), not O(N^2).
I came across this problem that I am not sure how to solve:
Suppose A(.) is a subroutine that takes as input a number in binary, and takes linear time (that is, O(n), where n is the length (in bits) of the number).
Consider the following piece of code, which starts with an n-bit number x.
while x > 1:
    call A(x)
    x = x - 1
Assume that the subtraction takes O(n) time on an n-bit number.
(a) How many times does the inner loop iterate (as a function of n)? Leave your answer in big-O form.
(b) What is the overall running time (as a function of n), in big-O form?
My guess is that (a) is O(n^2) and (b) is O(n^3). Is this correct? The way I'm thinking about it is that the loop does two steps each time through, and it cycles x times, each time subtracting 1 from the n-bit number until x reaches 1. And for part (b), since A(.) takes O(n) time, we multiply that by the time it takes to execute the loop, and we then have the overall running time. Is my analysis correct?
Something that might help here is to write x ≈ 2^n, since if x has n bits its value is O(2^n). Therefore, the loop will run O(2^n) times.
Each iteration of the loop does O(n) work, giving an upper bound on the work of O(n · 2^n). This bound ends up being tight. Notice that for the first x/2 iterations of the loop, the value of x will still need n bits. Therefore, as a lower bound on the work done, we get x/2 = 2^(n-1) iterations doing n work each, giving a total of Ω(n · 2^n) work. Thus the work done is Θ(n · 2^n).
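A small sketch of this analysis (assumption: the abstract subroutine A is simulated by simply charging one unit of work per bit of x, and the subtraction likewise):

# Simulate the total work of the while-loop for an n-bit starting value x.
def loop_cost(n_bits):
    x = (1 << n_bits) - 1         # largest n-bit number
    work = 0
    while x > 1:
        work += x.bit_length()    # call A(x): O(number of bits of x)
        work += x.bit_length()    # x = x - 1: O(number of bits of x)
        x -= 1
    return work

for n in range(4, 12):
    print(n, loop_cost(n))        # grows roughly like n * 2^n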
Hope this helps!
Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i=0, i<n, i++)
        for (j=i+1, j<n, j++)
            if (A[i]+A[j]=k)
                return true
    return false
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times. Thus, you should not use (outer iterations)*(inner iterations) to compute the complexity.
The important thing to observe is that the inner loop's runtime changes with each iteration of the outer loop.
It is correct that the first time the inner loop runs, it does n-1 iterations. But after that, the number of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss' trick (second formula) to sum this series to get n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound cannot get any tighter than O(n²). As you can see, there is no need for guessing.
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step. This implies the algorithm's best case is O(1).
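A quick sketch that counts the comparisons on a worst-case input (no pair sums to k) and checks the count against n(n-1)/2:

# Runnable version of the pair-sum pseudocode, with a comparison counter.
def array_sum_pairs(A, k):
    comparisons = 0
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if A[i] + A[j] == k:
                return True, comparisons
    return False, comparisons

for n in (5, 10, 20):
    found, comps = array_sum_pairs(list(range(n)), -1)  # -1 is never a pair sum here
    print(n, comps, n * (n - 1) // 2)                   # the two counts match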
Yes. In the worst case, your algorithm is O(n²).
Your algorithm is O(n²) because every instance of the input needs at most O(n²) time.
Your algorithm is Ω(1) because there exists at least one instance of the input that needs only Ω(1) time.
The following appears in Chapter 3, Growth of Functions, of Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein:
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements is equal to k, this algorithm would take only one addition and one comparison before returning true.
Therefore, this input costs only constant time and makes the running time of this algorithm Ω(1).
No matter what the input is, this algorithm would take at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n²).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n²).
We could also say that the worst-case running time of this algorithm is Θ(n²).
You are right but let me explain a bit:
This is because the first for loop runs n times, and the for loop within this loop also runs n times.
Actually, the second loop will run (n-i-1) times, but in terms of complexity it is taken as n. (Updated based on phant0m's comment.)
So, in the worst-case scenario, it'll run n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it runs 1 * 1 * 1 * 1 times, which is O(1), i.e., constant.
What would be the big-O time of this algorithm?
Input: Array A storing n >= 1 integers
Output: The sum of the elements at even cells in A
s = A[0]
for i = 2 to n-1 by increments of 2
{
    s = s + A[i]
}
return s
I think the cost function for this algorithm is F(n) = n * ceiling(n/2), but how do you convert that to big-O?
The time complexity of that algorithm is O(n), since the amount of work it does grows linearly with the size of the input. Another way to look at it is that it loops over the input once (ignore the fact that it only looks at half of the values; that doesn't matter for big-O complexity).
The number of operations is not proportional to n*ceiling(n/2), but rather to n/2, which is O(n). Because of the meaning of big-O (which allows an arbitrary constant coefficient), O(n) and O(n/2) are equivalent, so it is always written as O(n).
This is an O(n) algorithm since you look at ~n/2 elements.
Your algorithm will do N/2 iterations given that there are N elements in the array. Each iteration requires constant time to complete. This gives us O(N) complexity, and here's why.
Generally, the running time of an algorithm is a function f(x) of the size of the data. Saying that f(x) is O(g(x)) means that there exists some constant c such that f(x) <= c·g(x) for all sufficiently large values of x. It is easy to see how this applies in our case: if we assume that each iteration takes a unit of time, then obviously f(N) <= N/2 <= c·N with c = 1.
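Here is a runnable sketch of the pseudocode above, with an iteration counter added to show that the work is about n/2, i.e., linear in n:

# Sum of the elements at even indices, counting loop iterations.
def sum_even_cells(A):
    s = A[0]
    iterations = 0
    for i in range(2, len(A), 2):   # i = 2, 4, 6, ... as in the pseudocode
        s += A[i]
        iterations += 1
    return s, iterations

for n in (8, 16, 32):
    total, iters = sum_even_cells(list(range(n)))
    print(n, iters)                 # iterations roughly n/2 -> O(n)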
A formal way to obtain the exact number of operations and your algorithm's order of growth: