I updated a question I asked before with this, but since the original question was answered, I'm guessing I should ask it separately in a new question.
Take, for example, the simple multiplication algorithm. I see in numerous places the claim that this is a Log^2(N) operation. The explanation given is that it consists of Log(N) additions of Log(N)-bit numbers.
The problem I have with this is that, although true, it ignores the fact that each of those Log(N) numbers is the result of a bit shift, and by the end we will have bit-shifted at least Log(N) times. Since a bit shift by 1 is a Log(N) operation, the bit shifts considered alone give us Log^2(N) operations.
It therefore makes no sense to me when I see it further claimed that, in practice, multiplication doesn't use Log^2(N) operations because various methods can reduce the number of required additions. Since the bit shifting alone gives us Log^2(N), I'm left confused as to how that claim can be true.
In fact, any shift-and-add method would seem to have this bit cost irrespective of how many additions there are.
Even if we use a perfectly minimal bit encoding, any M-bit by N-bit multiplication results in an approximately (M+N)-bit number, so M+N bits will have to have been shifted at least N times just to output/store/combine terms, meaning a minimum of N^2 bit operations.
This seems to contradict the claimed operation counts for Toom-Cook etc., so can someone please point out where my reasoning is flawed?
I think the way around this issue is the fact that you can do the operation
a + (b << k)
without having to perform any shifts at all. If you imagine what the addition would look like, it would look something like this:
  b(n) b(n-1)  ...  b(1)  b(0)    0    ...   0     0      (b shifted left by k: k zeros appended)
        a(n)   ...  a(k+1) a(k) a(k-1) ...  a(1)  a(0)    (a, unshifted)
In other words, the last k digits of the number will just be the last k digits of the number a, the middle digits will consist of the sum of a subset of b's digits and a subset of a's digits, and the leading digits can be formed by doing a ripple propagation of any carries up through the remaining digits of b. In other words, the total runtime will be proportional to the number of digits in a and b, plus the number of places to do the shift.
The real trick here is realizing that you can shift by k places without doing k individual shifts by one place over. Rather than shuffling everything down k times, you can just figure out where the bits are going to end up and write them there directly. In other words, the cost of a shift by k bits is not k times the cost of a shift by 1 bit. It's O(N + k), where N is the number of bits in the number.
Consequently, if you can implement multiplication in terms of some number of "add two numbers with a shift" operations, you will not necessarily have to do O((log n)^2) bit operations. Each addition does O(log n + k) total bit operations, so if k is small (say, O(log n)) and you only do a small number of additions, then you can do better than O((log n)^2) bit operations.
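To make the O(N + k) claim concrete, here's a minimal Python sketch (bit lists are an illustration only, not how a real bignum library stores numbers) that computes a + (b << k) in a single pass by placing each bit of b directly at its final position, with no repeated single-place shifts:

```python
def add_with_shift(a_bits, b_bits, k):
    """Return the bit list of a + (b << k) in O(len(a) + len(b) + k) time.

    Bit lists are little-endian: index 0 is the least significant bit.
    """
    n = max(len(a_bits), len(b_bits) + k) + 1  # room for a final carry
    result = []
    carry = 0
    for i in range(n):
        a_bit = a_bits[i] if i < len(a_bits) else 0
        # b's bit for position i sits at index i - k: looked up directly,
        # no per-step shifting of the whole number is ever performed.
        b_bit = b_bits[i - k] if k <= i < k + len(b_bits) else 0
        total = a_bit + b_bit + carry
        result.append(total & 1)
        carry = total >> 1
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # trim leading zeros
    return result

def to_int(bits):
    """Helper: convert a little-endian bit list back to an integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# 5 + (3 << 2) = 5 + 12 = 17
print(to_int(add_with_shift([1, 0, 1], [1, 1], 2)))  # -> 17
```

The single loop touches each output position once, so the cost is proportional to the number of bits plus k, exactly as described above.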
Hope this helps!
I'm learning a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (non-recursion method), which is like this:
Operation                          Runtime
create an array F[0..n]            O(n)
F[0] <-- 0                         O(1)
F[1] <-- 1                         O(1)
for i from 2 to n:                 loop O(n) times
    F[i] <-- F[i-1] + F[i-2]       O(n)   <-- I don't understand this line, isn't it O(1)?
return F[n]                        O(1)

Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
I understand every part except the line F[i] <-- F[i-1] + F[i-2] being O(n). Isn't it O(1), since it's just a simple addition? Is it the same as F[i] <-- 1 + 1?
The explanation they give is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place.
And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".
I'm still a bit confused. Does it mean that a large number affects time complexity too? For example, is a = n+1 O(1) while a = n^50 + n^50 isn't O(1) anymore?
Video link for anyone who needs more information (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However, if n is a very large positive integer, we can't assume that. A very large integer needs O(log(n)) bits to represent; whether we store it as bits, bytes, etc., we will need an array of O(log(n)) things to store it. (We'd need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which must take at least that much time to allocate. Luckily, the grade-school method of long addition, where you add digits and keep track of carries, can be adapted for big-integer libraries and runs in O(log(n+m)) time.
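As a sketch (digit lists here are an illustration, not any particular library's internal representation), grade-school long addition looks like this, and the work is one pass over the digits:

```python
def long_add(a_digits, b_digits, base=10):
    """Add two numbers given as little-endian digit lists.

    One pass over the digits with a running carry, so the cost is
    proportional to the number of digits, i.e. O(log(n + m)).
    """
    result = []
    carry = 0
    for i in range(max(len(a_digits), len(b_digits))):
        d = carry
        if i < len(a_digits):
            d += a_digits[i]
        if i < len(b_digits):
            d += b_digits[i]
        result.append(d % base)  # digit at this position
        carry = d // base        # carry into the next position
    if carry:
        result.append(carry)
    return result

# 999 + 1 = 1000, i.e. digits [0, 0, 0, 1] least-significant first
print(long_add([9, 9, 9], [1]))  # -> [0, 0, 0, 1]
```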
So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50) that means that operations with 50^n are at least O(n). (Getting 50^n might take longer...) And it means that calculating n+1 takes time O(log(n)).
Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2 so log(F(n)) = O(n).
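A quick empirical check of that claim, using Python's built-in big integers: the digit count of F(n) grows linearly in n, which is exactly why the addition inside the loop costs O(n) rather than O(1):

```python
def fib(n):
    """Iterative Fibonacci; each addition touches every digit of the operands."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (100, 1000, 10000):
    digits = len(str(fib(n)))
    # the ratio digits/n approaches log10(phi) ~ 0.209, i.e. about n/5 digits
    print(n, digits, digits / n)
```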
For an assignment, I was supposed to write an algorithm which uses at most n + O(log(n)) extra bits of memory (the details of what the algorithm was actually supposed to do aren't important here), where n is the size of an input array.
I submitted an algorithm that passes all of the test cases; however, my grader says that I am using more than n + O(log(n)) bits of memory. Their reason is that, as a part of my algorithm, I am adding the quantity (n * i) to every element in the array (where i = 1, 2, 3, ... n. i is an index variable in a loop). They are saying that for very large values of n, I will be using more memory to store the large numbers.
This leads me to the following question: is it true that my space complexity exceeds n + O(log(n)) bits by adding n * i to every number? My experience with algorithm analysis is quite limited, but I have personally never seen storing large numbers as a justification for increase in space complexity. But let's say for argument that it does increase complexity -- would I be using more than n + O(log(n)) bits?
I would like to formulate an argument for a challenge, but I just want to make sure that I am in the right before doing so.
Let b1 be the number of bits of each number before adding i*n to it, and b2 the number of bits after.
Inequality (1):
b2-b1 <= log(n*n) = 2log(n)
Proof (1):
Lemma 1: binary is an optimal encoding scheme for integers in memory.
Lemma 2: the bit length of the sum of two integers is at most the sum of the two operands' bit lengths.
From Inequality (1),
In the extreme case, if b1 -> 0, then b2 <= 2log(n), so the increase in space is 2log(n) bits per number, or 2n*log(n) bits over all n numbers. The total space would be C + O(n*log(n)).
Disclaimer: that's not a proof for your problem, because I don't know exactly how many bits you used for each number at the beginning.
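For what it's worth, here is a quick empirical check (taking the assumed extreme case of a 1-bit starting value, since we don't know b1) that adding i*n grows a number's bit length by at most about 2*log2(n) bits:

```python
import math

n = 1000
worst_growth = 0
for i in range(1, n + 1):
    x = 1  # extreme case: the stored number is as small as possible
    # growth in bit length caused by adding i*n (which is at most n*n)
    growth = (x + i * n).bit_length() - x.bit_length()
    worst_growth = max(worst_growth, growth)

# the worst observed growth stays within ~2*log2(n) bits
print(worst_growth, 2 * math.log2(n))
```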
while N-bit integer a > 1:
    a = a / 2
I was thinking it is log(n), because each time through the while loop you divide a by two, but my friend thinks it's 2*log(n).
Clearly your algorithm is in big-Theta(log(a)), where a is your number.
But as far as I understand your problem, you want to know the asymptotic runtime as a function of the number of bits of your number.
That's really difficult to say and depends on your number:
Let's say you have an n-bit integer whose most significant bit is 1. You have to divide it about n times before the loop ends.
Now let's look at an integer where only the least significant bit is 1 (so it equals the number 1 in the decimal system). There the loop ends after at most one division.
So I would say it'll take n/2 iterations on average, which makes it big-Theta(n), where n is the number of bits of your number. The worst case is also in big-Theta(n) and the best case is in big-Theta(1).
NOTE: Dividing a number by two in binary system has a similar effect as dividing a number by ten in decimal system
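A quick check of this analysis in Python: the number of halvings equals the position of the highest set bit (bit_length - 1), so it is Theta(n) when the top of the n bits is set and O(1) when only low bits are set:

```python
def halvings(a):
    """Count how many times a can be halved (integer division) until a <= 1."""
    count = 0
    while a > 1:
        a //= 2
        count += 1
    return count

for a in (1, 2, 7, 8, 1023, 1024):
    # the count always equals bit_length - 1, i.e. floor(log2(a))
    print(a, halvings(a), a.bit_length() - 1)
```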
Dividing an integer by two can be efficiently implemented by taking the number in binary notation and shifting the bits. In the worst case, all the bits are set and you have to shift (n-1) bits for the first division, (n-2) bits for the second, etc., until you shift 1 bit on the last iteration and find the number has become equal to 1, at which point you stop. This means your algorithm must shift 1+2+...+(n-1) = n(n-1)/2 bits, making your algorithm O(n^2) in the number of bits of input.
A more efficient algorithm that will leave a with the same value is a = (a == 0 ? 0 : 1). This generates the same answer in linear time (equality checking is linear in the number of bits) and it works because your code will only leave a = 0 if a is originally zero; in all other cases, the highest-order bit ends up in the unit's place.
Here's the problem I am looking for an answer for:
An array A[1...n] contains all the integers from 0 to n except one. It would be easy to determine the missing integer in O(n) time by using an auxiliary array B[0...n] to record which numbers appear in A. In this problem, however, we cannot access an entire integer in A with a single operation. The elements of A are represented in binary, and the only operation we can use to access them is "fetch the jth bit of A[i]," which takes constant time.

Show that if we use only this operation, we can still determine the missing integer in O(n) time.
I have this approach in mind:
If I didn't have the above fetching restriction, I would take all the numbers and XOR them together, then XOR the result with all numbers from 1..n. The result of this would be my answer.
In this problem I could similarly XOR the numbers together bit by bit (each number is about log(n+1) bits long) and then XOR the result with the numbers 1...n, but the complexity comes out to O(n log n) in my opinion.
How to achieve the O(n) complexity?
Thanks
You can use a variation of radix sort:
sort numbers according to MSb (Most Significant bit)
You get two lists of sizes n/2 and n/2 - 1. You can 'drop' the list with n/2 elements: the missing number is not there.
Repeat for the second MSb and so on.
At the end, the 'path' you have chosen (the bit with the smaller list for each bit) will represent the missing number.
Complexity is O(n + n/2 + ... + 2 + 1), and since n + n/2 + .. + 1 < 2n - this is O(n)
This answer assumes for simplicity that n = 2^k for some integer k (this restriction can later be dropped by handling the MSb specially).
You have n integers in the range [0..n]. You can inspect every number's most significant bit and divide the numbers into two groups, C (with MSB 0) and D (with MSB 1). Since you know the range is [0..n], you can calculate how many numbers in this range have MSB 0 (call it S1) and how many have MSB 1 (call it S2). If the size of C is not equal to S1, then you know the missing number has MSB 0; otherwise, you know the missing number has MSB 1. You can then solve the problem recursively on that group. Since each recursive call takes time linear in its input, and each call after the first halves the problem size, the total running time is linear.
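Here is a sketch of that counting approach in Python (using (x >> j) & 1 to stand in for the "fetch the jth bit of A[i]" operation; the function and variable names are mine, not from the problem):

```python
def find_missing(A, n):
    """A contains every integer in 0..n except one; find it using only
    single-bit reads of the form (A[i] >> bit) & 1.

    Each round halves the candidate set, so the total work is
    O(n + n/2 + n/4 + ...) = O(n).
    """
    candidates = list(range(len(A)))  # indices of array slots still in play
    values = list(range(n + 1))       # the values those slots should cover
    bit = 0
    while len(values) > 1:
        zero_idx = [i for i in candidates if (A[i] >> bit) & 1 == 0]
        one_idx = [i for i in candidates if (A[i] >> bit) & 1 == 1]
        zero_vals = [v for v in values if (v >> bit) & 1 == 0]
        one_vals = [v for v in values if (v >> bit) & 1 == 1]
        # the side that is short one element contains the missing number
        if len(zero_idx) < len(zero_vals):
            candidates, values = zero_idx, zero_vals
        else:
            candidates, values = one_idx, one_vals
        bit += 1
    return values[0]

print(find_missing([3, 0, 1, 4, 6, 2, 7], 7))  # -> 5
```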
I have a question about the running time of an algorithm that computes the product of n numbers. I think the best solution is divide and conquer, which recursively halves the n elements and multiplies pairs. The confusing part is the number of simple operations. With divide and conquer the complexity should be O(log n), so if we have 8 numbers to multiply we should end up with 3 basic steps. E.g., with 8 numbers (a1 a2 a3 a4 a5 a6 a7 a8), we can halve until we reach pairs and multiply: (a1*a2=b1) (a3*a4=b2) (a5*a6=b3) (a7*a8=b4), then (b1*b2=c1) (b3*b4=c2), then (c1*c2 = final result). However, this takes 7 simple multiplications. Can someone clarify this for me?
Divide and conquer is for cases when you can divide your original set into multiple subsets which, after you've identified and created them, do not interact anymore (or interact only in a way that's negligibly cheap compared to the operation on each subset). In your case, you're violating the "subsets do not interact after identifying them" rule.
Also, O(x) does not mean the number of operations is less than x. It just means that for any concrete data set of size x, there is a finite value d such that the number of operations needed is smaller than d*x. (My native language is German; I hope I didn't change the meaning when translating.) So the fact that you need 8 operations on 8 data items does not, per se, mean the complexity is larger than O(log n).
If I understand your objective correctly, then the complexity should be O(n), as you can just multiply the n values in sequence. For n values, you need n-1 multiplications. No need for any divide and conquer.
Whichever way you choose to perform multiplication of n numbers, you will need to multiply all of them, making n-1 multiplications. Thus, the time complexity of this multiplication will always be O(n).
Here is how I can explain that the number of steps required to multiply n numbers by your approach is close to n, not log(n). To multiply n numbers you first make n/2 multiplications: the first number with the second, the third with the fourth, and so on, up to the (n-1)-th with the n-th. After that you have n/2 numbers, and to multiply them all you need n/4 multiplications: the first result with the second, the third with the fourth, and so on. After that you have n/4 numbers, and you'll make n/8 multiplications. The process ends when only two numbers are left; their multiplication gives you the result.
Now, let's count the total number of multiplications. On the first stage you made n/2 multiplications, on the second n/4, and so on, down to a single final multiplication. That gives the sum n/2 + n/4 + n/8 + ... + 1, which equals n - 1.
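A quick check in Python that the pairwise scheme really performs n - 1 multiplications (the counter argument is just instrumentation for this demo):

```python
def product_pairwise(nums, counter):
    """Multiply nums by recursive halving, counting multiplications."""
    if len(nums) == 1:
        return nums[0]
    mid = len(nums) // 2
    left = product_pairwise(nums[:mid], counter)
    right = product_pairwise(nums[mid:], counter)
    counter[0] += 1  # one multiplication per internal node of the tree
    return left * right

counter = [0]
result = product_pairwise([1, 2, 3, 4, 5, 6, 7, 8], counter)
print(result, counter[0])  # -> 40320 7
```

Any binary tree over n leaves has exactly n - 1 internal nodes, so no matter how you pair things up, the multiplication count is the same as for the sequential loop.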