while N-bit integer a > 1,
a = a / 2
I was thinking it is log(n) because each time you go through the while loop you are dividing a by two, but my friend thinks it's 2 log(n).
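For concreteness, here is the same loop as a minimal, runnable C sketch (the iteration counter is added purely for illustration and is not part of the original pseudocode):

#include <stdio.h>

int main(void) {
    unsigned long long a = 1000000;  /* any N-bit integer */
    int iterations = 0;

    while (a > 1) {
        a = a / 2;      /* one halving step */
        iterations++;   /* count how many times the loop body runs */
    }

    /* For a starting value a, this prints roughly floor(log2(a)) iterations. */
    printf("iterations = %d\n", iterations);
    return 0;
}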
Clearly your algorithm is in big-Theta(log(a)), where a is your number.
But as far as I understand your problem, you want to know the asymptotic runtime in terms of the number of bits of your number.
That's really difficult to say and depends on your number:
Let's say you have an n-bit integer whose most significant bit is 1. You have to divide it about n times (n-1, to be exact) before it reaches 1 and the loop stops.
Now look at an integer where only the least significant bit is 1 (so it equals the number 1 in the decimal system). There you are already done: the loop condition a > 1 fails immediately, so no division is needed.
So I would say it takes roughly n/2 divisions on average, which makes it big-Theta(n), where n is the number of bits of your number. The worst case is also in big-Theta(n) and the best case is in big-Theta(1).
NOTE: Dividing a number by two in the binary system has a similar effect to dividing a number by ten in the decimal system.
Dividing an integer by two can be efficiently implemented by taking the number in binary notation and shifting the bits. In the worst case all the bits are set, and you have to shift (n-1) bits for the first division, (n-2) bits for the second, and so on, until you shift 1 bit on the last iteration and find the number has become equal to 1, at which point you stop. This means your algorithm must shift 1 + 2 + ... + (n-1) = n(n-1)/2 bits, making your algorithm O(n^2) in the number of bits of input.
A more efficient algorithm that leaves a with the same final value is a = (a == 0 ? 0 : 1). This produces the same answer in linear time (equality checking is linear in the number of bits), and it works because your loop only leaves a = 0 if a is originally zero; in all other cases the highest-order bit ends up in the units place.
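As a quick sanity check of that claim, here is a small C sketch comparing the loop with the shortcut over a range of non-negative inputs:

#include <assert.h>

/* The original loop: repeatedly halve until a is 0 or 1. */
static unsigned halve_until_done(unsigned a) {
    while (a > 1)
        a /= 2;
    return a;
}

int main(void) {
    /* For every small input, the loop and the shortcut agree:
     * 0 stays 0, everything else collapses to 1. */
    for (unsigned a = 0; a < 100000; a++)
        assert(halve_until_done(a) == (a == 0 ? 0 : 1));
    return 0;
}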
I'm taking a course about big O notation on Coursera. I watched a video about the big O of a Fibonacci algorithm (non-recursive method), which goes like this:
Operation                        Runtime
create an array F[0..n]          O(n)
F[0] <-- 0                       O(1)
F[1] <-- 1                       O(1)
for i from 2 to n:               loop O(n) times
    F[i] <-- F[i-1] + F[i-2]     O(n)  => I don't understand this line, isn't it O(1)?
return F[n]                      O(1)

Total: O(n) + O(1) + O(1) + O(n)*O(n) + O(1) = O(n^2)
I understand every part except F[i] <-- F[i-1] + F[i-2] being O(n). I don't understand this: isn't it O(1), since it's just a simple addition? Isn't it the same as F[i] <-- 1+1?
The explanation they give me is: "But the addition is a bit worse. And normally additions are constant time. But these are large numbers. Remember, the nth Fibonacci number has about n over 5 digits to it, they're very big, and they often won't fit in the machine word."
"Now if you think about what happens if you add two very big numbers together, how long does that take? Well, you sort of add the tens digit and you carry, and you add the hundreds digit and you carry, and add the thousands digit, you carry and so on and so forth. And you sort of have to do work for each digits place.
And so the amount of work that you do should be proportional to the number of digits. And in this case, the number of digits is proportional to n, so this should take O(n) time to run that line of code".
I'm still a bit confused. Does it mean that large numbers affect time complexity too? For example, is a = n+1 O(1) while a = n^50 + n^50 isn't O(1) anymore?
Video link for anyone who needs more information (4:56 to 6:26)
Big-O is just a notation for keeping track of orders of magnitude. But when we apply that in algorithms, we have to remember "orders of magnitude of WHAT"? In this case it is "time spent".
CPUs are set up to execute basic arithmetic on basic arithmetic types in constant time. For most purposes, we can assume we are dealing with those basic types.
However if n is a very large positive integer, we can't assume that. A very large integer needs O(log(n)) bits to represent, so whether we store it as bits, bytes, etc., we need an array of O(log(n)) pieces to store it. (We would need fewer bytes than bits, but that is just a constant factor.) And when we do a calculation, we have to think about what we will actually do with that array.
Now suppose that we're trying to calculate n+m. We're going to need to generate a result of size O(log(n+m)), which must take at least that much time to allocate. Luckily, the grade-school method of long addition, where you add digits and keep track of carries, can be adapted for big-integer libraries and takes O(log(n+m)) time.
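As a rough illustration of that last point, here is a sketch of grade-school addition on numbers stored as arrays of base-10 digits (least significant digit first); the work is visibly proportional to the number of digits, i.e. to the log of the numbers' magnitude:

#include <stddef.h>

/* Adds two big numbers given as arrays of base-10 digits, least
 * significant digit first. Writes at most max(la, lb) + 1 digits
 * into out and returns the number of digits written.
 * The loop runs O(max(la, lb)) times: one step per digit column. */
size_t big_add(const int *a, size_t la, const int *b, size_t lb, int *out) {
    size_t n = (la > lb) ? la : lb;
    int carry = 0;
    for (size_t i = 0; i < n; i++) {
        int da = (i < la) ? a[i] : 0;
        int db = (i < lb) ? b[i] : 0;
        int s = da + db + carry;    /* add one digit column plus the carry */
        out[i] = s % 10;
        carry = s / 10;
    }
    if (carry) {                    /* one extra digit if the sum overflows */
        out[n] = carry;
        return n + 1;
    }
    return n;
}

For example, adding 99 and 5 as a = {9, 9} and b = {5} produces out = {4, 0, 1}, i.e. 104.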
So when you're looking at addition, the log of the size of the answer is what matters. Since log(50^n) = n * log(50), operations on numbers as large as 50^n take at least O(n) time. (Computing 50^n in the first place might take longer...) And it means that calculating n+1 takes O(log(n)) time.
Now in the case of the Fibonacci sequence, F(n) is roughly φ^n where φ = (1 + sqrt(5))/2 so log(F(n)) = O(n).
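For what it's worth, here is the algorithm tabulated in the question written out in C with fixed-width machine words. It is only a sketch that deliberately sidesteps the big-number issue: F(94) already overflows an unsigned 64-bit integer, which is a concrete way of seeing that Fibonacci numbers quickly stop fitting in a machine word.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* The table's algorithm with 64-bit machine words. Each addition here
 * really is O(1), but only because we cap n: F(94) no longer fits in
 * a uint64_t, which is exactly the "won't fit in the machine word"
 * situation the lecture describes. */
uint64_t fib(int n) {
    if (n < 2) return (uint64_t)n;               /* F(0) = 0, F(1) = 1 */
    if (n > 93) {
        fprintf(stderr, "F(%d) overflows a 64-bit word\n", n);
        exit(1);
    }
    uint64_t *F = malloc((n + 1) * sizeof *F);   /* create an array F[0..n] */
    F[0] = 0;
    F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];              /* O(1) only for machine words */
    uint64_t result = F[n];
    free(F);
    return result;
}

int main(void) {
    printf("F(93) = %llu\n", (unsigned long long)fib(93));
    return 0;
}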
There are 2 ways to compute the number of set bits in an integer, n, that I will illustrate below. There's also an O(1) way that's platform dependent, but that's not relevant for this question.
The 2 ways are:
int count = 0;
while (n)
{
    count++;
    n &= n - 1;     /* clears the lowest set bit, so this loops once per set bit */
}

int count = 0;
while (n)
{
    if (n & 1) count++;   /* test the lowest bit */
    n >>= 1;              /* then shift it out; loops once per bit position */
}
For any 32-bit integer, both loops execute at most 32 times, and each iteration does O(1) work. So this sounds like the algorithm runs in O(1) time over the domain of 32-bit integers. But we can also phrase it as: the number of iterations is about log(n), which is bounded by 32. This phrasing suggests the algorithm depends on the input and runs in O(log n) time, yet the upper bound is still a constant.
Is it more correct to say that these algorithms run in O(1) or O(log n) time?
Neither of these is O(1). You are putting a restriction on the size of the input when you say the upper bound is fixed; there are numbers much bigger than 32 bits.
The time taken for the algorithm to compute the output is directly proportional to the size of the input, i.e. the number of bits of n.
An example of an O(1) algorithm is computing the sum of the first n positive integers via the closed-form formula n*(n+1)/2.
Here, even as n gets bigger, the number of computations required stays constant, and thus it is O(1).
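A sketch of the contrast being drawn here, assuming n and the sum fit in a machine word (which is the usual convention when we call the formula O(1)):

#include <stdint.h>

/* O(n) word operations: one addition per iteration. */
uint64_t sum_loop(uint64_t n) {
    uint64_t s = 0;
    for (uint64_t i = 1; i <= n; i++)
        s += i;
    return s;
}

/* O(1) word operations: the closed-form formula n*(n+1)/2. */
uint64_t sum_formula(uint64_t n) {
    return n * (n + 1) / 2;
}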
These loops each run in time O(log n). You can make an even tighter statement for the first one and say that it runs in time O(b), where b is the number of bits set in the number. It happens to be the case that b = O(log n) because the number n requires O(log n) bits to write out.
It's always a little bit dicey to talk about runtimes of algorithms that process numbers one bit at a time because you have two things in tension with one another. One is that, in a mathematical sense, the number of bits in a number grows logarithmically with the magnitude of the number. That is, the number of bits in the number n is O(log n). On the other hand, on actual machines there's a fixed limit to the number of bits in a number, which makes it seem like the runtime "ought" to be a constant, since you can't have unboundedly large values of n.
One mental model I find helpful here is to think about what it would really mean for n to grow unboundedly. If you have a 45-bit number, I'm assuming you're either
1. working on a 64-bit system, or
2. working on a 32-bit system and using multiple 32-bit integers to assemble your 45-bit number.
If we make assumption (1), then we're implicitly saying that as our numbers get bigger and bigger we're moving to larger and larger word size machines. (This can be formalized as the transdichotomous machine model). If we make assumption (2), then we're assuming that our numbers don't necessarily fit into a machine word, in which case the number of bits isn't a constant.
One last way to reason about this is to introduce another parameter w indicating the machine word size, and then to do the analysis in terms of w. For example, we can say that the runtime of both algorithms is O(w). That eliminates the dependency on n, though it may overestimate the work actually done.
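One way to make that explicit in code is to let the word type itself bound the loop; a sketch, not meant as a replacement for the loops in the question:

#include <limits.h>

/* Counts set bits by scanning exactly w bit positions, where w is the
 * width of the word type, so the runtime is O(w) no matter what value
 * n holds. */
int count_bits_w(unsigned int n) {
    const int w = (int)(sizeof(unsigned int) * CHAR_BIT);
    int count = 0;
    for (int j = 0; j < w; j++)
        count += (n >> j) & 1;
    return count;
}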
Hope this helps!
For an assignment, I was supposed to write an algorithm that uses at most n + O(log(n)) extra bits of memory (the details of what the algorithm was actually supposed to do aren't important here), where n is the size of an input array.
I submitted an algorithm that passes all of the test cases; however, my grader says that I am using more than n + O(log(n)) bits of memory. Their reason is that, as part of my algorithm, I am adding the quantity (n * i) to every element in the array (where i = 1, 2, 3, ..., n is an index variable in a loop). They are saying that for very large values of n, I will be using more memory to store the resulting large numbers.
This leads me to the following question: is it true that my space complexity exceeds n + O(log(n)) bits because I add n * i to every number? My experience with algorithm analysis is quite limited, but I have personally never seen storing large numbers used as a justification for an increase in space complexity. But let's say, for argument's sake, that it does increase the complexity -- would I then be using more than n + O(log(n)) bits?
I would like to formulate an argument for a challenge, but I just want to make sure that I am in the right before doing so.
Let b1 be the number of bits each number uses before adding (i*n) to it, and b2 the number of bits after.
Inequality (1):
b2 - b1 <= log(n*n) = 2*log(n)
Proof (1):
Lemma 1: binary is the most compact way to encode integers in memory.
Lemma 2: the sum of two integers has a bit-length no greater than the sum of the two operands' bit-lengths.
From Inequality (1),
in the extreme case where b1 -> 0, we get b2 = 2*log(n), so each of the n numbers grows by up to 2*log(n) bits and the total increase in space is about 2*n*log(n) bits. The total space would then be C + O(n*log(n)).
Disclaimer: this is not a full proof for your problem, because I don't know exactly how many bits you used for each number at the beginning.
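If it helps to make the grader's point concrete, here is a small sketch (with made-up values, not your actual assignment code) that measures how many bits an element needs before and after adding i*n:

#include <stdio.h>

/* Number of bits needed to represent x (counting 0 as 1 bit). */
static int bit_length(unsigned long long x) {
    int b = 1;
    while (x > 1) {
        x >>= 1;
        b++;
    }
    return b;
}

int main(void) {
    unsigned long long n = 1000000;      /* hypothetical array size */
    unsigned long long a_i = 7;          /* hypothetical small array element */
    unsigned long long i = n;            /* worst case: the last loop index */

    /* Before: a few bits. After: roughly 2*log2(n) bits, since i*n <= n^2. */
    printf("before: %d bits, after: %d bits\n",
           bit_length(a_i), bit_length(a_i + i * n));
    return 0;
}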
Here's the problem I am looking for an answer for:
An array A[1...n] contains all the integers from 0 to n except one. It would be easy to determine the missing
integer in O(n) time by using an auxiliary array B[0...n] to record which numbers appear in A. In this
problem, however, we cannot access an entire integer in A with a single operation. The elements of A are
represented in binary, and the only operation we can use to access them is "fetch the j-th bit of A[i]," which
takes constant time.
Show that if we use only this operation, we can still determine the missing integer in O(n) time.
I have this approach in mind:
If I didn't have the above fetching restriction, I would have taken all the numbers and XORed them together, then XORed the result with all the numbers from 1..n. The result of that would be my answer.
Similarly, in this problem I can repeatedly XOR the bits of the different numbers (each about log(n+1) bits long) with each other for all elements in the array, and then XOR them with the elements 1...n, but the complexity of that comes out to O(n log n) in my opinion.
How to achieve the O(n) complexity?
Thanks
You can use a variation of radix sort:
sort numbers according to MSb (Most Significant bit)
You get two lists of sizes n/2, n/2-1. You can 'drop' the list with n/2 elements - the missing number is not there.
Repeat for the second MSb and so on.
At the end, the 'path' you have chosen (the bit with the smaller list for each bit) will represent the missing number.
Complexity is O(n + n/2 + ... + 2 + 1), and since n + n/2 + ... + 1 < 2n, this is O(n).
This answer assumes for simplicity that n = 2^k for some integer k (this restriction can later be dropped by special-handling the MSb).
You have n integers in the range [0..n]. You can inspect every number's most significant bit and divide the numbers into two groups, C (with MSB 0) and D (with MSB 1). Since you know the range is [0..n], you can calculate how many numbers in this range have MSB 0, call it S1, and how many have MSB 1, call it S2. If the size of C is not equal to S1, then you know the missing number has MSB 0. Otherwise, you know the missing number has MSB 1. Then you can recursively solve the problem on the deficient group, looking at the next bit. Since each recursive call takes time linear in its input and each recursive call except the first roughly halves the problem size, the total running time is linear.
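Here is a sketch of this idea in C, working from the least significant bit upward for easier bookkeeping (the radix-sort answer above goes from the MSb down; either direction gives the O(n + n/2 + n/4 + ...) = O(n) bound). The fetch_bit helper merely simulates the problem's constant-time primitive, and the array is 0-indexed rather than A[1..n]:

#include <stdlib.h>

/* Simulation of the problem's primitive: "fetch the j-th bit of A[i]". */
static int fetch_bit(const int *A, int i, int j) {
    return (A[i] >> j) & 1;
}

/* How many values v in [0..n] have v mod 2^(j+1) == suffix. */
static int count_with_suffix(int n, int j, int suffix) {
    long block = 1L << (j + 1);
    if (suffix > n) return 0;
    return (int)((n - suffix) / block) + 1;
}

/* A[0..n-1] holds every value in [0..n] except one; find the missing one
 * using only fetch_bit. The candidate lists shrink geometrically, so the
 * total number of bit fetches is O(n + n/2 + n/4 + ...) = O(n). */
int find_missing(const int *A, int n) {
    int *cand = malloc(n * sizeof *cand);  /* indices matching the suffix so far */
    int *next = malloc(n * sizeof *next);
    int m = n, missing = 0;

    for (int i = 0; i < n; i++) cand[i] = i;

    for (int j = 0; (1L << j) <= n; j++) {
        int zeros = 0;
        for (int k = 0; k < m; k++)
            zeros += (fetch_bit(A, cand[k], j) == 0);

        /* Among values with the current suffix, this many should have bit j == 0;
         * the side that comes up one short contains the missing number. */
        int bit = (zeros < count_with_suffix(n, j, missing)) ? 0 : 1;
        missing |= bit << j;

        int m2 = 0;                        /* keep candidates matching the new bit */
        for (int k = 0; k < m; k++)
            if (fetch_bit(A, cand[k], j) == bit) next[m2++] = cand[k];
        int *tmp = cand; cand = next; next = tmp;
        m = m2;
    }

    free(cand);
    free(next);
    return missing;
}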
I updated a question I asked before with this, but as the original question was answered, I'm guessing I should ask it separately as a new question.
Take for example the simple multiplication algorithm. I see in numerous places the claim that this is a Log^2(N) operation. The explanation given is that it consists of Log(N) additions of a Log(N)-bit number.
The problem I have with this is that, although it is true, it ignores the fact that each of those Log(N) numbers is the result of a bit shift, and by the end we will have bit-shifted at least Log(N) times. As a bit shift by 1 is itself a Log(N) operation, the bit shifts considered alone give us Log^2(N) operations.
It therefore makes no sense to me when I see it further claimed that in practice multiplication does not in fact use Log^2(N) operations, because various methods can reduce the number of required additions. Since the bit shifting alone gives us Log^2(N), I'm left confused as to how that claim can be true.
In fact any shift-and-add method would seem to have this bit cost irrespective of how many additions there are.
Even if we use perfect minimal bit coding, any M-bit by N-bit multiplication results in an approximately (M+N)-bit number, so M+N bits will have to have been shifted at least N times just to output/store/combine terms, which seems to mean a minimum of N^2 bit operations.
This seems to contradict the claimed number of operations for Toom-Cook etc., so can someone please point out where my reasoning is flawed?
I think the way around this issue is the fact that you can do the operation
a + (b << k)
without having to perform any shifts at all. If you imagine what the addition would look like, it would look something like this:
b(n)  b(n-1)  ...  b(n-k)   b(n-k-1)  ...  b(0)    0     ...    0     0      <- b shifted left by k places
                    a(n)     a(n-1)   ...  a(k)  a(k-1)  ...  a(1)  a(0)     <- a
In other words, the last k digits of the result will just be the last k digits of a, the middle digits will consist of sums of some of b's digits with some of a's digits (plus carries), and the leading digits can be formed by rippling any carries up through the remaining digits of b. The total runtime is therefore proportional to the number of digits in a and b, plus the number of places to shift by.
The real trick here is realizing that you can shift by k places without doing k individual shifts by one place over. Rather than shuffling everything down k times, you can just figure out where the bits are going to end up and write them there directly. In other words, the cost of a shift by k bits is not k times the cost of a shift by 1 bit. It's O(N + k), where N is the number of bits in the number.
Consequently, if you can implement multiplication in terms of some number of "add two numbers with a shift" operations, you will not necessarily have to do O((log n)^2) bit operations. Each addition does O(log n + k) total bit operations, so if k is small (say, O(log n)) and you only do a small number of additions, then you can do better than O((log n)^2) bit operations.
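To make the "write the bits where they land" idea concrete, here is a sketch using little-endian arrays of bits; the shift by k costs nothing beyond reading b's bits at an offset, so one shifted add is O(N + k) bit operations rather than O(N * k):

#include <stddef.h>

/* Computes out = a + (b << k), where a, b and out are little-endian
 * arrays of bits (each entry 0 or 1). out must have room for
 * max(la, lb + k) + 1 bits. Returns the number of bits written.
 * There is no per-position shifting: b's bits are simply read at an
 * offset of k, so the whole operation is O(la + lb + k). */
size_t add_shifted(const int *a, size_t la,
                   const int *b, size_t lb, size_t k, int *out) {
    size_t n = (la > lb + k) ? la : lb + k;
    int carry = 0;
    for (size_t i = 0; i < n; i++) {
        int da = (i < la) ? a[i] : 0;
        int db = (i >= k && i - k < lb) ? b[i - k] : 0;  /* the "shift" is just an offset */
        int s = da + db + carry;
        out[i] = s & 1;
        carry = s >> 1;
    }
    out[n] = carry;
    return n + (carry ? 1 : 0);
}

For example, with a = {1} (the number 1), b = {1} (the number 1) and k = 2, the result is out = {1, 0, 1}, i.e. 1 + 4 = 5, and no bit was ever moved more than once.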
Hope this helps!