For the modified Baugh-Wooley multiplication algorithm, why is it !(A0*B5) instead of just (A0*B5)?
Same question for !(A1*B5), !(A2*B5), !(A3*B5), !(A4*B5), !(A5*B4), !(A5*B3), !(A5*B2), !(A5*B1) and !(A5*B0).
Also, why are there two extra '1's?
In signed 6-bit 2s complement notation, the place values of the bits are:
-32 16 8 4 2 1
Notice that the top bit has a negative value. As long as addition, subtraction, and multiplication are performed mod 64, however, that minus sign makes no difference to how those operations work, because 32 = -32 mod 64.
Your multiplication is not being performed mod 64, though, so that sign must be taken into account.
One way to think of your multiplication is that the 6-bit numbers are extended to 12 bits, and multiplication is then performed mod 4096. When extending a signed number, the top bit is replicated, so the -32 place becomes -2048 + 1024 + 512 + ... + 32, which together still equals -32. So extend the signed numbers and multiply. I'll do it with 3 bits, multiplying mod 64:
Given: Sign-extended:
A2 A1 A0 A2 A2 A2 A2 A1 A0
B2 B1 B0 B2 B2 B2 B2 B1 B0
Multiply:
A0B2 A0B2 A0B2 A0B2 A0B1 A0B0
A1B2 A1B2 A1B2 A1B1 A1B0
A2B2 A2B2 A2B1 A2B0
A2B2 A2B1 A2B0
A2B1 A2B0
A2B0
Since we replicated the same bits in multiple positions, you'll see the same bit products at multiple positions.
A0B2 appears 4 times, with total place value 60, i.e. 15<<2, and so on. Let's write the multipliers in:
A0B2*15 A0B1 A0B0
A1B2*7 A1B1 A1B0
A2B2*5 A2B1*7 A2B0*15
Again, because of modular arithmetic, the *15s and *7s are the same as *-1, and the *5 is the same as *1:
-A0B2 A0B1 A0B0
-A1B2 A1B1 A1B0
A2B2 -A2B1 -A2B0
That pattern is starting to look familiar. Now, of course -1 is not a bit value, but ~A0B2 = 1-A0B2, so we can translate -A0B2 into ~A0B2 and then subtract the extra 1 we added. If we do this for all the subtracted products:
~A0B2 A0B1 A0B0
~A1B2 A1B1 A1B0
A2B2 ~A2B1 ~A2B0
-2 -2
If we add up the place values of those -2s and expand them into the equivalent bits, we discover the source of the additional 1s in your diagram:
~A0B2 A0B1 A0B0
~A1B2 A1B1 A1B0
A2B2 ~A2B1 ~A2B0
1 1
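To double-check the final diagram, here is a small brute-force verification (a Python sketch of my own, not part of the original answer): it sums the final array, complemented bits and the two extra 1s included, for every signed 3-bit pair and compares against the true product mod 64.

```python
# Brute-force check of the 3-bit modified Baugh-Wooley array derived above.
def bits3(x):
    """Bits of x in 3-bit two's complement, LSB first."""
    return [(x >> i) & 1 for i in range(3)]

for a in range(-4, 4):
    for b in range(-4, 4):
        a0, a1, a2 = bits3(a)
        b0, b1, b2 = bits3(b)
        inv = lambda p: 1 - p  # ~ of a single partial-product bit
        total = ((a0 & b0)
                 + ((a0 & b1) + (a1 & b0)) * 2
                 + (inv(a0 & b2) + (a1 & b1) + inv(a2 & b0)) * 4
                 + (inv(a1 & b2) + inv(a2 & b1) + 1) * 8   # extra '1' at position 3
                 + (a2 & b2) * 16
                 + 32)                                      # extra '1' at position 5
        assert total % 64 == (a * b) % 64
print("all 64 signed products match mod 64")
```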
Why two extra '1's?
See the explanation in Matt Timmermans's answer above.
Note: '-2' in two's complement is 110, and this contributes the carries that produce the two extra '1's.
Why flip the values of some of the partial-product bits?
It is due to the sign bit in the MSB (A5 and B5).
Besides, please see below the countermeasure for the modified Baugh-Wooley algorithm in the case of A_WIDTH != B_WIDTH, worked out with the help of others.
I have written hardware Verilog code for this algorithm.
Hopefully this post helps some readers.
The short answer is that's because of how 2's-complement representation works: the top bit is effectively a sign bit, so a 1 there means the number is negative. In other words, you have to subtract
A5*(B4 B3 B2 B1 B0) << 5
and
B5*(A4 A3 A2 A1 A0) << 5
from the sum (note that the A5*B5 term is added with a plus sign, because its two minus signs cancel). And those two 1s are the result of substituting those two subtractions with additions of -X.
If you need more details, then you probably just need to re-read how 2's-complement work and then the whole math behind the Baugh-Wooley multiplication algorithm. It is not that complicated.
To add a and b, first add their rightmost bits. This gives
a0 + b0 = c0 ⋅ 2 + s0,
where s0 is the rightmost bit in the binary expansion of a + b and c0 is the carry, which is either
0 or 1.
Then add the next pair of bits and the carry: a1 + b1 + c0 = c1 ⋅ 2 + s1.
We just add the carry c0 in the next operation without multiplying it by 2, so why does the ⋅ 2 appear here? Or am I wrong?
thanks in advance
I will try to explain this with a simple example.
We are adding 3 (a) + 5 (b), or 11 + 101 in binary. Following the algorithm described above:
To add 11 (a) and 101 (b), first add their rightmost bits: 1 (a0) + 1 (b0). This gives 1 (a0) + 1 (b0) = 1 (c0) ⋅ 2 + 0 (s0).
Multiplying by 2 in binary is a bit shift: you are moving that 1 to the next binary place, so 1 ⋅ 2 + 0 = 10, which is the result of 1 + 1.
So for the next pair of bits: 1 (a1) + 0 (b1) + 1 (c0) = 1 (c1) ⋅ 2 + 0 (s1).
This may seem counterintuitive, but the c0 digit was produced at the first binary place (index 0); multiplying it by 2 is what moves it up so it can be added with the bits at the second binary place (index 1).
The addition a1 + b1 + c0 = c1 ⋅ 2 + s1 would not work without that multiplication by 2 on c0, for otherwise we would be adding 10 (a1) + 00 (b1) + 01 (c0), which is not the desired result for the second binary place.
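The procedure above can be sketched in a few lines of Python (the name add_bits is my own, not from the text); note that the carry is simply added at the next index, which is exactly what the ⋅ 2 accounts for:

```python
def add_bits(a_bits, b_bits):
    """Add two equal-length LSB-first bit lists, one place at a time."""
    carry = 0
    out = []
    for a_i, b_i in zip(a_bits, b_bits):
        total = a_i + b_i + carry  # e.g. a1 + b1 + c0
        carry = total // 2         # c_i: worth 2 here, worth 1 at the next place
        out.append(total % 2)      # s_i: the bit kept at this place
    out.append(carry)              # final carry-out
    return out

# 3 + 5: 3 = 011 and 5 = 101, written LSB first as [1, 1, 0] and [1, 0, 1]
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1], i.e. 1000 = 8
```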
This is an interview question, and the problem description is as follows:
There are n couples sitting in a row with 2n seats. Find the minimum number of swaps to make everyone sit next to his/her partner. For example, 0 and 1 are a couple, and 2 and 3 are a couple. Originally they are sitting in a row in this order: [2, 0, 1, 3]. The minimum number of swaps is 1, for example swapping 2 with 1.
I know there is a greedy solution for this problem. You just need to scan the array from left to right. Every time you see an unmatched pair, you swap the first person of the pair to his/her correct position. For example, in the above example for pair [2, 0], you will directly swap 2 with 1. There is no need to try swapping 0 with 3.
But I don't really understand why this works. One of the proofs I saw was something like this:
Consider a simple example: 7 1 4 6 2 3 0 5. At the first step we have two choices to match the first couple: swap 7 with 0, or swap 1 with 6. Then we get 0 1 4 6 2 3 7 5 or 7 6 4 1 2 3 0 5. Note that the first couple no longer counts. The remaining part is composed of 4 X 2 3 Y 5 (X=6, Y=7 or X=1, Y=0). Since different couples are unrelated, we don't care whether X Y is the 6 7 pair or the 0 1 pair. They are equivalent! Thus our choice doesn't matter.
I feel that this is very reasonable but not compelling enough. In my opinion we have to prove that X and Y are a couple in all possible cases, and I don't know how. Can anyone give a hint? Thanks!
I've split the problem into 3 examples. A's are a pair and so are B's in all examples. Note that throughout the examples a match requires that elements are adjacent and the first element occupy an index that satisfies index%2 = 0. An array looking like this [X A1 A2 ...] does not satisfy this condition, however this does [X Y A1 A2 ...]. The examples also do not look to the left at all, because looking to the left of A2 below is the same as looking to the right of A1.
First example
There's an even number of elements between two unmatched pairs:
A1 B1 ..2k.. A2 B2 .. for any number k in {0, 1, 2, ..}, meaning A1 B1 A2 B2 .. is just another case.
Both can be matched in one swap:
A1 A2 ..2k.. B1 B2 .. or B2 B1 ..2k.. A2 A1 ..
Order is not important, so it doesn't matter which pair is first. Once the pairs are matched, there will be no more swapping involving either pair. Finding A2 based on A1 will result in the same amount of swaps as finding B2 based on B1.
Second example
There's an odd number of elements between two pairs (2k + the element C):
A1 B1 ..2k.. C A2 B2 D .. (A1 B1 ..2k.. C B2 A2 D .. is identical)
Both cannot be matched in one swap, but like before it doesn't matter which pair is first nor if the matched pair is in the beginning or in the middle part of the array, so all these possible swaps are equally valid, and none of them creates more swaps later on:
A1 A2 ..2k .. C B1 B2 D .. or B2 B1 ..2k.. C A2 A1 D .. Note that the last pair is not matched
C B1 ..2k.. A1 A2 B2 D .. or A1 D ..2k.. C A2 B2 B1 .. Here we're not matching the first pair.
The important thing about this is that in each case, only one pair is matched and none of the elements of that pair will need to be swapped again. The remaining non-matched pair ends up as one of:
..2k.. C B1 B2 D ..
..2k.. C A2 A1 D ..
C B1 ..2k.. B2 D ..
A1 D ..2k.. C A2 ..
They are clearly equivalent in terms of swaps needed to match the remaining A's or B's.
Third example
This is logically identical to the second. Both B1/A2 and A2/B2 can have any number of elements between them. No matter how elements are swapped, only one pair can be matched. m1 and m2 are arbitrary number of elements. Note that elements X and Y are just the elements surrounding B2, and they're only used to illustrate the example:
A1 B1 ..m1.. A2 ..m2.. X B2 Y .. (A1 B1 ..m1.. B2 ..m2.. X A2 Y .. is identical)
Again both pairs cannot be matched in one swap, but it's not important which pair is matched, or where the matched pair position is:
A1 A2 ..m1.. B1 ..m2.. X B2 Y .. or B2 B1 ..m1.. A2 ..m2.. X A1 Y .. Note that the last pair is not matched
A1 X ..m1.. A2 ..m2-1.. B1 B2 Y .. or A1 Y ..m1.. A2 ..m2.. X B2 B1.. depending on position of B2. Here we're not matching the first pair.
Matching the pair around A2 is equivalent, but omitted.
As in the second example, one swap can also be matching a pair in the beginning or in the middle of the array, but either choice doesn't change that only one pair is matched. Nor does it change the remaining amount of unmatched pairs.
A little analysis
Keeping in mind that matched pairs drop out of the list of unmatched/problem pairs, the list of unmatched pairs shrinks by one or two pairs with each swap. Since it's not important which pair drops out of the problem, it might as well be the first. In that case we can assume that pairs to the left of the cursor/current index are all matched, and that we only need to match the first pair, unless it's already matched by coincidence and the cursor is then rightfully moved.
It becomes even more clear if the above examples are looked at with the cursor being at the second unmatched pair, instead of the first. It still doesn't matter which pairs are swapped for the amount of total swaps needed. So there's no need to try to match pairs in the middle. The resulting amount of swaps are the same.
The only time two pairs can be matched with only one swap are those in the first example. There is no way to match two pairs in one swap in any other setup. Looking at the result of the swap in the second and third examples, it also becomes clear that none of the results have any advantage to any of the others and that each result becomes a new problem that can be described as one of the three cases (two cases really, because second and third are equivalent in terms of match-able pairs).
Optimal swapping
There is no way to modify the array to prepare it for more optimal swapping later on. Either a swap will match one or two pairs, or it will count as a swap with no matches:
Looking at this: A1 B1 ..2k.. C B2 ... A2 ...
Swap to prepare for optimal swap:
A1 B1 ..2k.. A2 B2 ... C ... no matches
A1 A2 ..2k.. B1 B2 ... C ... two in one
Greedy swap:
B2 B1 ..2k.. C A1 ... A2 ... one
B2 B1 ..2k.. A2 A1 ... C ... one
Un-matching pairs
Pairs already matched will not become unmatched because that would require that:
For A1 B1 ..2k.. C A2 B2 D ..
C is identical to A1 or
D is identical to B1
either of which is impossible.
Likewise with A1 B1 ..m1.. (Z) A2 (V) ..m2.. X B2 Y ..
Or it would require that matched pairs are shifted one (or any odd number of) index inside the array. That's also not possible, because we only ever swap, so the array elements aren't being shifted at all.
[Edited for clarity 4-Mar-2020.]
There is no point doing a swap which does not put (at least) one couple together. To do so would add 1 to the swap count and leave us with the same number of unpaired couples.
So, each time we do a swap, we put one couple together leaving at most n-1 couples. Repeating the process we end up with 1 pair, who must by then be a couple. So, the worst case must be n-1 swaps.
Clearly, we can ignore couples who are already together.
Clearly, where we have two pairs a:B b:A, one swap will create the two couples a:A b:B.
And if we have m pairs a:Q b:A c:B ... q:P -- where the m pairs are a "disjoint subset" (or cycle) of couples, m-1 swaps will put them into couples.
So: the minimum number of swaps is going to be n - s where s is the number of "disjoint subsets" (and s >= 1). [A subset may, of course, contain just one couple.]
Interestingly, there is nothing clever you can do to reduce the number of swaps. Provided every swap creates a couple you will do the minimum number.
If you wanted to arrange each couple in height order as well, things may or may not be more interesting.
FWIW: having shown that you cannot do better than n-1 swaps for each disjoint set of n couples, the trick then is to avoid the O(n^2) search for each swap. That can be done relatively straightforwardly by keeping a vector with one entry per person, giving where they are currently sat. Then in one scan you pick up each person and if you know where their partner is sat, swap down to make a pair, and update the location of the person swapped up.
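That scan-with-a-position-vector idea looks roughly like this in Python (a sketch under my own naming; couples are assumed to be (0,1), (2,3), ...):

```python
def min_swaps(row):
    """Greedy left-to-right pairing with an O(1) partner lookup."""
    row = list(row)
    pos = {p: i for i, p in enumerate(row)}  # where each person currently sits
    swaps = 0
    for i in range(0, len(row), 2):
        partner = row[i] ^ 1                 # 2k and 2k+1 differ only in the low bit
        if row[i + 1] != partner:
            j = pos[partner]                 # O(1) instead of an O(n) search
            row[i + 1], row[j] = row[j], row[i + 1]
            pos[row[j]] = j                  # update the two seats we touched
            pos[row[i + 1]] = i + 1
            swaps += 1
    return swaps

print(min_swaps([2, 0, 1, 3]))              # 1
print(min_swaps([7, 1, 4, 6, 2, 3, 0, 5]))  # 2: one cycle of three unmatched pairs
```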
I will swap every even positioned member,
if he/she doesn't sit besides his/her partner.
Even positioned means array indexed 1, 3, 5 and so on.
The couples are [even, odd] pair. For example [0, 1], [2, 3], [4, 5] and so on.
The loop will be like that:
for(i=1; i<n*2; i+=2) // when n = # of couples.
Now, we will check i-th and (i-1)-th index member. If they are not couple, then we will look for the partner of (i-1)-th member and once we have it, we should swap it with i-th member.
For an example, say at i=1, we got 6, now if (i-1)-th element is 7 then they form a couple (if (i-1)-th element is 5 then [5, 6] is not a couple.) and we don't need any swap, otherwise we should look for the partner of (i-1)-th element and will swap with i-th element. So, (i-1)-th and i-th will form a couple.
It ensures that we need to check only half of the total members, that is, n.
And for any non-matched couple, we need a linear search from the i-th position through the rest of the array, which is O(2n), i.e. O(n).
So the overall time complexity will be O(n^2).
In the worst case, the minimum number of swaps will be n-1 (this is the maximum as well).
Very straightforward. If you need help to code, let us know.
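If it helps, here is one possible Python rendering of the loop described (a sketch of my own; the even seat is row[i-1] and couples are (even, even+1)):

```python
def count_swaps(row):
    row = list(row)
    swaps = 0
    for i in range(1, len(row), 2):   # the (i-1)-th and i-th seats form a pair
        partner = row[i - 1] ^ 1      # partner of the (i-1)-th member
        if row[i] != partner:
            j = row.index(partner)    # linear search: O(n) per unmatched pair
            row[i], row[j] = row[j], row[i]
            swaps += 1
    return swaps

print(count_swaps([2, 0, 1, 3]))  # 1
```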
Interview question: you're given a file of roughly one billion unique numbers, each of which is a 32-bit quantity. Find a number not in the file.
When I was approaching this question, I tried a few examples with 3-bit and 4-bit numbers. For the examples I tried, I found that when I XOR'd the set of numbers, I got a correct answer:
import functools

a = [0, 1, 2]              # missing 3
b = [1, 2, 3]              # missing 0
c = [0, 1, 2, 3, 4, 5, 6]  # missing 7
d = [0, 1, 2, 3, 5, 6, 7]  # missing 4

functools.reduce((lambda x, y: x ^ y), a)  # returns 3
functools.reduce((lambda x, y: x ^ y), b)  # returns 0
functools.reduce((lambda x, y: x ^ y), c)  # returns 7
functools.reduce((lambda x, y: x ^ y), d)  # returns 4
However, when I coded this up and submitted it, it failed the test cases.
My question is: in an interview setting, how can I confirm or rule out with certainty that an approach like this is not a viable solution?
In all your examples, the array is missing exactly one number. That's why XOR worked. Try test cases that don't all share that property.
For the problem itself, you can construct a number by taking the minority of each bit.
EDIT
Why XOR worked on your examples:
When you take the XOR of all the numbers from 0 to 2^n - 1, the result is 0 (each bit position contains exactly 2^(n-1) ones). So if you take out one number and XOR all the rest, the result is the number you took out, because XORing that number with the XOR of all the rest must give 0.
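This is quick to confirm exhaustively for a small n (a sketch of my own, not from the answer): the XOR over the full range 0..2^n - 1 is 0, so removing any single number leaves an XOR equal to that number.

```python
from functools import reduce

n = 4
full = list(range(2 ** n))
assert reduce(lambda x, y: x ^ y, full) == 0  # each bit appears 2**(n-1) times

for missing in full:
    rest = [x for x in full if x != missing]
    # XOR of everything except `missing` is `missing` itself
    assert reduce(lambda x, y: x ^ y, rest) == missing
print("XOR recovers any single missing number for n =", n)
```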
Assuming a 64-bit system with more than 4gb free memory, I would read the numbers into an array of 32-bit integers. Then I would loop through the numbers up to 32 times.
Similarly to an inverse "Mastermind" game, I would construct a missing number bit by bit. In every loop, I count the numbers that match the bits I have chosen so far followed by a 0 or a 1. Then I append the bit value which occurs less frequently. Once the count reaches zero, I have a missing number.
Example:
The numbers in decimal/binary are
1 = 01
2 = 10
3 = 11
There is one number with most-significant-bit 0 and two numbers with 1. Therefore, I take 0 as most significant bit.
In the next round, I have to match 00 and 01. This immediately leads to 00 as missing number.
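A Python sketch of this bit-by-bit construction (the function name and the small bit width are my own; it assumes at least one value in the range is actually missing):

```python
def find_missing(nums, width):
    """Build a missing width-bit number MSB-first by keeping the minority branch."""
    candidates = list(nums)
    result = 0
    for bit in range(width - 1, -1, -1):
        ones = [x for x in candidates if (x >> bit) & 1]
        zeros = [x for x in candidates if not (x >> bit) & 1]
        if len(zeros) <= len(ones):   # pick the less frequent bit value
            candidates = zeros
        else:
            candidates = ones
            result |= 1 << bit
        if not candidates:            # count reached zero: this prefix is unused,
            return result             # so the remaining bits can stay 0
    return result

print(find_missing([1, 2, 3], 2))  # 0, as in the worked example above
```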
Another approach would be to use a random number generator. Chances are 50% that you find a non-existing number as first guess.
Proof by counterexample: 3^4^5^6=4.
Just to give some context, my motivation for this programming question is to understand the derivation of the CHSH inequality, and it basically entails maximizing the following function:
Abs[c1 Cos[2(a1-b1)]+ c2 Cos[2(a1-b2)] + c3 Cos[2(a2-b1)] + c4 Cos[2(a2-b2)]]
where a1, b1, b2, and a2 are arbitrary angles and c1, c2, c3, c4 = +/- 1 ONLY. I want to be able to determine the maximum value of this function along with the combination of angles that leads to this maximum.
Eventually, I also want to repeat the calculation for a1,a2,a3,b1,b2,b3 (which will have a total of nine cosine terms)
When I tried putting the following code into Mathematica, it simply spat the input back at me without performing any computation. Can someone help me out? (Note my code doesn't include the c1, c2, c3, c4 parameters; I wasn't quite sure how to incorporate them.)
Maximize[{Abs[Cos[2 (a1 - b1)] - Cos[2 (a1 - b2)] + Cos[2 (a2 - b1)] + Cos[2 (a2 - b2)]],
  0 <= a1 <= 2 \[Pi], 0 <= b1 <= 2 \[Pi], 0 <= a2 <= 2 \[Pi], 0 <= b2 <= 2 \[Pi]},
 {a1, b2, a2, b1}]
The answer is 4. This is because each Cos can be made to equal 1. You have 4 variables a1, a2, b1 and b2, and four cosines, so there are going to be several ways of making the combinations 2(a1-b1), 2(a1-b2), 2(a2-b1) and 2(a2-b2) equal 0 (hence choosing the corresponding c1/c2/c3/c4 to be +1), or equal to pi (hence choosing the corresponding c1/c2/c3/c4 to be -1).
For one set of angles that give the max, the obvious answer is a1=a2=b1=b2=0. For the 9 cosine case, the max will be 9, and one possible answer is a1=a2=a3=b1=b2=b3=0.
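If Mathematica refuses to evaluate, the claim is easy to sanity-check numerically. Here is a small Python sketch (my own, not part of the answer) that evaluates the expression on a coarse angle grid over all sign choices:

```python
import itertools
import math

def f(c, a1, a2, b1, b2):
    c1, c2, c3, c4 = c
    return abs(c1 * math.cos(2 * (a1 - b1)) + c2 * math.cos(2 * (a1 - b2))
               + c3 * math.cos(2 * (a2 - b1)) + c4 * math.cos(2 * (a2 - b2)))

# the obvious maximizer from the answer: all angles 0, all coefficients +1
assert math.isclose(f((1, 1, 1, 1), 0, 0, 0, 0), 4.0)

# a coarse grid search over multiples of pi/4 never exceeds 4
grid = [k * math.pi / 4 for k in range(8)]
best = max(f(c, a1, a2, b1, b2)
           for c in itertools.product((1, -1), repeat=4)
           for a1 in grid for a2 in grid
           for b1 in grid for b2 in grid)
print(best)  # 4.0
```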
Regarding using Mathematica, I think the lesson is that it's always best to think about the maths itself before reaching for tools to help with it.
I'm trying to understand two's complement:
Does two's complement mean that this number is invalid:
1000
Does two's complement disallow the use of the most significant bit for positive numbers? I.e., could
1000
ever represent 2^3? Or would it represent -0?
I'm also confused about why you need to add 1 to a one's complement.
In two's complement, the MSB (most significant bit) is set to one for negatives. To multiply a two's complement number by -1 you do the following:
invert all of the bits;
add one to the result.
For example:
the number 10010, after inverting, gives 01101; after adding one you get 01110. That means 10010 is negative 14.
the number 1000, after inverting, gives 0111; after adding one you get 1000 again. 1000 is negative 8, the one value whose negation overflows back to itself.
Now, to your last question: no. If you work with two's complement you can't use the MSB for positive numbers. But you could decide you are not using two's complement and represent larger positive numbers.
Twos-complement is based on two requirements:
numbers are represented by a fixed number of bits;
x + -x = 0.
Assuming a four bit representation, say, we have
0 + -0 = 0000 + -0000 (base 2) = 0000 => -0000 = 0000
1 + -1 = 0001 + -0001 (base 2) = 0000 => -0001 = 1111 (carry falls off the end)
Now we have our building blocks, a drop of induction will show you the "flip the bits and add 1" algorithm is exactly what you need to convert a positive number to its twos-complement negative representation.
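The induction can also be checked exhaustively for a small width; here is a four-bit Python sketch (mine, not part of the answer):

```python
bits = 4
mask = (1 << bits) - 1             # 0b1111: keeps results within four bits

for x in range(1 << bits):
    neg = ((x ^ mask) + 1) & mask  # flip the bits and add 1
    assert (x + neg) & mask == 0   # x + -x = 0 once the carry falls off the end
print("flip-the-bits-and-add-1 negates every 4-bit value")
```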
2's complement is mostly a matter of how you interpret the value, most math* doesn't care whether you view a number as signed or not. If you're working with 4 bits, 1000 is 8 as well as -8. This "odd symmetry" arises here because adding it to a number is the same as xoring it with a number (since only the high bit is set, so there is no carry into any bits). It also arises from the definition of two's complement - negation maps this number to itself.
In general, any number k represents the set of numbers { a | a ≡ k (mod n) }, where n is 2 to the power of the number of bits you're working with. This perhaps somewhat odd effect is a direct result of using modular arithmetic and is true whether you view numbers as signed or unsigned. The only difference between the signed and unsigned interpretations is which number you take to be the representative of such a set. For unsigned, the representative is the only such a that lies between 0 and n. For signed numbers, the representative is the only such a that lies between -(n/2) and (n/2)-1.
As for why you need to add one, the goal of negation is to find an x' such that x' + x = 0. If you only complemented the bits in x but didn't add one, x' + x would not have carries at any position and just sum to "all ones". "All ones" plus 1 is zero, so adding one fixes x' so that the sum will go to zero. Alternatively (well it's not really an alternative), you can take ~(x - 1), which gives the same result as ~x + 1.
*Signedness affects the result of division, right shift, and the high half of multiplication (which is rarely used and, in many programming languages, unavailable anyway).
It depends on how many bits you use to represent numbers.
The leftmost (largest) bit has a value of -2**(N-1), or in this case -8 (N being the number of bits). Subsequent bits have their normal values.
So
1000
is -8
1111
is -1
0111
is 7.
However, if you have 8 bits these become different values!
0000 1000
is positive 8. Only the leftmost bit adds a negative value to the answer.
In either case, the range of numbers is from
1000....0
for -2**(N-1) with N bits
to
0111....1
Which is 2**(N-1) -1. (This is just normal base 2 since the leftmost bit is 0.)
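The place-value reading above can be written directly (a small Python sketch of my own):

```python
def from_twos_complement(bit_string):
    """Interpret a bit string with the leftmost bit worth -2**(N-1)."""
    n = len(bit_string)
    value = -int(bit_string[0]) * 2 ** (n - 1)  # top bit: negative weight
    value += int(bit_string[1:] or "0", 2)      # remaining bits: normal values
    return value

print(from_twos_complement("1000"))      # -8
print(from_twos_complement("1111"))      # -1
print(from_twos_complement("0111"))      # 7
print(from_twos_complement("00001000"))  # 8 with eight bits
```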