I want to find the 9's complement of a number but have failed.
I tried it with the methods for 1's and 2's complements, but to no effect.
What is the common method for finding the N's complement of a number?
The nines' complement in base 10 is found by subtracting each digit from 9.
So 45 (= ...000045) becomes 54 (= ...999954).
Ten's complement is just nines' complement plus 1. So ...000045 becomes (...999954 + 1) = ...999955.
More info on Wikipedia.
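For a fixed digit width this is straightforward to compute directly. Here is a minimal C sketch (not part of the original answer; the 6-digit width and function names are my own choices):

#include <stdio.h>

/* Nines' complement for a fixed width of 6 decimal digits: 999999 - x. */
unsigned nines_complement(unsigned x) { return 999999u - x; }

/* Ten's complement: the nines' complement plus 1. */
unsigned tens_complement(unsigned x) { return 999999u - x + 1u; }

int main(void) {
    printf("%06u\n", nines_complement(45));  /* prints 999954 */
    printf("%06u\n", tens_complement(45));   /* prints 999955 */
    return 0;
}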
Use of the n's complement (a simple three-step method):
Suppose 512 - 96 = ? (both numbers are given in base n, say base 14).
Find the (n-1)'s complement of 96 (the second number, the one being subtracted).
The 13's complement of 096 is DDD - 096 = D47 (since A=10, B=11, C=12, D=13).
Find the n's complement by adding 1 to the (n-1)'s complement value. The 14's complement is D47 + 1 = D48.
Add the first number (512) to the n's complement (D48) and drop the carry: 512 + D48 = 45A (carry 1 removed).
CHECK:
512 (base 14) = 996 (base 10)
96 (base 14) = 132 (base 10)
996 - 132 = 864 (base 10) = 45A (base 14), HENCE CHECKED.
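As an illustration only (this is my own sketch, not part of the answer above), the same three-step method can be coded for an arbitrary base n, with the digits stored most significant first; the function name is made up:

#include <stdio.h>

/* Subtract b from a (a >= b), both given as 'width' digits in base 'base',
   most significant digit first:
   1) take the (n-1)'s complement of each digit of b,
   2) add 1 to get the n's complement (folded into the carry here),
   3) add the result to a and drop the final carry. */
void complement_subtract(int base, int width, const int *a, const int *b, int *out) {
    int carry = 1;                                     /* step 2: the "+1" */
    for (int i = width - 1; i >= 0; i--) {
        int digit = a[i] + (base - 1 - b[i]) + carry;  /* steps 1 and 3 */
        out[i] = digit % base;
        carry  = digit / base;
    }
    /* step 3: the leftover carry is simply discarded */
}

int main(void) {
    int a[3] = {5, 1, 2};                /* 512 in base 14 */
    int b[3] = {0, 9, 6};                /* 096 in base 14 */
    int r[3];
    complement_subtract(14, 3, a, b, r);
    printf("%d %d %d\n", r[0], r[1], r[2]);  /* 4 5 10, i.e. 45A */
    return 0;
}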
Multiply two numbers without using * operator, and with minimum number of additions
For example, if the input is 5*8, one way is to add the bigger number the smaller number of times, and that will be the answer. But how can I minimise the number of additions?
One strategy to reduce the number of additions is to add things hierarchically. This is the same strategy that is used in the classic power algorithm, which uses the same technique to minimize the number of multiplications.
Let's say you need
M = a * 8 = a + a + a + a + a + a + a + a
Once you calculate m2 = a + a, you can substitute it into the above addition and get
M = m2 + m2 + m2 + m2
Then you can calculate m4 = m2 + m2 and arrive at
M = m4 + m4
So the result is calculated in 3 additions instead of the original 7. Moreover, adding a value to itself can be replaced by a left shift by 1 bit (if this is allowed), which reduces the number of additions even further.
This technique can be elegantly implemented by analyzing the binary representation of one of the multiplicands (exactly as is typically done in the power algorithm). E.g. if you need to calculate a * b you can do it in this fashion:
int M = 0;
for (int m = a; b != 0; b >>= 1, m <<= 1)   // m runs through a, 2a, 4a, ...
    if ((b & 1) != 0)                       // one addition per set bit of b
        M += m;
The total number of additions such an implementation will use is the number of 1 bits in b. It will multiply 5 by 8 in 1 addition.
Note that in order to achieve the lowest number of additions this strategy provides, multiplying the larger number by the smaller one is not necessarily the best idea. E.g. multiplying by 8 uses fewer additions than multiplying by 5.
A better example would be 5 * 7. This is essentially binary long multiplication, but with a clever choice of the multiplier.
If we can use left-shift and it doesn't count as an addition: choose the number with the fewer 1 bits as the multiplier. That will be 5 in this case.
111
x 101
------
111
000x <== This is not an addition, only a left shift
111xx
-------
100011 <== 2 additions totally.
-------
If we cannot use left-shift: note that a left shift is the same as a doubling, i.e. an addition. Then we have to use a slightly different tactic. Since the multiplicand is shifted (position of MSB - 1) times, pick as the multiplier the number with the lesser value of (position of MSB - 1) + (number of bits set). In the case of 5 * 8, the values are (3-1) + 2 = 4 and (4-1) = 3 respectively. The lesser is for 8, and hence we use that as the multiplier.
101
x 1000
-------
000
000x <== left shift
000xx <== left shift
101xxx <== left shift
--------
101000 <== no addition needed, so 3 additions totally.
--------
The above has three shifts and zero additions.
I like Codor's suggestion of using shifts and having zero additions!
But if you can truly only use additions and no other operations like shifts, logs, subtractions, etc, I believe the minimal number of additions to compute a * b will be:
min{ ceil[log2(a+1)] + numbits(a), ceil[log2(b+1)] + numbits(b) } - 2
where
numbits(n) is the number of ones in the binary representation of
integer n.
For example, numbits(4)=1, numbits(5)=2, etc.
ceil[x] is float x rounded up to the nearest integer.
For example, ceil[2.6]=3.
Now, how did we get there? First look at your original example. You can at least group additions together. E.g.
8+8=16
16+16=32
32+8=40
To generalize this: if you need to multiply a by b using only additions that involve a or results already computed, you need:
ceil[log2(b+1)] - 1 additions to compute all the 2^n * a intermediate numbers you need.
In your example, ceil[log2(5+1)] - 1 = 2: you need 2 additions to compute 16 and 32.
numbits(b)-1 additions to add all intermediate results together, where numbits(b) is the number of ones in the binary representation of b.
In your example, 5 = 2^2 + 2^0 so numbits(5)-1 = 1: you need 1 addition to do 32 + 8
Interestingly, this means that your statement
add the bigger number smaller number of times
is not always the recipe to minimize the number of additions.
For example, if you need to compute 2^9 * (2^9 - 1), you are better off computing additions based on (2^9-1) than on 2^9 even though 2^9 is larger. The fastest approach is:
x = (2^9-1) + (2^9-1)
And then
x = x+x
8 times for a total of 9 additions.
If instead you added 2^9 to itself, you would need 8 additions to get all the 2^k*2^9 first and then an additional 8 additions to add all these numbers together for a total of 16 additions.
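As a rough illustration (my own sketch, not part of the answer above), the same count can be computed as floor(log2(b)) doublings plus numbits(b) - 1 combining additions, which equals ceil[log2(b+1)] + numbits(b) - 2; the helper names are made up:

#include <stdio.h>

/* Additions needed to compute a*b when b (assumed >= 1) is used as the
   multiplier: floor(log2 b) doublings of the other factor plus
   popcount(b) - 1 additions to combine the partial products. */
int additions_using(unsigned b) {
    int doublings = -1, ones = 0;
    for (unsigned t = b; t != 0; t >>= 1) { doublings++; ones += t & 1; }
    return doublings + ones - 1;
}

int min_additions(unsigned a, unsigned b) {
    int ka = additions_using(a), kb = additions_using(b);
    return ka < kb ? ka : kb;
}

int main(void) {
    printf("%d\n", min_additions(5, 8));      /* 3 */
    printf("%d\n", min_additions(512, 511));  /* 9, matching the 2^9 * (2^9 - 1) example */
    return 0;
}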
Suppose a is to be multiplied by b and we are storing the result in res. We add a to res only when b is odd; in each step we double a and halve b, and this is done in a loop until b becomes 0. The doubling and halving can be done using bitwise operators (see the C sketch after the steps below).
Let the two given numbers be 'a' and 'b'
1) Initialize result 'res' as 0.
2) Do following while 'b' is greater than 0
a) If 'b' is odd, add 'a' to 'res'
b) Double 'a' and halve 'b'
3) Return 'res'.
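A minimal C sketch of this doubling/halving loop (my own illustration; the function name is made up), using shifts for the doubling and halving:

#include <stdio.h>

/* Multiply a by b using only addition, with shifts for doubling and halving. */
unsigned multiply(unsigned a, unsigned b) {
    unsigned res = 0;
    while (b > 0) {
        if (b & 1)       /* if b is odd, add a to the result */
            res += a;
        a <<= 1;         /* double a */
        b >>= 1;         /* halve b  */
    }
    return res;
}

int main(void) {
    printf("%u\n", multiply(5, 8));  /* 40 */
    return 0;
}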
Let's assume we consider binary numbers of length 2n, where n might be about 1000. We are looking for the kth such number (k is limited by 10^9) which has the following properties:
The amount of 1's is equal to the amount of 0's, which can be written as #(1) = #(0).
Every prefix of this number has to contain at least as many 0's as 1's. It might be easier to understand after negating the sentence: there is no prefix which contains more 1's than 0's.
And basically that's it.
So to make it clear, let's do an example:
n=2, k=2
We have to list the binary numbers of length 2n:
0000
0001
0010
0011
0100
0101
0110
0111
1000
and so on...
And now we have to find the 2nd number which fulfills those two requirements. So we see that 0011 is the first one, and 0101 is the second one.
If we change to k=3, then the answer doesn't exist: there are further numbers with equal amounts of the opposite bits, but for 0110 there is the prefix 011, so the number doesn't fulfill the second constraint, and the same goes for every number which has 1 as its most significant bit.
So what have I done so far to find an algorithm?
Well, my first idea was to generate all possible bit settings and check whether each has those two properties, but generating them all would take O(2^(2n)), which is not an option for n=1000.
Additionally, I realized there is no need to check any number smaller than 0011 for n=2, 000111 for n=3, and so on (roughly speaking, those in which the upper half of the most significant bits remains "untouched"), because those numbers have no possibility of fulfilling the #(1) = #(0) condition. Using that I can reduce n by half, but it doesn't help much: instead of 2 * forever I have a forever-running algorithm. It's still O(2^n) complexity, which is way too big.
Any idea for an algorithm?
Conclusion
This text was written as a result of my thoughts after reading Andy Jones' post.
First of all, I won't post the code I used, since it is essentially point 6 of the document linked from Andy's post (Kasa 2009). All you have to do is treat nr there as what I described as k. The unranking algorithm for Dyck words lets us find the answer much faster. However, it has one bottleneck:
while (k >= C(n-i,j))
Considering that n <= 1000, the Catalan numbers can get huge, even C(999,999). We could use big-number arithmetic, but on the other hand I came up with a little trick to get around it and use standard integers.
We don't need to know how big a Catalan number actually is, as long as we know whether it is bigger than k. So we will build the Catalan numbers, caching partial sums, in an n x n table:
      ...
5 |                      42 ...
4 |                  14  42 ...
3 |              5  14  28 ...
2 |          2   5   9  14 ...
1 |      1   2   3   4   5 ...
0 |  1   1   1   1   1   1 ...
  ------------------------------
     0   1   2   3   4   5 ...
Generating it is quite trivial:
C(x,0) = 1
C(x,y) = C(x,y-1) + C(x-1,y) where y > 0 && y < x
C(x,y) = C(x,y-1) where x == y
So as we can see, only this:
C(x,y) = C(x,y-1) + C(x-1,y) where y > 0 && y < x
can cause overflow.
Let's stop at this point and give a definition.
k-flow: not a real integer overflow, but rather the information that the value of C(x,y) is bigger than k.
My idea is to check, after each application of the above formula, whether C(x,y) is greater than k or whether any of the summands is -1. If so, we store -1 instead, which acts as a marker that k-flow has happened. It is quite obvious that if a k-flowed number is summed with any non-negative number the result is still k-flowed; in particular, the sum of two k-flowed numbers is k-flowed.
The last thing we have to show is that no real overflow can occur. A real overflow could only happen if we summed a + b where neither of them is k-flowed, yet their sum overflows.
Of course that is impossible, since the maximum value is bounded by a + b <= 2*k <= 2*10^9 <= 2,147,483,647, where the last value in this inequality is the maximum value of a signed int. I also assume that int has 32 bits, as in my case.
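A minimal sketch of the capped table (illustrative only, not the original code; names are made up), with -1 acting as the k-flow marker:

#include <stdio.h>

#define MAXN 1001
static int C[MAXN][MAXN];   /* C[x][y]; -1 is the k-flow marker ("bigger than k") */

/* Add two table entries, saturating at the k-flow marker. As argued above,
   a + b <= 2*k <= 2*10^9 fits in a signed 32-bit int, so no real overflow. */
static int cap_add(int a, int b, int k) {
    if (a == -1 || b == -1) return -1;
    return (a + b > k) ? -1 : a + b;
}

void build_table(int n, int k) {
    for (int x = 0; x <= n; x++) {
        C[x][0] = 1;
        for (int y = 1; y <= x; y++)
            C[x][y] = (y == x) ? C[x][y - 1]
                               : cap_add(C[x][y - 1], C[x - 1][y], k);
    }
}

int main(void) {
    build_table(5, 1000000000);
    printf("%d\n", C[5][5]);    /* 42, matching the table above */
    return 0;
}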
The numbers you are describing correspond to Dyck words. Pt 2 of Kasa 2009 gives a simple algorithm for enumerating them in lexicographic order. Its references should be helpful if you want to do any further reading.
As an aside (and be warned I'm half asleep as I write this, so it might be wrong), the Wikipedia article notes that the number of Dyck words of length 2n is the nth Catalan number, C(n). You might want to find the smallest n such that C(n) is larger than the k you're looking for, and then enumerate Dyck words starting from X^n Y^n.
I'm sorry I misunderstood this problem last time, so I have edited my answer; now I can promise it is correct, and you can test the code first. The complexity is O(n^2). The detailed answer follows.
First, we can transform the problem into the following equivalent one:
We are looking for the kth largest number (k is limited by 10^9) which has the following properties:
The amount of 1's is equal to the amount of 0's, i.e. #(1) = #(0).
Every prefix of this number has to contain at least as many 1's as 0's, which means: there is no prefix which contains more 0's than 1's.
Let's give an example to explain it. Let n=3 and k=4; the number of satisfying strings is 5, and the picture below shows what we determine in the previous problem and what we determine in the new one:
| 000111 ------> 111000 ^
| 001011 ------> 110100 |
| 001101 ------> 110010 |
| previous 4th number 010011 ------> 101100 new 4th largest number |
v 010101 ------> 101010 |
So after we solve the new problem, we just need to take the bitwise NOT.
Now the main problem is how to solve the new problem. First, let A be the array, so A[m] (1 <= m <= 2n) can only be 1 or 0. Let DP[v][q] be the number of ways to fill the last v positions {A[2n-v+1]~A[2n]} with exactly q ones while satisfying condition 2; then DP[2n][n] is the number of satisfying strings.
A[1] can only be 1 or 0. If A[1]=1, the count of such numbers is DP[2n-1][n-1]; if A[1]=0, the count is DP[2n-1][n]. Now we want to find the kth largest number: if k <= DP[2n-1][n-1], the kth largest number must have A[1]=1, and we then decide A[2] by comparing k against DP[2n-2][n-2]; if k > DP[2n-1][n-1], the kth largest number must have A[1]=0, we set k = k - DP[2n-1][n-1], and then decide A[2] by comparing against DP[2n-2][n-1]. By the same reasoning we can decide A[j] one by one until there is nothing left to compare. Here is an example (n=3, k=4):
(We use dynamic programming to build the DP matrix; the recurrence is DP[v][q] = DP[v-1][q-1] + DP[v-1][q].)
Note: we need the numbers in the leftmost column to be available for comparison, so we add an extra column to the left of the DP matrix; it is not part of the DP matrix, and all of its entries are 1.
The numbers in brackets are initialized by hand; the initialization simply follows the meaning of the DP matrix.
DP matrix = (1) (0) (0) (0)    4 <= DP[5][2] = 5  --> A[1] = 1
            (1) (1) (0) (0)    4 >  DP[4][1] = 3  --> A[2] = 0, k = 4 - 3 = 1
            (1) (2) (0) (0)    1 <= DP[3][1] = 2  --> A[3] = 1
            (1) (3)  2  (0)    1 <= 1             --> A[4] = 1
            (1) (4)  5  (0)    no number to compare, so A[5]~A[6] = 0
            (1) (5)  9   5     so the number is 101100
If this is not yet clear, you can use the code to understand it.
Note: DP[2n][n] grows very fast, so this code only works for n <= 19. In the problem n < 1000, so you would need big-number arithmetic; the code could also be optimized with bit operations, so it is just a reference:
/*--------------------------------------------------
Environment: X86 Ubuntu GCC
Author: Cong Yu
Blog: aimager.com
Mail: funcemail#gmail.com
Build_Date: Mon Dec 16 21:52:49 CST 2013
Function:
--------------------------------------------------*/
#include <stdio.h>

int DP[2000][1000];
// kth is the result
int kth[1000];

void Oper(int n, int k){
    int i, j;
    // temp is the value we compare k against at each step
    // jishu counts how many positions of kth have been decided so far
    int temp, jishu = 0;
    // initialize
    for(i = 1; i <= 2*n; i++)
        DP[i-1][0] = i - 1;
    for(j = 2; j <= n; j++)
        for(i = 1; i <= 2*j - 1; i++)
            DP[i-1][j-1] = 0;
    for(i = 1; i <= 2*n; i++)
        kth[i-1] = 0;
    // fill the DP matrix with dynamic programming
    for(j = 2; j <= n; j++)
        for(i = 2*j; i <= 2*n; i++)
            DP[i-1][j-1] = DP[i-2][j-2] + DP[i-2][j-1];
    // the main idea: decide the positions of the kth largest number one by one
    if(k > DP[2*n-1][n-1])
        printf("nothing\n");
    else{
        i = 2*n;
        j = n;
        for(; j >= 1; i--, jishu++){
            if(j == 1)
                temp = 1;
            else
                temp = DP[i-2][j-2];
            if(k <= temp){
                // the current position must be 1
                kth[jishu] = 1;
                j--;
            }
            else{
                // the current position must be 0; skip the numbers that start with 1 here
                kth[jishu] = 0;
                if(j == 1)
                    k -= 1;
                else
                    k -= DP[i-2][j-2];
            }
        }
        // flip the bits to get back the answer of the original problem
        for(i = 1; i <= 2*n; i++){
            kth[i-1] = 1 - kth[i-1];
            printf("%d", kth[i-1]);
        }
        printf("\n");
    }
}

int main(){
    int n, k;
    scanf("%d", &n);
    scanf("%d", &k);
    Oper(n, k);
    return 0;
}
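For reference, with the sample input n=3 and k=4 the program prints 010011, i.e. the walkthrough's 101100 with its bits flipped back to answer the original problem.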
Is there an easy way to figure this out? What is the lowest (most negative) number that can be represented by 7-bit two's complement? Show how to convert the number into its two's complement representation.
The lowest number is -2^6. To find a negative number's inverse in 2's complement (a.k.a. its absolute value), flip the bits and add one. So (-1)*1000001 = 0111110 + 1 = 0111111 = 1000000 - 1 = 2^6 - 1. As you can see, there is a number lower than 1000001, namely the one that is one less than it: 1000000. Finding its absolute value we get:
(-1)*(1000000) = (-1)*(1000001 - 1) = (-1)*(1000001) + 1 = (2^6 - 1) + 1 = 2^6.
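As a quick sanity check (my own, not part of the answer), a small C program that interprets a 7-bit pattern as two's complement confirms that 1000000 is the lowest value:

#include <stdio.h>

/* Interpret the low 7 bits of 'pattern' as two's complement:
   if the sign bit (bit 6) is set, subtract 2^7. */
int from_7bit(unsigned pattern) {
    pattern &= 0x7F;
    return (pattern & 0x40) ? (int)pattern - 128 : (int)pattern;
}

int main(void) {
    printf("%d\n", from_7bit(0x40)); /* 1000000 -> -64 = -2^6, the lowest value */
    printf("%d\n", from_7bit(0x41)); /* 1000001 -> -63 */
    printf("%d\n", from_7bit(0x3F)); /* 0111111 ->  63, the highest value */
    return 0;
}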
I'm trying to understand two's complement:
Does two's complement mean that this number is invalid:
1000
Does two's complement disallow the use of the most significant bit for positive numbers? I.e., could
1000
Ever represent 2^3? Or would it represent -0?
I'm also confused about why you need to add 1 to a one's complement.
In two's complement the MSB (most significant bit) is set to one for negative numbers. To multiply a two's complement number by -1 you do the following:
flip all of the bits;
add one to the result.
For example:
the number 10010, after flipping, becomes 01101; after adding one you get 01110.
That means that 10010 is negative 14.
The number 1000, after flipping, is 0111; after adding one it is 1000 again. That means that 1000 is negative 8: the most negative value is its own two's complement, so it has no positive counterpart.
Now, to your last question: no. If you work with two's complement you can't use the MSB for positive numbers. But you could decide that you are not using two's complement, and then use it for higher positive numbers.
Twos-complement is based on two requirements:
numbers are represented by a fixed number of bits;
x + -x = 0.
Assuming a four bit representation, say, we have
0 + -0 = 0000 + -0000 (base 2) = 0000 => -0000 = 0000
1 + -1 = 0001 + -0001 (base 2) = 0000 => -0001 = 1111 (carry falls off the end)
Now that we have our building blocks, a drop of induction will show you that the "flip the bits and add 1" algorithm is exactly what you need to convert a positive number to its twos-complement negative representation.
2's complement is mostly a matter of how you interpret the value, most math* doesn't care whether you view a number as signed or not. If you're working with 4 bits, 1000 is 8 as well as -8. This "odd symmetry" arises here because adding it to a number is the same as xoring it with a number (since only the high bit is set, so there is no carry into any bits). It also arises from the definition of two's complement - negation maps this number to itself.
In general, any number k represents the set of numbers { a | a ≡ k (mod n) }, where n is 2 to the power of how many bits you're working with. This perhaps somewhat odd effect is a direct result of using modular arithmetic and is true whether you view the number as signed or unsigned. The only difference between the signed and unsigned interpretations is which number you take to be the representative of such a set. For unsigned, the representative is the only such a that lies in the range 0 to n-1. For signed numbers, the representative is the only such a that lies between -(n/2) and (n/2)-1.
As for why you need to add one, the goal of negation is to find an x' such that x' + x = 0. If you only complemented the bits in x but didn't add one, x' + x would not have carries at any position and just sum to "all ones". "All ones" plus 1 is zero, so adding one fixes x' so that the sum will go to zero. Alternatively (well it's not really an alternative), you can take ~(x - 1), which gives the same result as ~x + 1.
*Signedness affects the result of division, right shift, and the high half of multiplication (which is rarely used and, in many programming languages, unavailable anyway).
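A tiny C check (mine, purely illustrative) of the identities above, showing that ~x + 1 and ~(x - 1) both negate x:

#include <stdio.h>

int main(void) {
    for (int x = -5; x <= 5; x++)
        /* complement-then-add-one and subtract-one-then-complement agree */
        printf("x=%2d  ~x+1=%2d  ~(x-1)=%2d\n", x, ~x + 1, ~(x - 1));
    return 0;
}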
It depends on how many bits you use to represent numbers.
The leftmost (largest) bit has a value of -1*(2**(N-1)), or in this case, -8. (N being the number of bits.) Subsequent bits have their normal values.
So
1000
is -8
1111
is -1
0111
is 7.
However, if you have 8 bits these become different values!
0000 1000
is positive 8. Only the leftmost bit adds a negative value to the answer.
In either case, the range of numbers is from
1000....0
for -2**(N-1) with N bits
to
0111....1
Which is 2**(N-1) -1. (This is just normal base 2 since the leftmost bit is 0.)
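As a small illustration (my own sketch, assuming the bits are given most significant first), the value can be computed by giving the leftmost bit the weight -2**(N-1) and the remaining bits their normal weights:

#include <stdio.h>

/* Value of an N-bit two's complement pattern; bits[0] is the sign bit. */
int value(const int *bits, int n) {
    int v = bits[0] ? -(1 << (n - 1)) : 0;   /* leftmost bit weighs -2^(N-1) */
    for (int i = 1; i < n; i++)
        v += bits[i] << (n - 1 - i);         /* remaining bits have normal weights */
    return v;
}

int main(void) {
    int a[4] = {1, 0, 0, 0};                 /* 1000     -> -8 */
    int b[4] = {1, 1, 1, 1};                 /* 1111     -> -1 */
    int c[8] = {0, 0, 0, 0, 1, 0, 0, 0};     /* 00001000 ->  8 */
    printf("%d %d %d\n", value(a, 4), value(b, 4), value(c, 8));
    return 0;
}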
You are given 2^32-2 unique numbers that range from 1 to 2^32-1. It's impossible to fit all the numbers into memory (thus sorting is not an option). You are asked to find the missing number. What would be the best approach to this problem?
Let's assume you cannot use big integers and are confined to 32-bit ints.
The ints are passed in through standard input.
Major Edit: Trust me to make things much harder than they have to be.
XOR all of them.
I'm assuming here that the numbers are 1 to 2^32 - 1 inclusive. This should use 1 extra memory location of 32 bits.
EDIT: I thought I could get away with magic. Ah well.
Explanation:
For those who know how Hamming Codes work, it's the same idea.
Basically, for all numbers from 0 to 2^n - 1, there are exactly 2^(n-1) 1s in each bit position across the whole set. Therefore xoring all those numbers should actually give 0. However, since one number is missing, that particular column will give one, because there's an odd number of ones in that bit position.
Note: Although I personally prefer the ** operator for exponentiation, I've changed mine to ^ because that's what the OP has used. Don't confuse ^ for xor.
Add up all the numbers you are given using your favourite big-integer library, and subtract that total from the sum of all the numbers from 1 to 2^32 - 1, as given by the arithmetic progression sum formula.
Use the bitwise XOR operator. Here is an example in JavaScript:
var numbers = [6, 2, 4, 5, 7, 1]; // 1 .. 2^3 - 1 with one value (3) missing
var result = 0;
//xor all values in numbers
for (var i = 0, l = numbers.length; i < l; i++) {
result ^= numbers[i];
}
console.log(result); //3
numbers[0] = 3; //replace 6 with 3
//same as above in functional style
result = numbers.reduce(function (previousValue, currentValue, index, array) {return currentValue ^= previousValue;});
console.log(result); //6
The same in C#:
int[] numbers = {3, 2, 4, 5, 7, 1};
int missing = numbers.Aggregate((result, next) => result ^ next);
Console.WriteLine(missing);
Assuming you can get the Size() of the input, you can use a binary-search approach. Select the set of numbers n with n < (2^32 - 2)/2 and count them; the half that is missing a number will report a count that is one short. Repeat the process iteratively on that half and you will arrive at the answer.
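A sketch of that counting idea (my own illustration; for brevity the input is an in-memory array standing in for the stream, whereas a real solution would re-read standard input once per round, roughly 32 passes, in O(1) extra memory):

#include <stdio.h>
#include <stdint.h>

/* Repeatedly halve the candidate range [lo, hi], count how many inputs fall
   in the lower half, and keep the half whose count is one short. */
uint32_t find_missing(const uint32_t *a, size_t len, uint32_t max) {
    uint64_t lo = 1, hi = max;
    while (lo < hi) {
        uint64_t mid = lo + (hi - lo) / 2;
        uint64_t count = 0;
        for (size_t i = 0; i < len; i++)      /* one pass over the input */
            if (a[i] >= lo && a[i] <= mid)
                count++;
        if (count < mid - lo + 1)             /* the lower half is the short one */
            hi = mid;
        else
            lo = mid + 1;
    }
    return (uint32_t)lo;
}

int main(void) {
    uint32_t nums[] = {6, 2, 4, 5, 7, 1, 8};  /* 1..8 with 3 missing */
    printf("%u\n", find_missing(nums, 7, 8)); /* prints 3 */
    return 0;
}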
If you do not have XOR, then of course you can do the same with an ordinary "unchecked" sum, that is, a sum of 32-bit integers with "wrap around" (no "overflow checking", sometimes known as an unchecked context).
This is addition modulo 2^32. I will consider the "unsigned" case. If your 32-bit int uses two's complement, it is just the same. (To a mathematician, two's complement is still just addition (and multiplication) modulo 2^32; we only pick a different canonical representative for each equivalence class modulo 2^32.)
If we had had all the non-zero 32-bit integers, we would have:
1 + 2 + 3 + … + 4294967295 ≡ 2147483648
One way of realizing this is to take the first and the last term together: they give zero (modulo 2^32). Then the second term (2) and the second-to-last term (4294967294) also give zero. Thus all terms cancel except the middle one (2147483648), which is then equal to the sum.
From this equality, imagine you subtract one of the numbers (call it x) on both sides of the ≡ symbol. You then see that you can find the missing number by starting from 2147483648 and subtracting (still unchecked) all of the numbers you are given. You end up with the missing number:
missingNumber ≡ 2147483648 - x_1 - x_2 - x_3 - … - x_4294967294
Of course, this is the same as moonshadow's solution, just carried out in the ring of integers modulo 2^32.
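A small C sketch of that unchecked sum (mine; unsigned arithmetic in C wraps modulo 2^32, which is exactly the "unchecked" addition described):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* 1 + 2 + ... + (2^32 - 1) is congruent to 2147483648 modulo 2^32;
       subtract every number actually present (all arithmetic wraps mod 2^32). */
    uint32_t missing = UINT32_C(2147483648);
    uint32_t x;
    while (scanf("%" SCNu32, &x) == 1)
        missing -= x;
    printf("%" PRIu32 "\n", missing);
    return 0;
}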
The elegant XOR solution (sykora's answer) can also be written in the same way, with XOR acting as both + and - at the same time. That is, if we had all the non-zero 32-bit integers, then
1 XOR 2 XOR 3 XOR … XOR 4294967295 ≡ 0
and then you XOR with the missing number x on both sides of the ≡ symbol to see that:
missingNumber ≡ x_1 XOR x_2 XOR x_3 XOR … XOR x_4294967294