Understanding two's complement - algorithm

I'm trying to understand two's complement:
Does two's complement mean that this number is invalid:
1000
Does two's complement disallow the use of the most significant bit for positive numbers? I.e., could
1000
ever represent 2^3? Or would it represent -0?
I'm also confused about why you need to add 1 to a one's complement.

In two's complement, the MSB (most significant bit) is set to one for negatives. To multiply a two's complement number by -1, you do the following:
invert all the bits;
add one to the result.
For example:
the number 10010: after inverting you get 01101, and after adding one you get 01110, which is 14. That means that 10010 is negative 14.
the number 1000: after inverting you get 0111, and after adding one you get 1000 again. So 1000 is negative 8, the one four-bit value whose negation overflows back to itself (there is no -0 in two's complement).
Now, to your last question: no. If you work with two's complement, you can't use the MSB for positive numbers. But you could declare that you are not using two's complement and use higher positive numbers.
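A quick Python sketch of that negation at a fixed width (the width constant and helper name are mine, for illustration):

BITS = 5
MASK = (1 << BITS) - 1               # 0b11111

def negate(x):
    # two's complement negation: invert all bits, then add one,
    # staying within BITS bits
    return ((x ^ MASK) + 1) & MASK

print(bin(negate(0b10010)))          # 0b1110 -> 14, so 10010 is -14
print(bin(negate(0b01110)))          # 0b10010 -> back to the pattern for -14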

Two's complement is based on two requirements:
numbers are represented by a fixed number of bits;
x + -x = 0.
Assuming a four bit representation, say, we have
0 + -0 = 0000 + -0000 (base 2) = 0000 => -0000 = 0000
1 + -1 = 0001 + -0001 (base 2) = 0000 => -0001 = 1111 (carry falls off the end)
Now we have our building blocks; a drop of induction will show you that the "flip the bits and add 1" algorithm is exactly what you need to convert a positive number to its two's complement negative representation.
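If you would rather see the identity hold than prove it, here is a brute-force Python check over every four-bit pattern (my own snippet, assuming four bits):

BITS = 4
MASK = (1 << BITS) - 1

for x in range(1 << BITS):
    neg = (~x + 1) & MASK            # flip the bits and add 1
    assert (x + neg) & MASK == 0     # x + -x = 0 once the carry falls off

print('x + -x == 0 for every', BITS, 'bit pattern')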

2's complement is mostly a matter of how you interpret the value; most math* doesn't care whether you view a number as signed or not. If you're working with 4 bits, 1000 is 8 as well as -8. This "odd symmetry" arises because adding it to a number is the same as xoring it with that number (only the high bit is set, so there is no carry into any other bit). It also arises from the definition of two's complement: negation maps this number to itself.
In general, any number k represents the set of numbers { a | a ≡ k (mod n) }, where n is 2 to the power of how many bits you're working with. This perhaps somewhat odd effect is a direct result of using modular arithmetic and is true whether you view the number as signed or unsigned. The only difference between the signed and unsigned interpretations is which number you take to be the representative of such a set. For unsigned, the representative is the unique such a with 0 <= a < n. For signed numbers, the representative is the unique such a with -(n/2) <= a <= (n/2)-1.
As for why you need to add one, the goal of negation is to find an x' such that x' + x = 0. If you only complemented the bits in x but didn't add one, x' + x would not have carries at any position and just sum to "all ones". "All ones" plus 1 is zero, so adding one fixes x' so that the sum will go to zero. Alternatively (well it's not really an alternative), you can take ~(x - 1), which gives the same result as ~x + 1.
*Signedness affects the result of division, right shift, and the high half of multiplication (which is rarely used and, in many programming languages, unavailable anyway).
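To make the interpretation point concrete, here is a small Python sketch (the constants and helper names are mine):

BITS = 4
N = 1 << BITS                    # n = 2**BITS

def unsigned_rep(k):
    # representative of k's residue class in [0, n)
    return k % N

def signed_rep(k):
    # representative of k's residue class in [-(n/2), n/2 - 1]
    r = k % N
    return r - N if r >= N // 2 else r

print(unsigned_rep(0b1000), signed_rep(0b1000))   # 8 -8: same bits, two views
# addition doesn't care which view you take: 9 and -7 are the same class
print(unsigned_rep(7 + 9), signed_rep(7 + (-7)))  # 0 0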

It depends on how many bits you use to represent numbers.
The leftmost (largest) bit has a value of -(2**(N-1)), or in this case, -8. (N being the number of bits.) Subsequent bits have their normal values.
So
1000
is -8
1111
is -1
0111
is 7.
However, if you have 8 bits these become different values!
0000 1000
is positive 8. Only the leftmost bit adds a negative value to the answer.
In either case, the range of numbers is from
1000....0
for -2**(N-1) with N bits
to
0111....1
which is 2**(N-1) - 1. (This is just normal base 2, since the leftmost bit is 0.)
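As a sketch of that weighting rule in Python (my own helper, taking the number as a bit string):

def value(bits):
    # two's complement value: the leftmost bit weighs -2**(N-1),
    # the remaining bits keep their normal positive weights
    n = len(bits)
    msb = -int(bits[0]) * 2 ** (n - 1)
    rest = int(bits[1:], 2) if n > 1 else 0
    return msb + rest

print(value('1000'), value('1111'), value('0111'), value('00001000'))
# -8 -1 7 8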


Binary problem: how to find integers B that conform to integer A

I want to write an algorithm (in Python) that gets all the integers that conform to another integer B, written in binary.
A conforms to B when, in all positions where B has bits set to 1, A has the corresponding bits set to 1.
For example:
If we have 1001, the conforming numbers are: 1111, 1011, 1101.
We can assume that the solution should work with very large numbers (so it has to be quite efficient).
I have thought about various binary operations but I cannot get a complete solution.
Do you have any idea?
As shown in your example:
An integer with z zero bits has 2**z conforming integers. We can subtract one, because one of these is the integer itself.
Accordingly, your algorithm has to count from 1 to 2**z - 1 and replace the z zero bits in the original integer with the z bits of your counter.
In python, you can use bitwise operators to test or change bit positions within an integer.
Examples for bitwise operations:
x & 1 returns 1, if the least-significant bit is set. Otherwise 0
x = x | 4 will set the 3rd bit corresponding to 4
Sketch of your algorithm:
1. Loop through the integer to find and count the zero bits
2. Loop from 1 to 2**z
Inner loop: Scan the z bits of the counter
Transfer the bits to a copy of the original integer
Record/output the resulting conformant integer
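One way to realize this sketch in Python is the standard submask-enumeration trick, which walks the free (zero) positions directly instead of scanning bits in an inner loop. The function name and width parameter are my own; note it also yields the integer itself, which you can drop per the remark above:

def conforming(a, width):
    # yield every width-bit integer whose 1-bits include all of a's 1-bits
    free = ((1 << width) - 1) & ~a     # the z zero positions of a
    s = 0
    while True:
        yield a | s                    # a with some of its zero bits filled in
        if s == free:
            break
        s = (s - free) & free          # next subset of 'free', in increasing order

print([bin(x) for x in conforming(0b1001, 4)])
# ['0b1001', '0b1011', '0b1101', '0b1111']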

How to perform a bitwise round up to even numbers if odd?

How to round up to an even number if the number is odd, using bitwise operations only (shifts, and, or, xor, ...), including for negative numbers?
Example:
Input: 3; Output: 4
Input: 4; Output: 4
Input: 5; Output: 6
Input: 6; Output: 6
Input: -14; Output: -14
Input: -15; Output: -14
What I tried: This works so far, but it seems to be kinda redundant?
(((n + 1) >> 1) << 1)
Is there a shorter solution?
A solution is to add the least significant bit to the number:
n+(n&1)
If n is even, its LSB is 0, and the number is unchanged, as expected.
If n is odd, its LSB is 1 and n will be changed to the even number immediately above it.
This is based on arithmetic operations and will work for both positive and negative numbers.
It does not even rely on the fact that numbers are coded in two's complement. The only real assumption is that even numbers have an LSB of 0 and odd numbers an LSB of 1. If n is coded in an unusual way, this method should still work, provided this assumption is verified. For instance, with numbers coded in sign-magnitude or in excess code (with an even excess).
Your method, while correct on most computers, realizes ((n+1)÷2)×2 by means of right and left shifts. But the C and C++ standards leave (for the moment) the meaning of right shifts on signed integers implementation-defined, and your code may break for negative numbers on some unusual architectures/compilers.
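A quick check of the n + (n & 1) trick in Python, where integers behave like infinitely wide two's complement values (my snippet):

def round_up_to_even(n):
    # add the least significant bit: even numbers (LSB 0) are unchanged,
    # odd numbers (LSB 1) move up to the next even number
    return n + (n & 1)

for n in (3, 4, 5, 6, -14, -15):
    print(n, '->', round_up_to_even(n))
# 3 -> 4, 4 -> 4, 5 -> 6, 6 -> 6, -14 -> -14, -15 -> -14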

Having a hard time using Booth's algorithm to multiply two negative numbers

Both numbers are in two's complement form
1101*1100
This is my work, but I am getting an answer that is way off. I'm not sure if it's the adding part or the shifting; I know the criteria for when to shift and when not to, so I guess it is the adding, but I'm not sure what's going wrong.
Your error is in the third step when you do A-M then shift. In your calculations you are doing A+M, not A-M. Regardless of whether M is positive or negative, when you subtract, you take the 2's complement and then add. In essence, you need to do A + (-M). Taking the 2's complement of M gives 0101, so:
If A = 0000, M=1011:
A+M : 0000(0) + (1011)(-5) = 1011 (-5)
A-M : 0000(0) - (1011)(-5) = 0000(0) + 0101(5) = 0101 (5)
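To see the whole procedure end to end, here is a Python sketch of Booth's algorithm (my own illustration; the register names A, Q, and q0 follow the common textbook layout, and everything is masked to a fixed width):

def booth_multiply(m, r, bits=4):
    # multiply two signed bits-wide integers with Booth's algorithm
    mask = (1 << bits) - 1
    A, Q, q0 = 0, r & mask, 0            # accumulator, multiplier, extra bit
    M, negM = m & mask, (-m) & mask      # multiplicand and its 2's complement
    for _ in range(bits):
        pair = ((Q & 1) << 1) | q0       # current low bit of Q and the bit below it
        if pair == 0b10:                 # 1 -> 0 boundary: A = A - M (add -M)
            A = (A + negM) & mask
        elif pair == 0b01:               # 0 -> 1 boundary: A = A + M
            A = (A + M) & mask
        # arithmetic right shift of the combined register A:Q:q0
        q0 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = (A >> 1) | (A & (1 << (bits - 1)))   # keep the sign bit
    product = (A << bits) | Q
    if product & (1 << (2 * bits - 1)):  # read the 2*bits-wide result as signed
        product -= 1 << (2 * bits)
    return product

print(booth_multiply(-3, -4))            # 1101 * 1100 -> 12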
Here is my approach. If you are considering (-9)*(-4), you can decode a negative-looking result by reversing the two's complement, i.e.
first subtract 1,
then complement the result.
Say your result is 11101100.
step 1: 11101100 - 1 = 11101011
step 2: invert 11101011, and the result is 00010100, which is 20 in decimal.
So 11101100 is the two's complement encoding of -20. Since (-9)*(-4) should give +36 (00100100), decoding -20 tells you the multiplication itself went wrong somewhere. The above is reverse two's complement: subtract-then-invert is simply the usual invert-then-add-one run backwards, so it is a valid way to read back any result. Try it on a different problem and it works.
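A minimal check of that subtract-then-invert decoding in Python (my snippet, assuming 8-bit patterns):

BITS = 8
MASK = (1 << BITS) - 1

x = 0b11101100
magnitude = (~(x - 1)) & MASK      # subtract 1, then invert, within 8 bits
print(magnitude)                   # 20, so 11101100 encodes -20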

Two's complement detect overflow with carries

If I add two signed binary numbers with the two's complement method, why does it automatically mean that overflow has occurred if the carry into the MSB (the sign) and the carry out are not the same?
Let's get one fact out of the way first.
When we add a negative and a positive operand, the result will always be in the range of representation. Overflows occur when we add two numbers with the same sign (both positive or both negative) and the result has the opposite sign.
When we add numbers in two's complement, we add the sign bit of the first operand to the sign bit of the second operand.
When we add positive + positive operands, the sum of the sign bits is 0.
0XXX (positive)
+ 0XXX (positive)
------
0XXX (positive)
That means that, no matter what happens, there will NEVER be a carry-out when adding positive + positive operands.
So, if there is a carry into the sign bits
1
0XXX (positive)
+ 0XXX (positive)
------
1XXX (negative)
that carry-in bit 1 will become the sign of the result. That means that we added two positive operands and got a negative number as a result.
Carry-out = 0
Carry into sign bit = 1
OVERFLOW!
When we add negative + negative operands, the sum of the sign bits is 0 with a carry-out.
1XXX (negative)
+ 1XXX (negative)
------
10XXX (positive)
That means that, no matter what happens, there will ALWAYS be a carry-out when adding negative + negative operands. Note that by "default" the result would look positive. It "needs" a carry into the sign bit to adjust the result to the same sign as the operands. If there is a carry into the sign bit, we get two negative operands with a negative result.
So, if there is no carry into the sign bits
0
1XXX (negative)
+ 1XXX (negative)
------
10XXX (positive)
that carry-in bit 0 will become the sign of the result. That means that we added two negative operands and got a positive number as a result.
Carry-out = 1
Carry into sign bit = 0
OVERFLOW!
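The rule is easy to check mechanically. Below is a Python sketch (the function and names are mine) that compares the carry into the sign bit with the carry out of it:

def add_with_overflow(a, b, bits=4):
    # add two bits-wide two's complement values; overflow occurs exactly
    # when the carry into the sign bit differs from the carry out of it
    mask = (1 << bits) - 1
    a &= mask
    b &= mask
    total = a + b
    result = total & mask
    carry_out = (total >> bits) & 1
    low = (1 << (bits - 1)) - 1          # everything below the sign position
    carry_in = (((a & low) + (b & low)) >> (bits - 1)) & 1
    return result, carry_in != carry_out

print(add_with_overflow(0b0111, 0b0001))  # (8, True): 7 + 1 overflows to 1000
print(add_with_overflow(0b1000, 0b1000))  # (0, True): -8 + -8 overflows
print(add_with_overflow(0b0111, 0b1001))  # (0, False): 7 + -7 = 0, no overflow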

Generating strongly biased random numbers for tests

I want to run tests with randomized inputs and need to generate 'sensible' random
numbers, that is, numbers that match well enough to pass the tested function's
preconditions, but hopefully wreak havoc deeper inside its code.
math.random() (I'm using Lua) produces uniformly distributed random
numbers. Scaling these up will give far more big numbers than small numbers,
and there will be very few integers.
I would like to skew the random numbers (or generate new ones using the old
function as a randomness source) in a way that strongly favors 'simple' numbers,
but will still cover the whole range, i.e., extending up to positive/negative infinity
(or ±1e309 for double). This means:
numbers up to, say, ten should be most common,
integers should be more common than fractions,
numbers ending in 0.5 should be the most common fractions,
followed by 0.25 and 0.75; then 0.125,
and so on.
A different description: Fix a base probability x such that the probabilities
will sum to one and define the probability of a number n as x^k,
where k is the generation in which n is constructed as a surreal
number¹. That assigns x to 0, x^2 to -1 and +1,
x^3 to -2, -1/2, +1/2 and +2, and so on. This
gives a nice description of something close to what I want (it skews a bit too
much), but is near-unusable for computing random numbers. The resulting
distribution is nowhere continuous (it's fractal!), I'm not sure how to
determine the base probability x (I think for infinite precision it would be
zero), and computing numbers based on this by iteration is awfully
slow (spending near-infinite time to construct large numbers).
Does anyone know of a simple approximation that, given a uniformly distributed
randomness source, produces random numbers very roughly distributed as
described above?
I would like to run thousands of randomized tests; quantity/speed is more
important than quality. Still, better numbers mean fewer inputs get rejected.
Lua has a JIT, so performance is usually not much of an issue. However, jumps based
on randomness will break every prediction, and many calls to math.random()
will be slow, too. This means a closed formula will be better than an
iterative or recursive one.
¹ Wikipedia has an article on surreal numbers, with
a nice picture. A surreal number is a pair of two surreal
numbers, i.e. x := {n|m}, and its value is the number in the middle of the
pair, i.e. (for finite numbers) {n|m} = (n+m)/2 (as a rational). If the right side
of the pair is empty, that's interpreted as incrementing by one (decrementing, if
the left side is empty). If both sides are empty, that's zero. Initially, there are
no numbers, so the only number one can build is 0 := { | }. In generation
two one can build the numbers {0| } =: 1 and { |0} =: -1, in three we get
{1| } =: 2, { |-1} =: -2, {0|1} =: 1/2 and {-1|0} =: -1/2 (plus some
more complex representations of known numbers, e.g. {-1|1} = 0). Note that
e.g. 1/3 is never generated by finite numbers because it is an infinite
fraction – the same goes for floats, where 1/3 is never represented exactly.
How's this for an algorithm?
Generate a random float in (0, 1) with a library function.
Generate a random integral roundoff point according to a desired probability density function (e.g. 0 with probability 0.5, 1 with probability 0.25, 2 with probability 0.125, ...).
'Round' the float at that roundoff point (e.g. floor(float_val * 2^roundoff + 0.5) / 2^roundoff).
Generate a random integral exponent according to another PDF (e.g. 0, 1, 2, 3 with probability 0.1 each, and decreasing thereafter).
Multiply the rounded float by 2^exponent.
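A rough Python rendering of those steps (the question uses Lua; the function name and the geometric choices for the two PDFs are my own):

import math, random

def biased_random():
    # 1. uniform fraction in [0, 1)
    x = random.random()
    # 2. geometric roundoff point: 0 w.p. 1/2, 1 w.p. 1/4, 2 w.p. 1/8, ...
    roundoff = 0
    while random.random() < 0.5:
        roundoff += 1
    # 3. round to a multiple of 2**-roundoff
    scale = 1 << roundoff
    x = math.floor(x * scale + 0.5) / scale
    # 4. geometric exponent and a random sign
    exponent = 0
    while random.random() < 0.5:
        exponent += 1
    sign = -1 if random.random() < 0.5 else 1
    # 5. scale by a power of two
    return sign * x * (2 ** exponent)

print([biased_random() for _ in range(5)])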
For a surreal-like decimal expansion, you need a random binary number.
Even bits tell you whether to stop or continue, odd bits tell you whether to go right or left on the tree:
> 0... => 0.0 [50%] Stop
> 100... => -0.5 [<12.5%] Go, Left, Stop
> 110... => 0.5 [<12.5%] Go, Right, Stop
> 11100... => 0.25 [<3.125%] Go, Right, Go, Left, Stop
> 11110... => 0.75 [<3.125%] Go, Right, Go, Right, Stop
> 1110100... => 0.125
> 1110110... => 0.375
> 1111100... => 0.625
> 1111110... => 0.875
One way to quickly generate a random binary number is to look at the decimal digits of math.random() and replace 0-4 with '0' and 5-9 with '1':
0.8430419054348022
becomes
1000001010001011
which becomes -0.5
0.5513009827118367
becomes
1100001101001011
which becomes 0.5 (its first bits are 110: Go, Right, Stop)
etc
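In Python, the decoding walk might look like this (my sketch of the scheme above; bits is a string such as '11100'):

def decode_surreal_bits(bits):
    # even positions: 1 = continue, 0 = stop; odd positions: 1 = right, 0 = left
    value, step = 0.0, 0.5
    i = 0
    while i + 1 < len(bits) and bits[i] == '1':
        value += step if bits[i + 1] == '1' else -step
        step /= 2
        i += 2
    return value

print(decode_surreal_bits('100'))    # -0.5
print(decode_surreal_bits('11100'))  # 0.25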
Haven't done much lua programming, but in Javascript you can do:
Math.random().toString().substring(2).split("").map(
function(digit) { return digit >= "5" ? 1 : 0 }
);
or true binary expansion:
Math.random().toString(2).substring(2)
Not sure which is more genuinely "random" -- you'll need to test it.
You could generate surreal numbers in this way, but most of the results will be decimals in the form a/2^b, with relatively few integers. On Day 3, only 2 integers are produced (-3 and 3) vs. 6 decimals, on Day 4 it is 2 vs. 14, and on Day n it is 2 vs (2^n-2).
If you add two uniform random numbers from math.random(), you get a "triangle"-shaped distribution (linearly decreasing from the center). Adding 3 or more and subtracting the mean gives a more 'bell curve'-like distribution centered around 0:
math.random() + math.random() + math.random() - 1.5
Dividing by a random number will get a truly wild number:
A/(math.random()+1e-300)
This will return results between A and (theoretically) A*1e+300,
though my tests show that 50% of the time the results are between A and 2*A
and about 75% of the time between A and 4*A.
Putting them together, we get:
round(6*(math.random()+math.random()+math.random() - 1.5)/(math.random()+1e-300))
This has over 70% of the numbers returned between -9 and 9, with a few big numbers popping up rarely.
Note that the average and sum of this distribution will tend to diverge towards a large negative or positive number, because the more times you run it, the more likely a small denominator is to make the result "blow up" to a large number such as 147,967 or -194,137.
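A rough Python port of that expression, if you want to measure the spread yourself (my code; the original sample is in Lua):

import random

def wild():
    # bell-shaped numerator divided by a near-uniform denominator, as above
    bell = random.random() + random.random() + random.random() - 1.5
    return round(6 * bell / (random.random() + 1e-300))

sample = [wild() for _ in range(10000)]
within = sum(1 for v in sample if -9 <= v <= 9) / len(sample)
print('fraction in [-9, 9]:', within)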
See gist for sample code.
Josh
You can immediately calculate the nth born surreal number.
For example, the 1000th surreal number is:
convert to binary:
1000 dec = 1111101000 bin
1's become pluses and 0's minuses:
1111101000
+++++-+---
The first '1' bit has value 0; the following run of identical bits is worth +1 each (for 1's) or -1 each (for 0's); after that run, each subsequent bit is worth 1/2, 1/4, 1/8, etc., added for a 1 and subtracted for a 0.
bit:    1  1  1  1  1    0    1    0     0     0
sign:   +  +  +  +  +    -    +    -     -     -
value:  0  1  1  1  1   1/2  1/4  1/8  1/16  1/32
0 + 1 + 1 + 1 + 1 - 1/2 + 1/4 - 1/8 - 1/16 - 1/32
= 3 + 17/32
= 113/32
= 3.53125
The binary length in bits of this representation is equal to the day on which that number was born.
Left and right numbers of a surreal number are the binary representation with its tail stripped back to the last 0 or 1 respectively.
Surreal numbers are evenly distributed between -1 and 1, where half of the numbers created up to a particular day live. A quarter of the numbers are evenly distributed across -2 to -1 and 1 to 2, and so on. The maximum range reaches the negative and positive integers matching the number of days you provide. The numbers grow towards infinity slowly, because each day extends the negative and positive range by only one, while each day contains twice as many numbers as the last.
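Here is a Python sketch of that calculation (my own reading of the rule above, using exact fractions):

from fractions import Fraction

def nth_surreal(n):
    # value of the nth-born surreal number, per the bit rule above
    bits = bin(n)[2:]                  # e.g. 1000 -> '1111101000'
    value = Fraction(0)
    if len(bits) == 1:
        return value                   # n = 1 is 0, born on day 1
    i, run_bit = 1, bits[1]            # the first bit itself counts 0
    while i < len(bits) and bits[i] == run_bit:
        value += 1 if run_bit == '1' else -1   # run of identical bits: +-1 each
        i += 1
    step = Fraction(1)
    while i < len(bits):               # remaining bits: +-1/2, +-1/4, ...
        step /= 2
        value += step if bits[i] == '1' else -step
        i += 1
    return value

print(nth_surreal(1000))               # 113/32, i.e. 3.53125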
Edit:
A good name for this bit representation is "sinary".
Negative numbers are transpositions, e.g.:
100010101001101s -> negative number (always starts 10...)
111101010110010s -> positive number (always starts 01...)
and we notice that all bits flip except the first one, which is a transposition.
NaN is => 0s (since all other numbers start with 1), which makes it ideal for representation in bit registers in a computer, since leading zeros are required (we don't make ternary computers anymore... too bad).
All Conway surreal algebra can be done on these numbers without needing to convert to binary or decimal.
The sinary format can be seen as a one plus a simple one's counter with a 2's complement decimal representation attached.
Here is an incomplete report on finary (similar to sinary): https://github.com/peawormsworth/tools/blob/master/finary/Fine%20binary.ipynb
