Effects of the negative bit

How does the negative bit work?
Why does the negative bit allow negative numbers to have an absolute value 1 greater than positive ones (two's complement)?
Why isn't this bit simply read as 2^c, where c is the number of bits?
I just don't understand; can someone help me?

Everything that is stored in computers is just a bunch of bits. It is the conventions established by humans that attribute meaning to those bits. For example, 01000001 may represent an A according to the ASCII standard.
As another example, 10100100 may be interpreted as ¤ (the generic currency sign) in ISO-8859-1 or as € (the Euro sign) in ISO-8859-15.
Analogously, the first bit of a number may be interpreted as a negative-sign bit if those bits are supposed to store a signed number in two's complement form. We could choose to treat 10100100 as either an unsigned byte (one hundred sixty-four) or as a signed byte (negative ninety-two).
Specifically, interpreting 10100100 as an unsigned number is straightforward:
1 * 2^7 + 0 * 2^6 + 1 * 2^5 + 0 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 = 164
To interpret 10100100 as a signed number in two's complement form, note that by convention, the first bit indicates that the number is negative, so the following process kicks in:
Invert the bits, to 01011011.
0 * 2^7 + 1 * 2^6 + 0 * 2^5 + 1 * 2^4 + 1 * 2^3 + 0 * 2^2 + 1 * 2^1 + 1 * 2^0 = 91
Negate and subtract one: -91 - 1 = -92.
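As a quick Python check of the two readings (using the standard library's int.from_bytes, which applies exactly the convention described above):

raw = bytes([0b10100100])
print(int.from_bytes(raw, "big", signed=False))  # 164, plain binary
print(int.from_bytes(raw, "big", signed=True))   # -92, two's complement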

From Wikipedia:
With two's complement, you do not have two representations of 0.
Say, for example, you have a signed integer coded on 3 bits:
000 => 0
001 => 1
010 => 2
011 => 3
100 => -4
101 => -3
110 => -2
111 => -1
With ones' complement, 000 and 111 would both represent 0, and your bounds would be -3 to 3:
000 => 0
001 => 1
010 => 2
011 => 3
100 => -3
101 => -2
110 => -1
111 => -0 (which is actually 0)
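If it helps, here is a small Python sketch (my own, not from the answer above) that reproduces both 3-bit tables:

for b in range(8):
    twos = b - 8 if b & 0b100 else b  # two's complement value
    ones = b - 7 if b & 0b100 else b  # ones' complement value (-0 prints as 0)
    print(format(b, '03b'), twos, ones)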


Represent integers on d digits using smallest possible base

I'd like to create a function where, for an arbitrary integer input value (let's say unsigned 32-bit) and a given number of digits d, the return value is a d-digit base-B number, B being the smallest base that can represent the given input in d digits.
Here is a sample input/output of what I have in mind for 3 digits:
Input Output
0 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
8 0 0 2
9 0 1 2
10 1 0 2
11 1 1 2
12 0 2 0
13 0 2 1
14 1 2 0
15 1 2 1
16 2 0 0
17 2 0 1
18 2 1 0
19 2 1 1
20 0 2 2
21 1 2 2
22 2 0 2
23 2 1 2
24 2 2 0
25 2 2 1
26 2 2 2
27 0 0 3
28 0 1 3
29 1 0 3
30 1 1 3
.. .....
The assignment should be 1:1; for each input value there should be exactly one, unique output value. Think of it as if the function should return the nth value from the list of strangely sorted base-B numbers.
Actually, this is the only approach I have come up with so far: given an input value, generate all the numbers in the smallest base B that can represent the input in d digits, then apply a custom sorting to the results ('penalizing' the higher digit values and putting them further back in the sort), and return the nth value from the sorted array. This would work, but it is a spectacularly inefficient implementation, and I'd like to do this without generating all the numbers up to the input value.
What would be an efficient approach for implementing this function? Any language or pseudocode is fine.
MBo's answer shows how to find the smallest base that will represent an integer number with a given number of digits.
I'm not quite sure about the ordering in your example. My answer is based on a different ordering: create all possible n-digit numbers up to base b (e.g. all numbers up to 999 for max. base 10 and 3 digits), and sort them according to their maximum digit first. Numbers are sorted normally within a group with the same maximum digit. This retains the characteristic that all values from 8 to 26 must be base 3, but the internal ordering is different:
8 0 0 2
9 0 1 2
10 0 2 0
11 0 2 1
12 0 2 2
13 1 0 2
14 1 1 2
15 1 2 0
16 1 2 1
17 1 2 2
18 2 0 0
19 2 0 1
20 2 0 2
21 2 1 0
22 2 1 1
23 2 1 2
24 2 2 0
25 2 2 1
26 2 2 2
When your base is two, life is easy: Just generate the appropriate binary number.
For other bases, let's look at the first digit. In the example above, five numbers start with 0, five start with 1 and nine start with 2. When the first digit is 2, the maximum digit is assured to be 2. Therefore, we can combine it with any of the 9 2-digit numbers of base 3.
When the first digit is smaller than the maximum digit of the group, we can still combine it with the 9 2-digit numbers of base 3, but we must not use the 4 2-digit numbers that already occur as 2-digit numbers of base 2. That gives us five possibilities for the digits 0 and 1. These possibilities – 02, 12, 20, 21 and 22 – can be described as the unique numbers with two digits according to the same scheme, but with an offset:
4 0 2
5 1 2
6 2 0
7 2 1
8 2 2
That leads to a recursive solution:
for one digit, just return the number itself;
for base two, return the straightforward representation in base 2;
if the first digit is the maximum digit for the determined base, combine it with a straightforward representation in that base;
otherwise combine it with a recursively determined representation of the same algorithm with one fewer digit.
Here's an example in Python. The representation is returned as a list of numbers, so that you can represent 2^32 − 1 as [307, 1290, 990].
def repres(x, ndigit, base):
    """Straightforward representation of x in the given base, most significant digit first."""
    s = []
    while ndigit:
        s = [x % base] + s   # prepend, so the digits come out in display order
        x //= base
        ndigit -= 1
    return s

def encode(x, ndigit):
    """Encode according to min-base, fixed-digit order."""
    if ndigit <= 1:
        return [x]
    base = int(x ** (1.0 / ndigit)) + 1   # smallest base that fits x in ndigit digits
    if base <= 2:
        return repres(x, ndigit, 2)
    x0 = (base - 1) ** ndigit             # first value that needs this base
    nprev = (base - 1) ** (ndigit - 1)
    ncurr = base ** (ndigit - 1)
    ndiff = ncurr - nprev
    area = (x - x0) // ndiff              # which leading-digit group x falls into
    if area < base - 1:
        xx = x0 // (base - 1) + x - x0 - area * ndiff
        return [area] + encode(xx, ndigit - 1)
    xx0 = x0 + (base - 1) * ndiff
    return [base - 1] + repres(x - xx0, ndigit - 1, base)

for x in range(32):
    r = encode(x, 3)
    print(x, r)
Assuming that all values are positive, let's do some simple math:
A d-digit number in base B can hold the value N if
B^d > N
so
B > N^(1/d)
So calculate N^(1/d), round it up (or increment it if it is exactly an integer), and you will get the smallest base B.
(Note that numerical errors might occur.)
Examples:
d=2, N=99 => 9.95 => B=10
d=2, N=100 => 10 => B=11
d=2, N=57 => 7.55 => B=8
d=2, N=33 => 5.74 => B=6
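A minimal Python sketch of this calculation (smallest_base is my own name for it), with integer checks added to guard against the numerical errors mentioned above:

def smallest_base(n, d):
    """Smallest integer base B such that n fits in d digits, i.e. B**d > n."""
    b = max(2, round(n ** (1.0 / d)))  # floating-point estimate
    while b ** d <= n:                 # estimate too small
        b += 1
    while b > 2 and (b - 1) ** d > n:  # estimate too large
        b -= 1
    return b

print(smallest_base(99, 2), smallest_base(100, 2),
      smallest_base(57, 2), smallest_base(33, 2))  # 10 11 8 6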
Delphi code
// Requires the Math unit (Ceil, Power) and SysUtils (Format).
function GetInSmallestBase(N, d: UInt32): string;
const
  Digits = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
var
  Base, i: Byte;
begin
  Base := Ceil(Power(N, 1/d) + 1.0E-12);
  if Base > 36 then
    Exit('Big number, few digits...');
  SetLength(Result, d);
  for i := d downto 1 do begin
    Result[i] := Digits[1 + N mod Base]; // Delphi strings are 1-based
    N := N div Base;
  end;
  Result := Result + Format(' : base [%d]', [Base]);
end;
begin
  Memo1.Lines.Add(GetInSmallestBase(99, 2));
  Memo1.Lines.Add(GetInSmallestBase(100, 2));
  Memo1.Lines.Add(GetInSmallestBase(987, 2));
  Memo1.Lines.Add(GetInSmallestBase(1987, 2));
  Memo1.Lines.Add(GetInSmallestBase(87654321, 6));
  Memo1.Lines.Add(GetInSmallestBase(57, 2));
  Memo1.Lines.Add(GetInSmallestBase(33, 2));
end;
The output:
99 : base [10]
91 : base [11]
UR : base [32]
Big number, few digits...
H03LL7 : base [22]
71 : base [8]
53 : base [6]

Algorithm for converting decimal fractions to negadecimal?

I would like to know how to convert fractional values (say, -.06) into negadecimal or a negative base. I know -.06 is .14 in negadecimal, because I can do the conversion the other way around, but the regular algorithm used for converting fractions into other bases doesn't work with a negative base. Don't give a code example, just explain the steps required.
The regular algorithm works like this:
You multiply the value by the base you're converting into, record the whole-number part, then keep going with the remaining fractional part until there is no fraction left:
0.337 in binary:
0.337*2 = 0.674 "0"
0.674*2 = 1.348 "1"
0.348*2 = 0.696 "0"
0.696*2 = 1.392 "1"
0.392*2 = 0.784 "0"
0.784*2 = 1.568 "1"
0.568*2 = 1.136 "1"
Approximately .0101011
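For reference, here is the regular algorithm described above as a short Python sketch (frac_to_base is a made-up name):

def frac_to_base(frac, base, max_digits):
    """Multiply-and-record conversion of a fraction 0 <= frac < 1."""
    digits = []
    for _ in range(max_digits):
        frac *= base
        whole = int(frac)   # record the whole-number part
        digits.append(whole)
        frac -= whole       # keep going with the remaining fraction
        if frac == 0:
            break
    return digits

print(frac_to_base(0.337, 2, 7))  # [0, 1, 0, 1, 0, 1, 1]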
I have a two-step algorithm for doing the conversion. I'm not sure if this is the optimal algorithm, but it works pretty well.
The basic idea is to start off by getting a decimal representation of the number, then converting that decimal representation into a negadecimal representation by handling the even powers and odd powers separately.
Here's an example that motivates the idea behind the algorithm. This is going to go into a lot of detail, but ultimately will arrive at the algorithm and at the same time show where it comes from.
Suppose we want to convert the number 0.523598734 to negadecimal (notice that I'm presupposing you can convert to decimal). Notice that
0.523598734 = 5 * 10^-1
+ 2 * 10^-2
+ 3 * 10^-3
+ 5 * 10^-4
+ 9 * 10^-5
+ 8 * 10^-6
+ 7 * 10^-7
+ 3 * 10^-8
+ 4 * 10^-9
Since 10^-n = (-10)^-n when n is even, we can rewrite this as
0.523598734 = 5 * 10^-1
+ 2 * (-10)^-2
+ 3 * 10^-3
+ 5 * (-10)^-4
+ 9 * 10^-5
+ 8 * (-10)^-6
+ 7 * 10^-7
+ 3 * (-10)^-8
+ 4 * 10^-9
Rearranging and regrouping terms gives us this:
0.523598734 = 2 * (-10)^-2
+ 5 * (-10)^-4
+ 8 * (-10)^-6
+ 3 * (-10)^-8
+ 5 * 10^-1
+ 3 * 10^-3
+ 9 * 10^-5
+ 7 * 10^-7
+ 4 * 10^-9
If we could rewrite those negative terms as powers of -10 rather than powers of 10, we'd be done. Fortunately, we can make a nice observation: if d is a nonzero digit (1, 2, ..., or 9), then
d * 10^-n + (10 - d) * 10^-n
= 10^-n (d + 10 - d)
= 10^-n (10)
= 10^{-n+1}
Restated in a different way:
d * 10^-n + (10 - d) * 10^-n = 10^{-n+1}
Therefore, we get this useful fact:
d * 10^-n = 10^{-n+1} - (10 - d) * 10^-n
If we assume that n is odd, then -10^-n = (-10)^-n and 10^{-n+1} = (-10)^{-n+1}. Therefore, for odd n, we see that
d * 10^-n = 10^{-n+1} - (10 - d) * 10^-n
= (-10)^{-n+1} + (10 - d) * (-10)^-n
Think about what this means in a negadecimal setting. We've turned a power of ten into a sum of two powers of minus ten.
Applying this to our summation gives this:
0.523598734 = 2 * (-10)^-2
+ 5 * (-10)^-4
+ 8 * (-10)^-6
+ 3 * (-10)^-8
+ 5 * 10^-1
+ 3 * 10^-3
+ 9 * 10^-5
+ 7 * 10^-7
+ 4 * 10^-9
= 2 * (-10)^-2
+ 5 * (-10)^-4
+ 8 * (-10)^-6
+ 3 * (-10)^-8
+ (-10)^0 + 5 * (-10)^-1
+ (-10)^-2 + 7 * (-10)^-3
+ (-10)^-4 + 1 * (-10)^-5
+ (-10)^-6 + 3 * (-10)^-7
+ (-10)^-8 + 6 * (-10)^-9
Regrouping gives this:
0.523598734 = (-10)^0
+ 5 * (-10)^-1
+ 2 * (-10)^-2 + (-10)^-2
+ 7 * (-10)^-3
+ 5 * (-10)^-4 + (-10)^-4
+ 1 * (-10)^-5
+ 8 * (-10)^-6 + (-10)^-6
+ 3 * (-10)^-7
+ 3 * (-10)^-8 + (-10)^-8
+ 6 * (-10)^-9
Overall, this gives a negadecimal representation of 1.537619346ND
Now, let's think about this at a negadigit level. Notice that
Digits in even-numbered positions are mostly preserved.
Digits in odd-numbered positions are flipped: any nonzero, odd-numbered digit is replaced by 10 minus that digit.
Each time an odd-numbered digit is flipped, the preceding digit is incremented.
Let's look at 0.523598734 and apply this algorithm directly. We start by flipping all of the odd-numbered digits to give their 10's complement:
0.523598734 --> 0.527518336
Next, we increment the even-numbered digits preceding all flipped odd-numbered digits:
0.523598734 --> 0.527518336 --> 1.537619346ND
This matches our earlier number, so it looks like we have the makings of an algorithm!
Things get a bit trickier, unfortunately, when we start working with decimal values involving the number 9. For example, let's take the number 0.999. Applying our algorithm, we start by flipping all the odd-numbered digits:
0.999 --> 0.191
Now, we increment all the even-numbered digits preceding a column that had a value flipped:
0.999 --> 0.191 --> 1.1(10)1
Here, the (10) indicates that the column containing a 9 overflowed to a 10. Clearly this isn't allowed, so we have to fix it.
To figure out how to fix this, it's instructive to look at how to count in negadecimal. Here's how to count from 0 to 110:
000
001
002
003
...
008
009
190
191
192
193
194
...
198
199
180
181
...
188
189
170
...
118
119
100
101
102
...
108
109
290
Fortunately, there's a really nice pattern here. The basic mechanism works like normal base-10 incrementing: increment the last digit, and if it overflows, carry a 1 into the next column, continuing to carry until everything stabilizes. The difference here is that the odd-numbered columns work in reverse. If you increment the -10s digit, for example, you actually subtract one rather than adding one, since increasing the value in that column by 10 corresponds to having one fewer -10 included in your sum. If that number underflows at 0, you reset it back to 9 (subtracting 90), then increment the next column (adding 100). In other words, the general algorithm for incrementing a negadecimal number works like this:
Start at the 1's column.
If the current column is at an even-numbered position:
Add one.
If the value reaches 10, set it to zero, then apply this procedure to the preceding column.
If the current column is at an odd-numbered position:
Subtract one.
If the value reaches -1, set it to 9, then apply this procedure to the preceding column.
You can confirm that this math works by generalizing the above reasoning about -10s digits and 100s digits and realizing that overflowing an even-numbered column corresponding to 10^k means that you need to add in 10^(k+1), which means that you need to decrement the previous column by one, and that underflowing an odd-numbered column works by subtracting out 9 · 10^k, then adding in 10^(k+1).
Let's go back to our example at hand. We're trying to convert 0.999 into negadecimal, and we've gotten to
0.999 --> 0.191 --> 1.1(10)1
To fix this, we'll take the 10's column and reset it back to 0, then carry the 1 into the previous column. That's an odd-numbered column, so we decrement it. This gives the final result:
0.999 --> 0.191 --> 1.1(10)1 --> 1.001ND
Overall, for positive numbers, we have the following algorithm for doing the conversion (a small code sketch follows the steps):
Processing digits from left to right:
If you're at an odd-numbered digit that isn't zero:
Replace the digit d with the digit 10 - d.
Using the standard negadecimal addition algorithm, increment the value in the previous column.
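Here is a rough Python sketch of this procedure for positive fractions (the function names are mine); it reproduces the 0.523598734 → 1.537619346 and 0.999 → 1.001 examples worked above:

def nd_increment(d, p):
    """Add one unit to column p of the digit list d, with negadecimal carry.
    d[0] is the units column; d[i] is the i-th digit after the point."""
    while True:
        if p % 2 == 0:        # even column: behaves like ordinary base 10
            d[p] += 1
            if d[p] < 10:
                return
            d[p] = 0
        else:                 # odd column: runs in reverse
            d[p] -= 1
            if d[p] >= 0:
                return
            d[p] = 9
        p -= 1                # carry into the preceding column

def frac_to_negadecimal(digits):
    """digits: decimal digits after the point of a number in [0, 1).
    Returns the negadecimal digits, units column first."""
    d = [0] + list(digits)
    for i in range(1, len(d)):
        if i % 2 == 1 and d[i] != 0:  # flip odd-numbered digits...
            d[i] = 10 - d[i]
            nd_increment(d, i - 1)    # ...and bump the preceding column
    return d

print(frac_to_negadecimal([5, 2, 3, 5, 9, 8, 7, 3, 4]))  # [1, 5, 3, 7, 6, 1, 9, 3, 4, 6]
print(frac_to_negadecimal([9, 9, 9]))                    # [1, 0, 0, 1]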
Of course, negative numbers are a whole other story. With negative numbers, the odd columns are correct and the even columns need to be flipped, since the parity of the (-10)^k terms in the summation flips. Consequently, for negative numbers, you apply the above algorithm but preserve the odd columns and flip the even columns. Similarly, instead of incrementing the preceding digit when doing a flip, you decrement the preceding digit.
As an example, suppose we want to convert -0.523598734 into negadecimal. Applying the algorithm gives this:
-0.523598734 --> 0.583592774 --> 0.6845(10)2874 --> 0.684402874ND
This is indeed the correct representation.
Hope this helps!
For your question, I thought about this object-oriented code, though I am not sure about it. This class takes two negadecimal numbers with an operator and creates an equation, converting those numbers to decimal.
public class NegadecimalNumber {

    private int number1;
    private char operator;
    private int number2;

    public NegadecimalNumber(int a, char op, int b) {
        this.number1 = a;
        this.operator = op;
        this.number2 = b;
    }

    // Interpret the decimal digits of a as negadecimal digits:
    // the digit at position i contributes digit * (-10)^i.
    public int ConvertNumber1(int a) {
        int value = 0;
        int place = 1;          // (-10)^0
        while (a > 0) {
            value += (a % 10) * place;
            a /= 10;
            place *= -10;       // next power of -10
        }
        return value;
    }

    public int ConvertNumber2(int b) {
        return ConvertNumber1(b);   // same digit-by-digit conversion
    }

    public double Equation() {
        double ans = 0;
        if (this.operator == '+') {
            ans = this.number1 + this.number2;
        } else if (this.operator == '-') {
            ans = this.number1 - this.number2;
        } else if (this.operator == '*') {
            ans = this.number1 * this.number2;
        } else if (this.operator == '/') {
            ans = this.number1 / this.number2;   // note: integer division
        }
        return ans;
    }
}
Note that https://en.wikipedia.org/wiki/Negative_base#To_Negative_Base tells you how to convert whole numbers to a negative base. So one way to solve the problem is simply to multiply the fraction by a high enough power of 100 to turn it into a whole number (a power of 100 is an even power of -10, so the denominator reads the same in both bases), convert, and then divide again: -0.06 = -6/100; -6 converts to 14, giving 14/100 = 0.14.
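A sketch of that route in Python (to_negabase is a made-up name); the loop is the standard divide-and-correct method for negative bases:

def to_negabase(n, b=-10):
    """Convert the integer n to the negative base b, as a digit string."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, b)
        if r < 0:      # force the remainder into 0..|b|-1
            r -= b
            n += 1
        digits.append("0123456789"[r])
    return "".join(reversed(digits))

print(to_negabase(-6))  # '14', so -0.06 becomes 14/100 = 0.14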
Another way is to realise that you are trying to create a sum of the form -a/10 + b/100 - c/1000 + d/10000 ... to approximate the target number, so you want to reduce the error as much as possible at each stage, but you need to leave an error in a direction that you can correct at the next stage. Note that this also means that a fraction might not start with 0. when converted: 0.5 => 1.5, since 1.5 here means 1 - 5/10.
So, to convert -0.06: it is negative, and the first fractional digit covers the values 0.0, -0.1 .. -0.9, so the integer part is 0, leaving -0.06 to convert. If the next digit were 0, I would still have -0.06 left, which is in the wrong direction to correct with the following digits, so I must choose a first fractional digit that produces an approximation below my target of -0.06. So I choose 1, which stands for -0.1 and leaves an error of +0.04, which I can convert exactly, giving the conversion 0.14.
So at each point, output the digit which gives you either:
1) The exact result, in which case you are finished
2) An approximation which is slightly larger than the target number, if the next digit will be negative.
3) An approximation which is slightly smaller than the target number, if the next digit will be positive.
And if you start off trying to approximate a number in the range (-1.0, 0.0] at each point you can choose a digit which keeps the remaining error small enough and in the right direction, so this always works.

How to find largest power of 2 a number is divisible by using logic functions?

How do you find the largest power of 2 a number is divisible by, using logic functions?
For example, 144 is divisible by 16, which is 2^4.
How would one do this?
I know 144 in binary is 1001 0000 and I have to use a bitwise function.
But what would I use (and, or, andn, orn?) and what could I use as my mask?
I know you have to look at the rightmost bit to tell whether a number is divisible by 2.
Any help is appreciated.
I would go with n & -n, or with n & (~n + 1) in case you are worried about running into ones' complement arithmetic, since the latter works under both representations.
E.g.,
> 144 & (~144 + 1)
< 16
Now a short explanation.
The bitwise NOT (i.e., the ~ operator) of a number n gives -(n + 1). It inverts all the bits of n. For example, the number 2 is represented by 00000010, while its negation is 11111101, which equals -3 (see the two's complement representation of signed numbers).
Do not confuse it with logical negation.
E.g., ~144 = -(144 + 1) = -145.
The bitwise AND (i.e., the & operator) compares the two inputs bit by bit and produces a result bit of 1 if both input bits are 1, and 0 otherwise.
Now the main topic.
This is an old trick that gives the highest power of 2 that n is divisible by. That is, it returns a number with a single 1 bit, specifically the bottom bit that was set in n.
For example, the binary representation of 144 is 010010000. Its bottom 1 bit is the bit in the fourth position (counting from the right, starting at position 0). Thus the highest power of 2 that divides 144 is 16 (i.e., 00010000).
144 & (~144 + 1) = 144 & -144 = 16
16 & ( ~16 + 1) = 16 & - 16 = 16
10 & ( ~10 + 1) = 10 & - 10 = 2
12 & ( ~12 + 1) = 12 & - 12 = 4
11 & ( ~11 + 1) = 11 & - 11 = 1
3 & ( ~ 3 + 1) = 3 & - 3 = 1
Note that if n is odd, it returns 1 (that is, 2^0 is the largest power of 2 dividing n).
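In Python you can check this quickly, and recover the exponent as well (bit_length is a standard library call):

n = 144
lowest = n & -n                      # 16: the isolated bottom set bit
exponent = lowest.bit_length() - 1   # 4, since 2**4 = 16
print(lowest, exponent)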
Why does it work?
The negative of n is produced by inverting its bits via ~ and then adding 1 (again, see the two's complement definition). The addition makes every low-order 1 bit (starting from the bottom) overflow until a 0 bit is encountered (let us call this bit x). There the overflow process stops, leaving the remaining bits (those above bit x) unchanged, i.e., inverted with respect to n. Thus performing & between n and its negation results in a binary string containing only bit x.
An example follows.
010010000 | +144 ~
----------|-------
101101111 | -145 +
1 |
----------|-------
101110000 | -144
101110000 | -144 &
010010000 | +144
----------|-------
000010000 | 16

Consolidate 10 bit Value into a Unique Byte

As part of an algorithm I'm writing, I need to find a way to convert a 10-bit word into a unique 8-bit word. The 10-bit word is made up of 5 pairs, where each pair can only ever equal 0, 1 or 2 (never 3). For example:
|00|10|00|01|10|
This value needs to somehow be consolidated into a single, unique byte.
As each pair can never equal 3, there are a wide range of values that this 10-bit word will never represent, which makes me think that it is possible to create an algorithm to perform this conversion. The simplest way to do this would be to use a lookup table, but it seems like a waste of resources to store ~680 values which will only be used once in my program. I've already tried to incorporate one of the pairs into the others somehow, but every attempt I've made has resulted in a non-unique value, and I'm now very quickly running out of ideas!
Any help?
The number you have is essentially base 3. You just need to convert this to base 2.
There are 5 pairs, so 3^5 = 243 numbers. And 8 bits is 2^8 = 256 numbers, so it's possible.
The simplest way to convert between bases is to go to base 10 first.
So, for your example:
00|10|00|01|10
Base 3: 02012
Base 10: 2*3^3 + 1*3^1 + 2*3^0
= 54 + 3 + 2
= 59
Base 2:
59 % 2 = 1    (59 / 2 = 29)
29 % 2 = 1    (29 / 2 = 14)
14 % 2 = 0    (14 / 2 = 7)
 7 % 2 = 1    ( 7 / 2 = 3)
 3 % 2 = 1    ( 3 / 2 = 1)
 1 % 2 = 1
Reading the remainders from bottom to top, 111011 is your number in binary.
Note that once you have 59 above stored in a 1-byte integer, you'll probably already have what you want, thus explicitly converting to base 2 might not be necessary.
What you basically have is a base 3 number and you want to convert this to a single number 0 - 255, luckily 5 digits in ternary (base 3) gives 243 combinations.
What you'll need to do is:
value = (1st x 3^4) + (2nd x 3^3) + (3rd x 3^2) + (4th x 3) + (5th)
This will give you a number 0 to 242.
You are considering how to store some information in a byte. A byte can hold at most 2^8 = 256 distinct states.
Your value has only 3^5 = 243 < 256 possible states in total, which makes the conversion possible.
Suppose your pairs are ABCDE (where each character can be 0, 1 or 2).
You can just calculate A*3^4 + B*3^3 + C*3^2 + D*3 + E as your result. The result is guaranteed to be in the range 0..242, which fits in a byte.
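A small Python sketch of this packing, together with the matching decode (the names are mine):

def pack_pairs(pairs):
    """Five pair values, each 0, 1 or 2, packed into a unique value 0..242."""
    value = 0
    for p in pairs:
        value = value * 3 + p  # accumulate base-3 digits, A first
    return value

def unpack_pairs(value):
    """Inverse of pack_pairs."""
    pairs = []
    for _ in range(5):
        pairs.append(value % 3)
        value //= 3
    return pairs[::-1]

print(pack_pairs([0, 2, 0, 1, 2]))  # 59, matching the worked example above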

What does & do in Ruby (between integers)?

I would like to know what & does in the use case:
7 & 3
=> 3
8 & 3
=> 0
Or as seen in the general use case:
Integer & Integer
=> ??
I know that array & array2 gives the intersection between the two arrays, but I am unsure of exactly what is going on here when used with integers.
& is bitwise AND which examines the two operands bit-by-bit and sets each result bit to 1 if both the corresponding input bits are 1, and 0 otherwise. You can also think of it as bit-by-bit multiplication.
111 (7)
AND 011 (3)
------------
= 011 (3)
1000 (8)
AND 0011 (3)
------------
= 0000 (0)
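You can verify those tables quickly; here it is in Python, whose integer & behaves the same as Ruby's:

for a, b in [(7, 3), (8, 3)]:
    print(format(a, '04b'), format(b, '04b'), '->', a & b)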
