How to find the minimal value of a 9-trit trinary number

I got the following question:
A trinary computer uses trits instead of bits (a trit can have values 0, 1 or 2). A trinary computer has a 9-trit floating-point representation of a number. Trit 8, the MST (Most Significant Trit), is the sign trit (1 is positive, 2 negative). Trits 7–5 contain the exponent, with bias 13 (i.e., subtract 13 from the value to get the actual exponent). Trits 4–0 contain the significand.
i. What is the minimal value that can be represented in this way?
ii. What is the minimal positive value that can be represented?
I didn't quite know how to answer those two questions.
For i. I thought to start the number with the sign trit 2 (negative) and then continue with the highest trits I could use, i.e. 222..2,
and for ii. I would change the MST to 1, set the exponent to 0, and make the rest the lowest number I could find, i.e. 10..001. But the right answers are:
i. -(2*3^13 + 2*3^12 + .. + 2*3^9) - why is the least significant trit multiplied by 3^9 and not by 3^0?
ii. 3^(-17)
Can you please point out my mistake and explain how to solve it correctly?
thanks :)
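For what it's worth, here is a small Python sketch of how that format seems to be meant to work, assuming the significand trits t4..t0 are read as t4.t3t2t1t0 (radix point right after the leading trit) and the value is sign * significand * 3^(exponent - 13). Under that assumption it reproduces both given answers:

from fractions import Fraction

def decode(trits):
    """trits[0] is trit 8 (the sign trit), trits[8] is trit 0."""
    sign = -1 if trits[0] == 2 else 1
    exponent = trits[1] * 9 + trits[2] * 3 + trits[3] - 13            # bias 13
    # assumed: significand is t4.t3t2t1t0, radix point after trit 4
    significand = sum(Fraction(t, 3 ** i) for i, t in enumerate(trits[4:]))
    return sign * significand * Fraction(3) ** exponent

# i.  most negative value: sign 2, exponent field 222 (26 - 13 = 13), significand 22222
print(decode([2, 2, 2, 2, 2, 2, 2, 2, 2]))   # -4763286 = -(2*3^13 + ... + 2*3^9)
# ii. smallest positive value: sign 1, exponent field 000 (-13), significand 00001
print(decode([1, 0, 0, 0, 0, 0, 0, 0, 1]))   # 1/129140163 = 3^(-17)

With that reading, the largest significand 22222 is 2 + 2/3 + 2/9 + 2/27 + 2/81; multiplying it by 3^13 turns its last term into 2*3^9, which is why the sum in answer i. stops at 3^9 rather than 3^0.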

Related

Modifying binary sequence

I was solving a problem and got stuck. It states: we have a binary sequence of 0s and 1s, and we can perform this operation on the sequence any number of times: for any bit d which is 0, if there is a 1 (a 1 in the original sequence, not one created by an earlier modification) on at least one of the previous two bits, i.e. bits d-1 and d-2, and on at least one of the following two bits, i.e. bits d+1 and d+2, we can change it into a 1.
However, it is impossible to modify the first two bits and the last two bits.
The weight of the sequence is the total number of 1s in the sequence divided by the length of the sequence. We need to make this weight at least 0.75, and the goal is to find the minimum number of bits that need to be modified to do so.
If we cannot make the weight at least 0.75, print -1.
E.g.
Given sequence: 100110111
Answer = 1
Explanation: We can change bit 3 from 0 to 1, so the sequence becomes 101110111, whose weight (7/9 ≈ 0.78) is greater than 0.75.
My approach:
I first found the initial weight of the sequence; if it was less than 0.75, I iterated through the sequence from bit position 2 to length-2, and for every bit that is 0 checked the condition
[ {(d-1)=1 OR (d-2)=1} AND {(d+1)=1 OR (d+2)=1} ]
I recalculated the weight at every step, and as soon as the weight exceeded 0.75 I printed the answer.
But this approach gives a wrong answer.
This is really two problems.
How many 0's have to become 1's in order to hit the weight condition?
Is that possible?
You can solve both by scanning through the string and producing three counts: how many 0's are going to wind up as 0? (x) How many 1's are there? (y) How many 0's can turn into 1's? (z)
If y+z < 3*x then the answer is -1.
Otherwise the answer is max(0, ceil((x+y+z)*0.75) - y).
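Here is a sketch of that counting approach in Python (assuming, as the problem states, that the flip rule only looks at 1s from the original string and that the first two and last two bits can never change):

def min_flips(s):
    n = len(s)
    y = s.count('1')                      # 1s already in the string
    z = 0                                 # 0s that are allowed to become 1s
    for d in range(2, n - 2):             # first/last two bits are untouchable
        if s[d] == '0' and '1' in s[d-2:d] and '1' in s[d+1:d+3]:
            z += 1
    x = n - y - z                         # 0s that can never change
    if y + z < 3 * x:                     # even flipping every candidate is not enough
        return -1
    return max(0, (3 * n + 3) // 4 - y)   # ceil(0.75 * n) ones needed, y already there

print(min_flips("100110111"))             # 1, matching the example above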

Two ways to represent 0 with bits

Let's say we want to represent a signed number with 5 bits where the first bit is used for the sign (+ or -) of the number. Then the zero can be represented by two bit representations (10000 and 00000).
How is this problem solved?
Okay. In binary a bit always has one of two values, 1 or 0,
and a value can use any number of bits, for example 1 bit up to 64 bits.
If the question is about a 5-bit string, then it is XXXXX, where each X can be either bit (1 or 0).
With the first bit as the sign bit we can have either +0 or -0 (thanks #machinery).
So if the number is positive, we put 0 in the first position, and if it is negative, we put 1 in the first position.
Four Bits
Now that we have our first bit, we are left with another 4 bits: 0XXXX or 1XXXX. Since the question asks for 0,
the remaining bits are all zero.
Therefore the answer is 00000 or 10000.
Look up how to convert decimal to binary and binary to decimal.
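As an illustration, a tiny Python sketch of 5-bit sign-and-magnitude decoding shows that the two strings from the question really do decode to the same value:

def sign_magnitude_value(bits):
    """bits is a 5-character string: the first character is the sign bit,
    the remaining four characters are the magnitude."""
    sign = -1 if bits[0] == '1' else 1
    return sign * int(bits[1:], 2)

print(sign_magnitude_value("00000"))   # 0  ("+0")
print(sign_magnitude_value("10000"))   # 0  ("-0", the same value)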

Knowing the number of bits to represent a number

I learned that in order to determine the number of bits needed to represent a number n, you take the logarithm of n, i.e. log(n) (base 2). However, I am not convinced! Look at my example:
if n=4, then I need log(4) = 2 bits to represent 4, but 4 is (100) in binary, which is clearly 3 bits!!
Can someone explain why?
Thank you.
Are you sure you aren't talking about n-bit arrangements?
With 2 bits you have 4 different sequences:
00
01
10
11
The number 4 is indeed 100 in binary, but I suspect you have mixed up those two concepts.
The most direct scheme is to take ceil(log2(N+1)), with log2 computed in floating point.
In pure integer arithmetic, a naive scheme is to repeatedly divide the number by 2 (integer division, thus truncating) until the result is zero and count the divisions (e.g. 4/2=2, 2/2=1, 1/2=0: three divisions to reach zero, thus 3 bits are needed).
More advanced schemes exist, but going down that path may hurt your performance: modern CPUs have instructions that find the position of the most significant bit set to 1, and they take very few CPU cycles.
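A quick Python sketch comparing the naive division loop, the floating-point ceil(log2(N+1)) formula, and Python's built-in int.bit_length() (which maps onto the kind of CPU instruction mentioned above):

import math

def bits_needed(n):
    """Naive integer scheme: count divisions by 2 until the number reaches zero."""
    count = 0
    while n > 0:
        n //= 2
        count += 1
    return count

for n in (1, 3, 4, 255, 256):
    print(n, bits_needed(n),
          math.ceil(math.log2(n + 1)),   # floating-point formula
          n.bit_length())                # built-in equivalent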

What is the rationale behind (x % 64) == (x & 63)? [duplicate]

Possible Duplicate:
Bitwise and in place of modulus operator
Can someone explain the rationale that makes both expressions equivalent? I know it only works because 64 is a power of two, but how can I logically or mathematically go from division to bitwise AND?
The operation x % 64 returns the remainder when x is divided by 64, which (assuming x>0) must be a number between 0 and 63. Let's look at this in binary:
63dec = 0011 1111b
64dec = 0100 0000b
You can see that the binary representation of any multiple of 64 must end with 6 zeroes. So the remainder when dividing any number by 64 is the original number, with all of the bits removed except for the 6 rightmost ones.
If you take the bitwise AND of a number with 63, the result is exactly those 6 bits.
Each time you shift right by one bit, you divide by two, because binary representation is base 2. It is the same way that removing the 3 from 123 in base 10 gives you 12, which is like dividing 123 by 10.
% is the mod operator, which means the remainder of a division. 64 is 2 to the sixth power, so dividing by 64 is like shifting out six bits. The remainder of the division is those six bits that you shifted out. You can find the value of the six bits by doing a bitwise-and with only the lower six bits set, which is 63.
The first one gives the remainder of a division.
The second one is a bitwise AND.
In the bitwise AND, 63 is 111111 in binary, so whatever is on the LHS (x) is ANDed with it, which keeps the lowest six bits of x and clears everything above them. The same thing happens with % 64 (binary 1000000): dividing by 64 shifts out the low six bits, and the remainder is exactly those bits.
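A quick Python check of the identity for a few nonnegative values (the reasoning above assumes x > 0; what % returns for negative x is language-dependent):

# x % 64 and x & 63 both keep exactly the low six bits of x.
for x in (0, 1, 63, 64, 65, 130, 1000, 2**20 + 7):
    assert x % 64 == (x & 63)
    print(f"x = {x:>9}   x % 64 = {x % 64:>2}   x & 63 = {x & 63:>2}   low six bits: {x & 63:06b}")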

Least significant bit first

While working on ruby I came across:
> "cc".unpack('b8B8')
=> ["11000110", "01100011"]
Then I tried Googling to find a good answer on "least significant bit", but could not find any.
Anyone care to explain, or point me in the right direction where I can understand the difference between "LSB first" and "MSB first".
It has to do with the direction of the bits. Notice that in this example it's unpacking two ascii "c" characters, and yet the bits are mirror images of each other. LSB means the rightmost (least-significant) bit is the first bit. MSB means the leftmost (most-significant) is the first bit.
As a simple example, consider the number 5, which in "normal" (readable) binary looks like this:
00000101
The least significant bit is the rightmost 1, because that is the 2^0 position (or just plain 1). It doesn't impact the value too much. The one next to it is the 2^1 position (or just plain 0 in this case), which is a bit more significant. The bit to its left (2^2 or just plain 4) is more significant still. So we say this is MSB notation because the most significant bit (2^7) comes first. If we change it to LSB, it simply becomes:
10100000
Easy right?
(And yes, for all you hardware gurus out there I'm aware that this changes from one architecture to another depending on endianness, but this is a simple answer for a simple question)
The term "significance" of a bit or byte only makes sense in the context of interpreting a sequence of bits or bytes as an integer. The bigger the impact of the bit or byte on the value of the resulting integer - the higher its significance. The more "significant" it is to the value.
So, for example, when we talk about a sequence of four bytes having the least significant byte first (aka little-endian), what we mean is that when we interpret those four bytes as a 32-bit integer, the first byte denotes the lowest eight binary digits of the integer, the second byte denotes the 9th through 16th binary digits, the third the 17th through 24th, and the last byte denotes the highest eight bits of the integer.
Likewise, if we say a sequence of 8 bits is in most significant bit first order, what we mean is that if we interpret the 8 bits as an 8-bit integer, the first bit in the sequence denotes the highest binary digit of the integer, the second bit the second highest, and so on, until the last bit denotes the lowest binary digit of the integer.
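For example, here is a small Python sketch using the standard struct module to pack the same 32-bit integer with the least significant byte first ('<', little-endian) and the most significant byte first ('>', big-endian):

import struct

n = 0x0A0B0C0D                     # a 32-bit integer
little = struct.pack('<I', n)      # least significant byte first
big = struct.pack('>I', n)         # most significant byte first

print(little.hex())                # 0d0c0b0a
print(big.hex())                   # 0a0b0c0d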
Another thing to think about is that one can say that the usual decimal notation has a convention that is most significant digit first. For example, a decimal number like:
1250
is read to mean:
1 x 1000 +
2 x 100 +
5 x 10 +
0 x 1
Right? Now imagine a different convention that is least significant digit first. The same number would be written:
0521
and would be read as:
0 x 1 +
5 x 10 +
2 x 100 +
1 x 1000
Another thing you should observe in passing is that in the C family of languages (most modern programming languages), the shift-left operator (<<) and shift-right operator (>>) are named with the most-significant-bit-first picture in mind: shifting a bit left increases its significance and shifting it right decreases it, so the left end is the most significant (and the left side is usually what we mean by "first", at least in the West).
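Finally, a small Python sketch that reproduces the Ruby unpack('b8B8') output for "cc" from the question: 'B' is MSB first, 'b' is LSB first, i.e. the same bits reversed:

c = ord("c")                   # 99, i.e. 0b01100011
msb_first = format(c, "08b")   # "01100011"  (like Ruby's B8)
lsb_first = msb_first[::-1]    # "11000110"  (like Ruby's b8)
print(lsb_first, msb_first)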
