Shifting bits in MIPS - byte

My professor gave an example to us:
Say your integer is 57412476, which is 0x36C0B7C. If you take out bits 19, 20, and 21 and store them in $a0 as the least significant three bits, you will get 5 in decimal in $a0.
Can someone explain to me how this works? For clarification: if register $a0 contains bits 19, 20, and 21 of the user input (call it A), the least significant bit of $a0 should contain bit 19 of A, the second least significant bit of $a0 should contain bit 20 of A, and so on.
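To make the arithmetic concrete, here is a minimal sketch of that extraction (in Python rather than MIPS, and assuming the bits are numbered from 0 at the least significant end, so that "bit 19" is the bit with weight 2^19):

A = 0x36C0B7C            # 57412476
a0 = (A >> 19) & 0b111   # shift bit 19 down to position 0, keep the low three bits
print(a0)                # 5

In MIPS terms this is a logical shift right by 19 followed by an AND with 7.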

Related

QR code generation algorithm implementation case analysis

I'm implementing a QR code generation algorithm as explained on thonky.com and I'm trying to understand one of the cases:
As stated in this page and this page, I can deduce that if the code is protected with the M error correction level and the chosen mask is No. 0, the first 5 bits of the format string (non-XORed) are '00000', and because of this the whole 15-bit string is zeros.
The next step is to remove all leading zeros, which are, again, all of them. That means there is nothing to XOR the generator polynomial string (10100110111) with, leaving a final string of 15 zeros, which means the final (XORed) string will simply be the mask string (101010000010010).
I'm seeking confirmation that my logic is right.
Thank you all very much in advance for the help.
Your logic is correct.
remove all leading zeroes
The actual process could be described as appending 10 zero bits to the 5 bits of data and treating those 15 bits as 15 single-bit coefficients of a polynomial, then dividing that polynomial by the 11-bit generator polynomial, which leaves a 10-bit remainder polynomial that is then subtracted from the (5 data bits + 10 zero bits) polynomial. Since this is binary math, add and subtract are both XOR operations, and since the 10 appended bits are zeros, the process can just append the 10 remainder bits to the 5 data bits.
As commented above, rather than actually implementing a BCH encode function, since there are only 32 possible format strings, you can just do a table lookup.
https://www.thonky.com/qr-code-tutorial/format-version-tables
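If you want to cross-check other error correction levels and masks against that table, here is a minimal Python sketch of the BCH(15,5) computation described above (the generator 10100110111 and the XOR mask 101010000010010 are the values from the tutorial; the function name is just for illustration):

GENERATOR = 0b10100110111        # 11-bit generator polynomial
FORMAT_MASK = 0b101010000010010  # fixed 15-bit XOR mask

def format_bits(data5):          # data5 = 2 EC-level bits + 3 mask-pattern bits
    rem = data5 << 10            # append 10 zero bits
    for shift in range(4, -1, -1):
        if rem & (1 << (shift + 10)):    # divide by the generator, MSB first
            rem ^= GENERATOR << shift
    return ((data5 << 10) | rem) ^ FORMAT_MASK

print(format(format_bits(0b00000), '015b'))  # M, mask 0 -> 101010000010010

For M with mask 0 the data bits are all zero, the remainder is zero, and the result is just the mask string, exactly as you reasoned.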

How can one byte be more significant than another?

The difference between little endian and big endian was explained to me like this: "In little endian the least significant byte goes into the low address, and in big endian the most significant byte goes into the low address". What exactly makes one byte more significant than the other?
In the number 222, you could regard the first 2 as most significant because it represents the value 200; the second 2 is less significant because it represents the value 20; and the third 2 is the least significant digit because it represents the value 2.
So, although the digits are the same, the magnitude of the number they represent is used to determine the significance of a digit.
It is the same as when a value is rounded to a number of significant figures ("S.F." or "SF"): 123.321 to 3SF is 123, to 2SF it is 120, to 4SF it is 123.3. That terminology has been used since before electronics were invented.
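The same weighting is easy to see numerically; here is a tiny Python illustration of the 222 example:

digits = [2, 2, 2]
weights = [d * 10 ** p for d, p in zip(digits, (2, 1, 0))]
print(weights, sum(weights))   # [200, 20, 2] 222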
In any positional numeric system, each digit has a different weight in creating the overall value of the number.
Consider the number 51354 (in base 10): the first 5 is more significant than the second 5, as it stands for 5 multiplied by 10000, while the second 5 is just 5 multiplied by 10.
In computers, numbers are generally fixed-width: for example, a 16-bit unsigned integer can be thought of as a sequence of exactly 16 binary digits, with the leftmost one being unambiguously the most significant (it is worth exactly 32768, more than any other bit in the number), and the rightmost the least significant (it is worth just one).
As long as integers are in the CPU registers we don't really need to care about their representation - the CPU will happily perform operations on them as required. But when they are saved to memory (which is generally some random-access store of bytes), they need to be represented as bytes in some way.
If we consider "normal" computers, representing a number (bigger than one byte) as a sequence of bytes means essentially representing it in base 256, each byte being a digit in base 256, and each base-256 digit being more or less significant.
Let's see an example: take the value 54321 as a 16-bit integer. If you write it in base 256, it'll be two base-256 digits: the digit 0xD4 (which is worth 0xD4 multiplied by 256) and the digit 0x31 (which is worth 0x31 multiplied by 1). It's clear that the first one is more significant than the second one, as indeed the leftmost "digit" is worth 256 times more than the one at its right.
Now, little endian machines will write in memory the least significant digit first, big endian ones will do the opposite.
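As a concrete sketch of that last point, here is the 54321 example in Python (int.to_bytes shows the byte order each convention writes to memory):

n = 54321
print(hex(n))                    # 0xd431
print(n.to_bytes(2, 'little'))   # b'1\xd4' -> 0x31 stored first, then 0xD4
print(n.to_bytes(2, 'big'))      # b'\xd41' -> 0xD4 stored first, then 0x31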
Incidentally, there's a nice relationship between binary, hexadecimal and base-256: 4 bits map straight to a hexadecimal digit, and 2 hexadecimal digits map straight to a byte. For this reason, you can also see that 54321, which in binary is
1101010000110001 = 0xD431
can be split straight into two groups of 8 bits
11010100 00110001
which are the 0xD4 and 0x31 above. So you can see as well that the most significant byte is the one that contains the most significant bits.
Here I'm using the corresponding hexadecimal values to represent each base-256 digit, as there's no good way to represent them symbolically. I could use their ASCII character values, but 0xD4 is outside ASCII, and 0x31 is the character '1', which would only add confusion.

How to check if two signed 32 bit integers cause overflow in MIPS?

I have figured out that for two unsigned integers, I can just do this:
srl  $a0, $a0, 31        # $a0 is integer 1 to be added; keep only its most significant bit
srl  $a1, $a1, 31        # $a1 is integer 2; keep only its most significant bit
add  $t0, $a0, $a1       # result in $t0
addi $t1, $0, 2
beq  $t0, $t1, overflow  # both MSBs were 1 -> unsigned overflow
What this does is shift the 32nd bit of both integers down to the first bit position and add them. If both of those bits are 1, then the result in $t0 will be 2 (01 + 01 = 10), and therefore an overflow has occurred.
However, for signed integers, a leading one signifies a negative integer. I've ruled out the possibility of an overflow when the two integers are of opposite sign, but suppose they are the same sign. Then 010 + 011 (two positive integers) gives me 101, but because this is signed, that becomes a negative integer. Would I just check whether the result has changed sign? And for two negative integers, can I just check as if they were unsigned?
So you are talking about signed overflow? If the carry into the most significant bit does not match the carry out of it, then it is a signed overflow.
a b i | o r
0 0 0 | 0 0
0 0 1 | 0 1   signed overflow
0 1 0 | 0 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 1 0
1 1 0 | 1 0   signed overflow
1 1 1 | 1 1
Here a and b are the operand bits (the MSBs of the operands), i is the carry in, o is the carry out, and r is the result bit.
Notice anything about the relationship between a, b, and r? You can determine signed overflow simply by looking at the MSBs of the two operands and the result.
This has absolutely nothing to do with MIPS; it is basic two's complement addition (and subtraction). To apply it to an instruction set, you need to isolate the MSBs of the operands and see whether they match the MSB of the result. You can use a test/AND, or a shift by 31 bits, assuming the shift pads with something known (zero rather than sign extension - or maybe sign extension helps); on an architecture with flags you could add with zero and check the Z flag. MIPS doesn't use flags, so AND or shift to isolate, then compare for equality, is a good generic solution. You can test carry in vs. carry out as well: AND both operands with 0x7FFFFFFF, add them, then shift right by 31 to isolate the carry into the MSB; isolate the two operand MSBs the same way, add those three items, and see whether the carry out is the same or different. That takes a lot more instructions.
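Here is a sketch of the MSB rule in Python, simulating 32-bit registers with a mask (the function name is just illustrative):

def add_overflows_signed32(a, b):            # a, b given as 32-bit patterns
    result = (a + b) & 0xFFFFFFFF            # wrap-around 32-bit sum
    sa, sb, sr = (a >> 31) & 1, (b >> 31) & 1, (result >> 31) & 1
    return sa == sb and sa != sr             # same-sign operands, different-sign result

print(add_overflows_signed32(0x7FFFFFFF, 0x00000001))  # True  (INT_MAX + 1)
print(add_overflows_signed32(0x80000000, 0xFFFFFFFF))  # True  (INT_MIN + -1)
print(add_overflows_signed32(0x00000005, 0xFFFFFFFD))  # False (5 + -3)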

A computer represents information in groups of 64 bits. How many different integers can be represented in BCD code?

This from my Interview-MCQ module:
A computer represents information in groups of 64 bits. How many
different integers can be represented in BCD code?
The given answer is 10^16; however, no explanation is provided. I was just wondering if somebody could help me understand the answer.
BCD is binary coded decimal. In BCD, every 4 bits is used to represent a single digit from 0 to 9. So if you have 64 bits, that gives you 64/4 = 16 decimal digits, which means you can have 10^16 different integers.
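A quick sanity check of that arithmetic in Python:

BITS, BITS_PER_DIGIT = 64, 4
digits = BITS // BITS_PER_DIGIT   # 16 decimal digits
print(digits, 10 ** digits)       # 16 10000000000000000, i.e. 10^16 values (0 .. 10^16 - 1)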

Least significant bit first

While working in Ruby I came across:
> "cc".unpack('b8B8')
=> ["11000110", "01100011"]
Then I tried Googling to find a good answer on "least significant bit", but could not find one.
Anyone care to explain, or point me in the right direction where I can understand the difference between "LSB first" and "MSB first"?
It has to do with the direction of the bits. Notice that in this example it's unpacking two ASCII "c" characters, and yet the bit strings are mirror images of each other. "LSB first" means the rightmost (least significant) bit is the first bit; "MSB first" means the leftmost (most significant) bit is the first bit.
As a simple example, consider the number 5, which in "normal" (readable) binary looks like this:
00000101
The least significant bit is the rightmost 1, because that is the 2^0 position (or just plain 1). It doesn't impact the value too much. The one next to it is the 2^1 position (or just plain 0 in this case), which is a bit more significant. The bit to its left (2^2, or just plain 4) is more significant still. So we say this is MSB notation because the most significant bit (2^7) comes first. If we change it to LSB, it simply becomes:
10100000
Easy right?
(And yes, for all you hardware gurus out there I'm aware that this changes from one architecture to another depending on endianness, but this is a simple answer for a simple question)
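If it helps, here is a short Python sketch of what the Ruby unpack is doing for a single "c" (0x63):

byte = ord('c')                    # 0x63 == 0b01100011
msb_first = format(byte, '08b')    # '01100011' (what Ruby's B8 produces)
lsb_first = msb_first[::-1]        # '11000110' (what Ruby's b8 produces)
print(lsb_first, msb_first)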
The term "significance" of a bit or byte only makes sense in the context of interpreting a sequence of bits or bytes as an integer. The bigger the impact of the bit or byte on the value of the resulting integer - the higher its significance. The more "significant" it is to the value.
So, for example, when we talk about a sequence of four bytes having the least significant byte first (aka little-endian), what we mean is that when we interpret those four bytes as a 32-bit integer, the first byte denotes the lowest eight binary digits of the integer, the second byte denotes the 9th through 16th binary digits, the third the 17th through 24th, and the last byte denotes the highest eight bits of the integer.
Likewise, if we say a sequence of 8 bits is in most significant bit first order, what we mean is that if we interpret the 8 bits as an 8-bit integer, the first bit in the sequence denotes the highest binary digit of the integer, the second bit the second highest, and so on, until the last bit denotes the lowest binary digit of the integer.
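A small Python illustration of that little-endian byte interpretation (the byte values here are arbitrary):

data = bytes([0x78, 0x56, 0x34, 0x12])        # first byte = lowest 8 bits
print(hex(int.from_bytes(data, 'little')))    # 0x12345678
print(hex(int.from_bytes(data, 'big')))       # 0x78563412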
Another thing to think about is that the usual decimal notation follows a most-significant-digit-first convention. For example, a decimal number like:
1250
is read to mean:
1 x 1000 +
2 x 100 +
5 x 10 +
0 x 1
Right? Now imagine a different convention that is least significant digit first. The same number would be written:
0521
and would be read as:
0 x 1 +
5 x 10 +
2 x 100 +
1 x 1000
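In Python, that least-significant-digit-first reading looks like this:

digits_lsd_first = [0, 5, 2, 1]
value = sum(d * 10 ** i for i, d in enumerate(digits_lsd_first))
print(value)   # 1250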
Another thing you should observe in passing is that in the C family of languages (and most modern programming languages), the shift-left operator (<<) and shift-right operator (>>) are named as if the most significant bit were on the left: shifting a bit left increases its significance and shifting it right decreases it, meaning that left is the most significant end (and the left side is usually what we mean by "first", at least in the West).
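A one-line illustration of that direction:

x = 0b00000101      # 5
print(x << 1)       # 10 : each set bit moved one place toward the more significant end
print(x >> 1)       # 2  : and one place back toward the less significant end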
