Representing decimal numbers in Big and Little Endian? - endianness

Suppose I have an unsigned int consisting of 4 bytes, stored at addresses 10000 to 10011. If the storage representation is Big Endian, what decimal value is stored in the variable?
ADDRESS INSTRUCTION
10000: 0b01000010
10001: 137
10010: 0x13
10011: 0b11000011
So the decimal numbers are: 66, 137, 19, 195.
I thought that the Big Endian representation was simply the decimal digits concatenated: 6 613 719 195. But apparently that is wrong. For Little Endian I likewise expected 1 951 913 766, but that is wrong too. What am I missing? Yes, this is a quiz question that I got wrong, and I just don't fully understand it. The question is literally:
"In a high-level language a variable is declared as an unsigned int and consists of 4 bytes which are stored in the address 10000-10011. If the representation of the storage is in Big.Endian, which decimal value is stored in the variable?
ADDRESS INSTRUCTION
10000: 0b01000010
10001: 137
10010: 0x13
10011: 0b11000011
"

Your calculation is wrong; each byte is a base-256 digit, so you need to calculate like this:
1116279747 = 66 * 2^24 + 137 * 2^16 + 19 * 2^8 + 195
Big Endian means the most significant byte (MSB) is stored first, i.e. at the lowest address. Reading addresses 10000, 10001, 10010, 10011 in order therefore gives the bytes 66, 137, 19, 195 from most significant to least significant.
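A minimal sketch of that reconstruction, written here in Java for illustration (the quiz names no language); a long holds the full unsigned 32-bit result:
public class BigEndianValue {
    public static void main(String[] args) {
        // The four bytes at addresses 10000..10011, lowest address first
        int[] bytes = {0b01000010, 137, 0x13, 0b11000011}; // 66, 137, 19, 195
        long value = 0;
        for (int b : bytes) {
            value = (value << 8) | b; // big-endian: earlier bytes are more significant
        }
        System.out.println(value); // 1116279747
    }
}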

Related

Checking if an address is cache-line aligned

This is a quiz question that I failed in the past, and despite having access to the solution, I don't understand the steps that lead to the correct answer.
Here is the problem:
Which of these addresses is cache-line aligned?
a. 0x7ffc32a21164
b. 0x560c40e05350
c. 0x560c40e052c0
d. 0x560c3f2d71ff
And the solution to the problem:
Each hex char is represented by 4 bits.
It takes 6 bits to address 64 bytes, since log2(64) = ln(64)/ln(2) = 6, so an address is 64-byte aligned exactly when its lowest 6 bits are all zero. The last hex digit covers 4 of those bits and must be 0; the second-to-last hex digit covers the other 2 bits, so its two lowest bits must be 0. The hex digits whose two lowest bits are zero are:
0x0 = 0000
0x4 = 0100
0x8 = 1000
0xc = 1100
(bit weights: 2^3 2^2 2^1 2^0, i.e. 8 4 2 1)
Conclusion: if the address ends in 00, 40, 80, or c0, then it is aligned on 64 bytes.
The answer is c.
I really don't see how we get from the 6-bit representation to this answer. Can anyone add something to the given solution to make it clearer?
The question boils down to: Which number is a multiple of 64? All that remains is understanding the number system they're using.
In binary, 64 is written as 1000000. In hexadecimal, it's written as 0x40. So multiples of 64 will end in 0x00 (0 * 64), 0x40 (1 * 64), 0x80 (2 * 64), or 0xC0 (3 * 64). (The cycle then repeats.) Answer c is the one with the right ending.
An analogy in decimal would be: Which number is a multiple of 5? 0 * 5 is 0 and 1 * 5 is 5, after which the cycle repeats. So we just need to look at the last digit. If it's a 0 or a 5, we know the number is a multiple of 5.
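A quick way to check alignment programmatically, sketched here in Java (the class and variable names are my own): mask off the low six bits, since 64 = 2^6.
public class AlignmentCheck {
    public static void main(String[] args) {
        long[] addresses = {0x7ffc32a21164L, 0x560c40e05350L,
                            0x560c40e052c0L, 0x560c3f2d71ffL};
        for (long a : addresses) {
            // 64-byte aligned <=> the low 6 bits are all zero (0x3F = 0b111111)
            System.out.printf("0x%x aligned: %b%n", a, (a & 0x3F) == 0);
        }
        // Only 0x560c40e052c0 prints true
    }
}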

How to calculate the EXTENDED_PAYLOAD_LENGTH if the PAYLOAD_LENGTH is 126 in a WebSocket frame data?

My goal is to calculate the payload length of a message sent by a client to the server over the WebSocket protocol.
I am using RFC6455 as a reference.
The length of the "Payload data", in bytes: if 0-125, that is the
payload length. If 126, the following 2 bytes interpreted as a
16-bit unsigned integer are the payload length. If 127, the
following 8 bytes interpreted as a 64-bit unsigned integer (the
most significant bit MUST be 0) are the payload length.
Here is a sample of the frame data for the message abcdef, sent from a client to the server.
129 134 167 225 225 210 198 131 130 182 194 135
What if the second byte value is 254? In that case, 254 - 128 equals 126.
129 254 167 225 225 210 198 131 130 182 194 135
I am assuming the third and fourth bytes are the EXTENDED_PAYLOAD_LENGTH. In this case, because the second byte minus 128 equals 126, the EXTENDED_PAYLOAD_LENGTH is the actual payload length.
However, the RFC specifically says "16-bit unsigned integer" rather than just "the next two bytes".
If 126, the following 2 bytes interpreted as a
16-bit unsigned integer are the payload length.
How do you combine the third and fourth bytes to get the actual payload length? I wrote the code in Java, and I currently cannot read the frame data bit by bit; instead I read it byte by byte.
PS: I am thinking of using bitwise operators; am I going in the right direction?
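Bitwise operators are indeed the right direction. A hedged sketch in Java (the byte values and variable names are illustrative, not taken from a real frame): the two length bytes arrive in network byte order, i.e. big-endian, so the third byte is the high-order byte.
public class ExtendedPayloadLength {
    public static void main(String[] args) {
        int second = 254, third = 167, fourth = 225; // example byte values

        if ((second & 0x7F) == 126) { // low 7 bits = 126 (the top bit is the MASK flag)
            // Network byte order: the third byte is the high-order byte
            int payloadLength = ((third & 0xFF) << 8) | (fourth & 0xFF);
            System.out.println(payloadLength); // 167 * 256 + 225 = 42977
        }
    }
}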

Endianness order. How will these bytes be represented?

Given the hexadecimal bytes 0x12345678, copy the bytes to memory using big-endian order.
Address Content
0x00400003 0x78
0x00400002 0x56
0x00400001 0x34
0x00400000 0x12
Is that right?
In big-endian, the most significant byte (0x12) should come first, and the rest should follow in decreasing order of significance.
Yes. A hex literal like 0x12345678 is written most significant byte first, so your solution is right; in memory it will look like this:
00400000|00400001|00400002|00400003
--------+--------+--------+--------
12 | 34 | 56 | 78
If you had to arrange the bytes in little endian, the arrangement would be reversed:
00400000|00400001|00400002|00400003
--------+--------+--------+--------
78 | 56 | 34 | 12
Note that in this arrangement, only the order of bytes is reversed, but the order of nibbles (4-bit regions = hexadecimal digits) remains the same.
You can read more on the Wikipedia page about endianness.
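In Java, for example, java.nio.ByteBuffer makes the two layouts easy to inspect (a minimal sketch):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianLayout {
    public static void main(String[] args) {
        for (ByteOrder order : new ByteOrder[]{ByteOrder.BIG_ENDIAN,
                                               ByteOrder.LITTLE_ENDIAN}) {
            ByteBuffer buf = ByteBuffer.allocate(4).order(order);
            buf.putInt(0x12345678);
            System.out.print(order + ":");
            for (byte b : buf.array()) {
                System.out.printf(" %02x", b); // bytes in memory order
            }
            System.out.println();
        }
        // BIG_ENDIAN: 12 34 56 78
        // LITTLE_ENDIAN: 78 56 34 12
    }
}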

Character code representation of decimal digit

Differentiate between the character code representation of a decimal digit and its pure binary representation
I study computer science, and this is a concept I need to know for the exams, but I am not sure I fully understand it.
Character code representation of 261 (for example)
Would this just be the ASCII code equivalent?
Meaning:
2 has ASCII code 50
6 has ASCII code 54
1 has ASCII code 49
So the character code representation is 50, 54, 49
Pure Binary code representation
Is this just the binary conversion of 261?
So 100000101?
ASCII defines character digits 0 to 9 with the decimal number codes 48 to 57.
So there is a binary representation for each digit character as well as for the numeric value itself.
The character code representation of 46 (the two characters '4' and '6') is: 00110100 00110110.
The character '4' is code 52 in ASCII, which gives 00110100, while the character '6' is code 54, which gives 00110110.
Meanwhile, the decimal number 46 stored in a 16-bit word has the representation: 00000000 00101110.
For the character 261, you would need to get the ASCII code for 2, 6 and 1.
2: 50
6: 54
1: 49
So for 50, 54, 49 you get: 00110010 00110110 00110001
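A small Java sketch of the contrast (illustrative only):
public class CharVsBinary {
    public static void main(String[] args) {
        // Character code representation: one ASCII code per digit character
        for (char c : "261".toCharArray()) {
            System.out.printf("'%c' -> %d -> %s%n", c, (int) c,
                    Integer.toBinaryString(c));
        }
        // '2' -> 50 -> 110010, '6' -> 54 -> 110110, '1' -> 49 -> 110001

        // Pure binary representation of the value 261
        System.out.println(Integer.toBinaryString(261)); // 100000101
    }
}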

Ruby - How to represent message length as 2 binary bytes

I'm using Ruby and I'm communicating with a network endpoint that requires the formatting of a 'header' prior to sending the message itself.
The first field in the header must be the message length which is defined as a 2 binary byte message length in network byte order.
For example, my message is 1024 in length. How do I represent 1024 as binary two-bytes?
The standard tools for byte wrangling in Ruby (and Perl and Python and ...) are pack and unpack. Ruby's pack lives on Array. You have a length that should be two bytes long and in network byte order; that sounds like a job for the n format specifier:
n | Integer | 16-bit unsigned, network (big-endian) byte order
So if the length is in length, you'd get your two bytes thusly:
two_bytes = [ length ].pack('n')
If you need to do the opposite, have a look at String#unpack:
length = two_bytes.unpack('n').first
See Array#pack.
[1024].pack("n")
This packs the number as the network-order byte sequence \x04\x00.
The way this works is that each byte is 8 binary bits. 1024 in binary is 10000000000. Padding that to 16 bits and breaking it into octets (8 bits per byte), we get: 00000100 00000000.
A byte can represent (2 states) ^ (8 positions) = 256 unique values. However, since there aren't 256 ASCII-printable characters, we conventionally write bytes as hexadecimal pairs: a hexadecimal digit can represent 16 different values, and 16 * 16 = 256. Thus we can take the first byte, 00000100, and split it into two nibbles, 0000 0100. Translating binary to hex gives us 0x04. The second byte is trivial, as 0000 0000 is 0x00. This gives us the hexadecimal representation of the two-byte string.
It's worth noting that because you are constrained to a 2-byte (16-bit) header, you are limited to a maximum value of 11111111 11111111, or 2^16 - 1 = 65535 bytes. Any message larger than that cannot accurately represent its length in two bytes.
