I was reading an article explaining the difference between little and big endian. I understand that big endian stores the data "big end" first and that little endian stores the data "little end" first. My confusion is with the following block of text:
Big endian machine: I think a short is two bytes, so I'll read them off: location s is address 0 (W, or 0x12) and location s + 1 is address 1 (X, or 0x34). Since the first byte is biggest (I'm big-endian!), the number must be 256 * byte 0 + byte 1, or 256*W + X, or 0x1234. I multiplied the first byte by 256 (2^8) because I needed to shift it over 8 bits.
I don't understand why they did a bit shift of 8 bits.
Also, here's another block of text I don't understand:
On a big endian machine we see:
Byte: U N I X
Location: 0 1 2 3
Which makes sense. U is the biggest byte in "UN" and is stored first. The same goes for IX: I is the biggest, and stored first.
On a little-endian machine we would see:
Byte: N U X I
Location: 0 1 2 3
If my understanding is correct, wouldn't it be "INUX" on a little-endian machine?
The full article is at https://betterexplained.com/articles/understanding-big-and-little-endian-byte-order/.
If anyone could clear this up, that would be wonderful.
Alright, so I understand how big and little endian work now:
I'll address the issue I had understanding the second block of text.
Basically, in the article, the author stated that if we store the word "UNIX" as a couple of shorts (not longs), then on a little-endian machine each two-byte short has its own bytes swapped: "UN" becomes "NU" and "IX" becomes "XI". The shorts themselves keep their order, so the final result is "NUXI" rather than the whole string reversed.
I'll now address the issue I had understanding the first block of text.
Basically, the bit shift is not rearranging bytes in memory; it is how the numeric value is reconstructed from the bytes. On a big-endian machine the first byte read is the most significant, so it has to be shifted left 8 bits (multiplied by 256) before the second byte is added in: 256*W + X = 0x1234. On a little-endian machine it is the second byte that is the most significant and gets shifted.
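To make this concrete, here is a minimal C sketch (my own illustration, not from the article) that reads the same two bytes 0x12 and 0x34 out of memory and assembles them both ways:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Two bytes as they sit in memory: address s holds 0x12, address s + 1 holds 0x34. */
    uint8_t mem[2] = { 0x12, 0x34 };

    /* Big-endian read: the first byte is the most significant,
       so shift it left 8 bits (i.e. multiply it by 256). */
    uint16_t big = ((uint16_t)mem[0] << 8) | mem[1];     /* 0x1234 */

    /* Little-endian read: the second byte is the most significant. */
    uint16_t little = ((uint16_t)mem[1] << 8) | mem[0];  /* 0x3412 */

    printf("big-endian: 0x%04X, little-endian: 0x%04X\n", big, little);
    return 0;
}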
In the following question: How to find tag bit in cache given word address, they found the number of tag bits as 32 − (number of index bits) − (word offset bits). But the book's formula (shown as an image in the original post) also subtracts bits for the byte offset.
Why are we not subtracting 2 here in the problem?
For anyone who is getting confused like me, here is my understanding.
Given word address 3, we can convert it to a byte address, which is one of the addresses between 12 and 15 inclusive, because a word is 4 bytes and thus you have to multiply the word address by 4.
If you convert 12 and 15 to binary you get ...001100 and ...001111. The 2 least significant bits are the byte offset, and the next 1 bit is the word (block) offset, which leaves ...001 in both cases.
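A small C sketch of that arithmetic (the 4-byte word and the 1-bit word offset come from the question above; everything else is illustrative):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t word_addr = 3;
    uint32_t byte_addr = word_addr * 4;             /* 12, i.e. ...001100 */

    uint32_t byte_offset = byte_addr & 0x3;         /* 2 LSBs: byte within the word */
    uint32_t word_offset = (byte_addr >> 2) & 0x1;  /* next bit: word within the block */
    uint32_t rest = byte_addr >> 3;                 /* remaining bits: index + tag */

    printf("byte offset=%u, word offset=%u, rest=0x%X\n",
           byte_offset, word_offset, rest);
    return 0;
}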
I am writing an LZW/Huffman encoder/decoder. The LZW coder sends a number of bits according to its table: if the table has fewer than 2^n elements, it sends n bits. The Huffman coder receives these bits byte by byte and encodes them into a specific number of bits according to its tree.
The problem is that the last byte can contain fewer than 8 bits. If I use an EOF value to detect the end of input when decoding, I can suddenly hit that value before the actual end of the input. And if I send/receive 4 bytes at a time with the first one reserved for the sign, I lose 1 bit every 4 bytes.
Should I lose those first bits, or is there another, better solution I don't know about?
This is from the book Assembly Language Step By Step, Jeff Duntemann:
Here’s the quick tour: A bit is a single binary digit, 0 or 1. A byte
is 8 bits side by side. A word is 2 bytes side by side. A double word
is 2 words side by side. A quad word is 2 double words side by side.
And this is from the book Principles of Computer Organization and Assembly Language: Using the Java Virtual Machine, Patrick Juola:
For convenience, 8 bits are usually grouped into a single block,
conventionally called a byte. The next-largest named block of bits is
a word. The definition and size of a word are not absolute, but vary
from computer to computer. A word is the size of the most convenient
block of data for the computer to deal with.
So is a word 2 bytes (16 bits), or is it the most convenient block of data for the computer to deal with? (I am also not sure what that means.)
I'm not familiar with either of these books, but the second is closer to current reality. The first may be discussing a specific processor.
Processors have been made with quite a variety of word sizes, not always a multiple of 8.
The 8086 and 8087 processors used 16-bit words, and it's likely this is the machine the first author was writing about.
More recent processors commonly use 32- or 64-bit words.
In the '50s and '60s there were machines with word sizes that seem quite strange to us now, such as 4, 9, and 36 bits. Since about the '70s, word size has commonly been a power of 2 and a multiple of 8.
On x86/x64 processors, a byte is 8 bits, and there are 256 possible binary states in 8 bits, 0 through 255. This is how the OS translates your keyboard keystrokes into letters on the screen. When you press the 'A' key (unshifted), the input is translated into character code 97, and the computer prints a lowercase 'a' on the screen. You can confirm this in any Windows text-editing software by holding an ALT key, typing 97 on the numpad, then releasing the ALT key. If you replace '97' with any number from 0 to 255, you will see the character associated with that number on the system's character code page printed on the screen.
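A two-line C demonstration of that mapping (my own illustration, not from the original answer):

#include <stdio.h>

int main(void) {
    printf("%c\n", 97);        /* character code 97 prints as: a */
    printf("%d\n", (int)'a');  /* and 'a' converts back to: 97  */
    return 0;
}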
If a character is 8 bits, or 1 byte, then a WORD must be at least 2 characters, so 16 bits or 2 bytes. Traditionally, you might think of a word as a varying number of characters, but in a computer, everything that is calculable is based on static rules. Besides, a computer doesn't know what letters and symbols are, it only knows how to count numbers. So, in computer language, if a WORD is equal to 2 characters, then a double-word, or DWORD, is 2 WORDs, which is the same as 4 characters or bytes, which is equal to 32 bits. Furthermore, a quad-word, or QWORD, is 2 DWORDs, same as 4 WORDs, 8 characters, or 64 bits.
Note that these terms are limited in function to the Windows API for developers, but may appear in other circumstances (e.g. the Linux dd command uses numerical suffixes to compound byte and block sizes, where c is 1 byte and w is 2 bytes).
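If it helps, here is a minimal C sketch of those sizes using fixed-width types; the type names follow the Windows API convention described above, but the typedefs themselves are just illustrative:

#include <stdio.h>
#include <stdint.h>

typedef uint8_t  BYTE;   /*  8 bits            */
typedef uint16_t WORD;   /* 16 bits = 2 BYTEs  */
typedef uint32_t DWORD;  /* 32 bits = 2 WORDs  */
typedef uint64_t QWORD;  /* 64 bits = 2 DWORDs */

int main(void) {
    printf("BYTE=%zu WORD=%zu DWORD=%zu QWORD=%zu (bytes)\n",
           sizeof(BYTE), sizeof(WORD), sizeof(DWORD), sizeof(QWORD));
    return 0;
}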
The second quote is correct, the size of a word varies from computer to computer. The ARM NEON architecture is an example of an architecture with 32-bit words, where 64-bit quantities are referred to as "doublewords" and 128-bit quantities are referred to as "quadwords":
A NEON operand can be a vector or a scalar. A NEON vector can be a 64-bit doubleword vector or a 128-bit quadword vector.
Generally speaking, 16-bit words are only found on 16-bit systems, like the Amiga 500.
This is from the book Hackers: Heroes of the Computer Revolution by Steven Levy.
.. the memory had been reduced to 4096 "words" of eighteen bits each.
(A "bit" is a binary digit, either a 1 or 0. A series of binary
numbers is called a "word").
As the other answers suggest, a "word" does not seem to have a fixed length.
In addition to the other answers, a further example of the variability of word size (from one system to the next) is in the paper Smashing The Stack For Fun And Profit by Aleph One:
We must remember that memory can only be addressed in multiples of the
word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer
is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer
is going to take 12 bytes (3 words) of memory.
"most convenient block of data" probably refers to the width (in bits) of the WORD, in correspondance to the system bus width, or whatever underlying "bandwidth" is available. On a 16 bit system, with WORD being defined as 16 bits wide, moving data around in chunks the size of a WORD will be the most efficient way. (On hardware or "system" level.)
With Java being more or less platform independant, it just defines a "WORD" as the next size from a "BYTE", meaning "full bandwidth". I guess any platform that's able to run Java will use 32 bits for a WORD.
Another instance of a book citing the variable length of the word is Operating System Concepts by Silberschatz, Galvin, and Gagne, where the authors state in Chapter 1, page 6:
A less common term is "word",
which is a given computer architecture's native storage unit. A word is
generally made up of one or more bytes. For example, a computer may have
instructions to move 64-bit (8-byte) words.
Which bits will be used for what? E.g. I know the 2 LSBs are the byte offset. Then which bits will be used for word select, block select, and tag?
What I did was
2 LSB: Byte offset
1 Bit: Word select
4 Bit: Block select
Rest: Tag
But it appears to be wrong. For (a) I have an attempt, and (b) is incomplete, I think, but what I have is already wrong.
This part of your reasoning is incorrect:
2 LSB: Byte offset
1 Bit: Word select
4 Bit: Block select
Rest: Tag
Remember that the problem says the word size is 16 bits or two bytes, so only one address bit is used for byte offset within a word.
Again, this has absolutely nothing to do with MIPS, which has a 32- or 64-bit word.
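A hedged C sketch of how such an address would decompose, using the 16-bit word (1 byte-offset bit) from the answer above; the word-select, block-select, and tag widths below are illustrative assumptions, since the full cache geometry is not reproduced here:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* 16-bit words -> 1 byte-offset bit (from the answer above).
       Assumed for illustration: 2 word-select bits (4 words per
       block) and 4 block-select bits (16 blocks); the rest is tag. */
    uint32_t addr = 0x000012F5;

    uint32_t byte_offset  = addr & 0x1;
    uint32_t word_select  = (addr >> 1) & 0x3;
    uint32_t block_select = (addr >> 3) & 0xF;
    uint32_t tag          = addr >> 7;

    printf("tag=0x%X block=%u word=%u byte=%u\n",
           tag, block_select, word_select, byte_offset);
    return 0;
}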
Could you please suggest an error detection scheme for detecting
one possible bit flip in the first 32 bytes of a 33-byte message using
no more than 8 bits of additional data?
Could Pearson hashing be a solution?
Detecting a single bit-flip in any message requires only one extra bit, independent of the length of the message: simply xor together all the bits in the message and tack that on the end. If any single bit flips, the parity bit at the end won't match up.
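A minimal C sketch of that parity scheme for a 32-byte message (my own illustration of the answer's idea):

#include <stdio.h>
#include <stdint.h>

/* XOR all bits of the message together, yielding one parity bit. */
static unsigned parity_bit(const uint8_t *msg, size_t len) {
    uint8_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc ^= msg[i];        /* xor the bytes together...     */
    acc ^= acc >> 4;          /* ...then fold 8 bits down to 1 */
    acc ^= acc >> 2;
    acc ^= acc >> 1;
    return acc & 1;
}

int main(void) {
    uint8_t msg[32] = { 0x12, 0x34 };  /* remaining bytes are zero */
    unsigned sent = parity_bit(msg, sizeof msg);
    msg[5] ^= 0x08;                    /* a single bit flips in transit */
    printf("sent parity=%u, received parity=%u\n",
           sent, parity_bit(msg, sizeof msg));
    return 0;
}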
If you're asking to detect which bit flipped, that can't be done, and a simple argument shows it: the extra eight bits can represent up to 256 classes of 32-byte messages, but the zero message and the 256 messages with exactly one bit set must all be in different classes. Thus, there are 257 messages which must be distinctly classified, and only 256 classes.
You can detect one bit flip with just one extra bit in any length of message (as stated by @Daniel Wagner). The parity bit can, simply put, indicate whether the total number of 1-bits is odd or even. Obviously, if an even number of bits are wrong, then the parity check will pass anyway, so you cannot detect 2-bit errors.
Now, for a more accessible understanding of why you can't error-correct 32 bytes (256 bits) with just 8 bits, please read about the Hamming code (like used in ECC memory). Such a scheme uses special error-correcting parity bits (henceforth called "EC parity") that only encode the parity of a subset of the total number of bits. For every 2^m - 1 total bits, you need to use m EC bits. These represent each possible different mask following the pattern "x bits on, x bits off" where x is a power of 2. Thus, the larger the number of bits at once, the better the data/parity bit ratio you get. For example, 7 total bits would allow encoding only 4 data bits after losing 3 EC bits, but 31 total bits can encode 26 data bits after losing 5 EC bits.
Now, to really understand this it will probably take an example. Consider the following sets of masks. The first two rows are to be read top down, indicating the bit number (the "Most Significant Bit" I've labeled MSB):
MSB LSB
| |
v v
33222222 22221111 11111100 0000000|0
10987654 32109876 54321098 7654321|0
-------- -------- -------- -------|-
1: 10101010 10101010 10101010 1010101|0
2: 11001100 11001100 11001100 1100110|0
3: 11110000 11110000 11110000 1111000|0
4: 11111111 00000000 11111111 0000000|0
5: 11111111 11111111 00000000 0000000|0
The first thing to notice is that the binary values for 0 to 31 are represented in each column going from right to left (reading the bits in rows 1 through 5). This means that each vertical column is different from every other one (the important part). I put an extra vertical line between bit numbers 0 and 1 for a particular reason: column 0 is useless because it has no bits set in it.
To perform error-correcting, we will bitwise-AND the received data bits against each EC bit's predefined mask, then compare the resulting parity to the EC bit. For any calculated parities discovered to not match, find the column in which only those bits are set. For example, if error-correcting bits 1, 4, and 5 are wrong when calculated from the received data value, then column #25--containing 1s in only those masks--must be the incorrect bit and can be corrected by flipping it. If only a single error-correcting bit is wrong, then the error is in that error-correcting bit. Here's an analogy to help you understand why this works:
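Here is a small C sketch of that procedure, using the five masks from the table on a 32-bit value; to keep it short it recomputes the sender's parities from the original value instead of carrying real EC bits, so it only illustrates the mask mechanics:

#include <stdio.h>
#include <stdint.h>

/* Rows 1..5 of the mask table above. */
static const uint32_t MASKS[5] = {
    0xAAAAAAAA, 0xCCCCCCCC, 0xF0F0F0F0, 0xFF00FF00, 0xFFFF0000
};

/* Parity (xor of all bits) of a 32-bit value. */
static uint32_t parity32(uint32_t x) {
    x ^= x >> 16; x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
    return x & 1;
}

int main(void) {
    uint32_t sent = 0xDEADBEEF;
    uint32_t recv = sent ^ (1u << 25);   /* flip bit #25, as in the text */

    /* Each failing mask parity contributes one bit of the error position:
       masks 1, 4, and 5 fail, giving 1 + 8 + 16 = 25. */
    uint32_t column = 0;
    for (int i = 0; i < 5; i++)
        if (parity32(sent & MASKS[i]) != parity32(recv & MASKS[i]))
            column |= 1u << i;

    printf("flipped bit: #%u\n", column);  /* prints 25 */
    recv ^= 1u << column;                  /* corrected  */
    return 0;
}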
There are 32 identical boxes, with one containing a marble. Your task is to locate the marble using just an old-style scale (the kind with two balanced platforms to compare the weights of different objects) and you are only allowed 5 weighing attempts. The solution is fairly easy: you put 16 boxes on each side of the scale and the heavier side indicates which side the marble is on. Discarding the 16 boxes on the lighter side, you then weigh 8 and 8 boxes keeping the heavier, then 4 and 4, then 2 and 2, and finally locate the marble by comparing the weights of the last 2 boxes 1 to 1: the heaviest box contains the marble. You have completed the task in only 5 weighings of 32, 16, 8, 4, and 2 boxes.
Similarly, our bit patterns have divided up the boxes in 5 different groups. Going backwards, the fifth EC bit determines whether an error is on the left side or the right side. In our scenario with bit #25, it is wrong, so we know that the error bit is on the left side of the group (bits 16-31). In our next mask for EC bit #4 (still stepping backward), we only consider bits 16-31, and we find that the "heavier" side is the left one again, so we have narrowed down the bits 24-31. Following the decision tree downward and cutting the number of possible columns in half each time, by the time we reach EC bit 1 there is only 1 possible bit left--our "marble in a box".
Note: The analogy is useful, though not perfect: 1-bits are not represented by marbles--the erroring bit location is represented by the marble.
Now, some playing around with these masks and thinking about how to arrange things will reveal that there is a problem: if we try to make all 31 bits data bits, then we need 5 more bits for EC. But how, then, will we tell if the EC bits themselves are wrong? Just a single EC bit being wrong will incorrectly tell us that some data bit needs correction, and we'll wrongly flip that data bit. The EC bits have to somehow encode for themselves! The solution is to position the parity bits inside the data, in the columns from the bit patterns above where only one bit is set. This way, any data bit being wrong will trigger two EC bits to be wrong, so if only one EC bit is wrong, we know it is wrong itself instead of it signifying that a data bit is wrong. The columns that satisfy the one-bit condition are 1, 2, 4, 8, and 16. The data bits are interleaved between these, starting at position 3. (Remember, we are not using position 0, as it would never provide any information: none of our EC bits would be set at all.)
Finally, adding one more bit for overall parity will allow detecting 2-bit errors and reliably correcting 1-bit errors, as we can then compare the EC bits to it: if the EC bits say something is wrong, but the parity bit says otherwise, we know there are 2 bits wrong and cannot perform correction. We can use the discarded bit #0 as our parity bit! In fact, now we are encoding the following pattern:
0: 11111111 11111111 11111111 11111111
This gives us a final total of 6 Error-Checking and Correcting (ECC) bits. Extending the scheme of using different masks indefinitely looks like this:
32 bits - 6 ECC bits = 26 data
64 bits - 7 ECC bits = 57 data
128 bits - 8 ECC bits = 120 data
256 bits - 9 ECC bits = 247 data
512 bits - 10 ECC bits = 502 data
Now, if we are sure that we only will get a 1-bit error, we can dispense with the #0 parity bit, so we have the following:
31 bits - 5 ECC bits = 26 data
63 bits - 6 ECC bits = 57 data
127 bits - 7 ECC bits = 120 data
255 bits - 8 ECC bits = 247 data
511 bits - 9 ECC bits = 502 data
This is no change because we don't get any more data bits. Oops! 32 bytes (256 bits) as you requested cannot be error-corrected with a single byte, even if we know we can have only a 1-bit error at worst, and we know the ECC bits will be correct (allowing us to move them out of the data region and use them all for data). We need TWO more bits than we have--one must slide up to the next range of 512 bits, then leave out 246 data bits to get our 256 data bits. So that's one more ECC bit AND one more data bit (as we only have 255, exactly what Daniel told you).
Summary: You need 33 bytes + 1 bit to detect which bit flipped in the first 32 bytes.
Note: if you are going to send 64 bytes, then you're under the 32:1 ratio, as you can error-correct that in just 10 bits. But note that in real-world applications, the "frame size" of your ECC can't keep going up indefinitely, for a few reasons: 1) the number of bits being worked with at once may be much smaller than the frame size, leading to gross inefficiencies (think ECC RAM); 2) the chance of being able to accurately correct a bit gets less and less, since the larger the frame, the greater the chance it will have more errors, and 2 errors defeat error-correction ability, while 3 or more can defeat even error-detection ability; 3) once an error is detected, the larger the frame size, the larger the size of the corrupted piece that must be retransmitted.
If you need to use a whole byte instead of a bit, and you only need to detect errors, then the standard solution is to use a cyclic redundancy check (CRC). There are several well-known 8-bit CRCs to choose from.
A typical fast implementation of a CRC uses a 256-entry table to handle a byte of the message at a time. For the case of an 8-bit CRC, this is a special case of Pearson's algorithm.
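As a hedged illustration, here is a table-driven CRC-8 sketch in C; the polynomial 0x07 (no reflection, zero initial value) is my choice for the example, since the answer doesn't commit to a specific one:

#include <stdio.h>
#include <stdint.h>

static uint8_t crc8_table[256];

/* Precompute the CRC of every possible byte value. */
static void crc8_init(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t crc = (uint8_t)i;
        for (int bit = 0; bit < 8; bit++)
            crc = (uint8_t)((crc & 0x80) ? (crc << 1) ^ 0x07 : crc << 1);
        crc8_table[i] = crc;
    }
}

/* One table lookup per message byte; note the same shape as
   Pearson hashing: crc = T[crc ^ byte]. */
static uint8_t crc8(const uint8_t *msg, size_t len) {
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++)
        crc = crc8_table[crc ^ msg[i]];
    return crc;
}

int main(void) {
    crc8_init();
    uint8_t msg[33] = { 'h', 'i' };  /* 33-byte message, rest zero */
    printf("crc = 0x%02X\n", crc8(msg, sizeof msg));
    return 0;
}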