Determining number of bytes in MIPS code segment - ascii

So, I understand that MIPS is a 32-bit architecture and that a word is 32 bits (4 bytes).
If I have the following code,
.data
.word 5
.asciiz "Hi"
I know that there is one word being stored and it must be 4 bytes, but how do I determine the number of bytes in the third line? I've asked my instructor for help, but she keeps referring me to the following example:
.asciiz "help"
Apparently this is 5 bytes, but I'm not able to see how or why. I would appreciate some clarification; my instructor is reluctant to share techniques and I can't find information on this in my textbook.

.asciiz creates a zero-terminated ASCII string, i.e. a string of ASCII characters followed by a byte with the value 0 (terminator).
So the number of bytes required is the number of characters plus 1. Hence .asciiz "help" -> 5 bytes, and .asciiz "Hi" -> 3 bytes.
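If you want to sanity-check the count, here is a minimal C sketch (my addition, not part of the original answer) that relies on the same rule: a C string literal, like an .asciiz string, is stored with a trailing zero byte.

#include <stdio.h>

int main(void) {
    // sizeof a string literal counts every character plus the terminating 0 byte,
    // just like .asciiz in the MIPS data segment.
    printf("\"help\" occupies %zu bytes\n", sizeof "help"); // prints 5
    printf("\"Hi\" occupies %zu bytes\n", sizeof "Hi");     // prints 3
    return 0;
}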

Related

Why don't we need to subtract byte offset bits in this problem?

In the following question: How to find tag bit in cache given word address
they found the number of tag bits as 32 minus the number of index bits minus the word-offset bits. But the book also gives a formula (shown as an image in the original post) that additionally subtracts 2 bits for the byte offset.
Why are we not subtracting 2 here in the problem?
For anyone who is getting confused like me, here is how I understand it.
Given word address 3, we can convert it to a byte address, which is one of the addresses between 12 and 15 inclusive, because a word is 4 bytes and we therefore multiply the word address by 4.
If you convert 12 and 15 to binary you get ...001100 and ...001111. The 2 least significant bits are the byte offset and the next 1 bit is the word (block) offset, which leaves 001 in both cases.
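As a rough illustration (a sketch I added, assuming 8-byte blocks, i.e. 2 bits of byte offset and 1 bit of word offset, plus a hypothetical number of index bits), the address can be decomposed like this:

#include <stdio.h>

#define BYTE_OFFSET_BITS 2   // 4 bytes per word
#define WORD_OFFSET_BITS 1   // 2 words per block (assumption for this sketch)
#define INDEX_BITS       8   // hypothetical: 256 sets

int main(void) {
    unsigned byte_addr = 3 * 4;                       // word address 3 -> byte address 12
    unsigned byte_off  = byte_addr & 0x3;             // lowest 2 bits
    unsigned word_off  = (byte_addr >> BYTE_OFFSET_BITS) & 0x1;
    unsigned index     = (byte_addr >> (BYTE_OFFSET_BITS + WORD_OFFSET_BITS))
                         & ((1u << INDEX_BITS) - 1);
    unsigned tag       = byte_addr >> (BYTE_OFFSET_BITS + WORD_OFFSET_BITS + INDEX_BITS);
    // tag width = 32 - INDEX_BITS - WORD_OFFSET_BITS - BYTE_OFFSET_BITS
    printf("byte offset=%u word offset=%u index=%u tag=%u\n",
           byte_off, word_off, index, tag);
    return 0;
}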

How to detect that bit input ended?

I'm writing an LZW/Huffman encoder/decoder. The LZW coder sends a number of bits determined by its table: if the table has fewer than 2^n elements, it sends n bits. The Huffman coder receives these bits byte by byte and encodes them into a specific number of bits according to its tree.
The problem is that the last byte can contain fewer than 8 bits. If I use an EOF value to detect the end of input when decoding, I can hit that value before the actual end of the input. And if I send/receive 4 bytes at a time, with the first bit reserved for the sign, I lose 1 bit for every 4 bytes.
Should I just sacrifice those bits, or is there a better solution I don't know about?
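One common way to handle this (my addition; the original question has no answer here) is to record, in a small header, how many padding bits were appended to the final byte, so the decoder knows exactly where the bit stream ends. A rough C sketch, assuming the output is a seekable file:

#include <stdio.h>
#include <stdint.h>

// Minimal bit writer: the first byte of the output file records how many
// padding bits were appended to the last byte, so a decoder can stop
// exactly at the end of the real data instead of relying on an EOF value.
typedef struct {
    FILE    *f;
    uint8_t  buf;   // bits accumulated so far
    int      nbits; // number of bits currently in buf
} BitWriter;

static void bw_open(BitWriter *w, FILE *f) {
    w->f = f; w->buf = 0; w->nbits = 0;
    fputc(0, f);                          // placeholder for the padding count
}

static void bw_put(BitWriter *w, int bit) {
    w->buf = (uint8_t)((w->buf << 1) | (bit & 1));
    if (++w->nbits == 8) { fputc(w->buf, w->f); w->buf = 0; w->nbits = 0; }
}

static void bw_close(BitWriter *w) {
    int pad = 0;
    if (w->nbits > 0) {                   // flush the partial last byte
        pad = 8 - w->nbits;
        fputc((uint8_t)(w->buf << pad), w->f);
    }
    fseek(w->f, 0, SEEK_SET);             // go back and record the padding count
    fputc(pad, w->f);
}

int main(void) {
    FILE *f = fopen("out.bin", "wb+");
    if (!f) return 1;
    BitWriter w;
    bw_open(&w, f);
    for (int i = 0; i < 11; i++) bw_put(&w, i & 1);  // 11 bits -> 5 padding bits
    bw_close(&w);
    fclose(f);
    return 0;
}

The decoder reads that first byte up front and then discards that many low-order bits of the last data byte, so no in-band EOF marker is needed.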

How does UTF-8 represent characters?

I'm reading UTF-8 Encoding, and I don't understand the following passage:
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes. The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
If I'm not mistaken, this means UTF-8 requires two bytes to represent 2048 characters. In other words, we need to choose 2048 patterns out of the 2^16 possible two-byte values to represent those characters.
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes.
What's the big deal about choosing 2,048 out of 65,536? However, UTF-8 explicitly sets boundaries for each byte.
According to the following statements, the number of allowed values is 30 (0xDF - 0xC2 + 0x01) for the first byte, and 64 (0xBF - 0x80 + 0x01) for the second byte.
The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
How do 1920 combinations (30 × 64) accommodate 2048 characters?
As you already know, 2047 (0x07FF) contains the raw bits
00000111 11111111
If you look at the bit distribution chart for UTF-8:
U+0000 - U+007F     0xxxxxxx
U+0080 - U+07FF     110xxxxx 10xxxxxx
U+0800 - U+FFFF     1110xxxx 10xxxxxx 10xxxxxx
U+10000 - U+10FFFF  11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
You will see that 0x07FF falls in the second line, so it is encoded as 2 bytes using this bit pattern:
110xxxxx 10xxxxxx
Substitute the raw bits into the xs and you get this result:
11011111 10111111 (0xDF 0xBF)
Which is exactly as the description you quoted says:
The first byte will have the two high bits set and the third bit clear (11011111). The second byte will have the top bit set and the second bit clear (10111111).
Think of it as a container, where the encoding reserves a few bits for its own synchronization, and you get to use the remaining bits.
So for the range in question, the encoding "template" is
110 abcde 10 fghijk
(where I have left a single space to mark the boundary between the template and the value from the code point we want to encode, and two spaces between the actual bytes)
and you get to use the 11 bits abcdefghijk for the value you actually want to transmit.
So for the code point U+07EB you get
0x07 00000111
0xEB 11101011
where the top five zero bits are dropped (remember, we only get 11 bits, because the largest value the two-byte encoding can accommodate is 0x07FF; a larger value uses a different, three-byte template), and so
0x07 = _____ 111 (template: _____ abc)
0xEB = 11 101011 (template: de fghijk)
abc de = 111 11 (where the first three come from 0x07, and the next two from 0xEB)
fghijk = 101011 (the remaining bits from 0xEB)
yielding the value
110 11111 10 101011
aka 0xDF 0xAB.
Wikipedia's article on UTF-8 contains more examples with nicely colored numbers to see what comes from where.
The range 0x00-0x7F, which can be represented in a single byte, contains 128 code points; the two-byte range thus needs to accommodate 1920 = 2048-128 code points.
The raw encoding would allow values in the range 0xC0-0xDF in the first byte, but the values 0xC0 and 0xC1 are never needed, because they would represent code points which can be represented in a single byte, and so they are invalid per the encoding spec. In other words, the 0x02 in 0xC2 comes from the fact that at least one of the high four bits of the 11 that this part of the encoding can carry (one of abcd) needs to be set for the value to require two bytes.
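To make the template concrete, here is a small C sketch (my addition, not part of the original answer) that encodes a code point in the two-byte range U+0080..U+07FF:

#include <stdint.h>
#include <stdio.h>

// Encode a code point in the range U+0080..U+07FF into two UTF-8 bytes
// using the template 110xxxxx 10xxxxxx.
static void utf8_encode_2byte(uint32_t cp, uint8_t out[2]) {
    out[0] = (uint8_t)(0xC0 | (cp >> 6));    // top 5 of the 11 payload bits
    out[1] = (uint8_t)(0x80 | (cp & 0x3F));  // low 6 payload bits
}

int main(void) {
    uint8_t b[2];
    utf8_encode_2byte(0x07EB, b);            // the example from the answer
    printf("%02X %02X\n", b[0], b[1]);       // prints DF AB
    utf8_encode_2byte(0x07FF, b);
    printf("%02X %02X\n", b[0], b[1]);       // prints DF BF
    return 0;
}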

Confusion over little and big endian

I was reading an article explaining the difference between little and big endian. I understand that big endian stores the data "big end" first and that little endian stores the data "little end" first. My confusion is in the following block of text:
Big endian machine: I think a short is two bytes, so I'll read them off: location s is address 0 (W, or 0x12) and location s + 1 is address 1 (X, or 0x34). Since the first byte is biggest (I'm big-endian!), the number must be 256 * byte 0 + byte 1, or 256*W + X, or 0x1234. I multiplied the first byte by 256 (2^8) because I needed to shift it over 8 bits.
I don't understand why they did a bit shift of 8 bits.
Also, here's another block of text I don't understand:
On a big endian machine we see:
Byte: U N I X
Location: 0 1 2 3
Which makes sense. U is the biggest byte in "UN" and is stored first. The same goes for IX: I is the biggest, and stored first.
On a little-endian machine we would see:
Byte: N U X I
Location: 0 1 2 3
If my understanding is correct, wouldn't it be "INUX" on a little-endian machine?
The full article is at https://betterexplained.com/articles/understanding-big-and-little-endian-byte-order/.
If anyone could clear this up, that would be wonderful.
Alright, so I understand how big and little endian work now:
I'll address the issue I had understanding the second block of text.
Basically, in the article, the author stated that if we store the word "UNIX" as a couple of shorts (not longs), then the final result would be "NUXI."
I'll now address the issue I had understanding the first block of text.
Basically, the shift by 8 bits moves the first byte into the high-order position when the 16-bit value is reconstructed: on a big-endian machine the byte at the lowest address is the most significant byte, and on a little-endian machine it is the least significant byte.
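Here is a small C sketch (added by me, not from the original post) that shows both ideas: reconstructing 0x1234 from two bytes with an 8-bit shift, and inspecting how the bytes of a short actually sit in memory on the machine you run it on:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    // Reconstructing a 16-bit value from two bytes read at addresses s and s+1,
    // interpreting the first byte as the most significant (big-endian read):
    uint8_t byte0 = 0x12, byte1 = 0x34;
    uint16_t value = (uint16_t)(byte0 << 8) | byte1;      // 256*byte0 + byte1
    printf("big-endian interpretation: 0x%04X\n", value); // 0x1234

    // Inspecting the actual byte order of this machine:
    uint16_t v = 0x1234;
    uint8_t bytes[2];
    memcpy(bytes, &v, sizeof v);
    printf("bytes in memory: %02X %02X (%s-endian)\n",
           bytes[0], bytes[1], bytes[0] == 0x12 ? "big" : "little");
    return 0;
}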

How many bits is a "word"?

This is from the book Assembly Language Step By Step, Jeff Duntemann:
Here’s the quick tour: A bit is a single binary digit, 0 or 1. A byte
is 8 bits side by side. A word is 2 bytes side by side. A double word
is 2 words side by side. A quad word is 2 double words side by side.
And this is from the book Principles of Computer Organization and Assembly Language: Using the Java Virtual Machine, Patrick Juola:
For convenience, 8 bits are usually grouped into a single block,
conventionally called a byte. The next-largest named block of bits is
a word. The definition and size of a word are not absolute, but vary
from computer to computer. A word is the size of the most convenient
block of data for the computer to deal with.
So is a word 2 bytes (16 bits), or is it the most convenient block of data for the computer to deal with? (I am also not sure what that means.)
I'm not familiar with either of these books, but the second is closer to current reality. The first may be discussing a specific processor.
Processors have been made with quite a variety of word sizes, not always a multiple of 8.
The 8086 and 8087 processors used 16-bit words, and it's likely this is the machine the first author was writing about.
More recent processors commonly use 32 or 64 bit words.
In the '50s and '60s there were machines with word sizes that seem quite strange to us now, such as 4, 9 and 36 bits. Since about the '70s, word size has commonly been a power of 2 and a multiple of 8.
On x86/x64 processors, a byte is 8 bits, and there are 256 possible binary states in 8 bits, 0 through 255. This is how the OS translates your keyboard keystrokes into letters on the screen. When you press the 'A' key, the keystroke is translated into the character code 97, and the computer prints a lowercase 'a' on the screen. You can confirm this in any Windows text-editing software by holding an ALT key, typing 97 on the numpad, then releasing the ALT key. If you replace '97' with any number from 0 to 255, you will see the character associated with that number on the system's character code page printed on the screen.
If a character is 8 bits, or 1 byte, then a WORD must be at least 2 characters, so 16 bits or 2 bytes. Traditionally, you might think of a word as a varying number of characters, but in a computer everything that is calculable is based on static rules. Besides, a computer doesn't know what letters and symbols are; it only knows how to count numbers. So, in computer language, if a WORD is equal to 2 characters, then a double word, or DWORD, is 2 WORDs, which is the same as 4 characters or bytes, which is equal to 32 bits. Furthermore, a quad word, or QWORD, is 2 DWORDs, the same as 4 WORDs, 8 characters, or 64 bits.
Note that these terms are limited in function to the Windows API for developers, but may appear in other circumstances (e.g. the Linux dd command uses letter suffixes to scale byte and block sizes, where c is 1 byte and w is 2 bytes).
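As a small illustration (my addition, using fixed-width C types as stand-ins for the Windows API's BYTE/WORD/DWORD/QWORD names), the sizes line up like this:

#include <stdint.h>
#include <stdio.h>

// Stand-ins for the Windows API terms: WORD = 16 bits, DWORD = 32, QWORD = 64.
typedef uint8_t  BYTE_T;
typedef uint16_t WORD_T;
typedef uint32_t DWORD_T;
typedef uint64_t QWORD_T;

int main(void) {
    printf("BYTE  = %zu bytes (%zu bits)\n", sizeof(BYTE_T),  8 * sizeof(BYTE_T));
    printf("WORD  = %zu bytes (%zu bits)\n", sizeof(WORD_T),  8 * sizeof(WORD_T));
    printf("DWORD = %zu bytes (%zu bits)\n", sizeof(DWORD_T), 8 * sizeof(DWORD_T));
    printf("QWORD = %zu bytes (%zu bits)\n", sizeof(QWORD_T), 8 * sizeof(QWORD_T));
    return 0;
}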
The second quote is correct, the size of a word varies from computer to computer. The ARM NEON architecture is an example of an architecture with 32-bit words, where 64-bit quantities are referred to as "doublewords" and 128-bit quantities are referred to as "quadwords":
A NEON operand can be a vector or a scalar. A NEON vector can be a 64-bit doubleword vector or a 128-bit quadword vector.
Generally speaking, 16-bit words are only found on 16-bit systems, like the Amiga 500.
This is from the book Hackers: Heroes of the Computer Revolution by Steven Levy.
.. the memory had been reduced to 4096 "words" of eighteen bits each.
(A "bit" is a binary digit, either a 1 or 0. A series of binary
numbers is called a "word").
As the other answers suggest, a "word" does not seem to have a fixed length.
In addition to the other answers, a further example of the variability of word size (from one system to the next) is in the paper Smashing The Stack For Fun And Profit by Aleph One:
We must remember that memory can only be addressed in multiples of the
word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer
is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer
is going to take 12 bytes (3 words) of memory.
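The rounding the paper describes is just "round up to the next multiple of the word size"; here is a quick sketch of that arithmetic (my addition, assuming a 4-byte word as in the quote):

#include <stdio.h>

// Round a size up to the next multiple of the word size (4 bytes here).
static unsigned round_to_word(unsigned n, unsigned word) {
    return (n + word - 1) / word * word;
}

int main(void) {
    printf("%u -> %u bytes\n", 5u,  round_to_word(5, 4));   // 5 -> 8
    printf("%u -> %u bytes\n", 10u, round_to_word(10, 4));  // 10 -> 12
    return 0;
}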
"most convenient block of data" probably refers to the width (in bits) of the WORD, in correspondance to the system bus width, or whatever underlying "bandwidth" is available. On a 16 bit system, with WORD being defined as 16 bits wide, moving data around in chunks the size of a WORD will be the most efficient way. (On hardware or "system" level.)
With Java being more or less platform independant, it just defines a "WORD" as the next size from a "BYTE", meaning "full bandwidth". I guess any platform that's able to run Java will use 32 bits for a WORD.
Another instance of a book citing the variable length of the word is Operating System Concepts by Silberschatz, Galvin, Gagne, where the authors in Chapter 1, page 6, state:
A less common term is "word",
which is a given computer architecture's native storage unit. A word is
generally made up of one or more bytes. For example, a computer may have
instructions to move 64-bit (8-byte) words.
