In the MIX computer a word is composed of five bytes and a sign. How is the sign represented in memory? Is it another byte so each word is six bytes really?
Thanks.
Your question is not quite clear. The architecture specification doesn't specify an actual implementation. It only specifies the observable behavior.
The important thing is that in MIX, memory access is aligned to words. In some other architectures, like x86, you can read a word starting from an arbitrary, even non-word-aligned address, but not in MIX. That means you can't access a "sign" in any way other than as the sign of the corresponding word. It in turn means that if someone wanted to implement MIX in hardware, it would be enough to use just 31 bits per word, i.e. 1 bit for the sign + 5 "bytes" of 6 bits each.
If you want to emulate MIX on standard modern hardware, where "bytes" are multiples of 8 bits, you have a few choices:
Use one 32-bit value for the whole word and simulate its internal structure with some bit-mask operations.
Use six 8-bit bytes: one 8-bit byte per MIX 6-bit byte and one more for the sign.
Obviously, there are more contrived options.
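For illustration, here is a minimal sketch of the first option in C (the type name, function names and exact bit layout are my own assumptions - MIX itself mandates none of this):

#include <stdint.h>

/* Hypothetical packing: bit 30 holds the sign, bits 29..0 hold the five 6-bit MIX "bytes". */
typedef uint32_t mix_word;

#define MIX_SIGN_BIT (1u << 30)

/* Extract MIX byte i (1..5, numbered left to right as in Knuth). */
static unsigned mix_get_byte(mix_word w, int i)
{
    return (w >> (6 * (5 - i))) & 0x3Fu;
}

/* Store a 6-bit value v into MIX byte i (1..5). */
static mix_word mix_set_byte(mix_word w, int i, unsigned v)
{
    int shift = 6 * (5 - i);
    return (w & ~(0x3Fu << shift)) | ((v & 0x3Fu) << shift);
}

/* The sign is just one more bit: 0 means '+', 1 means '-'. */
static int mix_is_negative(mix_word w)
{
    return (w & MIX_SIGN_BIT) != 0;
}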
Sometimes I encounter questions about converting something to bytes. Are there cases where it is vitally important to convert something to bytes, and what would I convert something to bytes for?
In most languages the most common string functions come as part of the language, or in a library/include/import that comes pre-made, often employing object code to take advantage of processor-based string functions. However, sometimes you need to do something with a string that isn't natively supported by the language, so ever since the 8-bit days people have viewed strings as arrays of 7- or 8-bit characters, which fit within a byte, and have used conventions like ASCII to determine which byte value represents which character.
While standard languages often have functions like "string.replaceChar(OFFSET,'a')", this methodology can be painstakingly slow, because each call to the replaceChar method incurs processing overhead which may be greater than the processing that actually needs to be done.
There is also the simplicity factor when designing your own string algorithms, but like I said, most of the common algorithms come prebuilt in modern languages (stringCompare, trimString, reverseString, etc.).
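As a toy example of the byte-array view, here is what one of those prebuilt routines might look like if you wrote it yourself in C (a sketch; the function name is mine):

#include <string.h>

/* Reverse a NUL-terminated string in place by swapping bytes from both ends. */
void reverse_string(char *s)
{
    size_t i = 0, j = strlen(s);
    while (j-- > i) {          /* j now points at the last unswapped byte */
        char tmp = s[i];
        s[i++] = s[j];
        s[j] = tmp;
    }
}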
Suppose you want to perform an operation on a string which doesn't come as standard.
Suppose you want to add two numbers which are represented as decimal digits in strings, and the size of these numbers is greater than the 64-bit bus size of the processor. The RSA encryption/decryption behind the SSL browser padlocks employs numbers which don't fit into the word size of a desktop computer, but nonetheless the programs on a desktop which deal with RSA certificates and keys must be able to process this data, which is actually strings.
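A rough sketch of that digit-by-digit idea in C (the function name and the fixed-size output buffer are my own simplifications; a real RSA implementation works on binary limbs, not decimal strings):

#include <stdio.h>
#include <string.h>

/* Add two non-negative decimal numbers given as ASCII strings,
 * processing one digit (one byte) at a time from the right.
 * Assumes outsize is large enough for the result plus a NUL. */
void add_decimal_strings(const char *a, const char *b, char *out, size_t outsize)
{
    size_t la = strlen(a), lb = strlen(b);
    size_t pos = outsize - 1;
    int carry = 0;

    out[pos] = '\0';
    while (la > 0 || lb > 0 || carry) {
        int da = la > 0 ? a[--la] - '0' : 0;   /* '0'..'9' are contiguous in ASCII */
        int db = lb > 0 ? b[--lb] - '0' : 0;
        int sum = da + db + carry;
        carry = sum / 10;
        out[--pos] = (char)('0' + sum % 10);
    }
    memmove(out, out + pos, outsize - pos);    /* shift the result to the front */
}

int main(void)
{
    char result[128];
    add_decimal_strings("18446744073709551616", "1", result, sizeof result);
    printf("%s\n", result);                    /* prints 18446744073709551617 */
    return 0;
}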
There are many and varied reasons you would want to deal with a string as an array of bytes, but each of these reasons is fairly specialised.
Any byte sequences that can not be present in valid x86 code?
I'm looking for a byte sequence (or sequences), to inject into an x86 program compiled using GCC, that cannot show up in the binary as a by-product of compilation.
The reason is that I want these byte sequences to act as "labels", so that I can recognize them later during inspection.
Is it possible to construct patterns of bytes so that, when searching through the binary, these patterns will not show up except with very small probability (I would prefer probability zero)? In other words, I want to minimize the number of false positives!
There are sequences that today are not a valid encoding of any instruction.
Rather than digging through the opcode tables in the Intel Manual Volume 2, you can exploit two facts of the x86 architecture:
The maximum instruction length is 15 bytes.
You can repeat prefixes.
These should also be more stable across generations than reserved opcodes.
The sequence 66 66 66 66 66 66 66 66 66 66 66 66 66 66 66 (15 operand-size override prefixes, but any prefix will do) will generate an #UD exception because it is invalid.
For what it's worth, there is a specific instruction that fulfills the role of invalid instruction: ud2.
Its presence in a binary module is possible, but it's more idiomatic than an invalid encoding and it is standard; for example, Linux uses it to mark a bug: if ud2 is reached in the execution flow, the code behind it cannot be valid.
That said, if I got you right, that's not going to be useful to you.
You want to skip the process of decoding the instructions and scan the code section of the binary instead.
There is no guarantee that the code section will contain only code; for example, ARM compilers generate literal pools - though that's definitely uncommon on x86.
However, compilers usually align functions to a specific boundary (usually 16 bytes); this can be done in several ways - like stretching the previous function, or with mere padding.
This padding can be a sequence of bytes of any value - hence arbitrary bytes can be present in the code section.
Long story short, there is no universal byte sequence that appears with probability zero in the code section.
Everything that is not in the execution flow can have any value.
We will deal with probability later; for now let's assume the sequence 66..66h appears rarely enough in an executable.
You can't just use it directly, as 66..66h can be part of two instructions and thus be a valid sequence:
mov rax, 6666666666666666h    ; 8 bytes of 66h as an immediate operand
db 66h, 66h, 66h, 66h         ; plus 7 more 66h bytes acting as
db 66h, 66h, 66h              ; redundant prefixes of the next instruction
nop
is valid.
This is due to the immediate operands of instructions - the biggest immediate can be 8 bytes long (as of today), so the sequence must be lengthened to 15 + 8 = 23 bytes.
If you really want to be safe against future features, you can use a sequence of 14 + 15 = 29 bytes (based on the 15-byte instruction length limit).
It's possible to find 23/29 bytes of value 66h in the code section or in the whole binary.
But how probable is that?
If the bytes in a binary were uniformly random, then the probability would be astronomically small: 256^-23 = 2^-184.
Well, the point is that the bytes in a binary are not uniformly random.
You can open a file with an embedded icon to confirm that.
You can make the probability arbitrarily small by stretching the sequence - it's up to you to find a compromise between the length and an acceptable number of false positives.
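If you do go the byte-pattern route, here is a sketch of both halves in C for GCC (the macro and function names are mine, and the 23-byte length follows the 15 + 8 reasoning above):

#include <stddef.h>

#define MARKER_BYTE 0x66
#define MARKER_LEN  23   /* 15 (max instruction length) + 8 (max immediate), as argued above */

/* Emit the marker into the code section (GCC/GAS syntax). Keep it out of the
 * execution flow, e.g. behind an unconditional jump, since executing it faults. */
#define EMIT_MARKER() __asm__ volatile(".rept 23\n\t.byte 0x66\n\t.endr")

/* Scan a buffer (e.g. the mapped code section) for a run of MARKER_LEN
 * marker bytes; return the offset of the run, or -1 if it is not found. */
static long find_marker(const unsigned char *buf, size_t len)
{
    size_t run = 0;
    for (size_t i = 0; i < len; i++) {
        run = (buf[i] == MARKER_BYTE) ? run + 1 : 0;
        if (run == MARKER_LEN)
            return (long)(i - MARKER_LEN + 1);
    }
    return -1;
}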
It's unclear what you want to do, but here is some advice:
Most, if not all, build tools support generating a map file.
It is a file with all the symbols/names and their addresses.
If you could use actual labels (with a prefix and a random suffix) you'd collect them easily after the build.
Most output formats can be enriched with meta-information.
You can add an ELF/PE section with a table of offsets to the locations you want to mark.
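For example, with GCC/Clang on ELF you could let the toolchain collect the marks for you (a sketch; the section name ".my_marks" and the macro are mine):

#include <stdio.h>

/* Record the address of something you care about in a dedicated ELF section
 * named ".my_marks", instead of injecting magic bytes into the instruction stream. */
#define MARK_FUNCTION(fn)                                              \
    static const void *mark_##fn                                       \
        __attribute__((section(".my_marks"), used)) = (const void *)&fn

void interesting_function(void)
{
    puts("doing interesting things");
}
MARK_FUNCTION(interesting_function);

/* After the build, dump the table of recorded addresses with e.g.:
 *   objdump -s -j .my_marks ./a.out
 */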
I'm still confused about bits and bytes although I've been searching the internet. Is one ASCII character = 1 byte = 8 bits? So 8 bits have 256 unique patterns that cover all of the ASCII codes - in what form is it stored in our computer?
And if I typed "Hello" does that mean this consists of 5 bytes?
Yes to everything you wrote. "Bit" is a binary digit: a 0 or a 1. Historically there existed bytes of smaller sizes; now "byte" only ever means "8 bits of information", or a number between 0 and 255.
No. ASCII is a character set with 128 codepoints stored as the values 0-127. Modern computers predominantly address 8-bit memory and disk locations so a 7-bit ASCII value takes up 8 bits.
There is no text but encoded text. An encoding maps a member of a character set to one or more bytes. Unless you absolutely know you are using ASCII, you probably aren't. There are quite a few character sets with encodings that cover all 256 byte values and use any combination of byte values to encode a string.
There are several character sets that are similar but have a few less than 256 characters. And others that use more than one byte to encode a codepoint and don't use every combination of byte values.
Just so you know, Unicode is the predominant character set except in very specialized situations. It has several encodings. UTF-8 is often used for storage and streams. UTF-16 is often used in memory, particularly in Java, .NET, JavaScript, XML, …. When text is communicated between systems, there has to be an agreement, specification, standard, or indication about which character set and encoding it uses so a sequence of bytes can be interpreted as characters.
To add to the confusion, programming languages have data types called char, Character, etc. You have to look at the specific language's reference manual to see what they mean. For example in C, char is simply an integer that is defined as the size of the encoding of character used by that C implementation. (C also calls this a "byte" and it is not necessarily 8 bits. In all other contexts, people mean 8 bits when they say "byte". If they want to be exceedingly unambiguous they might say "octet".)
"Hello" is five characters. In a specific character set, it is five codepoints. In a specific encoding for that character set, it could be 5, 10 or 20, or ??? bytes.
Also, in the source code of a specific language, a literal string like that might be "null-terminated". This means that you could say it is 6 "characters". Other languages might store a string as a counted sequence of code units. Again, you have to look at the language reference to know the underlying data structure of strings. Or, if the language and the libraries used with it are sufficiently high-level, you might never need to know such internals.
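If it helps, here is a tiny C sketch (assuming C11 for the u"" literal and an ASCII-compatible execution charset) that shows the different byte counts for "Hello":

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "Hello";
    printf("characters          : 5\n");
    printf("bytes in ASCII/UTF-8: %zu (+1 if you count the NUL terminator)\n", strlen(s));
    printf("bytes in UTF-16     : %zu (including a 2-byte NUL terminator)\n", sizeof u"Hello");
    return 0;
}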
I am reverse engineering things and I often stumble upon various decompression algorithms. Most of the time it's LZSS, just like Wikipedia describes it:
Initialize dictionary of size 2^n
While output is less than known output size:
Read flag
If the flag is set, output literal byte (and append it at the end of dictionary)
If the flag is not set:
Read length and look-behind position
Transcribe length bytes from the dictionary at the look-behind position to the output, appending them at the end of the dictionary as well.
The thing is that the implementations follow two schools of thought on how to encode the flag. The first one treats the input as a sequence of bits:
(...)
Read flag as one bit
If it's set, read literal byte as 8 unaligned bits
If it's not set, read length and position as n and m unaligned bits
This involves lots of bit shift operations.
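A minimal sketch of such an unaligned bit reader in C (struct and field names are mine; I assume MSB-first bit order, which differs between implementations):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    const uint8_t *data;   /* compressed input                              */
    size_t pos;            /* current byte offset                           */
    int    bit;            /* next bit within the byte, MSB first (assumed) */
} bit_reader;

/* Read n unaligned bits (n <= 32), MSB first. */
static uint32_t read_bits(bit_reader *r, int n)
{
    uint32_t v = 0;
    while (n--) {
        uint32_t b = (r->data[r->pos] >> (7 - r->bit)) & 1u;
        v = (v << 1) | b;
        if (++r->bit == 8) { r->bit = 0; r->pos++; }
    }
    return v;
}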
The other one saves a little CPU time by using bitwise operations only for flag storage, whereas literal bytes, length and position are derived from aligned input bytes. To achieve this, it breaks the linearity by fetching a few flags in advance. So the algorithm is modified like this:
(...)
Read 8 flags at once by reading one byte. For each of these 8 flags:
If it's set, read literal as aligned byte
If it's not set, read length and position as aligned bytes (deriving the specific values from the fetched bytes involves some bit operations, but it's nowhere near as expensive as the first version).
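For reference, here is a sketch of this second variant as a C decoder (the 12-bit position / 4-bit length split and the +3 length bias are common conventions, but they are my assumptions here; the "dictionary" is simply the output produced so far):

#include <stdint.h>
#include <stddef.h>

/* Byte-aligned LZSS decoder sketch: one flag byte covers the next 8 items.
 * Flag bit set   -> copy one literal byte to the output.
 * Flag bit clear -> read a 2-byte pair: 12-bit look-behind distance and
 *                   4-bit length (stored length + 3). */
static size_t lzss_decode(const uint8_t *in, size_t in_len,
                          uint8_t *out, size_t out_cap)
{
    size_t ip = 0, op = 0;

    while (ip < in_len && op < out_cap) {
        uint8_t flags = in[ip++];
        for (int bit = 0; bit < 8 && ip < in_len && op < out_cap; bit++) {
            if (flags & (1u << bit)) {                    /* literal */
                out[op++] = in[ip++];
            } else {                                      /* (distance, length) pair */
                if (ip + 1 >= in_len)
                    return op;
                uint16_t pair = (uint16_t)(in[ip] | (in[ip + 1] << 8));
                ip += 2;
                size_t dist = pair & 0x0FFFu;
                size_t len  = (pair >> 12) + 3;
                if (dist == 0 || dist > op)               /* corrupt input */
                    return op;
                while (len-- && op < out_cap) {
                    out[op] = out[op - dist];             /* copy from earlier output */
                    op++;
                }
            }
        }
    }
    return op;                                            /* bytes produced */
}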
My question is: are these both valid LZSS implementations, or did I identify these algorithms wrong? Are there any known names for them?
They are effectively variants of LZSS, since both use one bit to decide between literal and match. More generally, they are variants of LZ77.
Deflate is also a variant on LZ77, which does not use a whole bit for literal vs. match. Instead deflate has a single code for the combination of literals and lengths, so the code implicitly determines whether the next thing is a literal or a match. A length code is followed by a separate distance code.
lz4 (a specific algorithm, not a family) handles byte alignment in a different way, coding the number of literals, which is necessarily followed by a match. The first byte with the number of literals also has part of the match length. The literals are byte aligned, as is the offset that follows the literals and the rest of the match length.
This is from the book Assembly Language Step By Step, Jeff Duntemann:
Here’s the quick tour: A bit is a single binary digit, 0 or 1. A byte
is 8 bits side by side. A word is 2 bytes side by side. A double word
is 2 words side by side. A quad word is 2 double words side by side.
And this is from the book Principles of Computer Organization and Assembly Language: Using the Java Virtual Machine, Patrick Juola:
For convenience, 8 bits are usually grouped into a single block,
conventionally called a byte. The next-largest named block of bits is
a word. The definition and size of a word are not absolute, but vary
from computer to computer. A word is the size of the most convenient
block of data for the computer to deal with.
So is a word 2 bytes (16 bits), or is it the most convenient block of data for the computer to deal with? (I am also not sure what this means..)
I'm not familiar with either of these books, but the second is closer to current reality. The first may be discussing a specific processor.
Processors have been made with quite a variety of word sizes, not always a multiple of 8.
The 8086 and 8087 processors used 16 bit words, and it's likely this is the machine the first author was writing about.
More recent processors commonly use 32 or 64 bit words.
In the '50s and '60s there were machines with word sizes that seem quite strange to us now, such as 4, 9 and 36 bits. Since about the '70s, word size has commonly been a power of 2 and a multiple of 8.
On x86/x64 processors, a byte is 8 bits, and there are 256 possible binary states in 8 bits, 0 through 255. This is how the OS translates your keyboard keystrokes into letters on the screen. When you press the 'A' key (unshifted), the system ends up with the number 97, and the computer prints a lowercase 'a' on the screen. You can confirm this in any Windows text-editing software by holding an ALT key, typing 97 on the numpad, then releasing the ALT key. If you replace '97' with any number from 0 to 255, you will see the character associated with that number in the system's character code page printed on the screen.
If a character is 8 bits, or 1 byte, then a WORD must be at least 2 characters, so 16 bits or 2 bytes. Traditionally, you might think of a word as a varying number of characters, but in a computer, everything that is calculable is based on static rules. Besides, a computer doesn't know what letters and symbols are; it only knows how to count numbers. So, in computer terms, if a WORD is equal to 2 characters, then a double word, or DWORD, is 2 WORDs, which is the same as 4 characters or bytes, which is equal to 32 bits. Furthermore, a quad word, or QWORD, is 2 DWORDs, the same as 4 WORDs, 8 characters, or 64 bits.
Note that these terms are mostly limited to the Windows API for developers, but may appear in other circumstances (e.g. the Linux dd command uses suffixes to express byte and block sizes, where c is 1 byte and w is 2 bytes).
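A quick C sketch makes the fixed sizes explicit, using the portable <stdint.h> equivalents of those Windows-API typedefs (the mapping is the conventional one, not something the C standard itself defines):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The Windows-API meanings, expressed with portable fixed-width types. */
    printf("BYTE  : %zu bits\n", 8 * sizeof(uint8_t));    /* 8  */
    printf("WORD  : %zu bits\n", 8 * sizeof(uint16_t));   /* 16 */
    printf("DWORD : %zu bits\n", 8 * sizeof(uint32_t));   /* 32 */
    printf("QWORD : %zu bits\n", 8 * sizeof(uint64_t));   /* 64 */
    return 0;
}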
The second quote is correct, the size of a word varies from computer to computer. The ARM NEON architecture is an example of an architecture with 32-bit words, where 64-bit quantities are referred to as "doublewords" and 128-bit quantities are referred to as "quadwords":
A NEON operand can be a vector or a scalar. A NEON vector can be a 64-bit doubleword vector or a 128-bit quadword vector.
Normally speaking, 16-bit words are only found on 16-bit systems, like the Amiga 500.
This is from the book Hackers: Heroes of the Computer Revolution by Steven Levy.
.. the memory had been reduced to 4096 "words" of eighteen bits each.
(A "bit" is a binary digit, either a 1 or 0. A series of binary
numbers is called a "word").
As the other answers suggest, a "word" does not seem to have a fixed length.
In addition to the other answers, a further example of the variability of word size (from one system to the next) is in the paper Smashing The Stack For Fun And Profit by Aleph One:
We must remember that memory can only be addressed in multiples of the
word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer
is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer
is going to take 12 bytes (3 words) of memory.
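The rounding the paper describes is the usual "round up to the next multiple of the word size" computation; a small C sketch (assuming the 4-byte word of the quote) reproduces its numbers:

#include <stdio.h>

#define WORD_SIZE 4   /* bytes, as in the quoted example */

/* Round n up to the next multiple of WORD_SIZE (valid when WORD_SIZE is a power of 2). */
static unsigned round_to_word(unsigned n)
{
    return (n + WORD_SIZE - 1) & ~(unsigned)(WORD_SIZE - 1);
}

int main(void)
{
    printf("%u\n", round_to_word(5));    /* 8, i.e. 2 words  */
    printf("%u\n", round_to_word(10));   /* 12, i.e. 3 words */
    return 0;
}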
"most convenient block of data" probably refers to the width (in bits) of the WORD, in correspondance to the system bus width, or whatever underlying "bandwidth" is available. On a 16 bit system, with WORD being defined as 16 bits wide, moving data around in chunks the size of a WORD will be the most efficient way. (On hardware or "system" level.)
With Java being more or less platform-independent, it just defines a "WORD" as the next size up from a "BYTE", meaning "full bandwidth". I guess any platform that's able to run Java will use 32 bits for a WORD.
Another instance of a book citing the variable length of the word is Operating System Concepts by Silberschatz, Galvin, and Gagne, where the authors state in Chapter 1, page 6:
A less common term is "word",
which is a given computer architecture's native storage unit. A word is
generally made up of one or more bytes. For example, a computer may have
instructions to move 64-bit (8-byte) words.