Why does SPIR-V use a "4 byte" code, or "word code"?

According to the SPIR-V 1.3 specification, as I understand it, all instructions are segmented into 4-byte words. This is different from traditional bytecode IR languages like Python's bytecode and Java's JVM bytecode, which, if I understand correctly, literally segment operations as individual bytes. I've even heard people offhandedly talk about how this is specifically a good thing ("word code"), but never elaborate on why. Why did the Khronos Group decide to go down this route for its SPIR-V definition, instead of using a single-byte format, or some other arbitrary size?

There are many reasons, but probably the biggest is that it solves the endian issue in a simple, efficient, and extensible fashion.
SPIR-V is defined to be little-endian-encoded. So on a big-endian system, step 1 has to be converting the data to the proper endianness. In a more byte-oriented assembly style, you would have to read and interpret each opcode, then figure out which operands have to be byte-swapped. Basically, endian-swapping requires interpreting the data. With SPIR-V, you just byte-swap every 4-byte word: a simple and efficient loop.
Plus, SPIR-V is extensible; extensions can add to the set of opcodes. In a byte-oriented assembly, you would need to be able to understand every extension opcode in order to do endian conversion. With SPIR-V, you don't care whether you understand all of the opcodes; just swap every 4-byte word and you're done. So you can write a tool that can at least read SPIR-V assembly even if it doesn't understand all of the opcodes.
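For illustration, the whole conversion can be sketched in a few lines of Python (this is my own sketch, not part of any official SPIR-V tooling); the only SPIR-V-specific fact it relies on is the magic number 0x07230203 in the first word, which tells you whether swapping is needed at all:

import struct

SPIRV_MAGIC = 0x07230203

def to_native_endian(blob: bytes) -> bytes:
    # A SPIR-V module is just a stream of 32-bit words, so endian conversion
    # is a byte swap of every 4-byte group -- no need to understand a single
    # opcode, core or extension.
    assert len(blob) % 4 == 0, "SPIR-V modules are a whole number of words"
    if struct.unpack_from("=I", blob)[0] == SPIRV_MAGIC:
        return blob                      # already in this machine's byte order
    swapped = b"".join(blob[i:i + 4][::-1] for i in range(0, len(blob), 4))
    assert struct.unpack_from("=I", swapped)[0] == SPIRV_MAGIC, "not a SPIR-V module?"
    return swapped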
This makes SPIR-V a bit bulkier compared to other byte-codes, but it's a lot easier to chew.

Related

Are there any advantages to network byte order in a new protocol?

(I know many people are going to be tempted to close this question; please don't; I'm asking for concrete technical answers, if any exist.)
"Network byte order" is big-endian for reasons that cannot be asked on stackoverflow. Lots of old protocols use that order and can't be changed but I wonder if there are any technical reasons to choose big endian for a new protocol.
I would think little endian is better, because 99.99% of processors in use are little endian (ARM can technically do both, but in reality it is always set to little endian). So I was surprised to see that CBOR, a relatively recent protocol, uses big endian. Is there an advantage that I haven't thought of?
It boils down to human factors: It is easier to read a multi-byte integer in a hex dump if it is encoded with the most significant byte(s) first. For example, the CBOR representation of 0x1234 (4,660) is the byte sequence 19 12 34. If you are looking for the value 0x1234, it is easier to spot it that way.
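A quick way to see this in a dump (plain Python, not CBOR-specific; in the CBOR bytes above, the 19 is just the header byte announcing that a 16-bit unsigned integer follows):

import struct

value = 0x1234
print(struct.pack(">H", value).hex(" "))   # '12 34' -- big-endian, reads like the number
print(struct.pack("<H", value).hex(" "))   # '34 12' -- little-endian, reversed in the dump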
TLDR;
I've been in the field for over 40 years now, so there's a lot of history behind this. Even the definition of a "byte" has changed over that many years, so this may take a bit of an open mind to understand how this evolved.
Dumps of binary information weren't always in bytes, nor in hexadecimal. For example, on the PDP-11 (with 16-bit words and 8-bit bytes), word-wide dumps in octal notation were common. This was useful because of the machine architecture, which included 8 registers and 8 addressing modes, so machine-language dumps in octal were easier to decode than hex.

Why is AVR instruction format so complex?

While looking at the Atmel 8-bit AVR instruction set (http://www.atmel.com/Images/doc0856.pdf) I found the instruction format quite complex. A lot of instructions have different bit fields, where bits of operands/opcode are at different places in the instruction: why is that so? Isn't it more difficult for the decode unit to actually decode the opcode and operands with this format?
I think it largely comes down to considerations of compatibility.
At some stage an instruction gets more powerful, and the additional option is encoded in the bits that were still free, so that the old instruction word still invokes the old behaviour.
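To make the scattered fields concrete, take LDI Rd, K, whose 16-bit encoding is 1110 KKKK dddd KKKK: the 8-bit immediate is split into two nibbles around the register field. A decoder sketch (the names are mine):

def decode_ldi(word: int):
    # LDI Rd, K: 1110 KKKK dddd KKKK -- the immediate's two nibbles
    # straddle the 4-bit register field.
    assert (word >> 12) == 0b1110, "not an LDI opcode"
    k = ((word >> 4) & 0xF0) | (word & 0x0F)   # reassemble the split immediate
    d = 16 + ((word >> 4) & 0x0F)              # LDI only targets r16..r31
    return d, k

print(decode_ldi(0xE50A))   # -> (16, 90), i.e. LDI r16, 0x5A

In hardware the split fields cost very little, since each field is wired from fixed bit positions of the instruction word; the "reassembly" is routing rather than logic.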

The reason behind endianness?

I was wondering why some architectures use little-endian and others big-endian. I remember reading somewhere that it has to do with performance; however, I don't understand how endianness can influence it. Also, I know that:
The little-endian system has the property that the same value can be read from memory at different lengths without using different addresses.
Which seems a nice feature, but, even so, many systems use big-endian, which probably means big-endian has some advantages too (if so, which?).
I'm sure there's more to it, most probably digging down to the hardware level. Would love to know the details.
I've looked around the net a bit for more information on this question and there is quite a range of answers and reasonings to explain why big- or little-endian ordering may be preferable. I'll do my best to explain here what I found:
Little-endian
The obvious advantage of little-endianness is what you mentioned already in your question: the fact that a given number can be read at varying widths from the same memory address. As the Wikipedia article on the topic states:
Although this little-endian property is rarely used directly by high-level programmers, it is often employed by code optimizers as well as by assembly language programmers.
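A small sketch of that property, using Python's struct module to stand in for raw memory:

import struct

buf = struct.pack("<I", 42)                    # 32-bit little-endian: b'\x2a\x00\x00\x00'
assert struct.unpack("<B", buf[:1])[0] == 42   # read 1 byte  at the same address
assert struct.unpack("<H", buf[:2])[0] == 42   # read 2 bytes at the same address
assert struct.unpack("<I", buf[:4])[0] == 42   # read 4 bytes at the same address
# (works because the value fits in the smaller widths -- that is the point of the quote)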
Because of this, multiple-precision arithmetic routines are easier to write, because the byte significance always corresponds to the memory address, whereas with big-endian numbers this is not the case. This seems to be the argument for little-endianness that is quoted over and over again; because of its prevalence I would have to assume that the benefits of this ordering are relatively significant.
Another interesting explanation that I found concerns addition and subtraction. When adding or subtracting multi-byte numbers, the least significant byte must be fetched first to see whether there is a carry into the more significant bytes. Because the least significant byte is read first in little-endian numbers, the system can begin calculation on this byte in parallel while fetching the following byte(s).
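A sketch of both points, treating a big integer as a little-endian byte string (a toy representation, purely for illustration):

def add_le(a: bytes, b: bytes) -> bytes:
    # Walk both numbers in storage order: on a little-endian layout that is
    # also least-significant-byte first, so the carry can be propagated as
    # the bytes are fetched.
    out, carry = bytearray(), 0
    for x, y in zip(a, b):          # assumes equal lengths, for brevity
        s = x + y + carry
        out.append(s & 0xFF)
        carry = s >> 8
    out.append(carry)
    return bytes(out)

print(int.from_bytes(add_le((999).to_bytes(4, "little"),
                            (1).to_bytes(4, "little")), "little"))   # 1000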
Big-endian
Going back to the Wikipedia article, the stated advantage of big-endian numbers is that the size of the number can be more easily estimated because the most significant digit comes first. Related to this is that it is simple to tell whether a number is positive or negative by examining the sign bit in the byte at the lowest address.
Also stated when discussing the benefits of big-endianness is that the digits are ordered the way most people write base-10 numbers, which is said to be advantageous when converting between binary and decimal.
While all these arguments are interesting (at least I think so), their applicability to modern processors is another matter. In particular, the addition/subtraction argument was most valid on 8 bit systems...
For my money, little-endianness seems to make the most sense, and it is by far the most common ordering across the devices that use it. I think that big-endianness is still used more for legacy reasons than for performance: perhaps at one time the designers of a given architecture decided that big-endianness was preferable to little-endianness, and as the architecture evolved over the years the endianness stayed the same.
The parallel I draw here is with JPEG (which is big-endian). JPEG is a big-endian format, despite the fact that virtually all the machines that consume it are little-endian. While one can ask what the benefits of JPEG being big-endian are, I would venture to say that for all intents and purposes the performance arguments mentioned above don't make a shred of difference. The fact is that JPEG was designed that way, and so long as it remains in use, that way it shall stay.
I would assume that it was once the hardware designers of the first processors who decided which endianness would best integrate with their preferred/existing/planned micro-architecture for the chips they were developing from scratch.
Once established, and for compatibility reasons, the endianness was more or less carried on to later generations of hardware, which supports the 'legacy' argument for why both kinds still exist today.

Building a custom machine code from the ground up

I have recently begun working with logic level design as an amateur hobbyist but have now found myself running up against software, where I am much less competent. I have completed designing a custom 4-bit CPU in Logisim loosely based on the paper "A Very Simple Microprocessor" by Etienne Sicard. Now that it does the very limited functions that I've built into it (addition, logical AND, OR, and XOR) without any more detectable bugs (crossing fingers), I am running into the problem of writing programs for it. Logisim has the functionality of importing a script of hex numbers into a RAM or ROM module, so I can write programs for it using my own microinstruction code, but where do I start? I'm quite literally at the most basic possible level of software design and don't really know where to go from here. Any good suggestions on resources for learning about this low level of programming, or suggestions on what I should try from here? Thanks much in advance; I know this probably isn't the most directly applicable question ever asked on this forum.
I'm not aware of the paper you mention. But if you have designed your own custom CPU and want to write software for it, you have two choices: a) write it in machine code, or b) write your own assembler.
Obviously I'd go with b. This will require that you shift gears a bit and do some high-level programming. What you are aiming to write is an assembler program that runs on a PC and converts some simple assembly language into your custom machine code. The assembler itself will be a high-level program, and as such I would recommend writing it in a high-level programming language that is good at both string manipulation and binary manipulation; I would recommend Python.
You basically want your assembler to be able to read in a text file like this:
mov a, 7
foo:
mov b, 20
add a, b
cmp a, b
jg foo
(I just made this program up; it's nonsense.)
And convert each line of code into the binary pattern for that instruction, outputting a binary file (or perhaps a hex file, since you said your microcontroller can read in hex values). From there, you will be able to load the program up onto the CPU.
So, I suggest you:
Come up with (on paper) an assembly language that is a simple written representation for each of the opcodes your machine supports (you may have already done this),
Learn simple Python,
Write a Python script that reads one line at a time (sys.stdin.readline()), figures out which opcode it is and what values it takes, and outputs the corresponding machine code to stdout (a minimal sketch of such a script follows this list).
Write some assembly code in your assembly language that will run on your CPU.
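Here is a minimal sketch of what that Python script might look like (every encoding below is invented for illustration; substitute the opcode and register numbers your CPU actually uses, and note that label operands are left for a second pass):

import sys

OPCODES = {"mov": 0x1, "add": 0x2, "cmp": 0x3, "jg": 0x4}   # made-up opcode values
REGS = {"a": 0x0, "b": 0x1}                                  # made-up register numbers

def encode_operand(op):
    if op in REGS:
        return REGS[op]
    try:
        return int(op, 0) & 0xF        # immediate value
    except ValueError:
        return 0                       # label -- resolve it in a second pass

for line in sys.stdin:
    line = line.split(";")[0].strip()       # drop comments and whitespace
    if not line or line.endswith(":"):      # skip blank lines and label definitions
        continue
    mnemonic, *operands = line.replace(",", " ").split()
    word = OPCODES[mnemonic] << 12          # pretend the opcode is the top nibble
    for shift, op in zip((8, 4), operands):
        word |= encode_operand(op) << shift
    print(f"{word:04x}")                    # one hex word per instruction

You would run it as something like python assemble.py < program.asm > program.hex (file names are hypothetical); Logisim's memory image format may also expect a "v2.0 raw" header line, so check its documentation before loading the file.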
Sounds like a fun project.
I have done something similar that you might find interesting. I have also created my own CPU design from scratch: an 8-bit multi-cycle RISC CPU based on a Harvard architecture with variable-length instructions.
I started in Logisim, then coded everything in Verilog, and I have synthesized it in an FPGA.
To answer your question, I have written a simple and rudimentary assembler that translates a program (instructions, i.e. mnemonics + data) into the corresponding machine language, which can then be loaded into the PROG memory. I wrote it as a shell script using awk, which is what I was comfortable with.
It basically does two passes: the first pass translates mnemonics into their corresponding opcodes and translates data (operands) into hex, while keeping track of the addresses of all the labels; the second pass replaces each label with its corresponding address.
(Labels and addresses are used for jumps.)
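For readers more comfortable with Python than shell+awk, the same two-pass pattern looks roughly like this (a sketch of the idea, not the actual hrm-cpu assembler):

def assemble(lines):
    labels, program = {}, []
    for raw in lines:                         # pass 1: collect label addresses
        line = raw.split(";")[0].strip()
        if line.endswith(":"):
            labels[line[:-1]] = len(program)  # label = address of the next instruction
        elif line:
            program.append(line)
    resolved = []
    for line in program:                      # pass 2: substitute label addresses
        mnemonic, *ops = line.replace(",", " ").split()
        ops = [str(labels.get(op, op)) for op in ops]
        resolved.append(" ".join([mnemonic] + ops))
    return resolved                           # ready to be encoded into machine words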
You can see all the project, including the assembler, documented here: https://github.com/adumont/hrm-cpu
Because your instruction set is so small, and based on the thread from mguica's answer, I would say the next step is to continue and/or fully test your instruction set. Do you have flags? Do you have branch instructions? For now, just hand-generate the machine code. Flags are tricky, in particular the overflow (V) bit: you have to examine the carry in and carry out of the msbit adder to get it right. Because the instruction set is small enough, you can try the various combinations of back-to-back instructions: AND followed by OR, AND followed by XOR, AND followed by ADD, OR followed by AND, OR followed by XOR, etc., and mix in the branches. Back to flags: if XOR and OR, for example, do not touch carry and overflow, then make sure you see carry and overflow stay zero and not be touched by the logical instructions, stay one and not be touched, and also independently show that carry and overflow are separate (one on, one off, not touched by logicals, and so on). Make sure each conditional branch operates on only its one condition; lead into the various conditional branches with flag bits that should be ignored, in both states, ensuring that the conditional branch really does ignore them. Also verify that if the conditional branch is not supposed to modify the flags, it doesn't, and likewise that if the condition doesn't cause a branch, the flags are not touched...
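Getting the V bit right is the usual stumbling block, so it can help to keep a tiny software reference model to check the hardware against. A sketch for an 8-bit adder (names and width are mine; adjust to your data path):

def add8(a, b, carry_in=0):
    # Reference model for an 8-bit add with flags.
    total = a + b + carry_in
    result = total & 0xFF
    c = 1 if total > 0xFF else 0                   # carry out of bit 7
    carry_into_msb = 1 if (a & 0x7F) + (b & 0x7F) + carry_in > 0x7F else 0
    v = carry_into_msb ^ c                         # signed overflow: carry into vs. out of the msbit
    z = 1 if result == 0 else 0
    n = result >> 7                                # sign of the 8-bit result
    return result, {"C": c, "Z": z, "N": n, "V": v}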
I like to use randomization, but it may be more work than you are after. I like to independently develop a software simulator of the instruction set, which I find easier to use than the logic, and sometimes easier to use in batch testing. You can then randomize a short list of instructions, varying the instructions and the registers, and naturally test the tester by hand-computing some of the results, both the state of the registers after the test completes and the state of the flag bits. Then make that randomized list longer; at some point you can take a long instruction list, run it on the logic simulator, and see whether the logic comes up with the same register results and flag bits as the instruction-set simulator. If they differ, figure out why; if they don't, try another random sequence, and another. Filling the registers with prime numbers before starting the test is a very good idea.
Back to individual instruction testing and flags: go through all the corner cases, 0xFFFF + 0x0000, 0xFFFF + 1, and the like; exercise operands and results that sit just on either side of, and right on, the points where a flag changes. For the logicals, for example, if they use the zero flag, then use various data patterns whose results are on either side of and at zero: 0x0000, 0xFFFF, 0xFFFE, 0x0001, 0x0002, etc. Probably a walking-ones result as well: 0x0001, 0x0002, 0x0004, etc.
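Feeding the add8 sketch above the 8-bit analogues of those corner cases makes it easy to eyeball where each flag flips:

for a, b in [(0xFF, 0x00), (0xFF, 0x01), (0x7F, 0x01), (0x80, 0x80),
             (0xFE, 0x01), (0x01, 0x01)]:
    result, flags = add8(a, b)
    print(f"{a:02x} + {b:02x} = {result:02x}  {flags}")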
Hopefully I understood your question and have not pointed out the obvious, or things you have already done thus far.

Is there really such a thing as a char or short in modern programming?

I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform.
With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert back the result to char.
So my question is: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer.
Nevertheless, does the general principle hold that on modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect?
For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
The processor is very very fast compared to the memory speed. It will always pay to store values in memory as chars or shorts (though to avoid porting problems you should use int8_t and int16_t). Less cache will be used, and there will be fewer memory accesses.
I can't speak for PPC/ARM/A4, but x86 has the ability to operate on data as if it were 8-bit, 16-bit, or 32-bit (64-bit on x86_64 in 64-bit mode), although I'm not sure whether the compiler would take advantage of those instructions. Even when using a 32-bit load, the compiler could AND the data with a mask that clears the upper 16/24 bits, which would be relatively fast.
Likely, the ability to fit far more data into the cache would at least cancel out the speed difference... although the only way to know for sure would be to actually profile the code.
Of course there is a need for data structures smaller than the register size of the target machine. Imagine you are storing text encoded as UTF-8, or ASCII, in memory, where each character is mostly a byte in size; do you want to store the characters as 64-bit quantities?
The advice you are looking at is a warning not to over-optimize.
You have to balance the savings in space against the computational performance of your choice.
I wouldn't worry too much about it; today's modern CPUs are complicated enough that it's hard to make this kind of judgement on your own. Choose the obvious data type and let the compiler worry about the rest.
The addressing model of the x86 architecture is that the basic unit of memory is the 8-bit byte.
This is to simplify operations on character strings and decimal arithmetic.
Then, in order to have useful sizes of integers, the instruction set allows using these in units of 1, 2, 4, and (more recently) 8 bytes.
A fact to remember is that most software development is done for different processors than the ones most of us here deal with on a day-to-day basis.
C and assembler are common languages for these.
About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.

Resources