How to detect the architecture of an LE (linear executable) file?

What instruction set architecture does an LE (linear executable) file have? The linked article says: mixed 16/32 bit.
Does it mean that the same LE file can contain 16-bit and 32-bit code?
How do I detect whether it contains 16-bit (8086) code?
How do I detect whether it contains 32-bit (i386) code?
Please note that I'm aware of the CPU type field (see here), which can distinguish between an 80286 and an i386 (80386). However, I interpret this as a CPU type requirement, so it doesn't specify the architecture of the code: e.g. the byte 40 hex is valid in both, meaning inc ax in 16-bit code and inc eax in 32-bit code, and both can be executed by an 80386 CPU. I'm interested in what 40 hex means in the code of an LE file.

I was able to find https://www.program-transformation.org/Transform/PcExeFormat , based on which the answer is the following.
If object flag bit 13 in the object table entry is set, then it's 32-bit, otherwise it's 16-bit.
In the executable entry table, bit 1 of the LX bundle type byte distinguishes between 16-bit and 32-bit entries.
Since there can be multiple entries in these tables, a single LE or LX file may contain both 16-bit and 32-bit code.
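A minimal sketch of the object-table check in C, under the usual LE/LX layout assumptions (e_lfanew at 0x3C, object table offset at header offset 0x40, object count at 0x44, 24-byte entries with the flags dword at entry offset 8, bit 13 mask 0x2000). Verify these offsets against the format description linked above before relying on them:

#include <stdint.h>
#include <stdio.h>

/* Sketch: report whether each object in an LE/LX file is 16-bit or 32-bit.
   Assumed layout (check against the format description linked above):
   - e_lfanew at offset 0x3C of the MZ header points to the LE/LX header
   - object table offset (relative to the LE/LX header) at header offset 0x40
   - number of objects at header offset 0x44
   - each object table entry is 24 bytes, object flags dword at entry offset 8
   - flag bit 13 (mask 0x2000, "big/default") set => 32-bit object */
static uint32_t rd32(FILE *f, long off) {
    uint8_t b[4] = {0};
    fseek(f, off, SEEK_SET);
    fread(b, 1, 4, f);
    return b[0] | (b[1] << 8) | ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.le\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    long le_hdr = (long)rd32(f, 0x3C);            /* offset of the LE/LX header */
    uint32_t obj_tab = rd32(f, le_hdr + 0x40);    /* object table, relative to the header */
    uint32_t obj_cnt = rd32(f, le_hdr + 0x44);    /* number of objects */

    for (uint32_t i = 0; i < obj_cnt; i++) {
        uint32_t flags = rd32(f, le_hdr + obj_tab + i * 24 + 8);
        printf("object %u: %s\n", (unsigned)(i + 1), (flags & 0x2000) ? "32-bit" : "16-bit");
    }
    fclose(f);
    return 0;
}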

Related

What does the D flag in the code segment descriptor do for x86-64 instructions?

I'm trying to understand the workings of the D flag in the code segment descriptor when used in x86-64 code. It lives in the D/B position, bit 22 of the high doubleword of the code segment descriptor.
The Intel documentation (from section 3.4.5 Segment Descriptors) states the following:
D/B (default operation size/default stack pointer size and/or upper
bound) flag
Performs different functions depending on whether the segment
descriptor is an executable code segment, an expand-down data segment,
or a stack segment. (This flag should always be set to 1 for 32-bit
code and data segments and to 0 for 16-bit code and data segments.)
• Executable code segment. The flag is called the D flag and it
indicates the default length for effective addresses and operands
referenced by instructions in the segment. If the flag is set, 32-bit
addresses and 32-bit or 8-bit operands are assumed; if it is clear,
16-bit addresses and 16-bit or 8-bit operands are assumed. The
instruction prefix 66H can be used to select an operand size other
than the default, and the prefix 67H can be used to select an address
size other than the default.
So I'm trying to understand which x86-64 instructions it affects, and how.
PS. When I try to run some tests (in the Windows kernel) by setting that bit, the OS immediately triple faults.
If L (long mode) is set for a code segment descriptor, D must be clear. The L=1 / D=1 combination is currently meaningless / reserved. Intel documents this nearby in the same document you were looking at.
If L is clear, then D selects between 16 and 32-bit mode. (i.e. the default operand / address size). And yes, 16-bit protected mode exists, but no, nobody uses it.
There are only 3 possibilities for default address/operand-size:
16-bit modes (real, vm86, protected): default address and operand-size = 16-bit
32-bit protected mode: default address and operand-size = 32-bit
64-bit mode: default address size = 64-bit, default operand-size = 32-bit
There's no option to have sixteen 64-bit registers but a default operand size of 16-bit or 64-bit, or a default address size of 32-bit overridable to 64-bit.
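For reference, here is a small C sketch of where those bits live in an 8-byte GDT descriptor (L is bit 53, D/B is bit 54). The two descriptor values are the usual flat-model examples and are only illustrative:

#include <stdint.h>
#include <stdio.h>

/* In an 8-byte segment descriptor:
   bit 53 = L (64-bit code segment), bit 54 = D/B (default operand/address size).
   Valid code-segment combinations: L=0,D=0 (16-bit), L=0,D=1 (32-bit),
   L=1,D=0 (64-bit). L=1,D=1 is reserved. */
#define DESC_L (1ULL << 53)
#define DESC_D (1ULL << 54)

static void describe(uint64_t desc) {
    int l = !!(desc & DESC_L), d = !!(desc & DESC_D);
    const char *mode = l ? (d ? "reserved (L=1,D=1)" : "64-bit code")
                         : (d ? "32-bit code" : "16-bit code");
    printf("descriptor %016llx: L=%d D=%d -> %s\n",
           (unsigned long long)desc, l, d, mode);
}

int main(void) {
    describe(0x00CF9A000000FFFFULL);  /* typical flat 32-bit code segment */
    describe(0x00AF9A000000FFFFULL);  /* typical 64-bit code segment (L=1, D=0) */
    return 0;
}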

When are logical addresses created?

Throughout, I'm referring to x86 (Linux).
Are logical addresses created during the generation of a binary?
If yes, are they inside the binary?
Thanks
The LINKER defines the initial layout of the process's user address space. The linker thus defines the range of logical addresses and their page attributes (read or read/write, execute or no-execute).
The user area of the logical address space gets set up by the program loader when the executable is run.
The answer to your question
Are logical addresses created during the generation of a binary?
then depends upon whether you mean "created" to be when the logical address space is defined (by the linker) or when it is set up (by the program loader).
In x86, a logical address (also called a far pointer) consists of a 16-bit segment selector and a 16/32/64-bit offset (also called a near pointer). The size of the offset depends on the operating mode, the code segment descriptor, and the address-size prefix. The segment selector is then used to obtain the segment base address (in practice from the segment descriptor cache; in 64-bit mode the base address is treated as zero for all segments except FS and GS), which is added to the offset to form a virtual address. The x86 ISA offers no way to completely skip that process, so any x86 instruction must specify the two parts that constitute the logical address separately (implicitly or explicitly).
Are logical addresses created during the generation of a binary?
An x86 binary contains x86 instructions. Each instruction specifies which segment register to use and how to calculate the offset (using stuff like base, index, scale, and displacement). At run-time, when an instruction is being executed, the offset is calculated and the segment selector value is determined. So, technically, x86 instructions only tell the CPU where to get the segment selector from and how to calculate the offset, but it is the CPU that generates the logical address. Generally, the compiler and the OS determine the values of offsets, but only the OS controls the values of the segment selectors.
If yes, are they inside the binary?
x86 instructions may specify the offset as an immediate value (constant). The segment part can either be specified as an immediate value (far call or far jump), fetched from a segment register, or fetched from memory (far return). So the value of the offset might be in the binary, encoded with the instruction that uses it, but the value of the segment selector might not.
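To make the two-part nature concrete, here is a tiny illustrative C sketch (not any OS's real API; the selector and offset values are made up) of how a logical address relates to the linear/virtual address the CPU computes:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a logical address is a (segment selector, offset) pair.
   The CPU looks up the segment's base address via the selector and adds the
   offset to form a linear (virtual) address. In 64-bit mode the base is
   treated as zero for all segments except FS/GS. */
struct logical_addr {
    uint16_t selector;   /* index into GDT/LDT plus TI/RPL bits */
    uint64_t offset;     /* the "near pointer" part, produced by the instruction */
};

static uint64_t to_linear(struct logical_addr la, uint64_t segment_base) {
    /* segment_base would come from the descriptor the selector refers to */
    return segment_base + la.offset;
}

int main(void) {
    struct logical_addr la = { 0x23, 0x401000 };   /* made-up selector and offset */
    printf("linear = %#llx\n", (unsigned long long)to_linear(la, 0));
    return 0;
}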

MinGW Windows GCC can't compile C program with 2GB global data

GCC/G++ under MinGW gives relocation errors when building applications with large global or static data.
Understanding the x64 code models
References to both code and data on x64 are done with
instruction-relative (RIP-relative in x64 parlance) addressing modes.
The offset from RIP in these instructions is limited to 32 bits.
small code model promises to the compiler that 32-bit relative offsets
should be enough for all code and data references in the compiled
object. The large code model, on the other hand, tells it not to make
any assumptions and use absolute 64-bit addressing modes for code and
data references. To make things more interesting, there's also a
middle road, called the medium code model.
For the example program below, the code fails to compile despite adding the options -mcmodel=medium or -mcmodel=large:
#define SIZE 16384
float a[SIZE][SIZE], b[SIZE][SIZE];

int main() {
    return 0;
}
gcc -mcmodel=medium example.c fails to compile on MinGW/Cygwin on Windows, and likewise with the Intel and MSVC compilers on Windows.
You are limited to 32 bits for an offset, but this is a signed offset, so in practice you are actually limited to 2GiB. You asked why this is not possible, but your arrays alone are 2GiB in size (2 × 16384 × 16384 × 4 bytes), and there are things in the data segment other than just your arrays. C is a high-level language. You get the ease of just being able to define a main function, and you get all of these other things for free: a standard input and output, etc. The C runtime implements this for you, and all of it consumes stack space and room in your data segment. For example, if I build this on x86_64-pc-linux-gnu, my .bss is 0x80000020 bytes in size -- an additional 32 bytes. (I've erased PE information from my brain, so I don't remember how those are laid out.)
I don't remember much about the various machine models, but it's probably helpful to note that the x86_64 instruction set doesn't even contain instructions (that I'm aware of, although I'm not an x86 assembly expert) to access any register-relative address beyond a signed 32-bit value. For example, when you want to cram that much stuff on the stack, gcc has to do weird things like this stack pointer allocation:
movabsq $-10000000016, %r11    # load the full 64-bit constant into a scratch register
addq %r11, %rsp                # then adjust the stack pointer using a register operand
You can't addq $-10000000016, %rsp because it's more than a signed 32-bit offset. The same applies to RIP-relative addressing:
movq 10000000016(%rip), %rax   # No such addressing mode: the displacement doesn't fit in 32 bits
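A hedged workaround sketch (not from the original answers): keep the big arrays out of the static data segment entirely by allocating them at run time, so the only statically placed objects are pointers, which are easily reached with a 32-bit RIP-relative reference:

#include <stdio.h>
#include <stdlib.h>

#define SIZE 16384

/* Only these pointers live in .bss; the 2 x 1 GiB of float data is allocated
   on the heap, so no 32-bit relocation has to span it. */
static float (*a)[SIZE];
static float (*b)[SIZE];

int main(void) {
    a = malloc(sizeof(float) * SIZE * SIZE);
    b = malloc(sizeof(float) * SIZE * SIZE);
    if (!a || !b) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    a[0][0] = 1.0f;    /* same a[i][j] indexing as with the global arrays */
    free(a);
    free(b);
    return 0;
}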

Is x86 32-bit assembly code valid x86 64-bit assembly code?

Is all x86 32-bit assembly code valid x86 64-bit assembly code?
I've wondered whether 32-bit assembly code is a subset of 64-bit assembly code, i.e., every 32-bit assembly code can be run in a 64-bit environment?
I guess the answer is yes, because 64-bit Windows is capable of executing 32-bit programs, but then again I've seen that 64-bit processors support a 32-bit compatibility mode?
If not, please provide a small example of 32-bit assembly code that isn't valid 64-bit assembly code and explain how the 64-bit processor executes the 32-bit assembly code.
A modern x86 CPU has three main operation modes (this description is simplified):
In real mode, the CPU executes 16 bit code with paging and segmentation disabled. Memory addresses in your code refer to physical addresses; the content of a segment register is shifted left by four bits and added to the offset to form the address.
In protected mode, the CPU executes 16 bit or 32 bit code depending on the segment selector in the CS (code segment) register. Segmentation is enabled, and paging can be (and usually is) enabled. Programs can switch between 16 bit and 32 bit code by far jumping to an appropriate segment. The CPU can enter the sub-mode virtual 8086 mode to emulate real mode for individual processes from inside a protected mode operating system.
In long mode, the CPU executes 64 bit code. Segmentation is mostly disabled, paging is enabled. The CPU can enter the sub-mode compatibility mode to execute 16 bit and 32 bit protected mode code from within an operating system written for long mode. Compatibility mode is entered by far-jumping to a CS selector with the appropriate bits set. Virtual 8086 mode is unavailable.
Wikipedia has a nice table of x86-64 operating modes including legacy and real modes, and all 3 sub-modes of long mode. Under a mainstream x86-64 OS, after booting the CPU cores will always all be in long mode, switching between different sub-modes depending on 32 or 64-bit user-space. (Not counting System Management Mode interrupts...)
Now what is the difference between 16 bit, 32 bit, and 64 bit mode?
16-bit and 32-bit mode are basically the same thing except for the following differences:
In 16 bit mode, the default address and operand width is 16 bit. You can change these to 32 bit for a single instruction using the 0x67 and 0x66 prefixes, respectively. In 32 bit mode, it's the other way round.
In 16 bit mode, the instruction pointer is truncated to 16 bit, jumping to addresses higher than 65536 can lead to weird results.
VEX/EVEX encoded instructions (including those of the AVX, AVX2, BMI, BMI2 and AVX512 instruction sets) aren't decoded in real or Virtual 8086 mode (though they are available in 16 bit protected mode).
16 bit mode has fewer addressing modes than 32 bit mode, though it is possible to override to a 32 bit addressing mode on a per-instruction basis if the need arises.
Now, 64 bit mode is somewhat different. Most instructions behave just like in 32 bit mode, with the following differences:
There are eight additional registers named r8, r9, ..., r15. Each register can be used as a byte, word, dword, or qword register. The family of REX prefixes (0x40 to 0x4f) encode whether an operand refers to an old or new register. Eight additional SSE/AVX registers xmm8, xmm9, ..., xmm15 are also available.
you can only push/pop 64 bit and 16 bit quantities (though you shouldn't do the latter), 32 bit quantities cannot be pushed/popped.
The single-byte inc reg and dec reg instructions are unavailable; their instruction space has been repurposed for the REX prefixes. The two-byte inc r/m and dec r/m forms are still available, so inc reg and dec reg can still be encoded (see the byte-level sketch after this list).
A new instruction-pointer relative addressing mode exists, using the shorter of the 2 redundant ways 32-bit mode had to encode a [disp32] absolute address.
The default address width is 64 bit, a 32 bit address width can be selected through the 0x67 prefix. 16 bit addressing is unavailable.
The default operand width is 32 bit. A width of 16 bit can be selected through the 0x66 prefix, a 64 bit width can be selected through an appropriate REX prefix independently of which registers you use.
It is not possible to use ah, bh, ch, and dh in an instruction that requires a REX prefix. A REX prefix causes those register numbers to mean instead the low 8 bits of registers si, di, sp, and bp.
writing to the low 32 bits of a 64 bit register clears the upper 32 bit, avoiding false dependencies for out-of-order exec. (Writing 8 or 16-bit partial registers still merges with the 64-bit old value.)
as segmentation is nonfunctional, segment overrides are meaningless no-ops except for the fs and gs overrides (0x64, 0x65) which serve to support thread-local storage (TLS).
also, many instructions that specifically deal with segmentation are unavailable. These are: push/pop seg (except push/pop fs/gs), arpl, call far (only the 0xff encoding is valid), les, lds, jmp far (only the 0xff encoding is valid),
instructions that deal with decimal arithmetic are unavailable, these are: daa, das, aaa, aas, aam, aad,
additionally, the following instructions are unavailable: bound (rarely used), pusha/popa (not useful with the additional registers), salc (undocumented),
the 0x82 instruction alias for 0x80 is invalid.
on early amd64 CPUs, lahf and sahf are unavailable.
And that's basically all of it!
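As a concrete illustration of the repurposed 0x40-0x4F range (and of the "what does hex 40 mean" question at the top of this page), here is a small C sketch that writes a few bytes to a file so you can feed them to a disassembler in both modes. The file name probe.bin and the objdump invocations in the comment are just assumptions for the illustration:

#include <stdio.h>

/* The same bytes decode differently in 32-bit and 64-bit mode.
   0x40 0xFF 0xC0:
     - decoded as 32-bit code: 40 = inc eax, FF C0 = inc eax (long form) -> two instructions
     - decoded as 64-bit code: 40 is a REX prefix (no bits set), so 40 FF C0 = inc eax -> one instruction
   Compare with a disassembler, e.g. (assuming objdump is available):
     objdump -D -b binary -m i386 probe.bin
     objdump -D -b binary -m i386:x86-64 probe.bin */
int main(void) {
    static const unsigned char bytes[] = { 0x40, 0xFF, 0xC0 };
    FILE *f = fopen("probe.bin", "wb");
    if (!f) return 1;
    fwrite(bytes, 1, sizeof bytes, f);
    fclose(f);
    return 0;
}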
No, it isn't.
While there is a large amount of overlap, 64-bit assembly code is not a superset of 32-bit assembly code and so 32-bit assembly is not in general valid in 64-bit mode.
This applies both to the mnemonic assembly source (which is assembled into binary format by an assembler) and to the binary machine code format itself.
This question covers in some detail instructions that were removed, but there are also many encoding forms whose meanings were changed.
For example, Jester in the comments gives the example of push eax not being valid in 64-bit code. Based on this reference you can see that the 32-bit push is marked N.E. meaning not encodable. In 64-bit mode, the encoding is used to represent push rax (an 8-byte push) instead. So the same sequence of bytes has a different meaning in 32-bit mode versus 64-bit mode.
In general, you can browse the list of instructions on that site and find many which are listed as invalid or not encodable in 64-bit.
If not, please provide a small example of 32-bit assembly code that
isn't valid 64-bit assembly code and explain how the 64-bit processor
executes the 32-bit assembly code.
As above, push eax is one such example. I think what is missing is that 64-bit CPUs support directly running 32-bit binaries. They don't do it via compatibility between 32-bit and 64-bit instructions at the machine language level, but simply by having a 32-bit mode where the decoders (in particular) interpret the instruction stream as 32-bit x86 rather than x86-64, as well as the so-called long mode for running 64-bit instructions. When such 64-bit chips were first released, it was common to run a 32-bit operating system, which pretty much means the chip is permanently in this mode (never goes into 64-bit mode).
More recently, it is typical to run a 64-bit operating system, which is aware of the modes, and which will put the CPU into 32-bit mode when the user launches a 32-bit process (which are still very common: until very recently my browser was still 32-bit).
All the details and proper terminology for the modes can be found in fuz's answer, which is really the one you should read.

what is the maximum size of a PE file on 64-bit Windows?

It seems to me it's always going to be 4GB, because it uses the same size datatype (a DWORD). Isn't the DWORD for SizeOfImage always going to be 32 bits? Or am I mistaken about this limitation?
Answer
4GB does indeed seem to be the hard limit for ALL Portable Executables (32-bit PE32 and 64-bit PE32+).
According to the spec, it is a 32-bit unsigned value for a PE32+ image, just as for a PE32 image.
However, in my testing with both 32-bit and 64-bit applications (PE32/PE32+ files) on Windows 7 SP1 Home Premium x64, the maximum file size for either is between 1.8 and 1.85GB.
I tested by creating a very basic C executable with Visual Studio (~8K for 32-bit and 9K for 64-bit), then added an empty code section to the PE header until Windows would no longer load it, and then binary searched for the limit. Looking at the process with vmmap showed that almost the entire first 2GB of address space was occupied by the image (including any subsequently loaded DLLs such as kernel32.dll). The limit was the same for me with both 32- and 64-bit processes. The 64-bit process did have the flag set in its NT headers' file header stating that it could handle addresses >2GB. It also could allocate memory for non-image sections above the 2GB limit.
It seems like the image is required to fit in its entirety in the lower 2GB of VA space for the process, which effectively means SizeOfImage is being treated as a signed 32-bit integer by the loader.
According to the COFF/PE spec, the image size for a valid PE32+ (64-bit / PE+) file is a 4-byte unsigned value.
The SizeOfImage field in the PE headers is largely unrelated to the on-disk file size of the PE file. SizeOfImage is the in-memory size of the loaded image, i.e. the size of all sections (each rounded up to the SectionAlignment boundary) plus the size of the PE headers (given in the next header field, SizeOfHeaders). This value cannot be > 2GB for either PE32 or PE32+ because a) the spec says so and b) there exist 31-bit RVAs in the spec, e.g. in the import lookup table. RVAs are memory references given as offsets from the in-memory base address.
That's in memory though. The file on disk can contain data that is not loaded into memory (e.g. debug data, cert data). File pointer fields in the PE spec are 32-bit unsigned values. So the theoretical maximum size of a PE file according to the spec is 4GB.
That's according to the spec. There may be file-system, loader, OS limits outside of the PE spec that reduce the maximum further.
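As a small aid for poking at this yourself, here is a hedged C sketch (using the standard winnt.h structures; SizeOfImage sits at the same offset in the PE32 and PE32+ optional headers, so reading the 32-bit layout of that field works for both here) that prints SizeOfImage next to the on-disk file size:

#include <stdio.h>
#include <windows.h>

/* Sketch: print SizeOfImage (in-memory size) and the on-disk size of a PE file. */
int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    IMAGE_DOS_HEADER dos;
    fread(&dos, sizeof dos, 1, f);
    if (dos.e_magic != IMAGE_DOS_SIGNATURE) { fprintf(stderr, "not an MZ file\n"); return 1; }

    IMAGE_NT_HEADERS32 nt;                 /* the fields used below are at the same offsets in PE32+ */
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof nt, 1, f);
    if (nt.Signature != IMAGE_NT_SIGNATURE) { fprintf(stderr, "not a PE file\n"); return 1; }

    fseek(f, 0, SEEK_END);
    long disk_size = ftell(f);
    printf("Magic:       %#x (0x10b = PE32, 0x20b = PE32+)\n", (unsigned)nt.OptionalHeader.Magic);
    printf("SizeOfImage: %lu bytes (in memory)\n", (unsigned long)nt.OptionalHeader.SizeOfImage);
    printf("File size:   %ld bytes (on disk)\n", disk_size);
    fclose(f);
    return 0;
}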
