Using push/pop around function calls on Windows x64

I am fairly new to assembly and computer architecture, so I was playing around with GCC to assemble my assembly file.
I am running Windows 10 on an AMD 3750H (I don't know if this helps).
It is a fairly simple program that does the following:
Creates a stack frame
Pushes two numbers onto the stack
Pops them one at a time, calling printf once for each (so the last pop happens after the first call)
Exits
Here is the code I wrote:
.data
form:
.ascii "%d\n\0";
.text
.globl main
main:
pushq %rbp
movq %rsp, %rbp
subq $32, %rsp;
pushq $420;
pushq $69;
lea form(%rip) , %rcx;
xor %eax, %eax
popq %rdx
call printf
lea form(%rip) , %rcx;
xor %eax, %eax
popq %rdx
call printf
mov %rbp, %rsp
popq %rbp
ret
But the output I get is (rather strangely):
69
4199744
I read about shadow space in the Windows x64 calling convention but I couldn't find the proper way to work with it when using push/pop:
What is the 'shadow space' in x64 assembly?
gcc output on cygwin using stack space outside stack frame
This is what I tried (thanks to Jester), and it worked:
# subtract 32 when I push
pushq $420;
subq $32, %rsp
# Add 32 when I pop
addq $32, %rsp
popq %rdx
But I feel there may be a more elegant way to go about this.
Do I have to leave 32 bytes after every push? That seems like a lot of wasted space.
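For what it's worth, one common pattern (a sketch, not the only convention-compliant layout) is to reserve all the stack space once in the prologue, keep the 32-byte shadow space at the bottom of the frame, and store the values in slots above it with mov instead of push/pop:

```asm
main:
    pushq %rbp
    movq  %rsp, %rbp
    subq  $48, %rsp         # 32 bytes shadow space + 16 for two locals
                            # (%rsp stays 16-byte aligned at each call)
    movq  $420, 40(%rsp)    # locals live above the shadow space
    movq  $69,  32(%rsp)

    lea   form(%rip), %rcx
    movq  32(%rsp), %rdx    # no pop needed; shadow space stays intact
    xor   %eax, %eax
    call  printf

    lea   form(%rip), %rcx
    movq  40(%rsp), %rdx
    xor   %eax, %eax
    call  printf

    movq  %rbp, %rsp
    popq  %rbp
    ret
```

This way the shadow space always sits directly below the return address at call time, and nothing has to be adjusted around each call.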

Related

How to correctly use the "write" syscall on MacOS to print to stdout?

I have looked at similar questions, but cannot seem to find what is wrong with my code.
I am attempting to make the "write" syscall on MacOS to print a string to standard output.
I am able to do it with printf perfectly, and am familiar with calling other functions in x64 assembly.
This is, however, my first attempt at a syscall.
I am using GCC's GAS assembler.
This is my code:
.section __TEXT,__text
.globl _main
_main:
pushq %rbp
movq %rsp, %rbp
subq $32, %rsp
movq $0x20000004, %rax
movq $1, %rdi
leaq syscall_str(%rip), %rsi
movq $25, %rdx
syscall
jc error
xorq %rax, %rax
leave
ret
error:
movq $1, %rax
leave
ret
.section __DATA,__data
syscall_str:
.asciz "Printed with a syscall.\n"
There does not seem to be any error; there is simply nothing written to stdout.
I know that start is usually used as the starting point for an executable on MacOS, but it does not compile with GCC.
You are using the incorrect syscall number for macOS. The base for the BSD (user) system calls is 0x2000000, and write is syscall 4, so the number should be $0x2000004. You have encoded it as $0x20000004 (one zero too many).
As a rule of thumb, make sure you are using the correct value for the syscall number in the %rax register, and that you are passing the correct arguments. The write syscall expects the following arguments:
%rdi: file descriptor to write to (e.g. 1 for standard output)
%rsi: pointer to the buffer containing the data to be written
%rdx: number of bytes to be written
On macOS, you need to use the syscall instruction to invoke a system call.
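Putting that together, the fixed portion of the program would look like this (a sketch; only the syscall number changes from the original):

```asm
    movq $0x2000004, %rax        # write: BSD class base 0x2000000 + 4
    movq $1, %rdi                # fd 1 = standard output
    leaq syscall_str(%rip), %rsi # buffer to write
    movq $25, %rdx               # byte count (as in the original)
    syscall
    jc error                     # carry flag is set if the syscall failed
```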

What is the correct constant for the exit system call?

I am trying to learn x86_64 assembly, and am using GCC as my assembler. The exact command I'm using is:
gcc -nostdlib tapydn.S -D__ASSEMBLY__
I'm mainly using gcc for its preprocessor. Here is tapydn.S:
.global _start
#include <asm-generic/unistd.h>
syscall=0x80
.text
_start:
movl $__NR_exit, %eax
movl $0x00, %ebx
int $syscall
This results in a segmentation fault. I believe the problem is with the following line:
movl $__NR_exit, %eax
I used __NR_exit because it was more descriptive than some magic number. However, it appears that my usage of it is incorrect. I believe this to be the case because when I change the line in question to the following, it runs fine:
movl $0x01, %eax
Further backing up this train of thought is the contents of /usr/include/asm-generic/unistd.h:
#define __NR_exit 93
__SYSCALL(__NR_exit, sys_exit)
I expected the value of __NR_exit to be 1, not 93! Clearly I am misunderstanding its purpose and consequently its usage. For all I know, I'm getting lucky with the $0x01 case working (much like undefined behaviour in C++), so I kept digging...
Next, I looked for the definition of sys_exit. I couldn't find it. I tried using it anyway as follows (with and without the preceding $):
movl $sys_exit, %eax
This wouldn't link:
/tmp/cc7tEUtC.o: In function `_start':
(.text+0x1): undefined reference to `sys_exit'
collect2: error: ld returned 1 exit status
My guess is that it's a symbol in one of the system libraries and I'm not linking it due to my passing -nostdlib to GCC. I'd like to avoid linking such a large library for just one symbol if possible.
In response to Jester's comment about mixing 32 and 64 bit constants, I tried using the value 0x3C as suggested:
movq $0x3C, %eax
movq $0x00, %ebx
This also resulted in a segmentation fault. I also tried swapping out eax and ebx for rax and rbx:
movq $0x3C, %rax
movq $0x00, %rbx
The segmentation fault remained.
Jester then commented stating that I should be using syscall rather than int $0x80:
.global _start
#include <asm-generic/unistd.h>
.text
_start:
movq $0x3C, %rax
movq $0x00, %rbx
syscall
This works, but I was later informed that I should be using rdi instead of rbx as per the System V AMD64 ABI:
movq $0x00, %rdi
This also works fine, but still ends up using the magic number 0x3C for the system call number.
Wrapping up, my questions are as follows:
What is the correct usage of __NR_exit?
What should I be using instead of a magic number for the exit system call?
The correct header file to get the system call numbers is sys/syscall.h. The constants are called SYS_### where ### is the name of the system call you are interested in. The __NR_### macros are implementation details and should not be used. As a rule of thumb, if an identifier begins with an underscore it should not be used, if it begins with two it should definitely not be used. The arguments go into rdi, rsi, rdx, r10, r8, and r9. Here is a sample program for Linux:
#include <sys/syscall.h>
.globl _start
_start:
mov $SYS_exit,%eax
xor %edi,%edi
syscall
These conventions are mostly portable to other UNIX-like operating systems.
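The same header works for syscalls that take arguments. Here is a sketch (Linux x86-64, assembled as a .S file, e.g. gcc -nostdlib hello.S, so the preprocessor sees the include) that writes a message and then exits:

```asm
#include <sys/syscall.h>

    .globl _start
    .text
_start:
    mov $SYS_write, %eax    # write(1, msg, len)
    mov $1, %edi
    lea msg(%rip), %rsi
    mov $len, %edx
    syscall
    mov $SYS_exit, %eax     # exit(0)
    xor %edi, %edi
    syscall

    .section .rodata
msg: .ascii "Hello via syscall\n"
len = . - msg
```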

Mac OS x86 Assembly: Why does the initialized memory amount change?

I just started learning assembly a week or so ago, and when debugging a program, I came across some strange memory usage. The following code (see end of post) is broken into two files for a reason.
If I compile and run with
gcc main.s
./a.out
with only code block 1 running (code block 2 commented out), then the program prints "8", meaning that right when my program starts, macOS automatically puts 8 bytes' worth of stuff on the stack, then leaves my program to do its thing.
However, if I compile and run with
gcc main.s print.s
./a.out
with only code block 2 running (code block 1 commented out), then the program prints "16", meaning that macOS is initially putting 16 bytes on the stack instead of 8. When this happens, the offsets applied to rsp to achieve 16-byte alignment remain the same, meaning that the start of the stack is being offset by 8 bytes whenever an outside function is called.
I also tried putting the _printNum function in the same file as main.s, but the discrepancy persisted. Another thing I tried was to add another format string and use it later on in the program to see if something to do with the format string was using memory, but it made no difference.
What I think is going on is that Mac OS is pushing the instruction pointer for the next instruction to execute when my program terminates onto the stack, then pushing the old base stack pointer onto the stack, both 32-bit, for a total of 8 bytes. When I include a function call (either local or external to the main file), it seems like the assembler decides to use 64-bit addresses instead of 32-bit addresses, doubling the memory used, and hence the 16 bytes used.
Why is this happening, and if I am wrong, what is Mac OS doing to the stack? Is any of the extra stack used of value to me? Is the computer doing something else instead of switching from 32-bit to 64-bit addressing? Thanks.
main program (main.s):
.cstring
_format: .asciz "%d\n"
.text
.globl _main
_main:
movq %rbp, %rax # Put stack base pointer in rax
subq %rsp, %rax # Subtract stack pointer to get total memory used
subq $8, %rsp # Get 16-byte alignment
#---------------------------------------------------------
# code block 1 - prints rax manually
#---------------------------------------------------------
movq %rax, %rsi # Value to print needs to be in rsi
lea _format(%rip), %rdi # Address of format string goes in rdi
# Don't know what the "_format(%rip)" does,
# but it works (any info would be handy)
call _printf
#---------------------------------------------------------
# code block 2 - prints rax via function call
#---------------------------------------------------------
call _printNum # Prints the value of rax
#---------------------------------------------------------
# stack cleanup and return
#---------------------------------------------------------
addq $8, %rsp # Account for the previous -8 to rsp
ret # end program
printing function (print.s):
.cstring
_format: .asciz "%d\n"
.text
.globl _printNum
# assumes 16-byte aligned when called
# prints the value of the rax register
_printNum:
push %rbp # save %rbp - previous stack base
movq %rsp, %rbp # update stack base
push %rsi # save %rsi - register
push %rdi # save %rdi - register
# print - already 16 byte aligned (rip and three values for 32 bytes)
movq %rax, %rsi # load the value to print
lea _format(%rip), %rdi # load the format string
call _printf
# restore registers
popq %rdi
popq %rsi
popq %rbp
# return
ret

What is the difference between assembly on mac and assembly on linux?

I've been trying to get familiar with assembly on mac, and from what I can tell, the documentation is really sparse, and most books on the subject are for windows or linux. I thought I would be able to translate from linux to mac pretty easily, however this (linux)
.file "simple.c"
.text
.globl simple
.type simple, @function
simple:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %edx
movl 12(%ebp), %eax
addl (%edx), %eax
movl %eax, (%edx)
popl %ebp
ret
.size simple, .-simple
.ident "GCC: (Ubuntu 4.3.2-1ubuntu11) 4.3.2"
.section .note.GNU-stack,"",@progbits
seems pretty different from this (mac)
.section __TEXT,__text,regular,pure_instructions
.globl _simple
.align 4, 0x90
_simple: ## #simple
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp2:
.cfi_def_cfa_offset 16
Ltmp3:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp4:
.cfi_def_cfa_register %rbp
addl (%rdi), %esi
movl %esi, (%rdi)
movl %esi, %eax
popq %rbp
ret
.cfi_endproc
.subsections_via_symbols
The "normal" (for lack of a better word) instructions and registers such as pushq %rbp don't worry me. But the "weird" ones like .cfi_startproc and Ltmp2: which are smack dab in the middle of the machine instructions don't make any sense.
I have no idea where to go to find out what these are and what they mean. I'm about to pull my hair out as I've been trying to find a good resource for beginners for months. Any suggestions?
To begin with, you're comparing 32-bit x86 assembly with 64-bit x86-64. While the OS X Mach-O ABI supports 32-bit IA32, I suspect you want the x86-64 SysV ABI. (Thankfully, the x86-64.org site seems to be up again). The Mach-O x86-64 model is essentially a variant of the ELF / SysV ABI, so the differences are relatively minor for user-space code, even with different assemblers.
The .cfi directives are DWARF debugging directives that you don't strictly need for assembly - they are used for call frame information, etc. Here are some minimal examples:
ELF x86-64 assembler:
.text
.p2align 4
.globl my_function
.type my_function,@function
my_function:
...
.L__some_address:
.size my_function, .-my_function
Mach-O x86-64 assembler:
.text
.p2align 4
.globl _my_function
_my_function:
...
L__some_address:
Short of writing an asm tutorial, the main differences between the assemblers are: leading underscores for Mach-O function names, and .L vs L for local labels (branch destinations). The assembler shipped with OS X understands the .p2align directive; .align 4, 0x90 does essentially the same thing.
Not all the directives in compiler-generated code are essential for the assembler to generate valid object code. They are required to generate stack frame (debugging) and exception handling data. Refer to the links for more information.
Obviously the Linux code is 32-bit Linux code. Note that 64-bit Linux can run both 32- and 64-bit code!
The Mac code is definitely 64-bit code.
This is the main difference.
The ".cfi_xxx" lines are call frame information directives; they carry debugging and exception-handling metadata rather than being something the Mac file format requires.

gcc gives weird Intel syntax

I have been trying to get a better idea of what happens under the hood by using the compiler to generate the assembly programs of various C programs at different optimization levels. There is something that has been bothering me for a while.
When I compile t.c as follows,
gcc -S t.c
I get the assembly in AT&T syntax as follows.
function:
pushl %ebp
movl %esp, %ebp
movl 12(%ebp), %eax
addl 8(%ebp), %eax
popl %ebp
ret
.size function, .-function
When I compile using the -masm argument as follows:
gcc -S t.c -masm=intel
I get the following output.
function:
push %ebp
mov %ebp, %esp
mov %eax, DWORD PTR [%ebp+12]
add %eax, DWORD PTR [%ebp+8]
pop %ebp
ret
.size function, .-function
There is a change in syntax, but there are still "%"s before the register names (this is why I don't prefer AT&T syntax in the first place).
Can someone shed some light on why this is happening? How do I solve this issue?
The GNU assembler (gas) does control the % prefix separately: its Intel-syntax mode is selected with the .intel_syntax directive, which takes a prefix or noprefix argument. Documentation seems to suggest GCC doesn't expose such an option, but my GCC (version Debian 4.3.2-1.1) doesn't produce the % prefix.
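In hand-written gas code the behaviour is selected with the .intel_syntax directive; the function above could be written without the % prefixes like this (a sketch, assembled as 32-bit code):

```asm
    .intel_syntax noprefix     # Intel operand order, no % register prefix
    .text
    .globl function
function:
    push ebp
    mov  ebp, esp
    mov  eax, DWORD PTR [ebp+12]
    add  eax, DWORD PTR [ebp+8]
    pop  ebp
    ret
```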
