Writing and debugging a min program in asm - debugging

I am trying to write a program to find the minimum value of a list of integers in asm. Here is what I have so far:
.section .data
data_items:
.long 2,3,4,5,1,9,10 # set 10 as the sentinel value
.section text
.globl _start
_start:
# %ebx holds min
# %edi holds index (destination index)
# %eax current data item
movl $255, %ebx # set the current min to 255
movl $0, %edi # the index is also zero
start_loop:
movl data_items(,%edi,4), %eax # set %eax equal to the current data item
cmpl $10, %eax # compare %eax with the sentinel to see if we should exit
je exit_loop # if it's the sentinel value, exit
incl %edi # increment the index
cmpl %ebx, %eax # compare the current value to the current min
jge start_loop # if it's not less than the current min, go back to the start
movl %eax, %ebx # update the min if the current value is less than it
jmp start_loop # always go back to the start if we've gotten this far
exit_loop:
movl $1, %eax # put the linux system call number in %eax (1=exit)
int $0x80 # give linux control (so it will exit)
When I run this, I get the following:
$ as min.s -o min.o && ld min.o -o min && ./min
Segmentation fault (core dumped)
How is one supposed to debug asm? For example, at least in C the compiler tells you what the error might be and the line number, whereas here I know just about nothing. (Note: the error is having .section text instead of .section .text, but how would one figure that out?)

It's very possible in C to write a program that compiles with no warnings but crashes (e.g. NULL pointer deref), and you'll see exactly the same thing. It's much more likely in asm, though.
You debug asm with a debugger, GDB for example. See tips at the bottom of https://stackoverflow.com/tags/x86/info. And if you make any system calls, use strace to see what your program is actually doing.
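A minimal session for this program might look like this (a sketch; starti needs GDB 8.1 or newer):
gdb ./min
(gdb) starti # stop before the very first instruction runs
(gdb) layout reg # TUI register view while stepping
(gdb) si # single-step one instruction at a time
and from the shell, strace ./min shows every system call the program actually makes.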
To debug this, you'd run it under GDB and notice that it segfaulted on the first instruction, movl $255, %ebx. It doesn't access memory, so code-fetch must have faulted. So there must be something wrong with your sections that resulted in your code being linked into a non-executable segment of your executable.
objdump -d would also have given you a hint: it disassembles the .text section by default, and this program doesn't have one.
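For example (with -j to pick a section by name, since the misnamed one isn't disassembled by default):
objdump -d min # disassembles .text by default: nothing to show here
objdump -D -j text min # force disassembly of the misnamed section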
The reason text instead of .text causes this problem is that sections with arbitrary names (anything other than the few specially-recognized ones) default to read+write, without exec.
In GAS, use .text or .data, the special shortcut directives for .section .text or .section .data, which avoid this problem for those sections. https://sourceware.org/binutils/docs/as/Text.html
But not all "standard" sections have shortcut directives; you do still need .section .rodata to switch to the read-only data section, where you should have put your array (read, no write; on newer toolchains, also no exec). Instead of switching to the .bss section, though, you can use .comm or .lcomm (https://sourceware.org/binutils/docs/as/bss.html)
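Applied to this program, the section setup could look like this (a sketch; the array belongs in .rodata since it's never written):
.section .rodata
data_items:
.long 2,3,4,5,1,9,10
.text # shortcut for .section .text
.globl _start
_start:
...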
Another possible problem is that you're building this 32-bit code as a 64-bit executable (unless you're using a 32-bit-only install where as --32 is the default). Using 32-bit addressing modes works in 64-bit mode, truncating the address to 32 bits. That happens to work when accessing static data in a position-dependent executable on Linux, because all code+data is linked into the low 2GiB of virtual address space.
But any access to (%esp) or -4(%ebp) or whatever would fault because the stack in a 64-bit process is mapped to a high address with non-zero bits outside the low 32.
You'd notice that problem in GDB because layout reg would show all 16 64-bit integer registers, RAX..R15.
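And if you did mean to write 32-bit code, assembling and linking it explicitly as such would look like this (standard binutils flags):
as --32 min.s -o min.o && ld -m elf_i386 min.o -o min && ./min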

Related

Porting JonesForth to macOS v10.15 (Catalina)

I'm trying to make JonesForth run on a recent MacBook out of the box, just using Mac tools.
I started to convert everything to 64 bits and adapt to the Mac assembler syntax.
I got things to assemble, but I immediately ran into a curious segmentation fault:
/* NEXT macro. */
.macro NEXT
lodsq # load the next word pointer at (%rsi) into %rax, advancing %rsi by 8
jmpq *(%rax) # jump to the address stored at (%rax), i.e. through the codeword
.endm
...
/* Assembler entry point. */
.text
.globl start
.balign 16
start:
cld
mov %rsp,var_SZ(%rip) // Save the initial data stack pointer in FORTH variable S0.
mov return_stack_top(%rip),%rbp // Initialise the return stack.
//call set_up_data_segment
mov cold_start(%rip),%rsi // Initialise interpreter.
NEXT // Run interpreter!
.const
cold_start: // High-level code without a codeword.
.quad QUIT
QUIT is defined like this via the defword macro:
.macro defword
.const_data
.balign 8
.globl name_$3
name_$3 :
.quad $4 // Link
.byte $2+$1 // Flags + length byte
.ascii $0 // The name
.balign 8 // Padding to next eight-byte boundary
.globl $3
$3 :
.quad DOCOL // Codeword - the interpreter
// list of word pointers follow
.endm
// QUIT must not return (ie. must not call EXIT).
defword "QUIT",4,,QUIT,name_TELL
.quad RZ,RSPSTORE // R0 RSP!, clear the return stack
.quad INTERPRET // Interpret the next word
.quad BRANCH,-16 // And loop (indefinitely)
...more code
When I run this, I get a segmentation fault the first time in the NEXT macro:
(lldb) run
There is a running process, kill it and restart?: [Y/n] y
Process 83000 exited with status = 9 (0x00000009)
Process 83042 launched: '/Users/klapauciusisgreat/jonesforth64/jonesforth' (x86_64)
Process 83042 stopped
* thread #1, stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT)
frame #0: 0x0000000100000698 jonesforth`start + 24
jonesforth`start:
-> 0x100000698 <+24>: jmpq *(%rax)
0x10000069a <+26>: nopw (%rax,%rax)
jonesforth`code_DROP:
0x1000006a0 <+0>: popq %rax
0x1000006a1 <+1>: lodsq (%rsi), %rax
Target 0: (jonesforth) stopped.
rax does point to what I think is the dereferenced address, DOCOL:
(lldb) register read
General Purpose Registers:
rax = 0x0000000100000660 jonesforth`DOCOL
So one mystery is:
Why does RAX point to DOCOL instead of QUIT? My guess is that the instruction was halfway executed and the result of the indirection was stored in rax. What are some good pointers to documentation?
Why the segmentation fault?
I commented out the segment setup code in the original that called brk to set up a data segment. Another [implementation] also did not call it at all, so I thought I could ignore it as well. Is there any magic needed to set up segment permissions with syscalls in a 64-bit binary on Catalina? The make command is pretty much the standard JonesForth one:
jonesforth: jonesforth.S
gcc -nostdlib -g -static $(BUILD_ID_NONE) -o $@ $<
P.S.: Yes, I can get JonesForth to work perfectly in Docker images, but that's beside the point. I really want it to work in 64 bit on Catalina, out of the box.
The original code had something like
mov $cold_start,%rsi
And the Apple assembler complains about not being able to use 32-bit absolute addressing in 64-bit binaries.
So I tried
mov $cold_start(%rip),%rsi
but that also doesn't work.
So I tried
mov cold_start(%rip),%rsi
which assembles, but of course it dereferences cold_start, which is not what I need.
The correct way of doing this is apparently
lea cold_start(%rip),%rsi
This seems to work as intended.
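The two instructions use the same addressing mode but do very different things:
mov cold_start(%rip),%rsi # loads the 8 bytes stored AT cold_start (a memory access)
lea cold_start(%rip),%rsi # computes the address OF cold_start; no memory access
That also answers the RAX mystery above: with the mov version, %rsi already held the contents of cold_start, i.e. the address of QUIT, so lodsq fetched *QUIT (the address of DOCOL) into %rax, and jmpq *(%rax) then tried to jump through the first 8 bytes of DOCOL's machine code.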

What is the correct constant for the exit system call?

I am trying to learn x86_64 assembly, and am using GCC as my assembler. The exact command I'm using is:
gcc -nostdlib tapydn.S -D__ASSEMBLY__
I'm mainly using gcc for its preprocessor. Here is tapydn.S:
.global _start
#include <asm-generic/unistd.h>
syscall=0x80
.text
_start:
movl $__NR_exit, %eax
movl $0x00, %ebx
int $syscall
This results in a segmentation fault. I believe the problem is with the following line:
movl $__NR_exit, %eax
I used __NR_exit because it was more descriptive than some magic number. However, it appears that my usage of it is incorrect. I believe this to be the case because when I change the line in question to the following, it runs fine:
movl $0x01, %eax
Further backing up this train of thought are the contents of /usr/include/asm-generic/unistd.h:
#define __NR_exit 93
__SYSCALL(__NR_exit, sys_exit)
I expected the value of __NR_exit to be 1, not 93! Clearly I am misunderstanding its purpose and consequently its usage. For all I know, I'm getting lucky with the $0x01 case working (much like undefined behaviour in C++), so I kept digging...
Next, I looked for the definition of sys_exit. I couldn't find it. I tried using it anyway as follows (with and without the preceding $):
movl $sys_exit, %eax
This wouldn't link:
/tmp/cc7tEUtC.o: In function `_start':
(.text+0x1): undefined reference to `sys_exit'
collect2: error: ld returned 1 exit status
My guess is that it's a symbol in one of the system libraries and I'm not linking it due to my passing -nostdlib to GCC. I'd like to avoid linking such a large library for just one symbol if possible.
In response to Jester's comment about mixing 32 and 64 bit constants, I tried using the value 0x3C as suggested:
movq $0x3C, %eax
movq $0x00, %ebx
This also resulted in a segmentation fault. I also tried swapping out eax and ebx for rax and rbx:
movq $0x3C, %rax
movq $0x00, %rbx
The segmentation fault remained.
Jester then commented stating that I should be using syscall rather than int $0x80:
.global _start
#include <asm-generic/unistd.h>
.text
_start:
movq $0x3C, %rax
movq $0x00, %rbx
syscall
This works, but I was later informed that I should be using rdi instead of rbx as per the System V AMD64 ABI:
movq $0x00, %rdi
This also works fine, but still ends up using the magic number 0x3C for the system call number.
Wrapping up, my questions are as follows:
What is the correct usage of __NR_exit?
What should I be using instead of a magic number for the exit system call?
The correct header file to get the system call numbers is sys/syscall.h. The constants are called SYS_### where ### is the name of the system call you are interested in. The __NR_### macros are implementation details and should not be used; as a rule of thumb, if an identifier begins with an underscore it should not be used, and if it begins with two it should definitely not be used. That also explains the mysterious 93: asm-generic/unistd.h describes the generic syscall table used by newer architectures, not the x86 tables, where exit is 60 (0x3C) in the 64-bit ABI and 1 in the 32-bit ABI. The arguments go into rdi, rsi, rdx, r10, r8, and r9. Here is a sample program for Linux:
#include <sys/syscall.h>
.globl _start
_start:
mov $SYS_exit,%eax
xor %edi,%edi
syscall
These conventions are mostly portable to other UNIX-like operating systems.
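Assuming the sample above is saved as exit.S (the name is just a placeholder), it can be built the same way as in the question, and the exit status checked from the shell:
gcc -nostdlib -static exit.S -o exit
./exit; echo $? # prints 0, since xor %edi,%edi passes status 0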

x86 asm printf causes segfault when using intel syntax (gcc)

I'm just starting to learn x86 assembly, and I am a bit confused as to why this little example doesn't work. All I want to do is to print the content of the eax register as a decimal value. This is my code in AT&T Syntax:
.data
intout:
.string "%d\n"
.text
.globl main
main:
movl $666, %eax
pushl %eax
pushl $intout
call printf
movl $1, %eax
int $0x80
Which I compile and run as follows:
gcc -m32 -o hello helloworld.S
./hello
This works as expected (printing 666 to the console). On a little side note, I would like to point out that I don't understand what exactly "movl $1, %eax" and "int $0x80" are supposed to accomplish here. I'm also not sure what "pushl $intout" does. Why is my output composed of two separate stack entries? And what exactly does the .string macro do?
These are only side questions however, since my real problem is that I can't find a way to make this run using the much easier to read/write/comprehend Intel syntax.
Here is the code:
.intel_syntax noprefix
.data
intout:
.string "%d\n"
.text
.globl main
main:
mov eax, 666
push eax
push intout
call printf
mov eax, 1
int 0x80
Running this the same way as above, it just prints "Segmentation fault".
What am I doing wrong?
You need to use push OFFSET intout otherwise the 32-bit value stored at intout will be pushed on the stack, rather than its address.
intout is just a label, which is basically a name assigned to an address in your program. The .string "%d\n" directive that follows it defines a sequence of bytes in your program, both allocating memory and initializing that memory. Specifically, it allocates 4 bytes in the .data section and initializes them with the characters '%', 'd', '\n', and '\0'. Since the label intout is defined just before the .string line, it has the address of the first byte in the string.
The line push intout results in an instruction that reads the 4 bytes starting at the address referred to by intout and pushes them onto the stack (specifically, it subtracts 4 from ESP and then copies them to the 4 bytes now pointed to by ESP). The line push $intout in AT&T syntax (or push OFFSET intout in Intel syntax) instead pushes the 4 bytes that make up the 32-bit address of intout.
This means that the line push intout pushes a meaningless value onto the stack. The function printf ends up interpreting it as a pointer, an address where the format string is supposed to be stored, but since it doesn't point to a valid location in memory your program crashes.
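Putting the fix into the Intel-syntax program, only the push of the format string changes (everything else is as in the question):
.intel_syntax noprefix
.data
intout:
.string "%d\n"
.text
.globl main
main:
mov eax, 666
push eax
push OFFSET intout # push the address of the string, not the 4 bytes stored there
call printf
mov eax, 1
int 0x80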

Mac OS x86 Assembly: Why does the initialized memory amount change?

I just started learning assembly a week or so ago, and when debugging a program, I came across some strange memory usage. The following code (see end of post) is broken into two files for a reason.
If I compile and run with
gcc main.s
./a.out
with only code block 1 running (code block 2 commented out), then the program prints "8", meaning that right when my program starts, the Mac OS automatically puts 8 bytes worth of stuff on the stack, then leaves my program to do its thing.
However, if I compile and run with
gcc main.s print.s
./a.out
with only code block 2 running (code block 1 commented out), then the program prints "16", meaning that Mac OS is initially putting 16 bytes on the stack instead of 8. When this happens, the offsets applied to rsp to achieve 16-byte alignment remain the same, meaning that the start of the stack is being offset by 8 bytes whenever an outside function is called.
I also tried putting the _printNum function in the same file as main.s, but the discrepancy persisted. Another thing I tried was to add another format string and use it later on in the program to see if something to do with the format string was using memory, but it made no difference.
What I think is going on is that Mac OS is pushing the instruction pointer for the next instruction to execute when my program terminates onto the stack, then pushing the old base stack pointer onto the stack, both 32-bit, for a total of 8 bytes. When I include a function call (either local or external to the main file), it seems like the assembler decides to use 64-bit addresses instead of 32-bit addresses, doubling the memory used, and hence the 16 bytes used.
Why is this happening, and if I am wrong, what is Mac OS doing to the stack? Is any of the extra stack used of value to me? Is the computer doing something else instead of switching from 32-bit to 64-bit addressing? Thanks.
main program (main.s):
.cstring
_format: .asciz "%d\n"
.text
.globl _main
_main:
movq %rbp, %rax # Put stack base pointer in rax
subq %rsp, %rax # Subtract stack pointer to get total memory used
subq $8, %rsp # Get 16-byte alignment
#---------------------------------------------------------
# code block 1 - prints rax manually
#---------------------------------------------------------
movq %rax, %rsi # Value to print needs to be in rsi
lea _format(%rip), %rdi # Address of format string goes in rdi
# Don't know what the "_format(%rip)" does,
# but it works (any info would be handy)
call _printf
#---------------------------------------------------------
# code block 2 - prints rax via function call
#---------------------------------------------------------
call _printNum # Prints the value of rax
#---------------------------------------------------------
# stack cleanup and return
#---------------------------------------------------------
addq $8, %rsp # Account for the previous -8 to rsp
ret # end program
printing function (print.s):
.cstring
_format: .asciz "%d\n"
.text
.globl _printNum
# assumes 16-byte aligned when called
# prints the value of the rax register
_printNum:
push %rbp # save %rbp - previous stack base
movq %rsp, %rbp # update stack base
push %rsi # save %rsi - register
push %rdi # save %rdi - register
# print - already 16 byte aligned (rip and three values for 32 bytes)
movq %rax, %rsi # load the value to print
lea _format(%rip), %rdi # load the format string
call _printf
# restore registers
popq %rdi
popq %rsi
popq %rbp
# return
ret

Generating a pure (or flat) binary

How can you generate a flat binary that will run directly on the CPU?
That is, without an Operating System; also called freestanding environment code (see What is the name for a program running directly without an OS?).
I've noticed that the assembler I'm using, as, from the OS X developer tools bundle, keeps generating Mach-O files, and not flat binaries.
This is the way I've done it. Using the linker that comes with the XCode Command Line Tools, you can combine object files using:
ld code1.o code2.o -o code.bin -r -U start
The -r asks ld to just combine object files together without making a library; -U tells ld to ignore the missing definition of start (which would normally be provided by the C stdlib).
This creates a binary which still has some Mach-O header bytes, but the offset of the actual code is easily identified with
otool -l code.bin
Look for the __text section in the output:
Section
sectname __text
segname __TEXT
addr 0x00000000
size 0x0000003b
offset 240
align 2^4 (16)
reloff 300
nreloc 1
flags 0x80000400
reserved1 0
reserved2 0
Note the offset (which you can confirm by comparing the output of otool -l and hexdump). We don't want the headers, so just use dd to copy out the bytes you need:
dd if=code.bin of=code_stripped.bin ibs=240 skip=1
where I've set the input block size to the offset and skipped one block.
You don't. You get the linker to produce a flat (pure) binary. To do that, you have to write a linker script file with OUTPUT_FORMAT(binary). If memory serves, you also need to specify something about how the sections are merged, but I don't remember any of the details.
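A minimal GNU ld script along those lines might look like this (an untested sketch; the load address and section list are placeholders to adapt):
OUTPUT_FORMAT(binary)
SECTIONS
{
. = 0x7c00; /* example load address, e.g. for a boot sector */
.text : { *(.text) *(.rodata) *(.data) }
}
passed to the linker with something like ld -T flat.ld code.o -o code.bin.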
I don't think you necessarily need to do this. Some bootloaders can load more complex executable formats. For example, GRUB can load ELF right off the bat. I'm sure you can somehow get it or some other bootloader to load Mach-O files.
You may want to try using the nasm assembler -- it has an option to control the output binary format, including -f bin for flat binaries.
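For example, assembling a source file straight to a headerless image (boot.asm is a placeholder name):
nasm -f bin boot.asm -o boot.bin
With -f bin there is no separate link step; nasm lays out the output file itself.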
Note that you can't easily compile C code to flat binaries, since almost any C code will require binary features (like external symbols and relocations) which can't be represented in a flat binary.
There is no easy way I know of.
Once I needed to create a plain binary file that would be loaded and executed by another program. However, as didn't let me do that directly. I tried to use gobjcopy to convert the object file to a raw binary, but it was not able to properly convert code such as this:
.quad LinkName2 - LinkName1
In the binary file produced by gobjcopy it looked like
.quad 0
I ended up writing a special dumping program: an executable that saves part of its own memory to disk:
.set SYS_EXIT, 0x2000001
.set SYS_READ, 0x2000003
.set SYS_WRITE, 0x2000004
.set SYS_OPEN, 0x2000005
.set SYS_CLOSE, 0x2000006
.data
dumpfile: .ascii "./dump"
.byte 0
OutputFileDescriptor: .quad 0
.section __TEXT,__text,regular
.globl _main
_main:
movl $0644, %edx # file mode
movl $0x601, %esi # O_CREAT | O_TRUNC | O_WRONLY
leaq dumpfile(%rip), %rdi
movl $SYS_OPEN, %eax
syscall
movq %rax, OutputFileDescriptor(%rip)
movq $EndDump - BeginDump, %rdx
leaq BeginDump(%rip), %rsi
movq OutputFileDescriptor(%rip), %rdi
movl $SYS_WRITE, %eax
syscall
movq OutputFileDescriptor(%rip), %rdi
movl $SYS_CLOSE, %eax
syscall
Done:
movq %rax, %rdi
movl $SYS_EXIT, %eax
syscall
.align 3
BeginDump:
.include "dump.s"
EndDump:
.quad 0
The code that has to be saved as a raw binary file is included from dump.s.
