Why do disassemblers disassemble x86_64-apple-darwin binaries into ARM assembly on my Intel Mac? [duplicate]

I've been trying to explore low-level programming on my Intel MacBook, but an issue I keep running into is that everything seems to think the x86_64-apple-darwin binaries my computer uses are ARM.
For example, otool and objdump both disassemble into ARM code, and rustc emits ARM assembly. (There is actually one exception where I can get Intel asm, which I'll explain below.) That last one (rustc --emit asm) is what I've really been struggling with. I've confirmed not only that the binary is compiled for the target mentioned above, but that that target is the only one I have installed for rustc.
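For concreteness, here is roughly what I'm running (main.rs is just a placeholder name):

rustup target list --installed
# prints only: x86_64-apple-darwin
rustc --emit asm --target x86_64-apple-darwin main.rs
cat main.s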
In a video I've been watching about how rustc works, the presenter points out that on his M1 MacBook the emitted asm is ARM, and that to fix this he cross-compiles to "x86_64-apple-darwin", whereupon the compiler emits Intel asm. Frustratingly similar, except that I'm not on an M1 chip; my CPU is verifiably an Intel 64-bit chip, and using that target is exactly what I'm doing.
Like I said, I tried "cross-compiling" to x86_64-apple-darwin, which is my machine's native target anyway. I've also tried disassembling other binaries I have on the Mac, all to the same effect. According to everything I can find, I should be getting Intel.
The one way I have been able to get Intel asm from a native binary on my MacBook is radare2. Opening a binary in radare2 and looking at the disassembly view, the code shows up in Intel syntax.
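I've also sanity-checked the architecture of the binaries themselves (binary name is a placeholder):

file ./main
# should report: Mach-O 64-bit executable x86_64
lipo -info ./main
# should report: Non-fat file: ./main is architecture: x86_64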

Related

what does 'build a gcc cross compiler' mean?

I am very new to Linux and GCC. The price of the Raspberry Pi lured me in. I am interested in using GCC to cross-compile some C code to target some embedded hardware, specifically a Cortex-M3 micro. I eventually want to have a full suite of compiler/programmer/debugger, but for now I'm starting with the compiler.
So I did a quick non-cross-compile test on the RPi 3, and all was well. Now I am researching how to cross-compile and target my uC. The gcc documentation online seems to indicate that I can use plain vanilla gcc and just specify some command-line options to perform cross-compilation: https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
But searching around, I find a lot of people mentioning building a gcc cross compiler. What does this mean?
Does gcc have options to double as a cross compiler? If so, why would one desire "building" a cross compiler?
A cross-compiler is one that is created on machine type A (combination of hardware and o/s) and either runs on a different machine type B or runs on type A but produces software to be run on a different machine type B.
Thus, if you have a Linux machine using an x86_64 CPU and running on some version of Linux, but you compile GCC so that it will run on an IBM PowerPC platform running some version of AIX, you would be creating a cross-compiler.
Another variant might be having a compiler on Linux using an x86_64 CPU that runs on the Linux machine but produces code for an embedded hardware chip. You'd then benefit from the CPU power of the Linux machine while deploying to a much smaller, less powerful system that maybe has no o/s of its own, or only a minimal o/s.
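As a rough sketch, assuming you've installed a prebuilt bare-metal ARM toolchain (the names and flags below are illustrative, not specific to your board):

# Native compile: runs here, produces code for this machine
gcc -o hello hello.c

# Cross compile: runs here, produces Cortex-M3 object code
# (linking a full image would additionally need a linker script for your part)
arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -c -o hello.o hello.c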

Can't use printf or debugger in Intel SDK for OpenCL

I'm using the Intel SDK for OpenCL with an Intel HD Graphics 4000 GPU to successfully run an OpenCL program. I've made sure to link against the Intel OpenCL libraries since I also have Nvidia libraries installed.
However, putting a printf() call in the kernel gives the OpenCL compiler error
error: implicit declaration of function 'printf' is not allowed in OpenCL
Also, I've enabled OpenCL kernel debugging in the Visual Studio 2012 plugin, and passed the following options to clBuildProgram:
"-g -s C:\\Path\\to\\my\\program.cl"
However, kernel breakpoints are skipped. Hovering over the breakpoint gives the message:
The breakpoint will not currently be hit. No symbols have been loaded for this document.
My kernels are in a separate .cl file, and I'm setting the breakpoints the way I would for C/C++ code. Is this the correct way to set breakpoints using the Intel SDK for OpenCL debugger?
Why are printf() calls and breakpoints not working with the Intel SDK for OpenCL?
The printf() function was introduced in OpenCL 1.2, and Intel released support for that version only fairly recently. I'd bet that you still have version 1.1.
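You can double-check which version your platform actually reports before digging further. If you have a clinfo-style utility available (it may not ship with the SDK; on Windows, pipe through findstr rather than grep), something like this shows the platform and device versions:

clinfo | grep -i "version"
# look for "OpenCL 1.1" vs "OpenCL 1.2" in the platform/device lines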
Regarding the debugger, I have almost never used it, but based on this document the path is supposed to be given like this:
"-g -s \"C:\\Path\\to\\my\\program.cl\""
You are also supposed to choose which thread you want to debug.

Is it possible to generate native x86 code for ring0 in gcc?

I wonder, is there any way to generate native x86 code with gcc (code which can be booted without any OS)?
Yes, the Linux kernel is compiled with GCC and runs in ring 0 on x86.
The question isn't well-formed. Certainly not all of the instructions needed to initialize a modern CPU from scratch can be emitted by gcc alone; you'll need to use some assembly for that. But that's somewhat academic, because modern CPUs don't actually document all of this stuff and instead expect your hardware manufacturer to ship firmware to do it. After firmware initialization, a modern PC leaves you either in an old-style 16-bit 8086 environment ("legacy" BIOS) or in a fairly clean 32- or 64-bit (depending on your specific hardware platform) environment called "EFI Boot Services".
Operations in EFI mode are all done using C function pointers, and you can indeed build for this environment using gcc. See the gummiboot boot loader for an excellent example of working with EFI.
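As a minimal sketch of the freestanding-compilation side (not EFI specifically; kernel.c and linker.ld are made-up names, and you'd still need an assembly boot stub appropriate for your environment):

# Compile without assuming an OS or a hosted C library:
gcc -ffreestanding -fno-builtin -nostdlib -c -o kernel.o kernel.c

# Link at a fixed load address using a custom linker script:
ld -T linker.ld -o kernel.elf kernel.o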

cuda nvcc cross compiler

I want to compile CUDA code on mac but make it executable on Windows.
Is there a way to set up an nvcc CUDA cross compiler?
The problem is that my Windows desktop will be inaccessible for a while due to traveling, and I don't want to waste time waiting until I get back to compile the code. If I have to wait, it would be a waste of time to debug the code, make sure it compiles correctly, and the like. My Mac is not equipped with CUDA-capable hardware, though.
The short answer is no, it is not possible.
It is a common misconception, but nvcc isn't actually a compiler. It is a compiler driver, and it relies heavily on the host C++ compiler to steer compilation of both host and device code. To compile CUDA for Windows, you must use the Microsoft C++ compiler. That compiler can't be run on Linux or OS X, so cross-compilation to a Windows target is not possible unless you are compiling on a Windows host (so 32/64-bit cross-compilation is possible, for example).
The other two CUDA platforms are equally incompatible, despite both requiring gcc for compilation, because the back ends are different (Linux is an ELF platform, OS X is a Mach-O platform), so even cross-compilation between OS X and Linux isn't possible.
You have two choices if compiling on the OS X platform is the goal:
1) Install the OS X toolkit. Even though your hardware doesn't have a compatible GPU, you can still install the toolkit and compile code (see the sketch below).
2) Install the Windows toolkit and Visual Studio inside a virtual Windows installation (or a physical Boot Camp installation), and compile the code inside Windows on the Mac. Again, you don't need NVIDIA-compatible hardware to do this.
If you want to run code without a CUDA GPU, there are a non-commercial option (GPU Ocelot) and a commercial one (PGI CUDA-x86) you could investigate.
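For the first option, note that compiling never needs to touch a GPU, so something like this works on a Mac with no CUDA device at all (file name is made up):

# Full compile and link on OS X; no CUDA-capable GPU required to build:
nvcc -arch=sm_20 -o mykernel mykernel.cu

# Or just compile to an object file to check that the code builds:
nvcc -arch=sm_20 -c mykernel.cu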

Execute 32 bit object file on 64 bit environment

I made a cross-compiling toolchain for arm-gcc, configuring binutils, newlib, gcc, and gdb for the arm-elf target. The problem I am having is that when I compile a program with arm-elf-gcc on my Mac, it generates a 32-bit executable which cannot be executed in the 64-bit environment.
What is the easiest way to circumvent this? I could place the 32 bit executables to an arm environment, but I am interested to know if I could execute the file in my Mac in any way?
--Added--
I should have done this before, but let me mention that the target of my program is a Beagleboard. I was expecting to compile and generate the objects using arm-gcc on my Mac OS X machine and transfer the *.o files to the Beagleboard to view the output. Alas, it gives the same error on the Beagleboard as well when I run ./hello.o.
Thanks,
Sayan
There are several issues preventing you from running your executable on a Mac.
1) Architecture. Your Mac is probably an x86/x86_64 machine (or PowerPC) but your binary is compiled for ARM architecture (which is the whole point of using a cross-compiler). These instruction sets are not compatible.
2) Your binary is linked as an ELF object file, whereas Macs use the Mach-O object file format. Your OS cannot load this executable format.
3) Your executable is linked against newlib (for some target which is probably not Mac OS) instead of the Mac OS libc. Your system calls are not correct for this platform.
If your program is a standard unix executable, you may be able to simply compile it with the standard system gcc and it will run. Otherwise, you can run it in an ARM emulator, though this may be pretty complicated to set up.
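As for the Beagleboard side of your edit: link a complete executable rather than running the intermediate .o, and use a toolchain that targets Linux (the arm-elf/newlib toolchain you built doesn't match the board's Linux syscalls). A rough sketch, with the toolchain prefix and host name as examples only:

# Compile AND link a full executable (a bare .o isn't runnable):
arm-linux-gnueabi-gcc -o hello hello.c

# Copy it to the board and run it there:
scp hello user@beagleboard:~
ssh user@beagleboard ./hello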
The fact that it's 32-bit is irrelevant - you can't execute ARM code on a Mac (unless you can find some kind of ARM emulator).
