How does 64 bit code work on OS-X 10.5? - macos

I initially thought that 64 bit instructions would not work on OS-X 10.5.
I wrote a little test program and compiled it with GCC -m64.
I used long long for my 64 bit integers.
The assembly instructions used look like they are 64 bit, e.g. imulq and movq 8(%rbp),%rax.
It seems to work.
I am only using printf to display the 64 bit values using %lld.
Is this the expected behaviour?
Are there any gotchas that would cause this to fail?
Am I allowed to ask multiple questions in a question?
Does this work on other OS's?
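
For reference, here is a minimal sketch of the kind of test program described above (the variable names are illustrative, not the original code):

    #include <stdio.h>

    int main(void) {
        long long a = 1234567890123LL;  /* needs more than 32 bits */
        long long b = 4096LL;
        long long p = a * b;            /* with -m64 this compiles to imulq */
        printf("%lld * %lld = %lld\n", a, b, p);
        return 0;
    }

Compiled with gcc -m64, it prints the correct 64-bit product.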

Just to make this completely clear, here is the situation for 32- and 64-bit executables on OS X:
Both 32- and 64-bit user space executables can be run on both 32- and 64-bit kernels in OS X 10.6, without emulation. On 10.4 and 10.5, both 32- and 64-bit executables can run on the 32-bit kernel. (This is not true on Windows)
The user space system libraries and frameworks are built 32/64-bit fat on 10.5 and 10.6. You can link against them normally, whether you're building for 32-bit, 64-bit, or both. A few libraries (basically the POSIX layer) are also built 32/64-bit fat on 10.4, but many of them are not.
On 10.6, the build tools produce 64-bit executables by default. On 10.5 and earlier, the default is 32-bit.
On 10.6, executables that are built fat will run the 64-bit side by default. On 10.5 and earlier, the 32-bit side is executed by default.
You can always manually specify which slice of a fat executable to use by using the arch command, e.g. arch -arch i386 someCommandToRunThatIWantToRunIn32BitMode. For application bundles, you can either launch them from the command line, or there is a preference if you "get info" on the application.
OS X and Linux use the LP64 model for 64-bit executables. Pointers and long are 64 bits wide, int is still 32 bits, and long long is still 64 bits. (Windows uses the LLP64 model instead -- long is 32 bits wide in 64 bit Windows).
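
A quick way to check the model for yourself is to print the type sizes (a sketch; the sizes in the comment assume an LP64 build):

    #include <stdio.h>

    int main(void) {
        /* LP64 (OS X, Linux):     int=4, long=8, long long=8, void*=8
           LLP64 (64-bit Windows): int=4, long=4, long long=8, void*=8 */
        printf("int:       %zu\n", sizeof(int));
        printf("long:      %zu\n", sizeof(long));
        printf("long long: %zu\n", sizeof(long long));
        printf("void *:    %zu\n", sizeof(void *));
        return 0;
    }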

Mac OS X 10.5 supports 64-bit user-land applications pretty well. In fact, Xcode runs in 64-bit in 10.5 on a compatible architecture.
It's only in 10.6 that the built-in applications (Finder, Safari, frameworks, daemons, etc.) also come in 64-bit versions.

Meta: I don't like to see answers deleted. I guess this has been discussed somewhere.
Anyway, KennyTM and the other kind soul got me started and although one answer was deleted, I appreciated your efforts.
It looks like this is expected behaviour on the Mac, and it even seems to work on a 32-bit Linux as well (although I have not tested extensively).
Yep. GCC behaves differently (at least in my limited observation) in 32-bit (-m32) and 64-bit (-m64) modes. In 32-bit mode, I was able to access variable arguments using an array. In 64-bit mode this just does not work.
I have learnt that you MUST access variable parameters using va_list as defined by stdarg.h because it works in both modes.
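
A minimal sketch of the stdarg.h approach (sum64 is a hypothetical example, not my actual code):

    #include <stdarg.h>
    #include <stdio.h>

    /* Sums 'count' long long arguments. Indexing past the last named
       parameter as if the arguments were an array can appear to work
       with -m32, but fails with -m64 because the x86-64 ABI passes the
       first several arguments in registers. va_arg works in both modes. */
    static long long sum64(int count, ...) {
        va_list ap;
        long long total = 0;
        int i;
        va_start(ap, count);
        for (i = 0; i < count; i++)
            total += va_arg(ap, long long);
        va_end(ap);
        return total;
    }

    int main(void) {
        printf("%lld\n", sum64(3, 1LL, 2LL, 3LL));  /* prints 6 */
        return 0;
    }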
Now I have a command-line program that runs and passes all of my test cases in 32 bit and 64 bit modes on Mac OS-X.
The program implements a linked list garbage collector sweeping 16-byte aligned malloc-allocated objects from a global list as well as machine registers and the stack - actually, there are extra registers in 64 bit mode, so I still have a bit of work to do.
Objects are either a collection of 32 or 64 bit words which link together to form LISP/Scheme-like data structures.
In summary, it is a complex program that does a lot of messing with pointers and it works the same under 32 and 64 bit modes.
Asking multiple questions does not get you all the answers you might want.
It seems to work, as I wrote, on Linux.
Again, thank you for helping me with this.

Related

Mac Architecture Questions

Hi everyone :) I am a newbie at developing applications for the Mac. My questions are regarding the different OS architectures on the Mac, and I am greatly confused by them. Kindly bear with me if my questions are very basic. Thank you all :)
I know that there is 32-bit support in 10.6 (Snow Leopard). I would like to know if there is 32-bit support in 10.7 (Lion)?
I have a 64-bit machine. I want a 32-bit 10.7 on it. How would I do so?
I have a 32-bit iMac and I have 10.6.8 on it. I have built an application on it; the application uses a user-developed library which is also 32-bit. Now I carry this application over to another Mac machine which has a 64-bit processor with 10.7 (Lion). Will I be able to execute the same application as such in 10.7 (Lion)? I was not able to do so.
OS X uses a binary format that can support multiple architectures (e.g. 32- and 64-bit Intel, as well as PowerPC, etc.) in a single executable or library. Most of the binaries and libraries in Lion are dual-architecture 32&64-bit Intel. So, yes, there is 32-bit support in Lion.
There is no such thing as 32-bit Lion; it's a dual-architecture OS. It can boot the kernel in either 32- or 64-bit mode, and run programs in 32- or 64-bit mode. Unlike most other OSes, it can even run programs in 64-bit mode under a 32-bit kernel. Whenever you run a program in Lion, it checks what architectures the program includes and what the CPU is capable of, and picks the "best" mode to run that program in.
There's no obvious reason this shouldn't work. If you were trying to use a 32-bit-only library from a program that was running in 64-bit mode, or a 64-bit-only library from a program running in 32-bit, it would fail. But if the program is 32-bit only it'll obviously run in that mode, your user developed library is 32-bit, and all of the libraries supplied with the OS are 32+64-bit.
There are a few things that might cause your 32-bit program to fail under Lion. First, does it depend on any libraries other than the one you mentioned and those supplied with the OS (e.g. libraries compiled locally by something like MacPorts, Fink, or Homebrew)? If so, those libraries might've been compiled 64-bit only. IMO libraries should always be compiled for all relevant architectures to avoid this sort of problem, but that's not the default.
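
On OS X the fix is cheap: pass several -arch flags and the compiler produces a fat binary in one step. A sketch for checking which slice actually runs (the -arch flags are specific to Apple's toolchain):

    /* build fat:  gcc -arch i386 -arch x86_64 archcheck.c -o archcheck */
    #include <stdio.h>

    int main(void) {
    #if defined(__x86_64__)
        puts("running the 64-bit (x86_64) slice");
    #elif defined(__i386__)
        puts("running the 32-bit (i386) slice");
    #else
        puts("some other architecture");
    #endif
        return 0;
    }

Running file archcheck lists the slices in the binary, and arch -arch i386 ./archcheck forces the 32-bit one.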
Another possible source of trouble is if your program isn't really a program, but something that loads into another program (e.g. a plugin of some sort, a screensaver, etc.). In that case, your plugin needs to support whatever mode the program that'll load it is running in. You can actually get this issue with Java programs, since the Java runtime will start in 64-bit mode (when the CPU supports it) in Lion.
Telling us more about your program and what specific error you get would probably help a lot...

Is Int64 supported on all Windows versions?

If I use a variable of type Int64, will it work on all Windows versions: Win95, 98, 2000, NT, XP, Vista, Win7? No matter whether the OS is 32-bit or 64-bit? And no matter what CPU they are using?
I just want to be sure, that my program will work on all Windows versions.
The size of datatypes provided by a language is not constrained by the operating system or hardware platform. I can have 64-bit integers on 32-bit platforms (or 16- or 8- or 11-bit, for that matter).
Int64 variables are supported by the 32 bit Delphi compiler. All operations on Int64 operands will give identical results no matter what platform (machine, OS etc.) the code executes on.
On 32 bit platforms the compiler has to use special routines to perform 64 bit arithmetic using the 32 bit machine instructions that are available. When targeting a 64 bit machine the compiler can use native 64 bit instructions. Either way, the end result is indistinguishable to you.
Note that if you execute a 32 bit Delphi executable on a 64 bit OS, you will still be running under the 32 bit compatibility layer, a.k.a. WOW64. From the perspective of the executable, you are running on a 32 bit machine. Unless you are using the new 64 bit compiler introduced in XE2, you will be producing 32 bit executables.
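
The same guarantee holds for 64-bit integer types in C, which makes it easy to see the principle in action (a sketch in C rather than Delphi, but the point is language-independent):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 641 * 6700417 = 4294967297 = 2^32 + 1, which cannot fit in
           32 bits. A 32-bit build computes it via helper routines and
           produces exactly the same result as a native 64-bit build. */
        int64_t a = 6700417;
        int64_t b = 641;
        printf("%lld\n", (long long)(a * b));
        return 0;
    }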
The 64bit integers will work fine on a 32bit operating system.
Performance gains from these data types, however, will only come with code compiled for a 64-bit target; for this you would need Delphi XE2.
Meanwhile you have the benefit of extra data capacity, but not extra execution speed (although this would not normally be a consideration for most applications).

Open in 32-bit mode

Under Mac OS, you can set a little option for some executables called "Open in 32-bit mode". Wouldn't they work directly? And they do, but some applications had to have this option selected in order to run without problems. This was frequent with Safari, where some add-ons required a 32-bit environment.
I can't understand what makes a 32-bit executable unable to run directly in 64-bit mode, so what exactly changes in 32-bit mode?
This is really only of historical interest. In the transition period from 32 bit to 64 bit many apps were built as universal with 3 or sometimes even 4 architectures combined into one fat binary (aka "Universal Binary"), typically ppc, x86 and x86-64. In a 32 bit x86 environment the 32 bit x86 executable would be used. In a 64 bit x86-64 bit environment the 64 bit executable would be used. However in some cases you might want to use the 32 bit x86 executable even in a 64 bit x86-64 environment, e.g. in the case you mentioned where you have older plug-ins which are 32-bit only and can not be used with a 64 bit executable. Hence the option to launch an app in 32 bit mode.
Obviously a 32 bit app uses 32 bit APIs and has a 32 bit address space, whereas a 64 bit app has a 64 bit address space and uses 64 bit APIs.

What is the ultimate difference between a 16-bit and 32-bit application?

32-bit x86 is a superset of 16-bit x86. Suppose I write code in 16-bit x86. It should ideally work on a system with 32-bit x86 without any hitch. But that is not the case. Compatibility is an issue here. But why exactly? Is it because a 32-bit OS installed on a 32-bit x86 machine loads programs into memory differently and manages memory differently?
Are different memory-management requirements the real difference between 16-bit and 32-bit applications?
In Windows:
The major problem with running a 16-bit program on a 32-bit OS is that most 16-bit programs ran in real mode, which is no longer supported (by the OS). The modes are fundamentally different and therefore require software emulation. Also, since all of the 16-bit API stubs, DOS functions, and BIOS calls are not available, programs cannot really interact with the operating system, making them unusable without some kind of emulation. In the case of Windows, NTVDM does all the emulation, starting from Windows NT 3.1.
Of course, if your program does not require any interaction with the OS, you should be able to run it. In terms of the opcodes and instruction set, it is true that 32-bit x86 is a superset of 16-bit x86. It's just that the environment in which the code usually runs is completely different.
The only difference between the 32-bit and the 16-bit address modes is the meaning and usage of the operand-size and address-size prefixes.
what is meant by 32-bit application?
Operand size prefix in 16-bit mode
There's a related (16bit on 64bit OS) discussion at superuser here.

16-bit Assembly on 64-bit Windows?

I decided to start learning assembly a while ago, and so I started with 16-bit assembly, using FASM.
However, I recently got a really new computer running Windows 7 64-bit, and now none of the compiled .COM files that the program assembles work any more. They give an error message saying that the .COM file is not compatible with 64-bit Windows.
32-bit assembly still works; however, I'd rather start with 16-bit and work my way up...
Is it possible to run a 16-bit program on windows 7? Or is there a specific way to compile them? Or should I give up and skip to 32-bit instead?
The reason you can't use 16-bit assembly is because the 16-bit subsystem has been removed from all 64-bit versions of Windows.
The only way to remedy this is to install something like DOSBox, or a virtual machine package such as VirtualBox and then install FreeDOS into that. That way, you get true DOS anyway. (NTVDM is not true DOS.)
Personally, would I encourage writing 16-bit assembly for DOS? No. I'd use 32- or even 64-bit assembly, the reason being that different operating systems have different sets of function calls (the ABI). So, the ABI for 64-bit Linux applications is different from that for 32-bit ones. I am not sure if that's the case with Windows, but the available interrupts and their meanings almost certainly differ.
Also, you've got all sorts of things to consider with 16-bit assembly, like the memory model in use. I might be wrong, but I believe a DOS .COM program gives you a single 64K segment to play with "and that's it". Everything, your entire heap and stack along with code, must fit into this space, as I understand it, which makes you wonder how anything ever worked, really.
My advice would be to just write 32-bit code. While it might initially seem like it would make sense to learn how to write 16-bit code, then "graduate" to 32-bit code, I'd say in reality rather the opposite is true: writing 32-bit code is actually easier because quite a few arbitrary architectural constraints (e.g., on what you can use as a base register) are basically gone in 32-bit code.
For that matter, I'd consider it open to substantial question whether there's ever a real reason to write 16-bit x86 code at all. For most practical purposes, it's a dead platform -- for desktop machines it's seriously obsolete, and for embedded machines, you're more likely to see things like ARMs or Microchip PICs. Unless you have a specific target in mind and know for sure that it's going to be a 16-bit x86, I'd probably forget that it existed, just like most of the rest of the world has.
32-bit Windows 7 and older include / enable NTVDM by default. On 32-bit Win8+, you can enable it in Windows Features.
On 64-bit Windows (or any other 64-bit OS), you need an emulator or full virtualization.
A kernel in long mode can't use vm86 mode to provide a virtual 8086 real-mode environment. This is a limitation of the AMD64 / x86-64 architecture.
With a 64-bit kernel running, the only way for your CPU to natively run in 16-bit mode is 16-bit protected mode (yes this exists; no, nobody uses it, and AFAIK mainstream OSes don't provide a way to use it). Or for the kernel to switch the CPU out of long mode back to legacy mode, but 64-bit kernels don't do that.
But actually, with hardware virtualization (VirtualBox, Hyper-V or whatever using Intel VT-x or AMD SVM), a 64-bit kernel can be the hypervisor for an entire virtual machine, whether that VM is running in 16-bit real mode or running a 32-bit OS (like Windows 98 or 2000) which can in turn use vm86 mode to run 16-bit real-mode executables.
Especially on a 64-bit kernel, it's usually easier to just emulate a 16-bit PC entirely (like DOSBox does), instead of using HW virtualization to run normal instructions natively while trapping direct hardware access (in / out, loads/stores to VGA memory, etc.) and int instructions that make DOS system calls / BIOS calls / whatever.
