Differences between compiling for i386 vs x86_64 in Xcode? - xcode

What are the differences between compiling a Mac app in Xcode with the Active Architecture set to i386 vs x86_64 (chosen in the drop down at the top left of the main window)? In the Build settings for the project, the Architecture options are Standard (32/64-bit Universal), 32-bit Universal, and 64-bit Intel. Practically, what do these mean and how does one decide?
Assume one is targeting OS X 10.5 and above. I see in Activity Monitor that compiling for x86_64 results in an app that uses more memory than one compiled for i386. What is the advantage? I know 64-bit is "the future", but given the higher memory usage, would it ever make sense to choose 32-bit?

32/64-bit Universal -- i386, x86_64, ppc
32-bit Universal -- i386, ppc
64-bit Intel -- x86_64 only
ppc64 is no longer supported.
x86_64 binaries are faster for a number of reasons: a more efficient ABI, more registers, and on many machines (most current ones, and all new ones) the kernel is 64-bit, so kernel calls are faster, etc.
While 64-bit carries some memory overhead, related mostly to how pointer-heavy your app's data structures are, keep in mind that 32-bit applications drag in the 32-bit versions of all frameworks. If yours is the only 32-bit app on the system, it is going to incur a massive amount of overhead compared to the 64-bit version.
64-bit apps also enjoy the latest and greatest Objective-C ABI: synthesized ivars, non-fragile ivars, unified C++/Objective-C exceptions, zero-cost @try blocks, etc., and there are a number of optimizations that are only possible in 64-bit, too.
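To make the pointer-overhead point concrete, here is a minimal sketch (the struct node is a made-up example, not from the discussion above) that you can compile once with -m32 and once with -m64 and compare the output; under LP64 every pointer and long doubles in size:

    #include <stdio.h>

    /* A pointer-heavy node, as you might find in a linked list or tree.
     * Under ILP32 (i386) each pointer is 4 bytes; under LP64 (x86_64) it is 8. */
    struct node {
        struct node *next;
        struct node *prev;
        void        *payload;
        long         refcount;   /* long also grows from 4 to 8 bytes under LP64 */
    };

    int main(void) {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(long)        = %zu\n", sizeof(long));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

Built for i386 this prints 4, 4 and 16; built for x86_64 it prints 8, 8 and 32, which is exactly the kind of growth the paragraph above is describing.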

iOS apps need to run on many different architectures:
armv7: Used in the oldest iOS 7-supporting devices (32-bit)
armv7s: Used in the iPhone 5 and 5C (32-bit)
arm64: For the 64-bit ARM processor in the iPhone 5S (64-bit)
i386: For the 32-bit simulator
x86_64: For the 64-bit simulator
The simulator basically emulates a 32-bit or 64-bit environment depending on whether i386 or x86_64 is set in Valid Architectures.
Every architecture requires a different binary, and when you build an app Xcode will build the correct architecture for whatever you’re currently working with. For instance, if you’ve asked it to run in the simulator, then it’ll only build the i386 version (or x86_64 for 64-bit).
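If you need to confirm at build time which of these architectures a particular compile targets, the usual predefined macros can be tested. A minimal sketch, assuming the macro names clang and GCC commonly define (verify them against your toolchain's documentation):

    #include <stdio.h>

    int main(void) {
    #if defined(__x86_64__)
        puts("built for x86_64 (64-bit simulator / Intel Mac)");
    #elif defined(__i386__)
        puts("built for i386 (32-bit simulator / Intel Mac)");
    #elif defined(__arm64__) || defined(__aarch64__)
        puts("built for arm64 (64-bit ARM device)");
    #elif defined(__arm__)
        puts("built for 32-bit ARM (armv7/armv7s device)");
    #else
        puts("built for an architecture not handled here");
    #endif
        return 0;
    }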

Unless you have a reason to compile for x86_64, I recommend just compiling for i386 (and PPC if you support that). Read Apple's stance on the matter:
Although 64-bit executables make it easier for you to manage large data sets (compared to memory mapping of large files in a 32-bit application), the use of 64-bit executables may raise other issues. Therefore you should transition your software to a 64-bit executable format only when the 64-bit environment offers a compelling advantage for your specific purposes.

Related

Is 32-bit ARM (windows) considered dead/deprecated?

I was curious about the ARM32 architecture (the 32-bit only version) and its future: according to the wiki page, the Windows 8 variant Windows RT was ARM32, but it is deprecated now.
Windows 11 seems like it will be ARM64-only.
What about devices released in-between?
I could not find any information/statistics related to this.
As far as I know, ARM64 can run 32-bit ARM applications, but anyone developing a system driver or otherwise working at a low level has to support both platforms.
For comparison, as far as I know the majority of current Android phones are already 64-bit, and given the 4 GB address-space limitation of 32-bit architectures, logic would dictate that outside of niche scenarios we should not really see 32-bit-only ARM systems.
Anyone has any information regarding this?
There is absolutely no reason for MS to release an ARM32 version:
No chip vendor is launching high-end ARM32 chips.
aarch64 is vastly superior to aarch32 while costing licensees little or nothing more.
aarch64 is NOT backward compatible with aarch32; it is a separate instruction set, not a superset.
Why would MS want to fragment the Windows ecosystem more than necessary?
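On the question's point that 64-bit Windows on ARM can run 32-bit ARM applications: if code needs to detect that situation at runtime, one possible sketch uses IsWow64Process2 from the Win32 API (available from Windows 10 onward; check the exact constants and availability against the Microsoft documentation before relying on this):

    #define _WIN32_WINNT 0x0A00   /* IsWow64Process2 needs Windows 10 headers */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        USHORT processMachine = 0, nativeMachine = 0;

        /* Reports the machine type the current process runs as, and the
         * machine type of the underlying OS/hardware. */
        if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
            fprintf(stderr, "IsWow64Process2 failed: %lu\n", GetLastError());
            return 1;
        }

        printf("process machine: 0x%04x, native machine: 0x%04x\n",
               processMachine, nativeMachine);

        if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 &&
            processMachine == IMAGE_FILE_MACHINE_ARMNT)
            puts("32-bit ARM process running via WOW64 on an ARM64 system");

        return 0;
    }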

Compiling on ARMv8 - Running on ARMv7

Is it possible to compile a package on ARMv8 and run it on ARMv7?
I am not really experienced in the whole building thing (yet).
I came to this question because my Odroid C1+ fails to compile icinga2 due to its very limited RAM.
The C2 has 2 GB of RAM and will probably do better at this task.
But can I run a package compiled on the C2 (ARMv8) on my C1+ (ARMv7)?
Is it possible to compile a package on ARMv8 and run it on ARMv7?
That's called cross-compiling, and it is the usual way ARM code is generated; most build machines for ARM binaries are probably x86_64 nowadays, though. If you have a compiler running on ARMv8 that targets ARMv7, I don't see a problem.
I am not really experienced in the whole building thing (yet). I came to this question because my Odroid C1+ fails to compile icinga2 due to its very limited RAM. The C2 has 2 GB of RAM and will probably do better at this task.
You know what is much, much better at compiling? A proper PC with more than 4 GB of RAM, massive RAM bandwidth and much higher storage bandwidth, with a heavily pipelined multicore CISC CPU rather than an energy-efficient ARM.
Really, software for embedded systems is usually built on non-embedded computers with cross-compilers. There are definitely different ways to cross-compile something for your C1+ on your PC; I'd generally recommend using the method your Linux distro (if you're using one) has for cross-compiling packages.
ARMv7 is a different platform from ARMv8, so compiling software for ARMv7 on ARMv8 has no advantage over compiling software for ARMv7 on x86. You'll need a cross-compiling toolchain anyway.
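As a quick sanity check before copying a binary from the C2 to the C1+, a tiny program can report what the compiler actually targeted. This sketch relies on the ACLE-style macros (__aarch64__, __ARM_ARCH) that GCC and clang generally define on ARM targets, e.g. when the C2's compiler is invoked with -march=armv7-a; verify the macros against your toolchain:

    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__)
        puts("compiled as 64-bit AArch64 code - this will NOT run on the ARMv7 C1+");
    #elif defined(__ARM_ARCH)
        printf("compiled as 32-bit ARM code, architecture level ARMv%d\n", __ARM_ARCH);
    #else
        puts("not an ARM target at all (probably a misconfigured cross-compiler)");
    #endif
        return 0;
    }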

64-bit processor running as 32-bit on Mac

According to Apple's Mac processor list, an i5 should be a 64-bit processor. According to this video, if I type uname -m in Terminal I should get x86_64, but in my case it says i386 instead. Why is that? I also developed an app that is 64-bit only. That app is NOT running on this Mac; it crashes at start. But if I compile it as x86_64 instead of 64-bit only, then it works. Does somebody have an idea how to fix this?
ARC is not supported under 32-bit runtimes. Therefore if you are using ARC you will need to produce 64-bit binaries only.
From Transitioning to ARC Release Notes:
ARC is supported in Xcode 4.2 for OS X v10.6 and v10.7 (64-bit applications) and for iOS 4 and iOS 5. Weak references are not supported in OS X v10.6 and iOS 4.
The video is wrong: uname -m tells you what mode the kernel is running in, which has very little to do with userland programs such as yours. If you want to find out for sure whether the CPU is 64-bit capable, use sysctl hw.cpu64bit_capable -- since you have an i5, it should print hw.cpu64bit_capable: 1, meaning "yes" (0 would mean "no"). Also, run the Activity Monitor utility and note the modes various processes are running in -- my guess is that a lot will be in "Intel (64 bit)", since in 10.6 most of the programs supplied with OS X came in 32/64-bit dual architecture and will prefer 64-bit.
Now, about your app: it should run in 64-bit mode whether you compile it 64-bit only or 32/64, so I doubt that's the problem. To be sure, compile it 32/64, run it, then use Activity Monitor to see what mode it's actually running in.
I can't tell for sure, but my guess would be that your app has a problem with ARC. At least as I understand it, that's only enabled if you compile 64-bit only (and disabled if you compile 32/64).
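The sysctl check from the answer above can also be done programmatically; a minimal sketch using sysctlbyname (declared in <sys/sysctl.h> on OS X):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void) {
        int32_t capable = 0;
        size_t  size = sizeof(capable);

        /* hw.cpu64bit_capable is 1 if the CPU can execute 64-bit code. */
        if (sysctlbyname("hw.cpu64bit_capable", &capable, &size, NULL, 0) != 0) {
            perror("sysctlbyname");
            return 1;
        }
        printf("hw.cpu64bit_capable: %d\n", capable);
        return 0;
    }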

i386 x86_64 combined architecture?

I am coming from Windows development. I have understood that a binary can be either 32-bit or 64-bit, but not both, and that on a 64-bit platform I can run a 32-bit binary but not vice versa.
On the Mac I am seeing a combined architecture like i386 x86_64, which is a bit of a surprise to me. Why and when exactly do we target an app on Mac OS X for this architecture, and what is the benefit? Why not 32-bit only, which per my understanding of Windows can run on 32-bit as well as 64-bit systems?
It's not that the code is compiled for a "mixed" architecture - it's just that it's compiled for multiple ones, and the results are packaged into a single "universal" (fat) binary; the OS picks the appropriate slice at launch.
The reason for compiling it for both 32-bit and 64-bit is that 64-bit programs generally perform better on a 64-bit architecture (most modern Macs) than 32-bit ones.

How does 64-bit code work on OS X 10.5?

I initially thought that 64-bit instructions would not work on OS X 10.5.
I wrote a little test program and compiled it with GCC -m64.
I used long long for my 64 bit integers.
The assembly instructions used look like they are 64-bit, e.g. imulq and movq 8(%rbp),%rax.
It seems to work.
I am only using printf to display the 64 bit values using %lld.
Is this the expected behaviour?
Are there any gotchas that would cause this to fail?
Am I allowed to ask multiple questions in a question?
Does this work on other OSes?
Just to make this completely clear, here is the situation for 32- and 64-bit executables on OS X:
Both 32- and 64-bit user space executables can be run on both 32- and 64-bit kernels in OS X 10.6, without emulation. On 10.4 and 10.5, both 32- and 64-bit executables can run on the 32-bit kernel. (This is not true on Windows)
The user space system libraries and frameworks are built 32/64-bit fat on 10.5 and 10.6. You can link against them normally, whether you're building for 32-bit, 64-bit, or both. A few libraries (basically the POSIX layer) are also built 32/64-bit fat on 10.4, but many of them are not.
On 10.6, the build tools produce 64-bit executables by default. On 10.5 and earlier, the default is 32-bit.
On 10.6, executables that are built fat will run the 64-bit side by default. On 10.5 and earlier, the 32-bit side is executed by default.
You can always manually specify which slice of a fat executable to use by using the arch command. eg. arch -arch i386 someCommandToRunThatIWantToRunIn32BitMode. For application bundles, you can either launch them from the command line, or there is a preference if you "get info" on the application.
OS X and Linux use the LP64 model for 64-bit executables. Pointers and long are 64 bits wide, int is still 32 bits, and long long is still 64 bits. (Windows uses the LLP64 model instead -- long is 32 bits wide in 64-bit Windows.)
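One practical consequence of the LP64/LLP64 difference is the old habit of stuffing a pointer into a long. A minimal sketch of the portability trap and the fix (intptr_t from <stdint.h>); note that an LLP64 compiler will typically warn about the truncating long cast, which is shown only to illustrate the trap:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int value = 42;
        void *p = &value;

        /* Under LP64 (64-bit OS X / Linux) long is 64 bits, so this round-trip
         * happens to work. Under LLP64 (64-bit Windows) long is only 32 bits,
         * so the cast would truncate the pointer. */
        long as_long = (long)p;

        /* intptr_t is guaranteed to be wide enough for a pointer on both models. */
        intptr_t as_intptr = (intptr_t)p;

        printf("sizeof(long)     = %zu\n", sizeof(long));
        printf("sizeof(intptr_t) = %zu\n", sizeof(intptr_t));
        printf("round-trip via long ok?     %s\n",
               (void *)as_long == p ? "yes" : "no");
        printf("round-trip via intptr_t ok? %s\n",
               (void *)as_intptr == p ? "yes" : "no");
        return 0;
    }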
Mac OS X 10.5 supports 64-bit user-land applications pretty well. In fact, Xcode runs as a 64-bit process on 10.5 on a compatible architecture.
It's only in 10.6 that the built-in applications (Finder, Safari, frameworks, daemons, etc.) also ship 64-bit versions.
Meta: I don't like to see answers deleted. I guess this has been discussed somewhere.
Anyway, KennyTM and the other kind soul got me started, and although one answer was deleted, I appreciated your efforts.
It looks like this is expected behaviour on the Mac, and it even seems to work on 32-bit Linux as well (although I have not tested extensively).
Yep. GCC behaves differently (at least in my limited observation) in 32-bit (-m32) and 64-bit (-m64) modes. In 32-bit mode, I was able to access variable arguments using an array; in 64-bit mode this just does not work.
I have learnt that you MUST access variable parameters using va_list as defined by stdarg.h because it works in both modes.
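A minimal sketch of the stdarg.h pattern that works in both modes (sum_ints is a hypothetical example function, not from the original program):

    #include <stdarg.h>
    #include <stdio.h>

    /* Sums 'count' ints passed as variable arguments. Walking the arguments
     * with va_arg works in both -m32 and -m64 builds; treating them as an
     * array on the stack only happens to work under the 32-bit calling
     * convention, because x86_64 passes the first arguments in registers. */
    static int sum_ints(int count, ...) {
        va_list ap;
        int total = 0;

        va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += va_arg(ap, int);
        va_end(ap);

        return total;
    }

    int main(void) {
        printf("%d\n", sum_ints(4, 1, 2, 3, 4));   /* prints 10 */
        return 0;
    }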
Now I have a command-line program that runs and passes all of my test cases in 32 bit and 64 bit modes on Mac OS-X.
The program implements a linked-list garbage collector, sweeping 16-byte-aligned malloc-allocated objects from a global list as well as machine registers and the stack - actually, there are extra registers in 64-bit mode, so I still have a bit of work to do.
Objects are collections of 32- or 64-bit words which link together to form LISP/Scheme-like data structures.
In summary, it is a complex program that does a lot of messing with pointers and it works the same under 32 and 64 bit modes.
Asking multiple questions does not get you all the answers you might want.
It seems to work, as I wrote, on Linux.
Again, thank you for helping me with this.
