Create a "Universal Binary" from two apps? - xcode

Short question: How do you take two apps, one for intel and the other ppc, and package them into one Universal Binary?
My current thoughts on this problem:
I have read through the Apple developer documentation on universal binaries and haven't been able to find an answer, so it may not be possible.
For reasons I won't go into here, I have two builds of my program (as opposed to using the Xcode tools to compile a universal binary in one pass): one for Intel Macs and the other for PPC Macs running 10.3.9 or later. Sharing resources is a non-issue.
I could put both MyProg_intel.app and MyProg_ppc.app into one zip and distribute it that way, but that could confuse many of the people I will be distributing my program to.

See the lipo tool. It will let you stitch together your PPC and i386 binaries.
Also, sometimes separate targets for different architectures can be avoided by using conditional build settings in Xcode. This is useful if you need to link against a different binary library for each architecture, for example.
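For instance, a hedged sketch of such a conditional setting in an .xcconfig file, using Xcode's architecture-conditional build-setting syntax (the library names here are hypothetical):

    // Link a different (hypothetical) library per architecture:
    OTHER_LDFLAGS[arch=ppc]  = -lfoo_ppc
    OTHER_LDFLAGS[arch=i386] = -lfoo_i386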

Check out the man page for lipo. I believe you can use -create to take multiple input files and create a single output file with multiple architectures.
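A hedged sketch of that approach, assuming the executable inside each bundle is named MyProg and the resources are identical (as the question says they are):

    # Start from one bundle, then fuse the two executables inside it:
    cp -R MyProg_ppc.app MyProg_universal.app
    lipo -create MyProg_ppc.app/Contents/MacOS/MyProg \
                 MyProg_intel.app/Contents/MacOS/MyProg \
         -output MyProg_universal.app/Contents/MacOS/MyProg
    # Confirm both architectures made it in:
    lipo -info MyProg_universal.app/Contents/MacOS/MyProg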

Apple's developer web site has an article on Building an Open Source Universal Binary that explains how to use Xcode to 'package' a Universal Binary using build scripts. This is probably your best road to sanity. You could use lipo, but in the long run if you are going to update and maintain your application, having an Xcode project that does the magic for you is going to take up a lot less of your time.

In order to create a Universal Binary, you have to use Xcode and select both Intel and PPC target architectures. As far as I know, you can't just stuff two different binaries into one .app ex post facto.
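If you do rebuild from a single project, the same selection can be scripted; a minimal sketch, assuming the build settings are overridden on the xcodebuild command line (configuration name illustrative):

    xcodebuild -configuration Release ARCHS="ppc i386"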

Go binary file for all platforms

I have a .go file and produced the binary file using the go build command on a Mac. Is there a way to build a binary file which runs on Windows, Linux, and iOS?
I am aware we can build a binary file for each of them by changing the GOOS and GOARCH params, but I would like to have a single Go binary file which runs on all the platforms. Please help me out.
Thanks in advance
No, it is not at all possible in Go or any other programming language (the executable is necessarily tailored to individual platforms and architectures).
However, tools do exist that handle the cross-compiling for you.
This post helps explain how to cross compile with Golang (which is pretty easy at this point).
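For reference, a minimal sketch of that cross-compilation step, one build per target (output names are illustrative):

    GOOS=linux   GOARCH=amd64 go build -o myprog-linux-amd64 .
    GOOS=windows GOARCH=amd64 go build -o myprog-windows-amd64.exe .
    GOOS=darwin  GOARCH=arm64 go build -o myprog-darwin-arm64 .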
There's also a Unix StackExchange question, https://unix.stackexchange.com/a/298283/177527, which explains why different architectures require different binaries:
The reason is because the code is compiled to machine code for a specific architecture, and machine code is very different between most processor families (ARM and x86 for instance are very different).
The binary also depends on the OS, as explained here https://softwareengineering.stackexchange.com/a/251255:
Binary Format: The executable has to conform to a certain binary format, which allows the operating system to correctly load, initialize, and start the program. Windows mainly uses the Portable Executable format, while Linux uses ELF.
System APIs: The program may be using libraries, which have to be present on the executing system. If a program uses functions from Windows APIs, it can't be run on Linux. In the Unix world, the central operating system APIs have been standardized to POSIX: a program using only the POSIX functions will be able to run on any conformant Unix system, such as Mac OS X and Solaris.
For Mac (not Windows), you can combine cross-compilation with a tool like randall77/makefat to generate a "universal binary", which will run on any architecture supported by one of the input executables.
This is currently implemented in goreleaser/goreleaser PR 2572, which means the process would be completely automated.
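A hedged sketch of the manual route, assuming the argument order in the makefat README (output file first, then inputs):

    # Build one Mac binary per architecture, then fuse them:
    GOOS=darwin GOARCH=amd64 go build -o myprog_amd64 .
    GOOS=darwin GOARCH=arm64 go build -o myprog_arm64 .
    makefat myprog myprog_amd64 myprog_arm64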

Which Xamarin ABIs should we support

Currently our Xamarin Android app (PCL) is huge, in my opinion, even in release mode. I suspect it is due to the supported architectures; we have all of them selected. Does anyone know if we have to select all of these? We are not using the Android NDK at all, either.
I will copy part of my answer from here.
Make sure you are at least checking the following architectures: armeabi-v7a and x86. You could include the other three, but we do not: we use LLVM compiling in release mode, which is not compatible with the 64-bit architectures, and the remaining one, armeabi, is deprecated. The good thing is that all of the 64-bit architectures can still run 32-bit builds, so they are all still covered if you check those two.
So I would check just those two unless you have a specific reason to check the others. We have had zero problems installing our app on devices using only those two.
On a side note, turning on LLVM compiling and optimizing your icons/images will help with the final APK size.
Edit: Since writing this, we ran into a bug on certain devices (the Nexus 9) that causes crashes when launching the app. The solution is to also check the arm64-v8a architecture. This will probably increase app size, so weigh the pros and cons, see how much of a difference including the architecture makes in your APK size, and split your APK per architecture if necessary.
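For concreteness, a hedged sketch of selecting those ABIs from the command line via the AndroidSupportedAbis MSBuild property (the project file name is illustrative):

    msbuild MyApp.Android.csproj /p:Configuration=Release \
        "/p:AndroidSupportedAbis=armeabi-v7a;x86;arm64-v8a"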
No, you do not have to select all of them. You can create an .apk per ABI if you want, to reduce the size of each .apk. Note: the encouraged method is to develop and publish a single .apk. However, this is not always practical, and sometimes it's better to create separate ones. Although this answer only goes into depth about different CPU architectures (ABIs), you could also create different .apk files for screen sizes, device features, and API levels.
https://developer.xamarin.com/guides/android/advanced_topics/build-abi-specific-apks/
http://developer.android.com/google/play/publishing/multiple-apks.html
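As a sketch, the per-ABI build described in the first link can be driven by the AndroidCreatePackagePerAbi MSBuild property (again, the project file name is illustrative):

    msbuild MyApp.Android.csproj /p:Configuration=Release \
        /p:AndroidCreatePackagePerAbi=true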
I would recommend grabbing a tool like WinDirStat (https://windirstat.info/) or Disk Inventory X (http://www.derlien.com/) to investigate why your .apk is so large. You might find other reasons why your .apk is large, such as resources (images, raw files), assemblies, etc.

Win32: Get info on statically linked libraries

Assuming you only have access to the final product (i.e. in the form of an exe file), how would you go about finding out which libraries/components the developer used to create the application?
In my specific case, the question concerns an application developed in VC++ using a few third-party components, and I'm curious which ones those are.
But I think the question is generally valid, e.g. when you need to prove that a developer complies with the license requirements of a specific library.
So, what you're saying is that if I suspect a binary is using a certain library, I could try to map the respective function calls and see if I get a result. But there is no shortcut: unless I am willing to try out hundreds of mappings, or the dev left some information in strings or other resources, I have little chance of finding this out. Yes?
There is a small shortcut. Here's what I'd do:
Check the executable for strings and constants, and try to find out which library they come from.
If the libraries used are open source, compile them yourself and create FLAIR signatures (IDA Pro).
Apply the generated FLAIR signatures to the target executable (a command-line sketch follows below).
In some situations this can really work like a charm, letting you distinguish the application's own code from the libraries it uses.
The IDA Pro Book - Ch 12. Library Recognition Using FLIRT Signatures
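As a sketch, generating and applying a signature looks roughly like this (tool names from the IDA FLAIR utilities; the library and signature names are hypothetical, and the pattern extractor depends on the object format, e.g. pcf for COFF or pelf for ELF):

    pcf foo.lib foo.pat                      # extract patterns from the compiled library
    sigmake -n"libfoo 1.0" foo.pat foo.sig   # build a FLIRT signature from the patterns
    # then apply it in IDA via File > Load file > FLIRT signature file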

Cross-compile on a Linux host for various targets

I have a set of more or less portable C/C++ sources sitting on a Linux development host that I would like to be able to:
compile for 32- and 64-bit Linux targets
cross-compile for 32- and 64-bit Windows targets
cross-compile for 32- and 64-bit Mac targets
and, ideally, without any runtime dependencies on compatibility layers like cygwin1.dll or the MinGW runtime, though I could use them if there's no other choice. If I have to use them, I'd prefer to statically link their functionality into my code.
The target binary that is desired is:
a shared library (.so) for Linux and Mac targets, and
a DLL for Windows.
I have no idea how to build a cross-compiler (and the associated toolchain) from scratch. I hear that pre-built cross-compiler toolchains are available for various host-and-target combinations, but I don't know where to find them, or even how to use them without running into runtime crashes/coredumps later due to pointer-model subtleties (LP64, LLP64, etc.), wrong or inadequate compiler switches, or other misconfiguration.
So far I've been unable to find relevant and complete information on the above, and what little I have managed to find is scattered across so many bits and pieces that I'm not even sure whether all that I've read is complete or even correct (i.e. applies fully to my case, no more, no less).
I'm not a compiler expert, just a regular user. I would appreciate information on achieving the above compilation goals.
I would like to cross-compile a library for Mac OS X on Linux, and I am considering imcross. The instructions on the site are simple, but every time you set up a cross-compiling environment you have to fix a lot of things, so I don't expect it to be straightforward. The website notes some limitations of the project, but it is the best I have come across.
Since it isn't a priority for me right now (I have other things to do before tackling this task), I haven't set up the cross environment yet. I am going to do that in a few days' time.
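For the Windows targets in the question, one commonly used route is the mingw-w64 cross toolchain; a minimal sketch, with compiler names as packaged on Debian/Ubuntu and illustrative file names:

    # 64-bit and 32-bit Windows DLLs, with the GCC support code linked statically:
    x86_64-w64-mingw32-gcc -shared -o mylib64.dll mylib.c -static-libgcc
    i686-w64-mingw32-gcc   -shared -o mylib32.dll mylib.c -static-libgcc
    # 32- and 64-bit Linux shared objects with the native compiler:
    gcc -m32 -shared -fPIC -o libmylib32.so mylib.c
    gcc -m64 -shared -fPIC -o libmylib64.so mylib.c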

How can I compile object code for the wrong system? (cross-compiling question)

Reference this question about compiling. I don't understand how my program for the Mac can use the right -arch flags, compile with those flags on the very system they describe (a ppc64 G5), and still produce the wrong object code.
Also, if I used a cross-compiler on Linux to produce 10.5 code for the Mac, how would that be any different from what I described above?
Background: I have tried to compile various Apache modules. They compile with -arch ppc, -arch ppc64, etc.; I get no errors and I get my mod_whatever.so. But Apache always complains that some symbol isn't found. Apparently it has to do with what the compiler produces, even though the file type says it is a universal binary for ppc, ppc64, i386, and x86_64 and seems to match all the other .so mods I have.
I guess I don't understand how it could compile for my system with no problem and then say my system can't use it. Maybe I do not understand what a compiler is actually giving me.
EDIT: All error messages and the complete process can be seen here.
Thank you.
Looking at the other thread and elsewhere, and without a G5 or OS X Server installation, I can only make a few comments and suggestions, but perhaps they will help.
It's generally not a good idea to be modifying the o/s vendor's installed software. Installing a new Apache module is less problematic than, say, overwriting an existing library but you're still at the mercy of the vendor in that a Software Update could delete your modifications and, beyond that you have to figure out how the vendor's version was built in the first place. A common practice in the OS X world is to avoid this by making a completely separate installation of an open source product, like Apache, using, for instance, MacPorts. That has its cons, too: to achieve a high-level of independence, MacPorts will often download and build a lot of dependent packages for things which are already in OS X but there's no harm in that other than some extra build cycles and disk space.
That said, it should be possible to build and install Apache modules to supplement those supplied by Apple. Apple publishes the changes it makes to open source products here; you can drill down into the various versions there to find the apache directory, which contains the source, Makefile, and applied patches. That might be of help.
Make sure that the mod_*.so files you build are truly 64-bit and don't depend on any non-64-bit libraries. Use otool -L mod_*.so to see the dynamic libraries each references, and then use file on those libraries to ensure they all have ppc64 variants.
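Something like this (module and library names are hypothetical):

    otool -L mod_example.so           # list the dynamic libraries it references
    file /usr/lib/libexample.dylib    # show the architectures a library contains
    lipo -info mod_example.so         # another way to list the architectures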
Make sure you are using up-to-date developer tools (Xcode 3.1.3 is current).
While the developer tool chain uses many open source components, Apple has enhanced many of them and there are big differences in OS X's ABIs, universal binary support, dynamic libraries, etc. The bottom line is that cross-compilation of OS X-targeted object code on Linux (or any other non-OS X platform) is neither supported nor practical.
