Hi all, I need to change the architecture type in a file.
When I run the lipo -info command I get only armv7, but I need to have i386 next to it.
Is there any command that can add the i386 architecture to the file?
Yes, it's called cc, but it depends on some minor configuration files (*.m or *.c files).
Edit: Sorry for not making the joke obvious. What I wanted to say is:
There's no way to change the architecture of an executable file. Executables are produced by compiling source code to machine code, and this process is not reversible. You'll need the original source code to recompile the project for the missing architecture. Then you can use lipo to combine the executables into a fat binary.
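For example, once you have rebuilt the project for i386, the combining step is just lipo (the binary names here are hypothetical):
lipo -create MyApp-armv7 MyApp-i386 -output MyApp
lipo -info MyApp    # should now list armv7 and i386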
Related
We have a CMake-based project targeting Xcode, and we must include a precompiled third-party library that ships separate arm64 and x86_64 binaries.
What we have working now is to simply attach both binaries like this:
add_library( someLib INTERFACE )
add_library( someLib_x64 STATIC IMPORTED )
set_target_properties(
    someLib_x64
    PROPERTIES
    IMPORTED_LOCATION_RELEASE "path/to/x64/libsomeLib.a"
)
add_library( someLib_arm STATIC IMPORTED )
set_target_properties(
    someLib_arm
    PROPERTIES
    IMPORTED_LOCATION_RELEASE "path/to/arm/libsomeLib.a"
)
target_link_libraries(
    someLib
    INTERFACE
    someLib_x64
    someLib_arm
)
This seems to result in a valid compilation for both architectures (building for "Any Mac (Apple Silicon, Intel)"), however it causes a bunch of linker warnings as each architecture complains about the other one.
ld: warning: ignoring file /path/to/x64/libsomeLib.a, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
and vice versa.
What is a cleaner way to do this that avoids the linker warnings? I couldn't find an applicable generator expression to change the link path.
Edit: I misunderstood this previously. I think you have three options:
1. Suppress the warning. It doesn't actually affect anything, so the simplest fix is to add
add_link_options("-w")
to ignore it globally, or to change the link options only for that target.
2. Try the newer CMake concept of IMPORTED targets (IMPORTED_TARGET); it looks like a perfect fit for your case, but it requires a recent CMake version.
3. Compile a universal library from the source code, for example by changing the architecture flags (see the official CMake example), but this looks like it needs another project containing the source code of the library. A minimal sketch follows this list.
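For the third option, a rough sketch, assuming you have the library's sources in their own CMake project: the standard CMAKE_OSX_ARCHITECTURES variable asks CMake/Xcode to build a universal binary.
cmake -G Xcode -DCMAKE_OSX_ARCHITECTURES="arm64;x86_64" path/to/libsomeLib-source
cmake --build . --config Release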
UPDATE: ACCEPTED ANSWER:
Based on the documentation for IMPORTED_TARGET linked here, it turns out you can use the symbol $(CURRENT_ARCH) in the library path, which Xcode expands at link time.
Works perfectly.
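A minimal sketch of what that can look like (the paths are hypothetical, and it assumes the per-architecture libraries sit in directories named exactly like Xcode's architecture names, e.g. path/to/x86_64 and path/to/arm64):
add_library( someLib STATIC IMPORTED )
set_target_properties(
    someLib
    PROPERTIES
    IMPORTED_LOCATION_RELEASE "path/to/$(CURRENT_ARCH)/libsomeLib.a"
)
# Xcode substitutes $(CURRENT_ARCH) at build time, so each architecture
# links against its own copy and the cross-architecture warnings go away.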
You can combine the two .a files into a fat binary and use the combined library in the build. The linker will select the correct slice for each architecture.
To combine the .a library files, you can use the lipo command:
lipo -create 'path/to/x64/libsomeLib.a' 'path/to/arm/libsomeLib.a' \
-output 'path/to/combined/libsomeLib.a'
The combined library file can be reused until you need to install an update to the library. Alternatively, if you prefer not to manage the library manually, you can create an aggregate target that combines the library files on every build, as in the sketch below.
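A minimal sketch of that aggregate-style setup (the target names combineSomeLib and yourAppTarget are hypothetical; the paths are the ones from above):
add_custom_target(
    combineSomeLib
    COMMAND lipo -create
            "path/to/x64/libsomeLib.a"
            "path/to/arm/libsomeLib.a"
            -output "path/to/combined/libsomeLib.a"
    COMMENT "Creating universal libsomeLib.a with lipo"
)
# Ensure the fat library is (re)built before the consumer links against it.
add_dependencies( yourAppTarget combineSomeLib )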
I have a CMake project that's used to generate an iOS-targeted Xcode project supporting multiple CPU architectures (arm64 and armv7).
My CMake project includes some commands (defined with add_custom_command) that convert Lua scripts into C++ source files. These generated C++ files differ by architecture (the armv7 file should not be compiled for arm64 and vice versa).
The tool is meant to be invoked like this:
./data_tool --input <script> --output <C++ source> --architecture <armv7 or arm64>
My (incorrect) CMake file currently looks something like this:
foreach( ARCHITECTURE ${TARGET_ARCHITECTURES} )
    string(
        REPLACE ".lua" ".cpp" GENERATED_CPP
        ${GENERATED_SOURCE_DIRECTORY}/${ARCHITECTURE}/${INPUT_SCRIPT}
    )
    add_custom_command(
        OUTPUT ${GENERATED_CPP}
        COMMAND ${DATA_TOOL} --input "${INPUT_SCRIPT}" --output "${GENERATED_CPP}" --architecture ${ARCHITECTURE}
        MAIN_DEPENDENCY ${INPUT_SCRIPT}
    )
    list( APPEND GENERATED_SOURCE ${GENERATED_CPP} )
endforeach()
Later, GENERATED_SOURCE is appended to the source file list passed to add_executable. This code is obviously wrong because both the armv7 and arm64 files are compiled when building for either architecture.
How can I tell CMake that each architecture compiles a different set of source files?
Xcode doesn't have a great way to exclude files based on the architecture being built. While it is possible (see Disabling some files in XCode project from compilation), setting this up via CMake is going to be somewhat difficult.
Instead, I would suggest simply making your generation tool/script put preprocessor guards around the entire file for the architecture that the generated file supports. That way, when Xcode compiles them, they will be essentially empty except for the architecture they are meant for. This answer (Determine if the device is ARM64) shows how to do a conditional compile based on arm64 (and use the reverse for armv7).
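A minimal sketch of what a guarded generated file could look like; the contents are hypothetical, __arm64__ is the macro Clang predefines for the arm64 slice, and the armv7 file would use the opposite condition (e.g. #if defined(__arm__) && !defined(__arm64__)):
// Generated from some_script.lua, intended for arm64 only.
#if defined(__arm64__)

#include <cstddef>

// Hypothetical generated payload; only the arm64 slice compiles this.
const unsigned char kScriptData[] = { 0x2a /* ... generated bytes ... */ };
const std::size_t kScriptDataSize = sizeof(kScriptData);

#endif // defined(__arm64__)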
Well, don't put generated sources for different architectures into the same list. Unwrap the foreach body and just repeat those commands for each architecture.
If you don't want to introduce code duplication, you can write a CMake function that creates the custom command and returns the list of generated sources, as in the sketch below. See this question for how to return values from functions.
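A minimal sketch of such a function (the names are hypothetical; it just wraps the commands from the question and hands the generated file back via PARENT_SCOPE):
function( add_generated_script OUT_VAR ARCHITECTURE INPUT_SCRIPT )
    string(
        REPLACE ".lua" ".cpp" GENERATED_CPP
        ${GENERATED_SOURCE_DIRECTORY}/${ARCHITECTURE}/${INPUT_SCRIPT}
    )
    add_custom_command(
        OUTPUT ${GENERATED_CPP}
        COMMAND ${DATA_TOOL} --input "${INPUT_SCRIPT}" --output "${GENERATED_CPP}" --architecture ${ARCHITECTURE}
        MAIN_DEPENDENCY ${INPUT_SCRIPT}
    )
    # CMake functions have no return value; write into the caller's scope.
    set( ${OUT_VAR} ${GENERATED_CPP} PARENT_SCOPE )
endfunction()

# One call per architecture, each feeding its own executable's source list.
add_generated_script( ARM64_GENERATED_CPP arm64 ${INPUT_SCRIPT} )
add_generated_script( ARMV7_GENERATED_CPP armv7 ${INPUT_SCRIPT} )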
I have an SML/NJ program that I can run as a heap image, and I want to create a standalone executable binary. However, the heap2exec tool in SML/NJ 110.73 always yields errors for me.
I created my heap image tigerc.x86-darwin via the following:
ml-build sources.cm Main.main tigerc
I can run my program fine using the heap image via
sml #SMLload=tigerc.x86-darwin
I should be able to create the standalone binary via
heap2exec tigerc.x86-darwin tigerc
but that generates the error
ld: warning: -macosx_version_min not specificed, assuming 10.7
ld: warning: ignoring file tigerc.o, file was built for unsupported file format
which is not the architecture being linked (i386)
I looked at the heap2exec shell script, and the key lines (variable-expanded) do the following:
heap2asm "$heapfile" "$execfile".s
cc -c -o "$execfile".o "$execfile".s
ld -o "$execfile" ${RUNX} "$execfile".o
When I run these steps individually, the cc command generates an x86_64 .o file, but the ld command is trying to link an i386 executable. So I need to convince the cc command to generate an i386 .o file instead.
Is there a way to set an environment variable to get cc to build i386 instead of x86_64? (ARCH doesn't do the trick, by the way; it's already set to i386.)
Or is there another workaround to get heap2exec to generate the right architecture?
Try adding CFLAGS=-m32 as an environment variable. That's the standard way to force it to build a 32-bit object file.
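A sketch of both routes, assuming heap2exec forwards CFLAGS to its cc invocation (if it doesn't, the manual steps below force a 32-bit object file with an explicit -arch i386; RUNX is the runtime object path the heap2exec script already uses):
CFLAGS=-m32 heap2exec tigerc.x86-darwin tigerc

# Or run the script's steps by hand with an explicit architecture:
heap2asm tigerc.x86-darwin tigerc.s
cc -c -arch i386 -o tigerc.o tigerc.s
ld -o tigerc ${RUNX} tigerc.o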
I know you're asking specifically about SML/NJ, but MLton has 64-bit support and makes this kind of task really easy. You might thank yourself later if you're in a position to use it to generate executable binaries instead.
I would like to rename symbols inside object files (.o) with something that would be the Mac equivalent of binutils' objcopy --redefine-syms tool.
I found no arm-apple-darwin10-objcopy. I tried MacPorts' arm-elf-binutils port and also played a bit with otool and segedit, without much success.
Any ideas please?
Sounds like a job for Agner Fog's objconv.
From the announcement:
I have now finished making full support for Mach-O files in the object file converter mentioned in my previous posts. You may use it as a replacement for the missing objcopy utility.
Objconv can be used for the following purposes:
Convert object files and library/archive files between Mach-O, ELF, COFF and OMF formats for all x86 and x86-64 platforms.
Change symbol names in object files, make symbols weak, add alias names to symbols.
Build, modify and convert static library files (*.a, *.lib) across platforms (Mac, Linux, BSD, Windows)
Dump object files and executable files
Disassemble object files and executable files. Very good disassembler.
objconv manual
objconv.zip - source
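From memory, the symbol-rename option is -nr (name replace), so a rename would look roughly like this; double-check the exact syntax against the manual linked above:
objconv -nr:_old_name:_new_name input.o output.o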
I know I'm resurrecting this post from the dead, but...
I have a sudden need to do this as well, and discovering that objcopy doesn't work on OSX was a bit of a shock. But I think it's possible to use ld to achieve the same effect:
ld -r input.o -o output.o -alias oldsymbol newsymbol -unexported_symbol oldsymbol
This really just creates an alias for the symbol under a new name and hides the old one.
I haven't had a chance to do much testing yet, but looking at the output file with nm shows it seems to be doing the right thing.
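For example, a quick sanity check with nm (the symbol names here are placeholders):
nm -g input.o | grep _oldsymbol     # the original external symbol
nm -g output.o | grep _newsymbol    # the alias shows up; the old name is no longer exported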
objconv does not currently work for ARM, so this option is not available for the iPhone. It should be no problem to use objconv to go from ELF to Mach-O for Mac OS X x86/x64, though. Let me know if you find a solution for ARM.
I would like to know how you can support the i386 and ppc architectures for programs in /bin.
I run for instance
bin $ file amber
I get
amber: setgid Mach-O universal binary with 2 architectures
amber (for architecture i386): Mach-O executable i386
amber (for architecture ppc): Mach-O executable ppc
How do programs support i386 and ppc in the source code?
In other words, which components could you remove from /bin/amber, for instance, if you dropped support for the ppc architecture?
It's called a Universal binary. In short, the executable contains both types of executable code. Apple has a published document describing how developers should build their applications to run on either platform.
The lipo executable can be used to remove either version of the code from the file. If you want your executables to contain only one architecture, lipo will do it, as in the example below.
Be aware that there is more than just ppc and i386, although these are the "safest" architectures to choose for a Universal binary. Read the manpage for arch; there you can see that a modern OSX binary is likely to contain any of ppc, ppc64, i386 or x86_64. There are many more listed, but they exist there for completeness.
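For example (using the amber binary from the question; the output names are hypothetical):
lipo amber -thin i386 -output amber-i386      # keep only the i386 slice
lipo amber -remove ppc -output amber-no-ppc   # drop just the ppc slice
lipo -info amber-i386                         # verify what's left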
It's called a fat binary.
There's a copy of the native code for each architecture in the binary. The binary format and the operating system have to support it, so the loader knows where to look in the file for the correct code.