Compiling on ARMv8 - Running on ARMv7

Is it possible to compile a package on ARMv8 and run it on ARMv7?
I am not really experienced in the whole building thing (yet).
I came to this question because my Odroid C1+ fails to compile icinga2 due to the very limited RAM.
The C2 has 2 GB of RAM and will probably do better at this task.
But can I run a C2 (ARMv8) compiled package on my C1+ (ARMv7)?

Is it possible to compile a package on ARMv8 and run it on ARMv7?
That's called cross-compiling and is the usual way ARM code is generated – though most build machines for ARM binaries are probably x86_64 nowadays. But if you have a compiler running on ARMv8 that targets ARMv7, I don't see a problem.
I am not really experienced in the whole building thing (yet). I came to this question because my Odroid C1+ fails to compile icinga2 due to the very limited RAM. The C2 has 2 GB of RAM and will probably do better at this task.
You know what is much, much better at compiling? A proper PC with more than 4 GB of RAM, massive RAM bandwidth and much higher storage bandwidth, with a heavily pipelined multicore CISC CPU rather than an energy-efficient ARM.
Really, software for embedded systems is usually built on non-embedded computers with cross-compilers. There are definitely different ways to cross-compile something for your C1+ on your PC; I'd generally recommend using the method your Linux distro (if you're using any) has for cross-compiling packages.
ARMv7 is a different platform from ARMv8, so compiling software for ARMv7 on ARMv8 has no advantage over compiling software for ARMv7 on x86. You'll need a cross-compiling toolchain either way.
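As a rough sketch of the distro-toolchain route (assuming a Debian/Ubuntu-style x86_64 host with the arm-linux-gnueabihf cross packages installed; the flags are illustrative, not a tested icinga2 build recipe):

    /*
     * hello.c - trivial program to verify the ARMv7 cross toolchain works.
     *
     * On the x86_64 build host (not on the Odroid):
     *
     *   arm-linux-gnueabihf-gcc -march=armv7-a -mfloat-abi=hard -O2 -o hello hello.c
     *
     * "file hello" should then report a 32-bit ARM EABI executable; copy it to
     * the C1+ and run it there.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from an ARMv7 cross build\n");
        return 0;
    }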

Related

what does 'build a gcc cross compiler' mean?

I am very new to Linux and GCC. The price of the Raspberry Pi lured me in. I am interested in using GCC to cross-compile some C code to target some embedded hardware, specifically a Cortex-M3 micro. I eventually want to have a full suite of compiler/programmer/debugger, but for now I'm starting with the compiler.
So I did a quick non-cross compile test on the Raspberry Pi 3, and all was well. Now I am researching how to cross-compile and target my microcontroller. The GCC documentation online seems to indicate that I can use plain vanilla gcc and just specify some command-line options to perform cross compilation: https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html
But searching around, I find a lot of people mentioning building a gcc cross compiler. What does this mean?
Does gcc have options to double as a cross compiler? If so, why would one desire "building" a cross compiler?
A cross-compiler is one that is created on machine type A (combination of hardware and o/s) and either runs on a different machine type B or runs on type A but produces software to be run on a different machine type B.
Thus, if you have a Linux machine using an x86_64 CPU and running on some version of Linux, but you compile GCC so that it will run on an IBM PowerPC platform running some version of AIX, you would be creating a cross-compiler.
Another variant might be having a compiler on Linux using an x86_64 CPU that runs on the Linux machine but produces code for an embedded hardware chip. You'd then benefit from the CPU power of the Linux machine while deploying to a much smaller, less powerful system that maybe has no o/s of its own, or only a minimal o/s.
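To make that concrete for the Cortex-M3 case, here is a minimal sketch, assuming a separately built/installed arm-none-eabi cross toolchain (the plain host gcc, configured for x86, will not accept these ARM-specific options):

    /*
     * blink_stub.c - bare-metal compile check for a Cortex-M3 target.
     *
     * The native gcc on an x86 host will reject -mcpu=cortex-m3; the separately
     * installed cross compiler accepts it:
     *
     *   arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -ffreestanding -nostdlib \
     *       -c -o blink_stub.o blink_stub.c
     *
     * Producing a flashable image additionally needs startup code and a linker
     * script for the specific microcontroller (omitted here).
     */
    volatile unsigned int counter;      /* placeholder for real peripheral access */

    void main_loop(void)
    {
        for (;;)
            counter++;                  /* spin forever, standing in for an LED blink */
    }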

Is it possible to generate native x86 code for ring0 in gcc?

I wonder: are there any ways to generate native x86 code with gcc (code which can be booted without any OS)?
Yes, the Linux kernel is compiled with GCC and runs in ring 0 on x86.
The question isn't well-formed. Certainly not all of the instructions needed to initialize a modern CPU from scratch can be emitted by gcc alone; you'll need to use some assembly for that. But that's sort of academic, because modern CPUs don't actually document all this stuff and instead expect your hardware manufacturer to ship firmware to do it. After firmware initialization, a modern PC leaves you either in an old-style 16-bit 8086 environment ("legacy" BIOS) or a fairly clean 32- or 64-bit (depending on your specific hardware platform) environment called "EFI Boot Services".
Operations in EFI mode are all done using C function pointers, and you can indeed build for this environment using gcc. See the gummiboot boot loader for an excellent example of working with EFI.
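To illustrate the "gcc can emit the code, but not the whole boot path" point, here is a minimal freestanding sketch; actually booting it still requires assembly startup code, a linker script, and a boot loader (for example GRUB or an EFI stub), all omitted here:

    /*
     * kmain.c - freestanding code that assumes a boot loader has already
     * initialized the CPU and that the startup code passes in the address of
     * the legacy VGA text buffer.
     *
     * Compiled without the hosted C runtime:
     *
     *   gcc -m32 -ffreestanding -fno-stack-protector -c -o kmain.o kmain.c
     */
    void kmain(volatile unsigned short *vga_text_buffer)
    {
        const char *msg = "hi";
        for (int i = 0; msg[i] != '\0'; i++) {
            /* low byte = character, high byte = colour attribute (grey on black) */
            vga_text_buffer[i] = (unsigned short)((0x07 << 8) | msg[i]);
        }
        for (;;)
            ;                           /* hang: there is no OS to return to */
    }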

cuda nvcc cross compiler

I want to compile CUDA code on mac but make it executable on Windows.
Is there a way to set up an nvcc CUDA cross compiler?
The problem is that my Windows desktop will be inaccessible for a while due to traveling, but I do not want to waste time waiting until I get back to compile the code. If I have to wait, I lose the time I could have spent debugging the code and making sure it compiles correctly. My Mac is not equipped with CUDA-capable hardware, though.
The short answer is no, it is not possible.
It is a common misconception, but nvcc isn't actually a compiler. It is a compiler driver, and it relies heavily on the host C++ compiler in order to steer compilation of both host and device code. To compile CUDA for Windows, you must use the Microsoft C++ compiler. That compiler can't be run on Linux or OS X, so cross compilation to a Windows target is not possible unless you are doing the compilation on a Windows host (32/64-bit cross compilation is possible, for example).
The other two CUDA platforms are equally incompatible, despite both requiring gcc for compilation, because the back ends are different (Linux is an ELF platform, OS X is a Mach-O platform), so even cross compilation between OS X and Linux isn't possible.
You have two choices if compilation on the OS X platform is the goal:
Install the OS X toolkit. Even though your hardware doesn't have a compatible GPU, you can still install the toolkit and compile code.
Install the Windows toolkit and Visual Studio inside a virtual Windows installation (or a physical Boot Camp installation), and compile the code inside Windows on the Mac. Again, you don't need NVIDIA-compatible hardware to do this.
If you want to run code without a CUDA GPU, there are non-commercial (GPU Ocelot) and commercial (PGI CUDA-x86) options you could investigate.

can gcc cross compile for different CPU?

Is it possible for gcc, installed on Fedora 16, to cross-compile for a different CPU, say SPARC?
I have built up a certain understanding and need an expert to correct me if I am wrong. Different operating systems differ by the system calls they use to access the kernel, or entirely by the kernel they use. Is this correct? Different kernels understand different system calls for accessing the underlying hardware. Binaries, executables, or programs are nothing but a bunch of system calls. Therefore every OS has its own executables; an executable meant to run on Windows would not run on Linux. By cross-compiling the source code of a Windows executable we can generate an executable for other OSs. The word "platform" means operating system. POSIX is a set of design standards for UNIX-like OSs.
We usually cross-compile for different OSs. But can we cross-compile for different hardware too? For example, for a microcontroller which does not have an OS?
No. You can't use the native (x86) gcc to compile programs for a different architecture. For that you need a cross-compiler gcc that is specific to that processor architecture.
Your understanding about system calls and OSs is correct. Each OS has its own set of system calls, which are used by its libraries. These libraries are ultimately translated into machine language for the processor.
Each processor architecture has its own set of instructions, known as its Instruction Set Architecture (ISA). So when a program written in a high-level language (like C) is compiled, it has to be converted into machine language drawn from that ISA. That job is done by the compiler (gcc). Any single build of the compiler targets only one processor architecture; for example, the gcc on your machine targets x86. So if you want to target a different processor from your x86 machine, you need a cross-compiler for that processor.
You would have to build such a version. That's part of the process of porting gcc to a new platform. You build a version that cross-compiles, then you cross-compile that version, then you test that version on the new platform, debug, rinse, and repeat.
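A minimal sketch of what that looks like in practice, assuming a host where a SPARC cross gcc (for example the sparc64-linux-gnu-gcc packaged by some distributions) has been installed alongside the native compiler; `gcc -dumpmachine` shows which single target any given gcc build produces code for:

    /*
     * target_check.c - the same source built for two different CPUs.
     *
     *   gcc -dumpmachine                      # e.g. x86_64-redhat-linux
     *   gcc -O2 -o demo.x86_64 target_check.c
     *
     *   sparc64-linux-gnu-gcc -dumpmachine    # sparc64-linux-gnu
     *   sparc64-linux-gnu-gcc -O2 -o demo.sparc64 target_check.c
     *
     * "file demo.sparc64" reports a SPARC ELF binary that the x86 host cannot
     * execute directly, but the SPARC machine can.
     */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(long) on this target: %zu bytes\n", sizeof(long));
        return 0;
    }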

Differences between compiling for i386 vs x86_64 in Xcode?

What are the differences between compiling a Mac app in Xcode with the Active Architecture set to i386 vs x86_64 (chosen in the drop down at the top left of the main window)? In the Build settings for the project, the Architecture options are Standard (32/64-bit Universal), 32-bit Universal, and 64-bit Intel. Practically, what do these mean and how does one decide?
Assume one is targeting OS X 10.5 and above. I see in Activity Monitor that compiling for x86_64 results in an app that uses more memory than one compiled for i386. What is the advantage? I know 64-bit is "the future", but given the higher memory usage, would it ever make sense to choose 32-bit?
32/64-bit Universal -- i386, x86_64, ppc
32-bit Universal -- i386, ppc
64-bit Intel -- x86_64 only
ppc64 is no longer supported.
x86_64 binaries are faster for a number of reasons: a faster ABI, more registers, and on many machines (most, and all new ones) the kernel is 64-bit and kernel calls are faster, etc.
While 64-bit has a bit of memory overhead directly related, generally, to how pointer-heavy your app's data structures are, keep in mind that 32-bit applications drag in the 32-bit versions of all frameworks. If yours is the only 32-bit app on the system, it is going to incur a massive amount of overhead compared to the 64-bit version.
64-bit apps also enjoy the latest and greatest Objective-C ABI: synthesized ivars, non-fragile ivars, unified C++/ObjC exceptions, zero-cost @try blocks, etc. There are also a number of optimizations that are only possible in 64-bit.
iOS apps need to run on many different architectures:
armv7: Used in the oldest iOS 7-supporting devices [32-bit]
armv7s: As used in the iPhone 5 and 5C [32-bit]
arm64: For the 64-bit ARM processor in the iPhone 5S [64-bit]
i386: For the 32-bit simulator
x86_64: Used in the 64-bit simulator
Xcode basically emulates a 32-bit or 64-bit environment based on what is set under Valid Architectures: i386 or x86_64 respectively.
Every architecture requires a different binary, and when you build an app Xcode will build the correct architecture for whatever you’re currently working with. For instance, if you’ve asked it to run in the simulator, then it’ll only build the i386 version (or x86_64 for 64-bit).
Unless you have a reason to compile for x86_64, I recommend just compiling for i386 (and PPC if you support that). Read Apple's stance on the matter:
Although 64-bit executables make it easier for you to manage large data sets (compared to memory mapping of large files in a 32-bit application), the use of 64-bit executables may raise other issues. Therefore you should transition your software to a 64-bit executable format only when the 64-bit environment offers a compelling advantage for your specific purposes.
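As a quick way to see what the selected architecture means at build time, here is a minimal sketch using the standard predefined architecture macros; in a universal build, Xcode compiles the source once per architecture, so each slice sees a different macro (the binary name in the lipo command is just a placeholder):

    /*
     * arch_report.c - prints which architecture slice it was compiled as and the
     * pointer size for that slice. Inspect a built universal binary with:
     *
     *   lipo -info ./MyApp    (MyApp is a placeholder for your binary's path)
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__)
        const char *arch = "x86_64";
    #elif defined(__i386__)
        const char *arch = "i386";
    #elif defined(__ppc__)
        const char *arch = "ppc";
    #else
        const char *arch = "other";
    #endif
        /* larger pointers are the main source of the extra memory a 64-bit app uses */
        printf("arch: %s, sizeof(void *) = %zu bytes\n", arch, sizeof(void *));
        return 0;
    }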
