Are games/programs compiled for multiple architectures? - compilation

This might be a bit broad and somewhat stupid, but it is something I've never understood.
I've never dealt with code that needs to be compiled except Java, which I guess falls somewhere in between, so here goes.
I don't understand how games and programs are compiled. Are they compiled for multiple architectures? Or are they compiled during installation (it does not look that way)?
As far as I've understood, code needs to be compiled based on the local architecture in order to make it work. Meaning that you can't compile something for AMD and "copy" the binaries and execute them on a computer running Intel (or similar).
Is there something I've misunderstood here, or do they use an approach that differs from the example I am presenting?

AMD and Intel are manufacturers. You might be thinking of amd64 (also known as x86_64) versus x86. x86_64 is, as the name suggests, based on x86.
Computers running a 64-bit x86_64 OS can normally run x86 apps, but the reverse is not true. So one possibility is to ship 32-bit x86 games, but that limits the amount of RAM that can be accessed per process. That might be OK for a game, though.
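To make the x86 versus x86_64 distinction concrete: the same source compiles to either target. A minimal sketch, assuming a Linux machine with gcc and 32-bit multilib support installed (the -m32/-m64 options are standard gcc flags):

/* bits.c - the same source built for 32-bit and 64-bit x86.
 *   gcc -m64 bits.c -o bits64
 *   gcc -m32 bits.c -o bits32    (needs the 32-bit multilib packages)
 * On an x86_64 OS both binaries run; on a 32-bit x86 OS only bits32
 * does - the reverse compatibility doesn't exist.
 */
#include <stdio.h>

int main(void) {
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}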
A bigger issue is shipping for different platforms, such as PlayStation and (Windows) PC. The PlayStation not only has a completely different CPU architecture (Cell), but also a different operating system.
In this case you can't simply cross-compile - the operating system difference stands in the way. You have to have two separate versions of the game - sharing a bunch of common code and media files (also known as assets) - one version for PC and one for PlayStation.
You could use Java to overcome that problem, in theory... but that only works when a JVM is available for all target platforms. Also, there is now fragmentation in the Java market, with e.g. Android supporting a different API from Java ME. And iPhones and iPads don't support Java at all.
Many games publishers do not in fact use Java. An exception is Mojang.

Related

What does the M1 Mac optimization process for an application mean?

You know the ARM-based M1 chips that are used in modern Mac computers. On those Macs, some software runs through a translation layer called Rosetta (Discord, Steam), some runs natively, directly on the M1 (Slack, IntelliJ), and some doesn't work either way (VirtualBox). A huge list tracking the status can be found here.
Apps that can only be run through Rosetta are not yet M1-optimized; their developers have to optimize them, and that takes some time. But what does it mean to optimize an app? What does the process look like? I'm quite sure they don't rewrite the whole application in another language (like Swift), because JetBrains was able to M1-optimize their apps quite quickly. On the other hand, Discord is not yet optimized, and the same goes for the Unity game engine (its support is in beta, though).
At bottom, it just means that the compiler's backend was configured to emit ARM64 instructions for the program instead of (or in addition to) x86-64 instructions.
This means that certain x86-64-specific instructions can no longer be used, unless equivalent ARM instructions are used instead.
This usually isn't much of a problem though, because most macOS software is typically written at a higher level of abstraction, using system-provided frameworks.
For example, using CoreImage to manipulate images abstracts you from the details of the CPU and GPU. In such cases, Apple does the heavy lifting of porting over their frameworks. All you have to do as an application developer is to check a box that says "target ARM64".
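In practice, on macOS the "emit ARM64 in addition to x86-64" case produces a universal (fat) binary. A minimal sketch, assuming Apple's clang toolchain (the -arch flags, the lipo tool, and the predefined architecture macros are all standard):

/* universal.c - one source, two architecture slices.
 * Build:   clang -arch x86_64 -arch arm64 universal.c -o universal
 * Verify:  lipo -archs universal      ->  x86_64 arm64
 * macOS picks the native slice at launch; Rosetta is only needed
 * when no arm64 slice exists.
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    puts("running the ARM64 slice");
#elif defined(__x86_64__)
    puts("running the x86-64 slice");
#endif
    return 0;
}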

Can I run a C program compiled on one ARM processor on a different ARM processor?

Let's say I compiled a C program on a Raspberry Pi; can I run that binary on, let's say, a Cubietruck?
How do I know for sure that two ARM processors are compatible? Are they all compatible with each other?
There should be a simple answer in terms of the instruction sets the processors support, but I can't find any good material on that.
There are several conditions for that:
Your executable should use the "least common denominator" of all the ARM microarchitectures you wish to support; see gcc's -march=... option for that, and the sketch after this list. Assuming you're running Linux, grep '^model' /proc/cpuinfo should give you that information for each platform.
(related) Some features may not be supported by all your target ARM cores (FPU, NEON, etc.), so be very careful with that.
You should, of course, run the same OS on all supported platforms.
You need to make sure that all supported platforms use the same ABI; ARM has a history of ABI changes, so you must take this into consideration.
If you're lucky enough to target only reasonably modern ARM platforms, you should be able to find some common ground (EABI or Hard Float ABI). Otherwise you probably have no choice but to maintain several versions of your executable.
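A sketch of the "least common denominator" idea: the gcc options below are real, but the concrete values are an assumption matching a Raspberry Pi 1 (ARMv6 with VFP, hard float), which a Cortex-A7 board such as the Cubietruck can also execute; your toolchain name and baseline may differ:

/* lcd.c - build once for the oldest ARM core you must support.
 * Cross-compile:
 *   arm-linux-gnueabihf-gcc -march=armv6 -mfpu=vfp -mfloat-abi=hard \
 *       lcd.c -o lcd
 * Inspect what a binary actually targets:
 *   readelf -A lcd          (look at the Tag_CPU_arch attribute)
 */
#include <stdio.h>

int main(void) {
    puts("built for the lowest common ARM denominator");
    return 0;
}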

Why is it that we can't write a program that will run on both Mac and PC?

Programming languages are platform independent, so why is it that we can't write a program that will run on both a PC and a Mac?
I want to develop software and I'm on a Mac, but I want it to run on a PC as well. Is it possible to develop such software without requiring the user to download a special program that will make my program compatible with their computer?
The problem with this is that most software depends on the OS to handle some tasks. Yes, most programming languages are compatible with many platforms, but the OS provides a lot of support. When software asks the OS to do something, that is called making a system call.
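To see why system calls get in the way, consider something as small as pausing for one second: the OS-level call differs. A minimal sketch (the Win32 and POSIX calls are real; conditional compilation is one common workaround, not the only one):

/* pause.c - the same behaviour needs different OS calls.
 * Compiles on Windows (MSVC/MinGW) and on Mac/Linux alike.
 */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>      /* Win32 API */
#else
#include <unistd.h>       /* POSIX API */
#endif

int main(void) {
#ifdef _WIN32
    Sleep(1000);          /* Win32: argument in milliseconds */
#else
    sleep(1);             /* POSIX: argument in seconds */
#endif
    puts("slept for one second on either platform");
    return 0;
}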
Theoretically, if you write your program in a 'high-level language' it should be portable between two operating systems.
Practically, however, the differences start at the very beginning - the API of choice, which works on one platform and not on another (for example, the Mac's BSD API is incompatible with the Win32 API) - and go all the way down to the executable format, linker, and loader. Each operating system has its own quirks.
Then comes the difference between the underlying architectures. Previously, Macs ran on PowerPC and Motorola architectures while PCs used Intel. Since Macs switched to Intel, there have been attempts inside Apple at making cross-platform executables. Most attempts have failed.
There is, however, a way around your problem. You can use a very high-level language such as Python and then distribute your Python code to your PC friends. (But remember: your PC friends need a Python interpreter on their computers for your program to run.) I have successfully ported Python programs from Mac to PC, sometimes with zero code changes and sometimes with only 2-4% of the code changed.
Simple answer: because the language per se is not enough to make an application cross-platform. The frameworks it uses must be cross-platform too, and frameworks are required for everything: handling data, displaying things, communicating with the hardware, multithreading, etc.
This can usually be done:
by choosing a complete solution like Java, which will actually run on both platforms seamlessly and even with the same binary
by using C/C++ and cross-platform libraries, so that the same program can be compiled for both platforms (keep in mind that you can't distribute the very same binary; you need to compile two in any case)
by writing the logic of your program using only standard libraries and a standard language, then attaching whatever each specific platform needs and building two different binaries (sketched below). Of course you will have to wrap the platform-specific parts so that the cross-platform part of your program doesn't know about them
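A minimal sketch of that last option (the file and function names here are hypothetical; the Win32 call in the comment is real):

/* platform.h - the interface the portable core compiles against.
 * Each OS gets its own implementation file, and the build picks one.
 */
#ifndef PLATFORM_H
#define PLATFORM_H
void platform_beep(void);    /* portable code only ever calls this */
#endif

/* --- platform_win32.c (compiled only into the Windows build) ---
 * #include "platform.h"
 * #include <windows.h>
 * void platform_beep(void) { MessageBeep(MB_OK); }
 *
 * --- platform_posix.c (compiled only into the Mac/Linux build) ---
 * #include "platform.h"
 * #include <stdio.h>
 * void platform_beep(void) { fputs("\a", stderr); }
 */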
Mind that developing a non-trivial cross-platform application without a complete solution like Java is not an easy task at all (games are something of an exception, since plenty of cross-platform game APIs exist). That is especially because most of the GUIs you can build are strictly platform-specific and rely on their own frameworks.
If you want an application to run "anywhere", your best option is a JIT-compiled language, meaning it compiles as it runs (Just In Time) for the platform it's on. The language that stands out in my mind is Java (there are others, and personally I don't like Java). However, it's not quite that simple. For example, a window on a Mac has pieces and functions that a window on a PC doesn't have, and vice versa. Other operating systems don't even have windows or anything equivalent yet still run Java - Android, for example, or countless Linux distros. And that's just a very basic example; it gets much, much hairier. Really, the best way to build an application that can be used by anyone on just about any device is to go web-based.
The lesson is that if it were that simple, a lot of people wouldn't have jobs - and it never will be that simple. Things will always progress and change, and not everyone wants to do the same thing with their OS as someone else. There's a million ways to skin a cat, and there are many more ways to implement something in an OS.
Yes, it is possible. But it is quite tricky. You need to:
Use a cross-platform language (this is the easy part; many languages run on different platforms)
Avoid using any platform-specific features (usually not too hard, but needs testing)
Ensure you have cross platform libraries for all your dependencies (hard!)
Because of the library issue in particular, there are very few options that work across platforms. Your best options are probably:
A JVM language (like Java, Scala or Clojure) - because the JVM abstracts away from platform specific features, pure Java applications and libraries will run on any platform. Java probably has the best ecosystem of cross platform libraries and tools as a result.
JavaScript - quite a good option if you don't mind running in a browser. There are lots of quirks to deal with, but JavaScript is one of the best cross-platform options because of its ubiquity.

Is the 64-bit Windows platform immature? (even when comparing 32-bit binaries with 64-bit binaries running on it)

I compiled a 64-bit binary of ioquake3 and an SDL binary to go along with it, and I noticed that on 64-bit Windows 7, while operation is relatively stable, performance is not top-notch.
An equivalent binary on 64-bit Debian runs definitely faster, and is perfectly stable.
And I'm thinking: with all the major manufacturers still predominantly shipping 32-bit binaries - the major exception I can think of is Autodesk's AutoCAD - is Windows still immature in its 64-bit libraries?
I would answer no. 64-bit support in the operating system has been around since Windows XP was released in a 64-bit edition, although pre-Vista drivers and third-party software were very much experimental. Windows is a fully capable 64-bit operating system.
However, you have to remember that Microsoft's success is built on the fact that Windows runs pretty well on any x86-based processor with any combination of other hardware, thanks to the HAL. When 64-bit XP first came out, drivers were scarce in their 64-bit form until traction was gained. As you've observed, most manufacturers still develop 32-bit applications only for Windows; Visual Studio and Microsoft Office are proof that it is not only third-party vendors. Why? Ease. Take a walk around any PC shop and you'll hear all sorts of praise for the 64-bit CPUs in today's modern laptops, but what you'll actually find is that their OSes ship as 32-bit. It's a standard, and it works on 32-bit.
Linux, by contrast, has always been a programmer's platform. Most distributions support at least i386 and x86-64; some support PPC architectures. The kernel can be built for just about every chipset known to man. Why? Linux isn't constrained by needing commercial product stability to survive, and it is often a researcher's platform. Not only that: even if the core devs had no interest in porting it, you could do it yourself. Much of the rest of the GNU system is written for portability, so it's trivial to compile elsewhere. And not writing your code in a portable manner isn't considered good etiquette.
Take a look at Flash support on Linux - a 64-bit alpha. Even Mozilla provides 32-bit-only builds, although they allow distributors to make official 64-bit versions. Skype is 32-bit only.
Basically, many software developers don't need to support 64-bit yet, or don't see it that way at least. As such, I'd say both operating systems are mature - the ecosystem around them is what differs.
While Windows is slowly making strides towards 64-bitness, one could easily say that Linux has a massive, perhaps even crushing, advantage due to the wide variety of platforms that it has been made to work on. Issues that Windows developers are only coming across now have been long solved under Linux (although of course there are Linux developers who choose to ignore these solutions; their code tends to be brittle, and sometimes non-portable).
Define "relatively stable"?
All Windows API calls end up in the same 64-bit functions if you are running x64 Windows, so there is no stability difference at all.
You should profile. Most certainly something inside the user-space application is causing the performance degradation, not the kernel.
Are you sure that you used the same optimization level when compiling the 64-bit binary? And what compiler did you use?
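"You should profile" in practice: a minimal sketch, assuming gcc and gprof on Linux (both real tools; the hot_loop function is just a stand-in for whatever your application spends its time on):

/* profme.c - minimal gprof workflow.
 *   gcc -pg profme.c -o profme     (instrument for profiling)
 *   ./profme                       (writes gmon.out on exit)
 *   gprof profme gmon.out          (shows where the time went)
 */
#include <stdio.h>

static double hot_loop(void) {
    double s = 0.0;
    for (long i = 1; i < 50000000L; ++i)
        s += 1.0 / i;               /* deliberately expensive */
    return s;
}

int main(void) {
    printf("%f\n", hot_loop());
    return 0;
}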

Should I provide an x64 build of my application?

Perhaps I'm missing a major point of the x64 platform here, but my perception was that x64 applications were only better performing than x86 versions (on an x64 OS and hardware, obviously) when large amounts of memory, large pointers, or other demanding factors were involved.
However, I've started to notice some smaller applications offering x64 versions of their installers in addition to the standard x86 versions. Since x86 runs just fine on Windows x64 using WoW64, is there any benefit to me releasing an x64-compiled version of my application? As I see it:
Pros:
Potentially higher performance (under what conditions, though?)
Cons:
Additional build to create/support
Potential bugs in x64 target that aren't present in the x86 target
Dependence on x64 versions of vendor/OS DLLs, requiring different install checklist and introducing additional troubleshooting complications
What are some compelling reasons that should cause me to reconsider adding an x64-compiled version of my app?
Another potential reason for compiling and debugging an x64 version is that it may expose hidden bugs in the x86 version. For example, this may expose improper conversions between 32-bit integers and (now) 64-bit pointers. This also positions you to support x64 in the future.
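A minimal sketch of the bug class meant here (an illustrative snippet, not from any particular codebase): a pointer squeezed into a 32-bit integer "works" on x86, where pointers are 32 bits, but truncates on x64 - and the 64-bit compiler warns about it:

/* trunc.c - a hidden x86 bug exposed by an x64 build.
 *   gcc -m64 -Wall trunc.c     warns about casts between pointers and
 *                              integers of different sizes
 */
#include <stdio.h>

int main(void) {
    int x = 42;
    unsigned int addr = (unsigned int)&x;  /* drops the high bits on x64 */
    int *p = (int *)addr;                  /* likely invalid on x64 */
    printf("%d\n", *p);                    /* fine on x86, may crash on x64 */
    return 0;
}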
However, unless your application would benefit from 64-bit integers, additional CPU registers, or a larger memory space (beyond 3.5 GB), or implements a device driver, staying with a 32-bit application is fine. All major operating systems support running both 32-bit and 64-bit applications at the same time, so there will not be a major push towards 64-bit-only applications.
BTW. Applications based on .NET automatically benefit from being executed on a 64-bit system without any code changes. No additional testing required.
Potential performance improvement relates mostly to usage of 64-bit integers (around 4 times as fast in an x64 build on my machine as in an x86 build on the same) and the fact that compilers may assume some CPU features to be universally present in CPUs supporting x64, such as SSE2, etc.; this can lead to more optimized code.
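This is the kind of code where that difference shows. A sketch (the gcc options are real; the loop is just an arbitrary 64-bit workload):

/* u64math.c - 64-bit integer arithmetic compiles to single instructions
 * on x64 but to multi-instruction sequences on 32-bit x86.
 *   gcc -O2 -m64 u64math.c -o m64
 *   gcc -O2 -m32 u64math.c -o m32     (then time the two binaries)
 */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t acc = 1;
    for (uint64_t i = 1; i < 100000000ULL; ++i)
        acc = acc * 2862933555777941757ULL + i;   /* 64-bit mul + add */
    printf("%llu\n", (unsigned long long)acc);    /* defeat dead-code opt */
    return 0;
}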
For many applications, especially small ones, it isn't too difficult to be 64-bit clean; for larger ones it's a major headache, granted. Compelling reasons are few, usually. But some platforms don't install their 32-bit support in 64-bit versions by default (I think FreeBSD needs to be explicitly told to do so, but I may err on that).
Your program will benefit if it uses a lot of long longs, and WoW64 does of course mean a minor performance hit (very minor, though, because the CPU has a compatibility mode for exactly this reason)...
Windows support for 32-bit programs will degrade in the future (albeit slowly), so I'd say in another year or two you can just about wonder why you would want to deploy a 32-bit application...
Also, a 64-bit build of your application can actually be much more optimized than a 32-bit build, because with 64-bit you are guaranteed to have quite a few features, such as SSE2.
Performance: x86_64 is faster than x86 by a small amount (that also depends on your compiler), for the reasons already stated. Also, it's easier to support really huge data sets, although of course many applications never need to go there.
Certainly on Linux and OS X you really should support x86_64; I'm typing this in a 64-bit browser on OS X, and my Linux box in the corner is also almost exclusively 64-bit. 64-bit Windows is a bit more of a question, but it is coming now (along with the drivers).
This is often based on human factors rather than objective technical reasoning. 64-bit is the latest and greatest, so it must be better than 32-bit, and if customers want it, the customer is always right. I have spoken with Windows users who said their goal was to make it so that when they view their process list in Windows, *32 does not appear next to any of their apps.
Oftentimes this is also fueled by confusion about compatibility: people have 64-bit operating systems and just want to make sure the software will work on their computers. Expecting average people to understand the technical demarcation line of 32-bit processes on a 64-bit OS is unrealistic; unless it is explicitly stated on the packaging, it can be a point of confusion or concern for a customer purchasing new software. Oftentimes you will see 64-bit mentioned for exactly this reason, and typically it really means only 64-bit compatibility.
Now, there are a few 64-bit applications (Flash Player and Google Earth top my list) that can't come soon enough.
Here's one reason: integrating with 64-bit apps, like writing a shell extension. A 64-bit process cannot load a 32-bit DLL directly, so you have to make yours 64-bit as well.
One factor that hasn't been mentioned yet: WoW64 Is Now an Optional Feature for Server Core. Only an issue if your application needs to run on server systems, of course.
Similarly, Windows PE, Windows RE, etc., do not include Wow64.
