I am currently taking a class on Assembly Language and Computer Architecture. We're programming in MASM for x86 processors. I have a MacBook Air, so of course I have to run Windows in a virtual machine to program in MASM for our assignments.
What I'm confused about: we're learning about, and programming for, the x86 architecture. When I looked up my MacBook Air's processor, it seemed to be in the x86 family. Considering that, why doesn't MASM work with Mac OS X?
Furthermore, if assembly language communicates directly with hardware, why does merely installing the Windows OS (or running it through a VM) on Apple hardware suddenly allow me to program in MASM?
Thanks,
Ian
[EDIT for clarification: My understanding -- please tell me if I'm wrong -- is that assembly language is as "low as you can go," i.e. it's pre-operating-system and provides instructions directly to the hardware itself. Thus, I don't understand why an assembly language for the x86 architecture doesn't work on ALL x86 machines, regardless of OS.]
Programs are made up of more than just the raw machine code. The executable needs to have a special format that the OS can understand, so it can load and run the code. Also, the code expects a certain environment, such as libraries and system calls (along with the appropriate calling conventions).
To compile and run your assembly program you need to assemble it first, that is, run it through MASM in this case. However, MASM itself is a Windows executable: it is in the executable format for Windows, and it uses libraries and operating-system functions accordingly. As such, you can't run it directly on Mac OS. Afterwards, you typically also need to link your code, which has the same issues. The next problem is the program itself: MASM (and the rest of the toolchain) by default also targets Windows (or DOS), so the created program gets the corresponding format.
You can, in theory, create a program intended to run on Mac OS using Windows and MASM; in general this is called cross-compiling. If your toolchain does not support the required Mac format, you will need to create everything by hand. You obviously also need to write your program so that it expects the Mac environment; for example, you can't use DOS interrupts or Windows libraries.
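To make the environment point concrete, here is a minimal sketch in C (not MASM, but the idea carries over; it assumes a gcc/clang or MSVC toolchain): the same source, compiled for the same x86 CPU, ends up calling entirely different operating-system services just to write a line of text, which is one reason a Windows binary will not run on Mac OS even though the instructions are x86 in both cases.

    /* Minimal sketch: the same "print a line" operation needs different OS
     * services, even though both builds emit instructions from the same x86 ISA.
     * The _WIN32 branch calls the Windows API; the other branch uses the POSIX
     * write() system call available on Mac OS and Linux. */
    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <unistd.h>
    #endif

    int main(void) {
        const char msg[] = "hello from the same source\n";
    #ifdef _WIN32
        DWORD written;
        WriteConsoleA(GetStdHandle(STD_OUTPUT_HANDLE), msg, sizeof msg - 1,
                      &written, NULL);
    #else
        write(1, msg, sizeof msg - 1);   /* file descriptor 1 is standard output */
    #endif
        return 0;
    }

An assembly-language version would look even more different, since the calling conventions and the way you reach the OS (a call into kernel32.dll versus a system call) differ as well.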
Since the architecture is the same, you don't need to virtualize the CPU. You can get away with emulating just the environment. Examples of this are Wine, which provides the Windows environment on other operating systems, and Cygwin, which provides a Unix-like environment on Windows.
A very rough analogy: there are human languages that use the same alphabet, but you still need to translate. There are also languages that do not even use the same alphabet, or don't even have letters. You will need to do more work in these cases.
I'm trying to understand this whole "compiling" topic in more detail than all those "what is a compiler (doing)?" articles out there.
One big question for me is processor- and OS-platform dependency when compiling directly to machine code (e.g. C). Let me formulate concrete questions that need to be resolved in order to get my picture clearer.
Say I compile my C code via gcc on a Linux distribution:
Can I run the resulting executable on any other Linux Distribution?
Is that executable bound to the processor platform it was compiled on? Do I need to search for, e.g., a PowerPC gcc when I am running an x86 distro?
Can I somehow execute this on Windows? I know the executable formats differ, but the binary code is the same, isn't it?
So in the end my question comes down to: is compiling about targeting a specific OS platform, a processor platform, or both?
Thanks!
Compiling targets both the OS and the architecture.
The OS needs to be targeted because:
The format of what is an "executable" file is different among operating systems.
Programs call the operating system even for common tasks like writing to the console, reading from a file, or terminating cleanly (standards like POSIX mitigate OS dependencies by defining a common layer between the program and the OS).
The CPU architecture must be targeted because the CPU instructions are different, even among different generations of the "same architecture".
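As a small illustration (a sketch; the macro names assume a gcc/clang or MSVC toolchain, since they are compiler-specific): the compiler already knows which OS and which architecture it is targeting, and the same C source produces a different binary for every combination.

    #include <stdio.h>

    /* Each of these preprocessor macros is predefined by the compiler for the
     * platform it is targeting, so the output depends on the build target. */
    int main(void) {
    #if defined(_WIN32)
        puts("built for Windows");
    #elif defined(__APPLE__)
        puts("built for Mac OS X");
    #elif defined(__linux__)
        puts("built for Linux");
    #endif

    #if defined(__x86_64__) || defined(_M_X64)
        puts("targeting x86-64");
    #elif defined(__i386__) || defined(_M_IX86)
        puts("targeting 32-bit x86");
    #elif defined(__powerpc__)
        puts("targeting PowerPC");
    #elif defined(__arm__) || defined(__aarch64__)
        puts("targeting ARM");
    #endif
        return 0;
    }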
Can I run the resulting executable on any other Linux Distribution?
In general, yes, but in specific cases it may depend on the type of program (e.g. GUI) and the services assumed to be available on the OS.
Is that executable bound to the processor platform it was compiled on? Do I need to search for, e.g., a PowerPC gcc when I am running an x86 distro?
I don't understand what you mean by "search," but yes, you can cross-compile from, say, x86 targeting PPC.
Can I somehow execute this on Windows? I know the executable formats differ, but the binary code is the same, isn't it?
These days Windows has Ubuntu integration (the Windows Subsystem for Linux), which allows for some exceptions, but the general answer is no, because of the above.
Since I just started to learn Golang yesterday :) I have a question about the compiled file.
Let's assume that I compile my project. It generates an executable file in the /bin folder.
Now my question is: since the file has been compiled on a Mac with an Intel-based CPU, should it also be compiled for other OSes and other CPU architectures, such as AMD, ARM, etc., if I want to publish it to the public?
I guess this should not be a problem if I'm using Go for my backend, since I run it on a server. However, what happens if I publish my executable, let's say on AWS, with lots of instances that automatically increase/decrease based on load? Is that a problem?
Edit:
This is a nice solution for those who are looking for a Go cross-compiling tool: https://github.com/mitchellh/gox
The answer to the first question is yes. The current implementations of Go produce a native binary, so you will probably need a different one for Linux x86 (32-bit), Linux x64 (64-bit), and Linux ARM. You will probably need a different one for Mac OS X also. You should be able to run the 32-bit executable on a 64-bit system as long as any libraries you depend on are available in 32-bit form on that system, so you might be able to skip making a 64-bit executable.
In the future, there may be other implementations of Go that compile for a virtual machine (such as JVM or .NET), in which case you wouldn't need to compile multiple versions for different architectures. Your question is more about existing Go implementations than the language itself.
I don't know anything about AWS, but I suggest you ask that as a separate question.
Hi everyone :) I'm a newbie at developing applications for the Mac. My questions are regarding the different OS architectures on the Mac, and I am greatly confused by this. Kindly bear with me if my questions are very basic. Thank you all :)
I know that there is 32-bit support in 10.6 (Snow Leopard). I would like to know if there is 32-bit support in 10.7 (Lion)?
I have a 64-bit machine. I want a 32-bit 10.7 on it. How would I do so?
I have a 32-bit iMac and I have 10.6.8 on it. I have built an application on it; the application uses a user-developed library which is also 32-bit. Now I carry this application over to another Mac which has a 64-bit processor with 10.7 (Lion). Will I be able to execute the same application as-is on 10.7 (Lion)? I was not able to do so.
OS X uses a binary format that can support multiple architectures (e.g. 32- and 64-bit Intel, as well as PowerPC, etc) in a single executable or library. Most of the binaries and libraries in Lion are dual-architecture 32&64-bit Intel. So, yes, there is 32-bit support in Lion.
There is no such thing as 32-bit Lion; it's a dual-architecture OS. It can boot the kernel in either 32- or 64-bit mode, and run programs in 32- or 64-bit mode. Unlike most other OSes, it can even run programs in 64-bit mode under a 32-bit kernel. Whenever you run a program in Lion, it checks what architectures the program includes and what the CPU is capable of, and picks the "best" mode to run that program in.
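If you want to see which mode a given process actually ended up running in, a minimal sketch (plain C, nothing Lion-specific) is to look at the pointer size at run time:

    #include <stdio.h>

    /* Prints whether this process is running as a 32-bit or 64-bit process.
     * For a dual-architecture binary, the answer depends on which slice the
     * OS chose to execute. */
    int main(void) {
        printf("pointer size: %zu bytes -> %s process\n",
               sizeof(void *), sizeof(void *) == 8 ? "64-bit" : "32-bit");
        return 0;
    }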
There's no obvious reason this shouldn't work. If you were trying to use a 32-bit-only library from a program that was running in 64-bit mode, or a 64-bit-only library from a program running in 32-bit mode, it would fail. But if the program is 32-bit only, it'll obviously run in that mode; your user-developed library is 32-bit, and all of the libraries supplied with the OS are 32+64-bit.
There are a few things that might cause your 32-bit program to fail under Lion. First, does it depend on any libraries other than the one you mentioned and those supplied with the OS (e.g. libraries compiled locally by something like MacPorts, Fink, or Homebrew)? If so, those libraries might've been compiled 64-bit only. IMO libraries should always be compiled for all relevant architectures to avoid this sort of problem, but that's not the default.
Another possible source of trouble is if your program isn't really a program, but something that loads into another program (e.g. a plugin of some sort, a screensaver, etc.). In that case, your plugin needs to support whatever mode the program that loads it is running in. You can actually hit this issue with Java programs, since the Java runtime will start in 64-bit mode (when the CPU supports it) on Lion.
Telling us more about your program and what specific error you get would probably help a lot...
Is it possible for gcc, installed on Fedora 16, to cross-compile for a different CPU, say SPARC?
I have built up a certain understanding and need some expert to correct me if I am wrong. Different operating systems differ in the system calls they use to access the kernel, or entirely in the kernel they use. Is this correct? Different kernels understand different system calls for accessing the underlying hardware. Binaries, executables, or programs are nothing but a bunch of system calls. Therefore every OS has its own executable format; an executable meant to run on Windows would not run on Linux. By cross-compiling the source code of a Windows executable we can generate an executable for other OSes. The word PLATFORM means operating system. POSIX is a set of design standards for UNIX-like OSes.
We usually cross-compile for different OSes. But can we cross-compile for different hardware too? For example, for a microcontroller which does not have an OS?
No. You can't use the native (x86) gcc to compile program files for a different architecture. For that you require a cross-compiler gcc that is specific to that processor architecture.
Your understanding about system calls and the OS is correct. Each OS has its own set of system calls, which are used by its libraries. These libraries will in the end be translated into machine language for the processor.
Each processor architecture has its own set of instructions, known as its Instruction Set Architecture (ISA). So when a program written in a high-level language (like C) is compiled, it has to be converted into machine language according to that ISA. This job is done by the compiler (gcc). A given compiler build is specific to one processor architecture; for example, your gcc targets the x86 processor. So if you want a compiler for a different processor on your x86 machine, you should go for a cross-compiler for that processor.
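To make that concrete, here is a sketch: compile the same trivial C file once with the native compiler and once with a SPARC cross-compiler, and compare the generated assembly. The cross-compiler name below (sparc64-linux-gnu-gcc) is an assumption; the actual name depends on which cross toolchain you install.

    /* add.c -- identical source for both targets */
    int add(int a, int b) {
        return a + b;
    }

    /* Build commands, shown as comments:
     *   gcc -S add.c -o add_x86.s                      (native x86 assembly)
     *   sparc64-linux-gnu-gcc -S add.c -o add_sparc.s  (SPARC assembly)
     * The two .s files contain completely different instruction sequences
     * generated from the same C source, which is exactly what "targeting an
     * ISA" means. */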
You would have to build such a version. That's part of the process of porting gcc to a new platform. You build a version that cross-compiles, then you cross-compile that version, then you test that version on the new platform, debug, rinse, and repeat.
Is it possible to run COFF executable files on UNIX, or ELF executable files on Windows? And what would be the steps to be able to run either file type on Windows and UNIX? I'm just curious.
To answer your question properly, it is relevant to review what ELF, COFF, and PE are. These binary formats are essentially just containers that give directions to the operating system about how to execute the raw CPU instructions contained in the file. They are very much like audio/video containers such as MKV, WMV, and OGG. Support for an executable format is either in the operating system or not. Microsoft Windows has consistently not provided any support for COFF or ELF, until recently. With Windows 10, Microsoft provides indirect support for ELF through the Windows Subsystem for Linux, which implements Linux-compatible system routines on top of the Windows kernel and runs ELF binaries almost as if they were running independently of MS Windows.
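To see that these formats really are just differently labelled containers, here is a minimal sketch in C that reads the first few bytes (the "magic number") of a file and reports which container it appears to be; this is essentially the first check an OS loader performs. (The Mach-O case is included only for completeness; classic COFF magic numbers vary by machine, e.g. 0x014c for i386, so they are not checked here.)

    #include <stdio.h>
    #include <string.h>

    /* Identify an executable's container format from its magic bytes. */
    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }
        unsigned char magic[4] = {0};
        if (fread(magic, 1, sizeof magic, f) != sizeof magic) {
            fprintf(stderr, "file too short\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        if (memcmp(magic, "\x7f" "ELF", 4) == 0)
            puts("ELF (Linux and most modern Unices)");
        else if (magic[0] == 'M' && magic[1] == 'Z')
            puts("MZ/PE (DOS and Windows)");
        else if (magic[0] == 0xcf && magic[1] == 0xfa &&
                 magic[2] == 0xed && magic[3] == 0xfe)
            puts("Mach-O 64-bit (Mac OS X)");
        else
            puts("unrecognized format");
        return 0;
    }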
The alternative to this subsystem approach would have been for Microsoft to rewrite the majority of the Linux API in a completely compatible form; their choice also solves another compatibility issue: the API. The "A" stands for Application and the "I" for Interface, but the API as an interface is mainly just a set of executable routines and environment assumptions. Access to the filesystem and the most basic system routines is provided by the Windows kernel, while everything else is provided by the Linux subsystem. This way Windows can not only run ELF-formatted executables, it can run the most popular ELF executables that are already built to run against the Linux API.
The reverse, the other half of the question -- running PE (most Microsoft Windows executables) on Linux -- is possible as well. There are two runtime wrappers that can run MSIL (virtual-machine applications) and Win32 (normal CPU applications). Because the Linux kernel can be extended (the binfmt_misc mechanism) to recognize a particular byte format and then run an appropriate wrapper program, in effect the kernel supports PE and potentially more executable container formats. Therefore, Linux can run some PE programs either in the Mono runtime (.NET/C# applications) or in the Wine runtime (Win32 C/C++).
To install the Windows Subsystem for Linux environment you can follow the instructions provided on the Microsoft Developer Network. To summarize:
Turn on Developer Mode: Settings | Update & Security | For Developers | Check the Developer Mode radio button
From the start menu, open “Turn Windows Features on or off”
Scroll down and check the “Windows Subsystem for Linux (Beta)” feature
Hit okay and reboot (required step)
Once rebooted, open a PowerShell/command prompt and run “Bash” and follow the simple prompts to accept Canonical’s license and kick-off the download of the Ubuntu image
After download has completed, you’ll be able to start “Bash on Ubuntu on Windows” from the Start menu
Be aware that this method only works on Windows 10 and is still limited to a text-mode console, plus a Win32 port of Xorg such as vcXsrv for anything graphical. Cygwin and MSYS2 are not able to run ELF binaries, but they make it possible to port and run the same applications that are normally ELF binaries on a Linux system.
To actually run executables and have them do useful stuff, you need to worry about the API, not just the executable file format. On a Linux machine with WINE installed, you can run Windows .EXE files from the command line and they do the same thing that they do on Windows.
The other way around is not really possible; however, if you install Cygwin on a Windows machine and then rebuild the application from source with the Cygwin compilers, you will get an executable that runs on Windows and does the same thing that the Linux executable does on Linux. Lots of standard Linux tools are already ported and in the Cygwin repository, including things like the X Window System and GIMP.
http://lbw.sourceforge.net/ works better than LINE.
LOW was another project for doing the same thing, but that one was the least functional.
EDIT: http://atratus.org/ seems to do the same as well, without the need to have Interix/SFU.
COFF was originally introduced by UNIX (around System V or thereabouts), so yes, some UNIX systems probably still support the COFF format. It's been deprecated on Linux for a while at least, and presumably most other Unices have also deprecated or outright dropped support.
Windows ELF support is a bit more iffy - almost certainly not there without some deep trickery. You should be more specific about what you're trying to do here...