Hi everyone :) I'm a newbie to developing applications for the Mac. My questions are about the different OS architectures on the Mac, and I'm quite confused by this. Kindly bear with me if my questions are very basic. Thank you all :)
I know that 10.6 (Snow Leopard) has 32-bit support. I would like to know if there is 32-bit support in 10.7 (Lion)?
I have a 64-bit machine. I want a 32-bit 10.7 on it. How would I do that?
I have a 32-bit iMac running 10.6.8. I have built an application on it; the application uses a user-developed library which is also 32-bit. Now I carry this application over to another Mac with a 64-bit processor running 10.7 (Lion). Will I be able to execute the same application as-is on 10.7 (Lion)? I was not able to do so.
OS X uses a binary format that can support multiple architectures (e.g. 32- and 64-bit Intel, as well as PowerPC) in a single executable or library. Most of the binaries and libraries in Lion are dual-architecture, 32- and 64-bit Intel. So, yes, there is 32-bit support in Lion.
There is no such thing as 32-bit Lion; it's a dual-architecture OS. It can boot the kernel in either 32- or 64-bit mode, and run programs in 32- or 64-bit mode. Unlike most other OSes, it can even run programs in 64-bit mode under a 32-bit kernel. Whenever you run a program in Lion, it checks what architectures the program includes and what the CPU is capable of, and picks the "best" mode to run that program in.
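To see this concretely, here is a minimal sketch (my illustration, not part of the original answer): build it as a dual-architecture binary and it reports which slice the OS picked at launch.

    /* whichmode.c -- reports which architecture slice is running.
       Build fat: gcc -arch i386 -arch x86_64 -o whichmode whichmode.c */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__)
        puts("running the 64-bit (x86_64) slice");
    #elif defined(__i386__)
        puts("running the 32-bit (i386) slice");
    #else
        puts("running some other architecture");
    #endif
        return 0;
    }

Run plainly on a 64-bit-capable Mac, it prints the 64-bit line; arch -arch i386 ./whichmode forces the 32-bit slice of the same file.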
There's no obvious reason this shouldn't work. If you were trying to use a 32-bit-only library from a program running in 64-bit mode, or a 64-bit-only library from a program running in 32-bit mode, it would fail. But if the program is 32-bit only, it'll obviously run in that mode; your user-developed library is 32-bit, and all of the libraries supplied with the OS are 32- and 64-bit.
There are a few things that might cause your 32-bit program to fail under Lion. First, does it depend on any libraries other than the one you mentioned and those supplied with the OS (e.g. libraries compiled locally by something like MacPorts, Fink, or Homebrew)? If so, those libraries might've been compiled 64-bit only. IMO libraries should always be compiled for all relevant architectures to avoid this sort of problem, but that's not the default.
Another possible source of trouble is if your program isn't really a standalone program, but something that loads into another program (e.g. a plugin of some sort, a screensaver, etc.). In that case, your plugin needs to support whatever mode the program that loads it is running in. You can actually get this issue with Java programs, since the Java runtime starts in 64-bit mode (when the CPU supports it) on Lion.
Telling us more about your program and what specific error you get would probably help a lot...
I am currently taking a class on Assembly Language and Computer Architecture. We're programming in MASM for x86 processors. I have a MacBook Air, so of course I have to run Windows in a virtual machine to program in MASM for our assignments.
What I'm confused about: We're learning about, and programming for x86 architecture. When I looked up my Macbook Air's processor, it seemed to be in the x86 family. Considering that, why doesn't MASM work with Mac OS X?
Furthermore, if assembly language communicates directly with hardware, why does merely installing the Windows OS (or running it through a VM) on Apple hardware suddenly allow me to program in MASM?
Thanks,
Ian
[EDIT for clarification: My understanding -- please tell me if I'm wrong -- is that assembly language is as "low as you can go." I.e., it's pre-operating system, and provides instructions directly to the hardware itself. Thus, I don't understand why an assembly language for x86 architecture doesn't work on ALL x86 machines, regardless of OS]
Programs are made up of more than just the raw machine code. The executable needs to have a special format that the OS can understand, so it can load and run the code. Also, the code expects a certain environment, such as libraries and system calls (along with the appropriate calling conventions).
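As a rough illustration (my sketch, in C rather than assembly for brevity): even identical source logic ends up calling different OS facilities on each platform, so the compiled binary is tied to one environment.

    /* pause.c -- the same logic needs different OS services per platform. */
    #include <stdio.h>
    #ifdef _WIN32
    #include <windows.h>   /* Windows API: Sleep(milliseconds) */
    #else
    #include <unistd.h>    /* POSIX API: sleep(seconds) */
    #endif

    int main(void)
    {
        puts("pausing for one second...");
    #ifdef _WIN32
        Sleep(1000);       /* resolved against kernel32.dll in a PE executable */
    #else
        sleep(1);          /* resolved against libc in a Mach-O or ELF executable */
    #endif
        puts("done");
        return 0;
    }

The Windows build is a PE file importing kernel32.dll; the Mac build is a Mach-O file linking against libSystem. Neither loader understands the other's format, even though both machines run x86.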
To compile and run your assembly program you need to assemble it first, that is, run it through MASM in this case. However, MASM itself is a Windows executable. It is in the executable format for Windows, and it uses libraries and operating system functions accordingly. As such, you can't run it directly on Mac OS. Afterwards, you typically also need to link your code, which has the same issues. The next problem is with the program itself. MASM (and the rest of the toolchain) by default also targets Windows (or DOS), and so the created program has the corresponding format.
You can theoretically create a program intended to run on Mac OS using Windows and MASM. This is called cross-compiling in general. If your toolchain does not support the required Mac format, you will need to create everything by hand. You obviously also need to write your program such that it expects the Mac environment. For example, you can't use DOS interrupts or Windows libraries.
Since the architecture is the same, you don't need to virtualize the CPU. You can get away with emulating just the environment. Examples of this are Wine, which provides a Windows-compatible environment on other systems, and Cygwin, which provides a Unix-like environment on Windows.
A very rough analogy: there are human languages that use the same alphabet, but you still need to translate. There are also languages that do not even use the same alphabet, or don't even have letters. You will need to do more work in these cases.
I started learning Go yesterday :) and I have a question about the compiled file.
Let's assume that I compile my project. It generates an executable file in the /bin folder.
Now my question is: since the file was compiled on a Mac with an Intel-based CPU, does it need to be compiled separately for other OSes and CPU architectures, such as AMD, ARM, etc., if I want to publish it to the public?
I guess this should not be a problem if I'm using Go for my backend, since I run it on a server. However, what happens if I publish my executable, let's say on AWS, with lots of instances that automatically increase/decrease based on load? Is that a problem?
Edit:
This is a nice solution for those who are looking for a Go cross-compiling tool: https://github.com/mitchellh/gox
The answer to the first question is yes. The current implementations of Go produce a native binary, so you will probably need a different one for Linux x86 (32-bit), Linux x64 (64-bit), and Linux ARM. You will probably need a different one for Mac OS X also. You should be able to run the 32-bit executable on a 64-bit system as long as any libraries you depend on are available in 32-bit form on that system, so you might be able to skip making a 64-bit executable.
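As a concrete sketch (assuming a Go toolchain with built-in cross-compiling; myapp is a placeholder name), you can build all the needed binaries from one Mac by setting GOOS and GOARCH:

    GOOS=linux  GOARCH=386   go build -o myapp-linux-386    # Linux x86 (32-bit)
    GOOS=linux  GOARCH=amd64 go build -o myapp-linux-amd64  # Linux x64 (64-bit)
    GOOS=linux  GOARCH=arm   go build -o myapp-linux-arm    # Linux ARM
    GOOS=darwin GOARCH=amd64 go build -o myapp-darwin-amd64 # Mac OS X

Tools like the gox linked above essentially automate loops over these combinations.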
In the future, there may be other implementations of Go that compile for a virtual machine (such as JVM or .NET), in which case you wouldn't need to compile multiple versions for different architectures. Your question is more about existing Go implementations than the language itself.
I don't know anything about AWS, but I suggest you ask that as a separate question.
I am confused about which target platform setting to choose so that my application can run on all computers, regardless of processor type. I tried Any CPU, but it did not work on a few computers.
Thanks
x86 works on a 32-bit OS as well as on a 64-bit OS, and the same goes for AnyCPU. So what is the difference?
The difference lies in the way the JIT compiler emits your application's code on the target computer.
When you use the x86 platform, the code emitted by the JIT is always 32-bit, even on 64-bit systems.
This can be a problem if you don't have the correct 32-bit drivers/DLLs needed by your application installed (Microsoft.ACE.OLEDB is one of these problematic libraries).
Conversely, when you use the AnyCPU platform, the JIT emits 32-bit code on 32-bit systems and 64-bit code on 64-bit systems. This is more problematic than x86 because you need the correct drivers for both bitnesses. So I suspect that your app fails on some systems because the libraries it uses are missing in the bitness those systems require.
When in doubt, I think it is better to use the x86 platform unless you have very specific requirements for 64-bit systems.
I decided to start learning assembly a while ago, and so I started with 16-bit assembly, using FASM.
However, I recently got a brand-new computer running Windows 7 64-bit, and now none of the .COM files FASM assembles work any more. They give an error message saying that the .COM file is not compatible with 64-bit Windows.
32-bit assembly programs still work; however, I'd rather start with 16-bit and work my way up...
Is it possible to run a 16-bit program on Windows 7? Or is there a specific way to compile them? Or should I give up and skip to 32-bit instead?
The reason you can't use 16-bit assembly is that the 16-bit subsystem has been removed from all 64-bit versions of Windows.
The only way to remedy this is to install something like DOSBox, or a virtual machine package such as VirtualBox and then install FreeDOS into that. That way, you get true DOS anyway. (NTVDM is not true DOS.)
Personally, would I encourage writing 16-bit assembly for DOS? No. I'd use 32- or even 64-bit assembly, the reason being that each operating system exposes a different set of function calls and calling conventions (the ABI). The ABI for 64-bit Linux applications is different from the one for 32-bit applications; I am not sure whether that's the case with Windows, but the meaning of the interrupts is almost certainly different.
Also, you've got all sorts of things to consider with 16-bit assembly, like the memory model in use. A .COM file uses the tiny memory model: your entire heap and stack, along with the code, must fit into a single 64K segment. Other DOS memory models juggle multiple 64K segments to reach the 640K of conventional memory, which makes you wonder how anything ever worked, really.
My advice would be to just write 32-bit code. While it might initially seem like it would make sense to learn how to write 16-bit code, then "graduate" to 32-bit code, I'd say in reality rather the opposite is true: writing 32-bit code is actually easier because quite a few arbitrary architectural constraints (e.g., on what you can use as a base register) are basically gone in 32-bit code.
For that matter, I'd consider it open to substantial question whether there's ever a real reason to write 16-bit x86 code at all. For most practical purposes, it's a dead platform -- for desktop machines it's seriously obsolete, and for embedded machines, you're more likely to see things like ARMs or Microchip PICs. Unless you have a specific target in mind and know for sure that it's going to be a 16-bit x86, I'd probably forget that it existed, just like most of the rest of the world has.
32-bit Windows 7 and older include / enable NTVDM by default. On 32-bit Win8+, you can enable it in Windows Features.
On 64-bit Windows (or any other 64-bit OS), you need an emulator or full virtualization.
A kernel in long mode can't use vm86 mode to provide a virtual 8086 real-mode environment. This is a limitation of the AMD64 / x86-64 architecture.
With a 64-bit kernel running, the only way for your CPU to natively run in 16-bit mode is 16-bit protected mode (yes this exists; no, nobody uses it, and AFAIK mainstream OSes don't provide a way to use it). Or for the kernel to switch the CPU out of long mode back to legacy mode, but 64-bit kernels don't do that.
But actually, with hardware virtualization (VirtualBox, Hyper-V or whatever using Intel VT-x or AMD SVM), a 64-bit kernel can be the hypervisor for an entire virtual machine, whether that VM is running in 16-bit real mode or running a 32-bit OS (like Windows 98 or 2000) which can in turn use vm86 mode to run 16-bit real-mode executables.
Especially on a 64-bit kernel, it's usually easier to just emulate a 16-bit PC entirely (like DOSBox does), instead of using HW virtualization to run normal instructions natively while trapping direct hardware access (in/out, loads/stores to VGA memory, etc.) and int instructions that make DOS system calls / BIOS calls / whatever.
I initially thought that 64-bit instructions would not work on OS X 10.5.
I wrote a little test program and compiled it with gcc -m64.
I used long long for my 64-bit integers.
The assembly instructions used look like they are 64-bit, e.g. imulq and movq 8(%rbp),%rax.
It seems to work.
I am only using printf to display the 64-bit values, using %lld.
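For reference, a minimal test along those lines (my reconstruction, not the original program) could be:

    /* test64.c -- build with: gcc -m64 -o test64 test64.c */
    #include <stdio.h>

    int main(void)
    {
        long long a = 123456789012345LL;  /* does not fit in 32 bits */
        long long b = 1000LL;
        printf("a * b = %lld\n", a * b);  /* a 64-bit multiply under -m64 */
        printf("sizeof(long long) = %zu bytes\n", sizeof(long long));
        return 0;
    }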
Is this the expected behaviour?
Are there any gotchas that would cause this to fail?
Am I allowed to ask multiple questions in a question?
Does this work on other OSes?
Just to make this completely clear, here is the situation for 32- and 64-bit executables on OS X:
Both 32- and 64-bit user space executables can be run on both 32- and 64-bit kernels in OS X 10.6, without emulation. On 10.4 and 10.5, both 32- and 64-bit executables can run on the 32-bit kernel. (This is not true on Windows.)
The user space system libraries and frameworks are built 32/64-bit fat on 10.5 and 10.6. You can link against them normally, whether you're building for 32-bit, 64-bit, or both. A few libraries (basically the POSIX layer) are also built 32/64-bit fat on 10.4, but many of them are not.
On 10.6, the build tools produce 64-bit executables by default. On 10.5 and earlier, the default is 32-bit.
On 10.6, executables that are built fat will run the 64-bit side by default. On 10.5 and earlier, the 32-bit side is executed by default.
You can always manually specify which slice of a fat executable to use with the arch command, e.g. arch -arch i386 someCommandToRunThatIWantToRunIn32BitMode. For application bundles, you can either launch them from the command line, or there is a preference if you "get info" on the application.
OS X and Linux use the LP64 model for 64-bit executables. Pointers and long are 64 bits wide, int is still 32 bits, and long long is still 64 bits. (Windows uses the LLP64 model instead -- long is 32 bits wide in 64-bit Windows.)
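To see the data model concretely, here's a small check (a sketch; compare the output of the -m32 and -m64 builds):

    /* sizes.c -- gcc -m32 sizes.c && ./a.out   vs   gcc -m64 sizes.c && ./a.out */
    #include <stdio.h>

    int main(void)
    {
        printf("int:       %zu bytes\n", sizeof(int));       /* 4 in all of these models */
        printf("long:      %zu bytes\n", sizeof(long));      /* 4 under -m32 and LLP64; 8 under LP64 */
        printf("long long: %zu bytes\n", sizeof(long long)); /* 8 everywhere */
        printf("void *:    %zu bytes\n", sizeof(void *));    /* 4 or 8, matching the mode */
        return 0;
    }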
Mac OS X 10.5 supports 64-bit user-land applications pretty well. In fact, Xcode runs in 64-bit in 10.5 on a compatible architecture.
It's only in 10.6 that the built-in applications (Finder, Safari, frameworks, daemons, etc.) also come in 64-bit versions.
Meta: I don't like to see answers deleted. I guess this has been discussed somewhere.
Anyway, KennyTM and the other kind soul got me started, and although one answer was deleted, I appreciated your efforts.
It looks like this is expected behaviour on the Mac, and it even seems to work on 32-bit Linux as well (although I have not tested extensively).
Yep. GCC behaves differently (at least in my limited observation) in 32-bit (-m32) and 64-bit (-m64) modes. In 32-bit mode, I was able to access variable arguments as if they were an array on the stack. In 64-bit mode this just does not work (the 64-bit ABI passes the first few arguments in registers rather than on the stack).
I have learned that you MUST access variadic parameters using va_list, as defined in stdarg.h, because that works in both modes.
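For reference, a minimal portable variadic function using stdarg.h (a sketch, not my actual collector code):

    /* sum64.c -- sums count long long arguments using va_list. */
    #include <stdarg.h>
    #include <stdio.h>

    static long long sum64(int count, ...)
    {
        va_list ap;
        long long total = 0;
        int i;
        va_start(ap, count);                 /* begin reading after count */
        for (i = 0; i < count; i++)
            total += va_arg(ap, long long);  /* portable in both -m32 and -m64 */
        va_end(ap);
        return total;
    }

    int main(void)
    {
        printf("%lld\n", sum64(3, 1LL, 2LL, 3LL));  /* prints 6 */
        return 0;
    }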
Now I have a command-line program that runs and passes all of my test cases in 32-bit and 64-bit modes on Mac OS X.
The program implements a linked-list garbage collector that sweeps 16-byte-aligned malloc-allocated objects from a global list, scanning the machine registers and the stack as well -- actually, there are extra registers in 64-bit mode, so I still have a bit of work to do.
Objects are collections of 32- or 64-bit words that link together to form LISP/Scheme-like data structures.
In summary, it is a complex program that does a lot of messing with pointers, and it works the same in both 32- and 64-bit modes.
Asking multiple questions does not get you all the answers you might want.
It seems to work, as I wrote, on Linux.
Again, thank you for helping me with this.