Wrap a 32-bit binary library into a 64-bit one on OS X

I have a 32-bit static library (.a) which I need to use in a 64-bit project. It's a library from one of my company's legacy projects, and the source code has been lost. So I wonder: is it possible to wrap a 32-bit library into a 64-bit one without the source code?

There is no way to mix 32-bit and 64-bit code in the same process. This is an architectural limitation of x86_64. Depending on your needs, you may be able to package your library as an XPC service and then communicate with it directly via XPC.

Related

Build output of SpiderMonkey under Windows

I built SpiderMonkey 60 under Windows (VS2017) according to the documentation, using
../configure --enable-nspr-build followed by mozmake.
In the output folder (dist\bin) I could see 5 DLLs created:
mozglue.dll, mozjs-60.dll, nspr4.dll, plc4.dll, plds4.dll
In order to run the SpiderMonkey Hello World sample, I linked my C++ program with mozjs-60.lib and had to copy the following DLLs to my program's location: mozglue.dll, mozjs-60.dll, nspr4.dll.
It seems that plc4.dll and plds4.dll are not needed for the program to run and execute scripts.
I could not find any documentation about the purpose of each of the DLLs. Do I need all 5 DLLs? What is the purpose of each one?
Quoting from the archived NSPR release notes for an old version:
The plc (Portable Library C) library is a separate library from the
core nspr. You do not need to use plc if you just want to use the core
nspr functions. The plc library currently contains thread-safe string
functions and functions for processing command-line options.
The plds (Portable Library Data Structures) library supports data
structures such as arenas and hash tables. It is important to note
that services of plds are not thread-safe. To use these services in a
multi-threaded environment, clients have to implement their own
thread-safe access, by acquiring locks/monitors, for example.
It sounds like they are unused unless specifically loaded by your application.
It seems it would be safe to not distribute these if you don't need them.

Cross-compile on a Linux host for various targets

I have a set of more or less portable C/C++ sources sitting on a Linux development host that I would like to be able to:
compile for 32- and 64-bit Linux targets
cross-compile for 32- and 64-bit Windows targets
cross-compile for 32- and 64-bit Mac targets
and, ideally, without any runtime dependencies on emulation DLLs like cygwin1.dll or the MinGW runtime, though I could use them if there's no other choice. If I have to use them, I'd prefer to statically link their functionality into my code.
The target binary that is desired is:
a shared library (.so) for Linux and Mac targets, and
a DLL for Windows.
I have no idea how to build a cross-compiler (and the associated toolchain) from scratch. I'm hearing that pre-built cross-compiler toolchains are available for various host-and-target combinations, but I don't know where to find them, or even how to use them without running into runtime crashes/coredumps later due to pointer model subtleties (LP64, LLP64, etc), specifying wrong or inadequate compiler switches, other misconfiguration, etc.
I've so far been unable to find relevant and complete information on the above, and what little I have managed to find is scattered across so many bits and pieces that I'm not even sure whether all of it is complete or even correct (i.e., applies fully to my case, no more, no less).
I'm not a compilers expert, just a regular user. I would appreciate information on achieving the above compilation goals.
I would like to cross-compile a library for Mac OS X on Linux, and I am considering imcross. The instructions on the site are simple, but every time you set up a cross-compiling environment you have to fix a lot of things, so I wouldn't expect it to be straightforward. The website notes some limitations of the project, but it is the best I have come across.
Since this is not a priority for me right now (I have other things to do before tackling this task), I haven't set up the cross environment yet. I am going to do that in a few days' time.

fast windows marshaling between 32-bit and 64-bit processes

Currently the app structure is as follows:
Our C# GUI
Our managed C++ library
3rd-party unmanaged 32-bit C++ library
What we need is to make our application 64-bit but leave the 3rd-party library 32-bit (there is no 64-bit version). The issue is that this library decodes large arrays (10-100 MB) all the time, so marshaling overhead is a concern.
A few options we have thought of:
Wrap the 3rd-party library in a Managed C++ ActiveX component and call it from C# - simple, but we expect heavy marshaling penalties
Use Boost.Interprocess on both sides - seems more complex, but probably faster
Any suggestions on which way to choose for the sake of execution speed? Are there other ways?

How does the same source code generate binaries for different platforms?

Many multi-platform applications seem to have common source code. How do builds generate platform specific binaries?
Is it possible to build, say, a Windows binary on Linux or Mac?
It's possible if you have an appropriate cross-compiler and libraries. For example, many programs available on both Linux and Windows use the MinGW toolchain for their Windows builds, which includes a library that emulates POSIX functions using Win32 functions.
The platform a binary is compiled to run on depends on the compiler, and generally one can tell the compiler to target another system. To that end, yes, it is generally possible to compile for a system other than the one you are running on, though you are usually better off compiling for a target system on that system itself.

Regular DLL using: MFC Shared vs MFC statically linked

When we create a DLL using Visual Studio (VC8 or VC9), we get the option to
create a Regular DLL
using MFC as a shared DLL
or
using MFC as a static library
How are they different? Which one is advisable to use?
A static library means the code you use from the library is included in your executable. Because of this, you don't need to ship the library or require the end user to have it on their machine. However this bloats the size of your executable and ties you to that library version, so if you need to update just the library, you have to ship a new executable.
A shared library calls the library at the time it needs it (runtime) to execute the code, but it requires the user to have it (usually a specific or minimum version) installed on their machine. You can also distribute the required version of the library with your application if you need to.
As for which is better, I don't know. I'm not a Windows C++ or MFC programmer so I couldn't say. On my Linux servers, the applications I write are generally server-side and thus use shared libraries.
It depends on how your application is to be used, distributed, updated, how often the MFC library changes, if it's generally available on user's PCs etc.
[I think I got my answer now]
If you link to the MFC DLL dynamically, your code will need the Microsoft Foundation Class (MFC) DLLs (specifically the version your code requires) installed along with your application or DLL on the user's end. So your installation package would contain:
Your application/DLL and supporting files
All MFC DLLs
This makes the installation package bigger and may also take the user more time to download your setup.
If you link to MFC as a static library, your code will work even without the MFC DLLs present on the user's end. The reason is simple: all the MFC code you referred to is linked into your application or DLL. This means the MFC code used in your app/DLL becomes part of your binary; however, your app/DLL will be a little bigger.
Another consideration is servicing your application.
If you ship the Microsoft redistributable, dynamically linking against its libraries, and Microsoft later "fixes" some vital flaw in a DLL, they patch the DLL on your customers' machines through Windows Update. If you statically link, you will need to update all your customers directly.
Of course, if you are concerned that a patched DLL might break your application (because you rely on unspecified behavior), you may want to handle the servicing (and testing) directly with your customer.