I'm developing an Electron application which depends on a bunch of modules that use native (C/C++) code.
Due to the way Electron includes a patched Node.js, these native modules have to be recompiled from source (everything is OSS) when we prepare a release.
One problem we have is that for a small number of users those native modules fail to load with pretty unspecific error messages. The modules don't have any further dependencies and we do validate at startup that all files shipped with the application are there.
So the only reason I can think of for why the modules don't load is that some anti-virus software may interfere with the loading of those modules (they are DLLs, just with a different file extension).
Since the modules were recompiled, even if the prebuilt ones had been signed, the ones we ship are not.
So my question is: does it make sense for us to sign all those DLLs we ship with our own certificate, even though they aren't developed by us? Are there potential (legal) ramifications for using a certificate like that?
Is my theory about AV interfering with the loading of DLLs even plausible?
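For context, if signing does turn out to be the sensible route, I'd expect the mechanics to look roughly like this with the Windows SDK's signtool (the certificate file, password placeholder, module path, and timestamp URL are all just examples):
rem sign each rebuilt native module we ship (file names here are placeholders)
signtool sign /fd SHA256 /f our-release-cert.pfx /p <password> /tr http://timestamp.example.com /td SHA256 resources\native\mymodule.dll
rem check the result afterwards
signtool verify /pa resources\native\mymodule.dll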
I am working on a Qt app with some library dependencies, for which I will have to make an installer.
From everything I've read, it seems the best way is to make an app bundle with all library dependencies, and the required Qt frameworks, inside myapp.app/Contents/Frameworks.
There are other applications created in parallel... that will be deployed on Mac as well. They will have the same library dependencies and will be built using the same Qt version.
In that case it makes sense for the libraries and Qt to be installed OUTSIDE the bundle... so both (all) apps have access without having multiple copies of the same libraries.
Does that seem reasonable, and doable with the Mac OS X concept of self-contained bundles? And how would I create such an installer that places libraries outside the app bundle?
The simplest method of deploying Qt for OS X is to use the macdeployqt command line tool. You have correctly identified that the normal method is to place the frameworks inside the app bundle, but then multiple apps will each carry their own copies of the frameworks.
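For reference, a minimal macdeployqt run looks roughly like this (the app name is a placeholder):
# copies the Qt frameworks and plugins the app links against into MyApp.app/Contents/Frameworks
macdeployqt MyApp.app
# optionally also produce a .dmg for distribution
macdeployqt MyApp.app -dmg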
It is reasonable to suggest moving the Qt frameworks to a separate, external location and linking to that instead. However, you will need to manage the frameworks carefully, especially when it comes to providing updates, and be aware that if the frameworks are removed or altered, all your applications will fail to load. This, however, is the same for any framework-dependent application.
The thing to consider is where to place the framework. Normally, external frameworks reside in /Library/Frameworks, but if we all start to use that for Qt, problems may occur when your app is installed and another developer installs their app's frameworks with a different version of the libraries.
Apple defines various 'key directories' for applications. Initially, the most likely location would appear to be the Application Support directory, but the documentation states that this is for:
any type of file that supports the app but is not required for the app to run
This location is often used for support files, such as templates for the user to select.
If your application is to be deployed via the Mac App Store, I wouldn't be surprised if it were rejected for using this location. However, if you're not using the App Store, then you could deploy the frameworks here.
If the Mac App Store is your method of deployment, then /Library/Frameworks is probably the only acceptable place for the Qt frameworks to reside, with the possibility of the problems I've mentioned above.
Alternatively, consider just how many applications you're developing and whether it is really an issue to bundle the frameworks multiple times, weighed against the advantages that bundling brings, such as allowing the user to cleanly remove the application and all of its dependencies, and reducing the risk of the frameworks being altered or removed accidentally.
If you choose to move them externally, you can refer to the answer to this question, which comprehensively explains how to make installer packages, after having updated your binary dependencies on the frameworks with the install_name_tool, as outlined here.
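As a rough sketch of that re-linking step (the app name and framework paths shown are only examples of what the install names might look like in your build), you would inspect the current install names with otool and rewrite them with install_name_tool before building the installer package:
# see which framework paths the app binary currently expects
otool -L MyApp.app/Contents/MacOS/MyApp
# repoint one dependency at the external copy (repeat for each Qt framework the app uses)
install_name_tool -change \
    @executable_path/../Frameworks/QtCore.framework/Versions/4/QtCore \
    /Library/Frameworks/QtCore.framework/Versions/4/QtCore \
    MyApp.app/Contents/MacOS/MyApp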
I'm still not 100% sure about the framework linking process, and from what I've seen nobody here has asked a similar question before, perhaps because it's a silly one, but I'll give it a go anyway.
In my current Xcode project, I'm using a custom framework, say example.framework. At the moment, as far as I'm aware, in order for the program to function with the framework, I need to have it either in /Library/Frameworks, or I need to have it copied into the bundle in a copy files build phase.
Would anybody know about adding a framework to a project in a way that it gets compiled into the executable, so I don't have to include the raw framework with the app? I'd rather not share the whole framework...
Thank you in advance! Any suggestions are also welcome!
A Mac OS X framework is basically a shared library, meaning it's a separate binary.
Basically, when your main executable is launched, the OS will load the framework/dylib into memory, and map the symbols, so your main executable can access them.
Note that a framework/dylib (bundled into the application or not), does not need to contain the header files, as those are only needed at compilation time.
With Xcode, you can actually decide whether or not to include the header files, when you are copying the framework to its installation directory (see your build phases).
If you don't copy header files, people won't be able to use your framework/dylib (unless they reverse-engineer it, of course).
If you still think a framework is not suitable for your needs, you may want to create a static library instead.
A static library is an archive of object files (usually a .a file) that is "included" in your final binary at link time.
This way, you only have a single binary file, containing the code from the library and from your project.
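As a small sketch of how that works outside of Xcode (the file and library names here are made up):
# compile the library sources into object files
clang -c foo.c bar.c
# archive the object files into a static library
ar rcs libmystuff.a foo.o bar.o
# link the archive into the final executable; the needed code is copied in at link time
clang main.c libmystuff.a -o myapp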
I understand that for non-iOS targets, using shared libraries can lead to lower memory usage, and also that some companies distribute a library and headers (like Superpin), and a static library allows them to not distribute the source of their product. But outside of those, what are the reasons you'd want to use a static library? I use git for all of my projects, and I usually add external libraries (open source ones) as submodules. This means they take up disk space locally, but they do not clutter up the repo. Also, since iOS doesn't support shared libraries, the benefits of building libraries to promote code reuse seem diminished.
Basically, is there any reason outside of selling closed source libraries that it makes sense to build/use static libraries for iOS?
organization, reuse, and easy integration into other programs.
if you have code which is used by multiple apps or targets multiple platforms, then without a library you will have to maintain its build in each app. with a library, you let the library maintainer set up the build correctly, then you just link to the result (if it's developed internally, then you'll want to add it as a dependency too).
it's like DRY, but for projects.
libraries become more useful as projects become more complex. you should try to identify which parts of your program (functions, class hierarchies, etc.) are reusable outside of your app's context, and put them in a library for easy reuse - pattern recognition code, for example.
once your codebase has hundreds or thousands of files, you will want to minimize what you use, and you will not want to maintain the dependencies manually for each project.
"Also since iOS doesn't support shared libraries, the benefits of building libraries to promote code reuse seem diminished."
There's no reason you can't build your own static library to use across multiple projects.
Other than for that purpose and the ones you mentioned, I don't think there's much else.
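If you do go that route, a rough command-line sketch of building a reusable static library for both device and simulator (the target name, configuration, and output paths are placeholders and depend on your project settings) would be:
# build the static library target for device and for simulator
xcodebuild -target MyLib -configuration Release -sdk iphoneos build
xcodebuild -target MyLib -configuration Release -sdk iphonesimulator build
# merge both builds into one universal archive that other projects can link against
lipo -create build/Release-iphoneos/libMyLib.a build/Release-iphonesimulator/libMyLib.a -output libMyLib.a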
Static libraries allow you to have truly standalone executables. Since all library code is actually, physically present in the executable, you don't have to worry about the exec failing to run because there's a too-old version of some library, or a too-new one, or it's completely missing, etc. And you don't have to worry about your app suddenly breaking because some library got replaced. It cuts down on dependencies and lets your app be more encapsulated.
I am building a win32 executable. The compiler is the latest version of MinGW. The library dependencies are GLUT and libpng.
I first tested on a windows 7 machine, and had to obtain libpng3.dll and freeglut32.dll. However, on XP, I had to (in addition) acquire zlib1.dll.
The XP machine was a VM with a fresh install, so I suspect a fresh win7 machine may also be lacking zlib1.
My question is: how do I go about finding out which DLLs I need to distribute? How do I know, a priori, which dynamic libraries are needed for my program to run on a particular system?

I suppose this is what installer programs are for... I'm guessing that what the installer does is look through the system to find out which dependencies are unsatisfied, and then provide them. So if I were to distribute my program I could check whether the user's machine already has zlib1.dll, and not install it if it's already found in the system directory.

However, I never found a document that said to me specifically, "libpng requires zlib", so until such point as I tested the executable on a machine lacking zlib, I was unaware of this dependency. How can I create my dependency list without having a fresh install of each version of every operating system to test on?
One idea I have is to decompile the executable, or through some method examine the linking process, to find all the libraries that are being linked at runtime. The problem now becomes figuring out which of these are supposed to already be there, and which of them I could be expected to provide in the distribution.
edit: Okay, I looked, and the installation of libpng I downloaded did provide zlib1.dll inside its bin directory. So not including it is pretty much my fault. In any case, Daniel's answer is definitive.
Dependency Walker shows all the dependencies of your program.
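If you'd rather stay within the MinGW toolchain, you can get a similar list by dumping the executable's import table with objdump from binutils (the executable name is a placeholder; use findstr instead of grep if you're running this from plain cmd.exe):
# list the DLLs named in the PE import table
objdump -p myprogram.exe | grep "DLL Name"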
The correct answer to this question, in my view, is to start at the source rather than to reverse engineer the solution with Dependency Walker, awesome and useful tool though it undoubtedly is.
The problem with Dependency Walker is that it only tells you what one particular run of the program requires on the OS on which you run it. If you have any dynamic loading dependencies in your app then you would only pick those up if you made sure you profiled the app with Dep. Walker and forced it through those dynamic loads.
My preferred approach to this problem is to start with your own source code and analyse and understand what it depends upon. It's often easy enough to do so because you know it well.
You need to understand what the deployment requirements of your compiler are. You usually have the option of linking either statically or dynamically to the C++ runtime. Obviously a dynamic link results in a deployment requirement.
You will also likely link to 3rd party code. One example would be Windows components. These typically don't need deployment, you can take them as already being in place. Sometimes that's not true, e.g. GDI+ on Windows 2000.
Sometimes you will link statically to 3rd party code (again easy), but if you link dynamically then that implies a deployment requirement.
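With MinGW specifically, the compiler runtime dependency can usually be removed by linking it statically; a minimal sketch (flag support varies between MinGW/GCC versions):
# statically link libgcc and libstdc++ so no compiler runtime DLLs need to be shipped
g++ main.o -o myprogram.exe -static-libgcc -static-libstdc++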
I have a custom framework that, following the advice in Apple's Framework Programming Guide >> Installing your framework I install in /Library/Frameworks. I do this by adding a Run Script build phase with the following script:
cp -R build/Debug/MyFramework.framework /Library/Frameworks
In my projects I then link against /Library/Frameworks/MyFramework and import it in my classes like so:
#import <MyFramework/MyFramework.h>
This works very well, except that I always see the following message in my debugger console:
Loading program into debugger…
sharedlibrary apply-load-rules all
warning: Unable to read symbols for "/Users/elisevanlooij/Library/Frameworks/MyFramework.framework/Versions/A/MyFramework" (file not found).
warning: Unable to read symbols from "MyFramework" (not yet mapped into memory).
Program loaded.
Apparently, the compiler first looks in /Users/elisevanlooij/Library/Frameworks, can't find MyFramework, then looks in /Library/Frameworks, does find MyFramework, and continues on its merry way. So far this has been more of an annoyance than a real problem, but when running unit tests, gdb stops on the (file not found) and refuses to continue. I have solved the problem by adding an extra line to the Run Script phase:
cp -R build/Debug/MyFramework.framework ~/Library/Frameworks
but it feels like sellotaping something that shouldn't be broken in the first place. How can I fix this?
In the past months, I've learned a lot more about frameworks, so I'm rewriting this answer. Please note that I'm talking about installing a framework as part of the development workflow.
The preferred location for installing a public framework (i.e. a framework that will be used by more than one of your apps or bundles) is /Library/Frameworks, because "frameworks in this location are discovered automatically by the compiler at compile time and the dynamic linker at runtime" [Framework Programming Guide]. The most elegant way to do this is in the Deployment section of the Build settings.
As you work on your framework, there are times when you do want to update the framework when you do a build, and times when you don't. For that reason, I change the Deployment settings only in the Release Configuration. So:
Double-click on the framework target to bring up the Target info window and switch to the Build tab.
Select Release in the Configuration selectbox.
Scroll down to the Deployment section and enter the following values:
Deployment Location = YES (click the checkbox)
Installation Build Products Location = /
Installation Directory = /Library/Frameworks
The Installation Build Products Location serves as the root of the installation. Its default value is some /tmp directory: if you don't change it to the system root, you'll never see your installed framework since it's hiding away in /tmp.
Now you can work on your framework as you like in the Debug configuration without upsetting your other projects, and when you are ready to publish, all you need to do is switch to Release and do a Build.
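If you prefer to script that release step, the same settings can be passed to xcodebuild as overrides; this is only a sketch, the target name is a placeholder, and writing into /Library/Frameworks may require sudo:
xcodebuild -target MyFramework -configuration Release install \
    DSTROOT=/ INSTALL_PATH=/Library/Frameworks DEPLOYMENT_LOCATION=YES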
Xcode 4 Warning
Since switching to Xcode 4, I've experienced a number of problems with my custom framework. Mostly, they are linking warnings in GDB that do not really interfere with the usefulness of the framework, except when running the built-in unit tests. I submitted a technical support ticket to Apple a week ago, and they are still looking into it. When I get a working solution I will update this answer, since the question has proven quite popular (1 kViews and counting).
There's not much reason to put a framework into Library/Frameworks, and it's a lot of work: You'd need to either do it for the user in an Installer package, which is a tremendous hassle to create and maintain, or have installation code in your app (which could only install to ~/L/F, unless you expend the time and effort necessary to make your app capable of installing to /L/F with root powers).
Much more common is what Apple calls a “private framework”. You'll bundle this into your application bundle.
Even frameworks intended for general use by any applications (e.g., Sparkle, Growl) are, in practice, built to be used as private frameworks, simply because the “right” way of installing a single copy of the framework to Library/Frameworks is such a hassle.
The conventional way to do this is to have your framework project and its clients share a common build directory. Xcode searches for framework headers and links against framework binaries in the build folder first, before any other location. So an app project that compiles against the framework's headers and links against its binary will pick up the most recently built copy, rather than whatever's installed.
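Assuming you build from the command line, one way to set that up is to point both projects at the same build products directory via SYMROOT (the project names and path are just examples; in the IDE the same thing is normally done by choosing a shared build location in Xcode's preferences):
# build the framework into a shared build folder
xcodebuild -project MyFramework.xcodeproj -configuration Debug SYMROOT="$HOME/SharedBuild" build
# build the app with the same SYMROOT so it picks up the freshly built framework
xcodebuild -project MyApp.xcodeproj -configuration Debug SYMROOT="$HOME/SharedBuild" build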
You can then remove the cp -r and instead use the Install Location build setting to place your build product in the final location, using xcodebuild install DSTROOT=/ at the command line. But you'll only need to do this when you're finished, not every time you rebuild the framework.
Naturally, when you distribute your framework it should be installed in /Library/Frameworks; however, it seems odd to me that you're doing that with the test/debug versions of your framework.
My first instinct would be to install test versions under ~/Library, as it just makes setting up your test and debug environment that much simpler. If possible, I would expect the debug/test framework to be located in the build tree of the version I'm testing, in which case it's installed as a Private Framework for testing purposes. That would make your life much simpler when it comes time to deal with multiple versions of your framework.
Ultimately, it doesn't matter where the framework is located as long as your application or test suite loads the correct version. Choose the location that makes testing/debugging/development easiest.