How to package shell programs into an AppImage?

I have made an AppImage via:
linuxdeploy --appdir AppDir --icon-file icon.png --desktop-file desktop.desktop --executable myExecutable --output appimage
which runs fine. However, the program I've packaged (myExecutable) makes shell calls (say to shellProgram1, shellProgram2, ...) at run-time to make use of various programs that aren't necessarily on every distro.
Question: Does linuxdeploy (or some other AppImage utility) provide an easy way to package these programs into the AppImage, so that when myExecutable calls them at run-time, they are guaranteed to be available?

To achieve this you need to deploy all the binaries that may not be present on every distro into the AppDir and set the PATH environment variable so they can be found at runtime.
With linuxdeploy you have to copy the files into the AppDir manually and create a wrapper for the main binary that sets the PATH. Something like this:
#!/bin/bash
export PATH="$APPDIR/usr/bin:$PATH"
exec "$APPDIR/usr/bin/my_program" "$@"
You can also use appimage-builder, which creates such a wrapper for you. In the project's examples folder you can find several recipes that can serve as inspiration.
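Putting the answer together for the original command, a rough sketch of the manual linuxdeploy route could look like the following. The helper names shellProgram1/shellProgram2 come from the question, the wrapper file name is arbitrary, and whether your linuxdeploy version accepts repeated --executable flags (and how you wire the wrapper into the desktop file's Exec= entry) is worth double-checking:

# Install the PATH-setting wrapper from above into the AppDir and point the
# desktop file's Exec= entry at it (the wrapper name is arbitrary).
install -D -m 755 myExecutable-wrapper.sh AppDir/usr/bin/myExecutable-wrapper.sh

# Deploy the main binary plus the helper programs it shells out to; each extra
# --executable entry is copied into AppDir/usr/bin together with its library
# dependencies.
linuxdeploy --appdir AppDir --icon-file icon.png --desktop-file desktop.desktop \
    --executable myExecutable \
    --executable "$(command -v shellProgram1)" \
    --executable "$(command -v shellProgram2)" \
    --output appimage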

Related

AppImage problems

I would like to release a program I wrote in Ruby, so I need to pack Ruby into an AppImage file and send it to my client's Ubuntu PC first.
So I created the folder "ruby-img", copied my compiled Ruby from the "/app/ruby" folder to "ruby-img/app/ruby", and then made a link with "ln -r -s app/ruby/bin/ruby usr/bin/." inside the "ruby-img" folder.
Then I created the desktop file, put a png file into "ruby-img", and used appimagetool to create ruby-x86_64.AppImage. Sadly it cannot run; AFAIK the ruby AppImage is still using the /app/ruby/lib folder to look for some of Ruby's libraries, rather than the corresponding folder under "ruby-img/app/ruby/lib".
So I tried recompiling Ruby with --prefix=/tmp/ruby or --prefix=/usr/local/ruby, copied it to "ruby-img/usr/local/ruby" or "ruby-img/tmp/ruby", made links as above, and repacked the AppImage, but the ruby AppImage still does not work...
Any ideas that could help?
AppImages consist of a filesystem image with all the content you provide, plus a small executable stub that mounts the AppImage filesystem and then runs the AppRun executable found inside.
With that knowledge, it is of utmost importance that you provide an AppRun executable in the root directory along with the .desktop and icon files. I suggest you do not create AppRun yourself. Use the precompiled one from https://github.com/AppImage/AppImageKit/releases/tag/continuous (do not forget to rename it to exactly 'AppRun').
Now when this AppRun gets invoked, it performs a few checks, cds into the /usr directory, and tries to start the executable specified in the .desktop file. Check its source code and you will see that it also sets a few environment variables.
Therefore it is best to provide your entry point as /usr/bin/ruby.sh and register that in the desktop file. Remember that when /usr/bin/ruby.sh gets called, the current working directory is /usr. So ruby.sh can set further environment variables such as LD_LIBRARY_PATH so that the libraries you configured for /usr/lib will actually be loaded.
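A minimal sketch of such a ruby.sh, assuming the Ruby tree was copied under usr/ inside the AppDir; the exact library paths and the use of RUBYLIB are assumptions to adapt to your actual layout:

#!/bin/bash
# Minimal entry-point sketch (usr/bin/ruby.sh). Resolve the mounted AppImage's
# usr/ directory relative to this script so no absolute build paths are needed.
HERE="$(dirname "$(readlink -f "$0")")/.."
# Make the bundled shared libraries and Ruby standard library visible.
export LD_LIBRARY_PATH="$HERE/lib:$LD_LIBRARY_PATH"
export RUBYLIB="$HERE/lib/ruby:$RUBYLIB"   # assumption: adjust to your real layout
exec "$HERE/bin/ruby" "$@"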
With that I hope you have at least as much success as I had.

What is the difference between activating an anaconda environment and running its python executable directly?

I have set up multiple Python environments using Anaconda.
Usually, to run a script "manually", I would open a command line and then type:
activate my-env
python path/to/my/script.py
Fine.
Now I am trying to run a script automatically using a scheduler, and I was wondering what the difference is between:
writing a batch file which activates the environment and then executes the script (like in the snippet above), and
calling the python executable from the environment directly (within the envs/my-env/ directory), like below:
/path/to/envs/my-env/python.exe path/to/my/script.py
Both seem to work fine. Is there any difference?
I don't claim to be an expert but here's my 2 cents.
For small scripts, no, there isn't a difference.
You should notice a difference when calling external modules / packages. conda activate alters the system path to change how the command shell searches for the appropriate capabilities.
If you supply a full path to the interpreter and a full path to an isolated script, the shell doesn't need to do a PATH lookup at all, since explicit paths take priority. This means you could end up in a situation where the interpreter can see the script but cannot see its dependencies.
If you follow the conda activate process, and the environment is correctly packaged, then the shell will be able to trace any additional resources.
EDIT: The idea behind this is portability. If an admin has been careful in setting up a system, then scripts should have the appropriate visibility - i.e. see everything in its environment plus everything in the main system installation.
It's possible to full-path every call to an interpreter and a script or package location, but then what happens when you need to move it to another machine? You would need to spend a lot of time setting everything up exactly as it was before. On the other hand, you can follow the package process and the system path will trace everything for you.
Simply check the PATH variable in your environment. After conda activation it has been extended by:
\Anaconda3;
\Anaconda3\Library\mingw-w64\bin;
\Anaconda3\Library\usr\bin;
\Anaconda3\Library\bin;
\Anaconda3\Scripts;
\Anaconda3\bin;
This doesn't make much of a difference if you are just using the standard library in your code. However, if you rely on external packages like pandas, it is a prerequisite for those modules to be found.
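To make the comparison concrete, here is a rough sketch of the two scheduler variants (all paths are placeholders; the question is on Windows, where the equivalents would be activate.bat and python.exe, but the distinction is the same):

# Variant 1: activate first, so the environment's script and library
# directories are added to PATH before the interpreter starts.
source /path/to/anaconda3/etc/profile.d/conda.sh
conda activate my-env
python /path/to/my/script.py

# Variant 2: call the environment's interpreter directly. Imports of installed
# packages still resolve through the environment's site-packages, but tools or
# native libraries that rely on the extra PATH entries may not be found.
/path/to/anaconda3/envs/my-env/bin/python /path/to/my/script.py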

Consistent builds / remove personal information from binaries

I've now realized that Go saves absolute paths to source code in binaries for the purpose of printing stack traces and the like. I don't want to completely remove this information; however, it also means that every developer building the same program will produce an executable with a different checksum. Before I try to reimplement the build using chroot or something like that: isn't there any way to tell Go not to use absolute paths for this purpose?
I know it doesn't directly address what you asked, but @JimB's suggestion does indicate a class of solutions to the problem you seem to be having.
One of the easier ones (I think) would be to have your developers install Docker and create an alias so that the go command runs:
docker run --rm --tty --volume $GOPATH:/go golang:1.7.1(-$YOUR_PLATFORM) go
Then, every build (and test and run) thinks it's using a GOPATH of /go and your developers' checksums won't disagree based on that.
isn't there any way to tell Go not to use absolute paths for this purpose?
Nowadays there is: -trimpath.
https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependencies explains:
-trimpath
remove all file system paths from the resulting executable.
Instead of absolute file system paths, the recorded file names
will begin either a module path@version (when using modules),
or a plain import path (when using the standard library, or GOPATH).
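For example, a build invocation using the flag might look like this (the output name and package path are placeholders):

# Build without embedding absolute source paths; recorded paths become
# module- or import-path-relative.
go build -trimpath -o myprogram ./cmd/myprogram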

Deb file from sh script

I'm trying to establish whether it is possible to create a deb package for the following app:
http://openfoam.org/download/4-0-source/
It uses an Allmake shell script which contains various standard shell commands and wmake commands to compile the source. wmake appears to be specific to this application but does call make:
http://www.cfdsupport.com/OpenFOAM-Training-by-CFD-Support/node25.html
https://github.com/OpenFOAM/OpenFOAM-2.1.x/blob/master/wmake/wmake
Is it possible to call the shell script from within a debian/rules file? Or is there a better way of doing this, if it is indeed possible?
Any assistance is much appreciated.
Indeed, the general idea of the debian/rules file is to run whatever commands are required to configure and install the upstream package into a location suitable for the dpkg toolchain.
Modern debhelper-based debian/rules files are typically extremely terse, because most typical packages adhere to build conventions for which good, very simple canned helpers are available, but traditional, more complex and explicit rules files are well-documented in older Debian packaging documentation.
Basically, the debian/rules file is a Makefile; it should have a binary target with the commands to build the upstream package into the Debian package root.
https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#rules is probably useful as a starting point - unless your needs are really arcane, the dh defaults will mostly make sense, and it allows you to easily override the parts which don't.
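A rough sketch of such a debhelper-based debian/rules, assuming the upstream Allmake script performs the build; everything below the override names (which are standard debhelper targets) is illustrative rather than taken from the actual OpenFOAM packaging, and recipe lines must be indented with a literal tab:

#!/usr/bin/make -f
# debian/rules - minimal debhelper sketch; rules files are Makefiles.

%:
	dh $@

# Run the upstream build script instead of the build system dh would guess.
override_dh_auto_build:
	./Allmake

# Copy the build output into the package staging tree; paths are placeholders.
override_dh_auto_install:
	mkdir -p debian/openfoam/opt/openfoam
	cp -a bin lib etc debian/openfoam/opt/openfoam/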

How do I use CPACK_INSTALL_COMMANDS?

I'm creating a Linux tgz self-extracting installer using CPack and I'd like the installer to run a script or sequence of commands after all files have been installed. CPack documentation contains the following guidance:
CPACK_INSTALL_COMMANDS: Extra commands to install components.
I set this variable in my CMakeLists.txt file and I see it set in the resulting CPackConfig.cmake file, but the commands I embed in this variable do not appear anywhere in the final .sh install script. What am I missing?
You're not missing anything, that's simply not how the CPACK_INSTALL_COMMANDS variable works.
On a typical project, CPack does a "make install" into a temporary location, in order to build the final installer based on the "make install" tree. The CPACK_INSTALL_COMMANDS variable is meant to be set for projects that would rather run some other command sequence, instead of the typical "make install" in order to produce the install tree.
So, CPack should be running your commands as it generates the package. It will not run your commands on the end user's machine when they run the generated installation script...
There are per-generator ways of running installed executables and/or scripts at the end of the end user installation, but it will require some customization on your part. In this case, I'd recommend attempting to override the CPack.STGZ_Header.sh.in input file that is used when CPack generates the STGZ self-extracting script. Customize that file and add your calls to the bottom of it, above the line:
exit 0
To override the file, provide your own copy of it in your source tree, perhaps in a ${CMAKE_CURRENT_SOURCE_DIR}/CMake directory, and then in your CMakeLists.txt file, add:
set(CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/CMake ${CMAKE_MODULE_PATH})
(Actually, as I'm writing this, I'm wondering if that's sufficient, or if the module path also needs to be set at the time that CPack runs... Try this, and let us know if your customization gets used by CPack or not. If not, I'll investigate a bit further and add some more advice here.)
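As a rough illustration, the kind of post-install hook you might append to your copy of CPack.STGZ_Header.sh.in, just above the exit 0 line, could look like this. The script name post_install.sh and its location are placeholders, and the toplevel variable is assumed to hold the extraction prefix defined earlier in the header; verify both against the copy you took from your CMake installation:

# Hypothetical post-install hook added to the customized CPack.STGZ_Header.sh.in.
# "toplevel" is assumed to be the extraction prefix; post_install.sh is a
# placeholder for a script your project installs into its bin/ directory.
if [ -x "${toplevel}/bin/post_install.sh" ]; then
    "${toplevel}/bin/post_install.sh"
fi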
