How to run ./configure from another ./configure - bash

I was wondering: is it possible to run a configure script from another one? My situation is that my own project uses autotools for configure and make, so a configure script is run before any build (as usual). Now I want to add another library to my project which uses the same build principle (its configure script must be run before building). Instead of making my future users run two configure scripts, is there a way to automate this (but without using a shell script - bash, perl, etc.)?
Can this be done, and if so, how?
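For what it's worth, autoconf has a built-in mechanism for exactly this situation: AC_CONFIG_SUBDIRS, which makes the top-level configure run the configure script of a bundled subproject automatically. A minimal configure.ac sketch (the subdirectory name libfoo is a placeholder, not from the question):

AC_INIT([myproject], [1.0])
AM_INIT_AUTOMAKE([foreign])
# run libfoo/configure automatically whenever the top-level ./configure runs
AC_CONFIG_SUBDIRS([libfoo])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

With automake, listing the subdirectory in SUBDIRS in Makefile.am then makes make recurse into it as well.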

Related

How to package shell programs into an AppImage?

I have made an AppImage via:
linuxdeploy --appdir AppDir --icon-file icon.png --desktop-file desktop.desktop --executable myExecutable --output appimage
which runs fine. However, the program I've packaged (myExecutable) makes shell calls (say to shellProgram1, shellProgram2, ...) at run-time to make use of various programs that aren't necessarily on every distro.
Question: Does linuxdeploy (or some other AppImage utility) provide an easy way to package these programs into the AppImage, so that when myExecutable calls them at run-time, they are guaranteed to be available?
To achieve this you need to deploy all the binaries that may not be present in all distros into the AppDir, and set the PATH environment variable to make them available at runtime.
With linuxdeploy you have to manually copy the files into the AppDir and create a wrapper for the main binary that sets PATH. Something like this:
#!/bin/bash
# wrapper: put the bundled tools on PATH, then hand off to the real binary
export PATH="$APPDIR/usr/bin:$PATH"
exec "$APPDIR/usr/bin/my_program" "$@"
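The manual copying step itself can be as simple as the following sketch (shellProgram1 and shellProgram2 stand in for whatever tools your executable calls at run-time):

mkdir -p AppDir/usr/bin
# bundle the helper programs next to the main binary so the wrapper's PATH finds them
cp "$(command -v shellProgram1)" AppDir/usr/bin/
cp "$(command -v shellProgram2)" AppDir/usr/bin/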
You can also use appimage-builder, which creates such a wrapper for you. In the project's examples folder you can find several recipes that can be used for inspiration.

Sharing common script functions with maven-rpm-plugin

I have two separate projects that are built into RPMs using the maven-rpm-plugin.
Both packages have a postinstall script, which contains some duplicated code.
I would like to move the duplicated code into a single 'functions' script that could be inherited by both packages. Is this possible?
Write the common script as a standalone *.sh shell script that is installed and invoked in %post (or any other RPM scriptlet).
Remove the duplicated code by adding a dependency on whichever package you choose to have install the common script (while avoiding dependency loops).
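As a sketch of how that fits together (the path and function name are hypothetical), the common package would install something like:

# /usr/share/myorg/functions.sh - shipped by the common package
log_install() {
    # record which package ran its post-install and when
    echo "$1 installed at $(date)" >> /var/log/myorg-install.log
}

Each package's postinstall script, configured via maven-rpm-plugin, then shrinks to:

# postinstall scriptlet of each RPM
. /usr/share/myorg/functions.sh
log_install "my-package-a"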

What is the difference between activating an anaconda environment and running its python executable directly?

I have set up multiple Python environments using Anaconda.
Usually, to run a script "manually", I would open a command line and then type:
activate my-env
python path/to/my/script.py
Fine.
Now I am trying to run a script automatically using a scheduler and I was wondering what the difference was between
Writing a batch file which activates the environment and then executes the script (like in the snippet above)
Calling the python executable from the environment directly (within the envs/my-env/ directory) like below:
/path/to/envs/my-env/python.exe path/to/my/script.py
Both seem to work fine. Is there any difference?
I don't claim to be an expert but here's my 2 cents.
For small scripts, no, there isn't a difference.
You should notice a difference when calling external modules / packages. conda activate alters the system PATH, which changes where the command shell searches for executables and libraries.
If you supply a full path to an interpreter and a full path to an isolated script, the shell doesn't need to do a PATH lookup, since an explicit path takes priority. This means you could end up in a situation where the interpreter can see the script but cannot see its dependencies.
If you follow the conda activate process, and the environment is correctly packaged, then the shell will be able to trace any additional resources.
EDIT: The idea behind this is portability. If an admin has been careful in setting up a system, then scripts should have the appropriate visibility - i.e. see everything in its environment plus everything in the main system installation.
It's possible to full-path every call to an interpreter and a script or package location, but then what happens when you need to move it to another machine? You would need to spend a lot of time setting everything up exactly as it was before. If you follow the activation process instead, the system path will resolve everything for you.
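To make the two variants concrete, here they are side by side in a scheduler-friendly script (a sketch in bash form; the Anaconda install path is an assumption):

# Option 1: activate first, so PATH is extended and dependencies resolve
source /path/to/anaconda3/bin/activate my-env
python path/to/my/script.py

# Option 2: full path to the env's interpreter; PATH is left untouched
/path/to/anaconda3/envs/my-env/bin/python path/to/my/script.py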
Simply check out the PATH variable in your environment. After conda activation it has been extended by:
\Anaconda3;
\Anaconda3\Library\mingw-w64\bin;
\Anaconda3\Library\usr\bin;
\Anaconda3\Library\bin;
\Anaconda3\Scripts;
\Anaconda3\bin;
This doesn't make much of a difference, if you are just using the standard library in your code. However, if you rely on external packages like pandas, it's a prerequisite so that the modules can be found.
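You can see this for yourself by printing the variable before and after activation (shown for the Windows command prompt the entries above come from):

echo %PATH%
activate my-env
echo %PATH%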

Deb file from sh script

I'm trying to establish whether it is possible to create a deb package for the following app:
http://openfoam.org/download/4-0-source/
It uses an Allmake shell script which contains various standard shell commands and wmake commands to compile the source. wmake appears to be specific to this application but does call make:
http://www.cfdsupport.com/OpenFOAM-Training-by-CFD-Support/node25.html
https://github.com/OpenFOAM/OpenFOAM-2.1.x/blob/master/wmake/wmake
Is it possible to call the shell script from within a debian/rules file? Or is there a better way of doing this, if it is indeed possible?
Any assistance is much appreciated.
Indeed, the general idea of the debian/rules file is to run whatever commands are required to configure and install the upstream package into a location suitable for the dpkg toolchain.
Modern debhelper-based debian/rules files are typically extremely terse, because most typical packages adhere to build conventions for which good, very simple canned helpers are available, but traditional, more complex and explicit rules files are well-documented in older Debian packaging documentation.
Basically, the debian/rules file is a Makefile; it should have a binary target with the commands to build the upstream package into the Debian package root.
https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#rules is probably useful as a starting point - unless your needs are really arcane, the dh defaults will mostly make sense, and dh allows you to easily override the parts which don't.
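As a hedged sketch (the Allmake script name is taken from the question; install steps are omitted), a dh-style debian/rules that delegates the build to the upstream script could look like:

#!/usr/bin/make -f
# debian/rules - let dh drive everything, overriding only the build step
%:
	dh $@

override_dh_auto_build:
	# run the upstream build script instead of the default make invocation
	./Allmake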

Making Sphinx documentation inside of a virtual environment with cron

I have an application development server that is automatically updated every night with a massive shell script that we run with crontab. The script specifies #!/bin/sh at the top of the file and I am not able to change that. The basic purpose of the script is to go through the machine and download the latest code in each of the directories that we list in the script. After all of the repositories are updated, we execute a number of scripts to update the relevant databases using the appropriate virtual environment (Django manage.py commands) by calling that virtualenv's python directly.
The issue that I am having is that we have all the necessary Sphinx plugins installed in one of the virtual environments to allow us to build the documentation from the code at the end of the script, but I cannot seem to figure out how to allow the make command to run inside of the virtualenv so that it has access to the proper packages and libraries. I need a way to run the make command inside of the virtual environment and if necessary deactivate that environment afterwards so that the remainder of the script can run.
My current script looks like the below and gives errors on the last three lines, because sh does not have workon or deactivate, and because make can't find sphinx-build.
cd ${_proj_root}/dev/docs
workon dev
make clean && make html
deactivate
I was able to find the answer to this question here. The error message that is shown when you attempt to build the sphinx documentation from the root is as follows, and leads to the answer that was provided there:
Makefile:12: *** The 'sphinx-build' command was not found. Make sure
you have Sphinx installed, then set the SPHINXBUILD environment
variable to point to the full path of the 'sphinx-build' executable.
Alternatively you can add the directory with the executable to your
PATH. If you don't have Sphinx installed, grab it from
http://sphinx-doc.org/. Stop.
The full command for anyone looking to build Sphinx documentation through cron, when all the tools are installed in various virtual environments, is listed below. You can find the location of your python and sphinx-build commands by using which while the environment is activated.
make html SPHINXBUILD='<virtualenv-path-to>/python <virtualenv-path-to>/sphinx-build'
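Putting it together, the failing block from the question becomes something like this sketch (the virtualenv path is an assumption, standing in for wherever the dev environment actually lives):

cd ${_proj_root}/dev/docs
# no workon/deactivate needed: point make at the env's sphinx-build directly
make clean && make html SPHINXBUILD="$HOME/.virtualenvs/dev/bin/sphinx-build"

This runs fine under plain sh and cron, because nothing has to modify the shell's environment.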
