I'm trying to establish whether it is possible to create a deb package for the following app:
http://openfoam.org/download/4-0-source/
It uses an Allmake shell script which contains various standard shell commands and wmake commands to compile the source. wmake appears to be specific to this application but does call make:
http://www.cfdsupport.com/OpenFOAM-Training-by-CFD-Support/node25.html
https://github.com/OpenFOAM/OpenFOAM-2.1.x/blob/master/wmake/wmake
Is it possible to call the shell script from within a debian/rules file? Or is there a better way of doing this, if it is indeed possible?
Any assistance is much appreciated.
Indeed, the general idea of the debian/rules file is to run whatever commands are required to configure and install the upstream package into a location suitable for the dpkg toolchain.
Modern debhelper-based debian/rules files are typically extremely terse, because most packages adhere to build conventions for which good, very simple canned helpers are available. More traditional, complex and explicit rules files are well documented in older Debian packaging documentation.
Basically, the debian/rules file is a Makefile; it should have a binary target with the commands to build the upstream package into the Debian package root.
https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#rules is probably useful as a starting point - unless your needs are really arcane, the dh defaults will mostly make sense, and it allows you to easily override the parts which don't.
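As a hedged sketch of what that can look like with dh, assuming the upstream build script is ./Allwmake (adjust the name to match the tree; the install paths here are placeholders):

#!/usr/bin/make -f
# Minimal dh-style rules file; recipe lines must be indented with tabs.
%:
	dh $@

# Nothing to configure; the upstream script drives the whole build.
override_dh_auto_configure:

override_dh_auto_build:
	./Allwmake

override_dh_auto_install:
	# Staging paths are placeholders; adapt to the real layout.
	mkdir -p debian/tmp/opt/openfoam4
	cp -a platforms debian/tmp/opt/openfoam4/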
I am trying to build a library with a different build system, but files in the library require a config.h header file that is generated after running the configure script generated by autoconf.
This is the sequence of steps I am following to try and generate the config.h file that is needed
autoreconf -ivf
./configure --disable-dependency-tracking
The build system guarantees that the gflags library will be linked and the headers will be available at preprocessing time. But the configure script exits with the following error:
configure: error: Please install google-gflags library
Is there some way I can get the list of required libraries (such as gflags) and then pass arguments to the configure script telling it to assume these libraries exist on the system? I went through the help output for both autoreconf and ./configure and wasn't able to figure this out.
Sorry for the long explanation and problem. I am very new to autoconf, etc.
The answer to your question is: no, it is not possible to get a list of dependencies from autotools.
Why?
Well, autotools doesn't track dependencies at all.
Instead, it checks whether specific features are present on the system (e.g. a given header file, or a given library file).
Now a specific header file can come from a variety of sources; e.g., depending on your distribution, the foo.h header can be installed via:
libfoo-dev (Debian and derivatives)
foo-devel (Fedora)
foo (upstream)
...
In your specific case, the maintainers of your project output a nice error message telling you to install a given package by name.
The maintainers of your project also chose to abort with a fatal error if a given dependency is not available.
The reason might well be that the project simply won't work without that dependency, and that it is impossible to compile the program without it.
Example
Your project might be written in C++ and thus require a C++-compiler.
Obviously there is little use in passing some flags to ./configure so it assumes that there is a C++-compiler available if in reality there is none.
There is hope
However, not all is bad.
Your configure script might well have the ability to disable certain features (that appear to be hard requirements by default).
Just check ./configure --help and look for flags like
--enable-FOO
--disable-FOO
--with-BAR
--without-BAR
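For context, a hedged sketch of how such a switch is often wired up in configure.ac ("foo" is a placeholder, not a name from your project):

AC_ARG_ENABLE([foo],
  [AS_HELP_STRING([--disable-foo], [build without foo support])],
  [enable_foo=$enableval], [enable_foo=yes])
AS_IF([test "x$enable_foo" = "xyes"],
  [AC_CHECK_LIB([foo], [foo_init], [],
    [AC_MSG_ERROR([foo support requested but libfoo was not found])])])

When the switch exists, --disable-foo skips the check entirely; when it doesn't, the dependency is likely a hard requirement.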
Automation?
One thing to know about autotools is that configure really is a program (its source code being configure.ac), written in an arcane programming language (involving shell and m4).
This means that it can have practically any behavior, and there is no single standard way to achieve "dependency tracking".
What you're trying to do will not work, as umläute already said. On the other hand, depending on the package you're trying to build, you may be able to tell ./configure that a given library is there even if it isn't.
For instance, if the script uses pkg-config to check for the presence of a library, you can set FOO_CFLAGS and FOO_LIBS to override the presence check, effectively telling it "yes, this package is there, you just don't know how to find it". These variables are very package-specific, though, so you may have to provide more information if that's what you're looking for.
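As a concrete but hedged example: if the project's configure.ac calls PKG_CHECK_MODULES([GFLAGS], [gflags]), the generated configure accepts override variables named after the macro's first argument (check ./configure --help to see the actual names):

# Variable names assume the GFLAGS prefix from PKG_CHECK_MODULES.
./configure GFLAGS_CFLAGS="-I/opt/gflags/include" \
            GFLAGS_LIBS="-L/opt/gflags/lib -lgflags"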
I'm currently learning how to use the autoconf/automake toolchain. I seem to have a general understanding of the workflow here - basically you have a configure.ac script from which an executable configure script is generated. The generated configure script is then executed by the end user to generate Makefiles, so the program can be built/installed.
So the installation for a typical end-user is basically:
./configure
make
make install
make clean
Okay, now here's where I'm confused:
As a developer, I've noticed that the auto-generated configure script sometimes won't run, and will error with:
config.status: error: cannot find input file: `somedir/Makefile.in'
This confuses me, because I thought the configure script is supposed to generate the Makefile.in. So Googling around for some answers, I've discovered that this can be fixed with an autogen.sh script, which basically "resets" the state of the autoconf environment. A typical autogen.sh script would be something like:
#!/bin/sh
aclocal \
  && automake --add-missing \
  && autoconf
Okay fine. But as an end-user who's downloaded countless tarballs throughout my life, I've never had to use an autogen.sh script. All I did was uncompress the tarball, and do the usual configure/make/make install/make clean routine.
But as a developer who's now using autoconf, it seems that configure doesn't actually run unless you run autogen.sh first. So I find this very confusing, because I thought the end-user shouldn't have to run autogen.sh.
So why do I have to run autogen.sh first - in order for the configure script to find Makefile.in? Why doesn't the configure script simply generate it?
In order to really understand the autotools utilities you have to remember where they come from: they come from an open source world where there are (a) developers who are working from a source code repository (CVS, Git, etc.) and creating a tar file or similar containing source code and putting that tar file up on a download site, and (b) end-users who are getting the source code tar file, compiling that source code on their system and using the resulting binary. Obviously the folks in group (a) also compile the code and use the resulting binary, but the folks in group (b) don't have or need, often, all the tools for development that the folks in group (a) need.
So the use of the tools is geared towards this split, where the people in group (b) don't have access to autoconf, automake, etc.
When using autoconf, people generally check in the configure.ac file (input to autoconf) into source control but do not check in the output of autoconf, the configure script (some projects do check in the configure script of course: it's up to you).
When using automake, people generally check in the Makefile.am file (input to automake) but do not check in the output of automake: Makefile.in.
The configure script basically looks at your system for various optional elements that the package may or may not need, where they can be found, etc. Once it finds this information, it can use it to convert various XXX.in files (typically, but not solely, Makefile.in) into XXX files (for example, Makefile).
So the steps generally go like this: write configure.ac and Makefile.am and check them in. To build the project from source code control checkout, run autoconf to generate configure from configure.ac. Run automake to generate Makefile.in from Makefile.am. Run configure to generate Makefile from Makefile.in. Run make to build the product.
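In command form, a sketch of that developer-side sequence (in practice autoreconf --install bundles the first three steps, and aclocal must run first to collect m4 macros):

aclocal                  # gather m4 macro definitions
autoconf                 # configure.ac -> configure
automake --add-missing   # Makefile.am  -> Makefile.in
./configure              # Makefile.in  -> Makefile
make                     # build the product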
When you want to release the source code (if you're developing an open source product that makes source code releases) you run autoconf and automake, then bundle up the source code with the configure and Makefile.in files, so that people building your source code release just need make and a compiler and don't need any autotools.
Because the order of running autoconf and automake (and libtool if you use it) can be tricky there are scripts like autogen.sh and autoreconf, etc. which are checked into source control to be used by developers building from source control, but these are not needed/used by people building from the source code release tar file etc.
Autoconf and automake are often used together but you can use autoconf without automake, if you want to write your own Makefile.in.
For this error:
config.status: error: cannot find input file: `somedir/Makefile.in'
In the directory where configure.ac is located, add a line to Makefile.am naming the subdirectory somedir:
SUBDIRS = somedir
Inside somedir, put a Makefile.am describing what to build there, then run automake --add-missing.
A better description can be found in section 7.1, "Recursing subdirectories", of the automake manual.
https://www.gnu.org/software/automake/manual/automake.html
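One detail worth adding: configure only generates the Makefiles listed in AC_CONFIG_FILES in configure.ac, so this error also appears when somedir/Makefile is listed there but automake never produced somedir/Makefile.in. A minimal sketch of the two pieces together:

# configure.ac: every Makefile that configure should generate
AC_CONFIG_FILES([Makefile somedir/Makefile])

# top-level Makefile.am: recurse into the subdirectory
SUBDIRS = somedir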
I am making a configure.ac file for a tool I made, and I need to check whether pdflatex is installed on the user's system. How do I do it? For checking for other libraries I simply included test programs using AC_COMPILE_IFELSE, but I don't know if pdflatex can be invoked from the program.
Also, is it regular practice to install all the required packages automatically using some script, or can I just specify in the README file which packages are required, leaving it up to the user to install them?
You can use AC_CHECK_PROG([have_pdflatex], [pdflatex], [yes], [no]) to simply check if it exists and set have_pdflatex to yes if so. It's more likely that you'll want to use AC_PATH_PROG([PDFLATEX], [pdflatex]) to find the actual path of the program if it exists and store it in PDFLATEX.
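A minimal sketch of the second form, assuming you want configure to abort when pdflatex is missing (aborting is a design choice; you could instead warn or disable the PDF feature):

AC_PATH_PROG([PDFLATEX], [pdflatex])
AS_IF([test "x$PDFLATEX" = "x"],
  [AC_MSG_ERROR([pdflatex not found; please install it, e.g. via TeX Live])])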
I think it's best to let the user install the prerequisites themselves. You don't know how they install their software (apt? yum? pacman? emerge? source?) and it wouldn't be worth the effort to try to cover all cases. It's sufficient to just mention them in the README and to test for them with Autoconf macros.
How can I put my Go binary into a Debian package? Since Go is statically linked, I just have a single executable--I don't need a lot of complicated project metadata information. Is there a simple way to package the executable and resource files without going through the trauma of debuild?
I've looked all over for existing questions; however, all of my research turns up questions/answers about a .deb file containing the golang development environment (i.e., what you would get if you do sudo apt-get install golang-go).
Well. I think the only "trauma" of debuild is that it runs lintian after building the package, and it's lintian who tries to spot problems with your package.
So there are two ways to combat the situation:
Do not use debuild: this tool merely calls dpkg-buildpackage, which does the real heavy lifting. The usual call to build a binary package is dpkg-buildpackage -us -uc -b. You might still call debuild for other purposes, like debuild clean, for instance.
Add the so-called "lintian override", which can be used to make lintian turn a blind eye to selected problems with your package which, you insist, are not problems.
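For instance, a sketch of an override file (the tag name is just an example of what lintian might report for a statically linked Go binary; use whatever tag it actually prints):

# debian/<package>.lintian-overrides
<package>: statically-linked-binary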
Both approaches imply that you do not attempt to build your application with the packaging tools, but rather treat it as a blob which is just wrapped into a package. This requires deviating slightly from the normal way debian/rules works (so that it does not attempt to build anything).
Another solution which might be possible (and is really way more Debian-ish) is to try to use gcc-go (plus gold for linking): since it's a GCC front-end, this tool produces a dynamically-linked application (which links against libgo or something like this). I, personally, have no experience with it yet, and would only consider using it if you intend to try to push your package into the Debian proper.
Regarding the general question of packaging Go programs for Debian, you might find the following resources useful:
This thread on go-nuts, started by one of the Go for Debian packagers.
In particular, the first post in that thread links to this discussion on debian-devel.
The second thread on debian-devel regarding that same problem (it's a logical continuation of the former thread).
Update on 2015-10-15.
(Since this post appears to still be searched, found and studied by people, I've decided to update it to better reflect the current state of affairs.)
Since then, the situation with packaging Go apps and packages has improved dramatically, and it's now possible to build a Debian package using "classic" Go (the so-called gc suite originating from Google) rather than gcc-go.
There now exists good infrastructure for packages as well.
The key tool to use when debianizing a Go program now is dh-golang described here.
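A hedged sketch of the resulting debian/rules (the DH_GOPKG import path is a placeholder; newer packages declare XS-Go-Import-Path in debian/control instead):

#!/usr/bin/make -f
# Placeholder import path; the recipe line must be tab-indented.
export DH_GOPKG := github.com/you/yourtool

%:
	dh $@ --buildsystem=golang --with=golang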
I've just been looking into this myself, and I'm basically there.
Synopsis
By 'borrowing' from the 'package' branch from one of Canonical's existing Go projects, you can build your package with dpkg-buildpackage.
Install dependencies and grab a 'package' branch from another repo:
# I think this list of packages is enough. May need dpkg-dev as well.
sudo apt-get install bzr debhelper build-essential golang-go
bzr branch lp:~niemeyer/cobzr/package mypackage-build
cd mypackage-build
Edit the metadata.
edit debian/control file (name, version, source). You may need to change the golang-stable dependency to golang-go.
The debian/control file is the manifest. Note the 'build dependencies' (Build-Depends: debhelper (>= 7.0.50~), golang-stable) and the 3 architectures. Using Ubuntu (without the gophers ppa), I had to change golang-stable to golang-go.
edit debian/rules file (put your package name in place of cobzr).
The debian/rules file is basically a 'make' file, and it shows how the package is built. In this case they are relying heavily on debhelper. Here they set up GOPATH, and invoke 'go install'.
Here's the magic 'go install' line:
cd $(GOPATH)/src && find * -name '*.go' -exec dirname {} \; | xargs -n1 go install
Also update the copyright file, readme, licence, etc.
Put your source inside the src folder. e.g.
git clone https://github.com/yourgithubusername/yourpackagename src/github.com/yourgithubusername/yourpackagename
or, as a second example:
cp .../yourpackage/ src/
Build the package:
# -us -uc skips package signing.
dpkg-buildpackage -us -uc
This should produce a binary .deb file for your architecture, plus the 'source deb' (.tgz) and the source deb description file (.dsc).
More details
So, I realised that Canonical (the Ubuntu people) are using Go, and building .deb packages for some of their Go projects. Ubuntu is based on Debian, so for the most part the same approach should apply to both distributions (dependency names may vary slightly).
You'll find a few Go-based packages in Ubuntu's Launchpad repositories. So far I've found cobzr (git-style branching for bzr) and juju-core (a devops project, being ported from Python).
Both of these projects have both a 'trunk' and a 'package' branch, and you can see the debian/ folder inside the package branch. The 2 most important files here are debian/control and debian/rules - I have linked to 'browse source'.
Finally
Something I haven't covered is cross-compiling your package (to the other 2 architectures of the 3: 386/arm/amd64). Cross-compiling isn't too tricky in Go (you need to build the toolchain for each target platform, and then set some ENV vars during 'go build'), and I've been working on a cross-compiler utility myself. Eventually I'll hopefully add .deb support to my utility, but first I need to crystallize this task.
Good luck. If you make any progress then please update my answer or add a comment. Thanks
Building deb or rpm packages from Go applications is also very easy with fpm.
Grab it from rubygems:
gem install fpm
After building your binary, e.g. foobar, you can package it like this:
fpm -s dir -t deb -n foobar -v 0.0.1 foobar=/usr/bin/
fpm supports all sorts of advanced packaging options.
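For example, a sketch using a few of those flags (all values are placeholders; see fpm --help for the full list):

fpm -s dir -t deb -n foobar -v 0.0.1 \
    --description "Example tool packaged with fpm" \
    --depends libc6 \
    foobar=/usr/bin/foobar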
There is an official Debian policy document describing the packaging procedure for Go: https://go-team.pages.debian.net/packaging.html
For libraries: Use dh-make-golang to create a package skeleton. Name your package with a name derived from the import path, with a -dev suffix, e.g. golang-github-lib-pq-dev. Specify the dependencies on the Depends: line. (These are source dependencies for building, not binary dependencies for running, since Go statically links all source.)
Installing the library package will install its source code to /usr/share/gocode/src (possibly, the compiled libraries could go into .../pkg). Building dependent Go packages will use the artifacts from those system-wide locations.
For executables: Use dh-golang to create the package. Specify dependencies on the Build-Depends: line (see above regarding packaging the dependencies).
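As a hedged usage example (the exact invocation depends on the dh-make-golang version, and the import path is a placeholder):

# Generates a golang-github-lib-pq packaging skeleton from the import path.
dh-make-golang make github.com/lib/pq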
I recently discovered https://packager.io/ - I'm quite happy with what they're doing. Maybe open up one of their packages to see how it's put together?
Maybe I am asking a silly question, but is there any way I can tell automake to include my project's header files when I do a "make dist" but not when I do a "make install"?
Maybe I am not going about this the right way, so to make it clearer I will explain what I need.
I need to deploy my applications in an embedded board and I use "make install" in a script to create a package that can be copied to the target board.
On the other side, I'd like to be able to update my toolchain with my libraries and include files.
In the first situation, I can't have any fat wasting my limited flash memory, just the things necessary to make the application run.
In the second one, I need to have headers, pkgconfig and all of the stuff needed for development.
How am I supposed to configure my "Makefile.am", and which rules should I use, so that I can reach both goals?
Many thanks.
I just want to be able to set a given script setuid, give other data files arbitrary read/write permissions, and so on.
I think the $(DESTDIR) Makefile variable does that.
As it is not defined by automake, "make install" uses it empty,
but dpkg-buildpackage defines it when it runs the install step.
(see: http://www.gnu.org/prep/standards/html_node/DESTDIR.html#DESTDIR)
It helped me manage a setuid install:
configure.ac:
# Add an option to disable setuid during install; used in distcheck.
AC_ARG_ENABLE([setuid-install],
  [AS_HELP_STRING([--disable-setuid-install],
    [do not set setuid flags during install])],
  [enable_setuid_install=$enableval],
  [enable_setuid_install="yes"])
AM_CONDITIONAL([SETUID_INSTALL], [test x"$enable_setuid_install" = "xyes"])
Makefile.am:
if SETUID_INSTALL
install-data-hook:
/bin/chmod 4755 $(DESTDIR)$(bindir)/myBinary
endif
I don't think autoconf was really designed to be a generic installer/uninstaller that'll give you that kind of control without at least some pain. You're looking for something like dpkg-buildpackage or rpmbuild where you can split up the output of make install into specific subpackages so you can have:
Package foo for the embedded board (and possibly the toolchain, depending on what's in the package): DSOs, executables, and other files necessary at runtime
Package foo-dev or foo-devel for the toolchain (headers, static libs, and other files needed for development).
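As an illustration of that split in Debian terms, a debian/control along these lines declares two binary packages built from one source (all names, versions and fields here are placeholders):

Source: foo
Section: misc
Priority: optional
Maintainer: You <you@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.6

Package: foo
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: runtime files for the embedded board

Package: foo-dev
Architecture: any
Depends: foo (= ${binary:Version})
Description: development files (headers, static libs, pkg-config data)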