Why is there no way to specify install options for binaries in ebuilds? - gentoo

Gentoo's ebuild mechanism comes with several built-in eclasses/commands to manage (among other things) libraries, binaries and executables. Some of them are really useful at the installation phase, for things like setting permissions or changing the default installation directory.
About library installation, the ebuild documentation says:
dolib [list of more libraries]
Installs a library or a list of libraries into DESTTREE/lib. Creates all necessary dirs.
libopts [options for install(1)]
Can be used to define options for the install function used in the dolib functions. The default is -m0644.
The same is available for "executables": exeopts works with doexe.
Question
What I really don't understand is why dobin and dosbin exist, but not binopts and sbinopts.
Is it possible to have libopts or exeopts equivalents for dobin and dosbin, to manage permissions at the installation phase?

Because dobin and dosbin are special cases of doexe with pre-defined options; if you need special permissions (e.g. suid) you can use doexe as needed.
Effectively, (/usr)/bin and (/usr)/sbin should be executable by all users, unless something special (like limiting access to a group that has access to the hardware) is needed.
(I would probably be in favour of removing libopts too, but that's a different story, I guess.)

Related

DESTDIR vs prefix options in a build system?

Can someone explain the purpose of the $(DESTDIR) variable in a build system?
I know that it points to a temporary directory for the package currently being installed, but I cannot imagine what the practical use of it is.
To clarify: I know what the --prefix option is. For instance, if we configure the build system with ./configure --prefix="/usr", all the package's files will end up under /usr, like /usr/lib, /usr/share and so on. But in Makefiles I have also seen the following construction:
$(DESTDIR)/$(prefix)
What is the purpose of that? In short, is there a difference between DESTDIR and prefix, and when should both be used?
Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths anywhere in the package, they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths, for many reasons).
DESTDIR allows people to actually install the content somewhere other than the actual prefix: the DESTDIR is prepended to all prefix values so that the install location has exactly the same directory structure/layout as the final location, but rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is a facility like GNU Stow, which allows multiple instances to be installed at the same time and easily controlled. Other examples are creating package files using RPM or DEB: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location first.
And other people use it for their own reasons: basically it all boils down to this: DESTDIR is used to create a "staging area" for the installation, without actually installing into the final location.
Etc.

How to reliably refer to static file in a Go application?

I am writing a Go command-line tool that generates some files based on templates.
The templates are located in the Git repository alongside the code for the command-line tool itself.
I wanted to allow the following:
1. The binary, wherever it is called from, should always find the templates directory.
2. The templates directory can be overridden by the user if need be.
Since this is a Go application, I went with something like:
templateRoot := filepath.Join(
    os.Getenv("GOPATH"),
    "src/github.com/myuser/myproject/templates",
)
But being rather new to Go, I wonder if this approach is reliable enough: is it guaranteed that my application templates will always be accessible at that path?
What if someone vendors my application into their own project? Does that even make sense for a command-line tool?
Because of 2., I obviously can't/won't use go-bindata, because I want to allow the templates to be overridden if need be.
In summary: what is a good strategy to reliably refer to non-Go static files in a Go command-line tool?
GOPATH is used for building the application. While you could look for GOPATH and check relative locations to each GOPATH entry at runtime, you can't be sure it will exist (unless of course you make it a prerequisite for running your application).
go get itself is a convenience for developers to fetch and build a go package. It relies on having a GOPATH (though there's a default now in go1.8), and GOBIN in your PATH. Many programs require extra steps not covered by the simple go tool, and have scripts or Makefiles to do the build. If you're targeting users that aren't developers, you need to provide a way to install into standard system paths anyway.
Do what any regular program would do, and use some well-known path to locate the template files. You can certainly add some logic in your program to check for a hierarchy of locations: relative to $GOPATH, relative to the binary, working directory, $HOME, etc; just provide a list of locations to the user that your program will look for templates.
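For illustration, here is a minimal sketch of such a lookup chain in Go. The MYAPP_TEMPLATES override variable and the fallback locations are made-up examples, not anything your project already defines, and os.Executable requires Go 1.8 or later:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // locateTemplates returns the first existing directory from a list of
    // candidate locations: a user override, a directory next to the binary,
    // and the GOPATH checkout for developers running from source.
    func locateTemplates() (string, error) {
        var candidates []string

        // 1. Explicit user override via an environment variable (hypothetical name).
        if dir := os.Getenv("MYAPP_TEMPLATES"); dir != "" {
            candidates = append(candidates, dir)
        }

        // 2. A "templates" directory next to the binary itself.
        if exe, err := os.Executable(); err == nil {
            candidates = append(candidates, filepath.Join(filepath.Dir(exe), "templates"))
        }

        // 3. The source checkout under GOPATH, as in the question.
        if gopath := os.Getenv("GOPATH"); gopath != "" {
            candidates = append(candidates,
                filepath.Join(gopath, "src/github.com/myuser/myproject/templates"))
        }

        for _, dir := range candidates {
            if info, err := os.Stat(dir); err == nil && info.IsDir() {
                return dir, nil
            }
        }
        return "", fmt.Errorf("templates not found in any of %v", candidates)
    }

    func main() {
        dir, err := locateTemplates()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("using templates from", dir)
    }

The ordering is just one possible policy: an explicit override wins, then the installed location, then the developer's checkout.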

gentoo: how to delete all config files when unmerging a package (from its ebuild)

I am making my own personal package to have a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. Mostly it works (it simply depends on all my favourite programs), but I have two problems here:
how do I install USE flags, unmasks and such before the affected programs are installed?
how do I uninstall it? (emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package the USE flags (and others) are still kept. My intent is to REMOVE them, so the next rebuild of world would NOT use them anymore. Yes, it means a lot of programs would lose some functionality, like support for some languages or for some other programs; that is the desired result.)
My solutions so far are:
1. The package has some files in /etc/portage/package.*
1.1. I emerge that package with --nodeps (so the config files are installed)
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration)
2. I create (and install) a script that parses /var/db/packages for my package's CONTENTS and deletes all the /etc/portage/something files "manually", and I have to run this script before unmerging the package
Is there a better way to do it?
You are just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies) there are other ways:
use sets to describe your preferred packages, and manage your USE flags in the usual way (including per-package USE if needed).
a medium-complexity solution is to write a metapackage ebuild (your current case) -- but you can't mask/unmask USE flags that way anyway…
since you already have your own overlay (obviously), defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the predefined system set means for you, etc…
Unfortunately, I don't use Gentoo portage (and emerge) and have no idea if it's possible to have multiple additive profiles. I have my own profiles here and it works perfectly with Paludis.
Second, never remove any (config-protected) configuration files on uninstall! No packages do that, and there are a bunch of reasons why… The main one is that the user may have modified them and doesn't want to lose those changes. Moreover, I personally prefer to keep every config I've ever touched in a dedicated VCS repo -- it wouldn't be nice if anyone but me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files that he spent time carefully editing. The trivial way is to uninstall and then install again -- oops! He has lost his configs!
Moreover, from an ebuild's point of view, you have the pkg_prerm and pkg_postrm functions, but both of them are also called at upgrade time (i.e. when an unmerge is followed by an immediate merge phase). You have to be really careful to distinguish those use cases… And what is scarier, once "hardcoded" (and unique) rules exist in a package, you have no influence over them…
So, please, never remove any config-protected files; let the user take care of them (the user is the boss, not the package manager)…
Update: if you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to force particular files and/or directories to be unprotected. That way you don't need to modify any ebuilds or write ugly cleanup code.

Effective way of distributing go executable

I have a Go app which relies heavily on static resources like images and JARs. I want to install that Go executable on different platforms, like Linux, Mac and Windows.
I first thought of bundling the resources using https://github.com/jteeuwen/go-bindata, but since the files (~100 of them) add up to about 20 MB, it takes a really long time to build the executable. I thought a single executable would be an easy way for people to download and run it, but it seems that is not an effective approach.
I then thought of writing an installation package for each platform, such as .rpm or .deb packages. These packages would contain all the resources and put them into platform-specific predefined locations, and the Go executable could reference them there. The only thing is that I then have to handle this in the Go code: if it is Windows, load the files from, say, c:\go-installs; if it is Linux, load them from, say, /usr/local/share/go-installs. I want the Go code to be as platform-agnostic as it can be.
Or is there some other strategy for this?
Thanks
This possibly does not qualify as a real answer, but still…
As to your point №2, one way to handle this is to exploit Go's support for conditional compilation: you might create a set of files like res_linux.go, res_windows.go etc. and put the same set of variables in each, pointing to different locations, like
var InstallsPath = `C:\go-installs`
in res_windows.go and
var InstallsPath = `/usr/share/myapp`
in res_linux.go and so on. Then in the rest of the program just reference the res.InstallsPath variable and use the path/filepath package to construct full pathnames of actual resources.
Of course, another way to go is to do a runtime switch on the runtime.GOOS variable—possibly in an init() function in one of the source files.
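A minimal sketch of that runtime-switch variant, reusing the InstallsPath variable and the example paths from above (the package name res is only an assumption):

    package res

    import "runtime"

    // InstallsPath points at the platform-specific resource directory.
    // The concrete paths are just the example values used above.
    var InstallsPath string

    func init() {
        switch runtime.GOOS {
        case "windows":
            InstallsPath = `C:\go-installs`
        default: // linux, darwin, the BSDs, ...
            InstallsPath = "/usr/share/myapp"
        }
    }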
Pack everything in a zip archive and read your resource files from it using archive/zip. This way you'll have to distribute just two files—almost "xcopy deployment".
Note that while on Windows you could just have your executable extract the directory from the pathname of itself (os.Args[0]) and assume the resource file is located in the same directory, on POSIX platforms (GNU/Linux and *BSD etc) the resource file should still be located under /usr/share/myapp or a similar place dictated by FHS (or particular distro's rules), so some logic to locate that file will still be required.
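As a rough sketch of that approach (the archive name resources.zip and the /usr/share/myapp location are assumptions; os.Executable, available since Go 1.8, is used instead of os.Args[0]):

    package main

    import (
        "archive/zip"
        "fmt"
        "os"
        "path/filepath"
    )

    // openResources opens the resource archive, trying an FHS-style system
    // location first and then falling back to the directory of the executable.
    func openResources() (*zip.ReadCloser, error) {
        candidates := []string{"/usr/share/myapp/resources.zip"}

        if exe, err := os.Executable(); err == nil {
            // Same directory as the binary -- the "xcopy deployment" case.
            candidates = append(candidates, filepath.Join(filepath.Dir(exe), "resources.zip"))
        }

        for _, path := range candidates {
            if rc, err := zip.OpenReader(path); err == nil {
                return rc, nil
            }
        }
        return nil, fmt.Errorf("resources.zip not found in %v", candidates)
    }

    func main() {
        rc, err := openResources()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer rc.Close()

        // List the bundled resources as a quick smoke test.
        for _, f := range rc.File {
            fmt.Println(f.Name)
        }
    }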
All in all, if this is supposed to be a piece of FOSS, I'd go with the first variant to let the downstream packagers tweak the pathnames. If this is a proprietary (or just niche) software the second idea appears to be rather OK as you'll play the role of downstream packagers yourself.

How to design software for Linux in relation to Windows?

I have an application that I've written for Windows which I am porting to Linux (Ubuntu, to be specific). The problem is that I have only ever used Linux, never really developed for it. More specifically, I don't understand the fundamental layout of the system. For example, where should I install my software? I want it to be accessible to all users, but I need write permission to that area to edit my data files. Furthermore, how can I determine programmatically where the software was installed (not simply where it is being called from)? On Windows, I use the registry to locate my configuration file, which has all of the relevant information, but there is no registry in Linux. Thanks!
The Filesystem Hierarchy Standard (misnamed -- it is not a standard) will be very helpful to you; it clearly describes administrator preferences for where data should live.
Since this is your first time packaging your software, I'd recommend doing very little. Debian, Ubuntu, Red Hat, SuSE, Mandriva, Arch, Annvix, Openwall, PLD, etc., all have their own little idiosyncrasies about how software is best packaged.
Building
Your best bet is to provide a source tarball that builds and hope users or packagers for those distributions pick it up and package it for you. Users will probably be fine with downloading a tarball, unpacking, compiling, and installing.
For building your software, make(1) is the usual standard. Other tools exist, but this one is available everywhere and is pretty reasonable. (Even if the syntax is cranky.) Users will expect to be able to run make ; make install or ./configure ; make ; make install to build and install your software into /usr/local by default. (./configure is part of the autotools toolchain; it is especially nice for providing ./configure --prefix=/opt/foo to allow users to change where the software gets installed with one command-line parameter. I'd avoid the autotools as far as you can, but at some point it is easier to write portable software with them than without them.)
Packaging
If you really do want to provide one-stop-packaging, then the Debian Policy Manual will provide the canonical rules for how to package your software. The Debian New Maintainers Guide will provide a kinder, gentler, walkthrough of the tools unique to building packages for Debian and Debian-derived systems.
Ubuntu's Packaging Guide may have details specific to Ubuntu. (I haven't read it yet.)
Configuration
When it comes to your application's configuration file, typically a file is stored in /etc/<foo> where <foo> represents the program / package. See /etc/resolv.conf for details on name resolution, /etc/fstab for a list of devices that contain filesystems and where to mount them, /etc/sudoers for the sudo(8) configuration, /etc/apt/ for the apt(8) package management system, etc.
Sometimes applications also provide per-user configuration; those config files are often stored in ~/.foorc or ~/.foo/, in case an entire directory is more useful than a file. (See ~/.vim/, ~/.mozilla/, ~/.profile, etc.)
If you also want to provide a -c <filename> command-line option to tell your program to use a non-standard configuration file, that sometimes comes in really handy. (Especially if your users can run foo -c /dev/null to start up with a completely default configuration.)
Data files
Users will store their data in their home directory. You don't need to do anything about this; just be sure to start your directory navigation boxes with getenv("HOME") or load your configuration files via sprintf(config_dir, "%s/%s/config", getenv("HOME"), ".application"); or something similar. (They won't have permissions to write anywhere but their home directory and /tmp/ at most sites.)
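The same pattern, sketched in Go purely as an illustration (the question does not name a language); ".application" stands in for whatever hidden directory your program uses, and -c is the override option mentioned above:

    package main

    import (
        "flag"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Default per-user configuration path under $HOME.
        defaultConfig := filepath.Join(os.Getenv("HOME"), ".application", "config")

        // Optional -c override, so "foo -c /dev/null" starts with pure defaults.
        configPath := flag.String("c", defaultConfig, "path to the configuration file")
        flag.Parse()

        data, err := os.ReadFile(*configPath)
        if err != nil {
            fmt.Fprintln(os.Stderr, "no configuration found, using built-in defaults:", err)
            return
        }
        fmt.Printf("loaded %d bytes of configuration from %s\n", len(data), *configPath)
    }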
Sometimes all the data can be stored in a hidden file or directory; ssh(1), for example, keeps all its data in ~/.ssh/. Typically, users want the default key name from ssh-keygen(1) so ssh-agent(1) can find the key with a minimum of fuss. (It uses ~/.ssh/id_rsa by default.) The shotwell(1) photo manager provides a managed experience, similar to iPhoto.app from Apple. It lets users choose a starting directory, but otherwise organizes files and directories within it as it sees fit.
If your application is a general purpose program, you'll probably let your users select their own filenames. If they want to store data directly to a memory stick mounted in /dev or /media or a remote filesystem mounted into /automount/blah, their home directories, a /srv/ directory for content served on the machine, or /tmp/, let them. It's up to users to pick reasonable filenames and directories for their data. It is up to users to have proper permissions already. (Don't try to provide mechanisms for users to write in locations they don't have privileges.)
Application file installation and ownership
There are two common ways to install an application on a Linux system:
The administrator installs it once, for everyone. This is usual. The programs are owned by root or bin or adm or some similar account. The programs run as whichever user executes them, so they get the user's privileges for creating and reading files. If they are packaged with distribution packaging files, executables will typically live in /usr/bin/, libraries in /usr/lib/, and non-object-files (images, schemas, etc.) will live in /usr/share/. (/bin/ and /lib/ are for applications needed at early boot or for rescue environments. /usr might be common to all machines in a network, mounted read-only late in the boot up process.) (See the FHS for full details.)
If the programs are unpackaged, then /usr/local/ will be the starting point: /usr/local/bin/, /usr/local/lib/, /usr/local/share/, etc. Some administrators prefer /opt/.
Users install applications into their home directory. This is less common, but many users will have a ~/bin/ directory where they store shell scripts or programs they write, or link in programs from a ~/Local/<foo>/ directory. (There is nothing magic about that name. It was just the first thing I thought of years ago. Others choose other names.) This is where ./configure --prefix=~/Local/blah pays for itself.
In Linux, everything is text, i.e. ASCII.
Configuration is stored in configuration files, which normally have a .conf extension and are stored in the /etc directory.
The executable of your application normally resides in the /usr/bin directory. The data files of your application can go to /usr/lib or some other directory under /usr/.
It is important to consider which language you are writing your application in. In C/C++ a custom makefile is typically used to do the installation, copying these files into their respective directories. The installation location can be recorded by writing it into the .conf file when that file is generated, for example from a bash script.
You should really know bash scripting in order to automate all of this.
