Gentoo: how to delete all config files when unmerging a package (from its ebuild)

I am making my own personal package to have a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. It mostly works (it simply depends on all my favourite programs), but I have two problems here:
how do I install USE flags, unmask entries, and such before the affected programs are installed?
how do I uninstall it? emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package, the USE flags (and the rest) are still kept. My intent is to REMOVE them, so that the next rebuild of world would NOT use them anymore. Yes, that means a lot of programs would lose some functionality, like support for certain languages or for certain other programs; that is the desired result.
My solutions so far are:
1. The package installs some files into /etc/portage/package.*:
1.1. I emerge that package with --nodeps (so the config files are installed).
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration).
2. I create (and install) a script that parses /var/db/pkg for my package's CONTENTS and deletes all the /etc/portage/* files "manually"; I have to run this script before unmerging the package (a rough sketch follows).
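A rough sketch of that cleanup script, assuming the standard /var/db/pkg layout and a made-up category/name-version:

#!/bin/sh
# Hypothetical package entry; adjust the category/name-version to the real one.
PKG_VDB="/var/db/pkg/app-misc/mypackage-1.0"

# CONTENTS has one entry per line, e.g.:
#   obj /etc/portage/package.use/mypackage <md5> <mtime>
# Pick out the installed files under /etc/portage and delete them.
awk '$1 == "obj" && $2 ~ "^/etc/portage/" { print $2 }' "$PKG_VDB/CONTENTS" |
while read -r f; do
    rm -f -- "$f"
done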
Is there better way to do it ?

You're just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies), there are other ways:
use sets to describe your preferred packages, and manage your USE flags in the usual way (including per-package USE if needed); see the sketch after this list.
a medium-complexity solution is to write a metapackage ebuild (your current case) -- but you can't mask/unmask USE flags that way anyway…
if you already have your own overlay (and you obviously do) -- defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the predefined system set means for you, etc…
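The sets route can be as simple as a file under /etc/portage/sets/ (the set name and atoms below are only examples):

# /etc/portage/sets/my-tools -- one package atom per line
app-editors/vim
net-misc/curl
www-client/firefox

Then emerge --ask @my-tools pulls everything in, and Portage tracks the set like any other world entry.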
Unfortunately, I don't use Gentoo portage (and emerge) and have no idea if it's possible to have multiple additive profiles. I have my own profiles here and it works perfectly with Paludis.
Second, never remove any configuration files (config-protected ones) after uninstall! There are no packages that do that, and there is a bunch of reasons for that… The main one is that the user may have modified them and doesn't want to lose his changes. Moreover, I personally prefer to have every config I've ever touched in a dedicated VCS repo -- it wouldn't be nice if someone other than me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files he spent some time carefully editing. The trivial way is to uninstall and then install again -- oops! He lost his configs!
Moreover, from an ebuild's POV, you have the pkg_prerm and pkg_postrm functions, but both of them are also called at upgrade time (i.e. when an unmerge is followed by an immediate merge phase). You have to be really careful to distinguish those use cases… And what is scarier, with any "hardcoded" (and unique) rules in a package, you don't have any influence over them…
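For illustration only, given all the caveats above: in EAPI 4 and later an ebuild can tell the two cases apart, roughly like this (a sketch, not a recommendation):

pkg_prerm() {
    # REPLACED_BY_VERSION is set when this unmerge is part of an
    # upgrade/downgrade, and empty on a genuine removal.
    if [[ -z ${REPLACED_BY_VERSION} ]]; then
        einfo "Real removal -- this is the only spot where cleanup could go"
    fi
}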
So please, never remove any config-protected files; let the user take care of them (the user is the boss, not the package manager)…
Update: If you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to forcibly unprotect files and/or directories. That way you don't need to modify any ebuilds and/or write ugly cleanup code.
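In a custom profile that could look like the following (the profile path and masked directories are only an example):

# profiles/my-profile/make.defaults in your overlay
# Anything under these directories loses CONFIG_PROTECT coverage, so files
# installed there by a package are removed on unmerge like ordinary files.
CONFIG_PROTECT_MASK="/etc/portage/package.use /etc/portage/package.accept_keywords"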

Related

Upgrading old go project to work with go modules

My $GOPATH contains 3 locations:
/home//Documents/gotree
/home//Documents/perforce/modules/thirdparty/golibs
/home//Documents/perforce/modules/sggolibs/
Here location 1 is for general purposes; 2 and 3 are for work-related libraries, which are maintained on one Perforce server. Those last two are kept in Perforce so that everyone in the company uses these exact versions, not the libraries' latest versions from the internet.
In other locations there are a couple of Go servers, all of which use at least one library from $GOPATH locations 2 and 3.
All of those servers were written 2-3 years ago and do not contain a go.mod or any other package-management files.
My question is: how do I upgrade all of these servers to the latest version of Go so that they work with Go modules, probably with a vendor directory for the third-party libraries?
Apologies if my question is too generic.
Unfortunately, Perforce is not one of the version control systems supported natively in the go command, so you may need to apply a bit of scripting or tooling in order to slot in the libraries from your Perforce repositories.
One option is to set up a module proxy to serve the dependencies from Perforce, and have your developers set the GOPROXY and GONOSUMDB environment variables so that they use that proxy instead of (or in addition to) the defaults (proxy.golang.org,direct).
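The settings involved would look something like this (the proxy URL and module-path prefix are placeholders, not real endpoints):

# Try the internal proxy first, then fall back to the defaults.
export GOPROXY=https://goproxy.corp.example.com,https://proxy.golang.org,direct
# Skip the public checksum database for the company's module paths.
export GONOSUMDB=perforce.example.com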
Note that Go modules compute and store checksums for dependencies, so if you have modified any third-party dependencies it is important that any modifications be served with unique version strings (or different module paths!) so that they don't collide with upstream versions with different contents. (I seem to recall that the Athens proxy has support for filtering and/or injecting modules, although I'm not very familiar with its capabilities or configuration.)
I'm not aware of any Go module proxy implementations that support Perforce today, but you might double-check https://pkg.go.dev/search?q=%22module+proxy%22 to be sure; at the very least, there are a number of implementations listed there that you could use as a reference. The protocol is intentionally very simple, so implementing it hopefully wouldn't be a huge amount of work.
Another option — probably less work in the short term but more in the long term — is to use replace directives in each module to replace the source code for each Perforce-hosted dependency with the corresponding filesystem path. You could probably write a small script to automate that process; the go mod edit command is intended to support that kind of scripting.
Replacement modules are required to have go.mod files (to reduce confusion due to typos), so if you opt for this approach you may need to run go mod init in one or more of your Perforce directories to create them.
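Sketched out with made-up module paths (and the $GOPATH locations from the question), the per-dependency steps might be:

# Give the Perforce-hosted library a go.mod if it doesn't have one yet.
cd ~/Documents/perforce/modules/thirdparty/golibs/somelib
go mod init perforce.example.com/thirdparty/somelib

# In each server's module, redirect the import to the local copy.
cd ~/code/some-server
go mod edit -replace perforce.example.com/thirdparty/somelib=../../Documents/perforce/modules/thirdparty/golibs/somelib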
With either of the above approaches, it is probably simplest to start with as few modules as possible in your first-party repository: ideally just one at the root of your package tree. You can run go mod init there, then set up your replace directives and/or local proxy, then run go mod tidy to fill in the dependency graph.
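Put together, the first-party bootstrap is roughly (the module path is again a placeholder):

cd ~/code/some-server   # root of the first-party package tree
go mod init perforce.example.com/some-server
# ...add the replace directives and/or proxy settings described above...
go mod tidy             # fill in the dependency graph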

Why is there no way to specify install options for binaries in ebuilds?

Gentoo's ebuild mechanism comes with several built-in eclasses/commands to manage (amongst other things) libraries, binaries, and executables. Some of them are really useful at the installation phase, for things like setting permissions or modifying the default installation directory.
About library installation, the ebuild documentation says:
dolib [list of more libraries]
Installs a library or a list of libraries into DESTTREE/lib. Creates all necessary dirs.
libopts [options for install(1)]
Can be used to define options for the install function used in the dolib functions. The default is -m0644.
The same is available for "executables": exeopts works with doexe.
Question
What I really don't understand is why dobin and dosbin exist, but not binopts and sbinopts.
Is it possible to have libopts or exeopts equivalents for dobin and dosbin, to manage permissions at the installation phase?
Because dobin and dosbin are special cases of doexe with pre-defined options; if you need special permissions (e.g. setuid), you can use doexe as needed.
Effectively, (/usr)/bin and (/usr)/sbin should be executable by all users, unless something special (like limiting access to a group that has access to the hardware) is needed.
(I would probably be in favour of removing libopts too, but that's a different story, I guess.)
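For example, mixing the two in src_install might look like this (the file names are made up):

src_install() {
    # Ordinary executables: dobin installs them mode 0755 into /usr/bin.
    dobin mytool

    # The special case: a setuid helper, installed via doexe.
    exeinto /usr/bin
    exeopts -m4711
    doexe myhelper
}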

Should I push Makefile.in to git repository?

Using autotools as the build system, should we ship Makefile.in (generated by automake) within the distribution? Running make dist puts Makefile.in in the archive, so should I also push Makefile.in to my git repo?
There is no definitive answer to this, just strongly-held opinions.
The traditional view -- and I think I am justified in calling it this, as it was the operative view where Autoconf and Automake were invented -- was that you should check in the generated files. The rationale for this was twofold.
First, it reduced dependencies for development: you could check out a project and run configure without needing to install autoconf and friends. This was especially important in the bad old pre-Linux days, when the tools weren't installed by default and when package managers were just a dream.
Second, because most source changes don't involve changes to the configury, this reduced a possible source of errors where different developers might have different versions of the tools installed.
The check-it-in approach essentially relies on the use of AM_MAINTAINER_MODE. In fact, this is why this mode was invented.
A different view eventually emerged, which was that such files should not be checked in. I think the rationale for this is also twofold.
First, it is cleaner. I'm sure one can find any number of exhortations saying that only editable files should be committed to source control. And, this makes sense -- derived files can be derived; in source control they are just clutter.
Second, it is not uncommon for the generated files to get out of date in the source tree. This happens because developers forget to enable maintainer mode. Checking in just the source files not only avoids the problem, but also lets other developers catch any bugs it may have caused.
This approach pretty much requires avoiding AM_MAINTAINER_MODE.
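With this approach, a fresh checkout regenerates the derived files before building, typically:

# Regenerate configure, Makefile.in, etc. from configure.ac and Makefile.am
autoreconf --install
./configure
make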
To sum up, there is no right answer. Some people, in my observation, prefer one of the arguments above; but neither is truly conclusive, in the sense that both approaches have worked well for multiple serious projects over a very long period of time.

What's a good best practice with Go workspaces?

I'm just getting into learning Go, and reading through existing code to learn "how others are doing it". In doing so, the use of a go "workspace", especially as it relates to a project's dependencies, seems to be all over the place.
What is (or is there) a common best practice around using a single Go workspace vs. multiple Go workspaces (i.e. definitions of $GOPATH) while working on various Go projects? Should I expect to have a single Go workspace that acts as a central repository of code for all my projects, or should I explicitly break it up and set $GOPATH as I go to work on each project (kind of like a Python virtualenv)?
I think it's easier to have one $GOPATH per project; that way you can have different versions of the same package for different projects and update the packages as needed.
With a central repository, it's difficult to update a package as you might break an unrelated project when doing so (if the package update has breaking changes or new bugs).
I used to use multiple GOPATHs -- dozens, in fact. Switching between projects and maintaining the dependencies was a lot harder, because pulling in a useful update in one workspace required that I do it in the others, and sometimes I'd forget, and scratch my head, wondering why that dependency works in one project but not another. Fiasco.
I now have just one GOPATH and I actually put all my dev projects - Go or not - within it. With one central workspace, I can still keep each project in its own git repository (src/<whatever>) and use git branching to manage dependencies when necessary (in practice, very seldom).
My recommendation: use just one workspace, or maybe two (e.g. if you need to keep work and personal code more separate, though the recommended package-path naming convention should do that for you).
If you just set GOPATH to $HOME/go or similar and start working, everything works out of the box and is really easy.
If you make lots of GOPATHs, with lots of bin dirs, for lots of projects, with lots of common dependencies in various states of freshness, you are, as should be quite obvious, making things harder on yourself. That's just more work.
If you find that, on occasion, you need to isolate some things, then you can make a separate GOPATH to handle that situation.
But in general, if you find yourself doing more work, it's often because you're choosing to make things harder.
I've got what must be approaching 100 projects I've accumulated in the last four years of go. I almost always work in GOPATH, which is $HOME/go on my computers.
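A minimal version of that setup is just:

# One workspace for everything.
export GOPATH=$HOME/go
# Put binaries from go install on the PATH.
export PATH=$PATH:$GOPATH/bin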
Using one GOPATH across all of your projects is very handy, but I find this to only be the case for my own personal projects.
I use a separate GOPATH for each production system I maintain because I use git submodules in each GOPATH's directory tree in order to freeze dependencies.
So, something like:
~/code/my-project
- src
  - github.com
    + dependency-one
    + dependency-two
  - my-org
    - my-project
      * main.go
      + package-one
      + package-two
- pkg
- bin
By setting GOPATH to ~/code/my-project, the build uses the dependency-one and dependency-two git submodules within that project instead of global copies of those dependencies.
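The freezing itself is plain git; roughly (the repository URL and commit are placeholders):

cd ~/code/my-project
export GOPATH=$HOME/code/my-project
# Pin a dependency at a known-good commit as a submodule.
git submodule add https://github.com/example/dependency-one src/github.com/dependency-one
git -C src/github.com/dependency-one checkout <known-good-commit>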
Try envirius (a universal virtual-environment manager). It lets you compile any version of Go and create any number of environments based on it; $GOPATH/$GOROOT depend on each particular environment.
Moreover, it lets you create environments with mixed languages (for example, Python and Go in one environment).
At my company I created Virtualgo to make managing multiple GOPATHs super easy. A couple of advantages over handling it manually are:
Automatic switching to the correct GOPATH when you cd to a project.
It integrates well with vendoring tools
It also puts the new GOBIN on your PATH, so you can use the executables installed there.
It still has your original GOPATH as a backup. If a package is not found in the project specific workspace it will search the main GOPATH.
One workspace + godep works best for me.
I follow KISS - one GOPATH, two go paths:
export GOPATH=$HOME/go:$HOME/development/go
That way third-party stuff goes in a central place (package installation uses the first path entry by default), and I can flexibly move my projects elsewhere, under the second path entry.
You might want to try the direnv package.
https://direnv.net/
Just use GoSwitch. Saves a heck of a lot of time and sanity.
Add the script to the root of each of your projects and source it.
It will make that project's dir your GOPATH and also add/remove that project's exact bin folder to/from your PATH.
https://github.com/buffonomics/goswitch

Easy way to change .deb build prefix?

I am trying to make a live CD that simplifies chrooting into unbootable Linux systems for users, as many unbootable-Linux issues can be fixed with chroot, but many users probably don't understand the concept of chroot.
One of the abilities I want to add is the ability to temporarily import some utilities from the live CD into the target system, so that they can be used as if they were installed, to do configuration tasks.
The problem is that I can't seem to work around the apps trying to search for stuff in /usr/share when they are imported. (I already have a hacky workaround for /usr/lib using patchelf...) I would do a union mount over the /usr/share directories, but that could confuse package managers when they see files that should not be there, and the user might need to run a package manager to fix the broken system (or at least I think it could confuse package managers).
I'm trying to see if I can create a script that will rebuild all packages to use a different build prefix instead of /usr. The script can rebuild packages with apt-get build-dep / apt-get source / debuild, but it can't change the prefix.
Question: Is there a way to pass an argument to debuild or dpkg-buildpackage to change the build prefix?
Right now it seems I have to take a look at the contents of the source (from apt-get source) for every package and see what files are specifying /usr and figure out a way to change it for every one, but I have a feeling I'm missing something obvious...
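For a single package built with the debhelper dh sequencer, the kind of per-package change I mean would look roughly like this in debian/rules (the prefix is arbitrary, and it only helps autotools-style packages that honour --prefix; recipe lines need tabs):

#!/usr/bin/make -f
%:
	dh $@

# Pass a different prefix to ./configure instead of the default /usr.
override_dh_auto_configure:
	dh_auto_configure -- --prefix=/opt/imported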
Is this possible?
I don't think this is feasible. Why don't you mount in a different location, for example /usr/local? That way you also eliminate a source of possible conflicts.
Still, some packages are full of hardcoded references to, for example, the location of their data files.
I'll throw in a pointer to stow as well, although I imagine it's not really helpful for your scenario.
