Easy way to change .deb build prefix?

I am trying to make a live CD that simplifies chrooting into unbootable Linux systems, since many unbootable-Linux issues can be fixed from a chroot, but many users probably don't understand the concept of chroot.
One of the abilities I want to add is the ability to temporarily import some utilities from the live CD into the target system, so that they can be used as if they were installed, to do configuration tasks.
The problem is that I can't seem to work around the apps trying to search for stuff in /usr/share when they are imported. (I already have a hacky workaround for /usr/lib using patchelf...) I would do a union mount on the /usr/share directories, but that could confuse some package managers when they see files that should not be there, and the user might need to run a package manager to fix the broken system (or at least I think it could confuse package managers).
I'm trying to see if I can create a script that will rebuild all packages to use a different build prefix instead of /usr. The script can rebuild packages with apt-get build-dep / apt-get source / debuild, but it can't change the prefix.
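For reference, the rebuild step the script performs looks roughly like this (a sketch only; the package name is a placeholder, and nothing here changes the prefix yet):
sudo apt-get build-dep somepackage   # install the build dependencies
apt-get source somepackage           # fetch and unpack the Debian source
cd somepackage-*/
debuild -us -uc                      # rebuild the .deb without signing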
Question: Is there a way to pass an argument to debuild or dpkg-buildpackage to change the build prefix?
Right now it seems I have to look at the contents of the source (from apt-get source) for every package, see which files specify /usr, and figure out a way to change it for each one, but I have a feeling I'm missing something obvious...
Is this possible?

I don't think this is feasible. Why don't you mount in a different location, for example /usr/local? That way, you also eliminate a source of possible conflicts.
Still, some packages are full of hardcoded references to the location for their data files, for example.
I'll throw in a pointer to stow as well, although I imagine it's not really helpful for your scenario.

Related

What does yarn --pnp do?

There is this shiny new Yarn feature called Plug'n'Play.
I would like to know what exactly it does.
I know it's creating a .pnp folder and a .pnp.js file, but does it change anything else on the machine, like a config file somewhere?
Thank you.
I designed and implemented PnP, so I can talk for hours about it 🙂
tl;dr: We only write the .pnp.js file and the .pnp folder (on top of the regular Yarn cache). We don't store configuration anywhere else.
Without Plug'n'Play
When you run yarn install (even without PnP), a few things happen:
If you use the offline mirror feature, we download the tarballs from the registry and store them within the offline mirror folder
Regardless of whether or not you use the offline mirror, we unpack all the tarballs downloaded and store their files in the Yarn cache
We then figure out which files from the cache should be copied into which location in the node_modules
We apply the computed changes (a bunch of rsync operations, basically)
With Plug'n'Play
With PnP, the workflow becomes like this:
No changes: we download the tarballs from the registry into the offline mirror (if enabled)
No changes: we still unpack them into the Yarn cache
We generate a .pnp.js file¹
And that's it. There is no other generated file than the .pnp.js file (and the cache, but it already was there before).
¹ As you mentioned, we also generate a .pnp folder (.yarn as of Yarn 2) in the project. This folder is meant to contain two types of data:
Unplugged packages are packages that must be local to the project. Typically, those are the packages with postinstall scripts (we cannot store them in the cache, as the generated artifacts might differ from one project to another).
Virtual packages, which are symlinks created for each package in your dependency tree that lists peer dependencies. Without going into the details, they are a necessary part of the design, and are required to make require.resolve work as before. Those files don't exist anymore as of Yarn 2 🎉
How does it work?
The .pnp.js file contains information similar to the following:
webpack#1.0.0 -> /cache/webpack-1.0.0/
-> it depends on lodash#1.0.0
lodash#1.0.0 -> /cache/lodash-1.0.0/
-> no dependencies
With this information, the resolution process can correctly infer that when a file within /cache/webpack-1.0.0 makes a require call to lodash, the required files must be loaded from /cache/lodash-1.0.0. It's a bit more complex in practice (we keep an inverse map for better performance, we use relative paths to ensure portability, etc.), but the general concept is there.
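If you want to poke at this yourself, a rough way to watch the resolver at work (assuming a project with PnP enabled and lodash as a dependency - both are just examples) is:
echo 'console.log(require.resolve("lodash"))' > which.js
node --require ./.pnp.js which.js   # prints a path inside the Yarn cache rather than node_modules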
Bonus round: With Plug'n'Play+Zip loading (Yarn 2)
Bonus: With Yarn 2, we're about to improve this workflow even more. This is what it will look like:
We download the tarballs from the registry and store them in the cache (no more distinction between offline mirror and cache - they are the same)
We generate the same .pnp.js file as before
And that's it! As you can see we don't unpack the packages anymore (instead, we use a Node loader to read them from the package archives at runtime).
Doing this has a very interesting property: if both your cache and .pnp.js files are there, you don't need to run yarn install for your application to work! And to ensure you have those files, you just have to add them to your repository and version them as you would with everything else.²
It's very useful, as you don't need to remember to run yarn install after git rebase, git pull, or git checkout, and your CI systems become faster and more stable as they don't need special setup - just clone your application and it'll just work. (A small sketch of this setup follows the footnote below.)
² Before someone mentions it - checking in binary files within a repository is perfectly fine. The reason why node_modules was a very bad thing to check in to your repository was the sheer number of text files, which put a huge strain on Git - technically, but also philosophically, as code reviews were made impossible.
In the case I described we don't suffer from the same problem, because the number of files is constrained (exactly one file for each package), and reviewing them is very easy - in fact, it's better in that you can clearly see how many new packages are added to your project by a PR!
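As a rough sketch of that zero-install setup (the file names assume the defaults mentioned above - the .pnp.js file and the .yarn cache folder):
# version the PnP data and the cache alongside your sources
git add .pnp.js .yarn/cache
git commit -m "Check in Yarn cache and PnP data"
# a fresh clone can now run the application without yarn install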
It imports only the parts of a package you are going to use, making the bloated node_modules folder much, much leaner.
Think, for example, about relatively big libraries like lodash or ramda when you use only 4-5 functions from them - how much you could save by pulling in only the minimum that is actually used.
I believe it is not yet 100% stable, but it is still a nice option to keep on your radar :)

DESTDIR vs prefix options in a build system?

Can someone explain to me the purpose of the $(DESTDIR) variable in a build system?
I know that it points to a temporary directory for the package currently being installed, but I cannot imagine what the practical use of it is.
To clarify: I know what the --prefix option is; for instance, if we configure the build system with ./configure --prefix="/usr", all the package's files will end up under /usr, like /usr/lib, /usr/share and so on. But in Makefiles I've also seen the following construction:
$(DESTDIR)/$(prefix)
And what is the purpose of that? In short, is there a difference between DESTDIR and prefix, and when should both be used?
Yes, there's a very important difference... in some environments.
The prefix is intended to be the location where the package will be installed (or appear to be installed) after everything is finalized. For example, if there are hardcoded paths anywhere in the package, they would be based on the prefix path (of course, we all hope packages avoid hardcoded paths for many reasons).
DESTDIR allows people to actually install the content somewhere other than the actual prefix: the DESTDIR is prepended to all prefix values so that the install location has exactly the same directory structure/layout as the final location, but rooted somewhere other than /.
This can be useful for all sorts of reasons. One example is facilities like GNU Stow, which allow multiple instances to be installed at the same time and easily controlled. Other examples are creating package files using RPM or DEB: after the package is installed you want it unpacked at the root, but in order to create the package you need it installed at some other location.
And other people use it for their own reasons: basically, it all boils down to this - DESTDIR is used to create a "staging area" for the installation, without actually installing into the final location.
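A minimal sketch of the difference, using a typical autotools-style package:
./configure --prefix=/usr        # paths baked into the package point at /usr
make
make DESTDIR=/tmp/stage install  # files land under /tmp/stage/usr/...
# the staged tree mirrors the final layout, but the program still expects
# to live under /usr at runtime - exactly what a packaging tool wants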

A Chicken Scheme equivalent to Python's virtualenv?

Is there a way to create an equivalent of Python's virtual environments (virtualenv)? With virtualenvs, one can install Python packages inside the virtual environment (a separate directory) without messing up the global Python environment. One can remove packages one decides one doesn't need without worrying about removing a package that another Python project depends on. I'm sure there are other benefits that I'm not thinking of at the moment. I notice that when I use chicken-install, it installs all of the eggs in my /usr/local/Cellar/chicken/4.12.0/lib/chicken/8/ directory. Is there a way to have eggs installed in a project-specific directory, similarly to how Python's virtualenv works?
There isn't really such a thing in CHICKEN 4. The problem here is that installing eggs to a different location is one part; the other is running programs so that they look up eggs in that location. You can emulate it with something along these lines:
export LOCAL_EGGS=/path/to/project/local
chicken-install -init $LOCAL_EGGS      # initialise an egg repository there
export CHICKEN_REPOSITORY=$LOCAL_EGGS  # point egg lookup at that repository
chicken-install r7rs ...               # install eggs as usual
csc ...                                # compile as usual
The easiest way to do this is to simply install CHICKEN to a different location using the PREFIX option to make when building it (see the README for instructions). This allows you to have a CHICKEN specifically built for each of your projects. I vastly prefer this option over the others because it is very easy to understand, and CHICKEN itself is very fast to build and not very big, so I find the overhead of doing this quite acceptable.
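Roughly, that per-project build looks like this (a sketch; the PLATFORM value and prefix path are examples - see the README for the options that apply to your system):
# build and install a private CHICKEN just for this project
make PLATFORM=linux PREFIX=$HOME/myproject/chicken install
export PATH=$HOME/myproject/chicken/bin:$PATH   # use this project's csc and chicken-install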
Alternatively, use what wasamasa proposed, or use the -deploy option to install eggs with the program. See the deployment chapter in the manual for more info.
Actually you don't need any "virtual environment" - everything is already in place.
There is a straightforward way to change the repository location:
CHICKEN_INSTALL_REPOSITORY is the place where eggs will be installed and which the egg-related tools like chicken-install, chicken-uninstall and chicken-status consult and update. Make sure the paths given in these environment variables are absolute and not relative.
and
CHICKEN_REPOSITORY_PATH is a directory (or a list of directories separated by : or ;) where eggs are to be loaded from for all CHICKEN-based programs.
Point CHICKEN_INSTALL_REPOSITORY to the location where you want eggs to be installed. Note that you need to point CHICKEN_REPOSITORY_PATH to your local repository as well as the system one in order to be able to import extensions distributed with the CHICKEN system.
You also most likely need to set up the installation prefix:
An alternative installation prefix that will be prepended to extension installation paths if specified. It is set by the -prefix option or environment variable CHICKEN_INSTALL_PREFIX.
and update your PATH:
PATH="$CHICKEN_INSTALL_PREFIX/bin:$PATH"
This allows you to install extensions which provide console programs.
The only thing left to do is export all these variables.
That's it!
This is basically what Python's virtualenv activate script does, in essence. As you can see, these are very simple steps. You don't need a dedicated tool to manage this; a very simple shell script can serve very well.
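For illustration, such an "activate" script could look something like this (a sketch only; the .local prefix is just an example, and the last path component of the repository is the binary version reported by your installation):
system_repository=$(chicken-install -repository)   # the system egg repository
export CHICKEN_INSTALL_PREFIX=$PWD/.local
export CHICKEN_INSTALL_REPOSITORY=$CHICKEN_INSTALL_PREFIX/lib/chicken/${system_repository##*/}
export CHICKEN_REPOSITORY_PATH=$CHICKEN_INSTALL_REPOSITORY:$system_repository
export PATH=$CHICKEN_INSTALL_PREFIX/bin:$PATH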
How does it work?
This works by introducing one more level of local hierarchy (just as /usr and /usr/local do; please see the FHS). If you wonder what the hell a local hierarchy is, take a look at your $HOME/.local - you probably have something interesting inside.
Bonus
Since setting up a per-project extension repository involves only environment modification, it can definitely be automated. There is a very handy tool for solving this kind of problem in general: direnv. Using this simple function in your $HOME/.envrc:
use_chicken() {
  LOCAL=$(expand_path .local)                        # per-project prefix
  system_repository=$(chicken-install -repository)   # system egg repository
  binary_version=${system_repository##*/}
  local_repository=${LOCAL}/lib/chicken/${binary_version}
  path_add CHICKEN_REPOSITORY_PATH ${system_repository}
  path_add CHICKEN_REPOSITORY_PATH ${local_repository}
  export CHICKEN_REPOSITORY_PATH
  export CHICKEN_INSTALL_REPOSITORY=${local_repository}
  export CHICKEN_INSTALL_PREFIX=${LOCAL}
  PATH_add ${LOCAL}/bin
}
you can set up your CHICKEN project with just these two lines in the .envrc inside the project directory:
source_up
use chicken
I know https://github.com/ursetto/cenv exists (I've never used it myself); it is for CHICKEN 5 only, though (it won't work with CHICKEN 4). I thought I'd mention it in case you plan to migrate to CHICKEN 5.

gentoo: how to delete all config files on unmerging a package (from its ebuild)

I am making my own personal package to hold a collection of useful programs and configs. The main idea is to emerge this package and have the system prepared to my preferences. Mostly it works (it simply depends on all my favourite programs), but I have two problems here:
how to install USE flags, unmask entries and such before the affected programs are installed?
how to uninstall it (emerge --unmerge does NOT delete files in /etc, so even after uninstalling the package the USE flags (and others) are still kept - my intent is to REMOVE them, so the next rebuild of world would NOT use them anymore - yes, it means a lot of programs would lose some functionality, like support for some languages or for some other programs, and that is the desired result)
My solutions so far are:
1. The package has some files in /etc/portage/package.*
1.1. I emerge that package with --nodeps (so the config files are installed)
1.2. I emerge it again without that flag (so the dependencies are installed with the right configuration)
2. I create (and install) a script that parses my package's CONTENTS in /var/db/packages and deletes all the /etc/portage/... files "manually"; I have to run this script before unmerging the package (sketched below)
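For what it's worth, that cleanup script is roughly this (the VDB path /var/db/pkg and the package atom are assumptions about a standard Portage layout):
# list the /etc/portage files recorded as installed by the package, then remove them
awk '$1 == "obj" && $2 ~ "^/etc/portage/" { print $2 }' \
    /var/db/pkg/app-misc/mymeta-1.0/CONTENTS | xargs -r rm -v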
Is there a better way to do it?
You're just doing/understanding it wrong! (sorry :)
First of all, instead of a metapackage (an empty ebuild that has only runtime dependencies), there are other ways:
use sets to describe your preferred packages, and manage your USE flags in the usual way (including per-package USE if needed) - see the sketch after this list;
a medium-complexity solution is to write a metapackage ebuild (your current case) -- but you can't mask/unmask USE flags that way anyway…
if you already have your own overlay (obviously) -- defining your own profile would solve everything! There you can manage everything just the way you want: mask/unmask any USE flags, define what the system's set of predefined packages means for you, etc.…
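The sets option, sketched (the set name and the atoms are just examples):
# /etc/portage/sets/mytools - one package atom per line
app-editors/vim
net-misc/rsync
sys-process/htop
Then emerge --ask @mytools pulls them all in, while USE flags stay in /etc/portage/package.use as usual.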
Unfortunately, I don't use Gentoo's Portage (and emerge) and have no idea whether it's possible to have multiple additive profiles. I have my own profiles here and it works perfectly with Paludis.
Second, never remove any configuration files (config-protected ones) after uninstall! There are no packages that do that, and there are a bunch of reasons for it… The main one is that the user may have modified them and doesn't want to lose his changes. Moreover, I personally prefer to have all the configs I've ever touched in a dedicated VCS repo -- it wouldn't be nice if anyone but me removed something…
Imagine a real-life example: a user wants to reinstall some package, and he has a bunch of configuration files he spent time carefully editing. The trivial way is to uninstall and then install again -- oops! He has lost his configs!
Moreover, from an ebuild's POV, you have the pkg_prerm and pkg_postrm functions, but both of them are also called at upgrade time (i.e. when an unmerge is immediately followed by a merge phase). You have to be really careful to distinguish those use cases… And what is scarier, with any "hardcoded" (and unique) rules in a package, you don't have any influence over them…
So, please, never remove any config-protected files; let the user take care of them (he is the boss, not the package manager)…
Update: If you really want to be able to remove some config-protected files, setting up your own profile looks like an even better idea. You can set CONFIG_PROTECT_MASK to force specific files and/or directories to be unprotected. That way you don't need to modify any ebuilds or write ugly cleanup code.
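A sketch of that, assuming the profile lives in your own overlay (the paths and file names are examples):
# profiles/myprofile/make.defaults in your overlay
CONFIG_PROTECT_MASK="/etc/portage/package.use/mymeta /etc/portage/package.unmask/mymeta"
Files under those paths can then be updated or removed without CONFIG_PROTECT getting in the way.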

Where should I put a template folder for a bash script?

I'm on OS X (Mavericks, if that matters), and I'm making a bash script that will use resources from a folder called "templates". I'm trying to figure out where I should put it (the templates folder). I'd like to make it so the user doesn't need to modify their PATH when they install it, so I'd rather not do it the way the terminal mysql command does it (it lives in a folder in /usr/local/mysql/bin). I really want to be able to put them into /usr/bin, but I don't know if it's "polite" to put folders in there (I don't see any in there).
Right now I'm leaning towards putting the scripts in /usr/bin and having the templates in /usr/lib. Is that how this type of thing is normally done, or is there another way? I'd like to follow a convention, assuming there is one. I'd also like it to apply to as many Unix platforms as possible (I'd like to put it in a directory where bash scripts live that's consistent across as many Unix platforms as possible). Thanks.
If you follow the Filesystem Hierarchy Standard (FHS), your executable goes in /usr/local/bin, while read-only template files go in /usr/local/share/YOURAPP/. To quote the FHS:
/usr/local/share
The requirements for the contents of this directory are the same as /usr/share. […]
and:
The /usr/share hierarchy is for all read-only architecture independent data files.
(Emphasis added)
If the system admin is meant to customize the template files to take effect system-wide, then they would simply go in /etc/YOURAPP/templates (or something like that).
If the template files are customized on a per-user basis, then the modified copies of the templates (copied from /usr/local/share/YOURAPP/templates) need to be saved in the user's directory, under $HOME/.config/YOURAPP/templates or something like that (thanks to technosaurus for the correction).
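As a concrete sketch (YOURAPP and the file names are placeholders), the install step could look like:
# install the script and its read-only templates per the FHS
install -m 755 yourapp.sh /usr/local/bin/yourapp
install -d /usr/local/share/yourapp/templates
install -m 644 templates/* /usr/local/share/yourapp/templates/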
You mentioned that you want to install the templates in a directory alongside your executable. That is not the standard approach on UNIX, at least going by the FHS. If you really want to go this route, there is a sort of convention of installing your app to /opt/YOURAPP/, using whatever organization you want inside that folder.
In all cases, it is not good practice to install executables directly to /usr/bin, as that directory is considered to be under the exclusive control of the OS/distribution. If you want to install there, the accepted way to do that is to create a package for the package manager of every supported OS/distribution.
