How does the GRUB2 UEFI loader know where to look for the configuration file (or where the 2nd stage's files are located)?

If I use GRUB2 on a GPT-partitioned disk, how does the loader "know" where to find its configuration file and the other 2nd-stage files?
Note: I found some mentions of a config file which is located in the same folder as GRUB's EFI loader and contains a chained load of the "primary" configuration file from a specified partition, but that is definitely not true here - there is only one "something.efi" file.

There are actually several ways this can happen:
1. Load an embedded config file.
2. Load a config file in the same directory as the GRUB binary.
3. Load a config file from a path decided at grub-mkimage (called by grub-install) execution time.
The last of these is probably the functionality you are really asking about - it's a combination of the default config file name (grub.cfg), the prefix (default /boot/grub, but it can be explicitly specified to grub-mkimage), and the GRUB partition name for the partition where the prefix is located.
If I run strings /boot/efi/EFI/debian/grubx64.efi | tail -1 on my current workstation, it prints out the stored value: (,gpt2)/boot/grub, telling grubx64.efi to look for its configuration file in /boot/grub on GPT partition 2. The bit before the comma (the GRUB disk device name) gets filled in at runtime based on which disk the grubx64.efi image itself was loaded from.
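You can check the embedded value on your own machine; the path and output below are the ones quoted above from a Debian workstation, so yours will differ:

$ strings /boot/efi/EFI/debian/grubx64.efi | tail -1
(,gpt2)/boot/grub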
Dynamically loaded modules will also be searched for under this location, but in an architecture/platform-specific directory - in this case /boot/grub/x86_64-efi.

For the EFI image, I found that grub-install or grub-mkimage will always embed an early config into the resulting EFI binary, regardless of whether you have specified the --config FILE option. If you do not specify --config FILE, it will try to embed /boot/grub/x86_64-efi/load.cfg.
This early config file looks like this:
search.fs_uuid 8ef704aa-041d-443c-8ce6-71ac7e7f30da root hd0,gpt1
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg # this line can apparently be omitted; it seems to be the default next action
The UUID is the UUID of the file system, not of the partition; you can use blkid to list it.
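For example (the device name below is illustrative; the UUID shown is the one from the early config above):

$ blkid /dev/sda1
/dev/sda1: UUID="8ef704aa-041d-443c-8ce6-71ac7e7f30da" TYPE="ext4"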
The hd0,gpt1 part is just a hint to speed up the search.
You can change the first line into set root=hd0,gpt1.
This default auto-embedding behavior differs from BIOS mode, which by default only embeds a prefix string like (,gpt3)/boot without bothering with search.fs_uuid.
I also found that the Ubuntu bionic EFI image embeds an early config like this (the \$ escapes are there because the snippet comes from a shell build script):
https://source.puri.sm/pureos/core/grub2/blob/master/debian/build-efi-images#L64
if [ -z "\$prefix" -o ! -e "\$prefix" ]; then
    if ! search --file --set=root /.disk/info; then
        search --file --set=root /.disk/mini-info
    fi
    set prefix=(\$root)/boot/grub
fi
if [ -e \$prefix/$platform/grub.cfg ]; then
    source \$prefix/$platform/grub.cfg
elif [ -e \$prefix/grub.cfg ]; then
    source \$prefix/grub.cfg
else
    source \$cmdpath/grub.cfg
fi
The cmdpath variable is the directory of the EFI binary, so it falls back to the grub.cfg in the same directory as the EFI binary, as you found.
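If you want to control the embedded prefix and early config yourself, a minimal sketch looks like this (the module list and paths are illustrative; grub-install normally invokes grub-mkimage for you with a fuller module list):

# embed a custom prefix and early config at image build time
grub-mkimage -O x86_64-efi -o grubx64.efi \
    -p '(,gpt2)/boot/grub' \
    -c /boot/grub/x86_64-efi/load.cfg \
    part_gpt ext2 search search_fs_uuid configfile normal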

Related

Which kernel function performs the on-demand module loading triggered by accessing a non-existent device node under /dev?

In /usr/lib/modules/$(uname -r)/modules.devname, the first line is a comment:
# Device nodes to trigger on-demand module loading.
I assume this means that the first time a device file under /dev is accessed, a module which populates that file will be automatically loaded.
But I don't see the code that does module loading when a file lookup fails, either in drivers/base/devtmpfs.c or in mm/shmem.c (tmpfs). Where does that logic live, then?
The modules.devname file has nothing to do with module auto-loading. It contains information that can be used during system initialization to create files in the /dev directory. The file is read by the kmod static-nodes command. By default, kmod static-nodes produces human-readable output, but during system initialization it is run as kmod static-nodes --format=tmpfiles to generate output in a more machine-parseable form. Each line of that output contains information that can be used to create a single directory or a single special file (see the tmpfiles.d man page for details of the format); the output does not contain the module name.
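For example (the entries shown are illustrative and depend on your kernel's modules, but the shape is fixed by the tmpfiles.d format - c! creates a character special file at boot, and the final field is the major:minor pair):

$ kmod static-nodes --format=tmpfiles
c! /dev/fuse 0600 - - 10:229
d /dev/snd 0755 - - -
c! /dev/snd/seq 0600 - - 116:1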
On systems using systemd init, the kmod command is run from the kmod-static-nodes.service service. The output file in tmpfiles.d format is placed in "/run/tmpfiles.d/static-nodes.conf", which is read later by the systemd-tmpfiles --prefix=/dev --create --boot command run from the systemd-tmpfiles-setup-dev.service service to create the actual files in "/dev".
On systems using SysV init, the kmod command may be run by the /etc/init.d/udev init script (on Debian-type systems) or from somewhere else. The same init script creates the actual files in "/dev" based on the output from kmod.
When a character special file for an unregistered character device number is being opened, the kernel will request the module with alias char-major-MAJOR-MINOR or char-major-MAJOR, where MAJOR and MINOR are the major and minor device numbers of the special file. (See base_probe() in "fs/char_dev.c".) If the kernel is configured with CONFIG_BLOCK_LEGACY_AUTOLOAD=y, there is similar functionality when opening block special files for unregistered block device numbers: the kernel will request the module with alias block-major-MAJOR-MINOR or block-major-MAJOR. (See blk_request_module() in "block/genhd.c" and blkdev_get_no_open() in "block/bdev.c".)
The source code for a module uses the MODULE_ALIAS_CHARDEV(), MODULE_ALIAS_CHARDEV_MAJOR(), MODULE_ALIAS_BLOCKDEV(), or MODULE_ALIAS_BLOCKDEV_MAJOR() macros (which wrap the MODULE_ALIAS() macro) to put these aliases into the module's .modinfo section where the depmod command can find them.
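As a concrete illustration (the module and numbers depend on your kernel; fuse is a convenient example because it carries both a devname entry and a char-major alias):

$ modinfo fuse | grep '^alias'
alias:          devname:fuse
alias:          char-major-10-229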

How to override settings in /etc/sysctl.conf on CentOS 7?

I was trying to set certain kernel parameters using the "/etc/sysctl.conf" file on CentOS 7.5. I copied "/etc/sysctl.conf" to "/etc/sysctl.d/sysctl.conf", updated certain parameters, and reloaded the settings using "sysctl --system".
But I see that the parameters inside "/etc/sysctl.conf" overwrite those present inside "/etc/sysctl.d/sysctl.conf". (I can also see this in the command's output: the settings from "/etc/sysctl.d/sysctl.conf" get applied first and then the settings from "/etc/sysctl.conf" get applied, which causes the issue.)
But according to the man page, "sysctl --system" should have ignored the settings inside "/etc/sysctl.conf", as I created a file with the same name inside "/etc/sysctl.d/", which gets read first. (Reference: http://man7.org/linux/man-pages/man8/sysctl.8.html)
--system
       Load settings from all system configuration files. Files are
       read from directories in the following list in given order
       from top to bottom. ***Once a file of a given filename is
       loaded, any file of the same name in subsequent directories is
       ignored.***

       /run/sysctl.d/*.conf
       /etc/sysctl.d/*.conf
       /usr/local/lib/sysctl.d/*.conf
       /usr/lib/sysctl.d/*.conf
       /lib/sysctl.d/*.conf
       /etc/sysctl.conf
The man page does not agree with the source code, sysctl.c. According to the source code of the PreloadSystem() function, it processes the *.conf files in the various sysctl.d search directories (skipping those *.conf filenames that have already been seen, as described in the man page). Then it processes the default /etc/sysctl.conf file, if it exists, without checking whether the sysctl.conf filename has already been seen.
In summary, the settings in /etc/sysctl.conf cannot be overridden by the *.conf files in /etc/sysctl.d/ and other sysctl.d directories, because the settings in /etc/sysctl.conf are always applied last.
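You can see the ordering for yourself; the parameter and the values below are hypothetical, but any setting present in both files behaves the same way:

$ grep -H vm.swappiness /etc/sysctl.d/sysctl.conf /etc/sysctl.conf
/etc/sysctl.d/sysctl.conf:vm.swappiness = 10
/etc/sysctl.conf:vm.swappiness = 30
$ sysctl --system >/dev/null; sysctl vm.swappiness
vm.swappiness = 30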

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension, which I want to override. I don't want to rename the file. When I googled, I found a few answers suggesting a trailing dot, like data='fileName.', but that is not working. Please share your ideas.
The error message is: fileName.dat is not found.
sqlldr has a default extension for each of its input files: data, log, control...

data = .dat
log = .log
control = .ctl
bad = .bad
PARFILE = .par

But you have to pass the filename without the apostrophe and the dot:
sqlldr user/pass@db control=control data=data
sqlldr will then add the extensions itself: control.ctl, data.dat.
Nevertheless, I do not understand why you do not want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing-period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file name handling allows that; you can for instance do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that on *nix, from a shell prompt or anywhere else.
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly, you could instead create a hard or soft link to the file which does have an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name as an argument:
# set variable from the correct positional parameter; if you pass in the control
# file name or other options, this might not be $1, so adjust as needed
# if the temporary file won't be in the same directory, this needs to be a full path
filename=$1
# optionally check the file exists, is readable, etc. - overkill for a demo
# can also check the temporary file does not already exist - stop or remove it
# create the soft link somewhere it won't impact any other processes
ln -s ${filename} /tmp/${filename##*/}.dat
# run SQL*Loader with the soft link as the target
sqlldr user/password@db control=file.ctl data=/tmp/${filename##*/}.dat
# clean up
rm -f /tmp/${filename##*/}.dat
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file name, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and you wouldn't have to pass the full path then either, but it's still cleaner to do so.)
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files don't have file extensions (about 2 GB). Unless I can load without file extensions I have to copy the files with .dat and load from there.

pg_backup.sh: "No such file or directory" for pg_backup.config

In order to create backups of a Postgres DB, I downloaded the backup scripts provided here.
I created a pg_backup.config, which is located in /etc, and put pg_backup.sh in /usr/local/bin. When trying to run the script, like so:
$ pg_backup.sh -c /etc/pg_backup.config
... I get the following error message:
/usr/local/bin/pg_backup.sh: line 27: /usr/local/bin/pg_backup.config: No such file or directory
Is there something I'm doing wrong? Obviously, the script tries to load the configuration from /usr/local/bin/pg_backup.config, although I specified /etc/pg_backup.config as config input. How can I specify from where to load the configuration?
That script seems to accept a config file on the command line, but then also requires one to exist next to the binary itself (and it finds that location in a less-than-robust way).
The script is just broken.
The very first thing it does is parse its command line options (using a manual while loop). As it does so it removes them from the positional parameters (using shift). It stops that loop when there are no more positional parameters (when [ $# -gt 0 ] is no longer true).
Immediately upon draining the positional parameters it then goes and checks the count of positional parameters again (if [ $# = 0 ]; then) and when, inevitably, it finds that there are no more positional parameters it attempts to source its default configuration file.
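In outline, the broken pattern looks like this (a minimal sketch, not the script's exact code; the CONFIG_FILE_PATH variable name is assumed):

while [ $# -gt 0 ]; do
    case $1 in
        -c)
            CONFIG_FILE_PATH="$2"
            shift 2
            ;;
        *)
            shift
            ;;
    esac
done

# $# is always 0 by this point, so this always overwrites
# whatever -c just set:
if [ $# = 0 ]; then
    SCRIPTPATH=$(cd ${0%/*} && pwd -P)
    CONFIG_FILE_PATH=$SCRIPTPATH/pg_backup.config
fi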
The pg_backup_rotated.sh script, in contrast, has a much more reasonable mechanism for loading a configuration file (though it still uses the less-than robust way of finding the location for the default config file).
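A more robust fallback only applies when no config file was given on the command line, for example (again a sketch, under the same assumed variable name):

# fall back to the script's own directory only if -c was not supplied
if [ -z "$CONFIG_FILE_PATH" ]; then
    SCRIPTPATH=$(cd "$(dirname "$0")" && pwd -P)
    CONFIG_FILE_PATH=$SCRIPTPATH/pg_backup.config
fi
if [ ! -r "$CONFIG_FILE_PATH" ]; then
    echo "Cannot read config file $CONFIG_FILE_PATH" >&2
    exit 1
fi
. "$CONFIG_FILE_PATH"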

No space left on device - write chef remote_file

I get a strange error when chef-client tries to execute a remote_file resource for a big local file.
From the stack trace I guess Ruby copies the file itself. My disk has a lot of free space, and the /var and /tmp folders each have at least 2 GB free. If I do this job myself with the cp command, or replace the remote_file resource with an execute one, it's okay.
Chef complains about a lack of disk space.
This resource fails for a 4 GB file with the message "No space left on device".
remote_file "/tmp/copy.img" do
  source "file:///tmp/origin.img"
  action :create
end
I made a workaround with an execute resource and it works.
execute "disk-image" do
  command "cp /tmp/origin.img /tmp/copy.img"
  not_if "cmp /tmp/origin.img /tmp/copy.img"
end
It's not going to work. remote_file downloads the remote file to somewhere within /var/chef IIRC, then copies it to its destination.
Since /var has only 2 GB of space and the file is 4 GB big, it correctly throws the No space left on device error.
Thank you @lamont for the explanation. To cut to the chase a bit, the only solution that worked for me was to add the following to my Chef recipe, prior to any calls to remote_file:
ENV['TMP'] = '/mytmp'     # fallback consulted by Ruby's Dir.tmpdir if TMPDIR is unset
ENV['TMPDIR'] = '/mytmp'  # the variable Ruby's Tempfile consults first
where /mytmp is a directory on a volume with enough space to hold my file.
The promising approach of adding:
file_staging_uses_destdir true
to /etc/chef/client.rb currently does not work, due to this bug: https://tickets.opscode.com/browse/CHEF-5311.
Update 9/20/2016: Chef 12.0 shipped with file_staging_uses_destdir defaulted to true, so this should no longer be an issue (the remote_file bug where it streams to /tmp may still exist).
First, the real simple statement: if you've got a 4 GB file in /tmp and you only have 2 GB left in /tmp, then obviously copying the 4 GB will fail, and nothing can help you. I'm assuming you have at least 4 GB free in /tmp and only 2 GB left in /var, which is the only interesting case to address.
As of 11.6.0 (to 11.10.2 at least) chef-client will create a tempfile using ruby's Tempfile.new and will copy the contents to that temp file and then will mv it into place. The tempfile location will be determined by ENV['TMPDIR'] and that differs based on your O/S distro (e.g. on a Mac that will be something like /var/folders/rs/wqwmj_1n59135dhhbvfdrlqh0001yj/T/ and not just /tmp or even /var/tmp), so it may not be obvious where the intermediate tempfile is created. You may be running into that problem. You should be able to see from the chef-client -l debug output what tempfile location chef is using and if you df -k that directory you might see that it is 100%.
Also, look at df -i to see if you've run out of inodes somehow which will also throw a no space left on device error.
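A quick way to check both conditions (the location is a guess based on Ruby's tempfile rules, hence the TMPDIR fallback):

# is the tempfile location out of blocks?
df -k "${TMPDIR:-/tmp}"
# or out of inodes?
df -i "${TMPDIR:-/tmp}"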
You can set chef-client globally to use the destination directory as the tmpdir for creating files via adding this to client.rb:
file_staging_uses_destdir true
Then if your destination dir is '/tmp', the tempfile will be created there and then simply renamed into place in order to deploy it. That ensures that if there's enough space on the target device to hold the result, the resource should always succeed in writing the tempfile. It also avoids the problem where, if /tmp and the destdir are on different filesystems, the mv to rename and deploy the file gets translated into a copy-and-unlink-src operation, which can fail in several different ways.
The answer by @cassianoleal is not correct in stating that chef-client always uses /var/cache as a temp location. Changing file_cache_path will also have no effect. That confuses a common pattern of downloading remote_files into the Chef file_cache_path directory with how remote_file works internally - they are not the same thing. There is no file_cache_path in the question, so there should not be any file_cache_path in the answer.
The behavior of remote_file with file:// URLs is a bit roundabout, but that is because the tempfile staging is necessary for all other URL types (as @cassianoleal correctly mentioned). The behavior with file_staging_uses_destdir is probably correct, however, since you do want to take into account edge conditions where you run out of room and truncate the file, or where the server crashes in the middle of a copy operation and you don't want a half-populated file left over. By writing to a tempfile, closing it, and then renaming it, a lot of those edge conditions are avoided.
