Buildroot with readonly filesystem: allow writing on /etc - embedded-linux

I'm preparing a Buildroot IoT project based on the Orange Pi Zero, and I want to make it a read-only system. However, I need to persistently write to /etc to update wpa_supplicant.conf when the user configures it for their WiFi network. I also need to update a custom text file with some config parameters, if the user wants to.
I'd like to avoid remounting the whole filesystem read/write every time I need to update a single file.
Which is the best way to accomplish this?

You can set up a writable overlay on top of /etc so that changes go there. The options are either overlayfs in the kernel or unionfs via FUSE. As init and the init scripts already use /etc, you may need to create a wrapper script around init that sets up this overlay before executing init, e.g. something like:
mount -t proc proc /proc
mount /mnt/data
mount -o bind /etc/ /mnt/rom-etc
unionfs -o cow,allow_other,use_ino,nonempty \
    /mnt/data=RW:/mnt/rom-etc=RO /etc/
exec /sbin/init "$@"
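If the kernel has overlayfs enabled, the same wrapper can avoid FUSE entirely. A minimal sketch, assuming /mnt/data is the persistent writable partition; the etc-upper/etc-work directory names are made up for illustration:
mount -t proc proc /proc
mount /mnt/data
# upper and work directories must live on the same writable filesystem
mkdir -p /mnt/data/etc-upper /mnt/data/etc-work
mount -o bind /etc/ /mnt/rom-etc
mount -t overlay overlay \
    -o lowerdir=/mnt/rom-etc,upperdir=/mnt/data/etc-upper,workdir=/mnt/data/etc-work /etc
exec /sbin/init "$@"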

Related

Is there a way to create a link for the machine ID without modifying Yocto?

I am running Linux 4.14.149 built with Yocto Zeus (3.0.0). I am running a read-only filesystem, and recently found an issue where my UID (/etc/machine-id) was getting changed every boot (a result of this question - https://superuser.com/questions/1668481/dhcpd-doesnt-issue-the-same-lease-after-reboot).
I am trying to make that file a link to the user-data partition so it will persist across reboots. I have tried making the link as part of a base-files_%.bbappend, which is the way I made the link for the hostname (which works). This is the contents of that file (/var/local is our user-data partition, which is mounted RW in the init script):
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
hostname = ""
machine-id = ""
do_install_append() {
    ln -sfn /var/local/etc/hostname ${D}/${sysconfdir}/hostname
    ln -sfn /var/local/etc/machine-id ${D}/${sysconfdir}/machine-id
}
But I am seeing the following error when I try to build that:
Exception: bb.process.ExecutionError: Execution of '/home/gen-ccm-root/workdir/tools/poky/build-dev/tmp/work/mi_nhep-poky-linux-gnueabi/mi-dev/1.0-r0/temp/run.read_only_rootfs_hook.50286' failed with exit code 1:
touch: cannot touch '/home/gen-ccm-root/workdir/tools/poky/build-dev/tmp/work/mi_nhep-poky-linux-gnueabi/mi-dev/1.0-r0/rootfs/etc/machine-id': No such file or directory
WARNING: exit code 1 from a shell command.
It turns out there are two things that touch that file: rootfs-postcommands.bbclass and the systemctl Python script (found in meta/recipes-core/systemd/systemd-systemctl/systemctl); the former (I think) is causing the error. It fails in the do_rootfs step.
What is the best way to create this link? If there is a choice, I would rather not modify Yocto sources if that is possible.
You can do this by defining your own rootfs post-process command and appending it to ROOTFS_POSTPROCESS_COMMAND so that it runs after Yocto's built-in read_only_rootfs_hook, which creates the empty /etc/machine-id file using touch.
# setup-machine-id-symlink.bbclass
ROOTFS_POSTPROCESS_COMMAND += "install_machine_id_symlink ;"
install_machine_id_symlink () {
    ln -sfn /var/local/etc/machine-id ${IMAGE_ROOTFS}/etc/machine-id
}
# your-image.bb
inherit setup-machine-id-symlink
The Image Generation docs have more detail on how postprocessing commands are applied during the build.
Note: You will need to ensure that your persistent partition is mounted early, so that reading /etc/machine-id doesn't result in a broken symlink.
Alternatively, use bind mounts:
You could also do this at runtime by installing a systemd service that runs early in the boot sequence and bind mounts your persistent machine-id over the blank one provided by Yocto in the rootfs.
Using a systemd service (rather than a bind mount entry in /etc/fstab) is necessary because you will need to ensure the persistent machine-id file actually exists before creating the bind mount. Though, you may be able to make use of tmpfiles.d to do that instead.
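A minimal sketch of such a service, assuming the persistent partition is mounted at /var/local (the unit name and paths are made up for illustration, and the unit would normally be installed by a small recipe rather than written by hand on the target):
# machine-id-bind.service (hypothetical unit name)
[Unit]
Description=Bind mount persistent machine-id over the read-only rootfs copy
DefaultDependencies=no
RequiresMountsFor=/var/local
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
# make sure the persistent file exists before bind mounting over the blank one
ExecStartPre=/bin/sh -c '[ -f /var/local/etc/machine-id ] || : > /var/local/etc/machine-id'
ExecStart=/bin/mount --bind /var/local/etc/machine-id /etc/machine-id

[Install]
WantedBy=sysinit.target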
After the first boot, once the machine-id has been generated, update the bootargs U-Boot environment variable so that the kernel command line includes the systemd.machine_id= parameter.
An ID specified in this manner has higher priority and will be used instead of the ID stored in /etc/machine-id.
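For example, from the U-Boot prompt this might look something like the following, where <generated-id> is a placeholder for the value taken from /etc/machine-id:
# at the U-Boot prompt; <generated-id> is the value from /etc/machine-id
setenv bootargs "${bootargs} systemd.machine_id=<generated-id>"
saveenv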

Is it possible to set read-only for myself on unix?

I've been given the path to a very large folder of files on a shared Unix server that I'm working on through ssh. I don't want to waste space by creating a duplicate in my home area, so I've linked the folder with ln -s. However, I don't want to risk making any changes to the data within the folder.
How would I go about setting the files to read-only for myself? Do I have to ask the owner of the folder/file? Do I need sudo access? I am not the owner of the file and I do not have root access.
Read about the chmod command to change the permission bits on the files the links point to.
Only the owner or root can restrict access to files.
Also, you probably need to mount that shared folder as read-only, but I am not sure how your folder is connected.
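If you do own the files (or can get the owner to run it), removing write permission is a one-liner; /path/to/shared-data is a placeholder:
# remove write permission for everyone, recursively (only the owner or root can do this)
chmod -R a-w /path/to/shared-data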
UPDATE
The desired behaviour can be achieved using the mount tool (see the man page for mount).
Note that the filesystem mount options will remain the same as those on the original mount point, and cannot be changed by passing the -o option along with --bind/--rbind. The mount options can be changed by a separate remount command, for example:
mount --bind olddir newdir
mount -o remount,ro newdir
Here is a similar question to yours, also solved via the mount tool.

get container's name from shared directory

Currently I am running Docker with more than 15 containers for various apps. I am at the point where I am getting sick and tired of looking into my docs every time for the command I used to create a container. While trying to create scripts and alias commands to make this procedure easier, I ran into this problem:
Is there a way to get the container's name from the host's shared folder?
For example, I have a directory "MyApp" and inside this I start a container with a shared folder "shared". It would be perfect if:
a. I had a global script somewhere and alias commands set up accordingly, and
b. I could just run something like "startit"/"stopit"/"rmit" from any of my "OneOfMyApps" directories and their subdirectories. I would like to skip the docker ps -> copy the name -> etc. routine every time, and just have the script get the container's name. Any ideas?
Well, one solution would be to use an environment variable to pass the name into the container, and a pre-determined file in the volume to store the name. So you would create the container with the -e flag:
docker create --name myapp -e NAME=myapp myappimage
And inside the image entry point script you would have something like
cd /shared/volume
echo "$NAME" >> .containers
And in your shell script you would do something like
function stopit() {
    for name in $(cat .containers); do
        docker stop "$name"
    done
}
But this is a bit fragile. If you are going to script the commands anyway, I would suggest using docker ps to get the list of containers and then docker inspect to find which ones use this particular shared volume. You can do all of it inside the script.
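A rough sketch of that approach, assuming the app directory itself is what gets bind mounted into the container (the helper name container_for_dir is made up, and matching from subdirectories would need a little more logic):
# print the name of the running container that bind mounts the current directory
container_for_dir() {
    local dir
    dir="$(pwd)"
    for id in $(docker ps -q); do
        # list every mount source of this container, one per line
        if docker inspect --format '{{range .Mounts}}{{println .Source}}{{end}}' "$id" \
               | grep -qxF "$dir"; then
            docker inspect --format '{{.Name}}' "$id" | sed 's|^/||'
            return 0
        fi
    done
    return 1
}

stopit() {
    docker stop "$(container_for_dir)"
}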

Mount Namespace: Normalize file path to init/root mount namespace

I looked through various sources of information regarding mount namespaces in the Linux kernel and I have to say that I couldn't find much on how it works under the hood (the layout of the structures and how they are all interrelated).
What I'd like to do is take a given path in process X's mount namespace and get the same file path in the init/root process namespace.
Example:
Block device A has a file at blah/whatever/fileX.
In the init/root process mount namespace, this bdev A is mounted on folder /root making the path /root/blah/whatever/fileX
In the process X mount namespace, this bdev A is mounted on folder /myfolder making the path /myfolder/blah/whatever/fileX
When a system call that takes a pathname is made into the kernel from process X's world, I'd like to take the pathname /myfolder/blah/whatever/fileX and convert it to what it would be in init/root's world, i.e. /root/blah/whatever/fileX (or NULL if the file is not accessible through any mount point of init/root).
Some related question:
Linux - understanding the mount namespace & clone CLONE_NEWNS flag
If I understand it correctly, you are not exactly looking for mount namespaces but just for an option called a bind mount, available through the mount system call or the CLI.
For root/init the device is mounted at blah/whatever/fileX; you can bind mount the same blah/whatever at /myfolder/blah, so the same fileX and the same directories are visible via two paths.
Now, if you do want isolation, i.e. you do not want any other process to see these mount points, you can use a mount namespace. The simplest way of doing this is starting process X using "unshare".
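A small sketch of both steps, using the paths from the question (the processX binary name is a placeholder):
# make the same tree visible under a second path for everyone (target must exist)
mkdir -p /myfolder/blah/whatever
mount --bind /root/blah/whatever /myfolder/blah/whatever

# or, if the mount should be private to process X, do the bind mount
# inside a new mount namespace created with unshare
unshare --mount sh -c \
    'mount --bind /root/blah/whatever /myfolder/blah/whatever && exec ./processX'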

Detecting a remote mount

Is there a single, universal bash shell variable or common Linux command that will reliably indicate if a given directory or file is on a remote filesystem -- be it NFS, SSHFS, SMB, or any other remotely mounted filesystem?
CONTEXT...
This is a root-only-access, single-user, multi-host Linux development "lab" using SSH and SSHFS for semi-seamless loose coupling of the systems. The relevant directory structure on each host is...
/0
/0/HOST1
/0/HOST2
/0/HOST3
/bin
/boot
:
Directories in /0 are SSHFS mounts of '/' on the named host; 'HOST1', etc. are mountpoint directories named for each host.
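For reference, the mounts described above can be created with sshfs along these lines (hostnames are placeholders):
# mount the root filesystem of each remote host under its /0/<HOSTNAME> directory
sshfs root@host1:/ /0/HOST1
sshfs root@host2:/ /0/HOST2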
I could, of course, establish an environment variable, something like...
REMOTE_FS=/0
...and test for the dirname starting with '/0'. However that's not very portable or reliable.
Obvious question...
Having made the effort to make it seamless, why do I want to know when accessing something non-local?
Answer...
Going through a mounted filesystem puts all the processing load on the initiating host. I'd like to know when I have the option of using SSH instead of SSHFS to offload the background processing (ls, grep, awk, etc) to the remote (and usually more powerful) host, leaving just the GUI and control logic on the local machine.
df -l <file>
This will return a non-zero exit code if the file or directory is not local.
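In a script that would look something like this ($path is the file or directory being tested; df -l also prints a warning for non-local filesystems, so the output is discarded):
if df -l "$path" >/dev/null 2>&1; then
    echo "local: process it here"
else
    echo "remote: consider ssh-ing to the host instead of working through SSHFS"
fi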
