AppArmor: How to block pid=host container with CAP_SYS_ADMIN/CAP_SYS_CHROOT from reading (some) host files? - linux-capabilities

Given a container that runs with pid=host (so it is in the initial PID namespace and has a full view of all processes). This container (rather, its process) additionally has the capabilities CAP_SYS_ADMIN and CAP_SYS_CHROOT, so it can change mount namespaces using setns(2).
Is it possible, using AppArmor, to block this container from accessing arbitrary files on the host (in the initial mount namespace), except for some files, such as /var/run/foo?
How does AppArmor evaluate filesystem path names with respect to mount namespaces? Does it "ignore" mount namespaces and just take the specified path, or does it translate paths, for instance when dealing with bind-mounted subtrees?

An ingrained restriction of AppArmor's architecture is that, for filesystem resources (files, directories), it mediates access using the access path. While AppArmor uses labeling, as SELinux does, AppArmor derives only implicit filesystem resource labels from the access path. In contrast, SELinux uses explicit labels, which are stored in the extended attributes of files on filesystems supporting POSIX extended attributes.
Now, the access path is always the path as seen in the caller's current mount namespace. Optionally, AppArmor can take chroot into account. So the answer to the second question is: AppArmor "ignores" mount namespaces and just takes the (access) path. As far as I can tell, it does not translate bind mounts (there is no indication anywhere that it would).
As for the first question: in general "no", because AppArmor mediates access paths (implicit labels), not file resource labels. A limited restriction is possible if you accept that there is no access-path differentiation between what's inside a container and what's on the host outside the container (and likewise for other containers). This is basically what Docker's default container AppArmor profile does: it denies all access to a few highly sensitive /proc/ entries and restricts many other /proc/ entries to read-only access.
But blocking access to certain host file access paths always comes with the danger of blocking the same access path for a perfectly valid use inside a container (a different mount namespace). This requires great care and lots of research and testing, and it carries the constant risk of things breaking with the next update of a container. AppArmor does not seem to be designed for such use cases.

Related

Get DNS info for local machine interfaces

I need the DNS suffix of all my local interfaces on my PC.
Is there a way to achieve this via Go?
Ideally it would work on any OS.
Necessary: it has to work on Windows.
I have tried net.Interfaces() and the other net functions, but I haven't found anything regarding the DNS server.
EDIT
I have found a solution for the Windows-specific version, but it would be interesting if there is anything that works for Linux and macOS too.
I don't think there is a solution that works for any OS. On Linux the DNS suffix is not interface-specific but system-wide; it is configured in /etc/resolv.conf. Here is an excerpt from the man page:
search Search list for host-name lookup.
By default, the search list contains one entry, the local domain name. It is determined from the local hostname returned by gethostname(2); the local domain name is taken to be everything after the first '.'. Finally, if the hostname does not contain a '.', the root domain is assumed as the local domain name.
This may be changed by listing the desired domain search path following the search keyword with spaces or tabs separating the names. Resolver queries having fewer than ndots dots (default is 1) in them will be attempted using each component of the search path in turn until a match is found. For environments with multiple subdomains please read options ndots:n below to avoid man-in-the-middle attacks and unnecessary traffic for the root-dns-servers. Note that this process may be slow and will generate a lot of network traffic if the servers for the listed domains are not local, and that queries will time out if no server is available for one of the domains.
If there are multiple search directives, only the search list from the last instance is used.
The standard library's net package parses this file to get the DNS config, so the DNS resolver should behave as expected; however, the parsing functionality is not exposed.
The libnetwork.GetSearchDomains func in the libnetwork library should be able to help you out. If there are no search entries in /etc/resolv.conf, you should use the hostname, which can be obtained with the os.Hostname func.
I believe this also works for FreeBSD and macOS since they are both "UNIX-like", but I am not 100% sure.

What's the proper storage location for a database for a cross platform command line program?

I wrote a simple note-taking program that's nothing more than a dictionary mapping a key to a value, e.g.
$ hlp -key age -value 25
$ hlp age
25
and it just stores information in a JSON file hardcoded to ~/.hlp.json. But I was wondering whether there's some standard location where I should be putting this file. Is there a standard location for databases like this?
A useful resource here is the hier(7) man page. (http://linux.die.net/man)
Data that is only going to be used by you belongs in $HOME, traditionally hosted under /home.
For something that is used to support the system itself, you'd be using /var. For applications that are just hosted on the system, you'd use /var/opt.
If the application is something big that could be replicated or moved to another system, you'd create a separate filesystem with a mount point outside any of those listed in hier(7). This could be a filesystem mounted from a SAN or NAS, which would help mobility of the application.
Once you actually need to access the data from different machines, you'd have to move it to a network-accessible key/value store or SQL database.
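The question doesn't say which language hlp is written in, so here is only a minimal illustrative C++ sketch of the per-user convention above: it derives the data file location from the usual environment variables (%APPDATA% on Windows, $XDG_DATA_HOME or $HOME/.local/share elsewhere). The hlp/hlp.json name and the fallbacks are assumptions made for the example, not part of any standard API.

#include <cstdlib>
#include <iostream>
#include <string>

// Sketch: choose a per-user location for hlp's JSON store.
static std::string dataFilePath() {
#ifdef _WIN32
    // Windows: roaming per-user application data, e.g. C:\Users\me\AppData\Roaming
    if (const char *appdata = std::getenv("APPDATA"))
        return std::string(appdata) + "\\hlp\\hlp.json";
#else
    // XDG base-directory convention, if explicitly configured
    if (const char *xdg = std::getenv("XDG_DATA_HOME"))
        return std::string(xdg) + "/hlp/hlp.json";
    // Default per-user data location under $HOME
    if (const char *home = std::getenv("HOME"))
        return std::string(home) + "/.local/share/hlp/hlp.json";
#endif
    return "hlp.json";   // last-resort fallback: current directory
}

int main() {
    std::cout << "hlp would store its database at: " << dataFilePath() << "\n";
}

Keeping a dot file directly in $HOME (as ~/.hlp.json does today) also works; the $XDG_DATA_HOME location mainly keeps the home directory uncluttered.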

How to clear phpFastCache when path set to /tmp/

I'm using phpFastCache in a frontend-application, setting the path to the server's "/tmp/" directory:
phpFastCache::setup('path',"/tmp/");
I do not want to use phpFastCache's automatically chosen cache directory, because it clutters my home directory with an extra directory for every domain through which users reach the application (several are connected).
In the backend I would like to display cache statistics and be able to clear the cache. This no longer works now that I have set /tmp/ as the cache path: statistics show up empty and the cache is not cleared. I did configure the cache directory to the same "/tmp/" in the backend application as well.
How can phpFastCache be configured to be able to achieve this?
After looking at the phpFastCache-code, I'm able to answer my own question:
To achieve what I wanted (have only ONE cache directory regardless of the domain used, and be able to list statistics and clear the cache from a separate application), I had to make two config settings:
phpFastCache::setup('path', '/path-to-my-home-dir');
phpFastCache::setup('securityKey', 'phpfastcache');
I'm setting these identically in both my frontend- and backend-applications.
This will make phpFastCache use /path-to-my-home-dir/phpfastcache as its only cache-directory.
Had I not set the 'securityKey', phpFastCache would have generated one from the current domain (in most cases), therefore my backend application would have only "seen" that part of the cache residing in the directory for the currently used domain.

Determine actually mounted volumes and remote file systems in OSX

I need to gather a list of all mounted "mount points" that the local file system has access to.
This includes:
Any ordinarily mounted volume under /Volumes.
Any NFS volume that's currently mounted under /net.
Any local or remote file system mounted with the "mount" command or auto-mounted somehow.
But I need to avoid accessing any file systems that can be auto-mounted but are currently not mounted, i.e., I do not want to cause any auto-mounting.
My current method is as follows:
Call FSGetVolumeInfo() in a loop to gather all known volumes. This will give me all local drives under /Volumes as well as /net, /home, and NFS mounts under /net.
Call FSGetVolumeParms() to get each volume's "device ID" (this turns out to be the mount path for network volumes).
If the ID is a POSIX path (i.e. it starts with "/"), I use readdir() on the path's parent to check whether the parent dir actually contains the mount point item (e.g. if the ID is /net/MyNetShare, then I readdir /net). If it's not there, I assume this is an auto-mount point with a yet-unmounted volume and therefore exclude it from my list of mounted volumes.
Lastly, if the volume appears mounted, I check if it contains any items. If it does, I add it to my list.
Step 3 is necessary to see whether the path is actually mounted. If I'd instead call lstat() on the full path, it would attempt to automount the file system, which I need to avoid.
Now, even though the above works most of the time, there are still some issues:
The mix of calls to the BSD and Carbon APIs, along with special-casing the "device ID" value, is rather unclean.
The FSGetVolumeInfo() call gives me mount points such as "/net" and "/home" even though these do not seem to be actual mount points - the real mount points would rather appear inside these. For example, if I mount an NFS share at "/net/MyNFSVolume", I gather both a "/net" point and a "/net/MyNFSVolume" point, but the "/net" point is not an actual volume.
Worst of all, sometimes the above process still causes active attempts to contact the off-line server, leading to long timeouts.
So, who can show me a better way to find all the actually mounted volumes?
By using the BSD-level function getattrlist() and asking for the ATTR_DIR_MOUNTSTATUS attribute, one can test the DIR_MNTSTATUS_TRIGGER flag.
This flag seems to be only set when an automounted share point is currently unreachable. The status of this flag appears to be directly related to the mount status maintained by the automountd daemon that manages re-mounting such mount points: As long as automountd reports that a mount point isn't available, due to the server not responding, the "trigger" flag is set.
Note, however, that this status is not immediately set once a network share becomes inaccessible. Consider this scenario:
The file /etc/auto_master has this line added at the end:
/- auto_mymounts
The file /etc/auto_mymounts has the following content:
/mymounts/MYSERVER1 -nfs,soft,bg,intr,net myserver1:/
This means that there will be an auto-mounting directory at /mymounts/MYSERVER1, giving access to the root of myserver1's exported NFS share.
Let's assume the server is initially reachable. Then we can browse the directory at /mymounts/MYSERVER1, and the DIR_MNTSTATUS_TRIGGER flag will be cleared.
Next, let's make the server unreachable by simply killing the network connection (such as removing the Ethernet cable or turning off Wi-Fi). At this point, when trying to access /mymounts/MYSERVER1 again, we'll get delays and timeouts, and we might even get seemingly valid results such as non-empty directory listings despite the unavailable server. The DIR_MNTSTATUS_TRIGGER flag will remain cleared at this point.
Now put the computer to sleep and wake it up again. At this point, automountd tries to reconnect all auto-mounted volumes again. It will notice that the server is offline and put the mount point into "trigger" state. Now the DIR_MNTSTATUS_TRIGGER flag will be set as desired.
So while this trigger flag is not a perfect indicator of whether the remote server is unreachable, it is good enough to tell when the server has been offline for a longer time, as typically happens when moving the client computer between different networks (such as between work and home) with the computer put to sleep in between, which causes the automountd daemon to re-check the reachability of the NFS server.
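To make the mechanics concrete, here is a minimal C++ sketch of the getattrlist() call described above (macOS only; the flag requires a reasonably recent OS X/macOS release). It checks each path given on the command line and reports whether DIR_MNTSTATUS_TRIGGER is set; the packed result buffer follows the usual getattrlist() convention of a length word followed by the requested attributes.

#include <sys/attr.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Returns 1 if 'path' is an unresolved automount trigger (i.e. currently not
// mounted/reachable), 0 if it is mounted or not a trigger, -1 on error.
static int isUnmountedTrigger(const char *path) {
    struct attrlist attrList;
    std::memset(&attrList, 0, sizeof(attrList));
    attrList.bitmapcount = ATTR_BIT_MAP_COUNT;
    attrList.dirattr = ATTR_DIR_MOUNTSTATUS;

    // Result buffer: total length, then the requested mount-status word.
    struct {
        uint32_t length;
        uint32_t mountStatus;
    } __attribute__((packed)) attrBuf;

    if (getattrlist(path, &attrList, &attrBuf, sizeof(attrBuf), FSOPT_NOFOLLOW) != 0)
        return -1;
    return (attrBuf.mountStatus & DIR_MNTSTATUS_TRIGGER) ? 1 : 0;
}

int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++) {
        int r = isUnmountedTrigger(argv[i]);
        if (r == 1)      std::printf("%s: unresolved trigger - skip it\n", argv[i]);
        else if (r == 0) std::printf("%s: mounted (or not a trigger)\n", argv[i]);
        else             std::perror(argv[i]);
    }
    return 0;
}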

C++ daemon forking causes mysql errors

I have a daemon that forks the process.
This daemon accesses a database using the MySQL connector library.
When I do not fork, I am able to open and read the database fine; however, when I fork, I get
MySQL server has gone away
errors consistently on the first query...
Anyone know what could be causing this?
Edit: Oh, my apologies for misinterpreting.
Still, the problems caused by the differences between daemonized and non-daemonized operation roughly fall into the following classes of options:
environment variables
LIBPATH
PATH
HOME, UID, EUID (HOME surprisingly enough gets (ab)used way too often)
mysql specific variables
permissions
what user is the daemon running as? elevated or privilege separation?
current working directory (traditionally / for daemons, where / might be a chroot jail instead of 'real' /)
Starting with kernel 2.4.19, Linux provides per-process mount namespaces. A mount namespace is the set of file system mounts that are visible to a process. Mount-point namespaces can be (and usually are) shared between multiple processes, and changes to the namespace (i.e., mounts and unmounts) by one process are visible to all other processes sharing the same namespace. (The pre-2.4.19 Linux situation can be considered as one in which a single namespace was shared by every process on the system.)
detached stdin/stdout causing trouble (IMO that would mean badly designed library, but who am I)
watch out that specific resources (file locks, socket connections, threads (!)) are NOT inherited across fork/execve. I recommend reading the linked article on daemonization (below), especially the section on 'Mutual Exclusion and Running a Single Copy [open,lockf,getpid]'
I'm sure I'm forgetting stuff
Ermm... what are you starting a mysql server process for? Mysql has plenty of sound init scripts that do work.
On the subject of proper daemonization: http://www.enderunix.org/docs/eng/daemon.php
Pay attention to the effects of sharing resources with fork children (e.g. file descriptors).
Besides that, you could just be missing basic environment settings. Peruse the official init scripts for mysql to find out which you need.
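To make the shared-resource point concrete: a common cause of this exact symptom is opening the MySQL connection before forking, so that parent and child share the same server socket; when one of them exits or cleans up its connection object, the other side sees "MySQL server has gone away" on its next query. The sketch below only illustrates the fix (daemonize first, connect afterwards) using the classic libmysqlclient C API from C++; the host, credentials and database name are placeholders.

#include <mysql/mysql.h>   // classic MySQL C client API; link with -lmysqlclient
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// Minimal daemonization: fork, start a new session, fork again, chdir to /.
// (A real daemon would also close or redirect stdin/stdout/stderr and reset umask.)
static void daemonize() {
    if (fork() > 0) std::exit(0);   // parent exits, child continues
    setsid();                       // detach from the controlling terminal
    if (fork() > 0) std::exit(0);   // make sure we can never reacquire a terminal
    if (chdir("/") != 0) std::exit(1);
}

int main() {
    daemonize();   // daemonize FIRST ...

    // ... and only then open the database connection, so the socket is not
    // shared with (and torn down by) the exiting parent process.
    MYSQL *conn = mysql_init(nullptr);
    if (!conn) return 1;
    if (!mysql_real_connect(conn, "127.0.0.1", "user", "password", "mydb", 0, nullptr, 0)) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }
    if (mysql_query(conn, "SELECT 1") != 0)
        std::fprintf(stderr, "query failed: %s\n", mysql_error(conn));
    mysql_close(conn);
    return 0;
}

If a connection must already exist before the fork, let only one of the two processes keep using it and have the other open a fresh connection of its own; note that even calling mysql_close() in the child is problematic, because the QUIT command goes out over the socket the parent is still using.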
