Determine actually mounted volumes and remote file systems in OS X

I need to gather a list of all mounted "mount points" that the local file system has access to.
This includes:
Any ordinarily mounted volume under /Volumes.
Any NFS volume that's currently mounted under /net.
Any local or remote file system mounted with the "mount" command or auto-mounted somehow.
But I need to avoid accessing any file systems that can be auto-mounted but are currently not mounted. I.e., I must not trigger any auto-mounting.
My current method is as follows:
1. Call FSGetVolumeInfo() in a loop to gather all known volumes. This gives me all local drives under /Volumes as well as /net, /home, and NFS mounts under /net.
2. Call FSGetVolumeParms() to get each volume's "device ID" (this turns out to be the mount path for network volumes).
3. If the ID is a POSIX path (i.e. it starts with "/"), I use readdir() on the path's parent to check whether the parent directory actually contains the mount point item (e.g. if the ID is /net/MyNetShare, I readdir /net). If the entry is not there, I assume this is an auto-mount point with a yet-unmounted volume and exclude it from my list of mounted volumes.
4. Lastly, if the volume appears mounted, I check whether it contains any items. If it does, I add it to my list.
Step 3 is necessary to determine whether the path is actually mounted: if I called lstat() on the full path instead, it would attempt to automount the file system, which is exactly what I need to avoid.
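For illustration, here is a minimal sketch of the readdir() check from step 3 (dir_contains is a hypothetical helper name; error handling is kept minimal):

#include <dirent.h>
#include <string.h>

/* Check whether `name` appears as an entry of `parent` using readdir(),
   without stat()ing the full path (which could trigger an automount). */
static int dir_contains(const char *parent, const char *name)
{
    DIR *dir = opendir(parent);
    struct dirent *entry;
    int found = 0;

    if (dir == NULL)
        return 0;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, name) == 0) {
            found = 1;
            break;
        }
    }
    closedir(dir);
    return found;
}

For example, dir_contains("/net", "MyNetShare") reports whether the mount point entry exists without ever touching /net/MyNetShare itself.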
Now, even though the above works most of the time, there are still some issues:
The mix of calls to the BSD and Carbon APIs, along with the special-casing of the "device ID" value, is rather unclean.
The FSGetVolumeInfo() call gives me mount points such as "/net" and "/home" even though these do not seem to be actual mount points; the real mount points rather appear inside them. For example, if I mount an NFS share at "/net/MyNFSVolume", I gather both a "/net" point and a "/net/MyNFSVolume" point, but "/net" is not an actual volume.
Worst of all, sometimes the above process still causes active attempts to contact the off-line server, leading to long timeouts.
So, who can show me a better way to find all the actually mounted volumes?

By using the BSD-level function getattrlist(), asking for the ATTR_DIR_MOUNTSTATUS attribute, one can test the DIR_MNTSTATUS_TRIGGER flag.
This flag appears to be set only when an automounted share point is currently unreachable. Its status seems to be directly tied to the mount status maintained by the automountd daemon that manages re-mounting such mount points: as long as automountd reports that a mount point isn't available because the server is not responding, the "trigger" flag is set.
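A minimal sketch of that query, assuming the macOS headers (the helper name is made up; the two-field buffer layout follows the getattrlist(2) man page):

#include <string.h>
#include <sys/attr.h>
#include <unistd.h>

/* Returns 1 if the directory at `path` is an unresolved automount
   trigger, 0 if it is not, and -1 on error. */
static int is_unmounted_trigger(const char *path)
{
    struct attrlist attrList;
    struct {
        u_int32_t length;       /* total size of the returned attributes */
        u_int32_t mountStatus;  /* the ATTR_DIR_MOUNTSTATUS value */
    } __attribute__((packed)) attrBuf;

    memset(&attrList, 0, sizeof(attrList));
    attrList.bitmapcount = ATTR_BIT_MAP_COUNT;
    attrList.dirattr = ATTR_DIR_MOUNTSTATUS;

    if (getattrlist(path, &attrList, &attrBuf, sizeof(attrBuf), 0) != 0)
        return -1;

    return (attrBuf.mountStatus & DIR_MNTSTATUS_TRIGGER) ? 1 : 0;
}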
Note, however, that this status is not immediately set once a network share becomes inaccessible. Consider this scenario:
The file /etc/auto_master has this line added at the end:
/- auto_mymounts
The file /etc/auto_mymounts has the following content:
/mymounts/MYSERVER1 -nfs,soft,bg,intr,net myserver1:/
This means that there will be an auto-mounting directory at /mymounts/MYSERVER1, giving access to the root of myserver1's exported NFS share.
Let's assume the server is initially reachable. Then we can browse the directory at /mymounts/MYSERVER1, and the DIR_MNTSTATUS_TRIGGER flag will be cleared.
Next, let's make the server become unreachable by simply killing the network connection (such as removing the Ethernet cable or turning off Wi-Fi). At this point, when trying to access /mymounts/MYSERVER1 again, we'll get delays and timeouts, and we might even get seemingly valid results such as non-empty directory listings despite the unavailable server. The DIR_MNTSTATUS_TRIGGER flag will remain cleared at this point.
Now put the computer to sleep and wake it up again. At this point, automountd tries to reconnect all auto-mounted volumes again. It will notice that the server is offline and put the mount point into "trigger" state. Now the DIR_MNTSTATUS_TRIGGER flag will be set as desired.
So, while this trigger flag is not a perfect indicator of when the remote server is unreachable, it's good enough to tell when the server has been offline for a longer time. That is what typically happens when the client computer moves between networks, such as between work and home, with the computer being put to sleep in between; the sleep/wake cycle prompts automountd to re-check the reachability of the NFS server.

Related

AppArmor: How to block pid=host container with CAP_SYS_ADMIN/CAP_SYS_CHROOT from reading (some) host files?

Given is a container that has pid=host (so it is in the initial PID namespace and has a full view of all processes). This container (rather, its process) additionally has the capabilities CAP_SYS_ADMIN and CAP_SYS_CHROOT, so it can change mount namespaces using setns(2).
Is it possible using AppArmor to block this container from accessing arbitrary files in the host (the initial mount namespace), except for some files, such as /var/run/foo?
How does AppArmor evaluate filesystem path names with respect to mount namespaces? Does it "ignore" mount namespaces and just take the specified path, or does it translate the path, for instance when dealing with bind-mounted subtrees?
An ingrained restriction of AppArmor's architecture is that, for filesystem resources (files, directories), it mediates access by access path. While AppArmor uses labeling, as SELinux does, it derives only implicit filesystem resource labels from the access path. SELinux, in contrast, uses explicit labels, stored in the extended attributes of files on filesystems that support POSIX extended attributes.
Now, the access path is always the path as seen in the caller's current mount namespace. Optionally, AppArmor can take chroot into account. So the answer to the second question is: AppArmor "ignores" mount namespaces and just takes the (access) path. As far as I can tell, it does not translate bind mounts (there is no indication anywhere that it would).
As for the first question: in general "no", because AppArmor mediates access paths (implicit labels), not file resource labels. A limited restriction is possible if you accept that there will be no access-path differentiation between what's inside a container and what's on the host outside it (and likewise between containers). This is basically what Docker's default container AppArmor profile does: it blocks all access to a few highly sensitive /proc/ entries and restricts many other /proc/ entries to read-only access.
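A hypothetical profile fragment in that style (the profile name is made up, the /var/run/foo rule comes from the question, and the deny rules mimic the pattern of Docker's default profile):

#include <tunables/global>

profile restricted-container flags=(attach_disconnected,mediate_deleted) {
  # deny writes to most of /proc, and all access to the worst entries
  deny @{PROC}/* w,
  deny @{PROC}/kcore rwklx,
  deny @{PROC}/sys/kernel/** rwklx,

  # explicitly allow the one host path the container legitimately needs
  /var/run/foo r,
}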
But blocking access to certain host file access paths always carries the danger of blocking the same access path for a perfectly valid use inside a container (a different mount namespace). This requires great care and lots of research and testing, and there is the constant danger of things breaking with the next update of a container. AppArmor does not seem to be designed for such use cases.

WNetGetUniversalName failing when called from a scheduled task

I have a (win32) program that is run through a scheduled task.
When run, my software should map a number of local drive letters to UNC resources, verify that the mappings have been successful, run a few other tasks and then unmap the drives.
When running under the context of a local user, all works fine. However, when I run it through the system task scheduler, the verify task fails.
The verify task takes the drive letter, checks whether the drive is a network drive (through GetDriveType) and then, if the drive is of type DRIVE_REMOTE, calls WNetGetUniversalName and compares the result with the expected mapping.
When run from a regular user context, this works. But when the process is launched by the task scheduler, WNetGetUniversalName fails with error 87: The parameter is incorrect.
After trying to isolate the issue, I came to the following conclusions:
The issue is not linked to user rights: even when the user is made a member of both the local administrators group and the domain administrators group, the error remains.
The parameters I pass to the function are ALWAYS the same: the drive letter concatenated with :\.
I have tried repeating the call after a short wait (100ms): same symptoms.
The mapping (made through WNetAddConnection2) actually succeeds.
The issue does not depend on where the executable is located: the same thing happens whether it's on the local machine or run from a UNC path.
The issue occurs whether the scheduled task has been set to "run with highest privilege" or not.
Here is the exact call I use:
APIResult := WNetGetUniversalName(PWideChar(pathToCheck), UNIVERSAL_NAME_INFO_LEVEL, @RemoteNameInfo, Size);
I'm out of ideas about what to check next.
Edit: For now, I have reverted to a different behavior: each drive's status is checked (GetDriveType); if it's a network drive, it is unmapped, checked again and then remapped. This seems to work, but it's slower (of course) and it feels less robust.
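For reference, a self-contained version of the failing call pattern, as it works in a plain interactive session (the Z:\ drive letter is a placeholder; error handling is minimal):

#include <windows.h>
#include <winnetwk.h>
#include <stdio.h>
#pragma comment(lib, "mpr.lib")

int main(void)
{
    /* WNetGetUniversalName wants one buffer big enough for the
       UNIVERSAL_NAME_INFO struct plus the UNC string it points into. */
    char buffer[1024];
    DWORD size = sizeof(buffer);
    UNIVERSAL_NAME_INFOA *info = (UNIVERSAL_NAME_INFOA *)buffer;

    DWORD rc = WNetGetUniversalNameA("Z:\\", UNIVERSAL_NAME_INFO_LEVEL,
                                     buffer, &size);
    if (rc == NO_ERROR)
        printf("UNC path: %s\n", info->lpUniversalName);
    else
        printf("WNetGetUniversalName failed with error %lu\n", rc);
    return 0;
}

One detail worth keeping in mind: drive mappings created with WNetAddConnection2 belong to a logon session, so a mapping made inside the scheduled task's session is not necessarily visible the same way an interactively created mapping is.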

Change of hostName in Terminal to: m:~ myMacUserName$?

I just opened Terminal and I see this m:~ luka$ and not the default that I usually get of "lukasMacBookProEtcEtc:~ luka$".
Anyone know why this is? Does this make an impact on what I write into the command line?
Can't find anything about this elsewhere.
This means your computer's host name has changed. This is typical if you switch to a different network, unless you have a statically assigned name or IP address. By default, your computer asks the current network what name to use for itself.
Whether this particular host name is correct or incorrect requires more context. If network operations are working fine for you, then there probably isn't an issue.
No, in general the host name doesn't directly affect what you write on the command line, unless you enter commands that depend on the current hostname, or the unexpected hostname is a symptom of a networking issue and you enter a command that depends on networking.
Note that on OS X a given computer has at least two different names. One is the “computer name”, typically assigned by the user, and the other is the “host name”, typically assigned to your computer by a server on the local network. The former is the “Computer Name:” at the top of System Preferences > Sharing, and users can usually feel free to name their computers as they wish unless otherwise directed by a system administrator. The host name is visible as the output of the hostname command and is typically displayed in the shell command prompt. You should normally not attempt to change the host name unless directed to by a system administrator.
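If you want to inspect these names yourself, the standard commands are (the output will of course vary per machine):

scutil --get ComputerName
hostname

scutil --get HostName will additionally show a statically assigned host name, if one has been set for the machine.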

How to ensure network drives are connected for an application?

I have a desktop Windows application that is installed in small office environments.
The application uses an .MDB database file as its database which is stored on a network drive.
Configuration files specify the path of the .MDB file on the server using a letter drive: eg. f:\data\db.mdb
The application needs to access this database file when it starts. How can I ensure the network drive is connected and accessible when the application starts?
Sometimes Windows doesn't reconnect network drives and the only way to connect them is to double-click on them in My Computer, even when "Reconnect at logon" is ticked when mapping the drive.
Would a solution be to use \\machine_name\share instead of drive letters?
You asked, "Would a solution be to use \\machine_name\share instead of drive letters?"
I think, yes, it could be. A UNC path avoids two problems:
share not connected to a drive letter
share is connected, but mapped to a different drive letter than you expect
The unknown is whether anything in your application makes a UNC path for the MDB either a complication or a flat out deal-breaker.
You should use UNC paths, because not everyone will have your drive mapped to the same letter.
Determine UNC path
First, I would determine the UNC path of your file as it exists on your local computer at F:\data\db.mdb using one of the techniques found here:
Creating UNC paths from mapped drives
Basically, you look at the way Windows Explorer lists the network mapped drive, then use this to deduce the UNC path.
Check Availability using WMI
Assuming the drive is actually mapped on every local computer that plans to use the application, use the Win32_MappedLogicalDisk class to determine availability of the mapped network drive.
I have some sample code here that can be adapted to determine whether a given network drive is available (scroll down to the Mapped Drives Information section). You check .ProviderName to match the UNC path, so you know which is the correct drive, then check the value of .Availability to determine if the mapped network drive can be accessed.
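For illustration, a rough C++ sketch of such a query (this assumes Visual C++; the COM setup follows the standard WMI client boilerplate, and error handling is largely omitted):

#define _WIN32_DCOM
#include <iostream>
#include <comdef.h>
#include <Wbemidl.h>
#pragma comment(lib, "wbemuuid.lib")

int main()
{
    // Standard one-time COM/WMI setup
    CoInitializeEx(0, COINIT_MULTITHREADED);
    CoInitializeSecurity(NULL, -1, NULL, NULL,
        RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
        NULL, EOAC_NONE, NULL);

    IWbemLocator *loc = NULL;
    CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (LPVOID *)&loc);
    IWbemServices *svc = NULL;
    loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, 0, 0, 0, &svc);
    CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, NULL,
        RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
        NULL, EOAC_NONE);

    // Ask WMI for every mapped network drive
    IEnumWbemClassObject *en = NULL;
    svc->ExecQuery(bstr_t("WQL"),
        bstr_t("SELECT DeviceID, ProviderName, Availability "
               "FROM Win32_MappedLogicalDisk"),
        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, NULL, &en);

    IWbemClassObject *obj = NULL;
    ULONG returned = 0;
    while (en && en->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK) {
        VARIANT dev, prov, avail;
        obj->Get(L"DeviceID", 0, &dev, 0, 0);       // e.g. "F:"
        obj->Get(L"ProviderName", 0, &prov, 0, 0);  // the UNC path behind it
        obj->Get(L"Availability", 0, &avail, 0, 0);
        std::wcout << dev.bstrVal << L" -> "
                   << (prov.vt == VT_BSTR ? prov.bstrVal : L"(unknown)");
        if (avail.vt == VT_I4)  // CIM uint16 arrives as a VT_I4 variant
            std::wcout << L" (availability " << avail.lVal << L")";
        std::wcout << std::endl;
        VariantClear(&dev); VariantClear(&prov); VariantClear(&avail);
        obj->Release();
    }
    if (en) en->Release();
    if (svc) svc->Release();
    if (loc) loc->Release();
    CoUninitialize();
    return 0;
}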
You should definitely abandon the network drive mapping approach:
using this technique forces you to 'physically' touch each computer that uses your db, since you have to assign a letter to the network drive on each one
every computer user can easily change it
any disconnection from the network might force the user to 'manually' reconnect to the disk drive
Even though you are on a domain, I would not advise you to use a machine name, as computers, for multiple reasons, might not always find it 'easily' on the network, especially when its IP address changes regularly.
You should definitely find a way to assign a fixed IP address to the machine hosting your disk: it is the most stable and permanent solution you can think of. Ask your domain administrator to arrange it for you.
Testing the presence of your network disk can then be done very easily. There are multiple solutions, including trying to open the .mdb file directly. You could also test for the existence of the file (through the FileSystemObject, for instance) or launch any external program or Windows API call from your code. Google 'VB test IP' or something similar to find a solution at your convenience.
EDIT: Windows even offers a way to simulate a ping from VB code. Check it here.
EDIT2: I found this VBA code in one of my apps; it quick-checks whether a file exists (and can be accessed) somewhere on your network. It was originally written to test whether a new version of the user interface was available.
Function fileIsAvailable(x_nom As Variant) As Boolean
    On Error GoTo ERREUR
    ' Show the busy cursor while checking
    Application.Screen.MousePointer = 11
    Dim fso As Object
    Set fso = CreateObject("Scripting.FileSystemObject")
    ' FileExists returns False when the file (or its network path) cannot be reached
    fileIsAvailable = fso.FileExists(x_nom)
    Set fso = Nothing
    Application.Screen.MousePointer = 0
    On Error GoTo 0
    Exit Function
ERREUR:
    ' Restore the cursor and log the error; the function returns False
    Application.Screen.MousePointer = 0
    Debug.Print Err.Number, Err.Description
End Function
You can easily call this function by supplying your file's network name, such as:
If fileIsAvailable("\\192.168.1.110\myFileName.mdb") Then ...
You did not make it clear what your application was written in; however, before you attempt to connect to the database for the first time, presumably in a splash screen or something of that nature, check that f:\data\db.mdb exists.
Make sure this script is run right before the application is started:
net use f: \\machine_name\share [password] /user:[username] /persistent:yes
This will map the share to the drive letter you specified!

Is it possible to trash an Azure role host and get it started on the same host without cleanup?

Suppose my Azure role creates a lot of temporary files in Windows temporary folder and forgets to delete them. At some point it will receive "can't create temporary file" error. Suppose that once that happens my role code throws an exception out of RoleEntryPoint.Run() and the role is restarted.
I'm not talking about perfectly Azure-aware code here. My role might use third-party black-box code that knows nothing about Azure and "local storage" and just calls System.IO.Path.GetTempPath(), thus creating files in a location that is not Azure-friendly.
The problem is that if the role is started on the very same host and the temporary folder is not cleaned up by some third party, the folder is still full of files and the role will be unable to function. According to this answer, it can happen that local changes are preserved for my role, which is a huge problem in the above scenario.
Are local changes like created temporary files guaranteed to be reset when a role is restarted? How do I ensure that the started role is in reasonably clean state?
The role gets reset on new deployments, upgrades, and newly scaled instances from the golden image (base guest OS vhd). Generally for reboots and crashes, you get the same VHD and machine.
The code you write will not have permission to write to the OS drive (D:), at least not without elevation (or logging in via RDP to do so). Further, there is a quota on the role's root drive (E:) that will prevent you from accidentally filling the drive with files; at one time, 10% of the package size was all you were allowed to write. There is also a quota on the resource drive (C:), but that one is much more generous and depends on the VM size.
Nothing is cleaned up on the non-local-resource drives, but you will eventually get errors if you try to exceed the quotas. You can turn off sticky storage on local resources, and they will then be cleaned up on reboot. Of course, like other changes to the disk, these non-local-resource temp files will occasionally be lost when the guest OS (or the underlying root OS) is upgraded. If you are running elevated and really screw up your installation (which you can do), you will need to hit the "Reimage" button on the portal, and everything will go back to the golden image.
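For completeness, whether a local resource is sticky or cleaned is declared in the service definition; a hypothetical fragment (the resource name and size are placeholders):

<LocalResources>
  <!-- cleanOnRoleRecycle="true" asks Azure to wipe this storage when the
       role recycles; "false" makes it sticky across restarts on the same host -->
  <LocalStorage name="TempStorage" cleanOnRoleRecycle="true" sizeInMB="1024" />
</LocalResources>

Pointing the third-party code's temp directory at such a local resource (for example by overriding the TEMP/TMP environment variables at role startup) keeps the cleanup under your control.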
