I use the Ansible filesystem module to format the data disks of a newly provisioned database cluster.
- name: Format data disk
  community.general.filesystem:
    fstype: ext4
    dev: /dev/sdc
  ...
What I want is a way to automatically check before formatting, so the task doesn't run if the disk is already formatted. I noticed that the module seems to do that check on every run, but I'm still not sure of its exact behavior.
Any thoughts?
Yes, the module already checks for you. According to the documentation:
If state=present, the filesystem is created if it doesn’t already exist, that is the default behaviour if state is omitted.
So, the module will not reformat the device if it's already formatted with the provided fstype. To force a reformat, you can change the fstype to something else, or set force=true.
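For example, a minimal variant of the task above that forces recreation of the filesystem (note that force: true wipes whatever is currently on the device):
- name: Reformat data disk
  community.general.filesystem:
    fstype: ext4
    dev: /dev/sdc
    force: true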
Let me explain the problem and its context. This is a server for a database solution. The database runs in Docker, and a volume was added to the server. The whole Docker installation path was then moved to that volume (for security and easier backup maintenance). Then, for monitoring, I added a Metricbeat agent to capture data such as disk metrics, and that is where the problem occurs.
I'm looking for a specific mount (it is a volume mount). When I run df -aTh | grep "/dev" in a terminal to show the filesystems, I get the output shown in the screenshot.
Then in metricbeat.yaml I have this configuration for the system module:
- module: system
  period: 30s
  metricsets: ["filesystem"]
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|etc|host|hostfs)($|/)'
Notice that in the last line I omitted "dev", because I want to capture the mounted volume "/dev/sda" highlighted in the screenshot. But when I look in Discover in Kibana, that device is not shown, and I don't know why; it should be there.
Thanks for reading and for any help :). All of this is for monitoring and displaying the data in Grafana, but I can't find the filesystem "/dev/sda" for the disk dashboard...
From the documentation about the setting filesystem.ignore_types:
A list of filesystem types to ignore. Metrics will not be collected from filesystems matching these types. This setting also affects the fsstats metricset. If this option is not set, metricbeat ignores all types for virtual devices in systems where this information is available (e.g. all types marked as nodev in /proc/filesystems in Linux systems). This can be set to an empty list ([]) to make filebeat report all filesystems, regardless of type.
If you check the file /proc/filesystems you can see which filesystem types are marked as "nodev". Is it possible that ext4 is marked as nodev?
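For example, you can list the types marked as nodev with:
grep nodev /proc/filesystems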
Can you try setting filesystem.ignore_types: [] to see if the filesystem is now reported?
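For example, the system module block from your metricbeat.yaml would then look roughly like this (only the filesystem.ignore_types line is new):
- module: system
  period: 30s
  metricsets: ["filesystem"]
  filesystem.ignore_types: []
  processors:
    - drop_event.when.regexp:
        system.filesystem.mount_point: '^/(sys|cgroup|proc|etc|host|hostfs)($|/)'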
I am running into a problem with labeling. In order to lock down access to a file /etc/avahi/avahi-daemon.conf I decided to label it as a part of the avahi_t domain.
I am working on an embedded system. When I boot up the system from a version update, the file system is relabeled with the .autorelabel flag set.
Unfortunately the file /etc/avahi/avahi-daemon.conf remains in the unlabeled_t type. Because the label is wrong, avahi cannot read the file and fails to initialize properly, with an AVC read denial on an unlabeled_t file. I want the label to be set correctly rather than modify policy to allow reading an unlabeled file. I also want the file to be protected so the configuration cannot be modified.
I have properly labeled it in the .fc file with the following:
/etc/avahi/avahi-daemon.conf -- gen_context(system_u:object_r:avahi_t,s0)
When I try a restorecon on the file system it attempts to relabel the file but is blocked by SELinux with a relabelto AVC violation. Similarly, changing it with chcon -t fails. I do not wish to open up relabelto on an embedded system, since the file could then be relabeled and take down the avahi initialization. If I take out the SD card, relabel the file on a different system, and place it back into the target system, the file is properly labeled and avahi operates correctly. So I am certain that the labeling is causing the problem.
Looking in the reference policy, an init_daemon_domain(avahi_t,avahi_exec_t) call is being performed.
Looking at the documentation for init_daemon_domain(), it states the following:
"The types will be made usable as a domain and file, making calls to domain_type() and files_type() redundant."
This is unusual, because if I add files_type(avahi_t) to the .te file, the file is properly labeled after a version update.
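In other words, the relevant part of the .te ends up looking roughly like this (a sketch; the type declarations and the init_daemon_domain() call already come from the existing avahi policy):
type avahi_t;
type avahi_exec_t;
init_daemon_domain(avahi_t, avahi_exec_t)
files_type(avahi_t)  # only with this extra line is the file labeled correctly after .autorelabel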
I would really like to understand this better, but unfortunately my searches on the internet have been less than fruitful.
Is the documentation for SELinux wrong? Am I missing something about init_daemon_domain() in that it only works with processes and not files?
Or is the files_type(avahi_t) truly needed?
I know this comes off as a trivial issue, since there is a path by which it works. However, I am hoping to get an explanation of why files_type(avahi_t) is necessary.
Thanks
Recently (I'm not sure why) Vagrant (1.8.1) started asking for a root password.
However, at work we are not given root privileges (no sudoers entry).
I am looking for a way to tell Vagrant to stop the NFS pruning altogether.
Sadly the documentation does not say how to modify this particular flag, and I don't know much Ruby.
The code suggests that there should be a flag, but I can't figure out how to set it to false.
I intend to disable NFS or skip that part altogether, so either approach would be welcome.
My starting point is my ~/.vagrant.d/Vagrantfile:
Vagrant.configure('2') do |config|
  config.vagrant.host :nfs_prune => false
end
The error message is: Pruning invalid NFS exports. Administrator privileges will be required...
PS: no, I do not use NFS in my shared folders.
You should be able to disable it by setting config.nfs.functional = false. From the documentation:
functional (bool) - Defaults to true. If false, then NFS will not be
used as a synced folder type. If a synced folder specifically requests
NFS, it will error.
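In your case that could go into the ~/.vagrant.d/Vagrantfile you already have, for example:
Vagrant.configure('2') do |config|
  config.nfs.functional = false
end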
The Vagrantfile can be loaded from multiple sources; see LOAD ORDER AND MERGING:
Vagrant actually loads a series of Vagrantfiles, merging the settings
as it goes. This allows Vagrantfiles of varying level of specificity
to override prior settings. Vagrantfiles are loaded in the order shown
below. Note that if a Vagrantfile is not found at any step, Vagrant
continues with the next step.
1. Vagrantfile packaged with the box that is to be used for a given machine.
2. Vagrantfile in your Vagrant home directory (defaults to ~/.vagrant.d). This lets you specify some defaults for your system user.
3. Vagrantfile from the project directory. This is the Vagrantfile that you will be modifying most of the time.
As you mentioned, you have already checked points 3 and 2, so check the Vagrantfile packaged with the particular box (if any).
I need to gather a list of all mounted "mount points" that the local file system has access to.
This includes:
Any ordinarily mounted volume under /Volumes.
Any NFS volume that's currently mounted under /net.
Any local or remote file system mounted with the "mount" command or auto-mounted somehow.
But I need to avoid accessing any file systems that can be auto-mounted but are currently not mounted, i.e. I do not want to cause any auto-mounting.
My current method is as follows:
1. Call FSGetVolumeInfo() in a loop to gather all known volumes. This gives me all local drives under /Volumes as well as /net, /home, and NFS mounts under /net.
2. Call FSGetVolumeParms() to get each volume's "device ID" (this turns out to be the mount path for network volumes).
3. If the ID is a POSIX path (i.e. it starts with "/"), I use readdir() on the path's parent to check whether the parent dir actually contains the mount point item (e.g. if the ID is /net/MyNetShare, then I readdir /net). If it's not there, I assume this is an auto-mount point whose volume is not yet mounted and therefore exclude it from my list of mounted volumes.
4. Lastly, if the volume appears mounted, I check whether it contains any items. If it does, I add it to my list.
Step 3 is necessary to see whether the path is actually mounted. If I instead called lstat() on the full path, it would attempt to automount the file system, which I need to avoid.
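A minimal sketch of that parent-directory check in C (the function name and the split of the path into parent and name are mine):
#include <dirent.h>
#include <string.h>

/* Returns 1 if `name` shows up in a directory listing of `parent`, 0 otherwise.
   Only the parent directory is read, so the mount point itself is never
   touched and no automount is triggered. */
static int mount_point_is_listed(const char *parent, const char *name)
{
    DIR *dir = opendir(parent);
    if (dir == NULL)
        return 0;
    struct dirent *entry;
    int found = 0;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, name) == 0) {
            found = 1;
            break;
        }
    }
    closedir(dir);
    return found;  /* e.g. mount_point_is_listed("/net", "MyNetShare") */
}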
Now, even though the above works most of the time, there are still some issues:
The mix of calls to the BSD and Carbon APIs, along with special casing the "device ID" value, is rather unclean.
The FSGetVolumeInfo() call gives me mount points such as "/net" and "/home" even though these do not seem to be actual mount points - the real mount points appear inside them. For example, if I mount an NFS share at "/net/MyNFSVolume", I gather both a "/net" point and a "/net/MyNFSVolume" point, but the "/net" point is not an actual volume.
Worst of all, sometimes the above process still causes active attempts to contact the off-line server, leading to long timeouts.
So, who can show me a better way to find all the actually mounted volumes?
By using the BSD-level function getattrlist() and asking for the ATTR_DIR_MOUNTSTATUS attribute, one can test the DIR_MNTSTATUS_TRIGGER flag.
This flag seems to be only set when an automounted share point is currently unreachable. The status of this flag appears to be directly related to the mount status maintained by the automountd daemon that manages re-mounting such mount points: As long as automountd reports that a mount point isn't available, due to the server not responding, the "trigger" flag is set.
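A minimal sketch of that check (error handling trimmed; the function name is mine, the attribute and flag come from <sys/attr.h>):
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/attr.h>

/* Returns 1 if the directory is an automount trigger whose server is
   currently unreachable, 0 if it looks mounted/reachable, -1 on error. */
static int is_unreachable_trigger(const char *path)
{
    struct attrlist req;
    struct {
        u_int32_t length;        /* total size of the returned attribute data */
        u_int32_t mount_status;  /* value of ATTR_DIR_MOUNTSTATUS */
    } __attribute__((aligned(4), packed)) buf;

    memset(&req, 0, sizeof(req));
    req.bitmapcount = ATTR_BIT_MAP_COUNT;
    req.dirattr = ATTR_DIR_MOUNTSTATUS;

    if (getattrlist(path, &req, &buf, sizeof(buf), 0) != 0)
        return -1;

    return (buf.mount_status & DIR_MNTSTATUS_TRIGGER) ? 1 : 0;
}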
Note, however, that this status is not immediately set once a network share becomes inaccessible. Consider this scenario:
The file /etc/auto_master has this line added at the end:
/- auto_mymounts
The file /etc/auto_mymounts has the following content:
/mymounts/MYSERVER1 -nfs,soft,bg,intr,net myserver1:/
This means that there will be an auto-mounting directory at /mymounts/MYSERVER1, giving access to the root of myserver1's exported NFS share.
Let's assume the server is initially reachable. Then we can browse the directory at /mymounts/MYSERVER1, and the DIR_MNTSTATUS_TRIGGER flag will be cleared.
Next, let's make the server unreachable by simply killing the network connection (such as removing the ethernet cable or turning off Wi-Fi). At this point, when trying to access /mymounts/MYSERVER1 again, we'll get delays and timeouts, and we might even get seemingly valid results such as non-empty directory listings despite the unavailable server. The DIR_MNTSTATUS_TRIGGER flag will remain cleared at this point.
Now put the computer to sleep and wake it up again. At this point, automountd tries to reconnect all auto-mounted volumes again. It will notice that the server is offline and put the mount point into "trigger" state. Now the DIR_MNTSTATUS_TRIGGER flag will be set as desired.
So, while this trigger flag is not a perfect indicator of when the remote server is unreachable, it's good enough to tell when the server has been offline for a longer time, as usually happens when moving the client computer between different networks, such as between work and home, with the computer being put to sleep in between, which causes the automountd daemon to re-check the reachability of the NFS server.
I have a NetApp filer, with a CIFS export. The permissions have been locked down on it, to a point where it's no longer accessible.
I need to reset the permissions on this - I've figured out I can probably do this by changing the qtree to Unix security mode and back again (provided I'm prepared to unexport the share temporarily).
However, I think I should be able to use the fsecurity command to do this. There's just one problem - the manpage example refers to 'applying ACLs from a config file':
https://library.netapp.com/ecmdocs/ECMP1196890/html/man1/na_fsecurity_apply.1.html
But what it doesn't do is give me an example of what a 'security definition file' actually looks like.
Is anyone able to give me an example? Resetting a directory structure to Everyone/Full Control is sufficient for my needs, as re-applying permissions isn't a problem.
Create a conf file containing the following:
cb56f6f4
1,0,"/vol/vol_name/qtree_name/subdir",0,"D:P(A;CIOI;0x1f01ff;;;Everyone)"
Save it on your filer somewhere (example in manpage is /etc/security.conf).
Run:
fsecurity show /vol/vol_name/qtree_name/subdir
fsecurity apply /etc/security.conf
fsecurity show /vol/vol_name/qtree_name/subdir
This will set Everyone / Full Control, inheritable, which is a massive security hole, so you should now IMMEDIATELY go and fix the permissions on that directory structure to something a little more sensible.
You can create more detailed ACLs using the 'secedit' utility, available from NetApp's support site, but this one did what I needed it to.