How to identify a newly added disk in a virtual Linux (RHEL 7 & 6) host using Ansible YAML code

I am planning to write an Ansible playbook for file system creation. I am using Logical Volume Manager (LVM).
Can anyone help me identify the new LUN using Ansible modules?

Generally speaking, this is very operating-system specific. Your best path forward is to figure out what steps you would run from the command line to detect that there is a new disk, and then use the shell module and parse the resulting stdout. For example, on Ubuntu you might run fdisk -l, look for unpartitioned / unallocated drives, and parse the output.
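For instance, here is a minimal sketch along those lines (the task names, the new_luns variable, and the "no partitions yet" heuristic are assumptions rather than a definitive method; it relies on lsblk being available on the RHEL host):

- name: List disks that currently have no partitions (candidate new LUNs)
  shell: |
    for d in $(lsblk -dn -o NAME,TYPE | awk '$2 == "disk" {print $1}'); do
      [ "$(lsblk -n /dev/$d | wc -l)" -eq 1 ] && echo "/dev/$d"
    done
  register: new_luns
  changed_when: false

- name: Show the candidates
  debug:
    var: new_luns.stdout_lines

From there you could feed new_luns.stdout_lines into the lvg/lvol modules to build the LVM layout.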

Related

Is there a way to create a link for the machine ID without modifying Yocto?

I am running Linux 4.14.149 built with Yocto Zeus (3.0.0). I am running a read-only filesystem, and recently found an issue where my UID (/etc/machine-id) was getting changed every boot (which led to this question: https://superuser.com/questions/1668481/dhcpd-doesnt-issue-the-same-lease-after-reboot ).
I am trying to make that file a link to the user-data partition so it will persist across reboots. I have tried making the link as part of a base-files_%.bbappend, which is the way I made the link for the hostname (which works). These are the contents of that file (/var/local is our user-data partition, which is mounted read-write in the init script):
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
hostname = ""
machine-id = ""
do_install_append() {
    ln -sfn /var/local/etc/hostname ${D}/${sysconfdir}/hostname
    ln -sfn /var/local/etc/machine-id ${D}/${sysconfdir}/machine-id
}
But I see the following error when I try to build it:
Exception: bb.process.ExecutionError: Execution of '/home/gen-ccm-root/workdir/tools/poky/build-dev/tmp/work/mi_nhep-poky-linux-gnueabi/mi-dev/1.0-r0/temp/run.read_only_rootfs_hook.50286' failed with exit code 1:
touch: cannot touch '/home/gen-ccm-root/workdir/tools/poky/build-dev/tmp/work/mi_nhep-poky-linux-gnueabi/mi-dev/1.0-r0/rootfs/etc/machine-id': No such file or directory
WARNING: exit code 1 from a shell command.
It turns out there are two things that touch that file: the rootfs-postcommands.bbclass and the systemctl Python script (found in meta/recipes-core/systemd/systemd-systemctl/systemctl); the former (I think) is causing the error. It is failing in the do_rootfs step.
What is the best way to create this link? If there is a choice, I would rather not modify Yocto sources if that is possible.
You can do this by defining your own rootfs post-command, and appending it to ROOTFS_POSTPROCESS_COMMAND so that it runs after Yocto's built-in read_only_rootfs_hook that creates the empty /etc/machine-id file using touch.
# setup-machine-id-symlink.bbclass
ROOTFS_POSTPROCESS_COMMAND += "install_machine_id_symlink ;"
install_machine_id_symlink () {
    ln -sfn /var/local/etc/machine-id ${IMAGE_ROOTFS}/etc/machine-id
}
# your-image.bb
inherit setup-machine-id-symlink
The Image Generation docs have more detail on how postprocessing commands are applied during the build.
Note: You will need to ensure that your persistent partition is mounted early, so that reading /etc/machine-id doesn't result in a broken symlink.
Alternatively, use bind mounts:
You could also do this at runtime by installing a systemd service that runs early in the boot sequence and bind mounts your persistent machine-id over the blank one provided by Yocto in the rootfs.
Using a systemd service (rather than a bind mount entry in /etc/fstab) is necessary because you will need to ensure the persistent machine-id file actually exists before creating the bind mount. Though, you may be able to make use of tmpfiles.d to do that instead.
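For the service-based route, here is a rough sketch of such a unit (the unit name, paths, and ordering below are assumptions to adapt to your image; /var/local is taken from the question):

# persistent-machine-id.service (hypothetical name)
[Unit]
Description=Bind mount persistent machine-id over /etc/machine-id
DefaultDependencies=no
RequiresMountsFor=/var/local
ConditionPathExists=/var/local/etc/machine-id
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount --bind /var/local/etc/machine-id /etc/machine-id

[Install]
WantedBy=sysinit.target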
Another option: after the first boot, when the machine-id has been generated, update the bootargs U-Boot environment variable to include systemd.machine_id=<id> in the kernel command line.
An ID specified in this manner has higher priority and will be used instead of the ID stored in /etc/machine-id.
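For example, from the U-Boot prompt (a sketch; the ID shown is only a placeholder for the value generated on first boot, and the exact syntax depends on your board's environment handling):

=> setenv bootargs "${bootargs} systemd.machine_id=0123456789abcdef0123456789abcdef"
=> saveenv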

What is the proper way to read an option from an INI file on a remote node with Ansible?

I am writing an Ansible role that installs and updates some specific enterprise software. I would like to compare the installed version (if it is installed) to the one I am trying to install, for various reasons, but mainly to be able to verify that installation is necessary and allowed before actually executing the installer. Both the installer package and the installation contain an INI file which lists component versions as options (component_name=version).
What is the proper way in Ansible to read some option(s) from an INI file on a remote node? As far as I understand:
The ini_file module is meant for modifying the target file, which is not what I want to do.
The ini lookup is meant for files on the controller, not on remote nodes.
I can see two possibilities here:
Use the fetch module to get the file from the remote node to the controller machine, then use the ini lookup.
Use the command or shell module, parse the INI file with grep/sed/awk, and register the output.
The first option seems unnecessarily clumsy (although I do realize I may be thinking about it in the wrong way). The second one seems a bit clumsy from another point of view (yet another INI-file parsing method), but I may be wrong here too. Right now I am leaning toward the latter, but I can't help thinking that there must be an easier and more elegant way.
Seems like a use case for facts.d.
Write a shell or Python script that inspects those INI files and dumps the required fields as a JSON object to stdout.
Place this script into /etc/ansible/facts.d/custom_soft.fact and make it executable.
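A minimal sketch of what such a script could look like (the INI path and the some_component key are made-up placeholders; point them at your software's file):

#!/bin/sh
# /etc/ansible/facts.d/custom_soft.fact
# Reads one component's version from the installation's INI file and prints
# it as a JSON object so Ansible picks it up as a local fact.
INI=/opt/custom_soft/versions.ini   # assumed location
ver=$(awk -F= '$1 == "some_component" {print $2}' "$INI")
printf '{"component_ver": "%s"}\n' "$ver"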
Then you can use these facts as follows:
- shell: install_custom_soft.sh
  when: ansible_local.custom_soft.component_ver | int > 4
If your INI files are very simple, you may be able to do the job even without a script: just make a link like this:
ln -s /etc/custom_soft/config.ini /etc/ansible/facts.d/custom_soft.fact
and all config.ini keys will be available to Ansible via ansible_local.custom_soft variable.
P.S. Despite the name "local facts", this should be done on the remote machine.

Last access time not updated?

Here is what I am trying to do: I need to know whenever a file is read or used by a tool (e.g. a compiler). I use ls to get the last accessed time with the following command:
ls -l --time=access -u --sort=time --time-style=+%H:%M:%S
or
stat "filename"
But my files' access times are not getting updated. I figured it's because of caching! Please correct me if I am wrong. So my next step was to work out how to clear the cache; researching it, I came across some variations of the following command:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
The thing is, even after I execute this command my file access time is not updated! My way of testing access time is opening the file in gedit or calling gcc on my source file.
My setup: Ubuntu 12.04 running on VMware, which is running on Windows 7.
Question: what am I missing or doing wrong that my access time is not being updated?
What you're observing is the change in the default mount option introduced in kernel 2.6.30 to improve filesystem performance.
Quoting from man mount:
relatime
    Update inode access times relative to modify or change time. Access time is only updated if the previous access time was earlier than the current modify or change time. (Similar to noatime, but doesn't break mutt or other applications that need to know if a file has been read since the last time it was modified.)
    Since Linux 2.6.30, the kernel defaults to the behavior provided by this option (unless noatime was specified), and the strictatime option is required to obtain traditional semantics. In addition, since Linux 2.6.30, the file's last access time is always updated if it is more than 1 day old.
You might be looking for the following mount option:
strictatime
    Allows to explicitly request full atime updates. This makes it possible for the kernel to default to relatime or noatime but still allow userspace to override it. For more details about the default system mount options see /proc/mounts.
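If you want the traditional behaviour while testing, one option is to remount the filesystem with strictatime (adjust the mount point to wherever your files live):
sudo mount -o remount,strictatime /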

Making checks before rsyncing external drive on OSX

I have the following issue on OSX, though I guess this could equally be filed under bash. I have several encrypted portable drives that I use to sync an offsite data store, as an on-the-go data store, etc. I keep these updated using rsync with several options including --del and an includes file.
This is currently done very statically, i.e.
rsync <options> --include-from=... /Volumes /Volumes/PortableData
where the includes file would read something like
+ /Abc/
+ /Def/
...
- *
I would like to do the following:
1. Check the correct drive is mounted and find its mount point
2. Check that all the + /...../ entries are mounted under /Volumes
3. rsync
To achieve (1) I was intending to store the UUIDs of the drives in variables in my profile so that I could search for them and find the relevant mount point: a bash function in .bashrc that takes a UUID and returns a mount point. I have seen some web entries for achieving this.
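For reference, a rough sketch of such a function (hedged: it relies on the "Mount Point" field of diskutil info output, which may differ between OSX versions; PORTABLE_DATA_UUID is a made-up variable name):

# Return the mount point of the volume with the given UUID, or nothing if not mounted.
mount_point_for_uuid() {
    local uuid="$1" mp
    mp=$(diskutil info "$uuid" 2>/dev/null | awk -F': *' '/Mount Point/ {print $2}')
    [ -n "$mp" ] && printf '%s\n' "$mp"
}

# Example: mp=$(mount_point_for_uuid "$PORTABLE_DATA_UUID") || echo "drive not mounted"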
(2) I am a little more stuck on. What is the best way of retrieving only those entries that are both + and top-level folder designations in the include file, then iterating over them to check they are mounted and readable? Again, I'm thinking of putting some of this logic in functions for re-usability.
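One possible sketch for that part (it assumes the include file uses exactly the "+ /Name/" form shown above and that everything lives under /Volumes):

# Report whether each "+ /Name/" entry in the include file is mounted and readable.
check_includes() {
    local include_file="$1"
    sed -n 's|^+ /\([^/]*\)/$|\1|p' "$include_file" | while read -r name; do
        if mount | grep -q " on /Volumes/$name " && [ -r "/Volumes/$name" ]; then
            echo "OK: /Volumes/$name"
        else
            echo "MISSING: /Volumes/$name" >&2
        fi
    done
}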
Is there a better way of achieving this? I have thought of CCC, but like the idea of scripting in bash and using rsync as it is a good way of getting to know the command line.
rsync can take a file containing a list of exclusions.
I would write a script that dumps to a text file the directories that are NOT + top-level folder designations in the include files.
You are going to want the exclusion file to look like this (you can use wildcards if it helps):
dirtoexclude1
dirtoexclude2
dirtoexclude
Then just direct an rsync to that exclusion file.
Your Rsync command will be something like this:
rsync -aP --exclude-from=rsyncexclusion.txt
-a is essentially for recursion (with some hand waving: it is archive mode, which also preserves permissions, times, and so on) and -P shows progress and keeps partially transferred files.
good luck.

using alias instead of IP in scp

I have a desktop in the office that I often need to access from home and use scp to copy files. Currently I am doing it like this
scp username@x.x.x.x ...
I want a mechanism so that I don't have to type the IP address each time I want to scp something. I was trying to do it by creating an alias, but it doesn't seem to work.
Can I give my desktop machine a name so that instead of typing the IP address I can use the name of the machine instead?
One way to deal with this is to create an entry in your ssh configuration. This can be done on a system wide basis or, if you don't have root access on this box, just for your user.
The per user configuration file is ~/.ssh/config and uses the following format
host my_desktop
hostname 11.22.33.44
This method is also nice because you can specify other options like the user name. To find out more about the options available try man ssh_config.
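With such an entry in place, scp accepts the alias directly; for example (the file name and destination path are just placeholders):
scp somefile.txt my_desktop:/home/username/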
You should have a HOSTS file on your system that's designed to do exactly that. On my Linux system, it's located at /etc/hosts. If you add a line that looks like this:
11.22.33.44 my_desktop
then all accesses to the name my_desktop will be mapped to the IP address listed. This change only affects the machine whose HOSTS file was modified, though. If you want to make it so that anybody can access an IP using a specific name, then you're looking at something a little more difficult (this is the general problem that DNS servers were designed to resolve).
Use an environment variable to hold your IP and username, then use the variable in the scp command.
user@crunchbang:~$ export mypc='myuser@x.x.x.x'
user@crunchbang:~$ scp $mypc: ......
