How to list all OpenSSL certificates through ansible-playbook

I want to know how to write an Ansible playbook that lists all the OpenSSL certificates on the localhost along with their expiration dates.

As I will have a similar use case, and probably more in the future, I've set up a short test in a RHEL 7.9 environment.
Since I am behind a proxy, I had to specify it in Bash for the CLI as well as for the Ansible modules.
export HTTP_PROXY="http://localhost:3128"
export HTTPS_PROXY="http://localhost:3128"
The x509_certificate_info module comes from the community.crypto collection, so it has to be installed first.
ansible-galaxy collection install community.crypto # --ignore-certs
Process install dependency map
Starting collection install process
Installing 'community.crypto:1.9.3' to '/home/${USER}/.ansible/collections/ansible_collections/community/crypto'
The collection path also needs to be added to the library path in ansible.cfg.
vi ansible.cfg
...
[defaults]
library = /usr/share/ansible/plugins/modules:~/.ansible/plugins/modules:~/.ansible/collections/ansible_collections/
...
On the remote host, the Python cryptography library needs to be installed.
- name: Make sure dependencies are resolved
  pip:
    name: cryptography
    extra_args: '--index-url https://"{{ ansible_user }}":"{{ API_KEY }}"@repository.example.com/artifactory/api/pypi/pypi-remote/simple --upgrade'
  tags: check_certs
... I am using an internal private repository service even for Python packages, so I do not need to specify the proxy for the pip module; instead I set the index-url here rather than in /etc/pip.conf. If the proxy were needed, it could be set on the task via environment:
environment:
  HTTP_PROXY: "localhost:3128"
  HTTPS_PROXY: "localhost:3128"
Then it is simply possible to gather and list certificate information.
- name: Get certificate information
  community.crypto.x509_certificate_info:
    path: /etc/pki/ca-trust/source/anchors/DC_com_DC_example_CA_cert.pem
  register: cert_info
  tags: check_certs

- name: Show certificate information
  debug:
    msg: "{{ cert_info.not_after }}"
  tags: check_certs
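The question asks about all certificates, not just a single file. As a hedged sketch (the anchors directory and the *.pem pattern are assumptions about where the certificates live), the same module can be looped over the result of a find task:

- name: Find certificate files
  find:
    paths: /etc/pki/ca-trust/source/anchors
    patterns: '*.pem'
  register: cert_files
  tags: check_certs

- name: Get certificate information for every file found
  community.crypto.x509_certificate_info:
    path: "{{ item.path }}"
  loop: "{{ cert_files.files }}"
  register: all_cert_info
  tags: check_certs

- name: Show path and expiration date per certificate
  debug:
    msg: "{{ item.item.path }} expires {{ item.not_after }}"
  loop: "{{ all_cert_info.results }}"
  tags: check_certs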
... Due to some interoperability problems, I found it best to stick to the POSIX portable filename character set for certificate file names.
Further reading, implementation details, and enhancements I'll leave to you.
Documentation:
Ansible pip_module
Ansible x509_certificate_module

Related

Is it safe to use Ansible shell module and gather_facts: no?

I was writing an Ansible playbook to update the Linux headers and build essentials on a Debian system, but the playbook hung at the "gathering facts" step. After a lot of searching on the internet I introduced gather_facts: no, and then it ran successfully. But I want to know:
Is it OK to use gather_facts: no? Please give some understanding of gather_facts and what it does internally.
To get the kernel version I used the shell module. Is there any other Ansible command to get the kernel version of the host machine?
- hosts: DEV1
  become: yes
  gather_facts: no
  tasks:
    - name: "Getting the debian kernal version"
      shell:
        cmd: uname -r
      register: kernal_version_output
    - name: "Debug content for kernal version"
      debug:
        msg: "kernal version is => {{ kernal_version_output.stdout }}"
    - name: "Update apt-get repo and cache"
      apt:
        name:
          - libreadline-gplv2-dev
          - libreadline-dev
          - linux-headers-{{ kernal_version_output.stdout }}
          - build-essential
Regarding your questions:
Please give some understanding of gather_facts
You may have a look at "Discovering variables: facts and magic variables" in the playbook documentation and at the setup_module.
Quote: "With Ansible you can retrieve or discover certain variables containing information about your remote systems or about Ansible itself. Variables related to remote systems are called facts."
... and what it does internally?
For Ansible ad-hoc commands and the setup_module, the source code of setup.py is a good starting point for researching how it works internally. In summary, it just executes some scripts to collect certain system information.
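For example, the same fact collection can be triggered ad hoc; the filter parameter is a standard option of the setup module:

# gather only the kernel fact from the hosts in group DEV1
ansible DEV1 -m setup -a 'filter=ansible_kernel'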
Is it OK to use gather_facts: no?
Yes, of course. If you don't need specific information about the remote system to perform your tasks, and if your commands and playbook do not depend on gathered information, you can leave it switched off.
To get the kernel version I used the shell module. Is there any other Ansible command to get the kernel version of the host machine?
Even if that is an anti-pattern in Ansible, without facts there seems to be no other possibility.
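For completeness, a minimal sketch of the fact-based alternative when fact gathering is left enabled; ansible_kernel is a standard fact reported by the setup module:

- hosts: DEV1
  gather_facts: yes
  tasks:
    - name: Show the kernel version from gathered facts
      debug:
        msg: "kernel version is => {{ ansible_kernel }}"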

Ansible development modules - where to download

How do I install/download the Ansible development modules?
https://docs.ansible.com/ansible/devel/modules/list_of_windows_modules.html
# rpm -qa |grep ansib
ansible-2.6.20-1.el7ae.noarch
# cat win-list-services.yml
---
- name: Get info for all installed services
  hosts: '{{ host }}'
  gather_facts: no
  vars:
    execute: false
  tasks:
    - name: Get info for all installed services
      win_service_info:
      register: servicelist
# ansible-playbook -v win-list-services.yml
Using /etc/ansible/ansible.cfg as config file
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to be in '/root/playbook/win-list-services.yml': line 8, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- name: Get info for all installed services
^ here
It appears that the Windows modules are part of the move to Ansible collections, so you may be able to run them using a "normal" Ansible 2.9 install after following the collection install instructions.
The pragmatic implication is that it is unlikely you can follow Zeitounator's instructions, since those Windows modules no longer live in the ansible repo, so pip install -e will not provide them (unless you use a git SHA earlier than the current devel).
However, either way, Ansible 2.6 as shown in your question is quite old, so you will want to get on a modern version anyway.
So far, I have found out that we can download it from Ansible Galaxy.
win_service_info is available from the collection below.
https://galaxy.ansible.com/ansible/windows
It requires Ansible 2.9, as described by mdaniel.
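As a hedged sketch of what that looks like on Ansible 2.9 or later (the '{{ host }}' pattern is reused from the original playbook), install the collection and reference the module by its fully qualified name:

ansible-galaxy collection install ansible.windows

- name: Get info for all installed services
  hosts: '{{ host }}'
  gather_facts: no
  tasks:
    - name: Get info for all installed services
      ansible.windows.win_service_info:
      register: servicelist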

Error while using vars_prompt in ansible playbook [duplicate]

Ansible shows an error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
What is wrong?
The exact transcript is:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in 'playbook.yml': line 10, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: My task name
^ here
Reason #1
You are using an older version of Ansible which did not have the module you try to run.
How to check it?
Open the list of modules documentation and find the page for your module.
Read the header at the top of the page - it usually shows the Ansible version in which the module was introduced. For example:
New in version 2.2.
Ensure you are running the specified version of Ansible or later. Run:
ansible-playbook --version
And check the output. It should show something like:
ansible-playbook 2.4.1.0
Reason #2
You tried to write a role and put a playbook in my_role/tasks/main.yml.
The tasks/main.yml file should contain only a list of tasks. If you specified:
---
- name: Configure servers
  hosts: my_hosts
  tasks:
    - name: My first task
      my_module:
        parameter1: value1
Ansible tries to find an action module named hosts and an action module named tasks; it doesn't find them, so it throws an error.
Solution: specify only a list of tasks in the tasks/main.yml file:
---
- name: My first task
  my_module:
    parameter1: value1
Reason #3
The action module name is misspelled.
This is pretty obvious, but easily overlooked. If you use an incorrect module name, for example users instead of user, Ansible will report "no action detected in task".
Ansible was designed as a highly extensible system. It does not have a fixed set of modules you can run, so it cannot check the spelling of every action module "in advance".
In fact, you can write and then specify your own module named qLQn1BHxzirz and Ansible has to respect that. As it is an interpreted language, it "discovers" the error only when it tries to execute the task.
Reason #4
You are trying to execute a module not distributed with Ansible.
The action module name is correct, but it is not a standard module distributed with Ansible.
If you are using a module provided by a third party (a vendor of software/hardware, or another module shared publicly), you must first download the module and place it in an appropriate directory.
You can place it either in a modules subdirectory of the playbook or in a common path.
Ansible checks the ANSIBLE_LIBRARY environment variable and the --module-path command-line argument.
To check what paths are valid, run:
ansible-playbook --version
and check the value of:
configured module search path =
Ansible version 2.4 and later should provide a list of paths.
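As an illustration (the ./library path below is just an assumption about where you dropped the downloaded module), a third-party module can be made visible either per run or via the environment:

# point Ansible at the directory containing the downloaded module
ansible-playbook playbook.yml --module-path ./library

# or persistently for the shell session
export ANSIBLE_LIBRARY=./library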
Reason #5
You really don't have any action inside the task.
The task must have some action module defined. The following example is not valid:
- name: My task
  become: true
I can't really improve upon techraf's answer (https://stackoverflow.com/a/47159200/619760).
I wanted to add reason #6, my special case.
Reason #6
Incorrectly using roles: to import/include roles as a subtask.
This does not work; you cannot include roles this way, as subtasks in a play:
---
- hosts: somehosts
  tasks:
    - name: include somerole
      roles:
        - somerole
Use include_role instead.
According to the documentation, you can now use roles inline with any other tasks using import_role or include_role:
- hosts: webservers
  tasks:
    - debug:
        msg: "before we run our role"
    - import_role:
        name: example
    - include_role:
        name: example
    - debug:
        msg: "after we ran our role"
Put the roles at the right place, inline with hosts. Either include the roles at the top:

---
- hosts: somehosts
  roles:
    - somerole

or import/include them as tasks:

- hosts: some host
  tasks:
    - name: some static task
      import_role:
        name: somerole
    - include_role:
        name: example
You need to understand the difference between import/include, i.e. static vs. dynamic.
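A minimal sketch of that difference, reusing the example role name from the docs snippet above (run_example is a hypothetical variable): import_role is expanded statically when the playbook is parsed, while include_role is evaluated at runtime and can therefore sit behind loops and runtime conditions:

- hosts: webservers
  tasks:
    # static: expanded when the playbook is parsed
    - import_role:
        name: example

    # dynamic: evaluated at runtime, so loops and conditions work here
    - include_role:
        name: example
      loop: [one, two]
      when: run_example | default(false)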
I got this error when I referenced the debug task as ansible.builtin.debug
This caused a syntax failure in CI (but worked locally):
- name: "Echo the jenkins job template"
ansible.builtin.debug:
var: template_xml
verbosity: 1
Works locally and in CI:
- name: "Echo the jenkins job template"
debug:
var: template_xml
verbosity: 1
I believe, but have not confirmed, that the difference between local and CI was the Ansible version:
Local: 2.10
CI: 2.7
Explanation of the error:
"no action detected in task" means Ansible cannot perform the action described in your playbook.
Root cause:
the installed version of Ansible doesn't support it.
How to check:
ansible --version
Solution:
upgrade Ansible to a version which supports the feature you are trying to use.
How to upgrade Ansible:
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#selecting-an-ansible-version-to-install
Quick instruction for Ubuntu :
sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
P.S.: I followed this path and upgraded from version 2.0.2 to 2.9.
After the upgrade, the same playbook worked like a charm.
For me the problem occurred with the systemd module. It turned out that my ansible --version was 2.0.0.2, and the module was first introduced in version 2.2. Updating Ansible to the latest version fixed the problem.
playbook.yaml
- name: "Enable and start docker service and ensure it's not masked"
systemd:
name: docker
state: started
enabled: yes
masked: no
Error
ERROR! no action detected in task
etc..
etc..
etc..
- name: "Enable and start docker service and ensure it's not masked"
^ here
In my case this was the fix:
ansible-galaxy collection install ansible.posix

How to set an Ansible tier-specific inventory variable?

Is it possible to create an Ansible inventory variable that isn't associated with an inventory host or group, but rather with the server tier the playbook is run on? I have an Ansible role that installs the libffi-dev package using APT, but I may want to install a different version of that package on each server tier. I've created a variable "libffi-dev_ver" for that purpose. I also have the inventory files "development", "staging", and "production" that correspond to each of my tiers.
My role's main task, which is called from my main site.yml playbook, checks that version variable exists prior to running the role:
# roles/misc_libs/tasks/main.yml
- include: check_vars.yml tags=misc_libs
- include: misc_libs.yml tags=misc_libs
check_vars.yml checks to ensure that the package version variable exists:
# roles/misc_libs/tasks/check_vars.yml
- name: check that required role variables are set
  fail: msg="{{ item }} is not defined"
  when: not {{ item }}
  with_items:
    - libffi-dev_ver
The misc_libs role actually uses that variable to install the package:
# roles/misc_libs/tasks/misc_libs.yml
- name: install libffi-dev
  apt: >
    pkg=libffi-dev={{ libffi-dev_ver }}
    update_cache=yes
    cache_valid_time=3600
  become: True
My development inventory file looks like this:
# development
[webservers]
web01.example.com ansible_ssh_host=<ip_address>
[dev:children]
webservers
[webservers:vars]
libffi-dev_ver="3.1-2+b2"
When I run this command:
ansible-playbook -i development site.yml -l webservers
I get this Ansible error:
fatal: [web01.example.com] => error while evaluating conditional: not libffi-dev_ver
What is the correct way to declare a package versioning variable like this in Ansible? The variable's value depends on the server tier, which suggests it belongs in an inventory file, since inventory files are server tier-specific. But all inventory variables seem to have to be associated with a host or group. I've done that, but the role still doesn't see the variable.
I could add a task to the role that detects the server tier and uses a "when" conditional to set the variable accordingly, but that solution seems ugly: if you're installing multiple packages in a role, you'd need three conditionals for each package version variable. I've looked through the Ansible documentation and read numerous blog posts on setting up multi-tier playbooks, but I don't see this particular situation addressed. What's the right way to declare a tier-specific variable like this?
The problem was that the variable 'libffi-dev_ver' I declared is actually a Jinja2 identifier that must adhere to Python 2.x naming rules. The '-' (dash) is an invalid character according to these rules. Once I changed it to an '_' (underscore), I no longer got the error.
Also, the check_vars.yml playbook is actually unnecessary. There is an Ansible configuration option, error_on_undefined_vars, which causes steps containing an undefined variable to fail. Since it is true by default, I don't need to run check_vars.yml; all variables are already being checked.
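For reference, that option lives in ansible.cfg; the value below is the default and is shown only for illustration:

# ansible.cfg
[defaults]
error_on_undefined_vars = True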
One place to declare server tier-specific variables seems to be a file in the group_vars directory with the same name as the group that is named after the tier in your inventory file. In my case, the 'development' inventory file contains a 'dev' child group, and this group contains the web server where I want to install the libffi-dev package. Therefore, I created a file 'group_vars/dev' and declared a variable in it called 'libffi_dev_ver', which I can reference in my misc_libs.yml playbook.
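A minimal sketch of that layout, reusing the version string from the development inventory above (the underscore name reflects the rename described earlier):

# group_vars/dev
libffi_dev_ver: "3.1-2+b2"

# roles/misc_libs/tasks/misc_libs.yml
- name: install libffi-dev
  apt:
    pkg: "libffi-dev={{ libffi_dev_ver }}"
    update_cache: yes
    cache_valid_time: 3600
  become: True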
I don't get what you are attempting to accomplish. Why is:
# roles/misc_libs/tasks/misc_libs.yml
- name: install libffi-dev
  apt: >
    pkg=libffi-dev={{ libffi-dev_ver }}
    update_cache=yes
    cache_valid_time=3600
  become: True
  when: libffi-dev_ver is defined
not enough?

ansible - transferring files and changing ownership

I'm new to Ansible. The following is my requirement:
1. Transfer files (.tar.gz) from one host to many machines (38+ nodes) under /tmp as user1
2. Log in to each machine as user2 and switch to the root user using sudo su - (with password)
3. Extract it to another directory (/opt/monitor)
4. Change a configuration in the file (/opt/monitor/etc/config -> host= )
5. Start the process under /opt/monitor/init.d
For this, should I use playbooks or ad-hoc commands?
I'd be happy to use ad-hoc mode in Ansible, as I'm afraid of playbooks.
Thanks in advance
You'd have to write several ad hoc commands to accomplish this, and I don't see any good reason not to use a playbook here. You will want to learn about playbooks, but it's not much more to learn than the ad hoc commands. The sudo parts are taken care of for you by using the -b option to "become" the root user via sudo. Ansible takes care of the logging in for you via SSH.
The modules you'll want to make use of are the common ones for this type of setup, where you're installing something from source: yum, get_url, unarchive, service. As an example, here's a pretty similar process to what you need, demonstrating installing Redis from source on a RedHat-family system:
- name: install yum dependencies for redis
  yum: name=jemalloc-devel ... state=present

- name: get redis from file server
  get_url: url={{s3uri}}/common/{{redis}}.tar.gz dest={{tmp}}

- name: extract redis
  unarchive: copy=no src={{tmp}}/{{redis}}.tar.gz dest={{tmp}} creates={{tmp}}/{{redis}}

- name: build redis
  command: chdir={{tmp}}/{{redis}} creates=/usr/local/bin/redis-server make install

- name: copy custom systemd redis.service
  copy: src=myredis.service dest=/usr/lib/systemd/system/

# and logrotate, redis.conf, etc

- name: enable myredis service
  service: name=myredis state=started enabled=yes
You could define custom variables like tmp and redis in a group_vars/all.yaml file. You'll also want a site.yaml file to define your hosts and role(s).
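A minimal sketch of such a site.yaml, assuming your 38+ nodes sit in an inventory group called monitor_nodes and the tasks above are collected in a role named monitor (both names are assumptions):

# site.yaml
- hosts: monitor_nodes
  become: yes
  roles:
    - monitor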
You’d invoke the playbook with something like:
ansible-playbook site.yaml -b --ask-become-pass -v
This can operate on your 38+ nodes as easily as on one.
You'll want a playbook to do this. At the simplest level, since you mention unpacking, it might look something like this:
- name: copy & unpack the file
unarchive: src=/path/to/file/on/local/host
dest=/path/to/target
copy=yes
- name: copy custom config
copy: src=/path/to/src/file
dest=/path/to/target
- name: Enable service
service: name=foo enabled=yes state=started
