How to hide unreachable hosts? - ansible

I need to execute some commands through the shell module, but when I run them against a group of hosts, the unreachable ones are displayed in the terminal. How can I make the output show information only for the available hosts?
For now, running
ansible all -m shell -a "df -h"
results in:
Mint-5302 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.53.2 port 22: No route to host",
    "unreachable": true
}

You can find the documentation here: Ignoring unreachable host errors. At the task level:
- name: Execute shell
  shell: "df -h"
  ignore_unreachable: yes
And at the playbook level, to ignore unreachable hosts on every task:
- hosts: all
  ignore_unreachable: yes
  tasks:
    - name: Execute shell
      shell: "df -h"

You can achieve this behavior by using the community.general.diy callback plugin.
Create an ansible.cfg file with the following content:
[defaults]
bin_ansible_callbacks = True
stdout_callback = community.general.diy
[callback_diy]
runner_on_unreachable_msg=""
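If the community.general collection is not already installed on the control node, add it first:
ansible-galaxy collection install community.general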
Run your ad-hoc command and you will get the following output:
$ ansible -m ping 192.168.10.1
PLAY [Ansible Ad-Hoc] *************************************************************************
TASK [ping] ***********************************************************************************
PLAY RECAP ************************************************************************************
192.168.10.1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

Related

Ansible dynamic inventory could not resolve hostname

In Ansible (please see my repo) I have a dynamic inventory (hosts_aws_ec2.yml). It shows this:
ansible-inventory -i hosts_aws_ec2.yml --graph
@all:
  |--@aws_ec2:
  |  |--linuxweb01
  |  |--winweb01
  |--@iis:
  |  |--winweb01
  |--@linux:
  |  |--linuxweb01
  |--@nginx:
  |  |--linuxweb01
  |--@ungrouped:
  |--@webserver:
  |  |--linuxweb01
  |  |--winweb01
  |--@windows:
  |  |--winweb01
When I run any playbook in my repo, for example configure_iis_web_server.yml or ping_novars.yml, it says the host is unreachable.
ansible-playbook ping_novars.yml -i hosts_aws_ec2.yml --ask-vault-pass --limit linuxweb01
PLAY [linux] ******************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
fatal: [linuxweb01]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linuxweb01: Name or service not known", "unreachable": true}
PLAY RECAP ********************************************************************************************************************
linuxweb01 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
ansible all -i hosts_aws_ec2.yml -m debug -a "var=ip" --ask-vault-pass shows that it finds the IP addresses from the files in the host_vars folder:
winweb01 | SUCCESS => {
    "ip": "3.92.5.126"
}
linuxweb01 | SUCCESS => {
    "ip": "52.55.134.86"
}
I used to have this working when I didn't have this in hosts_aws_ec2.yml:
hostnames:
  - tag:Name
and the files in host_vars were named after the actual public IPv4 DNS addresses, for example ec2-3-92-5-126.compute-1.amazonaws.com.yml instead of winweb01.yml. Then the inventory would list the public DNS names, not the Name tags.
Is there any way to use the Name tag in the inventory but still provide the IP address?
I was able to make it work by adding compose to my dynamic inventory file:
hostnames:
  - tag:Name
compose:
  ansible_host: public_dns_name
Found the answer here: Displaying a custom name for a host.
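For context, here is a minimal sketch of what the relevant part of hosts_aws_ec2.yml can look like with the amazon.aws.aws_ec2 inventory plugin (the plugin name and region are assumptions, since the repo itself isn't shown here):
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1                     # assumption: use your own region
hostnames:
  - tag:Name                      # inventory hostnames come from the Name tag
compose:
  ansible_host: public_dns_name   # connections still use the public DNS name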

Gitlab CI with Ansible

I am creating a pipeline which is automatically triggered when I push my code on gitlab.com.
The project is about the provisioning of a machine.
Here is my .gitlab-ci.yml file:
ansible_build:
  image: debian:10
  script:
    - apt-get update -q -y
    - apt-get install -y ansible git openssh-server keychain
    - service ssh stop
    - service ssh start
    - cp files/<ad-hoc-created-key> key.pem && chmod 600 key.pem
    - eval `keychain --eval` > /dev/null 2>&1
    - ssh-add key.pem
    - ansible-galaxy install -r requirements.yml
    - ansible-playbook provision.yml --inventory hosts --limit local
When I push my code, the GitLab environment starts running all the commands, but then it exits with the following error:
$ ansible-playbook provision.yml --inventory hosts --limit local
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/team/ansible/provision.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=1 failed=0
On my local PC, I solved this using the ssh-copy-id <path-to-the-key> <localhost> command, but I don't know how to solve it for GitLab CI, given that it's not an environment which I can control.
I also tried replacing the 127.0.0.1 IP address with localhost:
ansible-playbook provision.yml --inventory hosts --limit localhost
Then it fails:
ansible-playbook provision.yml --inventory hosts --limit localhost
[WARNING] Ansible is being run in a world writable directory (/builds/teamiguana/minerva-ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: Found both group and host with same name: localhost
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\r\nroot@localhost: Permission denied (publickey,password).", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/teamiguana/minerva-ansible/provision.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0
I don't have experience setting this up myself, but my first thoughts would be to check:
What system user is GitLab trying to SSH as?
What system user has the corresponding public keys on the remote hosts?
You can override which user Ansible connects as, either in the playbook or via the --user <user> command-line flag; see https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-u.
Though maybe I'm misunderstanding, because I just noticed that you've set --limit local in your command?
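For example (the user name deploy is just a placeholder):
ansible-playbook provision.yml --inventory hosts --limit local --user deploy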
You may try adding the environment variable ANSIBLE_TRANSPORT with the value "local" to your ansible-playbook command, like this:
ANSIBLE_TRANSPORT=local ansible-playbook provision.yml --inventory hosts --limit local
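Equivalently, you can mark the host as a local connection in the inventory itself, so Ansible skips SSH entirely. A sketch, assuming the hosts file defines the local group used by --limit:
[local]
127.0.0.1 ansible_connection=local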

List running java processes in a linux host via ansible playbook

I want to list the Java processes running on the mentioned hosts. However, I am not getting the desired output.
I have created an Ansible playbook to run the shell command.
- name: 'Check the running java processes'
  hosts: all
  tasks:
    - name: 'check running processes'
      shell: ps -ef | grep -i java
Output:
PLAY [Check the running java processes] ****************************************
TASK [setup] *******************************************************************
ok: [target1]
ok: [target2]
TASK [check running processes] *************************************************
changed: [target1]
changed: [target2]
PLAY RECAP *********************************************************************
target1 : ok=2 changed=1 unreachable=0 failed=0
target2 : ok=2 changed=1 unreachable=0 failed=0
However, it's not listing the processes.
You could see the result of your actual playbook by running ansible-playbook in verbose mode (-v).
However, the proper way to do this is to register the output of your task and display its content in a debug task (or use it in any other task/loop).
To know what your registered variable looks like, you can read the docs about common and shell-specific return values... or you can simply debug the full variable in your playbook and have a look at it.
Here is an example just to get the full stdout of the command:
---
- name: Check the running java processes
  hosts: all
  tasks:
    - name: Check running processes
      shell: ps -ef | grep -i java
      register: ps_cmd

    - name: Show captured processes
      debug:
        var: ps_cmd.stdout
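If you prefer one list element per line of output, the shell module also returns stdout_lines, which you can debug the same way:
    - name: Show captured processes line by line
      debug:
        var: ps_cmd.stdout_lines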

Encrypting ansible inventory file

I want to encrypt my Ansible inventory file using ansible-vault, as it contains IPs/passwords/key file paths etc., which I do not want to keep in readable form.
This is what I have tried.
My folder structure looks like this:
env/
  hosts
  hosts_details
plays/
  test.yml
files/
  vault_pass.txt
env/hosts
[server-a]
server-a-name
[server-b]
server-b-name
[webserver:children]
server-a
server-b
env/hosts_details (file which I want to encrypt)
[server-a:vars]
env_name=server-a
ansible_ssh_user=root
ansible_ssh_host=10.0.0.1
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
[server-b:vars]
env_name=server-b
ansible_ssh_user=root
ansible_ssh_host=10.0.0.2
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
test.yml
---
- hosts: webserver
  tasks:
    - name: Print Hello world
      debug:
        msg: "Hello World"
Execution without encryption runs successfully without any errors
ansible-playbook -i env/ test.yml
When I encrypt my env/hosts_details file with the vault password in files/vault_pass.txt and then execute the playbook, I get the below error:
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt
PLAY [webserver] ******************************************************************
TASK [setup] *******************************************************************
Thursday 10 August 2017 11:21:01 +0100 (0:00:00.053) 0:00:00.053 *******
fatal: [server-a-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-a-name: Name or service not known\r\n", "unreachable": true}
fatal: [server-b-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-b-name: Name or service not known\r\n", "unreachable": true}
PLAY RECAP
*********************************************************************
server-a-name : ok=0 changed=0 unreachable=1 failed=0
server-b-name : ok=0 changed=0 unreachable=1 failed=0
I want to know if I am missing anything, or whether it is possible to have the inventory file encrypted at all.
Is there any alternative for achieving the same?
As far as I know, you can't encrypt inventory files.
You should use group vars files instead.
Place your variables into ./env/group_vars/server-a.yml and server-b.yml in YAML format:
env_name: server-a
ansible_ssh_user: root
ansible_ssh_host: 10.0.0.1
ansible_ssh_private_key_file: ~/.ssh/xyz-key.pem
And encrypt server-a.yml and server-b.yml.
This way your inventory (hosts file) will be in plain text, but all inventory (host and group) variables will be encrypted.
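A minimal sketch of the corresponding commands, following the folder layout above:
ansible-vault encrypt env/group_vars/server-a.yml env/group_vars/server-b.yml --vault-password-file files/vault_pass.txt
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt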

Ansible file creation fails without any error

After running the Ansible YAML file below, the output shows the file is created and the content is changed.
The YAML File
---
- hosts: all
  gather_facts: yes
  connection: local
  tasks:
    - name: Check the date on the server.
      action: command touch /opt/b

    - name: cat the Content
      action: command cat /opt/b
Running the Playbook
root@my-ubuntu:/var/lib/awx/projects/test# ansible-playbook main.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [ansible-ubuntu-1604-db]
TASK [Check the date on the server.] *******************************************
changed: [ansible-ubuntu-1604-db]
[WARNING]: Consider using file module with state=touch rather than running touch
TASK [cat the Content] *********************************************************
changed: [ansible-ubuntu-1604-db]
PLAY RECAP *********************************************************************
ansible-ubuntu-1604-db : ok=3 changed=2 unreachable=0 failed=0
The output displays changed=2, yet the tasks didn't create any file:
ubuntu@ansible-ubuntu-1604-db:~$ ls -l /opt/
total 0
The environment:
Ansible controller on the local Mac desktop
Target node in the cloud
With connection: local in your playbook, you tell Ansible to execute all tasks on your local Ansible controller, so the file is created on your local machine.
Remove connection: local and try again.
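A corrected sketch, which also follows the warning's advice to use the file module rather than shelling out to touch:
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: Create /opt/b on the remote host
      file:
        path: /opt/b
        state: touch

    - name: cat the Content
      command: cat /opt/b
      register: cat_out

    - name: Show the Content
      debug:
        var: cat_out.stdout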
