Ansible dynamic inventory could not resolve hostname

In Ansible (please see my repo) I have a dynamic inventory (hosts_aws_ec2.yml). It shows this:
ansible-inventory -i hosts_aws_ec2.yml --graph
@all:
  |--@aws_ec2:
  |  |--linuxweb01
  |  |--winweb01
  |--@iis:
  |  |--winweb01
  |--@linux:
  |  |--linuxweb01
  |--@nginx:
  |  |--linuxweb01
  |--@ungrouped:
  |--@webserver:
  |  |--linuxweb01
  |  |--winweb01
  |--@windows:
  |  |--winweb01
When I run any playbook, for example configure_iis_web_server.yml or ping_novars.yml in my repo, it says the host is unreachable.
ansible-playbook ping_novars.yml -i hosts_aws_ec2.yml --ask-vault-pass --limit linuxweb01
PLAY [linux] ******************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************
fatal: [linuxweb01]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linuxweb01: Name or service not known", "unreachable": true}
PLAY RECAP ********************************************************************************************************************
linuxweb01 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
ansible all -i hosts_aws_ec2.yml -m debug -a "var=ip" --ask-vault-pass shows that it finds the IP addresses from the files in the host_vars folder.
winweb01 | SUCCESS => {
    "ip": "3.92.5.126"
}
linuxweb01 | SUCCESS => {
    "ip": "52.55.134.86"
}
I used to have this working when I didn't have this in hosts_aws_ec2.yml:
hostnames:
  - tag:Name
and the files in host_vars were named after the actual public IPv4 DNS addresses, for example ec2-3-92-5-126.compute-1.amazonaws.com.yml instead of winweb01.yml. Then the inventory would list the public DNS names rather than the Name tags.
Is there any way to use the Name tag in the inventory but still provide the IP address?

I was able to make it work by adding compose to my dynamic inventory file:
hostnames:
  - tag:Name
compose:
  ansible_host: public_dns_name
Found the answer here: Displaying a custom name for a host
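
For context, a minimal hosts_aws_ec2.yml for the amazon.aws.aws_ec2 inventory plugin could look like the sketch below. The plugin line and region are my assumptions (the question only shows the hostnames and compose keys), and the tag-based groups from the graph above are omitted:
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1              # illustrative; use your own region
hostnames:
  - tag:Name               # display instances by their Name tag
compose:
  # connect via the public DNS name, since the Name tag is not
  # a resolvable address
  ansible_host: public_dns_name
With this in place, the inventory graph keeps the friendly names while ansible_host carries the address that SSH/WinRM actually connects to.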


How to hide unreachable hosts?

I need to execute some commands through the shell module, but when I run them against a group of hosts, the unreachable ones clutter the terminal output. How can I make information display only for the available hosts?
For now, running
ansible all -m shell -a "df -h"
Results in:
Mint-5302 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.53.2 port 22: No route to host",
    "unreachable": true
}
You can find the documentation here: Ignoring unreachable host errors
At the task level:
- name: Execute shell
  shell: "df -h"
  ignore_unreachable: yes
And at the play level, to ignore unreachable hosts for every task:
- hosts: all
  ignore_unreachable: yes
  tasks:
    - name: Execute shell
      shell: "df -h"
You can achieve this behavior by using the community.general.diy callback plugin.
Create an ansible.cfg file with the following content:
[defaults]
bin_ansible_callbacks = True
stdout_callback = community.general.diy
[callback_diy]
runner_on_unreachable_msg=""
Run your ad-hoc command and you will get the following output:
$ ansible -m ping 192.168.10.1
PLAY [Ansible Ad-Hoc] *************************************************************************
TASK [ping] ***********************************************************************************
PLAY RECAP ************************************************************************************
192.168.10.1 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
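
Going a step further than hiding the message, you can run the real command only on hosts that respond by probing first. This is a sketch of my own, not from the answers above; it relies on the registered result carrying the unreachable flag when ignore_unreachable is set:
- hosts: all
  gather_facts: false
  tasks:
    - name: Probe connectivity; unreachable hosts are recorded, not fatal
      ansible.builtin.ping:
      register: probe
      ignore_unreachable: yes

    - name: Run the command only where the probe succeeded
      ansible.builtin.shell: df -h
      when: not (probe.unreachable | default(false))
The when condition is evaluated on the controller, so the second task never attempts a connection to hosts that failed the probe.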

Create a file in an LXC container with an Ansible playbook

I have a playbook:
- hosts: Server-52
  gather_facts: false
  tasks:
    - name: Run a command in a container
      lxc_container:
        name: Jitsi
        container_log: true
        state: started
        container_command: |
          touch FUFUFU.txt
This playbook is supposed to create a file FUFUFU.txt in my LXC container Jitsi.
My container:
root@devel-lxd01:/etc/keepalived# lxc list
+----------+---------+------+------+-----------+-----------+-------------+
|   NAME   |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |  LOCATION   |
+----------+---------+------+------+-----------+-----------+-------------+
| Jitsi    | RUNNING |      |      | CONTAINER | 0         | devel-lxd01 |
+----------+---------+------+------+-----------+-----------+-------------+
But when I try to deploy this playbook, I get an error:
PLAY [Server-52] ***********************************************************************************************************************************
TASK [Run a command in a container] ****************************************************************************************************************
fatal: [Server-52]: FAILED! => {"changed": false, "msg": "Failed to find required executable \"lxc-create\" in paths: /root/.vscode-server/bin/3a6960b964327f0e3882ce18fcebd07ed191b316/bin:/root/.vscode-server/bin/3a6960b964327f0e3882ce18fcebd07ed191b316/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"}
PLAY RECAP *****************************************************************************************************************************************
Server-52 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Could you please tell me where I went wrong?
Looks like you are using LXD rather than LXC to run your containers.
Try using the lxd_container module rather than lxc_container for your task.
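
If the goal is only to create a file inside the running container, one alternative sketch (mine, not part of the answer above; the in-container path is illustrative and the task is not idempotent as written) is to shell out to lxc exec on the LXD host:
- hosts: Server-52
  gather_facts: false
  tasks:
    - name: Create a file inside the Jitsi container via lxc exec
      ansible.builtin.command: lxc exec Jitsi -- touch /root/FUFUFU.txt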

Gitlab CI with Ansible

I am creating a pipeline which is automatically triggered when I push my code on gitlab.com.
The project is about the provisioning of a machine.
Here is my .gitlab-ci.yml file:
ansible_build:
  image: debian:10
  script:
    - apt-get update -q -y
    - apt-get install -y ansible git openssh-server keychain
    - service ssh stop
    - service ssh start
    - cp files/<ad-hoc-created-key> key.pem && chmod 600 key.pem
    - eval `keychain --eval` > /dev/null 2>&1
    - ssh-add key.pem
    - ansible-galaxy install -r requirements.yml
    - ansible-playbook provision.yml --inventory hosts --limit local
When I push my code, the GitLab runner starts executing all the commands, but then it exits with the following error:
$ ansible-playbook provision.yml --inventory hosts --limit local
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/team/ansible/provision.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=1 failed=0
On my local PC I solved this with the ssh-copy-id <path-to-the-key> <localhost> command, but I don't know how to solve it for GitLab CI, given that it's not an environment I control.
I also tried replacing the 127.0.0.1 IP address with localhost:
ansible-playbook provision.yml --inventory hosts --limit localhost
Then it fails:
ansible-playbook provision.yml --inventory hosts --limit localhost
[WARNING] Ansible is being run in a world writable directory (/builds/teamiguana/minerva-ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: Found both group and host with same name: localhost
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\r\nroot@localhost: Permission denied (publickey,password).", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/teamiguana/minerva-ansible/provision.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0
I don't have experience setting up something similar, but my first thought would be to check:
What system user is Gitlab trying to SSH as?
What system user has the corresponding public keys on the remote hosts?
You can override which user Ansible connects with either in the playbooks, or via --user <user> command-line flag, see https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-u.
Though maybe I'm misunderstanding, because I just noticed that you've set --limit local in your command?
You may try adding the environment variable ANSIBLE_TRANSPORT with the value "local" to your ansible-playbook command, like this:
ANSIBLE_TRANSPORT=local ansible-playbook provision.yml --inventory hosts --limit local
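
Equivalently, you can pin the connection type in the inventory itself so the CI job never attempts SSH for this host; a minimal sketch, assuming your hosts file defines the local group used by --limit local:
[local]
127.0.0.1 ansible_connection=local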

Encrypting ansible inventory file

I want to encrypt my Ansible inventory file using Ansible Vault, as it contains IPs, passwords, key file paths, etc., which I do not want to keep in a readable format.
This is what I have tried.
My folder structure looks like below:
env/
  hosts
  hosts_details
plays/
  test.yml
files/
  vault_pass.txt
env/hosts
[server-a]
server-a-name
[server-b]
server-b-name
[webserver:children]
server-a
server-b
env/hosts_details (file which I want to encrypt)
[server-a:vars]
env_name=server-a
ansible_ssh_user=root
ansible_ssh_host=10.0.0.1
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
[server-b:vars]
env_name=server-b
ansible_ssh_user=root
ansible_ssh_host=10.0.0.2
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
test.yml
---
- hosts: webserver
  tasks:
    - name: Print Hello world
      debug:
        msg: "Hello World"
Execution without encryption runs successfully without any errors
ansible-playbook -i env/ test.yml
When I encrypt my env/hosts_details file with the vault password in files/vault_pass.txt and then execute the playbook, I get the error below:
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt
PLAY [webserver]
******************************************************************
TASK [setup]
*******************************************************************
Thursday 10 August 2017 11:21:01 +0100 (0:00:00.053) 0:00:00.053 *******
fatal: [server-a-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-a-name: Name or service not known\r\n", "unreachable": true}
fatal: [server-b-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-b-name: Name or service not known\r\n", "unreachable": true}
PLAY RECAP
*********************************************************************
server-a-name : ok=0 changed=0 unreachable=1 failed=0
server-b-name : ok=0 changed=0 unreachable=1 failed=0
I want to know if I am missing anything, or whether it is possible to encrypt an inventory file at all.
Is there any alternative for achieving the same?
As far as I know, you can't encrypt inventory files.
You should use group vars files instead.
Place your variables into ./env/group_vars/server-a.yml and server-b.yml in YAML format:
env_name: server-a
ansible_ssh_user: root
ansible_ssh_host: 10.0.0.1
ansible_ssh_private_key_file: ~/.ssh/xyz-key.pem
And encrypt server-a.yml and server-b.yml.
This way your inventory (hosts file) will be in plain text, but all inventory (host and group) variables will be encrypted.
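
Concretely, with the asker's layout the encryption and the run would look like this, reusing the existing files/vault_pass.txt (a sketch; adjust paths to your tree):
ansible-vault encrypt env/group_vars/server-a.yml env/group_vars/server-b.yml --vault-password-file files/vault_pass.txt
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt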

Ansible variable precedence

So I have this playbook, and in this playbook I have a variable; let's call it the_var.
This variable should always have the value default, except for certain inventories where it should be not_default.
This is how I did it: under group_vars/all.yml I put the_var: default, and in inventories/my_special_inv I put the_var=not_default (under [all:vars]).
When I run ansible-playbook -i inventories/my_special_inv I expect the variable's value to be not_default (since I overrode the default behaviour with the inventory file), but it is set to default.
How do I implement this behaviour correctly?
This happens because variables set in the inventory file itself ([all:vars]) have lower precedence than group_vars/all.yml, so the group_vars value wins. A working pattern is to keep a separate group_vars directory next to each inventory. I am giving you a working example that will help you understand it better:
.
|-- default
|   |-- group_vars
|   |   `-- server.yml
|   `-- inventory
|-- site.yml
|-- special
|   |-- group_vars
|   |   `-- server.yml
|   `-- inventory
In this example I have just tested it against localhost, so inside both special/inventory and default/inventory I have this group, but you can put whatever suits your needs:
[server]
localhost
The important thing is the group name: it must match the file name under default/group_vars and special/group_vars (in my case it is server, but in your case it can be anything).
So in default/group_vars/server.yml, I have placed:
---
the_var: default
and in special/group_vars/server.yml, I have placed:
---
the_var: not_default
My test playbook (site.yml in this case) has:
---
- hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ the_var }}"
Now when I call the playbook against the default inventory, I get this value:
ansible-playbook -i default site.yml -c local
PLAY [all] *********************************************************************
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "default"
}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
And when I call the playbook against the special inventory, I get this value:
ansible-playbook -i special site.yml -c local
PLAY [all] *********************************************************************
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "not_default"
}
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
-c local is for a local connection, which you won't need in your live environment; I am sure you are working on remote hosts with an SSH connection, which is the default. Hope it helps.
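
To check which value a host will actually receive before running a play, you can dump its merged variables with ansible-inventory; a usage sketch based on the layout above, with the output I would expect:
ansible-inventory -i special --host localhost
{
    "the_var": "not_default"
}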
