Mixing Ansible static and dynamic groups in inventory - amazon-ec2

I'm trying to combine static and dynamic (EC2) inventory. I have two EC2 instances:
an Ansible control machine
an AMI-based host
I'm trying to ping the 'ami' host from the control machine. Here's my hosts file:
[local]
localhost ansible_connection=local
[tag_Name_ami]
[tag_Name_redhat]
[amazon:children]
tag_Name_ami
tag_Name_redhat
To successfully ping the 'ami' host, I need to use two specific variables:
ansible_ssh_user: ec2-user (my control machine is ubuntu)
ansible_ssh_private_key_file: /home/ubuntu/.ssh/klucze.pem
I'm trying to achieve this by creating files in the group_vars directory:
.
├── demo_setup.yml
├── ec2.ini
├── ec2.py
├── group_vars
│   ├── amazon.yml
│   ├── aws-redhats
│   ├── tag_Name_ami.yml
│   └── tag_Name_redhat.yml
├── hosts
├── hosts.bckp
└── host_vars
$ cat group_vars/tag_Name_ami.yml
ansible_ssh_user: ec2-user
$ cat group_vars/amazon.yml
ansible_ssh_private_key_file: /home/ubuntu/.ssh/klucze.pem
The problem is that Ansible seems to "see" only tag_Name_ami.yml with ansible_ssh_user, ignoring my amazon.yml with the ansible_ssh_private_key_file value. Some output below:
$ ansible tag_Name_ami -i ec2.py -m ping -vvv
<52.59.246.244> ESTABLISH CONNECTION FOR USER: ec2-user
<52.59.246.244> REMOTE_MODULE ping
<52.59.246.244> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 52.59.246.244 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1452256637.43-34398544897068 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1452256637.43-34398544897068 && echo $HOME/.ansible/tmp/ansible-tmp-1452256637.43-34398544897068'
52.59.246.244 | FAILED => SSH Error: Permission denied (publickey).
while connecting to 52.59.246.244:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
$ ansible amazon -i ec2.py -m ping
No hosts matched
$
When I add ansible_ssh_private_key_file to my tag_Name_ami.yml, the ping is successful:
$ ansible tag_Name_ami -i ec2.py -m ping -vvv
<52.59.246.244> ESTABLISH CONNECTION FOR USER: ec2-user
<52.59.246.244> REMOTE_MODULE ping
<52.59.246.244> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r" -o IdentityFile="/home/ubuntu/.ssh/klucze.pem" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 52.59.246.244 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436 && echo $HOME/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436'
<52.59.246.244> PUT /tmp/tmpbFP5sH TO /home/ec2-user/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436/ping
<52.59.246.244> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/ubuntu/.ansible/cp/ansible-ssh-%h-%p-%r" -o IdentityFile="/home/ubuntu/.ssh/klucze.pem" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 52.59.246.244 /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436/ping; rm -rf /home/ec2-user/.ansible/tmp/ansible-tmp-1452256765.34-42269843852436/ >/dev/null 2>&1'
52.59.246.244 | success >> {
"changed": false,
"ping": "pong"
}
$
ubuntu@ip-172-31-20-41:/etc/ansible$ cat group_vars/tag_Name_ami.yml
ansible_ssh_user: ec2-user
ansible_ssh_private_key_file: /home/ubuntu/.ssh/klucze.pem
But that's not what I want. I want every new EC2 instance to have the ansible_ssh_private_key_file variable defined (it'll be part of the 'amazon' static group), and the ami/redhat instances to additionally have ansible_ssh_user defined.
Thanks in advance for any help provided!
*********** UPDATE ****************
All I've been able to achieve is doing it this way:
$ ansible-playbook demo_ping.yml --private-key=/home/ubuntu/.ssh/klucze.pem -u ec2-user
PLAY [webserver] **************************************************************
GATHERING FACTS ***************************************************************
ok: [ec2-54-93-114-191.eu-central-1.compute.amazonaws.com]
TASK: [Execute ping] **********************************************************
ok: [ec2-54-93-114-191.eu-central-1.compute.amazonaws.com]
PLAY RECAP ********************************************************************
ec2-54-93-114-191.eu-central-1.compute.amazonaws.com : ok=2 changed=0 unreachable=0 failed=0
This uses my static hosts file with the webserver group. The playbook looks like:
---
- hosts: amazon
  remote_user: ec2-user
  tasks:
    - name: Execute ping
      ping:
...
Putting 'amazon' as the hosts value in the playbook returns an error:
PLAY [amazon] *****************************************************************
skipping: no hosts matched
I also tried executing the playbook with '-i ec2.py'; same error.

You could run a play over the ec2 hosts and set the variable ansible_ssh_private_key_file with set_fact in the playbook.
- hosts: amazon
  gather_facts: false
  tasks:
    - set_fact:
        ansible_ssh_private_key_file: '/home/ubuntu/.ssh/klucze.pem'
...
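The 'No hosts matched' output above also suggests that only ec2.py was used as the inventory, so the static 'amazon' parent group from the hosts file never existed in that run. A common pattern (my suggestion, not from the original answer; the directory name is an assumption) is to point -i at a directory containing both the static hosts file and ec2.py, so Ansible merges the two sources:

$ mkdir inventory
$ mv hosts ec2.py ec2.ini inventory/
$ ansible amazon -i inventory -m ping

With the merged inventory, hosts discovered by ec2.py under tag_Name_ami and tag_Name_redhat become children of 'amazon', and group_vars/amazon.yml should then supply ansible_ssh_private_key_file to them (for ad-hoc runs, group_vars may need to sit next to or inside the inventory directory).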

Related

Ansible error while trying to change "command: sudo ..." to a module with become

I have a simple playbook that just restarts a service:
- hosts: rmq-node2.lan
  gather_facts: no
  tasks:
    - name: Restart RabbitMQ
      become: yes
      become_method: sudo
      systemd:
        name: rabbitmq-server
        state: restarted
        force: yes
Inventory:
rabbit:
  hosts:
    rmq-node1.lan: {}
all:
  vars:
    ansible_user: usbp-deploy-adt
    ansible_password: q12345
    ansible_become_pass: "{{ ansible_password }}"
It gives me the following error:
fatal: [rmq-node2.lan]: FAILED! => {"ansible_facts":
{"discovered_interpreter_python": "/usr/bin/python"},
"changed": false,
"module_stderr": "Shared connection to rmq-node2.lan closed.\r\n",
"module_stdout": "\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1}
-vvv mode:
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'echo ~usbp-deploy-adt && sleep 0'"'"''
<rmq-node2.lan> (0, b'/home/usbp-deploy-adt\n', b'')
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368 `" && echo ansible-tmp-1617921012.2138484-192947282332368="` echo /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368 `" ) && sleep 0'"'"''
<rmq-node2.lan> (0, b'ansible-tmp-1617921012.2138484-192947282332368=/home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368\n', b'')
<rmq-node2.lan> Attempting python interpreter discovery
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'echo PLATFORM; uname; echo FOUND; command -v '"'"'"'"'"'"'"'"'/usr/bin/python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python3.5'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.7'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python2.6'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/libexec/platform-python'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'/usr/bin/python3'"'"'"'"'"'"'"'"'; command -v '"'"'"'"'"'"'"'"'python'"'"'"'"'"'"'"'"'; echo ENDFOUND && sleep 0'"'"''
<rmq-node2.lan> (0, b'PLATFORM\nLinux\nFOUND\n/usr/bin/python\n/usr/bin/python3.6\n/usr/bin/python2.7\n/usr/libexec/platform-python\n/usr/bin/python3\n/usr/bin/python\nENDFOUND\n', b'')
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<rmq-node2.lan> (0, b'{"osrelease_content": "NAME=\\"Red Hat Enterprise Linux Server\\"\\nVERSION=\\"7.9 (Maipo)\\"\\nID=\\"rhel\\"\\nID_LIKE=\\"fedora\\"\\nVARIANT=\\"Server\\"\\nVARIANT_ID=\\"server\\"\\nVERSION_ID=\\"7.9\\"\\nPRETTY_NAME=\\"Red Hat Enterprise Linux\\"\\nANSI_COLOR=\\"0;31\\"\\nCPE_NAME=\\"cpe:/o:redhat:enterprise_linux:7.9:GA:server\\"\\nHOME_URL=\\"https://www.redhat.com/\\"\\nBUG_REPORT_URL=\\"https://bugzilla.redhat.com/\\"\\n\\nREDHAT_BUGZILLA_PRODUCT=\\"Red Hat Enterprise Linux 7\\"\\nREDHAT_BUGZILLA_PRODUCT_VERSION=7.9\\nREDHAT_SUPPORT_PRODUCT=\\"Red Hat Enterprise Linux\\"\\nREDHAT_SUPPORT_PRODUCT_VERSION=\\"7.9\\"\\n", "platform_dist_result": ["redhat", "7.9", "Maipo"]}\n', b'')
Using module file /usr/lib/python3.7/site-packages/ansible/modules/system/systemd.py
<rmq-node2.lan> PUT /root/.ansible/tmp/ansible-local-2929y0v8kody/tmpl7fkd6zd TO /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/AnsiballZ_systemd.py
<rmq-node2.lan> SSH: EXEC sshpass -d9 sftp -o BatchMode=no -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 '[rmq-node2.lan]'
<rmq-node2.lan> (0, b'sftp> put /root/.ansible/tmp/ansible-local-2929y0v8kody/tmpl7fkd6zd /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/AnsiballZ_systemd.py\n', b'')
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'chmod u+x /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/ /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/AnsiballZ_systemd.py && sleep 0'"'"''
<rmq-node2.lan> (0, b'', b'')
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 -tt rmq-node2.lan '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=qejohkgmxxluzqnxhpvqakuitlgmqaoe] password:" -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-qejohkgmxxluzqnxhpvqakuitlgmqaoe ; /usr/bin/python /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/AnsiballZ_systemd.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<rmq-node2.lan> (1, b'\r\n', b'Shared connection to rmq-node2.lan closed.\r\n')
<rmq-node2.lan> Failed to connect to the host via ssh: Shared connection to rmq-node2.lan closed.
<rmq-node2.lan> ESTABLISH SSH CONNECTION FOR USER: usbp-deploy-adt
<rmq-node2.lan> SSH: EXEC sshpass -d9 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'User="usbp-deploy-adt"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3f3f334328 rmq-node2.lan '/bin/sh -c '"'"'rm -f -r /home/usbp-deploy-adt/.ansible/tmp/ansible-tmp-1617921012.2138484-192947282332368/ > /dev/null 2>&1 && sleep 0'"'"''
<rmq-node2.lan> (0, b'', b'')
fatal: [rmq-node2.lan]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to rmq-node2.lan closed.\r\n",
"module_stdout": "\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
However, if I change the playbook to the following:
- hosts: rmq-node2.lan
  tasks:
    - name: Restart RabbitMQ
      command: "sudo systemctl restart rabbitmq-server"
everything works just fine.
How can I avoid using command/shell/etc. with sudo and replace it with built-in modules and become?
The error appears with both Python 2 and Python 3, on Ansible versions 2.9.1 and 2.9.10.
Edit 1: sudoers on a remote machine (comments are omitted):
Defaults !visiblepw
Defaults always_set_home
Defaults match_group_by_gid
Defaults always_query_group_plugin
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
root ALL=(ALL) ALL
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Read drop-in files from /etc/sudoers.d (the # here does not mean a comment)
#includedir /etc/sudoers.d
usbp-deploy-adt ALL=(ALL)NOPASSWD:/usr/bin/journalctl *
usbp-deploy-adt ALL=(rabbitmq) ALL
usbp-deploy-adt ALL=(ALL)NOPASSWD:/usr/bin/systemctl * rabbitmq-server
#usbp-deploy-adt ALL=(ALL:ALL) ALL
Edit 2: switching become_method to su:
fatal: [rmq-node2.lan]: FAILED! => {
"msg": "Incorrect su password"
}
I would check whether there are any issues with the privilege escalation method run by Ansible.
Is your user part of the wheel group, or set up in the sudoers file (depending on the Linux distribution)?
The default privilege escalation method is sudo, which you can change by passing --become-method=METHOD when running the ansible command (substitute METHOD with su, for example, to see if the behavior changes).
You might have to add a password for this test, with the parameter --ask-become-pass.
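For example (the playbook name here is hypothetical):

$ ansible-playbook restart_rabbitmq.yml --become-method=su --ask-become-pass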
If changing the privilege escalation method works, I would post some more information about the distribution and the sudoers/wheel configuration, to make sure the user is configured properly.
Here is some documentation about become in ansible:
https://docs.ansible.com/ansible/latest/user_guide/become.html
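A quick diagnostic on the remote host (my suggestion, not from the thread): the sudoers in Edit 1 only whitelists specific journalctl and systemctl invocations, while Ansible's become wraps the module in a /bin/sh call run as root (visible in the -vvv output above), so comparing what sudo permits against what become needs can confirm whether that's the gap:

[usbp-deploy-adt@rmq-node2 ~]$ sudo -l
[usbp-deploy-adt@rmq-node2 ~]$ sudo /bin/sh -c 'echo ok'   # roughly what become runs

If the second command is refused, module-based tasks cannot escalate even though the whitelisted 'sudo systemctl restart rabbitmq-server' works.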

Ansible hangs on action in handler, but works fine with action in task (reloading pf)

I'm attempting to reload pf as part of a role that provisions a FreeBSD server, after copying a new pf.conf to the system. When I do this step independently, as a task in its own playbook, it works flawlessly. However, when I have exactly the same action as a handler, Ansible always hangs during the execution of that handler.
The play that succeeds:
- hosts: tag_Name_web  # all ec2 instances tagged with web
  gather_facts: True
  vars:
    ansible_python_interpreter: /usr/local/bin/python2.7
    ansible_become_pass: xxx
  tasks:
    - name: copy pf.conf
      copy:
        src: pf.template
        dest: /etc/pf.conf
      become: yes
      become_method: su
    - name: reload pf
      shell: /sbin/pfctl -f /etc/pf.conf
      become: yes
      become_method: su
    - name: echo
      shell: echo "test"
      become: yes
      become_method: su
(I included the echo as a test, as I thought it might be succeeding because the reload was the last thing the play was doing, but it works fine).
The handler, which fails, is:
# handlers file for jail_host
- name: Start iocage
  command: service iocage start
- name: Reload sshd
  service: name=sshd state=reloaded
- name: Reload pf
  shell: "/sbin/pfctl -f /etc/pf.conf"
The handler definitely gets called, and it starts to work, and then it just hangs. (When I run pfctl -sa on the system, it shows that the new pf.conf was actually loaded. So it's working; it just never returns, which keeps the rest of the Ansible run from happening.)
Below is the debug output of the handler running, but I don't see any errors that I can make sense of. There is no timeout as far as I can tell; I've let it run for 30 minutes before hitting Ctrl-C.
RUNNING HANDLER [JoergFiedler.freebsd-jail-host : Reload pf] *******************
Using module file /usr/local/lib/python2.7/site-packages/ansible/modules/core/commands/command.py
<54.244.77.100> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<54.244.77.100> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/usr/local/etc/ansible/xxx_aws.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 54.244.77.100 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700 `" && echo ansible-tmp-1487698172.0-93173364920700="` echo ~/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700 `" ) && sleep 0'"'"''
<54.244.77.100> PUT /tmp/tmpBrFVdu TO /home/ec2-user/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700/command.py
<54.244.77.100> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/usr/local/etc/ansible/xxx_aws.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r '[54.244.77.100]'
<54.244.77.100> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<54.244.77.100> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/usr/local/etc/ansible/xxx_aws.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 54.244.77.100 '/bin/sh -c '"'"'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700/ /home/ec2-user/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700/command.py && sleep 0'"'"''
<54.244.77.100> ESTABLISH SSH CONNECTION FOR USER: ec2-user
<54.244.77.100> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o 'IdentityFile="/usr/local/etc/ansible/xxx_aws.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ec2-user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r -tt 54.244.77.100 '/bin/sh -c '"'"'su root -c '"'"'"'"'"'"'"'"'/bin/sh -c '"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-cntrcxqxlwicicvwtinmaadrnzzzujfp; /usr/local/bin/python2.7 /home/ec2-user/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700/command.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1487698172.0-93173364920700/" > /dev/null 2>&1'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"'"''"'"'"'"'"'"'"'"' && sleep 0'"'"''
I've also tried a lot of other ways of reloading pf: using the service module, using command: service pf reload, and they all have exactly the same effect. I've also attempted to make the handler async, with
- name: Reload pf
  shell: "/sbin/pfctl -f /etc/pf.conf"
  async: 1
  poll: 0
with no change.
Does anyone have an idea as to why my role with the handler fails, while a straightforward play with tasks succeeds? And more importantly, how can I get the handler to work properly?
Thanks in advance!
(I should note I'm using Ansible 2.2.1).
This seems to be more an issue with pf than with Ansible. Try your playbook again, but this time use this in your pf rules:
pass all
You can also test by logging in to the instance and running:
/sbin/pfctl -Fa -f /etc/pf.conf.all
where /etc/pf.conf.all contains pass all; it should not log you out, and your current session should remain active.
What is probably happening is that your pf rules drop or flush existing connections when applied, so your SSH session (and therefore Ansible) hangs.
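If pass all confirms that theory, a minimal sketch of a ruleset that keeps the management SSH session alive across reloads might look like this (assumptions: SSH on port 22, and your real policy goes where pass all sits):

# keep stateful SSH entries so a reload doesn't cut off Ansible
set skip on lo0
pass in quick proto tcp from any to any port 22 keep state
pass all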
Maybe you need the following in your handler(s)?
become: yes
become_method: su
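Putting the pieces together, the handler would look like this (a sketch assembled from the snippets above):

- name: Reload pf
  shell: "/sbin/pfctl -f /etc/pf.conf"
  become: yes
  become_method: su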

ansible unable to connect to centos

When I try connecting to the CentOS server, I get the following error:
boby@hon-pc-01:~/www/ansible $ ansible centos -vvv -i hosts -a "uname -a"
Using /home/boby/www/ansible/ansible.cfg as config file
<root@209.236.74.192:3333> ESTABLISH SSH CONNECTION FOR USER: root
<root@209.236.74.192:3333> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/boby/.ansible/cp/ansible-ssh-%h-%p-%r -tt root@209.236.74.192:3333 'mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1484629049.5-55764328572466 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1484629049.5-55764328572466 )"'
root@209.236.74.192:3333 | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue",
"unreachable": true
}
boby@hon-pc-01:~/www/ansible $
I am able to connect to the Debian server without any issue:
boby@hon-pc-01:~/www/ansible $ ansible ubuntu -vvv -i hosts -a "uname -a"
Using /home/boby/www/ansible/ansible.cfg as config file
<vm705n> ESTABLISH SSH CONNECTION FOR USER: root
<vm705n> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o Port=3333 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/boby/.ansible/cp/ansible-ssh-%h-%p-%r -tt vm705n 'mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1484629067.62-202068262196976 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1484629067.62-202068262196976 )"'
<vm705n> PUT /tmp/tmpWzw_nH TO /root/.ansible/tmp/ansible-tmp-1484629067.62-202068262196976/command
<vm705n> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o Port=3333 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/boby/.ansible/cp/ansible-ssh-%h-%p-%r '[vm705n]'
<vm705n> ESTABLISH SSH CONNECTION FOR USER: root
<vm705n> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o Port=3333 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/boby/.ansible/cp/ansible-ssh-%h-%p-%r -tt vm705n 'LANG=en_IN LC_ALL=en_IN LC_MESSAGES=en_IN /usr/bin/python /root/.ansible/tmp/ansible-tmp-1484629067.62-202068262196976/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1484629067.62-202068262196976/" > /dev/null 2>&1'
vm705n | SUCCESS | rc=0 >>
Linux hon-vpn 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
boby@hon-pc-01:~/www/ansible $
Here is my hosts file
boby@hon-pc-01:~/www/ansible $ cat hosts
[ubuntu]
vm705n:3333
[centos]
root@209.236.74.192:3333
boby@hon-pc-01:~/www/ansible $
Any idea why it is not working for the CentOS 6 server?
EDIT
I got it fixed. The problem was the root@ prefix in the hosts file. For some reason, the SSH command did not pick up port 3333 when root@ was present in the hosts file.
The problem was in the hosts file:
boby@hon-pc-01:~/www/ansible $ cat hosts
[ubuntu]
vm705n:3333
[centos]
root@209.236.74.192:3333
boby@hon-pc-01:~/www/ansible $
Replaced root@209.236.74.192:3333 with 209.236.74.192:3333 and it started working.
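If the connection still needs to run as root, an alternative (my sketch, not from the original post) is to drop the inline prefix and set the user as an inventory variable instead, which leaves the port parsing unaffected:

[centos]
209.236.74.192:3333 ansible_user=root

(On older Ansible releases the variable was called ansible_ssh_user.)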

Ansible not picking up proxy settings

I am trying to run an Ansible job on a remote host. But for that to happen, I need to go through a proxy.
Proxy server is: 142.133.134.161
Proxy port is: 1088
My playbook is simple for now:
---
- hosts: LAB1
  tasks:
    - name: Copy file
      template: src=/tmp/file1 dest=/tmp/file1
My environment file is:
[LAB1]
10.169.99.189
10.169.99.190
My ansible.cfg file is:
Host 10.169.99.*
ProxyCommand nc -x 142.133.134.161:1088 %h %p
But when I run a job, it says "Connection timed out":
[root@vm1 ANSIBLE]# ansible -i /root/ANSIBLE/env/target LAB1 -m ping
10.169.99.190 | FAILED => SSH Error: ssh: connect to host 10.169.99.190 port 22: Connection timed out
while connecting to 10.169.99.190:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
10.169.99.189 | FAILED => SSH Error: ssh: connect to host 10.169.99.189 port 22: Connection timed out
while connecting to 10.169.99.189:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
When I run this in debug mode:
[root@vm1 ANSIBLE]# ansible -i /root/ANSIBLE/env/target LAB1 -m ping -vvvvv
<10.169.99.190> ESTABLISH CONNECTION FOR USER: msdp
<10.169.99.190> REMOTE_MODULE ping
<10.169.99.189> ESTABLISH CONNECTION FOR USER: msdp
<10.169.99.189> REMOTE_MODULE ping
<10.169.99.190> EXEC sshpass -d8 ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=msdp -o ConnectTimeout=10 10.169.99.190 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1473612082.62-116308097993503 && echo $HOME/.ansible/tmp/ansible-tmp-1473612082.62-116308097993503'
<10.169.99.189> EXEC sshpass -d9 ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o StrictHostKeyChecking=no -o GSSAPIAuthentication=no -o PubkeyAuthentication=no -o User=msdp -o ConnectTimeout=10 10.169.99.189 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1473612082.63-269107268980760 && echo $HOME/.ansible/tmp/ansible-tmp-1473612082.63-269107268980760'
10.169.99.189 | FAILED => SSH Error: ssh: connect to host 10.169.99.189 port 22: Connection timed out
while connecting to 10.169.99.189:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
10.169.99.190 | FAILED => SSH Error: ssh: connect to host 10.169.99.190 port 22: Connection timed out
while connecting to 10.169.99.190:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
This does not indicate that it is using the proxy. Is that the issue here?
Assuming your ProxyCommand syntax is correct and you want to include it in ansible.cfg, the correct way is to add it to ssh_args in the [ssh_connection] section of the file:
[ssh_connection]
ssh_args = -o ForwardAgent=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o ProxyCommand="nc -x 142.133.134.161:1088 %h %p"
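Alternatively, the snippet in the question is ssh_config syntax rather than ansible.cfg syntax, so it should also work if placed in ~/.ssh/config on the control machine (a sketch, assuming OpenSSH reads its default per-user config):

Host 10.169.99.*
    ProxyCommand nc -x 142.133.134.161:1088 %h %p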

Why is my playbook not calling up the roles?

I have an ansible playbook which I call from my Vagrantfile:
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provision/playbook.yml"
  ansible.sudo = true
  ansible.verbose = "vvvv"
  ansible.limit = "all"
  # ansible.inventory_path = "provision/hosts"
end
This is the playbook:
---
- hosts: all
  roles:
    - common
My directory structure is:
.
├── provision
│   ├── hosts
│   ├── playbook.yml
│   └── roles
│   └── common
│   ├── install_conda.yml
│   └── reqs.yml
├── ubuntu-xenial-16.04-cloudimg-console.log
└── Vagrantfile
My problem is that when I run vagrant up, it does not run install_conda.yml and reqs.yml.
Related output:
ansible/.vagrant/provisioners/ansible/inventory --sudo -vvvv provision/playbook.yml
Using /home/lowks/.ansible.cfg as config file
Loaded callback default of type stdout, v2.0
1 plays in provision/playbook.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="/home/lowks/Projects/personal/ansible/.vagrant/machines/default/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=30 -o ControlPath=/home/lowks/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1465289160.96-19604199657213 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1465289160.96-19604199657213 )" )'
<127.0.0.1> PUT /tmp/lowks/tmpzSsrdn TO /home/ubuntu/.ansible/tmp/ansible-tmp-1465289160.96-19604199657213/setup
<127.0.0.1> SSH: EXEC sftp -b - -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="/home/lowks/Projects/personal/ansible/.vagrant/machines/default/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=30 -o ControlPath=/home/lowks/.ansible/cp/ansible-ssh-%h-%p-%r '[127.0.0.1]'
<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<127.0.0.1> SSH: EXEC ssh -C -vvv -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="/home/lowks/Projects/personal/ansible/.vagrant/machines/default/virtualbox/private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=30 -o ControlPath=/home/lowks/.ansible/cp/ansible-ssh-%h-%p-%r -tt 127.0.0.1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-xokgppdafgvlsnbystytwqbmniidqhhq; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1465289160.96-19604199657213/setup; rm -rf "/home/ubuntu/.ansible/tmp/ansible-tmp-1465289160.96-19604199657213/" > /dev/null 2>&1'"'"'"'"'"'"'"'"''"'"''
ok: [default]
PLAY RECAP *********************************************************************
default : ok=1 changed=0 unreachable=0 failed=0
install_conda.yml
---
# necessary steps to deploy the role.
- hosts: all
- name: check if already installed
  stat: path=/opt/miniconda2/bin/conda
  register: bin_conda
  changed_when: bin_conda.stat.exists == False
- name: download miniconda installer
  # sudo: no
  get_url:
    url=https://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
    dest=/tmp/miniconda.sh
    mode=0755
  register: miniconda_downloaded
  when: bin_conda.stat.exists == False
- name: install miniconda
  # sudo: no
  shell: "/tmp/miniconda.sh -b -p /opt/miniconda2 creates=/opt/miniconda2 executable=/bin/bash"
  register: miniconda_installed
  when: miniconda_downloaded | success
  notify:
    - remove miniconda setup script
    - update conda to latest version
What am I missing here?
Ansible looks for a main.yml file in roles/common/tasks.
Just add it to your folder:
.
├── provision
│   ├── hosts
│   ├── playbook.yml
│   └── roles
│       └── common
│           └── tasks
│               ├── main.yml
│               ├── install_conda.yml
│               └── reqs.yml
├── ubuntu-xenial-16.04-cloudimg-console.log
└── Vagrantfile
and include the other task files from it:
---
- include: install_conda.yml
- include: reqs.yml
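On newer Ansible releases (2.4 and later), where the bare include is deprecated, the equivalent main.yml would use import_tasks (a sketch; the behavior for this case is the same):

---
- import_tasks: install_conda.yml
- import_tasks: reqs.yml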
