I am currently running Ansible 2.7 and I have the following playbook:
info.yaml
---
- hosts: routers
  gather_facts: true
  tasks:
    - name: show run
      ios_command:
        commands:
          - show running-config
      register: config
I have the following inventory file:
[local]
127.0.0.1 ansible_connection=local
[routers]
LAB-RTR-1
LAB-RTR-2
[routers:vars]
ansible_ssh_user= {{ cisco_user }}
ansible_ssh_pass= {{ cisco_pass }}
ansible_network_os=ios
ansible_connection=network_cli
ansible_become = yes
ansible_become_method = enable
anisble_become_pass = {{ auth_pass }}
I have the following in the vault:
cisco_user: "admin"
cisco_pass: "password123"
auth_pass: "password123"
When I try to run this via the CLI like this:
ansible-playbook info.yaml --ask-vault-pass -vvv
I keep getting the following error, and I can't figure it out. I've been going crazy on this for the last few hours:
The full traceback is:
Traceback (most recent call last):
File "/usr/bin/ansible-connection", line 106, in start
self.connection._connect()
File "/usr/lib/python2.7/site-packages/ansible/plugins/connection/network_cli.py", line 341, in _connect
self._terminal.on_become(passwd=auth_pass)
File "/usr/lib/python2.7/site-packages/ansible/plugins/terminal/ios.py", line 78, in on_become
raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))
AnsibleConnectionFailure: unable to elevate privilege to enable mode, at prompt [None] with error: timeout value 10 seconds reached while trying to send command: enable
fatal: [LAB-RTR-1]: FAILED! => {
"msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout value 10 seconds reached while trying to send command: enable"
}
Haven't used Ansible in a while (though the above looks OK to me), and never with Cisco, but I found an open issue that looks very similar to yours, which you may want to keep an eye on:
https://github.com/ansible/ansible/issues/51436
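Incidentally, double-check the become variables in your [routers:vars]: anisble_become_pass is misspelled (the documented name is ansible_become_pass), and with that typo the enable password never reaches the device, which would produce exactly this timeout. A minimal corrected sketch of that block, keeping your vault variables:
[routers:vars]
ansible_ssh_user={{ cisco_user }}
ansible_ssh_pass={{ cisco_pass }}
ansible_network_os=ios
ansible_connection=network_cli
ansible_become=yes
ansible_become_method=enable
ansible_become_pass={{ auth_pass }}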
Related
I have an Ansible playbook that has gather_facts set to true, but it fails to look up the UID. Here is the error I am getting:
TASK [Gathering Facts] **************************************************************************************************************************************************************
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 144, in run
res = self._execute()
File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 516, in _execute
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
File "/usr/lib/python2.7/site-packages/ansible/playbook/play_context.py", line 335, in set_task_and_variable_override
new_info.remote_user = pwd.getpwuid(os.getuid()).pw_name
KeyError: 'getpwuid(): uid not found: 1001'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
Now, UID 1001 is indeed the one the process runs as:
$ echo $UID
1001
I am running this inside a container, could that be an issue? Any pointers to help debug this are appreciated. TIA.
I am running this inside a container, could that be an issue?
While that doesn't automatically make it a problem, it is relevant: it is much easier to execute a process as an arbitrary UID inside a Docker container. You don't typically see this problem on a virtual machine, because in order to run anything on the host you have to authenticate first, which almost always involves looking up user information in /etc/passwd. A container, however, usually has no "login" step, since it is just Linux namespace trickery.
You can try it yourself by running docker run --rm -u 12345 ubuntu:18.04 id -a and observing uid=12345 gid=0(root) groups=0(root); there is no entry in /etc/passwd for UID 12345 (notice the missing (something) after the uid= result).
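You can also reproduce the exact library call that Ansible makes (a quick repro, assuming Docker and the public python:2.7 image):
$ docker run --rm -u 12345 python:2.7 \
    python -c 'import pwd, os; pwd.getpwuid(os.getuid())'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
KeyError: 'getpwuid(): uid not found: 12345'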
There are two solutions. Either skip fact gathering entirely:
- hosts: all
  gather_facts: no
or add a passwd entry for the running UID before invoking Ansible, for example in CI:
ansible_test:
  image: docker.io/major/ansible:fedora29
  script:
    - echo "tempuser:x:$(id -u):$(id -g):,,,:${HOME}:/bin/bash" >> /etc/passwd
    - echo "tempuser:x:$(id -G | cut -d' ' -f 2)" >> /etc/group
    - id
    - ansible-playbook -i hosts playbook.yml
https://major.io/2019/03/22/running-ansible-in-openshift-with-arbitrary-uids/
I am trying to use Ansible to do some automation on our Cisco devices.
playbook
- name: change config
  hosts: switches
  gather_facts: false
  tasks:
    - name: add an acl
      ios_config:
        lines:
          - ip access-list standard 7 permit 172.16.1.0 0.0.0.255
hosts file
[switches]
sw1 ansible_host=172.16.1.1
group_vars/switches.yml
ansible_connection: network_cli
ansible_network_os: ios
ansible_become: yes
ansible_become_method: enable
ansible_ssh_user: *****
ansible_ssh_pass: *****
ansible_become_pass: ******
If I just run an ios_command there are no issues at all, but if I try to use ios_config to change the configuration I get the error below.
[sw1]: FAILED! => {"msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout trying to send command: enable"}
SSHing to the gear manually:
3Fb>en
HCC password:
3Fb#
We have a non-default prompt; how do we get Ansible to match it, and does anything else need to be done to fix this? Thank you.
[sw1]: FAILED! => {"msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout trying to send command: enable"}
Q: "We have a non-default prompt, how to change on ansible to match this?"
A: IMHO, it's not related to the prompt. The prompt defaulted to NONE after _exec_cli_command had failed with "error: timeout trying to send command: enable". See nxos.py
try:
    self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
    prompt = self._get_prompt()
    if prompt is None or not prompt.strip().endswith(b'enable#'):
        raise AnsibleConnectionFailure('failed to elevate privilege to enable mode still at prompt [%s]' % prompt)
except AnsibleConnectionFailure as e:
    prompt = self._get_prompt()
    raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))
You might want to start debugging at "exec_command" in connection.py
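As a side note, if the device is merely slow to present the enable prompt rather than rejecting the password, the persistent-connection timeouts can be raised in ansible.cfg; a minimal sketch:
[persistent_connection]
connect_timeout = 30
command_timeout = 30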
SSH Authentication problem
TASK [backup : Gather facts (ops)] *************************************************************************************
fatal: [10.X.X.X]: FAILED! => {"msg": " [WARNING] Ansible is being run
in a world writable directory
(/mnt/c/Users/AnirudhSomanchi/Desktop/KVM/Scripting/Ansible/network/backup),
ignoring it as an ansible.cfg source. For more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir\n{\"socket_path\":
\"/home/SaiAnirudh/.ansible/pc/87fd82198c\", \"exception\":
\"Traceback (most recent call last):\n File
\\"/usr/bin/ansible-connection\\", line 104, in start\n
self.connection._connect()\n File
\\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/network_cli.py\\",
line 327, in _connect\n ssh = self.paramiko_conn._connect()\n
File
\\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/paramiko_ssh.py\\",
line 245, in _connect\n self.ssh = SSH_CONNECTION_CACHE[cache_key]
= self._connect_uncached()\n File \\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/paramiko_ssh.py\\",
line 368, in _connect_uncached\n raise
AnsibleConnectionFailure(msg)\nAnsibleConnectionFailure: paramiko:
The authenticity of host '10.X.X.X' can't be established.\nThe
ssh-rsa key fingerprint is 4b595d868720e28de57bef23c90546ad.\n\",
\"messages\": [[\"vvvv\", \"local domain socket does not exist,
starting it\"], [\"vvvv\", \"control socket path is
/home/SaiAnirudh/.ansible/pc/87fd82198c\"], [\"vvvv\", \"loaded
cliconf plugin for network_os vyos\"], [\"log\", \"network_os is set
to vyos\"], [\"vvvv\", \"\"]], \"error\": \"paramiko: The authenticity
of host '10.X.X.X' can't be established.\nThe ssh-rsa key fingerprint
is 4b595d868720e28de57bef23c90546ad.\"}"}
Command used-- ansible-playbook playbook.yml -i hosts --ask-vault-pass
I tried setting host_key_checking = False in ansible.cfg and re-ran the same command, with no change.
=====================================================================
playbook.yml
---
- hosts: all
  gather_facts: false
  roles:
    - backup
===============================================================
Main.yml
C:\Ansible\network\backup\roles\backup\tasks\main.yml
- name: Gather facts (ops)
  vyos_facts:
    gather_subset: all

- name: execute Vyos run to initiate backup
  vyos_command:
    commands:
      - sh configuration commands | no-more
  register: v_backup

- name: local_action
  local_action:
    module: copy
    dest: "C:/Users/Desktop/KVM/Scripting/Ansible/network/backup/RESULTS/Backup.out"
    content: "{{ v_backup.stdout[0] }}"
===============================================================
Ansible.cfg
[defaults]
host_key_checking = False
===============================================================
Hosts
[all]
10.X.X.X
[all:vars]
ansible_user=
ansible_ssh_pass=
ansible_connection=network_cli
ansible_network_os=vyos
We need to back up the Vyatta devices but get the following error:
Vault password:
PLAY [all] *************************************************************************************************************
TASK [backup : Gather facts (ops)] *************************************************************************************
fatal: [10.X.X.X]: FAILED! => (the same paramiko "The authenticity of host '10.X.X.X' can't be established" error as above)
PLAY RECAP *************************************************************************************************************
10.X.X.X : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
[WARNING] Ansible is being run in a world writable directory, ignoring it as an ansible.cfg source.
I also got this warning, but it won't cause your command execution to fail; it is just a warning that your directory grants full permissions (i.e. drwxrwxrwx, 777), which is not secure. You can remove the warning by tightening the permissions with chmod (e.g. sudo chmod 755).
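For example, using the directory named in the warning:
$ sudo chmod 755 /mnt/c/Users/AnirudhSomanchi/Desktop/KVM/Scripting/Ansible/network/backup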
For me that was not working. I needed to unmount the drive and then mount it again with the metadata option:
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata
Then go to the actual folder and run:
chmod o-w .
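A quick way to confirm the remount and the permission change took effect (my own check commands):
$ mount | grep drvfs    # the C: mount should now list the metadata option
$ ls -ld .              # '.' should no longer be world-writable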
If you don't want to change permissions, you can instead append your settings to the default Ansible config file; on Linux it is /etc/ansible/ansible.cfg.
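Alternatively, the same host-key setting can be supplied per run through the documented environment variable, which sidesteps the ansible.cfg lookup entirely:
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook playbook.yml -i hosts --ask-vault-pass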
I have created an Ansible playbook which just checks the connection to target hosts on a specific port.
Here is my playbook:
- name: ACL check to target machines.
  hosts: all
  gather_facts: False
  vars:
    target_hosts:
      - server1 18089
      - server1 8089
      - server1 18000
  tasks:
    - name: execute the command
      command: "nc -vz {{ item }}"
      with_items: "{{ target_hosts }}"
The output I get when I execute the playbook contains both changed (success) and failed results. In the real scenario I have a large number of target hosts, so the output is very long. What I want is a final report at the bottom showing only the failed connections between source and target. Thanks.
I want a final report at the bottom showing only the failed connections between source and target
I believe the knob you are looking for is a stdout callback plugin. While I don't see one that does exactly what you wish, two of them seem like they may get you close:
The actionable one claims it will only emit Failed and Changed events:
$ ANSIBLE_STDOUT_CALLBACK=actionable ansible-playbook ...
Then, moving up the complexity ladder, the json one will, as its name implies, emit a record of the steps in JSON, allowing you to filter the output for exactly what you want (.failed is a boolean on each task result that indicates just that):
$ ANSIBLE_STDOUT_CALLBACK=json ansible-playbook ...
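For instance, a rough jq filter over that output (site.yml is a placeholder, and the exact JSON shape may vary by Ansible version) could list just the failing host/task pairs:
$ ANSIBLE_STDOUT_CALLBACK=json ansible-playbook site.yml | \
    jq -r '.plays[].tasks[] | .task.name as $t | .hosts | to_entries[]
           | select(.value.failed == true) | "\(.key): \($t)"'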
Then, as the "plugins" part implies, if you were so inclined you could also implement your own that does exactly what you want; there are a lot of examples provided, in addition to the docs.
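For reference, a bare-bones aggregate callback along those lines might look like this (the file name and output format are my own; enable it via callback_whitelist in ansible.cfg):
# callback_plugins/failed_report.py -- minimal, untested sketch
from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'failed_report'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self.failed = []

    def v2_runner_on_failed(self, result, ignore_errors=False):
        # remember each failure as (host, error message)
        self.failed.append((result._host.get_name(),
                            result._result.get('msg', '')))

    def v2_playbook_on_stats(self, stats):
        # print the collected failures once, after the play recap
        for host, msg in self.failed:
            self._display.display('FAILED %s: %s' % (host, msg))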
I have a similar playbook and want the same list of failed connections. What I do is:
Save the command output with register: nc
Add ignore_errors: true
In the debug tasks, show the OK and failed connections by checking the return code.
My playbook:
---
- name: test-connection
  gather_facts: false
  hosts: "{{ group_hosts }}"
  tasks:
    - name: test connection
      shell: "nc -w 5 -zv {{ inventory_hostname }} {{ listen_port }}"
      register: nc
      ignore_errors: true

    - name: ok connections
      debug: var=nc.stderr
      when: "nc.rc == 0"

    - name: failed connections
      debug: var=nc.stderr
      when: "nc.rc != 0"
An output example for the failed connections:
TASK [failed connections] **********************************************************************************************
ok: [10.1.2.100] => {
"nc.stderr": "nc: connect to 10.1.2.100 port 22 (tcp) timed out: Operation now in progress"
}
ok: [10.1.2.101] => {
"nc.stderr": "nc: connect to 10.1.2.101 port 22 (tcp) timed out: Operation now in progress"
}
ok: [10.1.2.102] => {
"nc.stderr": "nc: connect to 10.1.2.102 port 22 (tcp) timed out: Operation now in progress"
The answer by @mdaniel from 2018 Aug 18 is unfortunately no longer correct, as the 'actionable' callback has since been removed:
community.general.actionable has been removed. Use the 'default' callback plugin with 'display_skipped_hosts = no' and 'display_ok_hosts = no' options. This feature was removed from community.general in version 2.0.0. Please update your playbooks.
We are to use the default callback with some options instead. I used this in my ansible.cfg:
[defaults]
stdout_callback = default
display_skipped_hosts = no
display_ok_hosts = no
It worked for the live output but still showed all the ok and skipped hosts in the summary at the end.
Also, $ DISPLAY_SKIPPED_HOSTS=no DISPLAY_OK_HOSTS=no ansible-playbook ... didn't seem to work, which is unfortunate because I normally prefer the yaml callback and only occasionally want to override it as above.
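For per-run overrides, the documented environment variables carry the ANSIBLE_ prefix, which may be why the bare names above had no effect. A sketch:
$ ANSIBLE_STDOUT_CALLBACK=default ANSIBLE_DISPLAY_SKIPPED_HOSTS=no ANSIBLE_DISPLAY_OK_HOSTS=no ansible-playbook ...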
I am trying to use Ansible to connect to my switches and just do a show version. For some reason when I run the playbook I keep getting the error "Failed to open session", and I don't know why. I am able to SSH directly to the box with no issues.
[Ansible.cfg]
enable_task_debugger=True
hostfile=inventory
transport=paramiko
host_key_checking=False
[inventory/hosts]
127.0.0.1 ansible_connection=local
[routers]
192.168.10.1
[test.yaml]
---
- hosts: routers
  gather_facts: true
  connection: paramiko
  tasks:
    - name: show run
      ios_command:
        commands:
          - show version
Then I try to run it like this:
ansible-playbook -vvv -i inventory test.yaml -u username -k
And this is the last part of the error:
EXEC /bin/sh -c 'echo ~ && sleep 0'
fatal: [192.168.10.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to open session",
"unreachable": true
}
Ansible version is 2.4.2.0.
Please use:
connection: local
and change - hosts: routers to - hosts: localhost.
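For context, on Ansible 2.4 the ios_* modules were typically driven over connection: local with credentials passed in a provider block on the task; a minimal sketch of that shape (the credential variable names are placeholders):
---
- hosts: routers
  gather_facts: false
  connection: local
  tasks:
    - name: show version
      ios_command:
        commands:
          - show version
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ cli_user }}"
          password: "{{ cli_pass }}"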