I am trying to use Ansible to automate some tasks on our Cisco devices.
Playbook:
- name: change config
  hosts: switches
  gather_facts: false
  tasks:
    - name: add an acl
      ios_config:
        lines:
          - ip access-list standard 7 permit 172.16.1.0 0.0.0.255
Hosts file:
[switches]
sw1 ansible_host=172.16.1.1
group_vars/switches.yml:
ansible_connection: network_cli
ansible_network_os: ios
ansible_become: yes
ansible_become_method: enable
ansible_ssh_user: *****
ansible_ssh_pass: *****
ansible_become_pass: ******
If I just run an ios_command task there are no issues at all, but if I try to use ios_config to change the configuration I get the error below.
[sw1]: FAILED! => {"msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout trying to send command: enable"}
SSH to the device:
3Fb>en
HCC password:
3Fb#
We have a non-default prompt. How do I change Ansible to match it, and is there anything else that needs to be done to get this fixed? Thank you.
[sw1]: FAILED! => {"msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout trying to send command: enable"}
Q: "We have a non-default prompt, how to change on ansible to match this?"
A: IMHO, it's not related to the prompt. The prompt defaulted to None after _exec_cli_command had failed with "error: timeout trying to send command: enable". See nxos.py:
try:
    self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
    prompt = self._get_prompt()
    if prompt is None or not prompt.strip().endswith(b'enable#'):
        raise AnsibleConnectionFailure('failed to elevate privilege to enable mode still at prompt [%s]' % prompt)
except AnsibleConnectionFailure as e:
    prompt = self._get_prompt()
    raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))
You might want to start debugging at "exec_command" in connection.py
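If the device is simply slow to return the enable prompt, one thing that may be worth trying (an assumption on my part, not something confirmed above) is raising the persistent command timeout for the group, e.g. in group_vars/switches.yml:
# assumption: the enable prompt is slow rather than the enable password being wrong;
# this raises how long Ansible waits for a command such as "enable" to complete
ansible_command_timeout: 60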
Related
I'm trying to create an Ansible playbook that will work in my current work environment. I log in to servers as user "myuser" using SSH keys; I was never given a password, so I don't know it. Most of the commands I run are executed as a different non-root user, e.g. "appadmin". I become this user via "sudo su - appadmin", since I don't have the password for this user either.
The different variations I've tried either complain "sudo: a password is required" or time out after 12 seconds. I'll show the second example.
The playbook is very simple:
---
- hosts: sudo-test
  gather_facts: False
  remote_user: myuser
  become: yes
  become_user: appadmin
  tasks:
    - name: who
      shell: whoami > qwert.txt
My host entry is as follows:
[sudo-test]
appserver.example.com ansible_become_method=su ansible_become_exe="sudo su"
This is the error I get:
pablo@host=> ansible-playbook test_sudo.yml
PLAY [sudo-test] ****************************************************************************************************
TASK [who] **********************************************************************************************************
fatal: [appserver.example.com]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}
to retry, use: --limit @/home/pablo/ansible_dir/test_sudo.retry
PLAY RECAP **********************************************************************************************************
appserver.example.com : ok=0 changed=0 unreachable=0 failed=1
At this point I agree that the playbook and inventory are configured correctly. I believe the issue is that /etc/sudoers doesn't permit my "myuser" account to run su in a way that allows me to leverage Ansible's ability to become another user (a rough sketch of what actually gets executed follows below). This thread describes a similar scenario and limitation.
The relevant section of /etc/sudoers looks like this:
User myuser may run the following commands on this host:
(root) NOPASSWD: /bin/su - appadmin
It seems I would have to have the sysadmin change this to:
User myuser may run the following commands on this host:
(root) NOPASSWD: /bin/su - appadmin *
Does this sound right?
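For context on why the existing rule is too narrow: with become_method su, Ansible does not run a bare su. It wraps the module invocation in a shell command passed via -c, so what actually executes looks roughly like the sketch below (the exact quoting and success marker vary by Ansible version), which a NOPASSWD rule matching only /bin/su - appadmin with no arguments does not cover:
sudo su - appadmin -c '/bin/sh -c "echo BECOME-SUCCESS-xxxxxxxx; whoami > qwert.txt"'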
I don't find any issue with the YAML; in fact, I tested it in my Ansible 2.8 environment.
---
- hosts: node1
  gather_facts: False
  remote_user: ansible
  become: yes
  become_user: testuser
  tasks:
    - name: who
      shell: whoami
      register: output
    - debug: var=output
and inventory:
[node1]
node1.example.com ansible_become_method=su ansible_become_exe="sudo su"
output:
TASK [debug] ****************************************************************************************************************************
ok: [node1.example.com] =>
I would request you to increase the SSH timeout (uncomment the timeout line and set it to 60, or however many seconds you wish) in the ansible.cfg file and observe this scenario.
# SSH timeout
#timeout = 300
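Uncommented and set to 60 as suggested, that part of ansible.cfg would look like this (assuming it sits in the [defaults] section):
[defaults]
# SSH timeout
timeout = 60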
Try this one:
- hosts: application
  become: yes
  become_exe: "sudo su - appadmin"
  become_method: su
  tasks:
How do you add a sudo password to a delegated host?
e.g.
- hosts: host1
  tasks:
    - name: Some remote command
      command: sudo some command
      register: result
      delegate_to: host2
      become: yes
I get "Incorrect sudo password" because I assume it is using the sudo pass for host1. Is there a way to make it use a different password?
It has been a while - but I was struggling with this as well and managed to solve it so here is a summarized answer:
As pointed out correctly, the issue is that the ansible_become_password variable is still set to the password of your original host (host1) when running the delegated task on host2.
Option 1: Add become password to inventory and delegate facts
One option to solve this is to specify the become password for host2 in your inventory and secure it using Ansible Vault (as they have done here: How to specify become password for tasks delegated to localhost). Then you should be able to trigger using the correct sudo password with delegate_facts, as they did here: Ansible delegate_to "Incorrect sudo password".
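A minimal sketch of that inventory-based approach, assuming host2 is the delegated host and vault_host2_become_pass is a placeholder for a variable you keep in an ansible-vault encrypted vars file:
# host_vars/host2.yml
# vault_host2_become_pass is a placeholder, stored in an ansible-vault encrypted file
ansible_become_password: "{{ vault_host2_become_pass }}"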
Option 2: Prompt and overwrite pass manually
If you prefer to be prompted for the second sudo password instead, you can do this by using vars_prompt to specify the second sudo password at runtime:
- hosts: host1
  vars_prompt:
    - name: custom_become_pass
      prompt: enter the custom become password for host2
      private: yes
  tasks:
    ...
Then you can just replace the variable ansible_become_password before running your delegated tasks and it will use the correct sudo password:
  tasks:
    - name: do stuff on host1
      ...
    - name: set custom become
      set_fact:
        ansible_become_password: '{{ custom_become_pass }}'
    - name: perform delegated task
      command: sudo some command
      register: result
      delegate_to: host2
      become: yes
You could try to use the ansible_become_password variable directly inside the task's vars section.
Ansible doc
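For example, a sketch of that per-task variant (host2_become_pass is a placeholder for however you store the second password, e.g. a vaulted variable):
    - name: perform delegated task
      command: sudo some command
      register: result
      delegate_to: host2
      become: yes
      vars:
        # placeholder: however you store host2's sudo password (e.g. a vaulted variable)
        ansible_become_password: "{{ host2_become_pass }}"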
I am currently running Ansible 2.7 and I have the following playbook:
info.yaml
---
- hosts: routers
  gather_facts: true
  tasks:
    - name: show run
      ios_command:
        commands:
          - show running-config
      register: config
I have the following inventory file:
[local]
127.0.0.1 ansible_connection=local
[routers]
LAB-RTR-1
LAB-RTR-2
[routers:vars]
ansible_ssh_user= {{ cisco_user }}
ansible_ssh_pass= {{ cisco_pass }}
ansible_network_os=ios
ansible_connection=network_cli
ansible_become = yes
ansible_become_method = enable
anisble_become_pass = {{ auth_pass }}
I have the following in the vault:
cisco_user: "admin"
cisco_pass: "password123"
auth_pass: "password123"
When I try to run this via the CLI like this:
ansible-playbook info.yaml --ask-vault-pass -vvv
I keep getting the following errors for some reason, and I can't figure this out. I've been going crazy on this for the last few hours.
The full traceback is:
Traceback (most recent call last):
  File "/usr/bin/ansible-connection", line 106, in start
    self.connection._connect()
  File "/usr/lib/python2.7/site-packages/ansible/plugins/connection/network_cli.py", line 341, in _connect
    self._terminal.on_become(passwd=auth_pass)
  File "/usr/lib/python2.7/site-packages/ansible/plugins/terminal/ios.py", line 78, in on_become
    raise AnsibleConnectionFailure('unable to elevate privilege to enable mode, at prompt [%s] with error: %s' % (prompt, e.message))
AnsibleConnectionFailure: unable to elevate privilege to enable mode, at prompt [None] with error: timeout value 10 seconds reached while trying to send command: enable
fatal: [LAB-RTR-1]: FAILED! => {
    "msg": "unable to elevate privilege to enable mode, at prompt [None] with error: timeout value 10 seconds reached while trying to send command: enable"
}
I haven't used Ansible in a while (though the above looks OK to me), and never with Cisco, but I found an open issue that looks very similar to yours, which you may want to keep an eye on:
https://github.com/ansible/ansible/issues/51436
I need to make the script prompt for the enable password after entering unprivileged mode on Cisco IOS. So far this is what I have, which works, but I don't want to put my real "enable" password anywhere on my computer.
---
- name: Basic Show Commands
  hosts: cisco
  gather_facts: False
  connection: local
  tasks:
    - name: show run
      ios_command:
        commands:
          - show run
        provider:
          authorize: yes
          auth_pass: my_enable_password
      register: show_run
    - debug:
        var: show_run.stdout_lines
    - name: copy show run to file
      local_action: copy content={{show_run.stdout[0]}} dest=/mnt/c/Ansible/show_run
I run the playbook as follows:
ansible-playbook -u my_username -k /mnt/c/Ansible/show_run.yaml
How do I make this happen?
This is a very old thread, but for the sake of someone new searching for the answer, I did this:
ansible-playbook -u my_username --ask-pass --ask-become-pass /mnt/c/Ansible/show_run.yaml
Also, in the hosts file:
[host_group:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=ios
An option "to make the script prompt for enable password" would be to use vars_prompt. See the example below:
...
vars_prompt:
  - name: "my_enable_password"
    prompt: "Cisco auth_pass:"
tasks:
  ...
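If you go that route, the hard-coded value in the provider block from the original playbook would presumably be replaced with a reference to the prompted variable, along these lines:
provider:
  authorize: yes
  auth_pass: "{{ my_enable_password }}"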
Is there any mechanism that checks whether the SSH/sudo password is correct? When deploying a playbook across the whole environment after putting in the wrong password, Ansible runs on all hosts with the wrong password, fails, and my LDAP/AD account gets locked out.
Since, as it turns out, Ansible does not seem to have this functionality, I decided to create a workaround myself:
In site.yml, I added a role that runs on only one server and has one or optionally two tasks in it. The first checks whether the login itself works; the second checks whether sudo works.
- name: Check ssh password first
  command: echo "ssh password correct"
  changed_when: false

- name: Check sudo password first
  command: echo "sudo password correct"
  become: yes
  changed_when: false
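To wire those role tasks up so they run against a single machine first, a play along these lines could sit at the top of site.yml (password_check and checkhost are placeholder names, not taken from the original setup):
- hosts: checkhost
  gather_facts: false
  roles:
    - password_check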
As a good workaround, I usually put this in site.yml:
- hosts: all
  gather_facts: false
  tasks:
    - name: site.yml | Check if Password is correct
      become: true
      command: echo "PW is correct"
      run_once: true
      tags:
        - always
That task will always run, no matter what tags you start the playbook with, and it will check whether the SSH/sudo password works on one host before hammering all your servers with login requests.