When using an SSH proxy and the network_cli connection method, I've been getting an "Error reading SSH protocol banner" error in Ansible.
It seems like it may be the banner_timeout setting: if I write a netmiko script I can set it and it works, but I don't think I can set that in a playbook.
This is what the playbook looks like:
- name: playbook
  hosts: target_dev_hostname
  gather_facts: no
  connection: network_cli
  vars:
    ansible_user: username
    ansible_ssh_pass: password
    ansible_become_pass: password
    ansible_network_os: ios
  tasks:
    - name: set mode for private key
      file:
        path: jump_host.pem
        mode: '0400'
      delegate_to: localhost
    - name: config proxy
      set_fact:
        ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i /jump_host.pem -W %h:%p -q jump_user@jump_host.me.com"'
      delegate_to: localhost
    - name: basic show run
      ios_command:
        commands: show run
      register: running_config
    - name: show run results
      debug: var=running_config
Any suggestions? This issue seems to have been popping up in Ansible since about a year ago; I found it reported here:
https://github.com/ansible-collections/ansible.netcommon/issues/46
I should add that this same playbook works in Ansible 2.5.1.
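For reference, two things I'm planning to try, though it's only my assumption that they relate to the banner timeout: the network_cli connection plugin documents ansible_connect_timeout / ansible_command_timeout for its persistent connection, and the ProxyCommand can be declared in the play's vars (or inventory) instead of via set_fact, so it is already in place when the first persistent connection is opened:
- name: playbook
  hosts: target_dev_hostname
  gather_facts: no
  connection: network_cli
  vars:
    ansible_user: username
    ansible_ssh_pass: password
    ansible_network_os: ios
    # assumption: a slow proxied session may simply need more time to present its banner
    ansible_connect_timeout: 60
    ansible_command_timeout: 60
    ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i /jump_host.pem -W %h:%p -q jump_user@jump_host.me.com"'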
Related
I have three hosts:
my local ansible controller
a jump/bastion host (jump_host) for my infrastructure
a target host I want to run ansible tasks against (target_host) which is only accessible through jump_host
As part of my inventory file, I have the details of both jump_host and target_host as follows:
jump_host:
  ansible_host: "{{ jump_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
  ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q root@{{ jump_host_ip }}"'
How can we configure Ansible to use the password from the jump_host settings in the inventory file instead of relying on additional configuration in the ~/.ssh/config file?
There is no direct way to provide the password for the jump host as part of the ProxyCommand.
So, I ended up doing the following:
# Generate SSH keys on the controller
- hosts: localhost
  become: false
  tasks:
    - name: Generate the localhost ssh keys
      community.crypto.openssh_keypair:
        path: ~/.ssh/id_rsa
        force: no

# Copy the controller's public key into the jump_host .ssh/authorized_keys file
# to ensure that no password is prompted while logging into jump_host
- hosts: jump_host
  become: false
  tasks:
    - name: make sure public key exists on target for user
      ansible.posix.authorized_key:
        user: "{{ ansible_user }}"
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        state: present
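With the controller's public key now authorized on jump_host, the target_host entry from the inventory above works unchanged, since the ProxyCommand hop no longer prompts for a password. If you want to point the hop at the generated key explicitly (the -i flag is my addition, assuming the key created in the first play), it could look like this:
target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
  # assumption: ~/.ssh/id_rsa is the keypair generated on the controller above
  ansible_ssh_common_args: '-o ProxyCommand="ssh -i ~/.ssh/id_rsa -W %h:%p -q root@{{ jump_host_ip }}"'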
I want to provision a new VPS. The way this is typically done is: 1) try to log in as a non-root user, and 2) if that fails, perform the provisioning.
But I can't connect. I can't even log in as root. (I can ssh from the shell, so the password is correct.)
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: set root password
      set_fact: ansible_password={{ ROOT_PASSWORD }}
    - name: try login with password
      local_action: "command ssh -q -o BatchMode=yes -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'"
      ignore_errors: true
      changed_when: false
    # more stuff here...
I tried the following, but none of them connect:
I stored the password in a variable like above
I prompted for the password using ansible-playbook -k playbook.yml
I moved the password to the inventory file
[server]
42.42.42.42 ansible_user=root ansible_password=foo
I added the ssh flag -o PreferredAuthentications=password to force password auth
But none of the above connects. I always get the error
root@42.42.42.42: Permission denied (publickey,password).
If I remove -o BatchMode=yes then it prompts me for a password, and does connect. But that prevents automation; the idea is to do this without user intervention.
What am I doing wrong?
This is a new VPS with nothing set up yet, so I'm looking for the simplest possible example of a playbook that connects as root using a password.
You're close. The variable is ansible_ssh_password, not ansible_ssh_pass. The variables with _ssh in the name are legacy names, so you can just use ansible_user and ansible_password instead.
If I have an inventory like this:
[server]
example ansible_host=192.168.122.148 ansible_user=root ansible_password=secret
Then I can run this command successfully:
$ ansible all -i hosts -m ping
example | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
If the above ad-hoc command works correctly, then a playbook should work correctly as well. E.g., still assuming the above inventory, I can use the following playbook:
---
- hosts: all
  gather_facts: false
  tasks:
    - ping:
And I can call it like this:
$ ansible-playbook playbook.yml -i hosts
PLAY [all] ***************************************************************************
TASK [ping] **************************************************************************
ok: [example]
PLAY RECAP ***************************************************************************
example : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
...and it all works just fine.
Try using --ask-become-pass
ansible-playbook -k playbook.yml --ask-become-pass
That way it's not hardcoded.
Also, inside the playbook you can set:
---
- hosts: all
  become: true
  gather_facts: no
All SO answers and blog articles I've seen so far recommend doing it the way I've shown.
But after spending much time on this, I don't believe it can work that way, so I don't understand why it is always recommended. Ansible has changed its API many times, so maybe that approach is simply outdated.
So I came up with an alternative, using sshpass:
hosts:
[server]
42.42.42.42
playbook.yml:
---
- hosts: all
  vars:
    ROOT_PASSWORD: foo
  gather_facts: no
  tasks:
    - name: try login with password (using out-of-band ssh connection)
      local_action: command sshpass -p {{ ROOT_PASSWORD }} ssh -q -o ConnectTimeout=3 root@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      register: exists_user

    - name: ping if above succeeded (using in-band ssh connection)
      remote_user: root
      when: exists_user is success
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        - name: ping
          ping:
            data: pong
This is just a tiny proof of concept.
The actual use case is to try to connect with a non-root user and, if that fails, to provision the server. The above is the starting point for such a playbook.
Unlike @larsks' excellent alternative, this does not assume Python is installed on the remote host, and it performs the ssh connection test out of band, assisted by sshpass.
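To illustrate that use case, here is a rough sketch of how the proof of concept could be extended (the deploy user name and the DEPLOY_PASSWORD variable are hypothetical placeholders, not part of the playbook above):
    - name: try login as the non-root user (out-of-band ssh connection)
      local_action: command sshpass -p {{ DEPLOY_PASSWORD }} ssh -q -o ConnectTimeout=3 deploy@{{ inventory_hostname }} 'echo ok'
      ignore_errors: true
      changed_when: false
      register: exists_user

    - name: provision as root only when the non-root user does not exist yet
      remote_user: root
      when: exists_user is failed
      block:
        - name: set root ssh password
          set_fact:
            ansible_password: "{{ ROOT_PASSWORD }}"
        # ... provisioning tasks here: create the deploy user, install its key, etc. ...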
I'm provisioning a new server, and want to automatically add its public key to my local known_hosts file. My server is running on port 2222.
hosts:
[remotes]
my_server ansible_host=42.42.42.42 ansible_port=2222
playbook.yml:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: get host key
      local_action: command ssh-keyscan -t rsa -p {{ ansible_port }} -H {{ ansible_host }}
      register: host_key
    - name: add host key
      when: host_key is success
      delegate_to: localhost
      known_hosts:
        name: "{{ item }}"
        state: present
        hash_host: yes
        key: "{{ host_key.stdout }}"
      with_items:
        - "{{ ansible_host }}"
        - "{{ inventory_hostname }}"
This adds new entries to known_hosts.
BUT ssh -p 2222 42.42.42.42 and ssh -p 2222 my_server still show the unknown key warning.
I suspect it's because 1) I'm running on a non-standard port (the docs for the known_hosts module don't show an option for setting the port), or 2) it's something to do with the hashing option.
How do I do this?
I found a solution buried in an old issue. The trick is to use [host]:port instead of host.
---
- hosts: all
  gather_facts: no
  tasks:
    # add entry to known_hosts for server's IP address
    - name: get host key
      local_action: command ssh-keyscan -t rsa -p {{ ansible_port }} -H {{ ansible_host }}
      register: host_key
    - name: add host key
      when: host_key is success
      delegate_to: localhost
      known_hosts:
        name: "[{{ ansible_host }}]:{{ ansible_port }}"   # <--- here
        state: present
        hash_host: yes
        key: "{{ host_key.stdout }}"
    # add entry to known_hosts for server's hostname
    - name: get host key
      local_action: command ssh-keyscan -t rsa -p {{ ansible_port }} -H {{ inventory_hostname }}
      register: host_key
    - name: add host key
      when: host_key is success
      delegate_to: localhost
      known_hosts:
        name: "[{{ inventory_hostname }}]:{{ ansible_port }}"   # <--- here
        state: present
        hash_host: yes
        key: "{{ host_key.stdout }}"
I couldn't find a way to avoid the repetition, because with_items can't be applied to multiple tasks at once; it's ugly, but it works.
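For what it's worth, one way the duplication could probably be collapsed (an untested sketch, relying on the fact that registering a variable inside a loop produces a .results list):
    - name: get host keys
      local_action: command ssh-keyscan -t rsa -p {{ ansible_port }} -H {{ item }}
      register: host_keys
      ignore_errors: true
      with_items:
        - "{{ ansible_host }}"
        - "{{ inventory_hostname }}"
    - name: add host keys
      when: item is success
      delegate_to: localhost
      known_hosts:
        name: "[{{ item.item }}]:{{ ansible_port }}"
        state: present
        hash_host: yes
        key: "{{ item.stdout }}"
      with_items: "{{ host_keys.results }}"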
This allows ssh -p 2222 42.42.42.42 and ssh -p 2222 my_server without prompts (though my_server must be defined in /etc/hosts and/or ~/.ssh/config).
I just want to ping a host (a DNS host) to check reachability. It looks like there is no proper way to do this? I'm not sure. Below is my playbook with net_ping:
---
- name: Set User
  hosts: web_servers
  gather_facts: false
  become: false
  vars:
    ansible_network_os: linux
  tasks:
    - name: Pinging Host
      net_ping:
        dest: 10.250.30.11
But,
TASK [Pinging Host] *******************************************************************************************************************
task path: /home/veeru/PycharmProjects/Miscellaneous/tests/ping_test.yml:10
ok: [10.250.30.11] => {
    "changed": false,
    "msg": "Could not find implementation module net_ping for linux"
}
With the ping module:
---
- name: Set User
  hosts: dns
  gather_facts: false
  become: false
  tasks:
    - name: Pinging Host
      action: ping
Looks like it is trying to SSH into the IP (checked in verbose mode). I don't know why. How can I do an ICMP ping? I also don't want to put the DNS IP in the inventory.
UPDATE1:
Hmm, looks like there is no support for linux in ansible_network_os.
https://www.reddit.com/r/ansible/comments/9dn5ff/possible_values_for_ansible_network_os/
You can use the ping command:
---
- hosts: all
  gather_facts: False
  connection: local
  tasks:
    - name: ping
      shell: ping -c 1 -w 2 8.8.8.8
      ignore_errors: true
Try using the delegate_to directive to specify that this task should be executed on localhost. Ansible may be trying to connect to those devices to execute the ping shell command. The following code sample works for me.
tasks:
  - name: ping test
    shell: ping -c 1 -w 2 {{ ansible_host }}
    delegate_to: localhost
    ignore_errors: true
You can also run/check ping latency with the following ad-hoc command in Ansible:
ansible all -m shell -a "ping -c3 google.com"
I need to make the script prompt for the enable password after entering unprivileged mode on Cisco IOS. So far this is what I have, and it works, but I don't want to put my real "enable" password anywhere on my computer.
---
- name: Basic Show Commands
  hosts: cisco
  gather_facts: False
  connection: local
  tasks:
    - name: show run
      ios_command:
        commands:
          - show run
        provider:
          authorize: yes
          auth_pass: my_enable_password
      register: show_run
    - debug:
        var: show_run.stdout_lines
    - name: copy show run to file
      local_action: copy content={{show_run.stdout[0]}} dest=/mnt/c/Ansible/show_run
I run the playbook as follows:
ansible-playbook -u my_username -k /mnt/c/Ansible/show_run.yaml
How do I make this happen?
This is a very old thread, but for the sake of someone new searching for the answer, I did this:
ansible-playbook -u my_username --ask-pass --ask-become-pass /mnt/c/Ansible/show_run.yaml
Also, in the hosts file:
[host_group:vars]
ansible_become=yes
ansible_become_method=enable
ansible_network_os=ios
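For reference, with those inventory variables in place the playbook itself can drop the older provider block and use the network_cli connection with become instead. A sketch based on the standard ios_command module; adjust module and collection names for your Ansible version:
---
- name: Basic Show Commands
  hosts: cisco
  gather_facts: false
  connection: network_cli
  become: yes
  become_method: enable
  tasks:
    - name: show run
      ios_command:
        commands:
          - show run
      register: show_run
    - debug:
        var: show_run.stdout_lines
Run it with ansible-playbook -u my_username --ask-pass --ask-become-pass as above, and the become password prompt supplies the enable secret.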
An option "to make the script prompt for enable password" would be to use vars_prompt. See the example below:
...
vars_prompt:
  - name: "my_enable_password"
    prompt: "Cisco auth_pass:"
tasks:
  ...
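The prompted value then needs to be referenced with Jinja wherever the password is used; for the playbook above, that would mean something like this (sketch):
    - name: show run
      ios_command:
        commands:
          - show run
        provider:
          authorize: yes
          auth_pass: "{{ my_enable_password }}"
      register: show_run
(When using become_method: enable instead of the provider block, the same value can be supplied as ansible_become_password.)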