get ansible to wait for ssh connection - ansible

How do you get Ansible to wait for or retry ssh connections? I have an Ansible task that runs govc to upload a VM into vCenter, and right after that I ssh into the machine to run commands, like this:
- hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      get_url:
        url: https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
        dest: /home/gkeadmin/govc_linux_amd64.gz
But running it right after, I get this: fatal: [139.178.66.91]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 1.2.3.4 port 22: Operation timed out", "unreachable": true}
I rerun it with --retry and then it continues. It seems the machine just needs some time before I can connect via ssh... how do I wait for an ssh connection to be established in Ansible?

Ansible supports retries. Maybe this can help you.
---
- name: test
  hosts: <hosts_name>
  tasks:
    - name: task
      <module_name>:
      register: result
      retries: 2
      delay: 5
      until: result is succeeded

You can add a play at the top of your playbook to wait for it, for example:
---
- name: wait for ssh
  hosts: vcenter
  gather_facts: false
  tasks:
    - wait_for:
        port: 22
        host: '{{ inventory_hostname }}'
      delegate_to: localhost

- name: my playbook
  hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      [ ... etc ... ]
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html#examples
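Another option, as a minimal sketch (not from the original answer), is the built-in wait_for_connection module, which retries the actual Ansible connection instead of only probing the TCP port:
- name: wait until the new VM accepts ssh
  hosts: vcenter
  gather_facts: false
  tasks:
    - name: wait for the ssh connection to become available
      wait_for_connection:
        delay: 10      # give the VM a moment to boot before the first attempt
        sleep: 5       # seconds between connection attempts
        timeout: 300   # give up after 5 minutes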

Related

Ansible: Host localhost is unreachable

At my job there is a playbook, developed in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes, which in turn calls the rest of the chain.
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from fusion role:
- name: "hc fusion"
include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from fusion role:
- name: "FUSION"
shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}#{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run.
Previously it worked, but now it throws the following error:
stdout: PLAY [localhost]
TASK [Validate]
fatal: [localhost -> gandalf#10.66.173.14]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf#10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)", "skip_reason": "Host localhost is unreachable"}
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as the user awx, it works.
I also validated the connection with the ssh command from the server where Ansible Tower runs to the host I want to reach, and it lets me connect as the user awx without asking for a password.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh type is being used. For localhost this approach usually brings a number of related problems (ssh keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost, you should set the connection plugin to local, just like in your report.yaml playbook.
Additionally, as Zeitounator mentioned, running one Ansible playbook from another with the shell module is really bad practice. Please avoid this. Ansible has a number of mechanisms for code re-use (includes, imports, roles, etc.).
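As a minimal sketch of the first suggestion (only the play header changes; the tasks stay as they are), fusion.yaml could declare the local connection explicitly:
- hosts: localhost
  connection: local      # run this play on the controller, no ssh to localhost
  gather_facts: false
  vars:
    ansible_become_user: "{{ fusion_ansible_become_user }}"
    ansible_become_pass: "{{ fusion_ansible_become_pass }}"
  tasks:
    # ... same Validate task as before ...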

changing ssh port with ansible playbook

I want to change the ssh server port on client systems to a custom one, 2202 (the port is defined in group_vars/all and also in roles/change-sshd-port/vars/main.yml). My requirement is that the playbook can also be run when the port is already set to the custom 2202 (then the playbook should do nothing).
I used an Ansible role based on this solution: https://github.com/Forcepoint/fp-pta-ansible-change-sshd-port
The port is changed fine when I run the playbook for the first time (once it completes, I can log in to the client node on the new port).
When I run the playbook again it fails, because it is trying to do some tasks via the old port 22 instead of the new port 2202:
TASK [change-sshd-port : Confirm host connection works] ********************************************************************************************************************
fatal: [192.168.170.113]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.170.113 port 22: Connection refused", "unreachable": true}
I can't figure out why it is trying to use port 22 when the ansible_port variable is set to the new port in roles/change-sshd-port/vars/main.yml:
---
# vars file for /home/me/ansible2/roles/change-sshd-port
ansible_port: 2202
The part of the role tasks file roles/change-sshd-port/tasks/main.yml up to the failing ping task is:
- name: Set configured port fact
  ansible.builtin.set_fact:
    configured_port: "{{ ansible_port }}"

- name: Check if we're using the inventory-provided SSH port
  ansible.builtin.wait_for:
    port: "{{ ansible_port }}"
    state: "started"
    host: "{{ ansible_host }}"
    connect_timeout: "5"
    timeout: "10"
  delegate_to: "localhost"
  ignore_errors: "yes"
  register: configured_ssh

- name: SSH port is configured properly
  ansible.builtin.debug:
    msg: "SSH port is configured properly"
  when: configured_ssh is defined and
        configured_ssh.state is defined and
        configured_ssh.state == "started"
  register: ssh_port_set

- name: Check if we're using the default SSH port
  ansible.builtin.wait_for:
    port: "22"
    state: "started"
    host: "{{ ansible_host }}"
    connect_timeout: "5"
    timeout: "10"
  delegate_to: "localhost"
  ignore_errors: "yes"
  register: default_ssh
  when: configured_ssh is defined and
        configured_ssh.state is undefined

- name: Set inventory ansible_port to default
  ansible.builtin.set_fact:
    ansible_port: "22"
  when: default_ssh is defined and
        "state" in default_ssh and
        default_ssh.state == "started"
  register: ssh_port_set

- name: Fail if SSH port was not auto-detected (unknown)
  ansible.builtin.fail:
    msg: "The SSH port is neither 22 or {{ ansible_port }}."
  when: ssh_port_set is undefined

- name: Confirm host connection works
  ansible.builtin.ping:
Your question is missing a bunch of details (there's no way for us to reproduce the problem from the information you've given in the question), so I'm going to have to engage in some guesswork. There are a couple of things that could be happening.
First, if you're mucking about with the ssh port in your playbooks, you're going to need to disable fact gathering on the play. By default, Ansible runs the setup module on target hosts before running the tasks in your play, and this is going to use whatever port you've configured in your inventory. If sshd is running on a different port than expected, this will fail.
Here's a playbook that ignores whatever port you have in your inventory and will successfully connect to a target host whether sshd is running on port 22 or port 2222 (it will fail with an error if sshd is not running on either of those ports):
- hosts: target
  gather_facts: false
  vars:
    desired_port: 2222
    default_port: 22
  tasks:
    - name: check if ssh is running on {{ desired_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ desired_port }}"
        host: "{{ ansible_host }}"
        timeout: 10
      ignore_errors: true
      register: desired_port_check

    - name: check if ssh is running on {{ default_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ default_port }}"
        host: "{{ ansible_host }}"
        timeout: 10
      ignore_errors: true
      register: default_port_check

    - fail:
        msg: "ssh is not running (or is running on an unknown port)"
      when: default_port_check is failed and desired_port_check is failed

    - when: default_port_check is success
      block:
        - debug:
            msg: "ssh is running on default port"
        - name: configure ansible to use port {{ default_port }}
          set_fact:
            ansible_port: "{{ default_port }}"

    - when: desired_port_check is success
      block:
        - debug:
            msg: "ssh is running on desired port"
        - name: configure ansible to use port {{ desired_port }}
          set_fact:
            ansible_port: "{{ desired_port }}"

    - name: run a command on the target host
      command: uptime
      register: uptime

    - debug:
        msg: "{{ uptime.stdout }}"

how can I use ansible to push playbooks with SSH key authentication

I am new to Ansible and am trying to push playbooks to my nodes. I would like to push via ssh keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check whether it works: ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
It gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user#ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges and I can't do sudo. That's why I use my own inventory in my home directory. To the target node where I want to push that nginx playbook, I can open an SSH connection and log in. The public key is on the remote server in /home/user/.ssh/id_ed25519.pub.
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
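As a minimal sketch (the inventory path is taken from the question; the user name is just a placeholder), the relevant part of ansible.cfg could then look like:
[defaults]
# paths and user below are placeholders - adjust to your environment
inventory        = /home/myuser/.ansible/hosts
private_key_file = /path/to/.ssh/id_ed25519
remote_user      = myuser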

Start a DB service on a remote host

In an Ansible playbook, I am getting an error while running the service module, as I want to start the DB on a remote host. I am new to Ansible.
I came up with this:
- name: This starts the MySQL Database in the host
  hosts: dbserver
  connection: ssh
  become: yes
  become_method: sudo
  tasks:
    - name: Start the DB in host
      become: yes
      become_user: root
      service:
        name: mysql
        state: started
I am getting this error:
fatal: [10.138.12.67]: FAILED! => {"changed": false, "msg": "Could not find the requested service mysql: host"}
@Zeitounator is correct; just adding a correct playbook which can be used:
- name: This starts the MySQL Database in the host
  hosts: dbserver
  become: yes
  become_user: root
  tasks:
    - name: Start the DB in host
      service:
        name: mysqld
        state: started
        enabled: yes
Here is a link where you can check the syntax: https://docs.ansible.com/ansible/latest/modules/service_module.html
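If you are not sure whether the unit on the target is called mysql, mysqld or mariadb, a small sketch like this (using the built-in service_facts module; the regex is just an example) can list the candidates before you hard-code the name:
- name: Find the actual MySQL service name on the host
  hosts: dbserver
  become: yes
  tasks:
    - name: Gather facts about installed services
      service_facts:

    - name: Show services whose name mentions mysql or mariadb
      debug:
        msg: "{{ ansible_facts.services | dict2items
                 | selectattr('key', 'search', 'mysql|mariadb')
                 | map(attribute='key') | list }}"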

Ansible local_action on host without local ssh daemon

How can I run a local command on an Ansible control server, if that control server does not have an SSH daemon running?
If I run the following playbook:
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      local_action: command echo "hello world"
I get the following error:
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host localhost port 22: Connection refused", "unreachable": true}
It seems that local_action is the same as delegate_to: 127.0.0.1, so Ansible tries to ssh to the localhost. However, there is no SSH daemon running on the local controller host (only on the remote machines).
So my immediate question is how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
Crucial addition, not in the original question:
My host_vars contained the following line:
ansible_connection: ssh
how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
connection: local is sufficient to make the tasks run on the controller without using SSH.
Try:
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      command: echo "hello world"
I'll answer the details myself, perhaps it is useful to someone:
In my case:
ansible_connection was set to ssh in the host_vars.
ansible_host was set to localhost by local_action.
Combined, this led to an ssh to localhost, which failed.
Further considerations:
delegate_to and local_action set ansible_host and ansible_connection, but any setting in the host_vars, playbook or task overrides that.
connection: local only sets ansible_connection (ansible_host is unmodified), but any setting of ansible_connection in the host_vars, playbook or task overrides it.
So my solution was to either remove ansible_connection from the host_vars, or set the variable ansible_connection in a task.
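A minimal sketch of the second option, setting ansible_connection at the task level (the echo task is only an example):
- name: Test local action without ssh
  vars:
    ansible_connection: local   # force the local connection plugin for this task only
  command: echo "hello world"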
That looks wrong to me.
- name: import profiles of VMs
  connection: local
  hosts: localhost
  gather_facts: false
  tasks:
    - name: list files
      find:
        paths: .
        recurse: no
      delegate_to: localhost
It is still asking me for an ssh password:
❯ ansible-playbook playbooks/import_vm_profiles.yml -i localhost, -k
[WARNING]: Unable to parse the plugin filter file /Users/fredericclement/devops/ansible_refactored/etc/Plugin_filters.yml as module_blacklist is not a list. Skipping.
SSH password:
