Not able to switch user in Ansible

We have a system with two users, A and B. We can switch from A to B only with the "sudo su" command; direct login as user B is not allowed.
From the Ansible master we can log in as user A (the Ansible remote user) successfully. Our use case is to run some commands as user B via Ansible, but we are failing to switch to user B and run those commands.
Our YAML file looks like this:

# Module to copy Java to the target host.
- name: Copying Java jdk1.8.0_192
  remote_user: A
  become: true
  become_user: B
  become_method: su
  copy:
    src: /etc/ansible/jboss7-cluster/raw_template/jdk1.8.0_192.zip
    dest: "{{ java_install_dir }}"
Any inputs?

In your case, I believe the become_method should be the default sudo. Have you tried using that? If so, what is the result? Can you copy/paste the result here?
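For reference, a sketch of that task with become_method simply left out so it falls back to the default sudo (untested; it assumes user A is allowed to sudo to B):

- name: Copying Java jdk1.8.0_192
  remote_user: A
  become: true
  become_user: B
  copy:
    src: /etc/ansible/jboss7-cluster/raw_template/jdk1.8.0_192.zip
    dest: "{{ java_install_dir }}"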
Also, can you try to run an ad hoc command against the host, and post the result here?
Something like this:
ansible -i inventory.ini -u A --become --become-user B -m ping myhost
And one more thing: note that there are some restrictions when using become to switch to a non-privileged user:
"In addition to the additional means of doing this securely, Ansible 2.1 also makes it harder to unknowingly do this insecurely. Whereas in Ansible 2.0.x and below, Ansible will silently allow the insecure behaviour if it was unable to find another way to share the files with the unprivileged user, in Ansible 2.1 and above Ansible defaults to issuing an error if it can’t do this securely. If you can’t make any of the changes above to resolve the problem, and you decide that the machine you’re running on is secure enough for the modules you want to run there to be world readable, you can turn on allow_world_readable_tmpfiles in the ansible.cfg file. Setting allow_world_readable_tmpfiles will change this from an error into a warning and allow the task to run as it did prior to 2.1."
And just as a side note: please avoid sudo su - use sudo -i or at least sudo su - instead. These populate the environment correctly, unlike plain sudo su. For a fun read about why you want this, see here.

Related

"sudo ansible-playbook" command fails even with --user option

I have a user foo that can SSH without a password to A (itself) and B. The playbook needs sudo inside, which I escalate with become, and the command below works fine:
ansible-playbook -i ../inventory.ini --user=foo --become --become-user=root echo_playbook.yml
But that command is part of a shell script that user foo does not have permission to run. When I use sudo to trigger that shell script, Ansible says the host is unreachable. So I tried running the ansible command itself under sudo, as shown below, and got the same result: host unreachable.
sudo ansible-playbook -i ../inventory.ini --user=foo --become --become-user=root echo_playbook.yml
I agree that sudo escalates ansible-playbook to root. But I'm also providing --user to tell Ansible that user foo should be used for SSH.
Basically, to access the playbook I need sudo. To connect to the other servers I need user foo. To execute the actions inside the playbook (the commands in it) I need sudo again, which I use become for.
Am I doing anything wrong? Can anybody tell me the exact command for the above scenario, where the playbook needs to be run as sudo ansible-playbook?
I'm not entirely clear on exactly where you're stuck, and I don't think you're confused between the remote user and the local user. If the playbook works as foo, then from what you describe I can only guess that ~foo/.ssh/id_rsa, or another automatically provided key, is what authenticates you as the remote foo; run the playbook under sudo and that key is no longer picked up. You can generate a key for any user and allow it access to the remote foo if you'd prefer, or you can run the playbook as another user. It's up to you. The only thing that won't work is relying on the environment or configuration of a particular user and then not providing it.
the above command is part of a shell script which doesn't have permission for foo.
What I'm hearing is that:
user foo can successfully run the Ansible job
a script runs (under root?) that cannot run the Ansible job
If you're happy with how Ansible works for the foo user, you can switch to foo to run it:
sudo -u foo ansible-playbook ...
If the script runs as root, sudo will always succeed. Otherwise, you can configure sudo to allow one user to run one or more specific commands as another user.
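For example, a hypothetical sudoers drop-in (the "deploy" user name is invented for illustration) that lets one user run only ansible-playbook as foo:

# /etc/sudoers.d/ansible-as-foo -- hypothetical rule:
# let "deploy" run ansible-playbook as foo, with no password prompt
deploy ALL=(foo) NOPASSWD: /usr/bin/ansible-playbook

The script would then invoke it as: sudo -u foo /usr/bin/ansible-playbook ...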

Ansible Missing sudo password [duplicate]

How do I specify a sudo password for Ansible in a non-interactive way?
I'm running Ansible playbook like this:
$ ansible-playbook playbook.yml -i inventory.ini \
    --user=username --ask-sudo-pass
But I want to run it like this:
$ ansible-playbook playbook.yml -i inventory.ini \
    --user=username --sudo-pass=12345
Is there a way? I want to automate my project deployment as much as possible.
The docs strongly recommend against setting the sudo password in plaintext:
As a reminder passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see Encrypting content with Ansible Vault.
Instead, you should be using --ask-become-pass on the command line when running ansible-playbook.
Previous versions of Ansible have used --ask-sudo-pass and sudo instead of become.
You can pass a variable on the command line via --extra-vars "name=value". The sudo password variable is ansible_sudo_pass, so your command would look like:
ansible-playbook playbook.yml -i inventory.ini --user=username \
--extra-vars "ansible_sudo_pass=yourPassword"
Update 2017: Ansible 2.2.1.0 now uses the variable ansible_become_pass; either name seems to work.
Update 2021: ansible_become_pass still works, but by now we should use the short flag -e instead of --extra-vars.
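So on current versions the equivalent command would be something like:

ansible-playbook playbook.yml -i inventory.ini --user=username \
    -e "ansible_become_pass=yourPassword"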
Probably the best way to do this - assuming that you can't use the NOPASSWD solution provided by scottod - is to use Mircea Vutcovici's solution in combination with Ansible Vault.
For example, you might have a playbook something like this:
- hosts: all
  vars_files:
    - secret
  tasks:
    - name: Do something as sudo
      service: name=nginx state=restarted
      sudo: yes
Here we are including a file called secret which will contain our sudo password.
We will use ansible-vault to create an encrypted version of this file:
ansible-vault create secret
This will ask you for a password, then open your default editor to edit the file. You can put your ansible_sudo_pass in here.
For example, secret might contain:
ansible_sudo_pass: mysudopassword
Save and exit; you now have an encrypted secret file which Ansible can decrypt when you run your playbook. Note: you can edit the file with ansible-vault edit secret (entering the password you used when creating the file).
The final piece of the puzzle is to provide Ansible with a --vault-password-file which it will use to decrypt your secret file.
Create a file called vault.txt and in that put the password that you used when creating your secret file. The password should be a string stored as a single line in the file.
From the Ansible Docs:
.. ensure permissions on the file are such that no one else can access your key and do not add your key to source control
Finally: you can now run your playbook with something like
ansible-playbook playbook.yml -u someuser -i hosts --sudo --vault-password-file=vault.txt
The above is assuming the following directory layout:
.
|_ playbook.yml
|_ secret
|_ hosts
|_ vault.txt
You can read more about Ansible Vault in the docs: https://docs.ansible.com/ansible/latest/user_guide/vault.html
Looking at the code (runner/__init__.py), I think you can probably set it in your inventory file:
[whatever]
some-host ansible_sudo_pass='foobar'
There seems to be some provision for it in the ansible.cfg config file too, but it's not implemented right now (constants.py).
I don't think Ansible will let you specify a password in the flags as you wish to do.
Even if this could be set somewhere in the configs, it would make using Ansible less secure overall and would not be recommended.
One thing you can do is create a user on the target machine and grant them passwordless sudo privileges, either for all commands or for a restricted list of commands.
If you run sudo visudo and enter a line like the one below, the user 'privilegedUser' should not have to enter a password when running something like sudo service xxxx start:
%privilegedUser ALL= NOPASSWD: /usr/bin/service
The sudo password is stored as a variable called ansible_sudo_pass.
You can set this variable in a few ways:
Per host, in your inventory hosts file (inventory/<inventoryname>/hosts):
[server]
10.0.0.0 ansible_sudo_pass=foobar
Per group, in your inventory groups file (inventory/<inventoryname>/groups):
[server:vars]
ansible_sudo_pass=foobar
Per group, in group vars (group_vars/<groupname>/ansible.yml):
ansible_sudo_pass: "foobar"
Per group, encrypted (ansible-vault create group_vars/<groupname>/ansible.yml):
ansible_sudo_pass: "foobar"
You can set the password for a group or for all servers at once:
[all:vars]
ansible_sudo_pass=default_sudo_password_for_all_hosts
[group1:vars]
ansible_sudo_pass=default_sudo_password_for_group1
I was tearing my hair out over this one, but I finally found a solution that does what I want:
one encrypted file per host, containing that host's sudo password
/etc/ansible/hosts:
[all:vars]
ansible_connection=ssh ansible_ssh_user=myuser ansible_ssh_private_key_file=~/.ssh/id_rsa
[some_service_group]
node-0
node-1
Then you create an encrypted vars file for each host, like so:
ansible-vault create /etc/ansible/host_vars/node-0
with the content:
ansible_sudo_pass: "my_sudo_pass_for_host_node-0"
How you provide the vault password (entered via --ask-vault-pass, or through config) is up to you.
Based on this, I suspect you could just encrypt the whole hosts file...
A savvier way to do this is to store your sudo password in a secure vault such as LastPass or KeePass and then pass it to ansible-playbook with the -e@ option, but instead of hardcoding the contents in an actual file, you can use the construct -e@<(...) to run a command in a subshell and redirect its output (STDOUT) to an anonymous file descriptor, effectively feeding the password to -e@<(...).
Example
$ ansible-playbook -i /tmp/hosts pb.yml \
    -e@<(echo "ansible_sudo_pass: $(lpass show folder1/item1 --password)")
The above does several things; let's break it down.
ansible-playbook -i /tmp/hosts pb.yml - obviously running a playbook via ansible-playbook
$(lpass show folder1/item1 --password) - runs the LastPass CLI lpass and retrieves the password to use
echo "ansible_sudo_pass: ...password..." - prefixes the string 'ansible_sudo_pass: ' to the password supplied by lpass
-e@<(..) - puts the above together, connecting the subshell <(...) as a file descriptor for ansible-playbook to consume
Further improvements
If you'd rather not type that every time, you can simplify things. First create an alias in your .bashrc like so:
$ cat ~/.bashrc
alias asp='echo "ansible_sudo_pass: $(lpass show folder1/item1 --password)"'
Now you can run your playbook like this:
$ ansible-playbook -i /tmp/hosts pb.yml -e@<(asp)
References
https://docs.ansible.com/ansible/2.4/ansible-playbook.html#cmdoption-ansible-playbook-e
If you are comfortable with keeping passwords in plain-text files, another option is to use a JSON file with the --extra-vars parameter (be sure to exclude the file from source control):
ansible-playbook --extra-vars "@private_vars.json" playbook.yml
Ansible has supported this option since 1.3.
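For reference, the file is ordinary JSON; a minimal private_vars.json might look like this (placeholder value):

{
  "ansible_sudo_pass": "yourSudoPassword"
}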
You can write the sudo password for your playbook in the hosts file, like this:
[host-group-name]
host-name:port ansible_sudo_pass='*your-sudo-password*'
Ansible vault has been suggested a couple of times here, but I prefer git-crypt for encrypting sensitive files in my playbooks. If you're using git to keep your ansible playbooks, it's a snap. The problem I've found with ansible vault is that I inevitably end up coming across encrypted copies of the file that I want to work with and have to go decrypt it before I can work. git-crypt offers a nicer workflow IMO.
https://github.com/AGWA/git-crypt
Using this, you can put your passwords in a var in your playbook, and mark your playbook as an encrypted file in .gitattributes like this:
my_playbook.yml filter=git-crypt diff=git-crypt
Your playbook will be transparently encrypted on GitHub. Then you just need to either install your encryption key on the host you use to run Ansible, or follow the instructions in the documentation to set it up with GPG.
There's a good Q&A on forwarding gpg keys like your ssh-agent forwards SSH keys here: https://superuser.com/questions/161973/how-can-i-forward-a-gpg-key-via-ssh-agent.
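For reference, wiring a repository up with git-crypt is roughly as follows (the GPG key ID is a placeholder):

git-crypt init                   # generate the repo's symmetric key
git-crypt add-gpg-user ABCD1234  # grant a GPG key access to that key
git-crypt unlock                 # on a fresh clone, decrypt files matched in .gitattributes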
My hack to automate this was to use an environment variable and access it via --extra-vars="ansible_become_pass='{{ lookup('env', 'ANSIBLE_BECOME_PASS') }}'".
Export an env var, but avoid bash/shell history (prepend the command with a space, or use another method). E.g.:
export ANSIBLE_BECOME_PASS='<your password>'
Look up the env var while passing the extra ansible_become_pass variable to ansible-playbook, e.g.:
ansible-playbook playbook.yml -i inventories/dev/hosts.yml -u user --extra-vars="ansible_become_pass='{{ lookup('env', 'ANSIBLE_BECOME_PASS') }}'"
Good alternate answers:
@toast38coza: simply use a vaulted value for ansible_become_pass. This is decent. However, for paranoid teams that need to share Ansible vault passwords while executing plays with individual accounts, the shared vault password could be used to reverse each other's operating-system passwords (identity theft). Arguably, you need to trust your own team?
@slm's answer feeds bash subshell output to a temporary file descriptor, using the @ prefix to read the Ansible variable from that descriptor. That avoids bash history at least. Not sure, but hopefully the subshell echo doesn't get caught and exposed in audit logging (e.g. auditd).
You can use Ansible Vault, which will encrypt your password. After that you can use the variable from the vault in playbooks.
Some documentation on ansible vault:
http://docs.ansible.com/playbooks_vault.html
We use one vault per environment. To edit the vault, we have this command:
ansible-vault edit inventories/production/group_vars/all/vault
If you want to use a vault variable, you have to run ansible-playbook with parameters like:
ansible-playbook -s --vault-password-file=~/.ansible_vault.password
Yes, we are storing the vault password in a local directory in plain text, but it's no more dangerous than storing a root password for every system. The root passwords are inside the vault file, or you can use a sudoers file for your user/group instead.
I recommend using a sudoers file on the server. Here is an example for the admin group:
%admin ALL=(ALL) NOPASSWD:ALL
With Ansible 2.4.1.0 the following works:
[all]
17.26.131.10
17.26.131.11
17.26.131.12
17.26.131.13
17.26.131.14
[all:vars]
ansible_connection=ssh
ansible_user=per
ansible_ssh_pass=per
ansible_sudo_pass=per
And just run the playbook with this inventory as:
ansible-playbook -i inventory copyTest.yml
You can use the sshpass utility, as below:
$ sshpass -p "your pass" ansible pattern -m module -a args \
-i inventory --ask-sudo-pass
After five years, I can see this is still a very relevant subject. Somewhat mirroring leucos's answer, which I find the best in my case, this uses Ansible tools only (without any centralised authentication, tokens or whatever). It assumes you have the same username and the same public key on all servers. If you don't, you'd of course need to be more specific and add the corresponding variables next to the hosts:
[all:vars]
ansible_ssh_user=ansible
ansible_ssh_private_key_file=/home/user/.ssh/mykey
[group]
192.168.0.50 ansible_sudo_pass='{{ myserver_sudo }}'
ansible-vault create mypasswd.yml
ansible-vault edit mypasswd.yml
Add:
myserver_sudo: mysecretpassword
Then:
ansible-playbook -i inv.ini my_role.yml --ask-vault-pass --extra-vars '@mypasswd.yml'
At least this way you don't have to write out anything more than the variables that point to the passwords.
Just call your playbook with --extra-vars "ansible_become_pass=Password". In Ansible's source, the become password option accepts either variable name:
become_pass=('ansible_become_password', 'ansible_become_pass')
Just an addendum, so nobody else goes through the annoyance I recently did:
AFAIK, the best solution is one along the general lines of toast38coza's above. If it makes sense to tie your password files and your playbook together statically, then follow his template with vars_files (or include_vars). If you want to keep them separate, you can supply the vault contents on the command line like so:
ansible-playbook --ask-vault-pass -e@<PATH_TO_VAULT_FILE> <PLAYBOOK_FILE>
That's obvious in retrospect, but here are the gotchas:
That bloody @ sign. If you leave it out, parsing will fail silently, and ansible-playbook will proceed as though you'd never specified the file in the first place.
You must explicitly import the contents of the vault, either with a command-line --extra-vars/-e or within your YAML code. The --ask-vault-pass flag doesn't do anything by itself (besides prompt you for a value which may or may not be used later).
May you include your "@"s and save an hour.
The solution by @toast38coza above worked for me; just note that sudo: yes is now deprecated in Ansible.
Use become and become_user instead.
tasks:
  - name: Restart apache service
    service: name=apache2 state=restarted
    become: yes
    become_user: root
For newer versions, just run your playbook with the -K flag and it will ask you for your sudo password, e.g.:
ansible-playbook yourPlaybookFile.yaml -K
From the docs:
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short)
Just a hint at another solution: you can set up your Ansible user to run sudo without a password (this is the default on GCP VMs).
sudo visudo
and add a line (tom is the user):
tom ALL=(ALL) NOPASSWD:ALL
You can also use an expect block in Ansible to spawn a shell and customize it to your needs:
- name: Run expect to INSTALL TA
  shell: |
    set timeout 100
    spawn /bin/sh -i
    expect -re "$ "
    send "sudo yum remove -y xyz\n"
    expect "$ "
    send "sudo yum localinstall -y {{ rpm_remotehost_path_for_xyz }}\n"
    expect "~]$ "
    send "\n"
    exit 0
  args:
    executable: /usr/bin/expect
If you are using the pass password manager, you can use the module passwordstore, which makes this very easy.
Let's say you saved your user's sudo password in pass as
Server1/User
Then you can use the decrypted value like so:
{{ lookup('community.general.passwordstore', 'Server1/User') }}
I use it in my inventory:
---
servers:
  hosts:
    server1:
      ansible_become_pass: "{{ lookup('community.general.passwordstore', 'Server1/User') }}"
Note that you should be running gpg-agent so that you won't see a pinentry prompt every time a 'become' task is run.
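For completeness, the entry itself would have been created beforehand with something like:

pass insert Server1/User   # prompts for the sudo password to store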
You can pass it during playbook execution. The syntax is:
ansible-playbook -i inventory my.yml \
--extra-vars 'ansible_become_pass=YOUR-PASSWORD-HERE'
But that is not a good idea for security reasons; it is better to use Ansible Vault.
First update your inventory file as follows:
[cluster:vars]
k_ver="linux-image-4.13.0-26-generic"
ansible_user=vivek # ssh login user
ansible_become=yes # use sudo
ansible_become_method=sudo
ansible_become_pass='{{ my_cluser_sudo_pass }}'
[cluster]
www1
www2
www3
db1
db2
cache1
cache2
Next, create a new encrypted data file named passwd.yml by running the following command:
$ ansible-vault create passwd.yml
Set the password for the vault. After providing a password, the tool will start whatever editor you have defined with $EDITOR. Append the following:
my_cluser_sudo_pass: your_sudo_password_for_remote_servers
Save and close the file in vi/vim. Finally, run the playbook as follows:
$ ansible-playbook -i inventory --ask-vault-pass --extra-vars '@passwd.yml' my.yml
How to edit my encrypted file again
ansible-vault edit passwd.yml
How to change password for my encrypted file
ansible-vault rekey passwd.yml
Very simple: just add the following to your variables file:
Example:
$ vim group_vars/all
And add these:
ansible_connection: ssh
ansible_ssh_user: rafael
ansible_ssh_pass: password123
ansible_become_pass: password123
This worked for me...
Create the file /etc/sudoers.d/90-init-users with NOPASSWD:
echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-init-users
where "user" is your userid.

Ansible root/password login

I'm trying to use Ansible to provision a server and the first thing I want to do is test the ssh access. If I use ssh directly I can log in fine...
ssh root@server
root@backups's password:
If I use Ansible I can't...
user@ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.",
"unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.
The issue was caused by the Getting Started documentation setting a trap.
It instructs you to create an inventory file with your servers, use ansible all -m ping to ping those servers, and use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file...
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user available to me at the time was root. But trying ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered that the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks. In there you can specify a remote_user but this isn't honoured - presumably also because of precedence. Again you can override the inventory user with -e 'ansible_ssh_user=root'
Finally, until I realised Linode could provision a server with an SSH key deployed, I was trying to specify the root password in an ad-hoc command. You have to encrypt the password, which gives you a long string that is almost certainly going to include $ characters, which bash will treat as substitutions. Make sure you escape these.
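To illustrate the quoting problem, here is a sketch with invented placeholder values (deploy, salt, hash):

# Double quotes: bash tries to expand $6, $salt and $hash, so garbage reaches Ansible
ansible backups -m user -a "name=deploy password=$6$salt$hash" --user root --ask-pass
# Single quotes: the crypted hash is passed through literally
ansible backups -m user -a 'name=deploy password=$6$salt$hash' --user root --ask-pass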
The lineinfile module behaviour isn't intuitive either.
Write your hosts file like this and it will work:
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command: ansible-playbook --ask-pass -i hosts main.yml --check to check before configuration.
Also create an ansible.cfg file, and paste the following contents there:
[defaults]
inventory = hosts
host_key_checking = False
Note: all three files (main.yml, ansible.cfg and hosts) must be in the same folder.
Also, the code was tested with devices connected to a private network, using private IPs. I haven't checked it with public IPs. If you're using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: you need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass

How to switch out of the root account during server setup?

I need to automate the deployment of some remote Debian servers. These servers come with only the root account. I wish to make it so that the only time I ever need to log in as root is during the setup process. Subsequent logins will be done with a regular user account, which will be created during setup.
However, during the setup process I need to set PermitRootLogin no and PasswordAuthentication no in /etc/ssh/sshd_config, and then do a service sshd restart. This will drop the SSH connection, because Ansible logged into the server using the root account.
My question is: How do I make ansible ssh into the root account, create a regular user account, set PermitRootLogin no and PasswordAuthentication no, then ssh into the server using the regular user account and do the remaining set up tasks?
It is entirely possible that my set-up process is flawed. I will appreciate suggestions.
You can actually manage the entire setup process with Ansible, without requiring manual configuration prerequisites.
Interestingly, you can change ansible_user and ansible_password on the fly, using set_fact. Tasks after the set_fact will be executed with the new credentials:
- name: "Switch remote user on the fly"
hosts: my_new_hosts
vars:
reg_ansible_user: "regular_user"
reg_ansible_password: "secret_pw"
gather_facts: false
become: false
tasks:
- name: "(before) check login user"
command: whoami
register: result_before
- debug: msg="(before) whoami={{ result_before.stdout }}"
- name: "change ansible_user and ansible_password"
set_fact:
ansible_user: "{{ reg_ansible_user }}"
ansible_password: "{{ reg_ansible_password }}"
- name: "(after) check login user"
command: whoami
register: result_after
- debug: msg="(after) whoami={{ result_after.stdout }}"
Furthermore, you don't have to fully restart sshd for configuration changes to take effect, and existing SSH connections will stay open. Per the sshd(8) manpage:
sshd rereads its configuration file when it receives a hangup signal, SIGHUP....
So your setup playbook could be something like:
log in initially with the root account
create the regular user and set their password or configure authorized_keys
configure sudoers to allow the regular user to execute commands as root
use set_fact to switch to that account for the rest of the playbook (remember to use become: true on tasks after this one, since you have switched from root to the regular user; you might even try executing a test sudo command before locking root out)
change the sshd configuration
execute kill -HUP <sshd_pid> (see the sketch below)
verify by setting ansible_user back to root, and fail if login works
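For the kill -HUP step, rather than hunting down the PID by hand, you could let the service module deliver the equivalent reload (a sketch; the unit is called ssh on Debian, sshd on most other distributions):

- name: Reload sshd so the new config takes effect (existing sessions stay open)
  become: true
  service:
    name: ssh
    state: reloaded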
You probably just want to create a standard user account and add it to sudoers. You can then run Ansible as the standard user and, if you need a command to run as root, just prefix the command with sudo.
I wrote an article about setting up a deploy user
http://www.altmake.com/2013/03/06/secure-lamp-setup-on-amazon-linux-ami/

Ansible: Test if SSH login possible without FATAL error?

I have a setup playbook that takes a freshly installed Linux instance, logs in as the default user (we'll call it user1), creates another user (we'll call it user2), then disables user1. Because user1 is only usable before this set of tasks has run, the tasks live in a special playbook we have to remember to run on new instances. After that, all the common tasks run as user2, because user1 no longer exists.
I want to combine the setup and common playbooks so we don't have to run the setup playbook manually anymore. I tried to create a task that detects which user exists on the instance, to make the original setup tasks conditional, by attempting to log in via SSH as user1. The problem is that if I try the SSH login for either user, Ansible exits with a FATAL error when it can't log in: user2 doesn't exist yet on new instances, and user1 is disabled after the setup playbook has run.
I believe testing the login via SSH is the only way to determine externally what state the instance is in. Is there a way to test SSH logins without getting a FATAL error, so I can execute tasks conditionally based on the result?
One approach would be to use shell via a local_action to invoke a simple ssh command as user1 and see whether it succeeds. Something along these lines:
- name: Test for user1
  local_action: shell ssh user1@{{ inventory_hostname }} "echo success"
  register: user1_enabled
  ignore_errors: true  # a failed login is expected on already-provisioned hosts and must not abort the play
Then you could use something like this in another task to see if it worked:
when: user1_enabled.stdout.find("success") != -1
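Putting the two pieces together, the conditional setup might look like this (the included task file name is hypothetical):

- name: Run the one-time setup only while user1 still works
  include_tasks: initial_setup.yml   # hypothetical file holding the original setup tasks
  when: user1_enabled.stdout.find("success") != -1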
With Ansible >= 2.5 it is possible to use the wait_for_connection module (https://docs.ansible.com/ansible/2.5/modules/wait_for_connection_module.html):
- name: Wait 600 seconds for target connection to become reachable/usable
  wait_for_connection:
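In the scenario above you could place it right after switching credentials, e.g. (the timeout parameter is optional; 600 seconds is the default):

- name: switch to the already-provisioned user
  set_fact:
    ansible_user: user2

- name: fail fast if user2 cannot log in
  wait_for_connection:
    timeout: 60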
