Ansible Bastion host connection [duplicate]

I have three hosts:
my local ansible controller
a jump/bastion host (jump_host) for my infrastructure
a target host I want to run ansible tasks against (target_host) which is only accessible through jump_host
As part of my inventory file, I have the details of both jump_host and target_host as follows:
jump_host:
  ansible_host: "{{ jump_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
  ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q root@{{ jump_host_ip }}"'
How can I configure Ansible to use the password from the jump_host settings in the inventory file, instead of relying on additional configuration in the ~/.ssh/config file?

There is no direct way to provide the password for the jump host as part of the ProxyCommand.
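The closest workaround I know of is to wrap the proxy ssh call with sshpass (a sketch, assuming sshpass is installed on the controller):

target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_user: root
  ansible_password: password
  # sshpass feeds the jump host password to the proxy ssh process
  ansible_ssh_common_args: '-o ProxyCommand="sshpass -p password ssh -W %h:%p -q root@{{ jump_host_ip }}"'

This puts the jump host password on a command line where any local user can read it in the process list, so I did not want to rely on it.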
So, I ended up doing the following:
# Generate SSH keys on the controller
- hosts: localhost
  become: false
  tasks:
    - name: Generate the localhost ssh keys
      community.crypto.openssh_keypair:
        path: ~/.ssh/id_rsa
        force: no

# Copy the controller's public key into the jump_host .ssh/authorized_keys file
# to ensure that no password is prompted while logging into jump_host
- hosts: jump_host
  become: false
  tasks:
    - name: make sure public key exists on target for user
      ansible.posix.authorized_key:
        user: "{{ ansible_user }}"
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        state: present
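Once both plays have run, the ProxyCommand from the inventory works without prompting for the jump host password. A quick way to verify (assuming the inventory shown above is saved as inventory.yml, a hypothetical file name):

ansible -i inventory.yml target_host -m ping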

Related

Ansible shell - How to connect to hosts with custom credentials

My goal is to execute a shell command on different hosts with credentials provided directly in the playbook. So far I have tried two things.
Attempt 1:
- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
  args:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
Attempt 2:
- name: set default credentials
  set_fact:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"

- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
The username in the variable {{ cred_ios_r_user }} is 'User1', but the output from Ansible tells me it used the default SSH user named "SSH_User".
What do I need to change so that Ansible uses the given credentials?
The first thing you should check is the remote_user attribute of the task:
- name: DigitalOcean | Disallow root SSH access
  remote_user: root
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^PermitRootLogin"
    line: "PermitRootLogin no"
    state: present
  notify: Restart ssh
The next thing is SSH keys. By default, your connections use your own private key, but you can override it with extra args on the ansible-playbook command line, with variables in the inventory file, or with play vars:
vars:
  ansible_ssh_user: "root"
  ansible_ssh_private_key_file: "{{ role_path }}/files/ansible.key"
So if you want, you can push a custom key that you keep in vars or files:
- name: DigitalOcean | Add Pub key
  remote_user: root
  authorized_key:
    user: "{{ do_user }}"
    key: "{{ lookup('file', do_key_public) }}"
    state: present
You can find more details in my auto-droplet-digitalocean-role.
Your vars should be just that: vars available to your task, not arguments passed to the shell module. So, in a nutshell:
- name: test
  vars:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  shell:
    cmd: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
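The difference matters: the ansible_* connection settings are only honored when they are variables; placed under args: they become (invalid) module arguments for shell and never reach the connection plugin.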
Meanwhile, this looks like bad practice. You would normally use add_host to build an in-memory inventory with the correct hosts and credentials, and then run your task using the natural host batch play loop. Something like:
- name: manage hosts
  hosts: localhost
  gather_facts: false
  tasks:
    - name: add hosts to inventory
      add_host:
        name: "{{ item.split(';')[0] }}"
        groups:
          - my_custom_group
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: "{{ cred_ios_r_user }}"
        ansible_password: "{{ cred_ios_r_pass }}"
      with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"

- name: run my command on all added hosts
  hosts: my_custom_group
  gather_facts: false
  tasks:
    - name: test command
      shell: whoami
Alternatively, you can create a full inventory from scratch from your CSV file and use it directly, or use/develop an inventory plugin that reads your CSV file directly (like in this example).
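For reference, the item.split(';')[0] pattern used throughout implies that deviceList.txt holds one semicolon-separated record per line with the hostname or IP in the first field; something like this (hypothetical values):

192.0.2.10;branch-a
192.0.2.11;branch-b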

Ansible with items loop inside my inventory

I have a task, based on a shell command, that needs to run on the local computer. As part of the command, I need to include the IP addresses of some of the groups in my inventory file.
Inventory file:
[server1]
130.1.1.1
130.1.1.2
[server2]
130.1.1.3
130.1.1.4
[server3]
130.1.1.5
130.1.1.6
I need to run the following command from the local computer against the IPs that are part of the server2 and server3 groups:
ssh-copy-id user@<IP>
# <IP> should be 130.1.1.3, 130.1.1.4, 130.1.1.5, 130.1.1.6
Playbook, where I'm missing the IP part:
- hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: Generate ssh key for root user
      shell: "ssh-copy-id user@{{ item }}"
      run_once: True
      with_items:
        - server2 group
        - server3 group
In a nutshell:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      shell: "ssh-copy-id user@{{ inventory_hostname }}"
      delegate_to: localhost
Note that I kept your exact shell command, which is actually bad practice since there is a dedicated Ansible module to manage that. So this could be translated to something like:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      authorized_key:
        user: user
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
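Note that the file lookup runs on the control host, so this reads /root/.ssh/id_rsa.pub from the controller and pushes it to the remote user, which is effectively what the delegated ssh-copy-id was doing.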

In ansible variable for hosts from vars_prompt no longer accepted [duplicate]

I want to write a bootstrapper playbook for new machines in Ansible which will reconfigure the network settings. At the time of the first execution, target machines will have a DHCP-assigned address.
The user who is supposed to execute the playbook knows the assigned IP address of a new machine. I would like to prompt the user for its value.
The vars_prompt section allows getting input from the user; however, it is defined under the hosts section, effectively preventing the host address from being the prompted value.
Is it possible without using a wrapper script that modifies the inventory file?
The right way to do this is to create a dynamic host with add_host and place it in a new group, then start a new play that targets that group. That way, if you have other connection vars that need to be set ahead of time (credentials/keys/etc) you could set them on an empty group in inventory, then add the host to it dynamically. For example:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ target_host }}"
        groups: dynamically_created_hosts

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
You could pass it with extra-vars instead.
Simply make your hosts section a variable such as {{ hosts_prompt }} and then pass the host on the command line like so:
ansible-playbook -i inventory/environment playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
Or if you are using the default inventory file location of /etc/ansible/hosts you could simply use:
ansible-playbook playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
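For completeness, the play header that consumes the variable would look like this (a minimal sketch):

- hosts: "{{ hosts_prompt }}"
  gather_facts: no
  tasks:
    - debug: msg="running against the host passed via --extra-vars"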
Adding to Matt's answer for multiple hosts.
An example input would be 192.0.2.10,192.0.2.11
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: dynamically_created_hosts
      with_items: "{{ target_host.split(',') }}"

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
Disclaimer: The accepted answer offers the best solution to the problem. While this one works, it is based on a hack, and I leave it as a reference.
I found out it was possible to use a currently undocumented hack (credit to Bruce P for pointing me to the post) that turns the value of the -i / --inventory parameter into an ad hoc list of hosts (reference). With just the hostname/IP address and a trailing comma (like below), it refers to a single host without the need for the inventory file to exist.
Command:
ansible-playbook -i "192.168.1.21," playbook.yml
And accordingly playbook.yml can be run against all hosts (which in the above example will be equal to a single host 192.168.1.21):
- hosts: all
The list may contain more than one IP address: -i "192.168.1.21,192.168.1.22"
Adding to Jacob's and Matt's examples, with the inclusion of a username and password prompt:
---
- hosts: localhost
  pre_tasks:
    - name: verify_ansible_version
      assert:
        that: "ansible_version.full is version_compare('2.10.7', '>=')"
        msg: "Error: You must update Ansible to at least version 2.10.7 to run this playbook..."
  vars_prompt:
    - name: target_hosts
      prompt: |
        Enter Target Host IP[s] or Hostname[s] (comma separated)
        (example: 1.1.1.1,myhost.example.com)
      private: false
    - name: username
      prompt: Enter Target Host[s] Login Username
      private: false
    - name: password
      prompt: Enter Target Host[s] Login Password
      private: true
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: host_groups
      with_items:
        - "{{ target_hosts.split(',') }}"
    - add_host:
        name: login
        username: "{{ username }}"
        password: "{{ password }}"

- hosts: host_groups
  remote_user: "{{ hostvars['login']['username'] }}"
  vars:
    ansible_password: "{{ hostvars['login']['password'] }}"
    ansible_become: yes
    ansible_become_method: sudo
    ansible_become_pass: "{{ hostvars['login']['password'] }}"
  roles:
    - my_role
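Note the trick in the second add_host: login is not a real machine, just a dummy inventory entry that carries the prompted username and password into the second play via hostvars, since vars_prompt variables are scoped to the play in which they are declared.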

How to get the list of authorized_keys of a given user?

I want to write an Ansible playbook where we can provide a username, and Ansible will display the authorized keys for that user. The path to the authorized keys is {{ user_home_dir }}/.ssh/authorized_keys.
I tried with the shell module as below:
---
- name: Get authorized_keys
  shell: cat "{{ user_home_dir }}"/.ssh/authorized_keys
  register: read_key

- name: Prints out authorized_key
  debug: var=read_key.stdout_lines
The problem is, it shows me the file inside /home/ansible/.ssh/authorized_keys; "ansible" is the user I am using to connect to the remote machine.
Below is vars/main.yml
---
authorized_user: username
user_home_dir: "{{ lookup('env','HOME') }}"
Any idea? FYI I am new to ansible and tried this link already.
In your vars file, you have
user_home_dir: "{{ lookup('env','HOME') }}"
Thanks to Konstantin for pointing it out: all lookups are executed on the control host. So the lookup of the HOME environment variable will always resolve to the home directory of the user invoking Ansible.
You could use Ansible's getent module to retrieve a user's info. The snippet below should help:
---
- hosts: localhost
  connection: local
  remote_user: myuser
  gather_facts: no
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        database: passwd
        key: "{{ username }}"
      register: info

    - shell: "echo {{ getent_passwd[username][4] }}"
The below worked. We need become as well, otherwise we get a permission denied error.
---
- hosts: local
  remote_user: ansible
  gather_facts: no
  become: yes
  become_method: sudo
  vars:
    username: myuser
  tasks:
    - name: get user info
      getent:
        split: ":"
        database: passwd
        key: "{{ username }}"

    - name: Get authorized_keys
      shell: cat "{{ getent_passwd[username][4] }}"/.ssh/authorized_keys
      register: read_key

    - name: Prints out authorized_key
      debug: var=read_key.stdout_lines
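If you prefer to avoid shelling out, the same read can be done with the slurp module (a sketch under the same become-enabled play, assuming the file exists):

    - name: Get authorized_keys without cat
      slurp:
        src: "{{ getent_passwd[username][4] }}/.ssh/authorized_keys"
      register: key_file

    - name: Prints out authorized_key
      debug:
        msg: "{{ (key_file.content | b64decode).splitlines() }}"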
