I have a task based on a shell command that needs to run on the local computer. As part of the command, I need to use the IP addresses belonging to some of the groups in my inventory file.
Inventory file:
[server1]
130.1.1.1
130.1.1.2
[server2]
130.1.1.3
130.1.1.4
[server3]
130.1.1.5
130.1.1.6
I need to run the following command from the local computer against the IPs that are part of the server2 and server3 groups:
ssh-copy-id user@<IP>
# <IP> should be 130.1.1.3, 130.1.1.4, 130.1.1.5, 130.1.1.6
Playbook - where I'm missing the part that resolves the IPs:
- hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: Generate ssh key for root user
      shell: "ssh-copy-id user@{{ item }}"
      run_once: True
      with_items:
        - server2 group
        - server3 group
In a nutshell:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      shell: "ssh-copy-id user@{{ inventory_hostname }}"
      delegate_to: localhost
Note that I kept your exact shell command, which is actually bad practice since there is a dedicated Ansible module to manage that. So this could be translated to something like:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      authorized_key:
        user: user
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
Related
I have three hosts:
my local ansible controller
a jump/bastion host (jump_host) for my infrastructure
a target host I want to run ansible tasks against (target_host) which is only accessible through jump_host
As part of my inventory file, I have the details of both jump_host and target_host as follows:
jump_host:
  ansible_host: "{{ jump_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
  ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q root@{{ jump_host_ip }}"'
How can we configure ansible to use the password mentioned in the jump_host settings from the inventory file instead of using any additional configurations from ~/.ssh/config file?
There is no direct way to provide the password for the jump host as part of the ProxyCommand.
So, I ended up doing the following:
# Generate SSH keys on the controller
- hosts: localhost
  become: false
  tasks:
    - name: Generate the localhost ssh keys
      community.crypto.openssh_keypair:
        path: ~/.ssh/id_rsa
        force: no

# Copy the controller's public key into the jump_host .ssh/authorized_keys file
# to ensure that no password is prompted while logging into jump_host
- hosts: jump_host
  become: false
  tasks:
    - name: make sure public key exists on target for user
      ansible.posix.authorized_key:
        user: "{{ ansible_user }}"
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        state: present
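If distributing a key to the jump host is not an option, a workaround some setups use is to wrap the proxy hop with sshpass; this assumes sshpass is installed on the controller and that hard-coding the password in the inventory is acceptable:

ansible_ssh_common_args: '-o ProxyCommand="sshpass -p password ssh -W %h:%p -q root@{{ jump_host_ip }}"'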
I'm trying to create a playbook that has to complete the following tasks:
retrieve hostnames and releases from a file (file)
connect to those hostnames one by one
retrieve the contents of another file (another_file) on each host that will give us the environment (dev, qa, prod)
So my first task is to retrieve the names of the hosts I need to connect to:
- name: retrieve nodes
  shell: cat file | grep ";;" | grep foo | grep -v "bar" | sort | uniq
  register: nodes

- name: adding nodes
  add_host:
    name: "{{ item }}"
    group: "servers"
  with_items: "{{ nodes.stdout_lines }}"
The next play connects to the hosts:
- hosts: "{{ nodes }}"
  become: yes
  become_user: user1
  become_method: sudo
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing
    - debug: msg="{{ thing['content'] | b64decode }}"
But it doesn't work; when I add verbosity I still see the task being executed as the user running the playbook (local_user):
<node> ESTABLISH SSH CONNECTION FOR USER: local_user
What am I doing wrong?
Ansible version is 2.7.1 on RHEL 6.1.
UPDATE
I've also tried remote_user: user1:
- hosts: "{{ nodes }}"
  remote_user: user1
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing
    - debug: msg="{{ thing['content'] | b64decode }}"
no luck so far, same error
You need to set remote_user, which controls what username is used when connecting to the server. Only after the connection is established is become_user applied.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
If you use become_user, it is like running the sudo command on the remote machine. You want to connect to your machines as a specific user, and only after the connection is established issue commands with sudo.
This question has some answers that should help you.
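Put together, a minimal sketch for the playbook above, assuming user1 can log in over SSH and targeting the servers group populated by add_host:

- hosts: servers
  remote_user: user1   # SSH login user; become_user would only apply after connecting
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing
    - debug: msg="{{ thing['content'] | b64decode }}"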
I want to write a bootstrapper playbook for new machines in Ansible which will reconfigure the network settings. At the time of the first execution, target machines will have a DHCP-assigned address.
The user who is supposed to execute the playbook knows the assigned IP address of a new machine, and I would like to prompt the user for this value.
vars_prompt allows getting input from the user; however, it is defined under the hosts section, which effectively prevents using the prompted value as the host address.
Is it possible without using a wrapper script that modifies the inventory file?
The right way to do this is to create a dynamic host with add_host and place it in a new group, then start a new play that targets that group. That way, if you have other connection vars that need to be set ahead of time (credentials/keys/etc.), you can set them on an empty group in inventory, then add the host to it dynamically. For example:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ target_host }}"
        groups: dynamically_created_hosts

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
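As noted above, connection vars can be staged on an empty inventory group that the dynamic host then joins. A minimal sketch in INI inventory format (the user and key path are illustrative assumptions):

[dynamically_created_hosts]

[dynamically_created_hosts:vars]
ansible_user=bootstrap
ansible_ssh_private_key_file=~/.ssh/bootstrap_key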
You could pass it with --extra-vars instead. Simply make your hosts section a variable, such as {{ hosts_prompt }}, and then pass the host on the command line like so:
ansible-playbook -i inventory/environment playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
Or if you are using the default inventory file location of /etc/ansible/hosts you could simply use:
ansible-playbook playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
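The playbook side of that would look something like the sketch below; the task body is a placeholder:

- hosts: "{{ hosts_prompt }}"
  gather_facts: no
  tasks:
    - debug: msg="reconfigure network settings here"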
Adding to Matt's answer for multiple hosts.
An example input would be 192.0.2.10,192.0.2.11.
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: dynamically_created_hosts
      with_items: "{{ target_host.split(',') }}"

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
Disclaimer: The accepted answer offers the best solution to the problem. While this one works, it is based on a hack, and I leave it as a reference.
I found out it was possible to use a currently undocumented hack (credit to Bruce P for pointing me to the post) that turns the value of the -i / --inventory parameter into an ad hoc list of hosts (reference). With just the hostname/IP address and a trailing comma (like below) it refers to a single host, without the need for the inventory file to exist.
Command:
ansible-playbook -i "192.168.1.21," playbook.yml
And accordingly playbook.yml can be run against all hosts (which in the above example amounts to the single host 192.168.1.21):
- hosts: all
The list might contain more than one IP address: -i "192.168.1.21,192.168.1.22"
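The same trick works for ad hoc commands, e.g.:

ansible all -i "192.168.1.21," -m ping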
Adding to Jacob's and Matt's examples, with the inclusion of a username and password prompt:
---
- hosts: localhost
  pre_tasks:
    - name: verify_ansible_version
      assert:
        that: "ansible_version.full is version_compare('2.10.7', '>=')"
        msg: "Error: You must update Ansible to at least version 2.10.7 to run this playbook..."
  vars_prompt:
    - name: target_hosts
      prompt: |
        Enter Target Host IP[s] or Hostname[s] (comma separated)
        (example: 1.1.1.1,myhost.example.com)
      private: false
    - name: username
      prompt: Enter Target Host[s] Login Username
      private: false
    - name: password
      prompt: Enter Target Host[s] Login Password
      private: true
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: host_groups
      with_items:
        - "{{ target_hosts.split(',') }}"
    - add_host:
        name: login
        username: "{{ username }}"
        password: "{{ password }}"

- hosts: host_groups
  remote_user: "{{ hostvars['login']['username'] }}"
  vars:
    ansible_password: "{{ hostvars['login']['password'] }}"
    ansible_become: yes
    ansible_become_method: sudo
    ansible_become_pass: "{{ hostvars['login']['password'] }}"
  roles:
    - my_role
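The dummy login host exists only to carry the prompted credentials across plays, since vars_prompt variables are scoped to the play that declares them; the second play reads them back through hostvars. A quick sanity check could look like this (hypothetical debug task):

- hosts: host_groups
  gather_facts: no
  tasks:
    - debug:
        msg: "connecting as {{ hostvars['login']['username'] }}"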
Playbook below. I'm trying to replace test@ip: with a way to pull from my inventory file the IP of a group I've created.
- hosts: firewall
  gather_facts: no
  tasks:
    - name: run shell script
      raw: 'sh /home/test/firewall.sh'

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Copy File to Local Machine
      shell: 'scp test@ip:/home/test/test.test /Users/dest/Desktop'
You need to change your task like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Copy File to Local Machine
      shell: 'scp test@{{ item }}:/home/test/test.test /Users/dest/Desktop'
      with_items: "{{ groups['your_group_name'] }}"
If you want to run on all the hosts in the inventory, you can use:
with_items: "{{ groups['all'] }}"
Hope that will help you.
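As an aside, pulling files from remote hosts is also covered by a dedicated module. A hedged sketch using fetch, with the paths from the question and the same group placeholder:

- hosts: your_group_name
  gather_facts: no
  tasks:
    - name: Fetch file to the controller
      fetch:
        src: /home/test/test.test
        dest: /Users/dest/Desktop/
        flat: yes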