Can ansible variables be used to declare hosts in a playbook?

I have a playbook in the format below:
---
- hosts: myIP
  tasks:
    - name: Install a yum package in Ansible example
      yum:
        name: ThePackageIWantToInstall
        state: present
where myIP and ThePackageIWantToInstall are variables.
When the job template runs, I would like the user to be able to enter the following in the extra variables popup:
myIP = 192.168.1.1
ThePackageIWantToInstall = nano
As the documentation doesn't provide an example of supplying a variable via a job template, is this possible?

Yes.
- name: Do The Thing
  hosts: "{{ foo }}"
  roles:
    - "{{ role }}"
You need the mustaches and quotes.
To run from the popup, supply:
(I don't use this, but it was suggested as an edit, thanks...)
foo: value
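Applied to the original question, a minimal sketch would look like this (variable names taken from the question; the ansible-playbook call shown as a comment is the command-line equivalent of the extra variables popup, and playbook.yml is a placeholder filename):

---
- hosts: "{{ myIP }}"
  tasks:
    - name: Install a yum package in Ansible example
      yum:
        name: "{{ ThePackageIWantToInstall }}"
        state: present

# Command-line equivalent of the popup:
# ansible-playbook playbook.yml --extra-vars "myIP=192.168.1.1 ThePackageIWantToInstall=nano"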

I have achieved a similar thing with add_host. Here I am not installing a package but creating a file with a name passed from the command line. Any number of hosts (separated by commas) can be passed from the command line.
# cat addhost2.yml
- hosts: localhost
  gather_facts: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: hosts_from_commandline
      with_items: "{{ new_hosts_passed.split(',') }}"

- hosts: hosts_from_commandline
  tasks:
    - name: Ansible create file with name passed from commandline
      file:
        path: "/tmp/{{ filename_from_commandline }}"
        state: touch
# ansible-playbook -i hosts addhost2.yml --extra-vars='new_hosts_passed=192.168.3.104,192.168.3.113 filename_from_commandline=lathamdkv'
Hope this helps

Related

Run Ansible playbook task with predefined username and password

This is the code of my Ansible script:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  ansible_become_pass: "{{ pass }}"
  tasks:
    - name: Creates directory to keep files on the server
      file: path=/home/{{ user }}/fabric_shell state=directory
    - name: Move sh file to remote
      copy:
        src: /home/pankaj/my_ansible_scripts/normal_script/installation/install.sh
        dest: /home/{{ user }}/fabric_shell/install.sh
    - name: Execute the script
      command: sh /home/{{ user }}/fabric_shell/install.sh
      become: yes
I am running the playbook with the command:
ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238 pass=Welcome01"
But I don't know why I am getting the error:
ERROR! 'ansible_become_pass' is not a valid attribute for a Play
The error appears to have been in '/home/pankaj/go/src/shell_code/send_run_shell.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- hosts: "{{ host }}"
  ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
    with_items:
      - {{ foo }}
Should be written as:
    with_items:
      - "{{ foo }}"
Please guide me on what I am doing wrong.
Thanks in advance.
ansible_become_pass is a connection parameter which you can set as a variable:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  vars:
    ansible_become_pass: "{{ pass }}"
  tasks:
    # ...
That said, you can move remote_user to variables too (refer to the whole list of connection parameters), save them in a separate host_vars or group_vars file, and encrypt that file with Ansible Vault.
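For example, a minimal sketch of that setup, assuming a hypothetical inventory group named webservers (values taken from the question):

# group_vars/webservers.yml (hypothetical group name)
ansible_user: sakshi
ansible_become_pass: Welcome01

Then encrypt the file and run the playbook with the vault password:

ansible-vault encrypt group_vars/webservers.yml
ansible-playbook send_run_shell.yml --ask-vault-pass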
Take a look at this thread and the Ansible docs page. I propose to use become_user this way:
- hosts: all
  tasks:
    - include_tasks: task/java_tomcat_install.yml
      when: activity == 'Install'
      become: yes
      become_user: "{{ aplication_user }}"
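A hedged usage example for this pattern (site.yml and the tomcat value are placeholders), passing the variables the play expects as extra vars:

ansible-playbook site.yml --extra-vars "activity=Install aplication_user=tomcat"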
Try not to use pass=Welcome01:
When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option --ask-pass. If using sudo features and when sudo requires a password, also supply --ask-become-pass (previously --ask-sudo-pass which has been deprecated).
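For example, instead of putting the password on the command line, you could let Ansible prompt for it (a sketch reusing the playbook from the question):

ansible-playbook send_run_shell.yml --ask-pass --ask-become-pass --extra-vars "user=sakshi host=192.168.0.238"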

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to combine each VM's IP (collected using gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append the pair to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is: on the console, the lineinfile module reports that the line has been added each time the task runs for a node and is delegated to localhost, yet when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module as suggested in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with the redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0 with the default ansible.cfg (no modifications).
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrongly?
I tried to reproduce your issue and it did not happen for me, so I suspect this is a problem with your version of Ansible; try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably an issue with file locking that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, each host delegation writes to its own file (here using the inventory_hostname value). Therefore, there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
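A minimal tpl.j2 could contain just the line the earlier examples write (a sketch, not part of the original answer):

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}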

List in Ansible variable

I have an Ansible playbook which I want to use as a generic playbook (essentially the same for all hosts).
Is there any possible way to create a list inside the vars: section and iterate over it inside the playbook, so that I only have to edit my vars section or var.yml file?
Please find below the relevant part of my desired playbook:
- name: Test playbook
  hosts: all
  remote_user: root
  vars:
    list_dict3:
      - packages: [ 'python-setuptools', 'python-dev', 'libfuzzy-dev', 'libffi-dev', 'screen' ]
  tasks:
    - name: Accessing list of dictionary
      apt: pkg={{ item.packages }} state=present
      with_items: list_dict3
Not sure what you really want, because your variable structure is very strange, but to make your apt task correct, you should write:
- name: Accessing list of dictionary
  apt:
    name: "{{ item }}"
    state: present
  with_items: "{{ list_dict3[0].packages }}"

How to prompt user for a target host in Ansible?

I want to write a bootstrapper playbook for new machines in Ansible which will reconfigure the network settings. At the time of the first execution the target machines will have a DHCP-assigned address.
The user who is supposed to execute the playbook knows the assigned IP address of a new machine. I would like to prompt the user for this value.
The vars_prompt section allows getting input from the user; however, it is defined at the play level alongside hosts, effectively preventing the host address from being the prompted value.
Is this possible without using a wrapper script that modifies the inventory file?
The right way to do this is to create a dynamic host with add_host and place it in a new group, then start a new play that targets that group. That way, if you have other connection vars that need to be set ahead of time (credentials/keys/etc) you could set them on an empty group in inventory, then add the host to it dynamically. For example:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ target_host }}"
        groups: dynamically_created_hosts

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
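Run it as usual; no inventory entry is needed for the prompted host (bootstrap.yml is a placeholder filename):

ansible-playbook bootstrap.yml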
You could pass it with extra-vars instead.
Simply make your hosts section a variable such as {{ hosts_prompt }} and then pass the host on the command line like so:
ansible-playbook -i inventory/environment playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
Or if you are using the default inventory file location of /etc/ansible/hosts you could simply use:
ansible-playbook playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
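The playbook head that goes with this would simply be (a sketch following the description above):

- hosts: "{{ hosts_prompt }}"
  tasks:
    - debug: msg="do things on target host here"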
Adding to Matt's answer for multiple hosts.
An example input would be 192.0.2.10,192.0.2.11
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: dynamically_created_hosts
      with_items: "{{ target_host.split(',') }}"

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
Disclaimer: the accepted answer offers the best solution to the problem. While this one works, it is based on a hack, and I leave it here as a reference.
I found out it was possible to use a currently undocumented hack (credit to Bruce P for pointing me to the post) that turns the value of the -i / --inventory parameter into an ad hoc list of hosts (reference). With just the hostname/IP address and a trailing comma (like below), it refers to a single host without the need for an inventory file to exist.
Command:
ansible-playbook -i "192.168.1.21," playbook.yml
And accordingly playbook.yml can be run against all hosts (which in the above example will be equal to a single host 192.168.1.21):
- hosts: all
The list might contain more than one IP address: -i "192.168.1.21,192.168.1.22"
Adding to Jacob's and Matt's examples, with the inclusion of a username and password prompt:
---
- hosts: localhost
  pre_tasks:
    - name: verify_ansible_version
      assert:
        that: "ansible_version.full is version_compare('2.10.7', '>=')"
        msg: "Error: You must update Ansible to at least version 2.10.7 to run this playbook..."
  vars_prompt:
    - name: target_hosts
      prompt: |
        Enter Target Host IP[s] or Hostname[s] (comma separated)
        (example: 1.1.1.1,myhost.example.com)
      private: false
    - name: username
      prompt: Enter Target Host[s] Login Username
      private: false
    - name: password
      prompt: Enter Target Host[s] Login Password
      private: true
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: host_groups
      with_items:
        - "{{ target_hosts.split(',') }}"
    - add_host:
        name: login
        username: "{{ username }}"
        password: "{{ password }}"

- hosts: host_groups
  remote_user: "{{ hostvars['login']['username'] }}"
  vars:
    ansible_password: "{{ hostvars['login']['password'] }}"
    ansible_become: yes
    ansible_become_method: sudo
    ansible_become_pass: "{{ hostvars['login']['password'] }}"
  roles:
    - my_role
