Ansible Playbook Variable Prompt Not Working [duplicate]

I have distilled a playbook that has three plays. The goal is to collect the database password from a prompt in one play and then use the same password in the other two plays.
---
- name: database password
  hosts:
    - webservers
    - dbservers
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"

- hosts: dbservers
  tasks:
    - command: echo {{ db_password | mandatory }}

- hosts: webservers
  tasks:
    - command: echo {{ db_password | mandatory }}
It fails as shown below.
Enter Database Password for database user root [root]:
PLAY [database password] ******************************************************
GATHERING FACTS ***************************************************************
ok: [vc-dev-1]
PLAY [dbservers] **************************************************************
GATHERING FACTS ***************************************************************
ok: [vc-dev-1]
TASK: [command echo {{db_password | mandatory}}] ***************************
fatal: [vc-dev-1] => One or more undefined variables: 'db_password' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #.../playbook2.retry
vc-dev-1 : ok=3 changed=0 unreachable=1 failed=0

I have found the following workaround using set_fact to assign the variable entered by the user to a variable with playbook scope. It seems that vars_prompt variables are not like facts and other variables: their scope is restricted to the play that prompts for them, not the entire playbook. I am not sure if this is a feature or a bug.
- name: database password
  hosts:
    - webservers
    - dbservers
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"
  tasks:
    - set_fact:
        db_root_password: "{{ db_password }}"

- hosts: dbservers
  tasks:
    - command: echo {{ db_root_password | mandatory }}

- hosts: webservers
  tasks:
    - command: echo {{ db_root_password | mandatory }}

Improving on gae123's answer: if your hosts are added dynamically, it will not be possible to set the fact on the existing group of servers. In that case, localhost can be used to set and retrieve it.
---
- name: database password
  hosts: localhost
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"
  tasks:
    - set_fact:
        db_root_password: "{{ db_password }}"

- hosts: dbservers
  vars:
    - db_root_password: "{{ hostvars['localhost']['db_root_password'] }}"
  tasks:
    - command: echo {{ db_root_password | mandatory }}

- hosts: webservers
  vars:
    - db_root_password: "{{ hostvars['localhost']['db_root_password'] }}"
  tasks:
    - command: echo {{ db_root_password | mandatory }}

I ended up using user3037143's answer for my localhost sudo password problem:
---
- hosts: localhost
  tags:
    - always
  vars_prompt:
    - name: sudo_password
      prompt: "Sudo password for localhost ONLY"
  tasks:
    - set_fact: ansible_become_password={{ sudo_password }}
      no_log: true
Now it's shared across every included playbook I have.
I'm on Ansible > 2.
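For reference, here is a minimal sketch of how that carries across imported playbooks; the filenames are illustrative and assume the prompt play above is saved as ask-become-pass.yml:
# site.yml (hypothetical): facts set on localhost persist for the whole run,
# so any later imported play that targets localhost sees ansible_become_password
- import_playbook: ask-become-pass.yml
- import_playbook: configure-localhost.yml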

Related

ansible, execute a task only if another specific host is also in the playbook run

I have a little problem I can't solve. I am writing an update/reboot playbook for my Linux servers and I want to make sure that a task is executed only if another host is in the same playbook run.
For example: stop the app on the app server when the database server is going to be rebooted.
- hosts: project-live-app01
  tasks:
    - name: stop app before rebooting db servers
      systemd:
        name: app
        state: stopped
      when: <<< project-live-db01 is in this ansible run >>>

- hosts: dbservers
  serial: 1
  tasks:
    - name: Unconditionally reboot the machine with all defaults
      reboot:
        post_reboot_delay: 20
        msg: "Reboot initiated by Ansible"

- hosts: important_servers:!dbservers
  serial: 1
  tasks:
    - name: Unconditionally reboot the machine with all defaults
      reboot:
        post_reboot_delay: 20
        msg: "Reboot initiated by Ansible"
I want to use the same playbook to reboot hosts, and if I --limit the execution to only some hosts, especially not the db server, then I don't want the app to be stopped. I am trying to create a generic playbook for all my projects, which only executes tasks if certain servers are affected by the playbook run.
Is there any way to do that?
Thank you and have a great day!
Cheers, Ringo
It would be possible to create a dictionary with the structure of the project, e.g. in group_vars
shell> cat group_vars/all
project_live:
  app01:
    dbs: [db01, db09]
  app02:
    dbs: [db02, db09]
  app03:
    dbs: [db03, db09]
Put all groups of the DB servers you want to use into the inventory, e.g.
shell> cat hosts
[dbserversA]
db01
db02
[dbserversB]
db02
db03
[dbserversC]
db09
Then the playbook below demonstrates the scenario
shell> cat playbook.yml
---
- name: Stop app before rebooting db servers
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Stop app on {{ item.key }}"
      loop: "{{ project_live|dict2items }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0

- name: Reboot db servers
  hosts: "{{ dbservers }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
For example
shell> ansible-playbook -i hosts playbook.yml -e dbservers=dbserversA
PLAY [Stop app before rebooting db servers] ***********************
msg: Stop app on app01
msg: Stop app on app02
PLAY [Reboot db servers] *******************************************
msg: Reboot db01
msg: Reboot db02
How can I stop services on app* when the play is running on localhost? Either use delegate_to, or create a dynamic group with add_host and use it in the next play, e.g.
shell> cat playbook.yml
---
- name: Create group appX
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ item.key }}"
        groups: appX
      loop: "{{ project_live|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0

- name: Stop app before rebooting db servers
  hosts: appX
  gather_facts: false
  tasks:
    - debug:
        msg: "Stop app on {{ inventory_hostname }}"

- name: Reboot db servers
  hosts: "{{ dbservers }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
gives
shell> ansible-playbook -i hosts playbook.yml -e dbservers=dbserversA
PLAY [Create group appX] ******************************************
ok: [localhost] => (item=app01)
ok: [localhost] => (item=app02)
skipping: [localhost] => (item=app03)
PLAY [Stop app before rebooting db servers] ************************
msg: Stop app on app01
msg: Stop app on app02
PLAY [Reboot db servers] *******************************************
msg: Reboot db01
msg: Reboot db02
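For completeness, a rough sketch of the delegate_to option mentioned above, reusing the same project_live dictionary and dbservers extra-var; the service name app is taken from the question and may need adjusting:
- name: Stop app before rebooting db servers
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Stop app on affected app servers
      systemd:
        name: app
        state: stopped
      delegate_to: "{{ item.key }}"
      loop: "{{ project_live|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0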

`remote_user` is ignored in playbooks and roles

I have defined the following in my ansible.cfg
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
remote_user = ansible
However, I have a playbook bootstrap.yaml where I connect as root rather than ansible:
---
- hosts: "{{ target }}"
  become: no
  gather_facts: false
  remote_user: root
  vars:
    os_family: "{{ osfamily }}"
  roles:
    - role: papanito.bootstrap
However, it seems that remote_user: root is ignored, as I always get a connection error because it uses the user ansible instead of root for the SSH connection:
fatal: [node001]: UNREACHABLE! => {"changed": false,
"msg": "Failed to connect to the host via ssh:
ansible@node001: Permission denied (publickey,password).",
"unreachable": true}
The only workaround for this I could find is calling the playbook with -e ansible_user=root. But this is not convenient as I want to call multiple playbooks with the site.yaml, where the first playbook has to run with ansible_user root, whereas the others have to run with ansible
- import_playbook: playbooks/bootstrap.yml
- import_playbook: playbooks/networking.yml
- import_playbook: playbooks/monitoring.yml
Any suggestions what I am missing or how to fix it?
Q: "remote_user: root is ignored"
A: The playbook works as expected
- hosts: test_01
  gather_facts: false
  become: no
  remote_user: root
  tasks:
    - command: whoami
      register: result
    - debug:
        var: result.stdout
gives
"result.stdout": "root"
But, the variable can be overridden in the inventory. For example with the inventory
$ cat hosts
all:
  hosts:
    test_01:
  vars:
    ansible_connection: ssh
    ansible_user: admin
the result is
"result.stdout": "admin"
Double-check the inventory with the command
$ ansible-inventory --list
Notes
It might also be necessary to double-check the role - role: papanito.bootstrap
See Controlling how Ansible behaves: precedence rules
I faced a similar issue, where an EC2 instance required a different username to SSH with. You could try the example below:
- import_playbook: playbooks/bootstrap.yml
  vars:
    ansible_ssh_user: root
Try this: instead of remote_user: root, use remote_user: ansible and additionally set become: yes, become_user: root, and become_method: sudo (or su).
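For example, a minimal sketch of that play header (untested; the target variable and role name are taken from the question above):
- hosts: "{{ target }}"
  gather_facts: false
  remote_user: ansible
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - role: papanito.bootstrap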

In ansible variable for hosts from vars_prompt no longer accepted [duplicate]

I want to write a bootstrapper playbook for new machines in Ansible which will reconfigure the network settings. At the time of the first execution the target machines will have a DHCP-assigned address.
The user who is supposed to execute the playbook knows the assigned IP address of a new machine. I would like to prompt the user for its value.
vars_prompt allows getting input from the user; however, it is defined at the play level alongside hosts, which effectively prevents using the prompted value as the hosts entry.
Is this possible without using a wrapper script that modifies the inventory file?
The right way to do this is to create a dynamic host with add_host and place it in a new group, then start a new play that targets that group. That way, if you have other connection vars that need to be set ahead of time (credentials/keys/etc) you could set them on an empty group in inventory, then add the host to it dynamically. For example:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ target_host }}"
        groups: dynamically_created_hosts

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
You could pass it with extra-vars instead.
Simply make your hosts section a variable such as {{ hosts_prompt }} and then pass the host on the command line like so:
ansible-playbook -i inventory/environment playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
Or if you are using the default inventory file location of /etc/ansible/hosts you could simply use:
ansible-playbook playbook.yml --extra-vars "hosts_prompt=192.168.1.10"
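The playbook itself would then reference the variable in its hosts line, for example (a minimal sketch, with a placeholder task):
- hosts: "{{ hosts_prompt }}"
  tasks:
    - debug: msg="do things on target host here"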
Adding to Matt's answer for multiple hosts.
An example input would be 192.0.2.10,192.0.2.11
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: target_host
      prompt: please enter the target host IP
      private: no
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: dynamically_created_hosts
      with_items: "{{ target_host.split(',') }}"

- hosts: dynamically_created_hosts
  tasks:
    - debug: msg="do things on target host here"
Disclaimer: The accepted answer offers the best solution to the problem. While this one is working it is based on a hack and I leave it as a reference.
I found out it was possible to use a currently undocumented hack (credit to Bruce P for pointing me to the post) that turns the value of the -i / --inventory parameter into an ad hoc list of hosts (reference). With just the hostname/IP address and a trailing comma (like below) it refers to a single host without the need for the inventory file to exist.
Command:
ansible-playbook -i "192.168.1.21," playbook.yml
And accordingly playbook.yml can be run against all hosts (which in the above example will be equal to a single host 192.168.1.21):
- hosts: all
The list might contain more than one ip address -i "192.168.1.21,192.168.1.22"
Adding to Jacob's and Matt's examples, with the inclusion of a username and password prompt:
---
- hosts: localhost
  pre_tasks:
    - name: verify_ansible_version
      assert:
        that: "ansible_version.full is version_compare('2.10.7', '>=')"
        msg: "Error: You must update Ansible to at least version 2.10.7 to run this playbook..."
  vars_prompt:
    - name: target_hosts
      prompt: |
        Enter Target Host IP[s] or Hostname[s] (comma separated)
        (example: 1.1.1.1,myhost.example.com)
      private: false
    - name: username
      prompt: Enter Target Host[s] Login Username
      private: false
    - name: password
      prompt: Enter Target Host[s] Login Password
      private: true
  tasks:
    - add_host:
        name: "{{ item }}"
        groups: host_groups
      with_items:
        - "{{ target_hosts.split(',') }}"
    - add_host:
        name: login
        username: "{{ username }}"
        password: "{{ password }}"

- hosts: host_groups
  remote_user: "{{ hostvars['login']['username'] }}"
  vars:
    ansible_password: "{{ hostvars['login']['password'] }}"
    ansible_become: yes
    ansible_become_method: sudo
    ansible_become_pass: "{{ hostvars['login']['password'] }}"
  roles:
    - my_role

Ansible Playbook - Parallel Execution

I have 3 tasks in a playbook. My requirement is to complete the first task and then have the second and third tasks happen in parallel. As these three tasks will by default happen one after the other, is there a way to make the second and third tasks run in parallel once the first one is done?
- hosts: conv4
  remote_user: username
  tasks:
    - name: Start Services
      script: app-stop.sh
      register: console

- hosts: patchapp
  remote_user: username
  become_user: username
  become_method: su
  tasks:
    - name: Stop APP Services
      script: stopapp.sh
      register: console
    - debug: var=console.stdout_lines

- hosts: patchdb
  remote_user: username
  become_user: username
  become_method: su
  tasks:
    - name: Stop DB Services
      script: stopdb.sh
      register: console
    - debug: var=console.stdout_lines
I need to run the Start Services task first, and once it is complete I need to run the Stop APP Services and Stop DB Services tasks in parallel.
1. Import playbook works fine.
play.yml
- import_playbook: play1.yml
- import_playbook: play2.yml
play1.yml
- hosts: localhost
  tasks:
    - debug:
        msg: 'play1: {{ ansible_host }}'
play2.yml
- hosts:
    - vm2
    - vm3
  tasks:
    - debug:
        msg: 'play2: {{ ansible_host }}'
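Applied to the question's scenario, a rough sketch (filenames are illustrative; strategy: free lets the app and db hosts run their tasks independently once the first play has finished):
# site.yml
- import_playbook: start-services.yml   # play 1: runs to completion on conv4 first
- import_playbook: stop-services.yml    # play 2: patchapp and patchdb in one play

# stop-services.yml
- hosts: patchapp:patchdb
  remote_user: username
  become_user: username
  become_method: su
  strategy: free
  tasks:
    - name: Stop APP Services
      script: stopapp.sh
      when: inventory_hostname in groups['patchapp']
    - name: Stop DB Services
      script: stopdb.sh
      when: inventory_hostname in groups['patchdb']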
2. Loop over include_tasks and delegate_to is a no-go.
An option would be to loop over include_tasks and use delegate_to inside the included task (see below). But there is an unsolved bug: delegate_to with remote_user on Ansible 2.0.
Workflows in Ansible Tower require a license; they are only available to those with Enterprise-level licenses.
play.yml
- name: 'Test loop delegate_to'
  hosts: localhost
  gather_facts: no
  vars:
    my_servers:
      - vm2
      - vm3
  tasks:
    - name: "task 1"
      debug:
        msg: 'Task1: Running at {{ ansible_host }}'
    - include_tasks: task2.yml
      loop: "{{ my_servers }}"
task2.yml
- debug:
    msg: 'Task2: Running at {{ ansible_host }}'
  delegate_to: "{{ item }}"
ansible-playbook play.yml
PLAY [Test loop delegate_to]
TASK [task 1]
ok: [localhost] =>
msg: 'Task1: Running at 127.0.0.1'
TASK [include_tasks]
included: ansible-examples/examples/example-014/task2.yml for localhost
included: ansible-examples/examples/example-014/task2.yml for localhost
TASK [debug]
ok: [localhost -> vm2] =>
msg: 'Task2: Running at vm2'
TASK [debug]
ok: [localhost -> vm3] =>
msg: 'Task2: Running at vm3'
PLAY RECAP
localhost : ok=5 changed=0 unreachable=0 failed=0
