The script, running on a Linux host, should call some Windows hosts holding Oracle databases. Each Oracle database is registered in DNS under the name "db-[ORACLE_SID]".
Let's say you have a database with the ORACLE_SID TEST02; it can be resolved as db-TEST02.
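The naming scheme is simple: take the comma-separated SID list and prefix each entry with "db-". A quick Python sketch of the mapping the playbook performs later with split and add_host (the variable names are illustrative):

```python
# Mirror of the playbook logic: split the comma-separated "databases"
# extra-var and derive the DNS name "db-<SID>" for each database.
databases = "TEST01,TEST02,TEST03"  # value passed via -e "databases=..."
hostnames = ["db-" + sid for sid in databases.split(",")]
print(hostnames)  # ['db-TEST01', 'db-TEST02', 'db-TEST03']
```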
The complete script does some more stuff, but this example is sufficient to explain the problem.
The db-[SID] hostnames must be added as dynamic hosts so that the processing can be parallelized.
The problem is that oracle_databases is not passed to the second play. It works if I change the hosts from windows to localhost, but I need to analyze something first and get some data from the Windows hosts, so that is not an option.
Here is the script:
---
# ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03"
- hosts: windows
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_port: 5985
    ansible_winrm_transport: kerberos
    ansible_winrm_kerberos_delegation: true
  tasks:
    - set_fact:
        database: "{{ databases.split(',') }}"
    - name: Add databases as hosts, to parallelize the shutdown process
      add_host:
        name: "db-{{ item }}"
        groups: oracle_databases
      loop: "{{ database | list }}"
    ##### just to check what is in oracle_databases
    - name: show the content of oracle_databases
      debug:
        msg: "{{ item }}"
      with_inventory_hostnames:
        - oracle_databases

- hosts: oracle_databases
  gather_facts: true
  tasks:
    - debug:
        msg:
          - "Hosts, on which the playbook is running: {{ ansible_play_hosts }}"
        verbosity: 1
My inventory file is still small, but there will be more Windows hosts in the future:
[adminsw1#obelix oracle_change_home]$ cat inventory
[local]
localhost
[windows]
windows68
And the output:
[adminsw1#obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02"
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
/usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
PLAY [windows] *****************************************************************************************************************************
TASK [set_fact] ****************************************************************************************************************************
ok: [windows68]
TASK [Add databases as hosts, to parallelize the shutdown process] *************************************************************************
changed: [windows68] => (item=TEST01)
changed: [windows68] => (item=TEST02)
TASK [show the content of oracle_databases] ************************************************************************************************
ok: [windows68] => (item=db-TEST01) => {
"msg": "db-TEST01"
}
ok: [windows68] => (item=db-TEST02) => {
"msg": "db-TEST02"
}
PLAY [oracle_databases] ********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************
windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It might be that Ansible is not parsing the updated inventory, or that the host names are being malformed as the inventory is updated.
In this scenario, you can add the -vv or -vvvv flag to your ansible-playbook command to get extra logging.
This will give you a complete picture of what Ansible is actually doing as it tries to parse the hosts.
I found out what the problem was. The playbook run is restricted to the host "windows68" and therefore can't be run on the hosts added by the dynamic inventory.
It works this way:
[adminsw1#obelix oracle_change_home]$ ansible-playbook para.yml -l windows68,oracle_databases -e "databases=TEST01,TEST02"
Related
I have the following playbook:
- name: Get-IP
  hosts: webservers
  tasks:
    - debug: var=ansible_all_ipv4_addresses
      register: foo
    - debug:
        var: foo
    - local_action: lineinfile line= {{ foo }} path=/root/file2.txt
I added the debug to make sure the variable foo is carrying the IP address, and it works correctly, but when I try to save it to a file on the local disk, the file remains empty. If I delete the file, I get an error that the file does not exist.
What's wrong with my playbook? Thanks.
Write the lines in a loop. For example,
shell> cat pb.yml
- hosts: webservers
  tasks:
    - debug:
        var: ansible_all_ipv4_addresses
    - lineinfile:
        create: true
        dest: /tmp/ansible_all_ipv4_addresses.webservers
        line: "{{ item }} {{ hostvars[item].ansible_all_ipv4_addresses|join(',') }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
gives
shell> ansible-playbook pb.yml
PLAY [webservers] ****************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
ok: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
ansible_all_ipv4_addresses:
- 10.1.0.61
ok: [test_13] =>
ansible_all_ipv4_addresses:
- 10.1.0.63
TASK [lineinfile] ****************************************************************************
changed: [test_11 -> localhost] => (item=test_11)
changed: [test_11 -> localhost] => (item=test_13)
PLAY RECAP ***********************************************************************************
test_11: ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
shell> cat /tmp/ansible_all_ipv4_addresses.webservers
test_11 10.1.0.61
test_13 10.1.0.63
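The line template just concatenates each inventory hostname with its joined address list, pulled from hostvars. Roughly the same thing in Python terms (the hostvars stand-in below is illustrative):

```python
# Rough Python equivalent of the Jinja2 line template:
# "{{ item }} {{ hostvars[item].ansible_all_ipv4_addresses|join(',') }}"
hostvars = {
    "test_11": {"ansible_all_ipv4_addresses": ["10.1.0.61"]},
    "test_13": {"ansible_all_ipv4_addresses": ["10.1.0.63"]},
}
lines = [
    f"{host} {','.join(hostvars[host]['ansible_all_ipv4_addresses'])}"
    for host in hostvars
]
print(lines)
```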
According to your description and example, I understand your use case to be: gather Ansible facts about the remote nodes and cache the facts on the control node.
If a fact cache plugin is enabled (yaml here; the jsonfile plugin works as well)
~/test$ grep fact_ ansible.cfg
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 129600
a minimal example playbook
---
- hosts: test
  become: false
  gather_facts: true
  gather_subset:
    - "!all"
    - "!min"
    - "network"
  tasks:
    - name: Show Facts network
      debug:
        msg: "{{ ansible_facts }}"
will result in local files like
~/test$ grep -A1 _ipv4 /tmp/ansible/facts_cache/test.example.com
ansible_all_ipv4_addresses:
- 192.0.2.1
--
ansible_default_ipv4:
address: 192.0.2.1
--
...
Following this approach means there is no need to re-implement already existing functionality: less code that is easier to maintain, fewer chances for errors, and so on.
Furthermore,
Fact caching can improve performance. If you manage thousands of hosts, you can configure fact caching to run nightly, then manage configuration on a smaller set of servers periodically throughout the day. With cached facts, you have access to variables and information about all hosts even when you are only managing a small number of servers.
Similar Q&A
How to gather IP addresses of Remote Nodes?
I have ansible installed on the Windows Subsystem for Linux. This version is 2.9.6.
I also have an ansible tower that is version 3.7.2 which has Ansible version 2.9.27.
I basically use the Ansible installation on my WSL to play with and debug playbooks until they work. Once they are working, I upload them to my Git repository and pull them into Ansible Tower for execution.
I am still fairly new to Ansible, so perhaps this is a very simple issue. I have a playbook that runs just fine in my Ansible (2.9.6) WSL environment.
When I run the same playbook in Ansible Tower, it doesn't run any tasks.
The playbook is fairly simple. I want to use it to change the password on a local Windows account. The playbook is in a file named change_user_password.yml. The contents are shown below:
- name: Change user password
  hosts: all
  tasks:
    - name: Include OS-specific variables.
      include_vars: "{{ ansible_os_family }}.yml"
    - name: Print OS Family
      debug:
        msg: "Ansible OS family is {{ ansible_os_family }}"
    - name: Print uname
      debug:
        msg: "Uname variable is {{ uname }}"
    - name: Print newpass
      debug:
        msg: "Newpass variable is {{ newpass }}"
    - name: Change pwd (Redhat).
      ping:
      when: ansible_os_family == 'RedHat'
    - name: Change pwd (Debian).
      ping:
      when: ansible_os_family == 'Debian'
    - name: Change pwd (Windows).
      win_user:
        name: "{{ uname }}"
        password: "{{ newpass }}"
      when: ansible_os_family == 'Windows'
When running it with ansible-playbook on the command line in my WSL environment, I pass in --extra-vars for the uname and newpass variables as shown below:
ansible-playbook -i ../hosts.ini --limit cssvr-prod change_user_password.yml --extra-vars="uname=myadmin newpass=test1234TEST"
Output typically looks like this:
PLAY [Change user password] ***************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************
ok: [cssvr-prod]
TASK [Include OS-specific variables.] *****************************************************************************************************************************************************************
ok: [cssvr-prod]
TASK [Print OS Family] ********************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Ansible OS family is Windows"
}
TASK [Print uname] ************************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Uname variable is myadmin"
}
TASK [Print newpass] **********************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Newpass variable is test1234TEST"
}
TASK [Change pwd (Redhat).] ***************************************************************************************************************************************************************************
skipping: [cssvr-prod]
TASK [Change pwd (Debian).] ***************************************************************************************************************************************************************************
skipping: [cssvr-prod]
TASK [Change pwd (Windows).] **************************************************************************************************************************************************************************
changed: [cssvr-prod]
PLAY RECAP ********************************************************************************************************************************************************************************************
cssvr-prod : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
When I run this playbook from Ansible Tower, I add uname and newpass as extra variables in the Extra Variables box of the job template, and I add the cssvr-prod host in the Limit box. When I run it, no tasks are executed. NOTE: The warning below is expected; the inventory and groups are imported from Azure, and some of our Azure resource groups have hyphens in their names, which apparently is illegal for a group name in the Ansible hosts file.
Using /etc/ansible/ansible.cfg as config file
SSH password:
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
PLAY [all] *********************************************************************
PLAY RECAP *********************************************************************
I'm pulling what little hair I have left out trying to figure out why the code behaves this way on Tower.
Network guy pretending to "code"...(insert laughter/shame here).
I am trying to create a playbook covering STIG requirements. I would like to run this playbook against network devices and then easily copy the results into our .ckl files.
In case it's not completely and utterly apparent, I have been using Ansible for less than a week.
* First, I have Ansible register the output of a command.
* Then I would like Ansible to validate that certain words or phrases appear in the registered output.
* Then, of course, have a debug message state "Not a Finding {insert register here}" or "Open {insert register here}".
I cannot seem to get the when: "'{this phrase}' (is (or not) in) register.stdout" condition to work.
Using Ansible 2.9
- hosts: ios
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Gather Username Configuration Lines
      ios_command:
        commands: show run | i username localadmin
      register: output
    - debug:
        msg: "{{ output.stdout }}"
    - name: Username has correct privilege level
      block:
        - debug:
            msg: "{{ output.stdout }}"
      when: "'privilege 15' in output.stdout"
Output:
$ ansible-playbook ciscouserprivcheck.yml -u localadmin -k
SSH password:
PLAY [ios] *************************************************************************************************************************************
TASK [Gather Username Configuration Lines] *****************************************************************************************************
ok: [Cisco1]
TASK [debug] ***********************************************************************************************************************************
ok: [Cisco1] =>
msg:
- username localadmin privilege 15 secret 5 $1$o1t2$VoZhNwm3bMfsTJ6e8RIdl1
TASK [debug] ***********************************************************************************************************************************
skipping: [Cisco1]
PLAY RECAP *************************************************************************************************************************************
Cisco1 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
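One likely cause of the skip: ios_command returns stdout as a list with one string per command, so the `in` test performs list membership (exact element match) rather than substring matching. Plain Python shows the difference (the stdout value is taken from the debug output above):

```python
# output.stdout from ios_command is a LIST with one string per command,
# so "in" tests list membership, not substring containment.
stdout = ["username localadmin privilege 15 secret 5 $1$o1t2$VoZhNwm3bMfsTJ6e8RIdl1"]

print("privilege 15" in stdout)      # False: no list element equals "privilege 15"
print("privilege 15" in stdout[0])   # True: substring test against the first line
```

Testing against `output.stdout[0]` (or joining the list first) makes the condition behave as intended.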
When you don't have any hosts in the inventory, running a playbook produces only a warning:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Is there a way to make that an error instead of a warning?
I found out that there is this parameter in ansible.cfg:
[inventory]
unparsed_is_failed = True
but it only returns an error when the inventory file you are trying to use does not exist; it doesn't look at the content.
One simple solution is:
Create the playbook "main.yml" like:
---
# Check first if the supplied host pattern {{ RUNNER.HOSTNAME }} matches the inventory
# or otherwise force the playbook to fail (for Jenkins)
- hosts: localhost
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - name: "Hostname validation | If OK, it will skip"
      fail:
        msg: "{{ RUNNER.HOSTNAME }} not found in the inventory group or hosts file {{ ansible_inventory_sources }}"
      when: RUNNER.HOSTNAME not in hostvars

# The main playbook starts
- hosts: "{{ RUNNER.HOSTNAME }}"
  vars_files:
    - "{{ jsonfilename }}"
  tasks:
    - Your tasks
    ...
Put your host variables in a json file "var.json":
{
  "RUNNER": {
    "HOSTNAME": "hostname-to-check"
  },
  "VAR1": {
    "CIAO": "CIAO"
  }
}
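The validation play works because hostvars is keyed by inventory hostname, so the when: condition is a plain dictionary-membership test. A minimal Python analogue (the inventory host names here are made up):

```python
import json

# Contents of var.json as in the example above
var_json = '{"RUNNER": {"HOSTNAME": "hostname-to-check"}}'
extra_vars = json.loads(var_json)

# hostvars behaves like a dict with one key per inventory host
hostvars = {"web01": {}, "db01": {}}

target = extra_vars["RUNNER"]["HOSTNAME"]
# Equivalent of: when: RUNNER.HOSTNAME not in hostvars
if target not in hostvars:
    print(f"{target} not found in the inventory")
```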
Run the command:
ansible-playbook main.yml --extra-vars="jsonfilename=var.json"
You can also adapt this solution as you like and pass the hostname directly with the command
ansible-playbook -i hostname-to-check, my_playbook.yml
but in this last case remember to put in your playbook:
hosts: all
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Q: "Is there a way to make that Error instead of Warning?"
A: Yes, there is. Test for it in the playbook. For example,
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives with an empty inventory
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
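The condition itself is nothing more than a length check on the built-in all group. In plain Python terms (the groups dict is a stand-in for Ansible's magic variable):

```python
# With an empty inventory, Ansible's "groups" magic variable still exists,
# but its "all" group contains no hosts; the play's condition checks that.
groups = {"all": [], "ungrouped": []}        # stand-in for the magic variable
empty_inventory = len(groups["all"]) == 0    # Jinja2: groups.all|length == 0
print(empty_inventory)  # True
```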
Example of a project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
└── pb.yml
0 directories, 3 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
shell> cat hosts
shell> cat pb.yml
- hosts: localhost
  tasks:
    - fail:
        msg: "[ERROR] Empty inventory. No host available."
      when: groups.all|length == 0

- hosts: all
  tasks:
    - debug:
        msg: Playbook started
gives
shell> ansible-playbook pb.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Q: "Still I am getting a warning: [WARNING]: provided hosts list is empty, ..."
A: Feel free to turn the warning off. See LOCALHOST_WARNING.
shell> ANSIBLE_LOCALHOST_WARNING=false ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [fail] **********************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "[ERROR] Empty inventory. No host available."}
PLAY RECAP ***********************************************************************************
localhost: ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Is there a way to make a playbook wait until a variable is defined?
To reduce the execution time of a playbook, I would like to split it into multiple playbooks and start them at the same time. Some of them need variables which are defined in the other playbooks.
Is it possible?
IMHO it's not possible. Global scope is set only by the config, environment variables, and the command line.
Other variables are shared in the scope of a play. It is possible to import multiple playbooks into one playbook with import_playbook and share variables among them. But it's not possible to run the imported playbooks asynchronously and let them wait for each other.
An option would be to use external shared memory (e.g. a database) and start such playbooks separately. For example, to share variables among the playbooks on the controller, a simple ini file would do the job.
$ cat shared-vars.ini
[global]
The playbook below
- hosts: localhost
  tasks:
    - wait_for:
        path: "{{ playbook_dir }}/shared-vars.ini"
        search_regex: "^shared_var1\\s*=(.*)"
    - debug:
        msg: "{{ lookup('ini', 'shared_var1 file=shared-vars.ini') }}"
waits for a variable shared_var1 in the file shared-vars.ini
$ ansible-playbook wait_for_var.yml
PLAY [localhost] *******************************************************
TASK [wait_for] ********************************************************
The next playbook
- hosts: localhost
  tasks:
    - ini_file:
        path: "{{ playbook_dir }}/shared-vars.ini"
        section: global
        option: shared_var1
        value: Test value set by declare_var.yml
writes the variable shared_var1 into the file shared-vars.ini
$ ansible-playbook declare_var.yml
PLAY [localhost] *******************************************************
TASK [ini_file] ********************************************************
changed: [localhost]
PLAY RECAP *************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
The first playbook, which was waiting for the variable, continues:
TASK [debug] ***********************************************************
ok: [localhost] => {
"msg": "Test value set by declare_var.yml"
}
PLAY RECAP *************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
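Taken together, declare_var.yml and wait_for_var.yml implement a poll-until-written handshake over a shared ini file. A rough standalone sketch of the same handshake in Python (file name, timeout, and polling interval are arbitrary choices):

```python
import configparser
import os
import re
import time

INI = "shared-vars.ini"

def write_shared_var(value):
    """What declare_var.yml does: write shared_var1 into the [global] section."""
    cfg = configparser.ConfigParser()
    if os.path.exists(INI):
        cfg.read(INI)
    if not cfg.has_section("global"):
        cfg.add_section("global")
    cfg.set("global", "shared_var1", value)
    with open(INI, "w") as f:
        cfg.write(f)

def wait_for_shared_var(timeout=10, interval=0.1):
    """What the wait_for task does: poll until ^shared_var1\\s*=(.*) matches."""
    pattern = re.compile(r"^shared_var1\s*=(.*)", re.MULTILINE)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(INI):
            with open(INI) as f:
                m = pattern.search(f.read())
            if m:
                return m.group(1).strip()
        time.sleep(interval)
    raise TimeoutError("shared_var1 never appeared in " + INI)

# In the real setup the writer and the waiter are separate playbooks;
# here they run in sequence just to demonstrate the handshake.
write_shared_var("Test value set by declare_var.yml")
result = wait_for_shared_var()
print(result)
```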