I defined an envs.sh script inside the /etc/profile.d/ folder.
When executing an ansible-playbook, I'm trying to get the value of this env var, but it throws an error instead:
Ansible test:
debug: msg="{{ ansible_env.NGN_VAL }} is an environment variable"
Error:
fatal: [xxx.yyy.zzz.kkk] => One or more undefined variables: 'dict object' has no attribute 'NGN_VAL'
FATAL: all hosts have already failed -- aborting
Why doesn't Ansible execute the scripts inside that folder? When I connect through SSH, I can echo the variable and it displays its value. How do I set remote environment variables and obtain them during ansible execution?
Thanks
Test with this if you want to catch the environment variable.
[jenkins@scsblnx-828575 jenkins]$ cat mypass
export MYPASSWD=3455637
[jenkins@scsblnx-828575 jenkins]$ cat test.yml
- hosts: all
  user: jenkins
  tasks:
    - name: Test variables.
      shell: source /apps/opt/jenkins/mypass && echo $MYPASSWD
      register: myenvpass
    - debug: var=myenvpass.stdout
[jenkins@scsblnx-828575 jenkins]$ ansible-playbook -i hosts test.yml
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [127.0.0.1]
TASK [Test variables.] *********************************************************
changed: [127.0.0.1]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"myenvpass.stdout": "3455637"
}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=1 unreachable=0 failed=0
In my opinion, you can accomplish this using a mix of Ansible Vault and system environment variables.
Create and encrypt a file with ansible vault (this is where you put the content of your remote env variable):
$ ansible-vault create vars_environment.yml
Enter a password to encrypt the file, then write the content of your variable, for example:
ngn_val: supersecret
I usually load my permanent environment variables from the /etc/profile.d directory, but there are more paths:
/etc/profile.d
/etc/profile
/etc/environment
~/.profile
A good way is to write a template in your playbook, although you can also use the copy module:
---
# environment.yml
- name: Get environment variable server
  hosts: debian.siccamdb.sm
  vars:
    dest_profile_environment: /etc/profile.d/environment.sh
  vars_files:
    - vars_environment.yml
  tasks:
    - name: "Template {{ dest_profile_environment }}"
      template:
        src: environment.sh.j2
        dest: "{{ dest_profile_environment }}"
        mode: '0644'
    - name: Load env variable
      shell: ". {{ dest_profile_environment }} && echo $NGN_VAL"
      register: env_variable
    - name: Debug env variable
      debug:
        var: env_variable.stdout_lines[0]
The important thing here is the shell task: you have to source your environment file and then capture the command's output with the register keyword, which stores it in a variable.
This is the template's content, in this example the file environment.sh.j2:
export NGN_VAL={{ ngn_val }}
Type the password of the vars_environment.yml file when you run the playbook:
ansible-playbook --vault-id @prompt environment.yml
The output is the following:
PLAY [Get environment variable server] *************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************************************************************
ok: [debian.siccamdb.sm]
TASK [Template /etc/profile.d/environment.sh] *****************************************************************************************************************************************************************************************
changed: [debian.siccamdb.sm]
TASK [Load env variable] **************************************************************************************************************************************************************************************************************
changed: [debian.siccamdb.sm]
TASK [Debug env variable] *************************************************************************************************************************************************************************************************************
ok: [debian.siccamdb.sm] => {
"env_variable.stdout_lines[0]": "supersecret"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************
debian.siccamdb.sm : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Another way to accomplish this is using custom facts on your remote server. This is a good practice if you want to leave the content of your variable on the remote server and use it in playbooks without the need for Ansible Vault.
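For example, a minimal sketch of a custom fact (the file name ngn.fact and the section name env are my own choices): drop an INI-style file with a .fact extension into /etc/ansible/facts.d/ on the remote host:
# /etc/ansible/facts.d/ngn.fact
[env]
ngn_val=supersecret
After fact gathering, the value is exposed under the ansible_local variable:
- hosts: all
  gather_facts: true
  tasks:
    - name: Read the custom local fact
      debug:
        msg: "{{ ansible_local.ngn.env.ngn_val }}"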
I have Ansible installed on the Windows Subsystem for Linux; that version is 2.9.6.
I also have Ansible Tower, version 3.7.2, which has Ansible version 2.9.27.
I basically use the ansible installation on my WSL to play with and debug playbooks to get them working. Once they are working, I upload them to my Git Repository and pull them into the Ansible Tower for execution.
I am still fairly new to Ansible, so perhaps this is a very simple issue. I have a playbook that runs just fine in my Ansible (2.9.6) WSL environment.
When I run the same playbook in my Ansible Tower, it doesn't run any tasks.
The playbook is fairly simple. I want to use it to change the password on a local Windows account. The playbook is in a file named change_user_password.yml. The contents are shown below:
- name: Change user password
  hosts: all
  tasks:
    - name: Include OS-specific variables.
      include_vars: "{{ ansible_os_family }}.yml"
    - name: Print OS Family
      debug:
        msg: "Ansible OS family is {{ ansible_os_family }}"
    - name: Print uname
      debug:
        msg: "Uname variable is {{ uname }}"
    - name: Print newpass
      debug:
        msg: "Newpass variable is {{ newpass }}"
    - name: Change pwd (Redhat).
      ping:
      when: ansible_os_family == 'RedHat'
    - name: Change pwd (Debian).
      ping:
      when: ansible_os_family == 'Debian'
    - name: Change pwd (Windows).
      win_user:
        name: "{{ uname }}"
        password: "{{ newpass }}"
      when: ansible_os_family == 'Windows'
When run on the command line with ansible-playbook in my WSL environment, I pass in the --extra-vars for the uname and newpass variables as shown below:
ansible-playbook -i ../hosts.ini --limit cssvr-prod change_user_password.yml --extra-vars="uname=myadmin newpass=test1234TEST"
Output typically looks like this:
PLAY [Change user password] ***************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************
ok: [cssvr-prod]
TASK [Include OS-specific variables.] *****************************************************************************************************************************************************************
ok: [cssvr-prod]
TASK [Print OS Family] ********************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Ansible OS family is Windows"
}
TASK [Print uname] ************************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Uname variable is myadmin"
}
TASK [Print newpass] **********************************************************************************************************************************************************************************
ok: [cssvr-prod] => {
"msg": "Newpass variable is test1234TEST"
}
TASK [Change pwd (Redhat).] ***************************************************************************************************************************************************************************
skipping: [cssvr-prod]
TASK [Change pwd (Debian).] ***************************************************************************************************************************************************************************
skipping: [cssvr-prod]
TASK [Change pwd (Windows).] **************************************************************************************************************************************************************************
changed: [cssvr-prod]
PLAY RECAP ********************************************************************************************************************************************************************************************
cssvr-prod : ok=6 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
When I run this playbook from Ansible Tower, I add uname and newpass as extra variables in the Extra Variables box on the Template for this play, and I add the cssvr-prod host in the Limit box. When I run it, no tasks are run. NOTE: The warning below is expected; the inventory and groups are imported from Azure, and some of our Azure resource groups have hyphens in their names, which apparently is illegal for a group name in the Ansible hosts file.
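For reference, the Extra Variables box in Tower accepts YAML or JSON; a minimal sketch mirroring the values from the CLI run above would be:
---
uname: myadmin
newpass: test1234TEST
The Tower job output looks like this: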
Using /etc/ansible/ansible.cfg as config file
SSH password:
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
PLAY [all] *********************************************************************
PLAY RECAP *********************************************************************
I'm pulling what little hair I have left out trying to figure out why the code behaves this way on Tower.
The script, running on a Linux host, should call some Windows hosts holding Oracle databases. Each Oracle database is in DNS with the name "db-[ORACLE_SID]".
Let's say you have a database with the ORACLE_SID TEST02; it can be resolved as db-TEST02.
The complete script is doing some more stuff, but this example is sufficient to explain the problem.
The db-[SID] hostnames must be added as dynamic hosts to be able to parallelize the processing.
The problem is that oracle_databases is not passed to the new playbook. It works if I change the hosts from windows to localhost, but I need to analyze something first and get some data from the windows hosts, so this is not an option.
Here is the script:
---
# ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03"
- hosts: windows
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_port: 5985
    ansible_winrm_transport: kerberos
    ansible_winrm_kerberos_delegation: true
  tasks:
    - set_fact:
        database: "{{ databases.split(',') }}"
    - name: Add databases as hosts, to parallelize the shutdown process
      add_host:
        name: "db-{{ item }}"
        groups: oracle_databases
      loop: "{{ database | list }}"
    ##### just to check what is in oracle_databases
    - name: show the content of oracle_databases
      debug:
        msg: "{{ item }}"
      with_inventory_hostnames:
        - oracle_databases

- hosts: oracle_databases
  gather_facts: true
  tasks:
    - debug:
        msg:
          - "Hosts, on which the playbook is running: {{ ansible_play_hosts }}"
        verbosity: 1
My inventory file is quite small, but there will be more Windows hosts in the future:
[adminsw1@obelix oracle_change_home]$ cat inventory
[local]
localhost
[windows]
windows68
And the output:
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02"
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
/usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
PLAY [windows] *****************************************************************************************************************************
TASK [set_fact] ****************************************************************************************************************************
ok: [windows68]
TASK [Add databases as hosts, to parallelize the shutdown process] *************************************************************************
changed: [windows68] => (item=TEST01)
changed: [windows68] => (item=TEST02)
TASK [show the content of oracle_databases] ************************************************************************************************
ok: [windows68] => (item=db-TEST01) => {
"msg": "db-TEST01"
}
ok: [windows68] => (item=db-TEST02) => {
"msg": "db-TEST02"
}
PLAY [oracle_databases] ********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************
windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It might be possible that Ansible is not parsing the updated inventory, or that the host names are being malformed as it updates the inventory.
In this scenario, you can use the -vv or -vvvv parameter in your Ansible command to get extra logging.
This will give you a complete picture into what Ansible is actually doing as it tries to parse hosts.
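For example, re-running the playbook from above with maximum verbosity:
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02" -vvvv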
I found out what the problem was. The playbook is restricted to the host "windows68" and therefore can't be run on the hosts added by the dynamic inventory.
It will work this way:
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68,oracle_databases -e "databases=TEST01,TEST02"
I need to store in a variable and print the output of "ls -ltr" for a set of files on a remote host. This variable will be used by another play in the same YAML file.
I tried the following; however, it just prints the file names and not the complete results of the "ls -ltr" command.
My lstest.yml is shown below:
- name: Play 4- Configure nodes
  hosts: remotehost
  tasks:
    - name: "Collecting APP file information"
      command: ls -ltr /app/{{ vars[item.split('.')[1]] }}/{{ item | basename }}
      register: fdetails_APP
      when: Layer == 'APP'
      with_fileglob:
        - "{{ playbook_dir }}/tmpfiles/*"
    - debug: var=fdetails_APP.stdout_lines
    - set_fact: fdet_APP={{ fdetails_APP.stdout_lines }}
    - name: Printing fpath
      debug:
        var: fdet_APP
Output:
TASK [Collecting APP file information] ************************************************************************************
changed: [localhost] => (item=/app/Ansible/playbook/tmpfiles/filename1.exe)
changed: [localhost] => (item=/app/Ansible/playbook/tmpfiles/33211.sql)
changed: [localhost] => (item=/app/Ansible/playbook/tmpfiles/file1.mrt)
changed: [localhost] => (item=/app/Ansible/playbook/tmpfiles/filename1.src)
PLAY RECAP ****************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Can you please suggest a solution?
Note: In the near future I would also like to add the checksum of the files, stored in the same variable.
As per the fileglob documentation: "Matching is against local system files on the Ansible controller. To iterate a list of files on a remote node, use the find module." So fileglob cannot be used for remote hosts; see the sketch below.
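A minimal sketch using the find module instead (the /app/tmpfiles path is a placeholder for your remote directory). With get_checksum enabled, find also returns a checksum per file, which would cover the checksum requirement mentioned in the question:
- name: Collect APP file information on the remote host
  hosts: remotehost
  tasks:
    - name: Find the files on the remote node
      find:
        paths: /app/tmpfiles
        patterns: '*'
        get_checksum: true
      register: fdetails_APP

    - name: Print path, size and checksum of each file
      debug:
        msg: "{{ item.path }} {{ item.size }} {{ item.checksum }}"
      loop: "{{ fdetails_APP.files }}"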
Is there a way to make a playbook wait until a variable is defined?
To reduce the execution time of a playbook, I would like to split it into multiple playbooks and start them at the same time. Some of them need variables which are defined in the other playbooks.
Is it possible?
IMHO it's not possible. Global scope is set only by config, environment variables and the command line.
Other variables are shared in the scope of a play. It is possible to import multiple playbooks into one playbook with import_playbook and share variables among them, as sketched below. But it's not possible to let the imported playbooks run asynchronously and have them wait for each other.
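For illustration, a minimal sketch of the import approach (the wrapper file name main.yml is my own; declare_var.yml and wait_for_var.yml are the playbooks shown further below). The imports run strictly one after the other, which is exactly why this does not parallelize anything:
# main.yml
- import_playbook: declare_var.yml   # writes the shared variable first
- import_playbook: wait_for_var.yml  # runs only after declare_var.yml has finished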
An option would be to use external shared storage (e.g. a database) and start such playbooks separately. For example, to share variables among the playbooks on the controller, a simple INI file would do the job.
$ cat shared-vars.ini
[global]
The playbook below
- hosts: localhost
  tasks:
    - wait_for:
        path: "{{ playbook_dir }}/shared-vars.ini"
        search_regex: "^shared_var1\\s*=(.*)"
    - debug:
        msg: "{{ lookup('ini', 'shared_var1 file=shared-vars.ini') }}"
waits for a variable shared_var1 in the file shared-vars.ini
$ ansible-playbook wait_for_var.yml
PLAY [localhost] *******************************************************
TASK [wait_for] ********************************************************
Next playbook
- hosts: localhost
  tasks:
    - ini_file:
        path: "{{ playbook_dir }}/shared-vars.ini"
        section: global
        option: shared_var1
        value: Test value set by declare_var.yml
writes the variable shared_var1 into the file shared-vars.ini
$ ansible-playbook declare_var.yml
PLAY [localhost] *******************************************************
TASK [ini_file] ********************************************************
changed: [localhost]
PLAY RECAP *************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
The first playbook, which was waiting for the variable, continues:
TASK [debug] ***********************************************************
ok: [localhost] => {
"msg": "Test value set by declare_var.yml"
}
PLAY RECAP *************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
I am using the following Ansible playbook to import another playbook based on user input:
---
- hosts: localhost
  vars_prompt:
    - name: "cleanup"
      prompt: "Do you want to run cleanup? Enter [yes/no]"
      private: no

- name: run the cleanup yaml file
  import_playbook: cleanup.yml
  when: cleanup == "yes"
Execution log:
bash-$ ansible-playbook -i hosts cleanup.yml
Do you want to run cleanup? Enter [yes/no]: no
PLAY [localhost] *********************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
ok: [127.0.0.1]
PLAY [master] ********************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************
fatal: [192.168.56.128]: FAILED! => {"msg": "The conditional check 'cleanup == \"yes\"' failed. The error was: error while evaluating conditional (cleanup == \"yes\"): 'cleanup' is undefined"}
to retry, use: --limit @/home/admin/playbook/cleanup.retry
PLAY RECAP ***************************************************************************************************************************
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0
192.168.56.128 : ok=0 changed=0 unreachable=0 failed=1
It throws the error in the imported playbook, not in the main playbook.
Please help me to import a playbook based on user input.
vars_prompt variables are only defined in the play in which they were declared. In order to use them in other plays, a workaround is to use set_fact to bind the variable to a host, then use hostvars to access that value from the second play.
For instance:
---
- hosts: localhost
  vars_prompt:
    - name: "cleanup"
      prompt: "Do you want to run cleanup? Enter [yes/no]"
      private: no
  tasks:
    - set_fact:
        cleanup: "{{ cleanup }}"
    - debug:
        msg: 'cleanup is available in the play using: {{ cleanup }}'
    - debug:
        msg: 'cleanup is also available globally using: {{ hostvars["localhost"]["cleanup"] }}'

- name: run the cleanup yaml file
  import_playbook: cleanup.yml
  when: hostvars["localhost"]["cleanup"] | bool