I am trying to set up a playbook that runs a command to check the status of a service installed on the target machine. The command only works if the .env file has been sourced first. The command to source the file is .<space>./.env_file_name, and the file contains a list of environment variables such as export JAVA_HOME=/optware/java/jdk/1.2.
I tried to source the environment file before running the command with the playbook below, but it is not working.
- hosts: name
  tasks:
    - name: execute env file
      command: . ./.env_file_name
      register: result
Is there a way to source the environment file on the target machine so that its variables are set, and then run the command?
First, . ./.env_file_name is shell syntax and cannot work with the command module; you need the shell module instead.
Second, the shell environment is reset for every task, because each task is its own SSH round-trip (and therefore a new shell session), so loading the environment variables in one task does not make them available to the following tasks.
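To illustrate both points, a minimal sketch (host name, env file name and JAVA_HOME are taken from your question) that you can run to watch the variable disappear between tasks:
- hosts: name
  tasks:
    # the "." (source) builtin needs a shell, hence the shell module, not command
    - name: Source the env file and use a variable (works inside this task only)
      shell: . ./.env_file_name && echo "JAVA_HOME=$JAVA_HOME"
      register: first_task

    # new SSH session and new shell: the variable sourced above is gone again
    - name: Show the same variable in a following task
      shell: echo "JAVA_HOME is now '$JAVA_HOME'"
      register: second_task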
Depending on your context, you have some options:
1. Inventory environment variables
The best option is to keep the environment on the inventory side, in a variable whose value differs per group/host via group_vars/host_vars, and then use it with the environment keyword:
# host_vars/my_host.yml
---
env_vars:
  VAR1: key1
  VAR2: key2
- hosts: my_host
  tasks:
    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Pros:
full Ansible solution
works for the environment of every module
Cons:
you need to know the environment variables on the Ansible side
2. Loading environment variables for every task
If your tasks are all shell/command tasks (which I don't advise; it's better to use the appropriate Ansible module whenever possible), you can simply load the env file every time with the shell module:
- hosts: my_host
  tasks:
    - name: Display environment variables
      shell: |
        . ./.env_file_name && env

    - name: Do another action
      shell: |
        . ./.env_file_name && do_something_else
Pros:
no need to know the environment variables on the Ansible side
Cons:
limited to tasks using the shell module
3. Load environment variables from the env file into an Ansible fact
This option parses the env file once and for all, loads it into an Ansible fact, and uses that fact with the environment keyword.
- hosts: my_host
  tasks:
    - name: Get env file content
      slurp:
        src: ./.env_file_name
      register: env_file_content

    - name: Parse environment
      set_fact:
        env_vars: "{{ ('{' + (env_file_content.content | b64decode).split('\n') | select | map('regex_replace', '([^=]*)=(.*)', '\"\\1\": \"\\2\"') | join(',') + '}') | from_json }}"

    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Or, if the env file needs to be executed rather than parsed directly:
- hosts: my_host
  tasks:
    - name: Get env file content
      shell: . ./.env_file_name && env
      register: env_file_result

    - name: Parse environment
      set_fact:
        env_vars: "{{ ('{' + env_file_result.stdout_lines | map('regex_replace', '([^=]*)=(.*)', '\"\\1\": \"\\2\"') | join(',') + '}') | from_json }}"

    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Pros:
works for the environment of every module
no need to know the environment variables on the Ansible side
Cons:
could fail if the file is badly formatted
Related
If I run the command below directly in a terminal, kubectl gets enabled. If I use the same command with the shell module in an Ansible playbook, it executes, but it does not actually enable kubectl.
export KUBECONFIG="/etc/rancher/rke2/rke2.yaml" \
&& export PATH="$PATH:/usr/local/bin:/var/lib/rancher/rke2/bin"
Ansible playbook
---
- name: Copy installer
  hosts: FIRST_SERVER
  gather_facts: yes
  ignore_unreachable: true
  any_errors_fatal: true
  tasks:
    - name: Execute enable kubectl on primary server
      when: inventory_hostname in groups['FIRST_SERVER']
      shell: |
        set -o pipefail
        export KUBECONFIG="/etc/rancher/rke2/rke2.yaml"
        export PATH="$PATH:/usr/local/bin:/var/lib/rancher/rke2/bin"
      args:
        executable: /bin/bash
      become: yes
Please suggest.
Your example sets the remote environment variables only temporarily, for the duration of that single task.
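If the variables are only needed by the tasks that actually call kubectl, you can also pass them per task with the environment keyword. A minimal sketch (the kubectl get nodes command is only an illustration; extending PATH via ansible_env.PATH assumes facts are gathered, as in your playbook):
- name: Run kubectl with the required environment
  become: yes
  environment:
    KUBECONFIG: /etc/rancher/rke2/rke2.yaml
    # prepend the existing remote PATH, then add the rke2 locations
    PATH: "{{ ansible_env.PATH }}:/usr/local/bin:/var/lib/rancher/rke2/bin"
  command: kubectl get nodes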
For a persistent setup on certain servers, I follow the approach described in
What do the scripts in /etc/profile.d do?
by using
- name: Provide environment variable script file
  template:
    src: "{{ item }}.j2"
    dest: "/etc/profile.d/{{ item }}"
  with_items:
    - "environment.sh"
and, as an example:
# /etc/profile.d/environment.sh
export ACCOUNT=$(who am i | cut -d " " -f 1)
export DOMAIN=$(hostname | cut -d "." -f 2-4)
Further Q&A
"Scripts placed in ... get sourced on login"
How to set an environment variable during package installation?
By doing this I am able to set persistent environment variables for specific software and services.
I am trying to convert an existing shell script into an Ansible role. In this role, I read two environment variables, but Ansible does not display them even though they are available on the host. Can anyone please help me understand what I am doing wrong?
Note: I cannot hardcode env.sh into my Ansible role as each region will have its own settings.
/etc/synopsys/bin/env.sh:
#!/bin/sh
SITEID="us01-savvis"
MYGLOBAL="/remote/kickstart"
export SITEID MYGLOBAL
Ansible code:
---
- name: Gather Facts
  setup:
    gather_subset:
      - '!all'
      - '!any'
      - facter
      - network
      - hardware
  async: 300
  poll: 20

- name: Check if env.sh exists
  stat:
    path: /etc/synopsys/bin/env.sh
  register: stat_result

- name: Source env.sh file if it exists
  shell: "source /etc/synopsys/bin/env.sh"
  when: stat_result.stat.exists == True

- name: Printing all the environment variables in Ansible
  debug:
    # msg: "{{ ansible_env }}"
    msg: "{{ lookup('env','SITEID','MYGLOBAL','HOME','SHELL') }}"
Ansible output (Note that SITEID and MYGLOBAL are not visible):
TASK [common/run_pkg_checker/v1 : Printing all the environment variables in Ansible] *************************************************
ok: [ansible-poc-cos6] => {
"msg": ",,/u/subburat,/usr/local/bin/tcsh"
}
Linux environment variables (SITEID and MYGLOBAL defined):
[root@ansible-poc-cos6 ~]# env | grep MYGLOBAL
MYGLOBAL=/remote/kickstart
[root@ansible-poc-cos6 ~]# env | grep SITEID
SITEID=us01-savvis
First, each Ansible task runs in its own separate sub-process.
Second, a sub-process cannot affect the environment of its parent process.
So the task that runs the source env.sh command has no effect on the Ansible process or on the following tasks.
For your case, you can source env.sh before running the ansible-playbook command, or use the --extra-vars / -e option to avoid hard-coding values in your playbook:
source /etc/synopsys/bin/env.sh
ansible-playbook your-playbook.yml
# OR
ansible-playbook -e SITEID=xxx -e MYGLOBAL=yyy your-playbook.yml
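Inside the playbook, the values passed with -e are then available as ordinary variables (the debug task below is only an illustration):
- name: Show the values passed via --extra-vars
  debug:
    msg: "SITEID={{ SITEID }} MYGLOBAL={{ MYGLOBAL }}"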
I have the Ansible script below, which runs on localhost:
- name: set env
  shell: ". /tmp/testenv"

- name: get env
  debug:
    msg: "{{ lookup('env','TEST') }}"
In the above script, I'm trying to source the file and access the environment variables using the lookup, but it seems the environment variable is not set. Is there any way I can get this to work?
The shell command runs in one process, while the lookup reads the environment of the Ansible process, which is a different one. You have to echo the environment variable in your shell command and register the result.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: set env
      shell: "echo 'export TEST=test' > /tmp/testenv"

    - name: check env
      shell: ". /tmp/testenv; echo $TEST"
      register: result

    - name: get env
      debug:
        msg: "{{ result.stdout }}"
I know about Ansible's environment: keyword at the top of a playbook, but I don't think that will work for me, since I don't know the variables' values prior to running the playbook. I'm trying to retrieve package versions and PHP modules and log them to a file. I want to use a regex to capture the version and store it in an environment variable, then write variable=value lines to an environment file with a shell command. I also want to pull an array from the environment and loop through it. Ansible doesn't seem to persist the shell environment, and the environment variable gets wiped out between commands. This is simple in Bash. Is this possible in Ansible? I'm trying:
---
- hosts: all
  become: yes
  vars:
    site_variables:
      code_directory: /home/
    dependency_versions:
      WGET_VERSION: placeholder
      PHP_MODULES: placeholder
  tasks:
    - name: Retrieve Environment
      shell: export WGET_VERSION=$(wget --version | grep -o 'Wget [0-9]*.[0-9]*\+')
      shell: export PHP_MODULES=$(php -m)
      shell: echo "export {{ item }}={{ lookup('env', item ) }}" >> {{ site_variables.code_directory }}/.env.log
      with_items:
        - WGET_VERSION

    - name: Write PHP Modules Out
      shell: export PHP_MODULES=$(php -m)
      shell: export PHP_MODULES=$(echo {{ lookup('env', 'PHP_MODULES') }} | sed 's/\[PHP Modules\]//g')
      shell: export PHP_MODULES=$(echo {{ lookup('env', 'PHP_MODULES') }} | sed 's/\[Zend Modules\]//g')
      shell: export PHP_MODULES=({{ lookup('env', 'PHP_MODULES') }})
      shell: echo "# - {{ item.0 }}" >> {{ site_variables.code_directory }}/.env.log
      with_items:
        - "{{ lookup('env', 'PHP_MODULES') }}"
There's a lot going on here.
First, lookup always runs on the Ansible control host, while the script that you pass to the shell module runs on the remote server. So you will never be able to read a remote environment variable using lookup.
For details: https://docs.ansible.com/ansible/playbooks_lookups.html
Secondly, environment variables don't propagate from a child process to its parent. If you have a script that does this...
export MYVARIABLE=foo
...and you run that script, your current environment will not suddenly have a variable named MYVARIABLE. This is just as true for processes spawned by Ansible as it is for processes spawned by your shell.
If you want to set an ansible variable, consider using the register keyword to get the value:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: get wget version
      command: wget --version
      register: wget_version_raw

    - name: extract wget version
      set_fact:
        wget_version: "{{ wget_version_raw.stdout_lines[0].split()[2] }}"

    - name: show wget version
      debug:
        msg: "wget version is: {{ wget_version }}"
I am trying to craft a list of environment variables to use in tasks whose paths may differ slightly on each host due to version differences.
For example, /some/common/path/v_123/rest/of/path
I created a list of these variables in a vars file that gets imported via a role.
roles/somerole/vars/main.yml contains the following:
somename:
  somevar: 'coolvar'
  env:
    SOME_LIB_PATH: /some/common/path/{{ unique_part.stdout }}/rest/of/path
I then have a task that runs something like this
- name: Get unique path part
  shell: 'ls /some/common/path/'
  register: unique_part
  tags: workflow

- name: Perform some actions that need some paths
  shell: 'binary argument argument'
  environment: somename.env
But I get some Ansible errors about variables not being defined.
Alternatively, I tried to predefine unique_part.stdout, hoping that register would overwrite the predefined variable, but then I got other Ansible errors: failure to template.
Is there another way to craft these variables based on command returns?
You can also use facts:
http://docs.ansible.com/set_fact_module.html
# Prepare unique variables
- hosts: myservers
  tasks:
    - name: Get unique path part
      shell: 'ls /some/common/path/'
      register: unique_part
      tags: workflow

    - name: Add as fact for each host
      set_fact:
        library_path: "{{ unique_part.stdout }}"

# launch roles that use those unique variables
- hosts: myservers
  roles:
    - somerole
This way you can dynamically add variables to your hosts before using them.
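Inside the role, the fact can then be used wherever the path is needed, for example directly in a task's environment (a sketch using the library_path fact name from above; task and command are taken from your question):
- name: Perform some actions that need some paths
  shell: 'binary argument argument'
  environment:
    SOME_LIB_PATH: "/some/common/path/{{ library_path }}/rest/of/path"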
The vars file gets evaluated when it is read by Ansible. Your only chance is to include a placeholder which you then have to replace yourself later, like this:
somename:
  somevar: 'coolvar'
  env:
    SOME_LIB_PATH: '/some/common/path/[[ unique_part.stdout ]]/rest/of/path'
And then later in your playbook you can replace that placeholder:
- name: Perform some actions that need some paths
  shell: 'binary argument argument'
  environment: '{{ somename.env | replace("[[ unique_part.stdout ]]", unique_part.stdout) }}'