If I run the command below directly in a terminal, kubectl gets enabled. If I run the same command with the shell module in an Ansible playbook, it executes, but it does not actually enable kubectl.
export KUBECONFIG="/etc/rancher/rke2/rke2.yaml" \
&& export PATH="$PATH:/usr/local/bin:/var/lib/rancher/rke2/bin"
Ansible playbook
---
- name: Copy installer
  hosts: FIRST_SERVER
  gather_facts: yes
  ignore_unreachable: true
  any_errors_fatal: true
  tasks:
    - name: Execute enable kubectl on primary server
      when: inventory_hostname in groups['FIRST_SERVER']
      shell: |
        set -o pipefail
        export KUBECONFIG="/etc/rancher/rke2/rke2.yaml"
        export PATH="$PATH:/usr/local/bin:/var/lib/rancher/rke2/bin"
      args:
        executable: /bin/bash
      become: yes
Please suggest.
Your example only sets remote environment variables temporarily, for the duration of that single task.
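If the variables are only needed so that later tasks in the same play can call kubectl, one alternative is to pass them via Ansible's environment keyword instead of export. A minimal sketch, assuming a hypothetical follow-up task that runs kubectl get nodes (gather_facts is already enabled, so ansible_env.PATH is available):

- name: Run kubectl with the required environment
  # Sketch only: the kubectl command here is a placeholder
  command: kubectl get nodes
  environment:
    KUBECONFIG: /etc/rancher/rke2/rke2.yaml
    PATH: "{{ ansible_env.PATH }}:/usr/local/bin:/var/lib/rancher/rke2/bin"
  become: yes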
For certain servers I am using the approach described in
What do the scripts in /etc/profile.d do?
by using
- name: Provide environment variable script file
  template:
    src: "{{ item }}.j2"
    dest: "/etc/profile.d/{{ item }}"
  with_items:
    - "environment.sh"
and, as an example,
# /etc/profile.d/environment.sh
export ACCOUNT=$(who am i | cut -d " " -f 1)
export DOMAIN=$(hostname | cut -d "." -f 2-4)
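Tied back to the original kubectl question, the environment.sh.j2 template could render the RKE2 paths (plus any host-specific values) so they are set persistently on login. A minimal sketch; the rke2_bin_dir variable is hypothetical and not part of the original answer:

# templates/environment.sh.j2
export KUBECONFIG="/etc/rancher/rke2/rke2.yaml"
export PATH="$PATH:/usr/local/bin:{{ rke2_bin_dir }}"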
Further Q&A
"Scripts placed in ... get sourced on login"
How to set an environment variable during package installation?
By doing this I am able to set persistent environment variables for specific software and services.
Related
I am trying to convert an existing shell script into an Ansible role. In this role, I read two environment variables, but Ansible does not display them even though they are available on the host. Can anyone please help me understand what I am doing wrong?
Note: I cannot hardcode env.sh into my Ansible role as each region will have its own settings.
/etc/synopsys/bin/env.sh:
#!/bin/sh
SITEID="us01-savvis"
MYGLOBAL="/remote/kickstart"
export SITEID MYGLOBAL
Ansible code:
---
- name: Gather Facts
  setup:
    gather_subset:
      - '!all'
      - '!any'
      - facter
      - network
      - hardware
  async: 300
  poll: 20

- name: Check if env.sh exists
  stat:
    path: /etc/synopsys/bin/env.sh
  register: stat_result

- name: Source env.sh file if it exists
  shell: "source /etc/synopsys/bin/env.sh"
  when: stat_result.stat.exists == True

- name: Printing all the environment variables in Ansible
  debug:
    # msg: "{{ ansible_env }}"
    msg: "{{ lookup('env','SITEID','MYGLOBAL','HOME','SHELL') }}"
Ansible output (Note that SITEID and MYGLOBAL are not visible):
TASK [common/run_pkg_checker/v1 : Printing all the environment variables in Ansible] *************************************************
ok: [ansible-poc-cos6] => {
"msg": ",,/u/subburat,/usr/local/bin/tcsh"
}
Linux environment variables (SITEID and MYGLOBAL defined):
[root@ansible-poc-cos6 ~]# env | grep MYGLOBAL
MYGLOBAL=/remote/kickstart
[root@ansible-poc-cos6 ~]# env | grep SITEID
SITEID=us01-savvis
First, each Ansible task runs in a separate sub-process.
Second, sub-processes cannot affect the environment of their parent process.
So the task that runs the source env.sh command has no effect on the Ansible process or on the tasks that follow it.
For your case, you can source env.sh before running the ansible-playbook command,
or use the --extra-vars (-e) option to avoid hard-coding values in your playbook:
source /etc/synopsys/bin/env.sh
ansible-playbook your-playbook.yml
# OR
ansible-playbook -e SITEID=xxx -e MYGLOBAL=yyy your-playbook.yml
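Inside the playbook, values passed with -e can then be handed to the tasks that need them, for example through the environment keyword. A minimal sketch, assuming the variable names from the question; the env command is only a placeholder:

- name: Run something that needs SITEID and MYGLOBAL
  command: env
  environment:
    SITEID: "{{ SITEID }}"
    MYGLOBAL: "{{ MYGLOBAL }}"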
I have the below Ansible script which runs on localhost
- name: set env
  shell: ". /tmp/testenv"

- name: get env
  debug:
    msg: "{{ lookup('env','TEST') }}"
In the above script I'm trying to source the file and then access the environment variables using the lookup, but the environment variable does not appear to be set. Is there any way I can get this to work?
The shell command is one process and the lookup looks in the environment of the Ansible process, which is a different one. You have to echo the environment variable in your shell command and register the result.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: set env
      shell: "echo 'export TEST=test' > /tmp/testenv"

    - name: check env
      shell: ". /tmp/testenv; echo $TEST"
      register: result

    - name: get env
      debug:
        msg: "{{ result.stdout }}"
I am trying to set up a playbook which will run a command to check the status of a service installed on the target machine. The command only works if the .env file has been sourced first. The command to source the .env file is .<space>./.env_file_name, and the file contains a list of environment variables like export JAVA_HOME=/optware/java/jdk/1.2.
I tried to source the environment file before running the command with the playbook below, but it is not working.
- hosts: name
  tasks:
    - name: execute env file
      command: . ./.env_file_name
      register: result
Is there any way in a playbook to source the environment file, so that the environment variables are set on the target machine, and then run our command?
First, . ./.env_file_name is shell syntax and cannot work with the command module; you need to use the shell module.
Secondly, the shell environment is reset for every task, because each task is a separate ssh round-trip (and therefore a new shell session), so loading the environment variables in one task will not make them available to the following tasks.
Depending on your context, you have some options:
1. Inventory environment variables
The best option is to keep the environment on the inventory side, in a variable with a different value for each group/host through group_vars/host_vars, and then to use it with the environment keyword:
# host_vars/my_host.yml
---
env_vars:
  VAR1: key1
  VAR2: key2

- hosts: my_host
  tasks:
    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Pros:
a full Ansible solution
works for the environment of every module
Cons:
you need to know the environment variables on the Ansible side
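The same inventory variable can also be applied at play level, so every task inherits it without repeating the keyword on each task. A minimal sketch under that assumption; the second task is only a placeholder:

- hosts: my_host
  environment: "{{ env_vars }}"
  tasks:
    - name: Display environment variables
      command: env

    - name: Another task that also sees the variables
      command: printenv VAR1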
2. Loading environment variables for every task
If your tasks are all shell/command (which I don't advise, as it's better to use the appropriate Ansible module whenever possible), you can simply load the env file every time with the shell module:
- hosts: my_host
  tasks:
    - name: Display environment variables
      shell: |
        . ./.env_file_name && env

    - name: Do another action
      shell: |
        . ./.env_file_name && do_something_else
Pros:
no need to know the environment variables on the Ansible side
Cons:
limited to tasks using the shell module
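To avoid repeating the sourcing prefix in every task, it can be kept in a single play variable. This is only a sketch; the source_env variable name is hypothetical:

- hosts: my_host
  vars:
    source_env: ". ./.env_file_name &&"   # hypothetical helper variable
  tasks:
    - name: Display environment variables
      shell: "{{ source_env }} env"

    - name: Do another action
      shell: "{{ source_env }} do_something_else"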
3. Load environment variables from the env file into an Ansible fact
This option parses the env file once and for all and loads it into an Ansible fact that can be used with the environment keyword.
- hosts: my_host
  tasks:
    - name: Get env file content
      slurp:
        src: ./.env_file_name
      register: env_file_content

    - name: Parse environment
      set_fact:
        env_vars: "{{ ('{' + (env_file_content.content | b64decode).split('\n') | select | map('regex_replace', '([^=]*)=(.*)', '\"\\1\": \"\\2\"') | join(',') + '}') | from_json }}"

    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Or, if the env file needs to be executed rather than parsed directly:
- hosts: my_host
  tasks:
    - name: Get env file content
      shell: . ./.env_file_name && env
      register: env_file_result

    - name: Parse environment
      set_fact:
        env_vars: "{{ ('{' + env_file_result.stdout_lines | map('regex_replace', '([^=]*)=(.*)', '\"\\1\": \"\\2\"') | join(',') + '}') | from_json }}"

    - name: Display environment variables
      command: env
      environment: "{{ env_vars }}"
Pros:
works for the environment of every module
no need to know the environment variables on the Ansible side
Cons:
could fail on badly formatted files
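If only a couple of variables actually matter, a simpler and more robust variant is to echo just those after sourcing the file, which avoids the parsing step entirely. A minimal sketch, assuming the JAVA_HOME variable mentioned in the question:

- name: Capture only the variable we need
  shell: . ./.env_file_name && echo "$JAVA_HOME"
  register: java_home_result

- name: Use the captured value
  debug:
    msg: "JAVA_HOME on the target is {{ java_home_result.stdout }}"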
I know about Ansible's environment: keyword at the top of a playbook, but I don't think that will work for me, since I don't know the variables' values prior to executing the playbook. I'm trying to retrieve package versions and PHP modules and log them to a file. I want to use a regex to capture the version and store it in an environment variable, then write variable=value lines to an environment file with a shell command. I also want to pull an array from the environment and loop through it. Ansible doesn't seem to persist the shell environment, and the environment variable gets wiped out between commands. This is simple in Bash. Is it possible in Ansible? I'm trying:
---
- hosts: all
  become: yes
  vars:
    site_variables:
      code_directory: /home/
      dependency_versions:
        WGET_VERSION: placeholder
        PHP_MODULES: placeholder
  tasks:
    - name: Retrieve Environment
      shell: export WGET_VERSION=$(wget --version | grep -o 'Wget [0-9]*.[0-9]*\+')
      shell: export PHP_MODULES=$(php -m)
      shell: echo "export {{ item }}={{ lookup('env', item ) }}" >> {{ site_variables.code_directory }}/.env.log
      with_items:
        - WGET_VERSION

    - name: Write PHP Modules Out
      shell: export PHP_MODULES=$(php -m)
      shell: export PHP_MODULES=$(echo {{ lookup('env', 'PHP_MODULES') }} | sed 's/\[PHP Modules\]//g')
      shell: export PHP_MODULES=$(echo {{ lookup('env', 'PHP_MODULES') }} | sed 's/\[Zend Modules\]//g')
      shell: export PHP_MODULES=({{ lookup('env', 'PHP_MODULES') }})
      shell: echo "# - {{ item.0 }}" >> {{ site_variables.code_directory }}/.env.log
      with_items:
        - "{{ lookup('env', 'PHP_MODULES') }}"
There's a lot going on here.
First, lookup always runs on the Ansible control host, while the script that you pass to the shell module runs on the remote server. So you will never be able to read a remote environment variable using lookup.
For details: https://docs.ansible.com/ansible/playbooks_lookups.html
Secondly, environment variables don't propagate from a child process to its parent. If you have a script that does this...
export MYVARIABLE=foo
...and you run that script, your current environment will not suddenly have a variable named MYVARIABLE. This is just as true for processes spawned by Ansible as it is for processes spawned by your shell.
If you want to set an ansible variable, consider using the register keyword to get the value:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: get wget version
      command: wget --version
      register: wget_version_raw

    - name: extract wget version
      set_fact:
        wget_version: "{{ wget_version_raw.stdout_lines[0].split()[2] }}"

    - name: show wget version
      debug:
        msg: "wget version is: {{ wget_version }}"
I'm trying to run a python script from an ansible script. I would think this would be an easy thing to do, but I can't figure it out. I've got a project structure like this:
playbook-folder
  roles
    stagecode
      files
        mypythonscript.py
      tasks
        main.yml
  release.yml
I'm trying to run mypythonscript.py within a task in main.yml (which is a role used in release.yml). Here's the task:
- name: run my script!
  command: ./roles/stagecode/files/mypythonscript.py
  args:
    chdir: /dir/to/be/run/in
  delegate_to: 127.0.0.1
  run_once: true
I've also tried ../files/mypythonscript.py. I thought the path for ansible would be relative to the playbook, but I guess not?
I also tried debugging to figure out where I am in the middle of the script, but no luck there either.
- name: figure out where we are
  stat: path=.
  delegate_to: 127.0.0.1
  run_once: true
  register: righthere

- name: print where we are
  debug: msg="{{ righthere.stat.path }}"
  delegate_to: 127.0.0.1
  run_once: true
That just prints out ".". So helpful ...
Try using the script module; it works for me.
my main.yml
---
- name: execute install script
  script: get-pip.py
and the get-pip.py file should be in the files directory of the same role.
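Mapped onto the layout from this question, that could look roughly like the sketch below (reusing the stagecode role name from above):

roles
  stagecode
    files
      get-pip.py      # referenced by the script module without any path
    tasks
      main.yml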
If you want to use a relative path to your script rather than an absolute path, you might be better off using the role_path magic variable to find the path to the role and work from there.
With the structure you are using in the question the following should work:
- name: run my script!
  command: ./mypythonscript.py
  args:
    chdir: "{{ role_path }}/files"
  delegate_to: 127.0.0.1
  run_once: true
An alternative, straightforward solution:
Let's say you have already built your virtual env under ./env1 and used pip3 to install the needed Python modules.
Now write playbook task like:
- name: Run a script using an executable in a system path
  script: ./test.py
  args:
    executable: ./env1/bin/python
  register: python_result

- name: Get stdout or stderr from the output
  debug:
    var: python_result.stdout
If you want to execute an inline script without having a separate script file (for example, in a Molecule test), you can write something like this:
- name: Test database connection
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
      host='127.0.0.1',
      dbname='db',
      user='user',
      password='password'
    );
    "
You can even insert Ansible variables in this string.
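For instance, the connection parameters could come from play variables rather than literals. A minimal sketch, assuming hypothetical db_host, db_name, db_user and db_password variables are defined elsewhere in the inventory or play:

- name: Test database connection with templated parameters
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
      host='{{ db_host }}',
      dbname='{{ db_name }}',
      user='{{ db_user }}',
      password='{{ db_password }}'
    );
    "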