How to pass additional environment variables to the imported ansible playbook - ansible

I have a main_play.yml Ansible playbook in which I am importing a reusable playbook a.yml.
main_play.yml
- import_playbook: "reusable_playbooks/a.yml"
a.yml
---
- name: my_playbook
  hosts: "{{ HOSTS }}"
  force_handlers: true
  gather_facts: false
  environment:
    APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT }}"
  tasks:
    - name: Print Msg
      debug:
        msg: "hello"
My question is: how can I pass an additional environment variable from my main_play.yml playbook to my reusable playbook a.yml (if needed), so that the environment block becomes:
environment:
  APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT }}"
  SPRING_PROFILE: "{{ SPRING_PROFILE }}"

import_playbook is not really a module but a core feature. It does not allow any parameter to be passed to the imported playbook. You can think of this keyword as a simple convenience to play several playbooks in a row, exactly as if they were defined in the same file.
So your problem comes down to:
How do I pass additional environment variables to a play?
Here is one solution, with illustrations of using it with extra_vars or with a fact set in a previous play. This is far from exhaustive, but I hope it will guide you to your own best solution.
To ease readability:
- I used the APP_ prefix for all environment variables in the examples below and filtered only on those in the results.
- I truncated the playbook output to the only relevant debug task.
We can define the following reusable.yml playbook containing a single play
---
- hosts: localhost
  gather_facts: false
  vars:
    default_env:
      APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT | d(8080) }}"
  environment: "{{ default_env | combine(additionnal_env | d({})) }}"
  tasks:
    - name: get the output on env for APP_* vars
      shell: env | grep -i app_
      register: env_cmd
      changed_when: false

    - name: debug the output of env
      debug:
        var: env_cmd.stdout_lines
We can directly run this playbook as-is which will give
$ ansible-playbook reusable.yml
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
    "env_cmd.stdout_lines": [
        "APP_DEFAULT_PORT=8080"
    ]
}
We can override the default port with
$ ansible-playbook reusable.yml -e APP_DEFAULT_PORT=1234
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
    "env_cmd.stdout_lines": [
        "APP_DEFAULT_PORT=1234"
    ]
}
We can pass additional environment variables with:
$ ansible-playbook reusable.yml -e '{"additionnal_env":{"APP_SPRING_PROFILE": "/toto/pipo"}}'
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
    "env_cmd.stdout_lines": [
        "APP_SPRING_PROFILE=/toto/pipo",
        "APP_DEFAULT_PORT=8080"
    ]
}
Now if we want to do this from a parent playbook, we can set the needed variable for the given host in a previous play. We can define a parent.yml playbook:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: define additionnal env vars for this host to be used in next play(s)
      set_fact:
        additionnal_env:
          APP_WHATEVER: some_value
          APP_VERY_IMPORTANT: "ho yes!"

- import_playbook: reusable.yml
which will give:
$ ansible-playbook parent.yml
[... truncated ...]
TASK [define additionnal env vars for this host to be used in next play(s)] ************************************************************************************************************************
ok: [localhost]
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
    "env_cmd.stdout_lines": [
        "APP_WHATEVER=some_value",
        "APP_VERY_IMPORTANT=ho yes!",
        "APP_DEFAULT_PORT=8080"
    ]
}
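If you want to apply this pattern back to the a.yml from the question, a possible sketch (reusing the additionnal_env variable name from above) is:
---
- name: my_playbook
  hosts: "{{ HOSTS }}"
  force_handlers: true
  gather_facts: false
  vars:
    default_env:
      APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT }}"
  # additionnal_env defaults to an empty dict, so a.yml still works on its own
  environment: "{{ default_env | combine(additionnal_env | d({})) }}"
  tasks:
    - name: Print Msg
      debug:
        msg: "hello"
and main_play.yml would set the extra environment variables for the target hosts in a preceding play, e.g.:
- hosts: "{{ HOSTS }}"
  gather_facts: false
  tasks:
    - name: define additionnal env vars for the next play
      set_fact:
        additionnal_env:
          SPRING_PROFILE: "{{ SPRING_PROFILE }}"

- import_playbook: "reusable_playbooks/a.yml"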

Related

How to write a file's content to a variable in Ansible

I have written Ansible code where I am generating keys. The script generates a private key file.
- name: Generating Public and Private Key
  local_action:
    module: command
    cmd: './Auth-PUB-PVT-keytool.sh -privK {{OUTPUT_FOLDER}}/keys/{{PVT_KEY_NAME}}.key'
  become: yes
  become_user: "{{HOST_USER}}"
  # run_once: True
  no_log: "false"
Now I want to write the key data into an Ansible variable. For example, I have the file test.key with the content below:
jsbciusgdcxjasbciuygwndichsiuzgxciukjsdgniugziuduwyfmygxynYUXGNiusgzbuxtsaiuxdniufgdbyxfaiysrbcuyiacfxuyibstycfbxuybuyxtduyntzicytnyudn
Now I want the content of the key file assigned to my Ansible variable MY_KEY_VALUE, i.e.
MY_KEY_VALUE: "jsbciusgdcxjasbciuygwndichsiuzgxciukjsdgniugziuduwyfmygxynYUXGNiusgzbuxtsaiuxdniufgdbyxfaiysrbcuyiacfxuyibstycfbxuybuyxtduyntzicytnyudn"
How can I do it? Thanks in advance.
Probably the best approach for files on the control node is to use lookup plugins. See Ansible: Set variable to file content or How to store the contents of the file to a variable in Ansible?.
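For illustration, a minimal sketch of the lookup approach (assuming test.key sits next to the playbook on the control node):
- name: Read key file on the control node into a variable
  set_fact:
    MY_KEY_VALUE: "{{ lookup('file', 'test.key') }}"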
Another approach is to use the slurp module – Slurps a file from remote nodes.
For a file
~/test$ cat test.key
VALUE
a minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Slurp var from file
      # delegate_to: localhost # if necessary
      slurp:
        src: test.key
      register: MY_KEY

    - name: Show var
      debug:
        msg: "{{ MY_KEY['content'] | b64decode }}"
results in an output of
TASK [Slurp var from file] ******
ok: [localhost]
TASK [Show var] ******
ok: [localhost] =>
msg: VALUE
If the data structure of your file test.key is already YAML, you could just read it in via the include_vars module – Load variables from files, dynamically within a task.
For a file
~/test$ cat test.key
MY_KEY: "VALUE"
a minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: Read var file
      # delegate_to: localhost # if necessary
      include_vars:
        file: test.key
        name: stuff

    - name: Show var
      debug:
        var: stuff
will result in an output of
TASK [Read var file] ******
ok: [localhost]
TASK [Show var] ***********
ok: [localhost] =>
  stuff:
    MY_KEY: VALUE

How does one combine conditionals into one "when" statement?

Ansible 2.10.x
I looked at How to define multiple when conditions in Ansible, and similar posts.
I'm trying to test if 2 different substrings are in a variable. I've tried:
default/main.yml
----------------
# Default path can be overridden in task
repo_url: "https://someUrl/development"

tasks/main.yml
--------------
- debug:
    msg: "URL={{ repo_url }}"

- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: ('"development" not in web_version') and
        ('"feature" not in web_version')

- debug:
    msg: "URL={{ repo_url }}"
I use above task like this for example
$ ansible-playbook ... -e web_version=development_ myTask.yml
But I get
TASK [exa-web : debug] *************************************************
ok: [10.227.x.x] => {
    "msg": "URL=https://someUrl/development"
}
TASK [exa-web : Override default path] *************************************************
ok: [10.227.x.x]
TASK [exa-web : debug] *************************************************
ok: [10.227.x.x] => {
    "msg": "URL=https://someUrl/releases"
}
I don't expect the set_fact task to run, but it does; hence it overrides the default repo_url. So apparently I'm setting my when condition wrong.
I've also tried this to no avail.
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: '"development_" not in web_version and
         "feature_" not in web_version'
Essentially, I need the task to run if I execute my playbook like this
$ ansible-playbook ... -e web_version=1.4.44 myTask.yml
What's the correct syntax? TIA
UPDATE
Seems like when doesn't like ()? I just simplified the condition for now, and this works
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: '"development" not in web_version'
but not this?
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: ('"development" not in web_version')
Really???
Your second attempt...
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: '"development_" not in web_version and
         "feature_" not in web_version'
...seems syntactically correct. In a playbook like this:
- hosts: localhost
  gather_facts: false
  vars:
    repo_url: "https://someUrl/development"
  tasks:
    - name: Override default path
      set_fact:
        repo_url: "https://someUrl/releases"
      when: '"development_" not in web_version and
             "feature_" not in web_version'

    - debug:
        msg: "URL={{ repo_url }}"
If we run it like this:
ansible-playbook -e web_version=development_ playbook.yaml
We see as output:
TASK [Override default path] ****************************************************************************
skipping: [localhost]
TASK [debug] ********************************************************************************************
ok: [localhost] => {
    "msg": "URL=https://someUrl/development"
}
And if we run it like this:
ansible-playbook -e web_version=1.4.44 playbook.yaml
We see:
TASK [Override default path] ****************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************
ok: [localhost] => {
    "msg": "URL=https://someUrl/releases"
}
That seems to do exactly what you want. Note that you're looking for the string development_ (with a trailing underscore) in your when statement, rather than development as in the first example, but that's an easy fix.
While your code works just fine, I find it helpful to use one of YAML's block scalar operators (such as >-) for writing multi-line when statements, since it avoids getting confused by nested quotes in the expression:
- hosts: localhost
  gather_facts: false
  vars:
    repo_url: "https://someUrl/development"
  tasks:
    - name: Override default path
      set_fact:
        repo_url: "https://someUrl/releases"
      when: >-
        "development_" not in web_version and
        "feature_" not in web_version

    - debug:
        msg: "URL={{ repo_url }}"
Re: your update, this doesn't work...
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when: ('"development" not in web_version')
...because of bad quoting. You are effectively writing:
when: ("a string")
And a non-empty string evaluates as true in a boolean expression. Always put the quotes at the beginning of the expression. E.g., this works just fine:
when: >-
  ("development" not in web_version)
As does the syntactically identical:
when: '("development" not in web_version)'
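As a further note, when also accepts a list of conditions, which Ansible combines with an implicit and; some people find this the most readable way to express the same check:
- name: Override default path
  set_fact:
    repo_url: "https://someUrl/releases"
  when:
    - '"development_" not in web_version'
    - '"feature_" not in web_version'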

Ansible: flipping the inventory still reads it in the same order

I am working on a project aimed at populating the IPs of some routers based on East/West locations. The first host will always be the primary and the second will always be the secondary.
Based on the location passed, I flip the inventory. I see the inventory being flipped, but Ansible still gets the values from the lists in the same order.
Whatever order the inventory is read in, I need the first host to read the first element, e.g. 20.21.22.23, and the second host to read the second element, 28.29.30.31.
Right now, ATL always gets the first element and LAX the second.
ok: [ATL_isr_lab] => {
    "msg": [
        "20.21.22.23",
        "24.25.26.27",
        "24.25.26.28"
    ]
}
ok: [LAX_isr_lab] => {
    "msg": [
        "28.29.30.31",
        "32.33.34.35",
        "32.33.34.36"
    ]
}
------------------ Inventory Flipped -------------------------------
ok: [LAX_isr_lab] => {
    "msg": [
        "28.29.30.31",
        "32.33.34.35",
        "32.33.34.36"
    ]
}
ok: [ATL_isr_lab] => {
    "msg": [
        "20.21.22.23",
        "24.25.26.27",
        "24.25.26.28"
    ]
}
---
- hosts: test_hosts
  vars:
    region: east
    _Hub_IP: [20.21.22.23, 28.29.30.31]
    _Transit_IP: [24.25.26.27, 32.33.34.35]
    _Neighbor_IP: [24.25.26.28, 32.33.34.36]
    _idx: "{{ groups.all.index(inventory_hostname) }}"
  # flips inventory if west
  order: "{{ (region == 'east')|ternary('reverse_inventory', 'inventory') }}"
  become: yes
  ignore_unreachable: true
  gather_facts: false
  tasks:
    - name: "Configure Router"
      debug:
        msg:
          - "{{ _Hub_IP[_idx|int] }}"
          - "{{ _Transit_IP[_idx|int] }}"
          - "{{ _Neighbor_IP[_idx|int] }}"
Well, the issue does not come from the reverse_inventory and inventory values of the order parameter, as you seem to think.
The issue is assuming that groups.all is actually reversed when you use the reverse_inventory value; it is not.
Here is an example of this, with the playbook:
- hosts: localhost
  gather_facts: no
  order: "{{ (region == 'east')|ternary('reverse_inventory', 'inventory') }}"
  tasks:
    - debug:
        var: groups.all
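The runs below assume an inventory.yml that simply declares the two hosts from the question; a minimal sketch would be:
all:
  hosts:
    ATL_isr_lab:
    LAX_isr_lab: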
Running it with the region passed as an extra var:
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=east"
Will yield:
ok: [localhost] =>
    groups.all:
    - LAX_isr_lab
    - ATL_isr_lab
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=west"
Will yield:
ok: [localhost] =>
    groups.all:
    - LAX_isr_lab
    - ATL_isr_lab
Still, the ordering of play execution itself does work, see:
- hosts: all
  gather_facts: no
  order: "{{ (region == 'east')|ternary('reverse_inventory', 'inventory') }}"
  tasks:
    - debug:
Run with:
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=east"
Will yield:
ok: [ATL_isr_lab] =>
    msg: Hello world!
ok: [LAX_isr_lab] =>
    msg: Hello world!
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=west"
Will yield:
ok: [LAX_isr_lab] =>
    msg: Hello world!
ok: [ATL_isr_lab] =>
    msg: Hello world!
So, what ends up being wrong is your _idx value.
To fix this, you could use Jinja's reverse filter with the same ternary you are using in the order parameter, like this:
_idx: "{{ ((region == 'east')|ternary(groups.all|reverse, groups.all)).index(inventory_hostname) }}"
Working playbook:
- hosts: all
  gather_facts: no
  order: "{{ (region == 'east')|ternary('reverse_inventory', 'inventory') }}"
  vars:
    _Hub_IP: [20.21.22.23, 28.29.30.31]
    _Transit_IP: [24.25.26.27, 32.33.34.35]
    _Neighbor_IP: [24.25.26.28, 32.33.34.36]
    _idx: "{{ ((region == 'east')|ternary(groups.all|reverse, groups.all)).index(inventory_hostname) }}"
  tasks:
    - debug:
        msg:
          - "{{ _Hub_IP[_idx|int] }}"
          - "{{ _Transit_IP[_idx|int] }}"
          - "{{ _Neighbor_IP[_idx|int] }}"
Running examples:
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=east"
Will yield:
ok: [ATL_isr_lab] =>
    msg:
    - 20.21.22.23
    - 24.25.26.27
    - 24.25.26.28
ok: [LAX_isr_lab] =>
    msg:
    - 28.29.30.31
    - 32.33.34.35
    - 32.33.34.36
ansible-playbook play.yml --inventory inventory.yml --extra-vars "region=west"
Will yield:
ok: [LAX_isr_lab] =>
    msg:
    - 20.21.22.23
    - 24.25.26.27
    - 24.25.26.28
ok: [ATL_isr_lab] =>
    msg:
    - 28.29.30.31
    - 32.33.34.35
    - 32.33.34.36
Got it working as posted originally. I had to upgrade to Ansible 2.11.6. I'm running Debian 10, and apt-get update / apt-get upgrade did not find a newer version.
My solution was to remove the apt-installed version and install Ansible again through pip. After that, I ran the code and it worked flawlessly.

How to use ansible-playbook --limit with an IP address, rather than a hostname?

Our inventory in INI style looks like this:
foo-host ansible_host=1.2.3.4 some_var=bla
bar-host ansible_host=5.6.7.8 some_var=blup
I can limit a playbook run to a single host by using the host alias:
$ ansible-playbook playbook.yml --limit foo-host
But I can't limit the run by mentioning the host's IP address from the ansible_host variable:
$ ansible-playbook playbook.yml --limit 1.2.3.4
ERROR! Specified hosts and/or --limit does not match any hosts
The reason I want to do that is because Ansible is triggered by an external system that only knows the IP address, but not the alias.
Is there a way to make this work? Mangling the IP address (e.g. ip_1_2_3_4) would be acceptable.
Things I've considered:
Turn it on its head and identify all hosts by IP address:
1.2.3.4 some_var=bla
5.6.7.8 some_var=blup
But now we can't use the nice host aliases anymore, and the inventory file is less readable too.
Write a custom inventory script that is run after the regular inventory, and creates a group like ip_1_2_3_4 containing only that single host, so we can use --limit ip_1_2_3_4. But there's no way to access previously loaded inventory from inventory scripts, so I don't know which groups to create.
Create the new groups dynamically using the group_by module. But because this is a task, it is run only after --limit has already decided that there are no hosts matching the pattern, and at that point Ansible just gives up and doesn't run the group_by task anymore.
Better solutions still welcome, but currently I'm doing it with a small inventory plugin, which (as opposed to an inventory script) does have access to previously added inventory:
plugins/inventory/ip_based_groups.py
import os.path
import re

from ansible.plugins.inventory import BaseInventoryPlugin

PATH_PLACEHOLDER = 'IP_BASED_GROUPS'
IP_RE = re.compile(r'^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$')


class InventoryModule(BaseInventoryPlugin):
    '''
    This inventory plugin does not create any hosts, but just adds groups based
    on IP addresses. For each host whose ansible_host looks like an IPv4
    address (e.g. 1.2.3.4), a corresponding group is created by prefixing the
    IP address with 'ip_' and replacing dots with underscores (e.g. ip_1_2_3_4).

    Use it by putting the literal string IP_BASED_GROUPS at the end of the list
    of inventory sources.
    '''

    NAME = 'ip_based_groups'

    def verify_file(self, path):
        return self._is_path_placeholder(path)

    def parse(self, inventory, loader, path, cache=True):
        if not self._is_path_placeholder(path):
            return
        # Walk the hosts loaded by the earlier inventory sources and add each
        # one to an ip_x_x_x_x group derived from its ansible_host.
        for host_name, host in inventory.hosts.items():
            ansible_host = host.vars.get('ansible_host', '')
            if self._is_ip_address(ansible_host):
                group = 'ip_' + ansible_host.replace('.', '_')
                inventory.add_group(group)
                inventory.add_host(host_name, group)

    def _is_path_placeholder(self, path):
        return os.path.basename(path) == PATH_PLACEHOLDER

    def _is_ip_address(self, s):
        return bool(IP_RE.match(s))
ansible.cfg
[defaults]
# Load plugins from these directories.
inventory_plugins = plugins/inventory
# Directory that contains all inventory files, and placeholder to create
# IP-based groups.
inventory = inventory/,IP_BASED_GROUPS
[inventory]
# Enable our custom inventory plugin.
enable_plugins = ip_based_groups, host_list, script, auto, yaml, ini, toml
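With this in place, a run can be limited by the mangled IP-based group name, which was the original goal:
$ ansible-playbook playbook.yml --limit ip_1_2_3_4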
Q: Playbook running a script running a playbook... it would work, but it's a bit hacky
A: FWIW. It's possible to use json_query and avoid the script. For example
- hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        my_host: "{{ hostvars|dict2items|json_query(query)|first }}"
      vars:
        query: "[?value.ansible_host == '{{ my_host_ip }}' ].key"
      run_once: true

    - add_host:
        hostname: "{{ my_host }}"
        groups: my_group
      run_once: true

- hosts: my_group
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname
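This variant is run the same way as the script-based one further below, passing the IP as an extra variable, e.g.:
$ ansible-playbook -e 'my_host_ip=10.1.0.53' play.yml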
Q: "Unfortunately I'm running ansible-playbook from AWX, so no wrapper scripts allowed."
A: It is possible to run the script from the playbook. For example the playbook below
- hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        my_host: "{{ lookup('pipe', playbook_dir ~ '/script.sh ' ~ my_host_ip) }}"
      delegate_to: localhost
      run_once: true

    - add_host:
        hostname: "{{ my_host }}"
        groups: my_group
      run_once: true

- hosts: my_group
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname
gives
$ ansible-playbook -e 'my_host_ip=10.1.0.53' play.yml
PLAY [all] ********************
TASK [set_fact] ***************
ok: [test_01 -> localhost]
TASK [add_host] ***************
changed: [test_01]
PLAY [my_group] ***************
TASK [debug] ************
ok: [test_03] => {
    "inventory_hostname": "test_03"
}
(Adjust the script so that it prints the hostname only.)
Q: I can limit a playbook run to a single host by using the host alias:
$ ansible-playbook playbook.yml --limit foo-host
But I can't limit the run by mentioning the host's IP address from the ansible_host variable
$ ansible-playbook playbook.yml --limit 1.2.3.4
A: ansible-inventory and jq are able to resolve the host. For example the script
#!/bin/bash
my_host="$(ansible-inventory --list | jq '._meta.hostvars | to_entries[] | select (.value.ansible_host=="'"$1"'") | .key')"
my_host="$(echo $my_host | sed -e 's/^"//' -e 's/"$//')"
echo host: $my_host
ansible-playbook -l $my_host play.yml
with the inventory
test_01 ansible_host=10.1.0.51
test_02 ansible_host=10.1.0.52
test_03 ansible_host=10.1.0.53
and with the playbook.yml
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: inventory_hostname
gives
$ ./script.bash 10.1.0.53
host: test_03
PLAY [all] **********
TASK [debug] ************
ok: [test_03] => {
    "inventory_hostname": "test_03"
}
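As a small refinement of the script above, jq's -r flag emits raw (unquoted) strings, which makes the sed step unnecessary:
#!/bin/bash
# Resolve the inventory hostname from the IP passed as $1, then limit the run to it.
my_host="$(ansible-inventory --list | jq -r '._meta.hostvars | to_entries[] | select(.value.ansible_host=="'"$1"'") | .key')"
echo host: $my_host
ansible-playbook -l $my_host play.yml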

Ansible increment variable globally for all hosts

I have two servers in my inventory (hosts)
[server]
10.23.12.33
10.23.12.40
and playbook (play.yml)
---
- hosts: all
  roles:
    - web
Inside the web role's vars directory I have main.yml:
---
file_number: 0
Inside the web role's tasks directory I have main.yml:
---
- name: Increment variable
  set_fact: file_number={{ file_number | int + 1 }}

- name: create file
  command: 'touch file{{ file_number }}'
Now I expect that on the first machine I will have file1 and on the second machine file2, but on both machines I have file1.
So this variable is local to every machine; how can I make it global for all machines?
My file structure is:
hosts
play.yml
roles/
  web/
    tasks/
      main.yml
    vars/
      main.yml
Now I expect that on the first machine I will have file1 and on the second machine file2, but on both machines I have file1.
You need to keep in mind that variables in Ansible aren't global. Variables (aka 'facts') are applied uniquely to each host, so file_number for host1 is different than file_number for host2. Here's an example based loosely on what you posted:
roles/test/vars/main.yml:
---
file_number: 0

roles/test/tasks/main.yml:
---
- name: Increment variable
  set_fact: file_number={{ file_number | int + 1 }}

- name: debug
  debug: msg="file_number is {{ file_number }} on host {{ inventory_hostname }}"
Now suppose you have just two hosts defined, and you run this role multiple times in a playbook that looks like this:
---
- hosts: all
  roles:
    - { role: test }

- hosts: host1
  roles:
    - { role: test }

- hosts: all
  roles:
    - { role: test }
So in the first play the role is applied to both host1 & host2. In the second play it's only run against host1, and in the third play it's again run against both host1 & host2. The output of this playbook is:
PLAY [all] ********************************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
ok: [host2]
TASK: [test | debug] **********************************************************
ok: [host1] => {
    "msg": "file_number is 1 on host host1"
}
ok: [host2] => {
    "msg": "file_number is 1 on host host2"
}
PLAY [host1] **************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
TASK: [test | debug] **********************************************************
ok: [host1] => {
    "msg": "file_number is 2 on host host1"
}
PLAY [all] ********************************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
ok: [host2]
TASK: [test | debug] **********************************************************
ok: [host1] => {
    "msg": "file_number is 3 on host host1"
}
ok: [host2] => {
    "msg": "file_number is 2 on host host2"
}
So as you can see, the value of file_number is different for host1 and host2 since the role that increments the value ran against host1 more times than it did host2.
Unfortunately there really isn't a clean way of making a variable global within Ansible. The entire nature of Ansible's ability to run tasks in parallel against large numbers of hosts makes something like this very tricky. Unless you're extremely careful with global variables in a parallel environment, you can easily trigger a race condition, which will likely result in unpredictable (inconsistent) results.
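Note that facts set with set_fact are still visible across hosts through hostvars, so one host can at least read another host's copy; a minimal sketch, assuming host1 has already run the increment task:
- name: Read host1's copy of file_number
  debug:
    msg: "{{ hostvars['host1']['file_number'] }}"
This does not make the variable global, though; each host still maintains its own counter.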
I haven't found a pure Ansible solution, but I made a workaround using shell to create a global counter for all hosts:
- Create a temporary file in /tmp on localhost and place the starting count in it.
- For every host, read the file and increment the number inside it.
I created the file and initialized it in the playbook (play.yml)
- name: Manage localhost working area
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create localhost tmp file
      file: path={{ item.path }} state={{ item.state }}
      with_items:
        - { path: '/tmp/file_num', state: 'absent' }
        - { path: '/tmp/file_num', state: 'touch' }

    - name: Managing tmp files
      lineinfile: dest=/tmp/file_num line='0'
Then, in the web role's tasks/main.yml, I read the file and increment it:
- name: Get file number
  local_action: shell file=$((`cat /tmp/file_num` + 1)); echo $file | tee /tmp/file_num
  register: file_num

- name: Set file name
  command: 'touch file{{ file_num.stdout }}'
Now I have file1 on the first host and file2 on the second host.
You can use Matt Martz's solution from here.
Basically your task would be like:
- name: Set file name
  command: 'touch file{{ play_hosts.index(inventory_hostname) }}'
And you can remove all the code for maintaining the global variable and the external file.
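Note that index() is zero-based, so the first host gets file0. To match the file1/file2 naming from the question, add one:
- name: Set file name
  command: 'touch file{{ play_hosts.index(inventory_hostname) + 1 }}'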
