Setting and reading environment variables in Ansible does not work [duplicate]

I am deploying a CentOS machine, and one of the tasks is to read a file that is rendered by the Consul service, which places it under /etc/sysconfig. I am trying to read that file into a variable later using the lookup plugin, but it throws the error below:
fatal: [ansible_vm1]: FAILED! => {"failed": true, "msg": "could not locate file in lookup: /etc/sysconfig/idb_EndPoint"}
But I am running the lookup task well after the point where the idb_EndPoint file is generated, and I also logged in manually to verify that the file is there.
- name: importing the file contents to variable
  set_fact:
    idb_endpoint: "{{ lookup('file', '/etc/sysconfig/idb_EndPoint') }}"
  become: true
I also tried privilege escalation with another user (become_user: deployuser together with become: true), but it still didn't work. I am using Ansible version 2.2.1.0.

All lookup plugins in Ansible are executed locally on the control machine, so the file lookup never sees the file on the managed host.
Use the slurp module instead:
- name: importing the file contents to variable
  slurp:
    src: /etc/sysconfig/idb_EndPoint
  register: idb_endpoint_b64
  become: true

- set_fact:
    idb_endpoint: "{{ idb_endpoint_b64.content | b64decode }}"
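As a small usage sketch (not part of the original answer): the decoded content keeps any trailing newline from the file, so it may be worth trimming it when the variable is used in a later task:

- name: Use the value read from the remote file (illustrative only)
  debug:
    msg: "idb_EndPoint is {{ idb_endpoint | trim }}"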

Related

Access Ansible inventory variables using with_items and vmware_guest for vSphere

I'm trying to convert a playbook which deploys vSphere VMs. The current version
of the playbook gets individual IP addresses and source template information
from vars/main.yml in the role (I'm using the best practice directory layout.)
vms:
  - name: demo-server-0
    ip_address: 1.1.1.1
  - name: demo-server-1
    ip_address: 1.1.1.2
The template and other information is stored elsewhere in the vars file, but it makes more sense to use the standard inventory, so I created these entries in the inventory file:
test_and_demo:
  hosts:
    demo-server-0:
      ip-address: 1.1.1.1
    demo-server-1:
      ip-address: 1.1.1.2
  vars:
    vc_template: xdr-template-1
The play is pretty much unchanged, apart from this key change:
FROM:
with_items: '{{ vms }}'
TO:
with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"
But this throws the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
failed: [localhost] (item=demo-deployer) => {"ansible_loop_var": "item", "changed": false, "item": "demo-deployer", "msg": "Failed to import the required Python library (PyVmomi) on lubuntu's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
I don't believe this is a platform issue, as reverting back to with_items: '{{ vms }}'
works just fine (vars/main.yml still exists and duplicates the data).
I'm probably using query incorrectly, but can anyone give me a hint as to what I'm doing wrong?
Accessing sub-values
If I can get the VMs to deploy using with_items: + query, I'd then need to access the host and group variables for the various items I need to specify; can someone advise me here, too?
Many thanks.
Although I was convinced the problem was in the way I was using query, it was actually in the play itself, which I hadn't noticed. With become: yes, I think Ansible was switching to the root user, which didn't have the PyVmomi module installed.
Once become: yes was removed, the problem was resolved. This was the offending play:
- hosts: localhost
  roles:
    - deploy_demo_servers
  become: yes
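For the "Accessing sub-values" part of the question (not covered in the original answer), a minimal sketch: query('inventory_hostnames', ...) returns only hostnames, so the per-host and group variables defined in the inventory above can be read through hostvars. The debug task below stands in for the real vmware_guest task:

- name: Show inventory data for each demo server (illustrative only)
  debug:
    msg: "{{ item }} -> {{ hostvars[item]['ip-address'] }} (template {{ hostvars[item]['vc_template'] }})"
  with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"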

Switching user for delegation to host outside of inventory with Ansible/awx

I am trying to do the following using Ansible 2.8.4 and awx:
Read some facts from Cisco IOS devices (works)
Put results into a local file using a template (works)
Copy/Move the resulting file to a different server
Since I have to use different users to access the IOS devices and the servers, and the servers in question aren't part of the inventory used for the playbook, I am trying to achieve this using become_user and delegate_to.
The initial user (defined in the AWX template) is allowed to connect to the IOS devices, while different_user can connect to the servers using an SSH private key.
The playbook:
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Gather IOS Facts
      ios_facts:

    - debug: var=ansible_net_version

    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"

    - name: Create Output file
      file: path=/tmp/test state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true

    - name: Run Template
      template:
        src: ios_firmware_check.j2
        dest: /tmp/test/output.txt
      delegate_to: 127.0.0.1
      run_once: true

    - name: Set up keys
      become: yes
      become_method: su
      become_user: different_user
      authorized_key:
        user: different_user
        state: present
        key: "{{ lookup('file', '/home/different_user/.ssh/key_file') }}"
      delegate_to: 127.0.0.1
      run_once: true

    - name: Copy to remote server
      remote_user: different_user
      copy:
        src: /tmp/test/output.txt
        dest: /tmp/test/output.txt
      delegate_to: remote.server.fqdn
      run_once: true
When run, the playbook fails in the Set up keys task while trying to access the SSH key in the home directory:
TASK [Set up keys] *************************************************************
task path: /tmp/awx_2206_mz90qvh9/project/IOS/ios_version.yml:23
[WARNING]: Unable to find '/home/different_user/.ssh/key_file' in expected paths
(use -vvvvv to see paths)
File lookup using None as file
fatal: [host]: FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/different_user/.ssh/key_file"
}
I'm assuming my mistake is somehow related to which user is trying to access the /home/ directory on which device.
Is there a better/more elegant/working way of connecting to a different server using an ssh key to move around files?
I know one possibility would be to just scp using the shell module, but that always feels a bit hacky.
(Sort of) solved using encrypted variables in hostvars with Ansible Vault.
How to get there:
Encrypting the passwords:
This needs to be done from any command line with Ansible installed; for some reason it can't be done in Tower/AWX.
ansible-vault encrypt_string "password"
You'll be prompted for a password to encrypt/decrypt.
If you're doing this for Cisco devices, you'll want to encrypt both the ssh and the enable password using this method.
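For example (the values and variable names here are only illustrative; --name labels the output so it can be pasted straight into the inventory):
ansible-vault encrypt_string 'switch_ssh_password' --name 'ansible_ssh_pass'
ansible-vault encrypt_string 'switch_enable_password' --name 'ansible_become_pass'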
Add encrypted passwords to inventory
For testing, I put them in hostvars for a single switch; it should be fine to put them into groupvars and use them for multiple switches as well.
ansible_ssh_pass should be the password used to access the switch; ansible_become_pass is the enable password.
---
all:
  children:
    Cisco:
      children:
        switches:
    switches:
      hosts:
        HOSTNAME:
          ansible_host: ip-address
          ansible_user: username
          ansible_ssh_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
          ansible_connection: network_cli
          ansible_network_os: ios
          ansible_become: yes
          ansible_become_method: enable
          ansible_become_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
Adding the vault password to tower/awx
Add a new credential with credential type "Vault" and the password you used earlier to encrypt the strings.
Now, all you need to do is add the credential to your job template (the template can have one "normal" credential (machine, network, etc.) and multiple vaults).
The playbook then automagically accesses the vault credential to decrypt the strings in the inventory.
Playbook to get Switch Infos and drop template file on a server
The playbook now looks something like below, and does the following:
Gather Facts on all Switches in Inventory
Write all facts into a .csv using a template, save the file on the ansible host
Copy said file to a different server using a different user
The AWX template is configured with the user that can access the server; the user used to access the switches with a password is stored in the inventory, as seen above.
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Create Output file
      file: path=/output/directory state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true

    - debug:
        var: network

    - name: Gather IOS Facts
      remote_user: username
      ios_facts:

    - debug: var=ansible_net_version

    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"

    - name: Run Template
      template:
        src: ios_firmware_check.csv.j2
        dest: /output/directory/filename.csv
      delegate_to: 127.0.0.1
      run_once: true

    - name: Create Destination folder on remote server outside inventory
      remote_user: different_username
      file: path=/destination/directory state=directory mode=0755
      delegate_to: remote.server.fqdn
      run_once: true

    - name: Copy to remote server outside inventory
      remote_user: different_username
      copy:
        src: /output/directory/filename.csv
        dest: /destination/directory/filename.csv
      delegate_to: remote.server.fqdn
      run_once: true

ansible synchronize via ssh if inventory_hostname is different

I'd like to use the synchronize module in Ansible to synchronize files from one server to some other servers via SSH:
- name: Copy files to all servers
  synchronize:
    src: /source/path/
    dest: "rsync://{{ ansible_nodename }}:/destination/path/"
  delegate_to: src-host
By default this module uses the inventory hostname, but from src-host the hostname is a different one. The only way I have found so far is to use rsync://{{ ansible_nodename }}, but then the transfer no longer seems to happen via SSH, and I get a No route to host (113)\nrsync error: error in socket IO (code 10) at clientserver.c(128) [sender=3.1.0]
I also tried to override inventory_hostname just for this one task, with no luck so far.
For example, I imagined something like this:
- name: Copy custom config to all servers
  synchronize:
    src: /opt/app/dir/
    dest: /opt/app/dir/
  delegate_to: src-host
  vars:
    inventory_hostname: "{{ ansible_nodename }}"
But of course it fails when manipulating inventory_hostname, with the following message:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: jinja2.exceptions.UndefinedError: "hostvars['{{ ansible_nodename }}']" is undefined
fatal: [my-host]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
If you want to copy something
from serverA
to serverB
using serverB's interface eth4 address, it should be like this:
- name: Copy custom config from serverA to serverB
  synchronize:
    src: "/path/to/source/on/serverA/machine"
    dest: "rsync://{{ hostvars['serverB']['ansible_facts']['eth4']['ipv4']['address'] }}/path/where/to/put/files/on/serverB/machines"
  delegate_to: serverA
If you want to template the target string based on some hostname, you can use the special variable inventory_hostname, or (in case your inventory hostname is not the real address) access other host variables, e.g.:
{{ hostvars[inventory_hostname]['ansible_facts']['fqdn'] }}
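As a sketch only (assuming facts have been gathered for the target hosts), the original task could template the destination host from such a variable; whether the rsync:// form or plain host:path syntax is appropriate depends on how rsync on the targets is reachable from src-host:

- name: Copy files to all servers
  synchronize:
    src: /source/path/
    dest: "rsync://{{ hostvars[inventory_hostname]['ansible_facts']['fqdn'] }}/destination/path/"
  delegate_to: src-host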

Read a file locally and use the vars remote in Ansible

I read a YAML file locally with the following playbook:
- name: Ensure the deploy_manifest var is defined and read deploy manifest
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - assert:
        that: deploy_manifest is defined
        msg: |
          Error: Must provide providers config path. Fix: Add '-e deploy_manifest=/path/to/manifest' to the ansible-playbook command

    - name: Read deploy manifest
      include_vars:
        file: "{{ deploy_manifest }}"
        name: manifest
      register: manifest

    - debug:
        msg: "[{{ manifest.key }}]: {{ manifest.value }}"
      with_dict: "{{ manifest.ansible_facts }}"
and then in the same playbook YAML file I run:
- name: Deploy Backend services
  hosts: backend
  remote_user: ubuntu
  gather_facts: False
  vars:
    env: "{{ env }}"
    services: "{{ manifest.ansible_facts }}"
  tasks:
    - include_role:
        name: services_backend
      when: backend | default(true) | bool
However, it doesn't work because the debug task fails; it says that manifest is empty.
What is the best way to read a YAML file (or, generally, a configuration) in one play and then pass the variables to another play?
Your debug module doesn't say that "manifest is empty"; it says the key manifest.key does not exist, because it doesn't.
You registered a fact named manifest with:
register: manifest
You try to refer to a key of the above manifest named key and another key (!) named value:
msg: "[{{ manifest.key }}]: {{ manifest.value }}"
Please read the Looping over Hashes chapter and note that (without using loop_control) you refer to the iterated variable as item.
Please note that with name: manifest and register: manifest you read your vars file into manifest.ansible_facts.manifest.
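Putting those two points together, a corrected debug task might look like this (a sketch based on the points above, not the original post):

- debug:
    msg: "[{{ item.key }}]: {{ item.value }}"
  with_dict: "{{ manifest.ansible_facts.manifest }}"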

how to read json file using ansible

I have a JSON file in the same directory as my Ansible script. Following is the content of the JSON file:
{ "resources":[
{"name":"package1", "downloadURL":"path-to-file1" },
{"name":"package2", "downloadURL": "path-to-file2"}
]
}
I am trying to download these packages using get_url. Following is the approach:
---
- hosts: localhost
  vars:
    package_dir: "/var/opt/"
    version_file: "{{ lookup('file', '/home/shasha/devOps/tests/packageFile.json') }}"
  tasks:
    - name: Printing the file.
      debug: msg="{{ version_file }}"

    - name: Downloading the packages.
      get_url: url="{{ item.downloadURL }}" dest="{{ package_dir }}" mode=0777
      with_items: version_file.resources
The first task prints the content of the file correctly, but in the second task I am getting the following error:
[DEPRECATION WARNING]: Skipping task due to undefined attribute, in the future this
will be a fatal error.. This feature will be removed in a future release. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
You have to add a from_json jinja2 filter after the lookup:
version_file: "{{ lookup('file','/home/shasha/devOps/tests/packageFile.json') | from_json }}"
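With that filter applied, the download task from the question should then resolve item.downloadURL as expected; a sketch (same paths as in the question, with the loop written as a Jinja2 expression rather than a bare variable):

- name: Downloading the packages.
  get_url:
    url: "{{ item.downloadURL }}"
    dest: "{{ package_dir }}"
    mode: 0777
  with_items: "{{ version_file.resources }}"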
In case you need to read JSON-formatted text and store it as a variable, it can also be handled by include_vars:
- hosts: localhost
  tasks:
    - include_vars:
        file: variable-file.json
        name: variable

    - debug: var=variable
For future visitors: if you are looking to read a remote JSON file, this won't work,
as Ansible lookups are executed locally on the control machine.
You should use a module like slurp instead.
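A minimal sketch of that approach (the remote path is just an example), combining slurp with the from_json filter mentioned above:

- name: Read the JSON file from the remote host
  slurp:
    src: /etc/myapp/packageFile.json
  register: version_file_b64

- set_fact:
    version_file: "{{ version_file_b64.content | b64decode | from_json }}"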
