Ansible synchronize via SSH if inventory_hostname is different

I'd like to use the synchronize module in Ansible to synchronize files from one server to some other servers via SSH:
- name: Copy files to all servers
  synchronize:
    src: /source/path/
    dest: "rsync://{{ ansible_nodename }}:/destination/path/"
  delegate_to: src-host
By default this module uses the inventory_hostname, but from src-host that hostname resolves to something different. The only way I have found so far is to use rsync://{{ ansible_nodename }}, but then the transfer no longer seems to go via SSH and I get:
No route to host (113)
rsync error: error in socket IO (code 10) at clientserver.c(128) [sender=3.1.0]
I also tried to override inventory_hostname just for this one task, with no luck so far.
So, for example, I imagined something like this:
- name: Copy custom config to all servers
  synchronize:
    src: /opt/app/dir/
    dest: /opt/app/dir/
  delegate_to: src-host
  vars:
    inventory_hostname: "{{ ansible_nodename }}"
But of course it fails when manipulating inventory_hostname, with the following message:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: jinja2.exceptions.UndefinedError: "hostvars['{{ ansible_nodename }}']" is undefined
fatal: [my-host]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}

If you want to copy something from serverA to serverB, using serverB's eth4 interface address, it should look like this:
- name: Copy custom config from serverA to serverB
  synchronize:
    src: "/path/to/source/on/serverA/machine"
    dest: "rsync://{{ hostvars['serverB']['ansible_facts']['ansible_eth4']['ipv4']['address'] }}/path/where/to/put/files/on/serverB/machines"
  delegate_to: serverA

If you want to template the target string based on some hostname, you can use the special variable inventory_hostname, and (in case your inventory hostname is not the real address) access the other host's variables, e.g.:
{{ hostvars[inventory_hostname]['ansible_facts']['ansible_fqdn'] }}
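Putting the two hints together, a minimal sketch for the original task might look like the following (assumptions: facts have been gathered for the target hosts, src-host can reach them by FQDN, and, as the question already observed, the rsync:// form talks to an rsync daemon on the target rather than tunnelling over SSH):
- name: Copy custom config to all servers
  synchronize:
    src: /opt/app/dir/
    dest: "rsync://{{ hostvars[inventory_hostname]['ansible_facts']['ansible_fqdn'] }}/opt/app/dir/"
  delegate_to: src-host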

Related

Access Ansible inventory variables using with_items and vmware_guest for vSphere

I'm trying to convert a playbook which deploys vSphere VMs. The current version
of the playbook gets individual IP addresses and source template information
from vars/main.yml in the role (I'm using the best practice directory layout.)
vms:
  - name: demo-server-0
    ip_address: 1.1.1.1
  - name: demo-server-1
    ip_address: 1.1.1.2
Template and other information is stored elsewhere in the vars.yml file, but it makes more sense to use the standard inventory, so I created these entries in the inventory file:
test_and_demo:
  hosts:
    demo-server-0:
      ip-address: 1.1.1.1
    demo-server-1:
      ip-address: 1.1.1.2
  vars:
    vc_template: xdr-template-1
The play is pretty much unchanged, apart from this key change:
FROM:
with_items: '{{ vms }}'
TO:
with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"
But this throws the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
failed: [localhost] (item=demo-deployer) => {"ansible_loop_var": "item", "changed": false, "item": "demo-deployer", "msg": "Failed to import the required Python library (PyVmomi) on lubuntu's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
I don't believe this is a platform issue, as reverting back to with_items: '{{ vms }}'
works just fine (vars/main.yml still exists and duplicates the data).
I'm probably using query incorrectly, but can anyone give me a hint about what I'm doing wrong?
Accessing sub-values
If I can get the VMs to deploy using with_items: + query, I'd then need to access the host and group variables for the various items I need to specify. Can someone advise me here, too?
Many thanks.
Although I was convinced the problem was in the way I was doing the query, it was in the root play, which I hadn't noticed. With become: yes set, I think it was switching to the root user, which didn't have the module installed.
Once that was removed, the problem was resolved. The play that caused it:
- hosts: localhost
  roles:
    - deploy_demo_servers
  become: yes
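As for the "Accessing sub-values" part of the question, per-host and group values from the inventory can be read through hostvars inside the same loop. A minimal sketch, assuming the test_and_demo inventory above (note that the hyphenated ip-address key has to be accessed with bracket syntax):
- name: Show per-host and group values
  debug:
    msg: "{{ item }}: ip={{ hostvars[item]['ip-address'] }}, template={{ hostvars[item]['vc_template'] }}"
  with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"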

Switching user for delegation to host outside of inventory with Ansible/awx

I am trying to do the following using Ansible 2.8.4 and awx:
Read some facts from Cisco IOS devices (works)
Put results into a local file using a template (works)
Copy/Move the resulting file to a different server
Since I have to use a different user to access IOS devices and servers, and the servers in question aren't part of the inventory used for the playbook, I am trying to achieve this using become_user and delegate_to.
The initial user (defined in the awx template) is allowed to connect to the IOS devices, while different_user can connect to the servers using an SSH private key.
The playbook:
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Gather IOS Facts
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Create Output file
      file: path=/tmp/test state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - name: Run Template
      template:
        src: ios_firmware_check.j2
        dest: /tmp/test/output.txt
      delegate_to: 127.0.0.1
      run_once: true
    - name: Set up keys
      become: yes
      become_method: su
      become_user: different_user
      authorized_key:
        user: different_user
        state: present
        key: "{{ lookup('file', '/home/different_user/.ssh/key_file') }}"
      delegate_to: 127.0.0.1
      run_once: true
    - name: Copy to remote server
      remote_user: different_user
      copy:
        src: /tmp/test/output.txt
        dest: /tmp/test/output.txt
      delegate_to: remote.server.fqdn
      run_once: true
When run, the playbook fails in the Set up keys task trying to access the home directory with the ssh key:
TASK [Set up keys] *************************************************************
task path: /tmp/awx_2206_mz90qvh9/project/IOS/ios_version.yml:23
[WARNING]: Unable to find '/home/different_user/.ssh/key_file' in expected paths
(use -vvvvv to see paths)
File lookup using None as file
fatal: [host]: FAILED! => {
"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/different_user/.ssh/key_file"
}
I'm assuming my mistake is somehow related to which user is trying to access the /home/ directory on which device.
Is there a better/more elegant/working way of connecting to a different server using an ssh key to move around files?
I know one possibility would be to just scp using the shell module, but that always feels a bit hacky.
(Sort of) solved using encrypted variables in hostvars with Ansible Vault.
How to get there:
Encrypting the passwords:
This needs to be done from any command line with Ansible installed; for some reason this can't be done in tower/awx:
ansible-vault encrypt_string "password"
You'll be prompted for a password to encrypt/decrypt.
If you're doing this for Cisco devices, you'll want to encrypt both the ssh and the enable password using this method.
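Optionally, the variable name can be encrypted together with the value, so the output can be pasted straight into the inventory as-is (a sketch; ansible_ssh_pass here is just the variable name used below):
ansible-vault encrypt_string 'password' --name 'ansible_ssh_pass'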
Add encrypted passwords to inventory
For testing, I put them in hostvars for a single switch; it should be fine to put them into group_vars and use them on multiple switches as well.
ansible_ssh_pass should be the password to access the switch; ansible_become_pass is the enable password.
---
all:
  children:
    Cisco:
      children:
        switches:
    switches:
      hosts:
        HOSTNAME:
          ansible_host: ip-address
          ansible_user: username
          ansible_ssh_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
          ansible_connection: network_cli
          ansible_network_os: ios
          ansible_become: yes
          ansible_become_method: enable
          ansible_become_pass: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            [encrypted string]
Adding the vault password to tower/awx
Add a new credential with credential type "Vault" and the password you used earlier to encrypt the strings.
Now, all you need to do is add the credential to your job template (the template can have one "normal" credential (machine, network, etc.) and multiple vaults).
The playbook then automagically accesses the vault credential to decrypt the strings in the inventory.
Playbook to get Switch Infos and drop template file on a server
The playbook now looks something like below, and does the following:
Gather Facts on all Switches in Inventory
Write all facts into a .csv using a template, save the file on the ansible host
Copy said file to a different server using a different user
The awx template is configured with the user able to access the server; the user used to access the switches with a password is stored in the inventory as seen above.
---
- name: Read Switch Infos
  hosts: all
  gather_facts: no
  tasks:
    - name: Create Output file
      file: path=/output/directory state=directory mode=0755
      delegate_to: 127.0.0.1
      run_once: true
    - debug:
        var: network
    - name: Gather IOS Facts
      remote_user: username
      ios_facts:
    - debug: var=ansible_net_version
    - name: Set Facts IOS
      set_fact:
        ios_version: "{{ ansible_net_version }}"
    - name: Run Template
      template:
        src: ios_firmware_check.csv.j2
        dest: /output/directory/filename.csv
      delegate_to: 127.0.0.1
      run_once: true
    - name: Create Destination folder on remote server outside inventory
      remote_user: different_username
      file: path=/destination/directory mode=0755
      delegate_to: remote.server.fqdn
      run_once: true
    - name: Copy to remote server outside inventory
      remote_user: different_username
      copy:
        src: /output/directory/filename.csv
        dest: /destination/directory/filename.csv
      delegate_to: remote.server.fqdn
      run_once: true

Setting and reading environment variables in Ansible does not work [duplicate]

I am deploying a CentOS machine, and one of the tasks is to read a file that is rendered by the Consul service, which places it under /etc/sysconfig. I then try to read it into a variable using the lookup module, but it throws the error below:
fatal: [ansible_vm1]: FAILED! => {"failed": true, "msg": "could not locate file in lookup: /etc/sysconfig/idb_EndPoint"}
But I am running the lookup task well after the point where the idb_EndPoint file is generated, and I also logged in manually to verify that the file is available.
- name: importing the file contents to variable
  set_fact:
    idb_endpoint: "{{ lookup('file', '/etc/sysconfig/idb_EndPoint') }}"
  become: true
I also tried privilege escalation with another user (become_user: deployuser along with become: true), but it still didn't work. I am using Ansible version 2.2.1.0.
All lookup plugins in Ansible are executed locally on the control machine. Use the slurp module instead:
- name: importing the file contents to variable
  slurp:
    src: /etc/sysconfig/idb_EndPoint
  register: idb_endpoint_b64
  become: true

- set_fact:
    idb_endpoint: "{{ idb_endpoint_b64.content | b64decode }}"
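A quick way to verify the result is a debug task (a sketch; note that the decoded content keeps any trailing newline from the file, so a | trim filter may be useful):
- debug:
    var: idb_endpoint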

Trying to include a list of tasks from a playbook in Ansible

My folder structure:
First I'll give you this so you can see how this is laid out and reference it when reading below:
/environments
  /development
    hosts                 // Inventory file
    /group_vars
      proxies.yml
    /custom_tasks
      firewall_rules.yml  // File I'm trying to bring in
playbook.yml              // Root playbook, just brings in the plays
rev-proxy.yml             // Reverse-proxy playbook, included by playbook.yml
playbook.yml:
---
- include: webserver.yml
- include: rev-proxy.yml
proxies.yml just contains firewall_custom_include_file: custom_tasks/firewall_rules.yml
firewall_rules.yml:
tasks:
  - name: "Allowing traffic from webservers on 80"
    ufw: src=10.10.10.3, port=80, direction=in, rule=allow
  - name: "Allowing traffic all on 443"
    ufw: port=443, rule=allow
and finally rev-proxy.yml play:
---
- hosts: proxies
  become: yes
  roles:
    - { role: firewall }
    - { role: geerlingguy.nginx }
  pre_tasks:
    # jessie-backports for nginx-extras 1.10
    - name: "Adding jessie-backports repo"
      copy: content="deb http://ftp.debian.org/debian jessie-backports main" dest="/etc/apt/sources.list.d/jessie-backports.list"
    - name: Updating apt-cache.
      apt: update_cache="yes"
    - name: "Installing htop"
      apt:
        name: htop
        state: present
    - name: "Copying SSL certificates"
      copy: src=/vagrant/ansible/files/ssl/ dest=/etc/ssl/certs force=no
  tasks:
    - name: "Including custom firewall rules."
      include: "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml"
      when: firewall_custom_include_file is defined
  vars_files:
    - ./vars/nginx/common.yml
    - ./vars/nginx/proxy.yml
What I'm trying to do:
Using Ansible 2.2.1.0
I'm trying to include a list of tasks that will be run if a variable firewall_custom_include_file is set. The list is included relative to the inventory directory by doing "{{ inventory_dir }}/{{ firewall_custom_include_file }}.yml" - in this case that works out to /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml
Essentially the idea here is that I need to have different firewall rules be executed based on what environment I'm in, and what hosts are being provisioned.
To give a simple example: I might want to whitelist a database server IP on the production webserver, but not on the reverse proxy, and also not on my development box.
The problem:
Whenever I include firewall_rules.yml like above, it tells me:
TASK [Including custom firewall rules.] ****************************************
fatal: [proxy-1]: FAILED! => {"failed": true, "reason": "included task files must contain a list of tasks"}
I'm not sure what it's expecting. I tried taking out the tasks: at the beginning of the file, making it:
- name: "Allowing traffic from webservers on 80"
ufw: src=10.10.10.3, port=80, direction=in, rule=allow
- name: "Allowing traffic all on 443"
ufw: port=443, rule=allow
But then it gives me the error:
root@ansible-control:/vagrant/ansible# ansible-playbook -i environments/development playbook.yml
ERROR! Attempted to execute "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as inventory script: problem running /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml --list ([Errno 8] Exec format error)
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as YAML: 'AnsibleSequence' object has no attribute 'keys'
Attempted to read "/vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml" as ini file: /vagrant/ansible/environments/development/custom_tasks/firewall_rules.yml:2: Expected key=value host variable assignment, got: name:
At this point I'm not really sure what it's looking for in the included file, and I can't seem to really find clear documentation on this, or other people having this issue.
Try to execute with -i environments/development/hosts instead of the directory.
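Based on the command shown in the question, the invocation would then be:
ansible-playbook -i environments/development/hosts playbook.yml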
But I bet that storing a tasks file inside the inventory directory is far from best practice.
You may want to define the list of custom rules as an inventory variable instead, e.g.:
custom_rules:
  - src: 10.10.10.3
    port: 80
    direction: in
    rule: allow
  - port: 443
    rule: allow
And instead of the include task, do something like this:
- ufw:
    port: "{{ item.port | default(omit) }}"
    rule: "{{ item.rule | default(omit) }}"
    direction: "{{ item.direction | default(omit) }}"
    src: "{{ item.src | default(omit) }}"
  with_items: "{{ custom_rules }}"

Ansible synchronize does not read ansible_sudo_pass variable

I cannot get Ansible's synchronize module to work. I have a very simple task:
- name: Copy project local files
  synchronize: src="{{ item.src }}" dest="{{ item.dest }}"
  with_items: project_local_files
When I execute ansible-playbook playbook.yml --tags deploy --ask-su-pass --ask-pass, all tasks go through. Then it stops at the synchronize task and prompts me for a password:
admin#'s password:
I enter the password and then the task breaks with the following error message:
msg: sudo: no tty present and no askpass program specified
How can I synchronize files with Ansible's synchronize?
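A general note, not from the original thread: with become enabled, synchronize effectively runs the remote rsync through sudo (the rsync_path becomes something like "sudo rsync"), and sudo then asks for a password with no tty attached, which produces exactly this error. Common workarounds are to grant the remote user passwordless sudo for rsync, or to drop become for this task when the remote user can write to the destination directly. A hedged sketch of the latter:
- name: Copy project local files
  synchronize:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  become: no        # assumption: the connecting user can write to dest without sudo
  with_items: "{{ project_local_files }}"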
