I'm trying to access another host's IP address for use in a Jinja2 template. So basically, I need to extract the IP address of another host and use it in my role template. Could you please point me in the right direction? Thank you
Example of the inventory file; here I need the web group host IP (web_ip) to use in a role for the lb group:
web:
  hosts:
    web_ip:
  vars:
    ansible_user: ...
    ansible_ssh_private_key_file: ...
lb:
  hosts:
    lb_ip:
  vars:
    ansible_user: ..
    ansible_ssh_private_key_file: ..
Example of the template that will be used for the lb host group:
upstream web {
    server {{ web_ip }};
}
Solved it myself. Here is the solution that worked for me:
{% for host in groups['web'] %}{{ hostvars[host]['ansible_env'].SSH_CONNECTION.split(' ')[2] }}{% endfor %}
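For context, SSH_CONNECTION has the form "client_ip client_port server_ip server_port", so index [2] is the web host's address on the SSH connection. A common alternative is to read the default IPv4 fact instead; here is a minimal template sketch, assuming facts have been gathered for the web hosts (the :80 port is illustrative):
upstream web {
{% for host in groups['web'] %}
    server {{ hostvars[host]['ansible_default_ipv4']['address'] }}:80;
{% endfor %}
}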
I have two DigitalOcean droplets with public and private IP addresses, created by Vagrant and an Ansible playbook.
I need to create AWS Route 53 records for each address (two records per droplet).
How can I get the addresses into vars to use them in a playbook?
I made a playbook this weekend to do something very similar.
In my case, I create a droplet on digitalocean, save the IP address, and then add a DNS record to AWS.
I hope this helps. Note that the playbook still has some things hardcoded, such as region and size because I have not polished it off yet.
Also, this is the first time I have used Ansible with DO and AWS, and the first time I have used the "register" feature to save variables, so this is probably a long way from best practice.
One thing that seems ugly is the hardcoding of my venv Python interpreter.
If you are happy to use your system Python, then you don't need to worry about that. The problem is that when Ansible sees connection: local, it uses the system Python, which is a bit odd since the script is running in a venv on the same machine.
You need to pip install boto3, and:
ansible-galaxy collection install community.digitalocean
ansible-galaxy collection install community.aws
Example playbook:
---
- hosts: all
  connection: local
  become_user: tim
  vars: # local connection defaults to using the system python
    ansible_python_interpreter: /home/tim/pycharm_projects/django_api_sync/ansible/venv/bin/python3
  vars_files:
    - includes/secret_variables.yml

  tasks:
    - name: Create a DigitalOcean droplet
      community.digitalocean.digital_ocean_droplet:
        state: present
        name: "{{ droplet_name }}"
        oauth_token: "{{ digital_ocean_token }}"
        size: "s-2vcpu-2gb"
        region: SGP1
        monitoring: yes
        unique_name: yes
        image: ubuntu-20-04-x64
        wait_timeout: 500
        ssh_keys: ["{{ digital_ocean_ssh_fingerprint }}"]
      register: my_droplet

    - name: Print IP address
      ansible.builtin.debug:
        msg: Droplet IP address is {{ my_droplet.data.ip_address }}

    - name: Add A record with route53
      community.aws.route53:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        state: present
        zone: growthpath.com.au
        record: "{{ ansible_host }}"
        type: A
        ttl: 7200
        value: "{{ my_droplet.data.ip_address }}"
        wait: yes
Example inventory file:
all:
  hosts:
    test-dear1.growthpath.com.au:
      droplet_name: test-dear1
      ansible_host: test-dear1.growthpath.com.au
ansible-playbook -i inventory_aws_test.yml -v create_new_droplet.yml
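The question asked for two records per droplet (public and private). As a hedged follow-on sketch, the registered result also carries the droplet's full network list, so a second task could pull the private address out of it. The data.droplet.networks.v4 path and the "private." record prefix are assumptions about the registered structure; inspect my_droplet with a debug task before relying on them.
- name: Add A record for the private address
  community.aws.route53:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    state: present
    zone: growthpath.com.au
    record: "private.{{ ansible_host }}"
    type: A
    ttl: 7200
    # the networks.v4 path is an assumption; verify it against the registered variable
    value: "{{ my_droplet.data.droplet.networks.v4 | selectattr('type', 'equalto', 'private') | map(attribute='ip_address') | first }}"
    wait: yes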
I'm running an ansible-playbook that provisions an EC2 instance and configures the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I'm supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so that the directory creation executes on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
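A minimal sketch of that readiness step, assuming the add_host pattern above (the timeout value is illustrative):
- hosts: myhost
  gather_facts: false
  tasks:
    - name: Wait until the new host is reachable before configuring it
      ansible.builtin.wait_for_connection:
        timeout: 300 # seconds; illustrative, tune to your provider's boot time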
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
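For example, with the amazon.aws collection installed, a minimal aws_ec2 inventory source might look like this sketch (the file name, region, and tag filter are illustrative; the file name must end in aws_ec2.yml for the plugin to match it):
# inventory_aws_ec2.yml (hypothetical file name)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-2
filters:
  # illustrative filter; match whatever tag your provisioning play applies
  tag:Name: myhost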
How do I get the IP or hostname of the Ansible server in a Jinja2 template? I'm talking about the IP of the Ansible control machine, not the IP of the target servers, e.g. {{ ansible_fqdn }}.
Make sure you have gathered facts for localhost. Do this in a separate play in your playbook if needed. The following should be sufficient:
- name: Simply gather facts for localhost
  hosts: localhost
  gather_facts: true
Then use the var from localhost in your template, i.e. {{ hostvars['localhost'].ansible_fqdn }}.
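A minimal template sketch, assuming the fact-gathering play above has already run (the file name and content are illustrative):
{# templates/provenance.j2 -- hypothetical template; assumes localhost facts were gathered #}
Provisioned from control machine: {{ hostvars['localhost'].ansible_fqdn }}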
I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host which is being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I’d like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this:
{{ ansible_env["SSH_CLIENT"].split()[0] }}
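SSH_CLIENT has the form "client_ip client_port server_port", so the first field is the address the target sees for the control machine. A quick way to check what's actually there, as a sketch (requires facts to have been gathered for the target):
- name: Inspect the SSH_CLIENT environment variable on the target
  ansible.builtin.debug:
    var: ansible_env.SSH_CLIENT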
You can force Ansible to fetch facts about the control host by running the setup module locally, using either a local_action or delegate_to. You can then either register that output and parse it, or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1

  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1

  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes

- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)
I'm using Vagrant and Ansible 1.9 to create a QA environment with an HAProxy front-end (in a loadbalancer group) and two Jetty backend servers (in a webservers group). The haproxy.cfg is a template with the following lines to display what Ansible is caching:
{% for host in groups['webservers'] %}
server {{ host }} {{ hostvars[host] }}:8080 maxconn 1024
{% endfor %}
The final config would use something like:
server {{ host }} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }}:8080 maxconn 1024
to configure the IP address of each backend service.
Based on this: http://docs.ansible.com/guide_rolling_upgrade.html I'm using the following to try to get Ansible to cache the IP address of each webserver:
- hosts: webservers
  user: vagrant
  name: Gather facts from webservers
  tasks: []
The webservers are built first, and the loadbalancer last. However, the webservers' IP addresses are not being cached. This is all that's there:
server qa01 {'ansible_ssh_host': '127.0.0.1', 'group_names': ['webservers'], 'inventory_hostname': 'qa01', 'ansible_ssh_port': 2222, 'inventory_hostname_short': 'qa01'}:8080 maxconn 1024
What do I need to do for those values to be cached? It seems like Ansible knows the values, since:
- hosts: all
  user: vagrant
  sudo: yes
  tasks:
    - name: Display all variables/facts known for a host
      debug: var=hostvars[inventory_hostname]
will display the full collection of values, including the IP address I need.
I learned at http://jpmens.net/2015/01/29/caching-facts-in-ansible/ that cached facts are flushed between playbook runs by default. But all the facts, including the info I need from the webservers, can be made available when provisioning the loadbalancer by configuring Ansible to persist the fact cache to disk. Creating this ansible.cfg in the same folder as my Vagrantfile fixes the issue:
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/mycachedir
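If stale entries ever become a problem, the cache can also be given an expiry. A hedged addition (the timeout value is illustrative; it is in seconds):
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/mycachedir
# optional: expire cached facts after 24 hours (illustrative value)
fact_caching_timeout = 86400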
This block was unnecessary and I removed it:
- hosts: webservers
  user: vagrant
  name: Gather facts from webservers
  tasks: []