I am trying to use a variable in a keepalived.conf.j2 template file that I push to the remote machine. Basically, I am trying to insert the remote machine's dynamic eth1 IP address into keepalived.conf.j2.
Here is the task:
- name: Keepalived config push
  template: src=keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf force=yes owner=root mode=0664
  tags: config_push
Here is the content of the Jinja2 template file:
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 101
    state MASTER
    interface eth0
    virtual_ipaddress {
        {{ ansible_eth1:network }} dev eth0
    }
}
What is the best way to implement this, so that every time I push to a remote machine the conf file contains that machine's eth1 address?
OK, it seems I have figured it out.
Your playbook has to have gather_facts: yes, and in the J2 template you need the following line:
{{ ansible_eth1.ipv4.address }}
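Putting it together, a minimal sketch (the hosts pattern and paths are illustrative):
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: Keepalived config push
      template:
        src: keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
And in keepalived.conf.j2:
virtual_ipaddress {
    {{ ansible_eth1.ipv4.address }} dev eth0
}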
Related
I'm trying to access another host's IP for use in a Jinja2 template. Basically, I need to extract the IP address of another host to use it in my role's template. Could you please point me in the right direction? Thank you.
Example inventory file; I need the web group's host IP (web_ip) to use in a role for the lb group:
web:
  hosts:
    web_ip:
  vars:
    ansible_user: ...
    ansible_ssh_private_key_file: ...
lb:
  hosts:
    lb_ip:
  vars:
    ansible_user: ...
    ansible_ssh_private_key_file: ...
Example of the template that will be used for the lb host group:
upstream web {
    server {{ web_ip }};
}
Solved it myself. Here is the solution that worked for me:
{% for host in groups['web'] %}{{ hostvars[host]['ansible_env'].SSH_CONNECTION.split(' ')[2] }}{% endfor %}
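If facts are gathered for the web hosts, a variant that does not depend on the SSH connection details (assuming the default interface of each web host carries the address you need) would be:
{% for host in groups['web'] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% endfor %}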
I'm setting up an Ansible playbook to set up a couple servers. There are a couple of tasks that I only want to run if the current host is my local dev host, named "local" in my hosts file. How can I do this? I can't find it anywhere in the documentation.
I've tried this when statement, but it fails because ansible_hostname resolves to the host name generated when the machine is created, not the one you define in your hosts file.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: ansible_hostname == "local"
The necessary variable is inventory_hostname.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: inventory_hostname == "local"
It is somewhat hidden in the documentation at the bottom of this section.
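To see how the two variables differ on a given machine, a quick debug task (illustrative) makes it obvious:
- name: Compare hostname variables
  debug:
    msg: "inventory_hostname={{ inventory_hostname }} ansible_hostname={{ ansible_hostname }}"
  # ansible_hostname requires gather_facts: yes; inventory_hostname does not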
You can limit the scope of a playbook by changing the hosts header of its plays, without relying on a special host label like 'local' in your inventory. Localhost does not need a special line in inventories.
- name: run on all except local
  hosts: 'all:!local'
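The complementary play then targets only the dev box (a sketch; 'local' is the label from your inventory):
- name: run only on local
  hosts: local
  tasks:
    - name: Install this only for local dev machine
      pip:
        name: pyramid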
This is an alternative:
- name: Install this only for local dev machine
  pip:
    name: pyramid
  delegate_to: localhost
Note that with delegate_to the task is still scheduled once per host in the play; it is simply executed on localhost each time.
I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host which is being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I’d like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this:
{{ ansible_env["SSH_CLIENT"].split()[0] }}
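SSH_CLIENT holds "client_ip client_port server_port" as seen by the remote machine, so index 0 is the control host's IP. A quick way to check it (a sketch; requires gathered facts, since ansible_env comes from them):
- name: Show the control host IP as seen by the remote host
  debug:
    msg: "{{ ansible_env['SSH_CLIENT'].split()[0] }}"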
You can force Ansible to fetch facts about the control host by running the setup module locally by using either a local_action or delegate_to. You can then either register that output and parse it or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1

  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1

  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes

- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)
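The delegated facts work just as well inside a template, e.g. (illustrative snippet):
control_host: {{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }}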
During deployment I need to set up a ufw firewall rule: allow access for the hosts in the servers inventory group. The target host is an Amazon EC2 instance, but it may be hosted at another provider.
I tried:
- name: Ufw rules
  ufw: rule=allow from_ip={{ hostvars[item]['ansible_eth0']['ipv4']['address'] }}
  with_items: "{{ groups['servers'] }}"
  notify:
    - Restart ufw
The hosts file:
..
[servers]
server1.site.com
server2.site.com
server3.site.com
..
But host server1.site.com actually has these host vars:
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "10.x.x.x",
...
The IP address 10.x.x.x, as I understand it, is an internal Amazon network address.
If I execute ping server1.site.com from outside EC2, I get:
64 bytes from server1.site.com (46.x.x.x): icmp_seq=1 ttl=56 time=49.5 ms
...
Executing ansible server1.site.com -m setup -i hosts | grep 46.x.x.x finds nothing.
How can I find the external IP address of a host from an inventory group using Ansible?
Or how can I set up the ufw firewall using host names from the inventory?
I think your best bet is to use the lookup plugin written by dsedivec (https://github.com/dsedivec/ansible-plugins).
Create a lookup_plugins/ directory at the top level of your playbook.
Put dns.py in it:
curl https://raw.githubusercontent.com/dsedivec/ansible-plugins/master/lookup_plugins/dns.py > lookup_plugins/dns.py
And use it like so:
- name: Ufw rules
  ufw: rule=allow from_ip={{ lookup('dns', item) }}
  with_items: "{{ groups['servers'] }}"
  notify:
    - Restart ufw
Good luck!
The easiest way is to add connection: local
---
- name: get my public IP
  ipify_facts: api_url=http://api.ipify.org
  connection: local

- firewalld: rich_rule='rule family=ipv4 source address={{ ipify_public_ip }} accept' permanent=no state=enabled timeout=300
  become: true
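Since the question asked about ufw, the same fact plugs into the ufw module as well (a sketch under the same assumptions):
- ufw:
    rule: allow
    from_ip: "{{ ipify_public_ip }}"
  become: true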
The Question
When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?
Note: Ansible is being run from a server on the same isolated network.
The Problem
I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is, that means ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)
- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: "{{ rax_response.success }}"
  when: rax_response.action == 'create'
This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying at an IP address on which SSH isn't listening. This happens because:
Ansible tries to connect to ansible_ssh_host...
But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
And Rackspace has set accessIPv4 to the public IP address of the server.
Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did it depends on pyrax which in turn depends on novaclient, and novaclient only allows updating the name of the server, not accessIPv4.
Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?
You can manually edit the rax.py file and change lines 125 and 163 from:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
This should make the value of ansible_ssh_host the private IP.
My first thought on this is to treat it like you have a tunnel you need to set up.
When you use the rax module, it creates a group called "raxhosts". By default, these are accessed using that public ipv4 address.
You could create another group using that group (via add_host), but specify the IP you want to actually access it through.
- name: Redirect to raxprivhosts
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax_addresses['private'][0]['addr'] }}"
    ansible_ssh_pass: "{{ item.rax_adminpass }}"
    groupname: raxprivhosts
  with_items: "{{ raxhosts }}"
Then apply playbooks against that group as your follow-on actions.
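A follow-on play might look like this (a sketch; the ping task is only there to verify connectivity over the private address):
- name: Configure servers over the isolated network
  hosts: raxprivhosts
  tasks:
    - name: Verify connectivity on the private address
      ping: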
Let me know how this works out for you; I'm just throwing it out as an alternative to changing your rax.py manually.
You can set an environment variable in /etc/tower/settings.py to use the private network. It would either be
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'YOURNETWORK'
or
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'private'
You would then restart services with ansible-tower-service restart
You then need to refresh the inventory which you can do through the Tower interface. You'll now see that the host IP is set to the network you specified in the variable.
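Outside Tower, the same environment variable can be exported for a plain ansible-playbook run (assuming your rax.py honors RAX_ACCESS_NETWORK, as the Tower setting above implies; site.yml is a placeholder playbook name):
RAX_ACCESS_NETWORK=YOURNETWORK ansible-playbook -i rax.py site.yml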
In the latest version of Ansible you need to change:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
AND
hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4
to:
hostvars[server.name]['ansible_ssh_host'] = server.addresses['private'][0]['addr']