Set SSH Host IP Address on Rackspace for Ansible

The Question
When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?
Note: Ansible is being run from a server on the same isolated network.
The Problem
I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is that this means Ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)
- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: rax_response.success
  when: rax_response.action == 'create'
This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying at an IP address on which SSH isn't listening. This happens because:
Ansible tries to connect to ansible_ssh_host...
But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
And Rackspace has set accessIPv4 to the public IP address of the server.
Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did, it depends on pyrax, which in turn depends on novaclient, and novaclient only allows updating the name of the server, not accessIPv4.
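(For what it's worth, that update call could presumably be made directly against the API with the uri module instead of going through pyrax. A rough, untested sketch, where the region endpoint, tenant_id, and auth_token are placeholders of mine rather than anything from the rax module:)

- name: Set accessIPv4 to the isolated-network address (hypothetical sketch)
  local_action:
    module: uri
    # The endpoint URL, tenant ID, and token below are illustrative placeholders.
    url: "https://dfw.servers.api.rackspacecloud.com/v2/{{ tenant_id }}/servers/{{ item.rax.id }}"
    method: PUT
    headers:
      X-Auth-Token: "{{ auth_token }}"
    body_format: json
    body:
      server:
        accessIPv4: "{{ item.rax.addresses.my_network_name[0].addr }}"
  with_items: "{{ rax_response.success }}"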
Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?

You can manually edit the rax.py file and change lines 125 and 163 from:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
This should make the value of ansible_ssh_host the private (ServiceNet) IP. If your servers sit on a named isolated network instead, the same edit applies with that network's name as the key in server.addresses rather than 'private'.

My first thought on this is to treat it like you have a tunnel you need to set up.
When you use the rax module, it creates a group called "raxhosts". By default, these hosts are accessed using their public IPv4 addresses.
You could create another group using that group (via add_host), but specify the IP you want to actually access it through.
- name: Redirect to raxprivhosts
  local_action:
    module: add_host
    # Group members are hostnames, so their rax_* variables come from hostvars.
    hostname: "{{ item }}"
    ansible_ssh_host: "{{ hostvars[item].rax_addresses['private'][0]['addr'] }}"
    ansible_ssh_pass: "{{ hostvars[item].rax_adminpass }}"
    groupname: raxprivhosts
  with_items: "{{ groups['raxhosts'] }}"
Then apply playbooks against those groups as your follow on actions.
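For instance, a purely illustrative follow-on play might be:

- name: Configure servers over their isolated-network addresses
  hosts: raxprivhosts
  tasks:
    - name: Verify connectivity on the private address
      ping: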
Let me know how this works out for you; I'm just throwing it out as an alternative to changing your rax.py manually.

You can set an environment variable in /etc/tower/settings.py to use the private network. It would either be
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'YOURNETWORK'
or
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'private'
You would then restart services with ansible-tower-service restart.
You then need to refresh the inventory, which you can do through the Tower interface. You'll now see that the host IP is set to the network you specified in the variable.

In the latest version of Ansible you need to change:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
AND
hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4
to:
hostvars[server.name]['ansible_ssh_host'] = server.addresses['private'][0]['addr']

Related

Ansible Automatically Add Host if Not Found in Inventory

Is there any possible way to automatically add a host if it is not defined in /etc/ansible/hosts?
I have a playbook that will create a user in remote host when triggered.
Command:
# ansible-playbook -e "username=test_account host=10.0.1.123 acc_exp=2021-06-28" register_user.yml
I tried to create a task to register the host but the play stopped prematurely before getting into the task.
tasks:
  - name: Register host IP address for the first time
    add_host:
      name: "{{ host }}"
      group: register_user_hosts
UPDATE:
Prematurely means the play ended right after the Ansible command being executed.
Error:
PLAY [register_user.yml] *******************************************************
skipping: no hosts matched
I left the Ansible hosts file empty under the [register_user_hosts] group so that the playbook would append the host automatically.
The reason for this method is that I will not need to register the host every time a new instance is provisioned. This saves time and allows fast deployment, since there is no need to manually access the Ansible controller just to append a new host.
Please advise.
Thank you.
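The play is skipped because its hosts pattern matches nothing when the playbook starts, so the add_host task never gets a chance to run. The usual pattern is to run add_host in a first play against localhost and target the in-memory group in a second play. A rough sketch along those lines (the user task is illustrative and omits the acc_exp handling):

- name: Register the host in in-memory inventory
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Register host IP address for the first time
      add_host:
        name: "{{ host }}"
        group: register_user_hosts

- name: Create the user on the newly registered host
  hosts: register_user_hosts
  tasks:
    - name: Create the user account (expiry handling omitted)
      user:
        name: "{{ username }}"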

Restarting a service after looped commands on multiple servers

I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. Works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars/localhost references just hold vars_prompt input from the user in a task list above this one. I store prompted data under hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{ hostvars['localhost']['hostList'] }}"
    - set_fact:
        portList: "{{ hostvars['localhost']['portList'] }}"
    - set_fact:
        portStateRequested: "{{ hostvars['localhost']['portStateRequested'] }}"
    - set_fact:
        portState: "{{ hostvars['localhost']['portState'] }}"
    - set_fact:
        remoteIPs: "{{ hostvars['localhost']['remoteIPs'] }}"
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host has an error for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, I guess that in that scenario there would be 4 new entries to the firewall config, and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes, and have a rescue block within the playbook to back out those entries that did go through.
Grrr.... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change. These ‘notify’ actions are triggered at the end of each block of tasks in a play, and will only be triggered once even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair of: a notify attribute on one or multiple tasks, and a handler whose name matches that notify attribute.
So your playbook should look like:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact tasks removed for concision
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld
  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
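As a general Ansible note (not specific to this thread): notified handlers normally run at the end of the play, but if the reload must happen earlier, pending handlers can be flushed on demand with a meta task:

    - meta: flush_handlers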

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook, to do this I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I'm not targeting this playbook at any particular host, as I don't have one yet. I instead want to run the playbook locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module to do this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Perhaps there is a better way of doing this?
Thank you for your help
I managed to work out how to solve it. The answer is the Ansible module add_host. I have a play following vsphere_guest, as shown below. This creates a new in-memory host, which can then be accessed by a different play.
- add_host: group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can now target this host.
- hosts: new_machine
Also of note: variables do not span across different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B.
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"  # hw_eth0 is the fact returned by the vsphere_guest module
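Since the fact is set on the host that ran play A (127.0.0.1 here), a later play reads it through hostvars; a minimal sketch (the debug task is mine, for illustration):

- hosts: new_machine
  tasks:
    - name: Read the fact set during the earlier play
      debug:
        msg: "New VM IP is {{ hostvars['127.0.0.1']['vm_ipaddress'] }}"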
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore
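The same connection parameters could equally live in a group_vars file instead of on the inventory line; a sketch with illustrative values:

# group_vars/databases.yml
ansible_user: Administrator
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore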

Use Ansible control master's IP address in Jinja template

I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host which is being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I’d like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this:
{{ ansible_env["SSH_CLIENT"].split()[0] }}
(This works because sshd sets the SSH_CLIENT environment variable to the connecting client's address, so it only applies when Ansible connects over SSH.)
You can force Ansible to fetch facts about the control host by running the setup module locally by using either a local_action or delegate_to. You can then either register that output and parse it or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1

  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1

  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes

- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)

Access Box Name When Using Ansible for Vagrant Provisioning

I'm trying to do provisioning for a Vagrant multi-machine setup with Ansible. I've got it installing the software I was after, but I want to be able to load in machine-specific configuration it can use for things like the host name and domain. I had thought I would be able to access the name of the box as a variable, but nothing I've tried has been able to access it. Does anyone know what I can add to an Ansible playbook for Vagrant provisioning to access the host name variable?
The hostname can be accessed with {{ inventory_hostname }}. This is the name as defined in the inventory of Ansible. Also there is {{ ansible_hostname }} which is the discovered hostname.
From Magic Variables, and How To Access Information About Other Hosts
Additionally, inventory_hostname is the name of the hostname as configured in Ansible’s inventory host file. This can be useful for when you don’t want to rely on the discovered hostname ansible_hostname or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the domain.
Also see Information Discovered from Systems Facts.
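As a small illustration (my example, not from the thread), the inventory name can be fed straight into a provisioning task, e.g. to set the machine's hostname:

- name: Set the hostname from the Ansible inventory name
  hostname:
    name: "{{ inventory_hostname }}"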
