Access Box Name When Using Ansible for Vagrant Provisioning - vagrant

I'm trying to do provisioning for a Vagrant multi-machine setup with Ansible. I've got it installing the software I was after, but I want to be able to load in machine-specific configuration it can use for things like the host name and domain. I had thought I would be able to access the name of the box as a variable, but nothing I've tried seems to be able to access that. Does anyone know what I can add to an Ansible playbook for Vagrant provisioning to access the host name variable?

The hostname can be accessed with {{ inventory_hostname }}. This is the name as defined in Ansible's inventory. There is also {{ ansible_hostname }}, which is the discovered hostname.
From Magic Variables, and How To Access Information About Other Hosts
Additionally, inventory_hostname is the name of the hostname as configured in Ansible’s inventory host file. This can be useful for when you don’t want to rely on the discovered hostname ansible_hostname or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the domain.
Also see Information Discovered from Systems Facts.
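When Vagrant generates the inventory for its Ansible provisioner, each machine's inventory name is the machine name defined in the Vagrantfile, so {{ inventory_hostname }} gives you exactly that. As a minimal sketch of how this could be used for host-specific naming (the example.com domain and the use of the hostname module are assumptions, not from the question):

- hosts: all
  vars:
    domain: example.com                     # assumed placeholder domain
  tasks:
    - name: Show which Vagrant machine this play is running against
      debug:
        msg: "Configuring {{ inventory_hostname }}.{{ domain }}"

    - name: Set the guest hostname from the Vagrant machine name
      hostname:
        name: "{{ inventory_hostname_short }}"
      become: true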

Related

ansible vagrant windows and localhost

I've got a Vagrant box that I connect to with WinRM; however, I need to fetch a file from it,
run blockinfile on it on my localhost (macOS), and then copy the file back to the Vagrant box.
Ansible does not like having two 127.0.0.1 entries in the inventory. I've tried just about everything I can think of but can't get them to work together.
The Vagrant box running on VirtualBox has NAT set up, but I can't seem to access it other than through the loopback address; fixing that might solve my issue.
I've also tried setting different IPs in the Vagrantfile and those have not been fruitful either.
Below is the inventory file that I've been working with.
[win]
127.0.0.1
[localhost]
control_machine ansible_host=local
[win:vars]
ansible_port=55985
ansible_winrm_transport=basic
ansible_winrm_scheme=http
ansible_user=vagrant
ansible_password="{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection=winrm
[localhost:vars]
ansible_user=test
ansible_connection=local
ansible_python_interpreter="/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
There are a few things to say about your inventory:
You seem to be confusing the concepts of group and host
You are defining a group that has the same name as an implicit host (i.e. localhost) (maybe because of point 1)
You are explicitly defining a host for your ansible controller when this is actually probably not what you want since ansible defines an implicit localhost for you. Note that an explicit definition makes your controller match the all magic group which is typically not wanted in most situations.
From the information you gave, here is how I would write my inventory. The examples below use Ansible's ability to organize vars in separate files/folders. Please have a look at the inventory documentation for more info. I also used the YAML format for the inventory, as it is easier to understand IMO. Feel free to translate back to INI if you wish.
in inventories/my_env/hosts.yml (any filename will do)
---
all:
  hosts:
    my.windows.vagrant:
in inventories/my_env/host_vars/my.windows.vagrant.yml
---
ansible_host: 127.0.0.1
ansible_port: 55985
ansible_winrm_transport: basic
ansible_winrm_scheme: http
ansible_user: vagrant
ansible_password: "{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection: winrm
in inventories/my_env/host_vars/localhost.yml
---
ansible_python_interpreter: "/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
Note that I am not (re)defining the implicit localhost in the inventory, only defining the (non-standard) Python interpreter you use on that host. Note as well that I dropped the other vars for localhost because:
implicit localhost uses the local connection plugin by default
the local connection plugin does not take ansible_user into account and uses the already logged-in user (i.e. the one launching the playbook on the controller)
Once you have done this, you can use the my.windows.vagrant target to address your vagrant windows box and localhost to run things on the controller.
Adapt to your exact needs.
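To illustrate the fetch/edit/copy-back flow from the question on top of that inventory, here is a rough sketch; the file paths and the block content are placeholders, not taken from the question:

- hosts: my.windows.vagrant
  tasks:
    - name: Fetch the file from the Windows guest to the controller
      fetch:
        src: 'C:\temp\app.conf'             # placeholder path on the guest
        dest: /tmp/app.conf
        flat: yes

    - name: Edit the fetched copy on the controller
      blockinfile:
        path: /tmp/app.conf
        block: "setting=value"              # placeholder block
      delegate_to: localhost

    - name: Copy the edited file back to the Windows guest
      win_copy:
        src: /tmp/app.conf
        dest: 'C:\temp\app.conf'            # placeholder path on the guest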

include playbook with a variable name, which is defined on another host

I'm having some trouble getting my playbook to include another. I use a playbook to roll out a clean VM. After that, I'd like to run another playbook to configure the VM in a certain way. I've added the new host to the inventory and have SSH access to it.
Our team has set up a project per servertype. I've retrieved the right path to the project in an early stage (running against localhost) and used set_fact to put it in "servertype_project".
I expect this to work (at the playbook-level, running against the new VM):
- name: "Run servertype playbook ({{ project }}) on VM"
vars:
project: "{{ hostvars['localhost']['servertype_project'] }}"
include: "{{ project }}/ansible/playbook.yml"
But it fails the syntax check with this message:
ERROR! {{ hostvars['localhost']['servertype_project'] }}: 'hostvars' is undefined
I can retrieve the right string if I refer to {{ hostvars['localhost']['servertype_project'] }} from within a task, but not from the level in which I can include another playbook.
Since the value is determined at runtime, I think set_fact is the correct way to store the variable, but that one is host-specific. Is there any way I can pass it along to this host as well? Or did I miss some global-var-like-option?
You are not missing anything. This is expected. Facts are bound to hosts and hostvars is not accessible in playbook scope.
There is no way to define a variable that would be accessible in the include declaration, except for passing it as an extra-vars argument on the CLI. But if you use an in-memory inventory (which is a reasonable assumption), then calling another ansible-playbook instance is out of the question.
I think you should rethink your flow and instead of including a dynamically-defined play, have statically-defined play(s) running against your in-memory inventory and include the roles conditionally.
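A rough sketch of that suggestion, assuming the fact servertype_project is set on localhost as in the question; the host pattern and role names below are placeholders:

- hosts: new_vm                             # placeholder pattern for the freshly provisioned VM
  tasks:
    - name: Configure as a web server
      include_role:
        name: webserver                     # placeholder role name
      when: hostvars['localhost']['servertype_project'] is search('webserver')

    - name: Configure as a database server
      include_role:
        name: database                      # placeholder role name
      when: hostvars['localhost']['servertype_project'] is search('database')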

Ansible Host: how to get the value for $HOME variable?

I know about the ansible_env.HOME variable, which allows us to retrieve the home path of the user on the VMs we connect to with Ansible.
However, I need to get the home path for the Ansible host, that is, the machine which is running the Ansible playbook. Is there a short variable to retrieve that information? I was hoping to avoid running a local command and storing the result in a variable.
You should be able to access it with a lookup plugin like so:
- debug: msg="{{ lookup('env','HOME') }}"
Lookup plugins run on the control machine, not the remote systems.
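For example, because the lookup is evaluated on the controller, you can use it to build controller-side paths even while the play targets remote hosts (the hosts pattern and file paths below are placeholders):

- hosts: webservers                         # placeholder group
  tasks:
    - name: Copy a file that lives under the controller user's home directory
      copy:
        src: "{{ lookup('env', 'HOME') }}/files/app.conf"   # resolved on the control machine
        dest: /etc/app.conf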

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook, to do this I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I'm not targeting this playbook at any particular host, as I don't have one yet. I instead want to run the playbook locally.
When this runs, I get a new shiny Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module to do this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Or perhaps there is a better way of doing this?
Thank you for your help.
I managed to work out how to solve it. The answer is the Ansible add_host module. I have a play under vsphere_guest as follows. This creates a new in-memory host, which can then be accessed by a different play.
- add_host group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can now target this host.
- hosts: new_machine
Also note that variables do not span across different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}" # hw_eth0 is the fact returned from the vsphere_guest module
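Putting the two plays together, the flow looks roughly like this; the WinRM port, credentials, and script variables are placeholders rather than values from the answer above:

- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... vsphere_guest task that creates the VM goes here ...
    - set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"
    - add_host:
        name: "{{ vm_ipaddress }}"
        groups: new_machine
        ansible_connection: winrm
        ansible_port: 5986                  # assumed WinRM over HTTPS
        ansible_user: Administrator         # placeholder credentials
        ansible_password: "{{ win_admin_password }}"

- hosts: new_machine
  tasks:
    - name: Rename the Windows guest
      script: rename_host.ps1 {{ new_hostname }}   # placeholder script and variable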
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore

Set SSH Host IP Address on Rackspace for Ansible

The Question
When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?
Note: Ansible is being run from a server on the same isolated network.
The Problem
I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is that this means Ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)
- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: rax_response.success
  when: rax_response.action == 'create'
This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying to connect at an IP address on which SSH isn't listening. This happens because:
Ansible tries to connect to ansible_ssh_host...
But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
And Rackspace has set accessIPv4 to the public IP address of the server.
Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did it depends on pyrax which in turn depends on novaclient, and novaclient only allows updating the name of the server, not accessIPv4.
Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?
You can manually edit the rax.py file and change line 125 and line 163 from:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
This should make the value of ansible_ssh_host the private IP.
My first thought on this is to treat it like you have a tunnel you need to set up.
When you use the rax module, it creates a group called "raxhosts". By default, these are accessed using that public ipv4 address.
You could create another group using that group (via add_host), but specify the IP you want to actually access it through.
- name: Redirect to raxprivhosts
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax_addresses['private'][0]['addr'] }}"
    ansible_ssh_pass: "{{ item.rax_adminpass }}"
    groupname: raxprivhosts
  with_items: raxhosts
Then apply playbooks against those groups as your follow-on actions.
Let me know how this works out for you; I'm just throwing it out as an alternative to changing your rax.py manually.
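A follow-on play against the new group might look like this minimal sketch (the connectivity check is just an illustration):

- hosts: raxprivhosts
  tasks:
    - name: Verify connectivity over the private address
      ping: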
You can set an environment variable in /etc/tower/settings.py to use the private network. It would either be
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'YOURNETWORK'
or
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'private'
You would then restart services with ansible-tower-service restart.
You then need to refresh the inventory which you can do through the Tower interface. You'll now see that the host IP is set to the network you specified in the variable.
In the latest version of Ansible, you need to change:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
AND
hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4
to:
hostvars[server.name]['ansible_ssh_host'] = server.addresses['private'][0]['addr']
