Ansible Delegate_to WinRM - windows

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I'm not targeting the playbook at any particular host, as I don't have one yet; I instead want to run the playbook locally.
When this runs, I get a shiny new Windows server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script, like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module for this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", Ansible tries to connect over SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Or perhaps there is a better way of doing this?
Thank you for your help

I managed to work out how to solve it. The answer is the Ansible module 'add_host'. I have a task after vsphere_guest as follows. This creates a new in-memory host, which can then be targeted by a different play.
- add_host group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can now target this host.
- hosts: new_machine
Also note that variables do not span across plays targeting different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}" # hw_eth0 is the fact returned by the vsphere_guest module
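Putting it together, a minimal sketch of the whole two-play pattern (the WinRM port and credential variables here are placeholder assumptions, not part of the original playbook):

- hosts: 127.0.0.1
  connection: local
  tasks:
    # vsphere_guest task elided; assume it returned the hw_eth0 fact
    - set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"

    - add_host:
        name: "{{ vm_ipaddress }}"
        groups: new_machine
        ansible_connection: winrm
        ansible_port: 5986                      # placeholder
        ansible_user: Administrator             # placeholder
        ansible_password: "{{ win_password }}"  # placeholder variable

- hosts: new_machine
  tasks:
    - script: rename_host.ps1 {{ newHostname }}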

What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore
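With the connection params attached to the host entries like this, the delegation itself stays simple; a sketch, assuming a recent Ansible version where delegation picks up the delegated host's connection variables:

- name: Rename the new VM over WinRM
  script: rename_host.ps1 {{ newHostname }}
  delegate_to: db-a.example.com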

Related

ansible vagrant windows and localhost

I've got a Vagrant box that I use WinRM for; however, I need to fetch a file from it, use blockinfile on my localhost (macOS), and then copy the file back to the Vagrant box.
Ansible does not like having two 127.0.0.1 items in the inventory. I've tried just about everything I can think of and can't get them to work together.
The Vagrant box running on VirtualBox has NAT set up, but I can't seem to access it other than through the loopback address; solving that might fix my issue.
I've also tried setting different IPs in the Vagrantfile, and those have not been fruitful either.
Below is the inventory file that I've been working with.
[win]
127.0.0.1
[localhost]
control_machine ansible_host=local
[win:vars]
ansible_port=55985
ansible_winrm_transport=basic
ansible_winrm_scheme=http
ansible_user=vagrant
ansible_password="{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection=winrm
[localhost:vars]
ansible_user=test
ansible_connection=local
ansible_python_interpreter="/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
There are a few things to say about your inventory:
You seem to be confusing the concepts of group and host.
You are defining a group that has the same name as an implicit host (i.e. localhost), maybe because of point 1.
You are explicitly defining a host for your Ansible controller when this is actually probably not what you want, since Ansible defines an implicit localhost for you. Note that an explicit definition makes your controller match the all magic group, which is typically not wanted in most situations.
From the information you gave, here is how I would write my inventory. The examples below use Ansible's ability to organize vars in separate files/folders. Please have a look at the inventory documentation for more info. I also used the YAML format for the inventory as it is easier to understand IMO. Feel free to translate back to INI if you wish.
in inventories/my_env/hosts.yml (any filename will do)
---
all:
  hosts:
    my.windows.vagrant:
in inventories/my_env/host_vars/my.windows.vagrant.yml
---
ansible_host: 127.0.0.1
ansible_port: 55985
ansible_winrm_transport: basic
ansible_winrm_scheme: http
ansible_user: vagrant
ansible_password: "{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection: winrm
in inventories/my_env/host_vars/localhost.yml
---
ansible_python_interpreter: "/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
Note that I am not (re)defining the implicit localhost in the inventory, only defining the (non-standard) Python interpreter you use on that host. Note as well that I dropped the other vars for localhost, as:
the implicit localhost uses the local connection plugin by default
the local connection plugin does not take ansible_user into account and uses the already logged-in user (i.e. the one launching the playbook on the controller)
Once you have done this, you can use the my.windows.vagrant target to address your vagrant windows box and localhost to run things on the controller.
Adapt to your exact needs.
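For the fetch/edit/copy-back workflow in the question, a minimal sketch against this inventory (the file paths and block content are placeholder assumptions):

- hosts: my.windows.vagrant
  tasks:
    - name: Fetch the file from the Windows box
      fetch:
        src: C:\temp\app.conf    # placeholder path
        dest: /tmp/app.conf
        flat: yes

    - name: Edit the file on the controller
      blockinfile:
        path: /tmp/app.conf
        block: |
          # placeholder block content
          setting = value
      delegate_to: localhost

    - name: Copy the edited file back to the Windows box
      win_copy:
        src: /tmp/app.conf
        dest: C:\temp\app.conf   # placeholder path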

Ansible execute command locally and then on remote server

I am trying to start a server using the Ansible shell module with ipmitools and then make configuration changes on that server once it's up.
The server with Ansible installed also has ipmitools.
On the server with Ansible, I need to execute ipmitools to start the target server and then execute playbooks against it.
Is there a way to execute local ipmi commands on the server running Ansible to start the target server, and then execute all playbooks over SSH on the target server?
You can run any command locally by providing the delegate_to parameter.
- shell: ipmitools ...
  delegate_to: localhost
If ansible complains about connecting to localhost via ssh, you need to add an entry in your inventory like this:
localhost ansible_connection=local
or in host_vars/localhost:
ansible_connection: local
See behavioral parameters.
Next, you're going to need to wait until the server is booted and accessible through SSH. Here is an article from Ansible covering this topic, and this is the task they have listed:
- name: Wait for Server to Restart
  local_action: wait_for host={{ inventory_hostname }} port=22 delay=15 timeout=300
  sudo: false
If that doesn't work (since it is an older article and I think I previously had issues with this solution) you can look into the answers of this SO question.
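Putting the pieces together, a rough sketch of the whole flow (the ipmitool invocation and the ipmi_* variables are placeholder assumptions for your environment):

- hosts: target_servers
  gather_facts: no   # the target is still powered off at this point
  tasks:
    - name: Power on the target server via IPMI
      shell: ipmitool -H {{ ipmi_host }} -U {{ ipmi_user }} -P {{ ipmi_pass }} chassis power on
      delegate_to: localhost

    - name: Wait for SSH to come up
      local_action: wait_for host={{ inventory_hostname }} port=22 delay=15 timeout=300

- hosts: target_servers
  tasks:
    - name: Configure the server now that it is reachable
      ping:   # placeholder for your real configuration tasks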

Access Box Name When Using Ansible for Vagrant Provisioning

I'm trying to do provisioning for a Vagrant multi-machine setup with Ansible. I've got it installing the software I was after, but I want to be able to load machine-specific configuration for things like the hostname and domain. I had thought I would be able to access the name of the box as a variable, but nothing I've tried seems able to access it. Does anyone know what I can add to an Ansible playbook for Vagrant provisioning to access the hostname variable?
The hostname can be accessed with {{ inventory_hostname }}. This is the name as defined in the inventory of Ansible. Also there is {{ ansible_hostname }} which is the discovered hostname.
From Magic Variables, and How To Access Information About Other Hosts
Additionally, inventory_hostname is the name of the hostname as configured in Ansible’s inventory host file. This can be useful for when you don’t want to rely on the discovered hostname ansible_hostname or for other mysterious reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the rest of the domain.
Also see Information Discovered from Systems Facts.
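As an illustration, a minimal sketch using these variables (a hypothetical play; it assumes your Vagrant machine names are used as the inventory names):

- hosts: all
  tasks:
    - name: Show inventory name vs discovered hostname
      debug:
        msg: "inventory={{ inventory_hostname }} discovered={{ ansible_hostname }}"

    - name: Set the machine hostname from the inventory name
      hostname:
        name: "{{ inventory_hostname_short }}"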

Set SSH Host IP Address on Rackspace for Ansible

The Question
When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?
Note: Ansible is being run from a server on the same isolated network.
The Problem
I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is, that means ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)
- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: rax_response.success
  when: rax_response.action == 'create'
This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying an IP address on which SSH isn't listening. This happens because:
Ansible tries to connect to ansible_ssh_host...
But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
And Rackspace has set accessIPv4 to the public IP address of the server.
Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did it depends on pyrax which in turn depends on novaclient, and novaclient only allows updating the name of the server, not accessIPv4.
Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?
You can manually edit the rax.py file and change line 125 and line 163 from:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
This should make the value of ansible_ssh_host the private IP.
My first thought on this is to treat it like you have a tunnel you need to set up.
When you use the rax module, it creates a group called "raxhosts". By default, these are accessed using that public ipv4 address.
You could create another group using that group (via add_host), but specify the IP you want to actually access it through.
- name: Redirect to raxprivhosts
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax_addresses['private'][0]['addr'] }}"
    ansible_ssh_pass: "{{ item.rax_adminpass }}"
    groupname: raxprivhosts
  with_items: raxhosts
Then apply playbooks against those groups as your follow-on actions.
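For example, a sketch of a follow-on play targeting the private-IP group (the task body is a placeholder):

- hosts: raxprivhosts
  tasks:
    - name: Configure over the private network
      ping:   # placeholder task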
Let me know how this works out for you, I'm just throwing it out as an alternate to changing your rax.py manually.
You can set an environment variable in /etc/tower/settings.py to use the private network. It would either be
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'YOURNETWORK'
or
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'private'
You would then restart services afterwards with ansible-tower-service restart
You then need to refresh the inventory which you can do through the Tower interface. You'll now see that the host IP is set to the network you specified in the variable.
In the latest version of ansible you need to change:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
AND
hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4
to:
hostvars[server.name]['ansible_ssh_host'] = server.addresses['private'][0]['addr']

How to apply proxy and DNS settings to GNU/Linux Debian using a configuration management tool such as Ansible

I'm new to configuration management tools.
I want to use Ansible.
I'd like to set a proxy on several GNU/Linux Debian machines (in fact, several Raspbian ones).
I'd like to append
export http_proxy=http://cache.domain.com:3128
to /home/pi/.bashrc
I also want to append
Acquire::http::Proxy "http://cache.domain.com:3128";
to /etc/apt.conf
I want to set the DNS server to IP X1.X2.X3.X4 by creating an
/etc/resolv.conf file containing
nameserver X1.X2.X3.X4
What playbook file should I write? How should I apply this playbook to my servers?
Start by learning a bit about Ansible basics and familiarize yourself with playbooks. Basically, you ensure you can SSH in to your Raspbian machines (using keys) and that the user Ansible invokes on these machines can run sudo. (That's the hard bit.)
The easy bit is creating the playbook for the tasks at hand, and there are plenty of pointers to example playbooks in the documentation.
If you really want to add a line to a file or two, use the lineinfile module, although I strongly recommend you create templates for the files you want to push to your machines and use those with the template module. (lineinfile can get quite messy.)
I second jpmens. This is a very basic problem in Ansible, and a very good way to get started using the docs, tutorials and example playbooks.
However, if you're stuck or in a hurry, you can solve it like this (everything takes place on the "ansible master"):
Create a roles structure like this:
cd your_playbooks_directory
mkdir -p roles/pi/{templates,tasks,vars}
Now create roles/pi/tasks/main.yml:
- name: Adds resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf mode=0644

- name: Adds proxy env setting to pi user
  lineinfile: dest=~pi/.bashrc regexp="^export http_proxy" insertafter=EOF line="export http_proxy={{ http_proxy }}"
Then roles/pi/templates/resolv.conf.j2:
nameserver {{ dns_server }}
then roles/pi/vars/main.yml:
dns_server: 8.8.8.8
http_proxy: http://cache.domain.com:3128
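The question also asked for the APT proxy line; a sketch of an extra task you could append to roles/pi/tasks/main.yml, reusing the http_proxy var (it assumes the /etc/apt.conf path given in the question):

- name: Adds proxy setting for apt
  lineinfile:
    dest: /etc/apt.conf
    create: yes
    regexp: '^Acquire::http::Proxy'
    line: 'Acquire::http::Proxy "{{ http_proxy }}";'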
Now make a top-level playbook to apply roles, at your playbook root, and call it site.yml:
- hosts: raspberries
  roles:
    - { role: pi }
You can apply your playbook using:
ansible-playbook site.yml
assuming your machines are in the raspberries group.
Good luck.
