Ansible with a Vagrant Windows box and localhost

I've got a Vagrant box that I connect to over WinRM; however, I need to fetch a file from it, run blockinfile on my localhost (macOS), and then copy the file back to the Vagrant box.
Ansible does not like having two 127.0.0.1 entries in the inventory, and I've tried just about everything I can think of but can't get the two hosts to work together.
The Vagrant box running on VirtualBox has NAT set up, but I can't seem to reach it other than through the loopback address; fixing that might solve my issue.
I've also tried setting different IPs in the Vagrantfile, but that hasn't been fruitful either.
Below is the inventory file I've been working with.
[win]
127.0.0.1
[localhost]
control_machine ansible_host=local
[win:vars]
ansible_port=55985
ansible_winrm_transport=basic
ansible_winrm_scheme=http
ansible_user=vagrant
ansible_password="{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection=winrm
[localhost:vars]
ansible_user=test
ansible_connection=local
ansible_python_interpreter="/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"

There are a few things to say about your inventory:
You seem to be confusing the concepts of group and host
You are defining a group with the same name as an implicit host (i.e. localhost), maybe because of the first point
You are explicitly defining a host for your Ansible controller when this is probably not what you want, since Ansible defines an implicit localhost for you. Note that an explicit definition makes your controller match the all magic group, which is typically not wanted.
From the information you gave, here is how I would write my inventory. The examples below use Ansible's ability to organize vars in separate files/folders; please have a look at the inventory documentation for more info. I also used the YAML format for the inventory, as it is easier to understand IMO. Feel free to translate back to INI if you wish.
in inventories/my_env/hosts.yml (any filename will do)
---
all:
  hosts:
    my.windows.vagrant:
in inventories/my_env/host_vars/my.windows.vagrant.yml
---
ansible_host: 127.0.0.1
ansible_port: 55985
ansible_winrm_transport: basic
ansible_winrm_scheme: http
ansible_user: vagrant
ansible_password: "{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection: winrm
in inventories/my_env/host_vars/localhost.yml
---
ansible_python_interpreter: "/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
Note that I am not (re)defining the implicit localhost in the inventory, only setting the (non-standard) Python interpreter you use on that host. Note as well that I dropped the other vars for localhost because:
the implicit localhost uses the local connection plugin by default
the local connection plugin does not take ansible_user into account and uses the already logged-in user (i.e. the one launching the playbook on the controller)
Once you have done this, you can use the my.windows.vagrant target to address your vagrant windows box and localhost to run things on the controller.
Adapt to your exact needs.
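To illustrate the round trip from the question with this inventory, here is a minimal, untested playbook sketch; the file paths (C:\Temp\app.conf, /tmp/app.conf) and the block content are placeholders, not taken from the question:
---
- hosts: my.windows.vagrant
  gather_facts: false
  tasks:
    - name: Fetch the file from the Windows box to the controller
      fetch:
        src: 'C:\Temp\app.conf'   # placeholder remote path
        dest: /tmp/app.conf       # local working copy on the controller
        flat: true

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Edit the local copy
      blockinfile:
        path: /tmp/app.conf
        block: |
          # managed by ansible
          some_setting = some_value

- hosts: my.windows.vagrant
  gather_facts: false
  tasks:
    - name: Copy the edited file back to the Windows box
      win_copy:
        src: /tmp/app.conf
        dest: 'C:\Temp\app.conf'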

Related

Iterate over inventory facts [duplicate]

I'm sitting in front of a fairly complex Ansible project that we're using to set up our local development environments (multiple VMs), and there's one role that uses the facts gathered by Ansible to set up the /etc/hosts file on every VM. Unfortunately, when you want to run the playbook for one host only (using the --limit parameter), the facts from the other hosts are (obviously) missing.
Is there a way to force Ansible to gather facts on all hosts, even if you limit the playbook to one specific host?
We tried adding a play to the playbook to gather facts from all hosts, but of course that also gets limited to the one host given by the --limit parameter. If there were a way to force this play to run on all hosts before the other plays, that would be perfect.
I've googled a bit and found the solution using fact caching with Redis, but since our playbook is used locally, I wanted to avoid the need for additional software. I know it's not a big deal, but I was looking for a "cleaner", Ansible-only solution and was wondering if one exists.
Ansible version 2 introduced a clean, official way to do this using delegated facts (see: http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts).
The condition when: hostvars[item]['ansible_default_ipv4'] is not defined ensures you don't gather facts again for a host whose facts you already have.
---
# This play will still work as intended if called with --limit "<host>" or --tags "some_tag"
- name: Hostfile generation
  hosts: all
  become: true
  pre_tasks:
    - name: Gather facts from ALL hosts (regardless of limit or tags)
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      when: hostvars[item]['ansible_default_ipv4'] is not defined
      with_items: "{{ groups['all'] }}"
  tasks:
    - template:
        src: "templates/hosts.j2"
        dest: "/etc/hosts"
      tags:
        - hostfile
...
In general the way to get facts for all hosts even when you don't want to run tasks on all hosts is to do something like this:
- hosts: all
  tasks: []
But as you mentioned, the --limit parameter will limit what hosts this would be applied to.
I don't think there's a way to simply tell Ansible to ignore the --limit parameter on any plays. However there may be another way to do what you want entirely within Ansible.
I haven't used it personally, but as of Ansible 1.8 fact caching is available. In a nutshell, with fact caching enabled Ansible will use a Redis server to cache all the facts about hosts it encounters, and you'll be able to reference them in subsequent playbooks:
With fact caching enabled, it is possible for machine in one group to reference variables about machines in the other group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.
This still seems to be an issue without a clean solution here in 2016, but newer versions of Ansible offer a "jsonfile" fact caching backend, which is a decent compromise compared to installing Redis locally just to address this need. Now I just fire off ansible all -m setup before running a playbook with the --limit option. Good enough for jazz!
http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
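For reference, enabling the jsonfile backend takes only a few lines in ansible.cfg; the cache path and timeout below are arbitrary example values:
[defaults]
gathering = smart                                  # reuse cached facts when available
fact_caching = jsonfile                            # no Redis needed
fact_caching_connection = /tmp/ansible_fact_cache  # any writable directory
fact_caching_timeout = 86400                       # seconds before cached facts expire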
You could modify your playbook to:
...
- hosts: "{{ limit_hosts|default('default_group') }}"
  tasks:
    ...
...
And when you run it, if limit_hosts is not defined (the normal state) it will run on the default_group inventory group, BUT if you run it as:
ansible-playbook --extra-vars "limit_hosts=myHost" myplaybook.yml
then it will run only on myHost, but you could still have other plays with different hosts: declarations, for fact gathering or anything else.

How to set common ansible_user to all hosts in a group?

Does anyone know if Ansible lets users share the same ansible_user setting across all hosts included in a group? This would be particularly useful when using cloud computing platforms such as OpenStack, which let you launch multiple instances that share the same config, such as user accounts and SSH keys.
There are several behavioral parameters you can configure to modify the way Ansible connects to your hosts. Among them is the ansible_user variable. You can set it per host or per group. You can also define a general ansible_user variable under the all group and override it at the group or host level.
If you were writing your inventory in just one hosts.yml file, you'd do it like this:
all:
  children:
    ubuntu_linux:
      hosts:
        ubuntu_linux_1:
        ubuntu_linux_2:
    aws_linux:
      hosts:
        aws_linux_host_1:
        aws_linux_host_2:
        aws_linux_host_3:
      vars:
        ansible_user: ec2-user
  vars:
    ansible_user: ubuntu
And if you are using a directory to build your inventory, you can set it inside the corresponding group_vars file (e.g. ./inventory/group_vars/aws_linux.yml), as shown below.
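A minimal sketch of that layout, assuming the same group names as above:
# ./inventory/group_vars/all.yml
ansible_user: ubuntu

# ./inventory/group_vars/aws_linux.yml
ansible_user: ec2-user
Hosts in aws_linux get ec2-user; everything else falls back to the value defined for all.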
Check the "Connecting to hosts: behavioral inventory parameters" section of Ansible docs to see what other parameters you can configure.
I hope it helps

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook I set hosts: 127.0.0.1 and connection: local to do this, the reasoning being that I'm not targeting the playbook at any particular host, since I don't have one yet; I just want to run it locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is change that VM's computer name. To do this I am trying to upload and run a PowerShell script, like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module for this. However, this time I want to target my brand-new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to WinRM in the context of delegate_to? Or perhaps there is a better way of doing this?
Thank you for your help.
I managed to work out how to solve it. The answer is the Ansible add_host module. I have a task following vsphere_guest as follows; it creates a new in-memory host, which can then be targeted by a different play.
- add_host group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I have a new play that targets this host.
- hosts: new_machine
Also note that variables do not span across different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"  # hw_eth0 is the fact returned by the vsphere_guest module
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting up some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore

Can I set remote_user in host_vars or group_vars?

I'm starting to write my first serious playbook in Ansible.
Something I'd like to do is to specify different remote_user values per host. I'm able to set remote_user in ansible.cfg, through the CLI -u option, and even in play variables, like so:
---
- name: install dependencies
  hosts: all
  sudo: yes
  vars:
    remote_user: username
But setting the var at the host or group level (which makes the most sense for my approach) won't work. For instance, having this file as group_vars/all gets me an "Authentication failure" fatal error:
---
remote_user: username
What am I missing?
What you are doing appears to be undocumented. Specifically, you have this:
vars:
  remote_user: username
when, according to the documentation, it should be like this:
remote_user: username
The fact that it happened to work when you did it the wrong way is irrelevant. There is some side effect that makes it work in that case, but of course it won't work in another case, and it may behave differently in different Ansible versions.
To log on as a different user on each host, the usual way is to specify ansible_ssh_user in the inventory. Whether this is a variable that can be overridden in host_vars or group_vars, I'm not certain. See also issue 4688 for information about how ansible_ssh_user and remote_user may override each other.
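For reference, the question's play with remote_user moved up to the play level (next to hosts:, not nested under vars:) would look roughly like this:
---
- name: install dependencies
  hosts: all
  sudo: yes
  remote_user: username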
In your group or host vars, add a new variable with a name like playbook_username, then in your playbook set remote_user to something like this: remote_user: "{{ playbook_username }}".
Normally host variables don't seem to override remote_user, but this workaround works well enough.
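A minimal sketch of that workaround; the group name webservers and the deploy user are made up:
# group_vars/webservers.yml
playbook_username: deploy

# playbook.yml
---
- hosts: webservers
  remote_user: "{{ playbook_username }}"
  tasks:
    - ping: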
You can specify ansible_ssh_user in group_vars/all.yaml.
remote_user documentation.
(Documenting because I just ran into this quirk in 2023 on ansible [core 2.14.1].)

How to apply proxy and DNS settings to GNU/Linux Debian using configuration management tool such as Ansible

I'm new to configuration management tools.
I want to use Ansible.
I'd like to set a proxy on several GNU/Linux Debian machines (in fact, several Raspbian ones).
I'd like to append
export http_proxy=http://cache.domain.com:3128
to /home/pi/.bashrc
I also want to append
Acquire::http::Proxy "http://cache.domain.com:3128";
to /etc/apt.conf
I want to set the DNS to IP X1.X2.X3.X4 by creating an
/etc/resolv.conf file with
nameserver X1.X2.X3.X4
What playbook should I write? How should I apply it to my servers?
Start by learning a bit about Ansible basics and familiarize yourself with playbooks. Basically, you ensure you can SSH into your Raspbian machines (using keys) and that the user Ansible invokes on those machines can run sudo. (That's the hard bit.)
The easy bit is creating the playbook for the tasks at hand, and there are plenty of pointers to example playbooks in the documentation.
If you really want to add a line to a file or two, use the lineinfile module, although I strongly recommend you create templates for the files you want to push to your machines and use those with the template module. (lineinfile can get quite messy.)
I second jpmens. This is a very basic problem in Ansible, and a very good way to get started using the docs, tutorials and example playbooks.
However, if you're stuck or in a hurry, you can solve it like this (everything takes place on the Ansible control machine):
Create a roles structure like this:
cd your_playbooks_directory
mkdir -p roles/pi/{templates,tasks,vars}
Now create roles/pi/tasks/main.yml:
- name: Adds resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf mode=0644

- name: Adds proxy env setting to pi user
  lineinfile: dest=~pi/.bashrc regexp="^export http_proxy" insertafter=EOF line="export http_proxy={{ http_proxy }}"
Then roles/pi/templates/resolv.conf.j2:
nameserver {{ dns_server }}
Then roles/pi/vars/main.yml:
dns_server: 8.8.8.8
http_proxy: http://cache.domain.com:3128
Now make a top-level playbook that applies the role; put it at your playbook root and call it site.yml:
- hosts: raspberries
  roles:
    - { role: pi }
You can apply your playbook using:
ansible-playbook site.yml
assuming your machines are in the raspberries group.
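The question also asked about the APT proxy; a task in the same spirit could be added to roles/pi/tasks/main.yml (the target path /etc/apt/apt.conf.d/01proxy is my choice here, not part of the original answer):
- name: Adds proxy setting for apt
  lineinfile:
    dest: /etc/apt/apt.conf.d/01proxy
    create: yes
    regexp: '^Acquire::http::Proxy'
    line: 'Acquire::http::Proxy "{{ http_proxy }}";'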
Good luck.