Create vars from host inventory - Ansible

I have the following infrastructure (3 servers running Windows Server 2016), where the master server runs an IIS service on port 80 (for example) and the 2 agents need to connect to it. To allow this communication, I need to add Windows Firewall rules that whitelist the agents' IP addresses:
one master server (mas)
and two agent servers (agt)
The task I need to execute through Ansible is to add the firewall rule below only on the master server; it must not run on the agent hosts. How can I run the task below only on the master server, so that the IP addresses of the agent (agt) machines are used while configuring the firewall rule?
- hosts: mas, agt
  tasks:
    - name: Firewall Rule Modifications
      win_firewall_rule:
        name: "IIS port"
        localport: "80"
        direction: in
        action: allow
        remoteip: "{{ ansible_ip_addresses[0] }}"
        protocol: "tcp"
        description: "Allow agents"
        enabled: yes
        state: present

I was able to create a solution (with a Vagrant test setup running CentOS 7) as shown below, but I think there should be a simpler way to achieve this :-)
Inventory File:
[master]
mas
[agents]
agt1
agt2
Playbook:
- name: Configure Iptables
  hosts: all
  serial: 1
  tasks:
    - name: Create a file to store inventory IPs
      file:
        dest: /tmp/foo
        state: touch
        force: yes
      delegate_to: localhost
    - name: Register IP address
      shell: echo "{{ ansible_enp0s8.ipv4.address }}"
      register: op
      delegate_to: localhost
    - name: Write IPs to a temp file
      lineinfile:
        dest: /tmp/foo
        line: "{{ op.stdout_lines[0] }}"
        insertafter: EOF
      delegate_to: localhost
    - name: Add firewall rules
      iptables:
        chain: INPUT
        source: "{{ item }}"
        protocol: tcp
        destination_port: 80
        jump: ACCEPT
      with_lines: cat /tmp/foo
      when: ansible_hostname == 'mas'
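For reference, the usual simpler way is to skip the temp file entirely: gather facts for the agents, then run a single play against the master group that loops over groups['agents'] and reads each agent's address from hostvars. Below is a minimal sketch against the inventory above, assuming WinRM connectivity to the Windows hosts; the per-agent rule name and the use of the first reported address are assumptions:

- hosts: agents        # facts-only play, so hostvars is populated for the agents
  gather_facts: yes

- hosts: master
  tasks:
    - name: Allow each agent through the firewall
      win_firewall_rule:
        name: "IIS port ({{ item }})"   # assumed: one rule per agent
        localport: "80"
        direction: in
        action: allow
        remoteip: "{{ hostvars[item]['ansible_ip_addresses'][0] }}"
        protocol: "tcp"
        description: "Allow agents"
        enabled: yes
        state: present
      loop: "{{ groups['agents'] }}"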

Ansible creates directory on the control machine even if delegate_to is set to remote when running the playbook with a local connection

I'm running an ansible-playbook configured to provision an EC2 instance and configure the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once provisioned, I'm supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so this directory creation executes on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
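For that readiness step, a common pattern is to start the second play with wait_for_connection; here is a minimal sketch (the timeout value is an assumption, tune it for your environment):

- hosts: myhost
  gather_facts: false
  tasks:
    - name: Wait until the new host answers over its configured connection
      wait_for_connection:
        timeout: 300   # assumed ceiling

    - name: Gather facts once the host is reachable
      setup: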
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.

Creating an idempotent playbook that uses root then disables root server access

I'm provisioning a server and hardening it by disabling root access after creating an account with escalated privileges. The tasks before the account creation require root, so once root has been disabled the playbook is no longer idempotent. I've discovered one way to resolve this is to use wait_for_connection with block/rescue...
- name: Secure the server
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Block required to leverage rescue, as it's the only way I can see of avoiding an already-disabled root stopping the playbook
      block:
        - wait_for_connection:
            sleep: 2 # avoid too many requests within the timeout
            timeout: 5
        - name: Create the service account first so that the play is idempotent; we can't rely on root being enabled, as it is disabled later
          user:
            name: swirb
            password: "{{ password }}"
            shell: /bin/bash
            ssh_key_file: "{{ ssh_key }}"
            groups: sudo
        - name: Add authorized keys for service account
          authorized_key:
            user: swirb
            key: "{{ item }}"
          with_file:
            - "{{ ssh_key }}"
        - name: Disallow password authentication
          lineinfile:
            dest: /etc/ssh/sshd_config
            regexp: '^[#]PasswordAuthentication'
            line: "PasswordAuthentication no"
            state: present
          notify: Restart ssh
        - name: Disable root login
          replace:
            path: /etc/ssh/sshd_config
            regexp: 'PermitRootLogin yes'
            replace: 'PermitRootLogin no'
            backup: yes
          become: yes
          notify: Restart ssh
      rescue:
        - debug:
            msg: "{{ error }}"
  handlers:
    - name: Restart ssh
      service: name=ssh state=restarted
This is fine until I install fail2ban, as the wait_for_connection causes too many connections to the server, which jails the IP. So I created a task to add the IP address of the Ansible controller to jail.conf, like so...
- name: Install and configure fail2ban
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Install fail2ban
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - fail2ban
      become: yes
    - name: Add the IP address to the whitelist, otherwise wait_for_connection triggers jailing
      lineinfile:
        dest: /etc/fail2ban/jail.conf
        regexp: '^(ignoreip = (?!.*{{ hostvars.localhost.ipv4 }}).*)'
        line: '\1 <IPv4>'
        state: present
        backrefs: True
      notify: Restart fail2ban
      become: yes
  handlers:
    - name: Restart fail2ban
      service: name=fail2ban state=restarted
      become: yes
This works, but I have to hard-wire the Ansible controller's IPv4 address. There doesn't seem to be a standard way of obtaining the IP address of the Ansible controller.
I'm also not that keen on adding the controller to every server's whitelist.
Is there a cleaner way of creating an idempotent provisioning playbook?
Otherwise, how do I get the Ansible controller's IP address?
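There is no official "controller IP" fact, but one hedged workaround, assuming the controller reaches each server over SSH and facts are gathered: the server-side SSH_CLIENT environment variable records the address the connection came from, and can be captured to replace the hard-wired <IPv4>:

- name: Capture the controller's address as the server sees it (assumes SSH and gather_facts: yes)
  set_fact:
    controller_ip: "{{ ansible_env.SSH_CLIENT.split(' ') | first }}"

The resulting controller_ip could then be substituted into the ignoreip lineinfile task above instead of a literal address.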

Run one ansible playbook task on localhost, and then another task on remote server

I am playing with automating our VMware deployment and configuration and have run into a question I can't find the answer for on Google.
To start, I am running a playbook task on localhost that reaches out to vSphere to provision my server. After that, in the same playbook, I want to reach out to the provisioned server and make a few configuration changes to finish up. How can I do this? Right now my playbook looks like:
- hosts:
    - localhost
  tasks:
    - name: Clone a virtual machine from Linux template and customize
      vmware_guest:
        hostname: "VSphere"
        username: "Username"
        password: "Password"
        validate_certs: no
        datacenter: "DC"
        state: present
        folder: /Servers
        template: "MyTemplate"
        name: "{{ ServerName }}"
        cluster: "Prod Cluster"
        networks:
          - name: VM Network
            ip: "{{ IP }}"
            netmask: 255.255.255.0
            gateway: "{{ Gateway }}"
        wait_for_ip_address: True
        customization:
          domain: "mydomain.com"
          dns_servers:
            - 192.168.1.1
            - 192.168.1.2
          dns_suffix:
            - mydomain.com
      delegate_to: localhost
    - name: Register VM to Satellite
      # here is where I need to know how to specify running commands on my specific IP (which I pass in on the command line as a var)
Use the add_host module to add the new host to your inventory, and then target that host in another play (you don't need that delegate_to: localhost in your task, because you're already targeting localhost in the play):
---
- hosts: localhost
  tasks:
    - name: Clone a virtual machine from Linux template and customize
      vmware_guest:
        hostname: "VSphere"
        username: "Username"
        password: "Password"
        validate_certs: no
        datacenter: "DC"
        state: present
        folder: /Servers
        template: "MyTemplate"
        name: "{{ ServerName }}"
        cluster: "Prod Cluster"
        networks:
          - name: VM Network
            ip: "{{ IP }}"
            netmask: 255.255.255.0
            gateway: "{{ Gateway }}"
        wait_for_ip_address: True
        customization:
          domain: "mydomain.com"
          dns_servers:
            - 192.168.1.1
            - 192.168.1.2
          dns_suffix:
            - mydomain.com

    - name: add host to inventory
      add_host:
        name: new_host
        ansible_host: "{{ IP }}"
        groups: vms

- hosts: vms
  tasks:
    - name: register vm to satellite
      ...
You could also do this through the use of a dynamic inventory plugin; there is one available for VMware.
Since you already know what your IP address is, just put that in your inventory. In fact, you can have as many as you want. You'll have hosts: all (not localhost). If the VM already exists, nothing will happen in the vmware_guest call. (And you already have the vCenter call delegated to localhost.)
You will want to put in a wait_for to give the VM time to come up before you try to register it.
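A minimal sketch of that wait, assuming the VM answers on SSH port 22 (swap in the right port for your image):

- name: Wait for the new VM to accept connections before registering it
  wait_for:
    host: "{{ IP }}"
    port: 22          # assumed; e.g. 5986 for a WinRM host
    delay: 10
    timeout: 300
  delegate_to: localhost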

Ansible can't connect to Windows host defined in in-memory inventory

I am trying to create a new Windows host on Azure, add it to Ansible's in-memory inventory, and run a play against it. However, it seems Ansible is trying to use SSH to connect to my Windows host.
Below is my playbook
- name: Windows VM Playbook
  hosts: localhost
  vars:
    nicName: "Blue-xxxx"
    vmName: "Blue-xxxx"
    vmPubIp: "51.141.x.x"
  tasks:
    - name: Create Windows VM
      azure_rm_virtualmachine:
        resource_group: AnsibleVMxxx
        name: "{{ vmName }}"
        vm_size: Standard_B2ms
        storage_account: vmstoragedisksxxx
        admin_username: xxxxxxxx
        admin_password: xxxxxxxx
        network_interface_names: "{{ nicName }}"
        managed_disk_type: Standard_LRS
        data_disks:
          - lun: 0
            disk_size_gb: 64
            managed_disk_type: Standard_LRS
            storage_container_name: vhd
            storage_account_name: AnsibleVMxxxxtorageaccount
        os_type: Windows
        image:
          offer: "WindowsServer"
          publisher: MicrosoftWindowsServer
          sku: '2012-R2-Datacenter'
          version: latest
        location: 'uk west'
    - name: Custom Script Extension
      azure_rm_deployment:
        state: present
        location: 'uk west'
        resource_group_name: 'AnsibleVMxxxx'
        template: "{{ lookup('file', '/etc/ansible/playbooks/ConfigureAnsibleForPowershellRemoting.json') | from_json }}"
        deployment_mode: incremental
        parameters:
          vmName:
            value: "{{ vmName }}"
    - name: Add machine to in-memory inventory
      add_host:
        name: 51.141.x.x
        groups: webservers
        ansible_user: adminUser
        ansible_password: xxxxxxxx
        ansible_port: 5986
        ansible_connection: winrm
        ansible_winrm_server_cert_validation: ignore
        ansible_winrm_scheme: https

- name: Ping
  hosts: webservers
  gather_facts: false
  connection: local
  tasks:
    - win_ping:
I also have a webserver.yml file under the /playbooks/group_vars folder with the following content:
ansible_user: adminUser
ansible_password: xxxxxxxx
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
Can anyone please advise me on what I am doing wrong here?
Figured it out.
The problem was occurring because I had hosts: localhost at the beginning of the playbook. I had to move the plays relying on the in-memory inventory out of the localhost play.
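A minimal sketch of that fix, keeping the names from the question: the win_ping check becomes its own top-level play against the in-memory webservers group, and the connection: local override is dropped so the winrm connection vars from add_host apply:

- name: Windows VM Playbook
  hosts: localhost
  tasks:
    # ... Create Windows VM and Custom Script Extension tasks as above ...
    - name: Add machine to in-memory inventory
      add_host:
        name: 51.141.x.x
        groups: webservers
        ansible_connection: winrm

- name: Ping
  hosts: webservers
  gather_facts: false
  tasks:
    - win_ping: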

How to write an Ansible playbook with port knocking

My server is set up to require port knocking in order to whitelist an IP for SSH on port 22. I've found guides on setting up an Ansible playbook to configure port knocking on the server side, but not to perform port knocking on the client side.
For example, what would my playbook and/or inventory files look like if I need to knock ports 9999 and 9000, then connect to port 22 in order to run my Ansible tasks?
You can try out my ssh_pkn connection plugin.
# Example host definition:
# [pkn]
# myserver ansible_host=my.server.at.example.com
# [pkn:vars]
# ansible_connection=ssh_pkn
# knock_ports=[8000,9000]
# knock_delay=2
I used https://stackoverflow.com/a/42647902/10191134 until it broke on an Ansible update, so I searched for another solution and finally stumbled over wait_for:
hosts:
[myserver]
knock_ports=[123,333,444]
play:
- name: Port knocking
  wait_for:
    port: "{{ item }}"
    delay: 0
    connect_timeout: 1
    state: stopped
    host: "{{ inventory_hostname }}"
  connection: local
  become: no
  with_items: "{{ knock_ports }}"
  when: knock_ports is defined
Of course, this can be adjusted to make the delay and/or timeout configurable per host as well.
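For instance, per-host tuning could look like this (knock_delay and knock_timeout are assumed variable names):

[myserver]
knock_ports=[123,333,444]
knock_delay=1
knock_timeout=2

- name: Port knocking
  wait_for:
    port: "{{ item }}"
    delay: "{{ knock_delay | default(0) }}"
    connect_timeout: "{{ knock_timeout | default(1) }}"
    state: stopped
    host: "{{ inventory_hostname }}"
  connection: local
  become: no
  with_items: "{{ knock_ports }}"
  when: knock_ports is defined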
Here's a brute-force example. The timeouts will be hit, so this'll add 2 seconds per host to a play.
- hosts: all
  connection: local
  tasks:
    - uri:
        url: "http://{{ ansible_host }}:9999"
        timeout: 1
      ignore_errors: yes
    - uri:
        url: "http://{{ ansible_host }}:9000"
        timeout: 1
      ignore_errors: yes

- hosts: all
  # your normal plays here
Other ways: use telnet, put a wrapper around Ansible (though wrappers aren't recommended in Ansible 2), make a role and include it with meta, or write a custom module (and pull that back into Ansible itself).
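Along the lines of the telnet suggestion, a sketch that knocks with netcat from the controller (assumes nc is installed there; the ports are the question's examples):

- hosts: all
  gather_facts: false
  tasks:
    - name: Knock the ports in order from the controller
      command: "nc -z -w 1 {{ ansible_host }} {{ item }}"
      delegate_to: localhost
      with_items: [9999, 9000]
      failed_when: false   # closed knock ports are expected to refuse or time out

- hosts: all
  # your normal plays here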
