How to write an Ansible playbook with port knocking

My server is set up to require port knocking in order to white-list an IP for port 22 SSH. I've found guides on setting up an Ansible playbook to configure port knocking on the server side, but not to perform port knocking on the client side.
For example, what would my playbook and/or inventory files look like if I need to knock port 9999, 9000, then connect to port 22 in order to run my Ansible tasks?

You can try out my ssh_pkn connection plugin.
# Example host definition:
# [pkn]
# myserver ansible_host=my.server.at.example.com
# [pkn:vars]
# ansible_connection=ssh_pkn
# knock_ports=[8000,9000]
# knock_delay=2

I used https://stackoverflow.com/a/42647902/10191134 until it broke on an Ansible update, so I searched for another solution and finally stumbled over wait_for:
hosts:
[myserver]
knock_ports=[123,333,444]

play:
- name: Port knocking
  wait_for:
    port: "{{ item }}"
    delay: 0
    connect_timeout: 1
    state: stopped
    host: "{{ inventory_hostname }}"
  connection: local
  become: no
  with_items: "{{ knock_ports }}"
  when: knock_ports is defined
Of course, this can be adjusted to make the delay and/or timeout configurable in the inventory as well.
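For instance, a sketch with the delay and timeout pulled from inventory variables (the variable names knock_delay and knock_timeout are my own, not from the original answer):

```yaml
- name: Port knocking with configurable timing
  wait_for:
    port: "{{ item }}"
    delay: "{{ knock_delay | default(0) }}"
    connect_timeout: "{{ knock_timeout | default(1) }}"
    state: stopped
    host: "{{ inventory_hostname }}"
  connection: local
  become: no
  with_items: "{{ knock_ports }}"
  when: knock_ports is defined
```

Hosts that don't define the variables fall back to the defaults shown above.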

Here's a brute-force example. The timeouts will be hit, so this'll add 2 seconds per host to a play.
- hosts: all
  connection: local
  tasks:
    - uri:
        url: "http://{{ ansible_host }}:9999"
        timeout: 1
      ignore_errors: yes
    - uri:
        url: "http://{{ ansible_host }}:9000"
        timeout: 1
      ignore_errors: yes

- hosts: all
  # your normal plays here
Other ways: use telnet, put a wrapper around Ansible (though wrappers aren't recommended in Ansible 2), make a role and include it with meta, or write a custom module (and contribute it back to Ansible itself).
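As a sketch of the telnet/netcat route (assuming nc is available on the controller; even a failed TCP connect sends the SYN that most knock daemons watch for):

```yaml
- name: Knock ports from the controller with nc
  command: "nc -z -w 1 {{ ansible_host }} {{ item }}"
  delegate_to: localhost
  loop: [9999, 9000]
  failed_when: false   # closed ports are expected, so never fail the play
  changed_when: false
```

This avoids the 2-second timeout penalty of the uri approach, since nc returns as soon as the connection is refused.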


Firewall Functional Test

I am new to Ansible. I am only using a central machine and a host node on Ubuntu Server, for which I have to deploy a firewall; I was able to make the SSH connections and execute the playbook. What I need to know is how to verify that the port I described in the playbook was blocked or opened, either on the controller machine or on the host node. Thanks
Regarding your question
How to verify that the port I described in the playbook was blocked or opened, either on the controller machine or on the host node?
you may be looking for an approach like
- name: "Test connection to NFS_SERVER: {{ NFS_SERVER }}"
  wait_for:
    host: "{{ NFS_SERVER }}"
    port: "{{ item }}"
    state: drained
    delay: 0
    timeout: 3
    active_connection_states: SYN_RECV
  with_items:
    - 111
    - 2049
and also have a look at How to use Ansible module wait_for together with loop?
Documentation
Ansible wait_for Examples
You may also be interested in Manage firewall with UFW, and have a look at
Ansible ufw Examples
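For a concrete starting point, here is a minimal sketch (assuming the ufw module is available and UFW is installed on the node) that opens a port and then verifies it from the controller:

```yaml
- name: Allow TCP port 80 via UFW
  ufw:
    rule: allow
    port: '80'
    proto: tcp

- name: Verify the port answers from the controller
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 80
    timeout: 3
  delegate_to: localhost
```

Running the wait_for check with delegate_to: localhost tests the path from the controller; running it without delegation would test from the node itself.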

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for ssh to become available on a host before installing a role? There's wait_for_connection but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but fails since the SSH service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task to be the first of pre_tasks and it will block everything until the connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on execution order, see this section of the roles documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into that section too, so that facts are gathered and curl is installed before anything else runs.
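Putting both points together, a sketch of the reordered play (reusing the tasks and role names from the question) might look like:

```yaml
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: Wait for SSH to become available
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for the first time
      setup:
    - name: Install curl
      package:
        name: curl
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
```

Because pre_tasks run to completion before any role starts, the roles only execute once the host is reachable and facts are populated.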

Ansible: Trigger the task only when previous task is successful and the output is created

I am deploying a VM in azure using ansible and using the public ip created in the next tasks. But the time taken to create the public ip is too long so when the subsequent task is executed, it fails. The time to create the ip also varies, it's not fixed. I want to introduce some logic where the next task will only run when the ip is created.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It's greatly appreciated.
I think the wait_for module is a bad choice because while it can test for port availability it will often give you false positives because the port is open before the service is actually ready to accept connections.
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...
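Translated to Azure, the same pattern might look roughly like the sketch below. The key used to pull the public IP out of the registered result is an assumption (the exact structure varies by module version), so inspect it with a debug task first:

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Deploy Master Node
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: testvm10
        # ... image and credentials as in the question ...
      register: azure_out

    # Inspect the registered structure; the lookup below is hypothetical.
    - debug:
        var: azure_out

    - add_host:
        name: testvm10
        ansible_host: "{{ azure_out.ansible_facts.azure_vm.properties.publicIPAddress }}"

- hosts: testvm10
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10
```

The important part is unchanged: register the creation result, add_host the new VM into the in-memory inventory, then wait_for_connection before running anything against it.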

Create vars from host inventory

I have the infrastructure below (3 servers running Windows Server 2016), where the master server runs an IIS service on port 80 (for example) and 2 agents need to connect to it. To allow the communication, I need to add Windows Firewall rules to whitelist the IP addresses:
one master server (mas)
and two agent servers (agt)
The task I need to execute through Ansible is: add the below firewall rule only on the master server; it should not run on the agent hosts. How can I run the below task only on the master server so that the IP address details of the agent (agt) machines are used while configuring the firewall rules?
- hosts: mas, agt
  tasks:
    - name: Firewall Rule Modifications
      win_firewall_rule:
        name: "IIS port"
        localport: "80"
        direction: in
        action: allow
        remoteip: "{{ ansible_ip_addresses[0] }}"
        protocol: "tcp"
        description: "Allow agents"
        enabled: yes
        state: present
I was able to create a solution (with a Vagrant test setup on CentOS 7) as shown below, but I think there should be a simpler way to achieve this :-)
Inventory File:
[master]
mas
[agents]
agt1
agt2
Playbook:
- name: Configure Iptables
hosts: all
serial: 1
tasks:
- name: create a file to store inventory IP's
file:
dest: /tmp/foo
state: touch
force: yes
delegate_to: localhost
- name: Register IP address
shell: echo "{{ ansible_enp0s8.ipv4.address }}"
register: op
delegate_to: localhost
- name: write IP's to a temp file
lineinfile:
dest: /tmp/foo
line: "{{ op.stdout_lines[0] }}"
insertafter: EOF
delegate_to: localhost
- name: Add firewall rules
iptables:
chain: INPUT
source: "{{item}}"
protocol: tcp
destination_port: 80
jump: ACCEPT
with_lines: cat /tmp/foo
when: ansible_hostname == 'mas'
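A simpler variant (assuming facts have been gathered for the agent hosts, e.g. by a preceding play, so their hostvars are populated) can skip the temp file entirely by reading each agent's IP straight out of hostvars:

```yaml
- name: Configure iptables on the master only
  hosts: master
  tasks:
    - name: Allow each agent's IP
      iptables:
        chain: INPUT
        source: "{{ hostvars[item]['ansible_enp0s8']['ipv4']['address'] }}"
        protocol: tcp
        destination_port: 80
        jump: ACCEPT
      loop: "{{ groups['agents'] }}"
```

Targeting hosts: master removes the need for the when condition and the serial: 1 workaround, since the play never runs on the agents at all.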

How to check if communication between two nodes on specific port is possible

I have two machines, there is a need to check that machine-1 must be able to communicate with machine-2 over port 10250 using TCP protocol, similarly machine-2 must be able to communicate with machine-1 over port 2380 using UDP protocol. For this I thought to use Ansible and write an Ansible script, but I am not sure how this can be achieved.
Please let me know your views.
Thanks in advance
Writing an Ansible playbook just for testing is not a good idea, in my opinion. But if you want to write one, you can follow the steps below:
Create a file named "inventory": this file will hold the port numbers and host names.
Write a playbook that will perform the tasks.
Structure:
inventory
[UDP]
xx.xx.xx.xx
[UDP:vars]
port_number=2380

[TCP]
xx.xx.xx.xx
[TCP:vars]
port_number=10250
playbook
---
- name: play to test the port connection
  hosts: "{{ hosts }}"
  tasks:
    - name: check if port is working
      <module_name>: .......
Here is an Ansible playbook I have tried for your requirement.
- hosts: host[1:2].xxx.xx
  gather_facts: False
  become: yes
  vars:
    host1: host1.xxx.xx
    host2: host2.xxx.xx
  tasks:
    - name: Check port from host1 to host2
      wait_for: host={{ host2 }} port=10250 timeout=1
      delegate_to: "{{ host1 }}"
    - name: Check port from host2 to host1
      wait_for: host={{ host1 }} port=2380 timeout=1
      delegate_to: "{{ host2 }}"
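One caveat: wait_for only tests TCP connections, so the second task above will not actually exercise UDP. A best-effort UDP probe (assuming nc is installed on the delegate host; UDP is connectionless, so "success" mostly means no ICMP port-unreachable came back) could look like:

```yaml
- name: Best-effort UDP check from host2 to host1
  command: "nc -z -u -w 2 {{ host1 }} 2380"
  delegate_to: "{{ host2 }}"
  changed_when: false
```

For a dependable UDP test, you would need the actual service (or a listener such as nc -u -l on the far end) to echo something back.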
