Firewall Functional Test - Ansible

I am new to Ansible. I am using only a central machine and a host node on Ubuntu Server, on which I have to deploy a firewall; I was able to make the SSH connections and run the playbook. What I need to know is how to verify that the port I described in the playbook was blocked or opened, either from the controller machine or on the host node. Thanks

According to your question
How to verify that the port I described in the playbook was blocked or opened, either from the controller machine or on the host node?
you may be looking for an approach like
- name: "Test connection to NFS_SERVER: {{ NFS_SERVER }}"
wait_for:
host: "{{ NFS_SERVER }}"
port: "{{ item }}"
state: drained
delay: 0
timeout: 3
active_connection_states: SYN_RECV
with_items:
- 111
- 2049
and also have a look into How to use Ansible module wait_for together with loop?
Documentation
Ansible wait_for Examples
You may also be interested in Manage firewall with UFW and have a look into
Ansible ufw Examples
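To answer the "blocked or opened" part directly, a minimal sketch (assuming a hypothetical TARGET_HOST variable and port 8080 standing in for your own host node and the port from your playbook) could probe the port from the controller in both directions with wait_for:

- name: Verify the port is open (something must be listening)
  wait_for:
    host: "{{ TARGET_HOST }}"   # hypothetical: your host node's address
    port: 8080                  # hypothetical: the port your playbook manages
    state: started              # fails unless a connection can be established
    timeout: 3
  delegate_to: localhost

- name: Verify the port is blocked (connection attempts must fail)
  wait_for:
    host: "{{ TARGET_HOST }}"
    port: 8080
    state: stopped              # fails if the port is still accepting connections
    timeout: 3
  delegate_to: localhost

On the host node itself you can confirm the same from the inside, for example with ss -tlnp or ufw status.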

Related

Ansible wait_for_connection until the hosts are ready for ansible?

I am using Ansible to configure some VMs.
The problem I am facing right now is that I can't execute Ansible commands right after the VMs have started; it gives a connection timeout error. This happens when I execute Ansible right after the VMs are spun up in GCP.
Commands work fine when I execute the playbook after 60 seconds, but I am looking for a way to do this automatically instead of manually waiting 60s before executing, so I can run it right after the VMs are spun up and Ansible will wait until they are ready. I don't want to add a delay in seconds to the Ansible tasks either.
I am looking for a dynamic way where Ansible tries to execute the playbook and, when it fails, doesn't show any error but waits until the VMs are ready.
I used this, but it still doesn't work (it still fails):
---
- hosts: all
  tasks:
    - name: Wait for connection
      wait_for_connection: # but this still fails, am I doing this wrong?
    - name: Ping all hosts for connectivity check
      ping:
Can someone please help me?
I had the same issue on my side.
I fixed it with the wait_for task.
The basic approach is to wait for the SSH connection, like this:
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
  wait_for:
    port: 22
    host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
    search_regex: OpenSSH
    delay: 10
  connection: local
If your VM launches an application/service, you can also watch its log file on the VM for a line showing that the application has started, for example (here for a Nexus container):
- name: Wait until the container is started and running
  become: yes
  become_user: "{{ ansible_nexus_user }}"
  wait_for:
    path: "{{ ansible_nexus_directory_data }}/log/nexus.log"
    search_regex: ".*Started Sonatype Nexus.*"
I believe what you are looking for is to postpone gather_facts until the server is up, as that otherwise will time out as you experienced. Your file could work as follows:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Wait for connection (600s default)
      ansible.builtin.wait_for_connection:
    - name: Gather facts manually
      ansible.builtin.setup:
I have these under pre_tasks instead of tasks, but it should probably work if they are first in your file.
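For completeness, a minimal sketch of that pre_tasks variant; pre_tasks run before roles and regular tasks, so the connection wait happens first no matter what else the play contains:

---
- hosts: all
  gather_facts: no
  pre_tasks:
    - name: Wait for the VM to accept connections
      ansible.builtin.wait_for_connection:
        timeout: 600
    - name: Gather facts once the host is reachable
      ansible.builtin.setup:
  tasks:
    - name: Ping all hosts for connectivity check
      ansible.builtin.ping: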

Restarting a service after looped commands on multiple servers

I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook, which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. It works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars/localhost references just hold vars_prompt input from the user in a task list above this one. I store prompted data in hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{ hostvars['localhost']['hostList'] }}"
    - set_fact:
        portList: "{{ hostvars['localhost']['portList'] }}"
    - set_fact:
        portStateRequested: "{{ hostvars['localhost']['portStateRequested'] }}"
    - set_fact:
        portState: "{{ hostvars['localhost']['portState'] }}"
    - set_fact:
        remoteIPs: "{{ hostvars['localhost']['remoteIPs'] }}"
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host has an error for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, in that scenario there would be 4 new entries to the firewall config, and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes and have a rescue block within the playbook to back out those entries that did go through.
Grrr... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a play, and will only be triggered once even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair: a notify attribute on one or more tasks, and a handler whose name matches that notify attribute.
So your playbook should look like this:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact tasks removed for concision
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld
  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
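Note that this also covers the error scenario above: handlers run at the end of the play and are skipped on any host where a task failed, so a host that errored out will not get its firewall reloaded. If you ever need the reload to happen earlier than the end of the play, a meta task can flush pending handlers at a point of your choosing, e.g. as an extra task in the tasks list:

    - name: Run any pending handlers now instead of at the end of the play
      meta: flush_handlers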

How to write an Ansible playbook with port knocking

My server is set up to require port knocking in order to white-list an IP for port 22 SSH. I've found guides on setting up an Ansible playbook to configure port knocking on the server side, but not to perform port knocking on the client side.
For example, what would my playbook and/or inventory files look like if I need to knock port 9999, 9000, then connect to port 22 in order to run my Ansible tasks?
You can try out my ssh_pkn connection plugin.
# Example host definition:
# [pkn]
# myserver ansible_host=my.server.at.example.com
# [pkn:vars]
# ansible_connection=ssh_pkn
# knock_ports=[8000,9000]
# knock_delay=2
I used https://stackoverflow.com/a/42647902/10191134 until it broke on an Ansible update, so I searched for another solution and finally stumbled over wait_for:
hosts:
[myserver]
knock_ports=[123,333,444]
play:
- name: Port knocking
  wait_for:
    port: "{{ item }}"
    delay: 0
    connect_timeout: 1
    state: stopped
    host: "{{ inventory_hostname }}"
  connection: local
  become: no
  with_items: "{{ knock_ports }}"
  when: knock_ports is defined
Of course, this can be adjusted to make the delay and/or timeout configurable in the inventory as well; a sketch follows.
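For instance (a minimal sketch, assuming hypothetical knock_delay and knock_timeout inventory variables alongside knock_ports):

[myserver]
knock_ports=[123,333,444]
knock_delay=0
knock_timeout=1

and in the task:

    delay: "{{ knock_delay | default(0) }}"
    connect_timeout: "{{ knock_timeout | default(1) }}"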
Here's a brute-force example. The timeouts will be hit, so this'll add 2 seconds per host to a play.
- hosts: all
  connection: local
  tasks:
    - uri:
        url: "http://{{ ansible_host }}:9999"
        timeout: 1
      ignore_errors: yes
    - uri:
        url: "http://{{ ansible_host }}:9000"
        timeout: 1
      ignore_errors: yes

- hosts: all
  # your normal plays here
Other ways: use telnet (a netcat-style knock is sketched below), put a wrapper around Ansible (though that isn't recommended in Ansible 2), make a role and then include it with meta, or write a custom module (and pull that back into Ansible itself).
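As an illustration of the telnet/netcat option, a minimal sketch that knocks from the controller before the real plays run; it assumes nc is installed on the controller and that 9999 and 9000 are your knock ports:

- hosts: all
  gather_facts: no
  tasks:
    - name: Knock each port once from the controller (failures are expected)
      command: nc -z -w 1 {{ ansible_host }} {{ item }}
      delegate_to: localhost
      ignore_errors: yes
      with_items:
        - 9999
        - 9000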

Cannot access machine after creating through ec2 module within same script

I have problems with my playbook, which should create new EC2 instances through the built-in module and connect to them to set some default stuff.
I went through a lot of tutorials/posts, but none of them mentioned the same problem, therefore I'm asking here.
Everything in terms of creation goes well, but once I have the instances created and have successfully waited for SSH to come up, I get an error which says the machine is unreachable.
UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
I tried to connect manually (from the terminal to the same host) and I was successful (while the playbook was waiting for the connection). I also tried to increase the timeout generally in ansible.cfg. I verified that the given hostname is valid (and it is) and also tried the public IP instead of the public DNS, but nothing helps.
Basically, my playbook looks like this:
---
- name: create ec2 instances
  hosts: local
  connection: local
  gather_facts: False
  vars:
    machines:
      - { type: "t2.micro", instance_tags: { Name: "machine1", group: "some_group" }, security_group: ["SSH"] }
  tasks:
    - name: launch new ec2 instances
      local_action: ec2
                    group={{ item.security_group }}
                    instance_type={{ item.type }}
                    image=...
                    wait=true
                    region=...
                    keypair=...
                    count=1
                    instance_tags=...
      with_items: machines
      register: ec2
    - name: wait for SSH to come up
      local_action: wait_for host={{ item.instances.0.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.results
    - name: add host into launched group
      add_host: name={{ item.instances.0.public_ip }} group=launched
      with_items: ec2.results

- name: with the newly provisioned EC2 node configure basic stuff
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - common
Note: in many tutorials the results from creating EC2 instances are accessed in a different way, but that's probably for a different question.
Thanks
Solved:
I don't know how, but it suddenly started to work. No clue. In case I find some new info, I will update this question.
A couple of points that may help:
I'm guessing it's a version difference, but I've never seen a 'results' key in the registered 'ec2' variable. In any case, I usually use 'tagged_instances' -- this ensures that even if the play didn't create an instance (i.e., because a matching instance already existed from a previous run-through), the variable will still return instance data you can use to add a new host to the inventory.
Try adding 'search_regex: "OpenSSH"' to your 'wait_for' play to ensure that it's not trying to run before the SSH daemon is completely up.
The modified plays would look like this:
- name: wait for SSH to come up
  local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started search_regex="OpenSSH"
  with_items: ec2.tagged_instances

- name: add host into launched group
  add_host: name={{ item.public_ip }} group=launched
  with_items: ec2.tagged_instances
You also, of course, want to make sure that Ansible knows to use the specified key when SSH'ing to the remote host, either by adding 'ansible_ssh_private_key_file' to the inventory entry or specifying '--private-key=...' on the command line.
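For instance, a minimal sketch of wiring the key into the dynamically added host via add_host (the key path is a placeholder for wherever your private key actually lives):

- name: add host into launched group
  add_host:
    name: "{{ item.public_ip }}"
    groups: launched
    ansible_ssh_private_key_file: /path/to/your-key.pem
  with_items: ec2.tagged_instances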

Ansible AWS EC2 Detecting Server is Running Fails

Background:
Just trying to learn how to use Ansible and have been experimenting with the AWS Ec2 module to build and deploy a Ubuntu instance on AWS-EC2. So have built a simple Playbook to create and startup an instance and executed via ansible-playbook -vvvv ic.yml
The playbook is:
---
- name: Create a ubuntu instance on AWS
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    # AWS keys for access to the API
    ec2_access_key: 'secret-key'
    ec2_secret_key: 'secret-key'
    region: ap-southeast-2
  tasks:
    - name: Create a Key-Pair necessary for connection to the remote EC2 host
      ec2_key:
        name=ic-key region="{{region}}"
      register: keypair
    - name: Write the Key-Pair to a file for re-use
      copy:
        dest: files/ic-key.pem
        content: "{{ keypair.key.private_key }}"
        mode: 0600
      when: keypair.changed
    - name: start the instance
      ec2:
        ec2_access_key: "{{ ec2_access_key }}"
        ec2_secret_key: "{{ ec2_secret_key }}"
        region: ap-southeast-2
        instance_type: t2.micro
        image: ami-69631053
        key_name: ic-key # key we just created
        instance_tags: { Name: icomplain-prod, type: web, env: production } # key-value pairs for naming etc
        wait: yes
      register: ec2
    - name: Wait for instance to start up and be running
      wait_for: host = {{item.public_dns_name}} port 22 delay=60 timeout=320 state=started
      with_items: ec2.instances
Problem:
The issue is that when attempting to wait for the instance to fire up, using the wait_for test, as described in Examples for EC-2 module it fails with the following error message:
msg: this module requires key=value arguments (['host', '=', 'ec2-52-64-134-61.ap-southeast-2.compute.amazonaws.com', 'port', '22', 'delay=60', 'timeout=320', 'state=started'])
FATAL: all hosts have already failed -- aborting
Output:
Although the error message appears on the command line, when I check in the AWS Console the Key-Pair and EC2 instance are created and running.
Query:
Wondering:
Is there some other parameter which I need?
What is causing the 'key=value' error message in the output?
Any recommendations on other ways to debug the script to determine the cause of the failure?
Does it require registration of the host somewhere in the Ansible world?
Additional NOTES:
Testing the playbook, I've observed that the key-pair gets created and the server startup is initiated at AWS, as seen from the AWS web console. What appears to be the issue is that the time period for the server to spin up is too long, and the script times out or fails. Frustratingly, the error message is not all that helpful, and I am also wondering if there are any other methods of debugging an Ansible script.
This isn't a problem of "detecting the server is running". As the error message says, it's a problem with syntax.
# bad
wait_for: host = {{item.public_dns_name}} port 22 delay=60 timeout=320 state=started
# good
wait_for: host={{item.public_dns_name}} port=22 delay=60 timeout=320 state=started
Additionally, you'll want to run this from the central machine, not the remote (new) server.
local_action: wait_for host={{item.public_dns_name}} port=22 delay=60 timeout=320 state=started
Focusing on the wait_for test, as you indicate that the rest is working.
Based on the jobs I have running, I would think the issue is with the host name, not with the rest of the code. I use an Ansible server in a protected VPC that has network access to the VPC where the servers start up, and my wait_for code looks like this (variable name updated to match yours):
- name: wait for instances to listen on port 22
  wait_for:
    delay: 10
    state: started
    host: "{{ item.private_ip }}"
    port: 22
    timeout: 300
  with_items: ec2.instances
Trying to use DNS instead of an IP address has always proven unreliable for me - if I'm registering DNS as part of a job, it can sometimes take a minute to become resolvable (sometimes it's instant, sometimes not). Using the IP addresses works every time, of course - as long as the networking is set up correctly.
If your Ansible server is in a different region or has to use the external IP to access the new servers, you will of course need to have the relevant security groups and add the new server(s) to those before you can use wait_for.
