I am setting up the Zabbix agent service on many hosts, but I can't confirm that the network path between the Zabbix server and the Zabbix agent is open even when the service is up.
For example, I install the Zabbix agent on host A with an Ansible playbook, and there is already a Zabbix server on host B.
How can I use Ansible to test whether host B can access port 10050 of host A, and whether host A can access port 10051 of host B?
Can you tell me which modules are suitable for this kind of network testing? In addition, how do I loop over the inventory hosts in a playbook?
Thanks.
You can take advantage of the wait_for module to accomplish this.
Example:
- name: verify port 10050 is listening on hostA
  wait_for:
    host: hostA
    port: 10050
    delay: 5
    state: started
  delegate_to: hostB
To iterate over the hosts in the inventory file, you can use the inventory_hostnames lookup:
with_inventory_hostnames:
  - all
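Putting the two together, a minimal sketch that runs both checks, assuming host B is known in the inventory as zabbix-server (a hypothetical name; substitute your server's inventory name or IP):

# Can the Zabbix server (host B) reach port 10050 on every agent?
- name: verify the Zabbix server can reach port 10050 on each agent
  wait_for:
    host: "{{ item }}"
    port: 10050
    timeout: 10
  delegate_to: zabbix-server  # hypothetical inventory name for host B
  with_inventory_hostnames:
    - all

# Can each agent (host A) reach port 10051 on the Zabbix server?
- name: verify each agent can reach port 10051 on the Zabbix server
  wait_for:
    host: zabbix-server  # hypothetical; an IP or FQDN resolvable from the agent also works
    port: 10051
    timeout: 10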
I am new to Ansible. I am only using a control machine and one host node on Ubuntu Server, on which I have to deploy a firewall; I was able to make the SSH connections and execute the playbook. What I need to know is how to verify that the port I described in the playbook was blocked or opened, on either the control machine or the host node. Thanks
According to your question
How to verify that the port I described in the playbook was blocked or opened, either on the controller machine and on the host node?
you may be looking for an approach like
- name: "Test connection to NFS_SERVER: {{ NFS_SERVER }}"
wait_for:
host: "{{ NFS_SERVER }}"
port: "{{ item }}"
state: drained
delay: 0
timeout: 3
active_connection_states: SYN_RECV
with_items:
- 111
- 2049
and also have a look into How to use Ansible module wait_for together with loop?
Documentation
Ansible wait_for Examples
You may also be interested in Manage firewall with UFW and have a look into
Ansible ufw Examples
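To verify the opposite case, that a port is actually blocked, you can use wait_for with state: stopped, which succeeds once the port is not reachable. A minimal sketch, with port 8080 as a made-up example value:

- name: verify port 8080 is blocked on the host node
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 8080
    state: stopped  # succeeds when the port is closed or filtered
    timeout: 5
  delegate_to: localhost  # test reachability from the controller machine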
When I provision a new server, there is a lag between the time I create it and the time it becomes available. So I need to wait until it's ready.
I assumed that was the purpose of the wait_for task:
hosts:
[servers]
42.42.42.42
playbook.yml:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: wait until server is up
      wait_for: port=22
This fails with Permission denied. I assume that's because nothing is set up yet.
I expected it to open an SSH connection and wait for the prompt, just to see if the server is up. But what actually happens is that it tries to log in.
Is there some other way to perform a wait that doesn't try to log in?
As you correctly stated, this task executes on the "to be provisioned" host, so Ansible first tries to connect to it (via SSH) and only then would wait for the port to be up. This would work for other ports/services, but not for port 22 on a given host, since 22 is a prerequisite for executing any task on that host.
What you could do is delegate this task to the Ansible control host (the one you run the playbook from) and add the host parameter to the wait_for task.
Example:
- name: wait until server is up
  wait_for:
    port: 22
    host: <the IP of the host you are trying to provision>
  delegate_to: localhost
Hope it helps.
Q: "Is there some other way to perform a wait that doesn't try to login?"
A: It is possible to wait_for_connection. For example
- hosts: all
  gather_facts: no
  tasks:
    - name: wait until server is up
      wait_for_connection:
        delay: 60
        timeout: 300
Background:
I am just trying to learn how to use Ansible and have been experimenting with the AWS ec2 module to build and deploy an Ubuntu instance on AWS EC2. So I have built a simple playbook to create and start up an instance, executed via ansible-playbook -vvvv ic.yml
The playbook is:
---
- name: Create a ubuntu instance on AWS
  hosts: localhost
  connection: local
  gather_facts: False
  vars:
    # AWS keys for access to the API
    ec2_access_key: 'secret-key'
    ec2_secret_key: 'secret-key'
    region: ap-southeast-2
  tasks:
    - name: Create a Key-Pair necessary for connection to the remote EC2 host
      ec2_key: name=ic-key region="{{region}}"
      register: keypair

    - name: Write the Key-Pair to a file for re-use
      copy:
        dest: files/ic-key.pem
        content: "{{ keypair.key.private_key }}"
        mode: 0600
      when: keypair.changed

    - name: start the instance
      ec2:
        ec2_access_key: "{{ec2_access_key}}"
        ec2_secret_key: "{{ec2_secret_key}}"
        region: ap-southeast-2
        instance_type: t2.micro
        image: ami-69631053
        key_name: ic-key  # key we just created
        instance_tags: {Name: icomplain-prod, type: web, env: production}  # key-value pairs for naming etc.
        wait: yes
      register: ec2

    - name: Wait for instance to start up and be running
      wait_for: host = {{item.public_dns_name}} port 22 delay=60 timeout=320 state=started
      with_items: ec2.instances
Problem:
The issue is that when attempting to wait for the instance to fire up, using the wait_for test as described in the examples for the EC2 module, it fails with the following error message:
msg: this module requires key=value arguments (['host', '=', 'ec2-52-64-134-61.ap-southeast-2.compute.amazonaws.com', 'port', '22', 'delay=60', 'timeout=320', 'state=started'])
FATAL: all hosts have already failed -- aborting
Output:
Although the error message appears on the command line, when I check in the AWS console the key pair and EC2 instance are created and running.
Query:
Wondering:
Is there some other parameter which I need?
What is causing the 'key=value' error message?
Any recommendations on other ways to debug the script to determine the cause of the failure?
Does it require registration of the host somewhere in the Ansible world?
Additional NOTES:
Testing the playbook, I've observed that the key pair gets created and the server startup is initiated at AWS, as seen from the AWS web console. What appears to be the issue is that the server takes too long to spin up, so the script times out or fails. Frustratingly, the error message is not all that helpful, and I'm also wondering whether there are other methods of debugging an Ansible script.
This isn't a problem of "detecting the server is running". As the error message says, it's a problem with syntax.
# bad
wait_for: host = {{item.public_dns_name}} port 22 delay=60 timeout=320 state=started
# good
wait_for: host={{item.public_dns_name}} port=22 delay=60 timeout=320 state=started
Additionally, you'll want to run this from the central machine, not the remote (new) server.
local_action: wait_for host={{item.public_dns_name}} port=22 delay=60 timeout=320 state=started
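Equivalently, in YAML dictionary syntax with delegate_to, a sketch of the same corrected task:

- name: Wait for instance to start up and be running
  wait_for:
    host: "{{ item.public_dns_name }}"
    port: 22
    delay: 60
    timeout: 320
    state: started
  delegate_to: localhost  # run the check from the central machine
  with_items: "{{ ec2.instances }}"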
Focusing on the wait_for test as you indicate that the rest is working.
Based on the jobs I have running, I would think the issue is with the host name, not with the rest of the code. I use an Ansible server in a protected VPC that has network access to the VPC where the servers start up, and my wait_for code looks like this (variable name updated to match yours):
- name: wait for instances to listen on port 22
  wait_for:
    delay: 10
    state: started
    host: "{{ item.private_ip }}"
    port: 22
    timeout: 300
  with_items: ec2.instances
Trying to use DNS instead of an IP address has always proven unreliable for me: if I'm registering DNS as part of a job, it can sometimes take a minute to become resolvable (sometimes instant, sometimes not). Using the IP addresses works every time, of course, as long as the networking is set up correctly.
If your Ansible server is in a different region or has to use the external IP to access the new servers, you will of course need to have the relevant security groups and add the new server(s) to those before you can use wait_for.
I am trying to start a server using the Ansible shell module with ipmitool, and then make a configuration change on that server once it's up.
The server with Ansible installed also has ipmitool.
On the server with Ansible, I need to execute ipmitool to start the target server and then execute playbooks on it.
Is there a way to execute local IPMI commands on the server running Ansible to start the target server, and then execute all playbooks over SSH on the target server?
You can run any command locally by providing the delegate_to parameter.
- shell: ipmitool ...
  delegate_to: localhost
If Ansible complains about connecting to localhost via SSH, you need to add an entry to your inventory like this:
localhost ansible_connection=local
or in host_vars/localhost:
ansible_connection: local
See behavioral parameters.
Next, you're going to need to wait until the server is booted and accessible through SSH. Here is an article from Ansible covering this topic, and this is the task they have listed:
- name: Wait for Server to Restart
  local_action:
    wait_for
      host={{ inventory_hostname }}
      port=22
      delay=15
      timeout=300
  sudo: false
If that doesn't work (it is an older article, and I think I previously had issues with this solution), you can look into the answers to this SO question.
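Putting the pieces together, a minimal sketch, where target_servers, bmc_host and bmc_password are hypothetical names for the group of machines to power on and their per-host IPMI/BMC connection details:

- hosts: target_servers
  gather_facts: no
  tasks:
    - name: Power on the target server via its BMC, from the Ansible control host
      shell: ipmitool -I lanplus -H {{ bmc_host }} -U admin -P {{ bmc_password }} chassis power on
      delegate_to: localhost

    - name: Wait for SSH on the target server
      wait_for:
        host: "{{ inventory_hostname }}"
        port: 22
        delay: 15
        timeout: 600
      delegate_to: localhost

    # From here on, tasks run over SSH on the freshly booted target server
    - name: Gather facts now that the server is reachable
      setup: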
My situation is that my database server does not accept SSH on the default port 22, so I am trying to execute a query over port 3838. Below is the code:
tasks:
  - name: passive | Get MasterDB IP
    mysql_replication: mode=getslave
    register: slaveInfo

  - name: Active | Get Variable Details
    mysql_variables: variable=hostname ansible_ssh_port=3838
    delegate_to: "{{ slaveInfo.Master_Host }}"
    register: activeDbHostname
Ansible version: 1.7.2
TASK: [Active | Get Variable Details] *****************************************
<192.168.0.110> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.0.110
fatal: [example1.com -> 192.168.0.110] => {'msg': 'FAILED: [Errno 101] Network is unreachable', 'failed': True}
FATAL: all hosts have already failed -- aborting
It connects over the default port 22 rather than port 3838. Please share your thoughts if I am going wrong somewhere.
You can specify the value for ansible_ssh_port in a number of places. But you will likely want to use a dynamic inventory script.
e.g. in the hosts file:
[db-slave]
10.0.0.20 ansible_ssh_port=3838
e.g. as a variable in host_vars:
---
# host_vars/10.0.0.20/default.yml
ansible_ssh_port: 3838
e.g. in a dynamic inventory: you may use a combination of group_vars and tagging the instances:
---
# group_vars/db-slaves/default.yml
ansible_ssh_port: 3838
Use gce.py, ec2.py or some other dynamic inventory script and group your instances in the hosts file:
[tag_db-slaves]
; this is automatically filled by ec2.py or gce.py
[db-slaves:children]
tag_db-slaves
Of course this will mean you need to tag the instances when you fire them up. You can find several dynamic inventory scripts in the ansible repository.
If your mysqld is running in a Docker container on the same host, I would recommend you create a custom dynamic inventory with some form of service discovery, such as consul, etcd, zookeeper, or a custom solution using a key-value store such as redis. You can find an introduction to dynamic inventories in the ansible documentation.