How to use ansible_ssh_port in a playbook task

My situation: my database server is not listening on the default SSH port 22, so I am trying to execute a query over port 3838. Below is the code:
tasks:
  - name: passive | Get MasterDB IP
    mysql_replication: mode=getslave
    register: slaveInfo

  - name: Active | Get Variable Details
    mysql_variables: variable=hostname ansible_ssh_port=3838
    delegate_to: "{{ slaveInfo.Master_Host }}"
    register: activeDbHostname
Ansible version is 1.7.2.
TASK: [Active | Get Variable Details] *****************************************
<192.168.0.110> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.0.110
fatal: [example1.com -> 192.168.0.110] => {'msg': 'FAILED: [Errno 101] Network is unreachable', 'failed': True}
FATAL: all hosts have already failed -- aborting
It connects over the default port 22 rather than port 3838. Please share your thoughts if I am going wrong somewhere.

You can specify the value for ansible_ssh_port in a number of places, but you will likely want to use a dynamic inventory script.
e.g. in the hosts file:
[db-slave]
10.0.0.20 ansible_ssh_port=3838
e.g. as a variable in host_vars:
---
# host_vars/10.0.0.20/default.yml
ansible_ssh_port: 3838
e.g. in a dynamic inventory, you may use a combination of group_vars and tagging the instances:
---
# group_vars/db-slaves/default.yml
ansible_ssh_port: 3838
Use gce.py, ec2.py or some other dynamic inventory script and group your instances in the hosts file:
[tag_db-slaves]
; this is automatically filled by ec2.py or gce.py
[db-slaves:children]
tag_db-slaves
Of course this means you need to tag the instances when you fire them up. You can find several dynamic inventory scripts in the Ansible repository.
If your mysqld is running in a Docker container on the same host, I would recommend you create a custom dynamic inventory with some form of service discovery, such as consul, etcd, zookeeper, or a custom solution using a key-value store such as redis. You can find an introduction to dynamic inventories in the Ansible documentation.
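Since the master host is only discovered at runtime in the question above, another option (a minimal sketch, untested on Ansible 1.7.2) is to register it as an in-memory inventory host with add_host and attach the port there, instead of passing ansible_ssh_port as a module argument:
tasks:
  - name: passive | Get MasterDB IP
    mysql_replication: mode=getslave
    register: slaveInfo

  # add_host turns the discovered master into an inventory host; any extra
  # key=value pairs become host variables, including the SSH port
  - name: register master with custom SSH port
    add_host: name="{{ slaveInfo.Master_Host }}" ansible_ssh_port=3838

  - name: Active | Get Variable Details
    mysql_variables: variable=hostname
    delegate_to: "{{ slaveInfo.Master_Host }}"
    register: activeDbHostname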

Ansible: Host localhost is unreachable

At my job there is a playbook, developed in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes and that calls a playbook:
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from fusion role:
- name: "hc fusion"
include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from fusion role:
- name: "FUSION"
shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}#{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run.
It worked previously, but now it throws the following error:
stdout: PLAY [localhost] \nTASK [Validate] [1;31mfatal: [localhost -> gandalf#10.66.173.14]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf#10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password), \"skip_reason\": \"Host localhost is unreachable\"
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as user awx, it works.
I also validated the connection with the ssh command from the server where Ansible Tower is running to the host I want to connect to, and it lets me connect as user awx without asking for a password.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh type is used. For localhost this approach usually brings a number of related problems (ssh keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost, you should set the connection plugin to local, just as in your report.yaml playbook.
Additionally, as Zeitounator mentioned, running one Ansible playbook from another with the shell module is really bad practice. Please avoid this. Ansible has a number of mechanisms for code re-use (includes, imports, roles, etc.).
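As a sketch of the first point, the play header in fusion.yaml could declare the connection explicitly (variable names copied from the question, tasks left as they are):
- hosts: localhost
  connection: local        # run locally instead of ssh-ing to localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    # ... Validate task unchanged from the question ...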

How to use ansible to test network connection?

I am setting up the Zabbix agent service on many hosts, but I can't confirm that the network is open between the Zabbix server and the Zabbix agent even when the service is up.
For example, I install the Zabbix agent on host A with an Ansible playbook, and there is already a Zabbix server on host B.
How can I use Ansible to test whether host B can access port 10050 of host A, and whether host A can access port 10051 of host B?
Which modules are suitable for this kind of network testing? In addition, how do I loop over inventory hosts in a playbook?
Thanks.
You can take advantage of the wait_for module to accomplish this.
Example:
- name: verify port 10050 is listening on hostA
  wait_for:
    host: hostA
    port: 10050
    delay: 5
    state: started
  delegate_to: hostB
To iterate through the hosts in the inventory file, you can use the inventory_hostnames lookup:
with_inventory_hostnames:
  - all
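Putting the two together, a rough sketch (the group name zabbix_agents and the variable zabbix_server_ip are placeholders, not from the question) that checks from every agent host whether the Zabbix server port is reachable:
- name: verify each agent host can reach the Zabbix server on 10051
  wait_for:
    host: "{{ zabbix_server_ip }}"
    port: 10051
    timeout: 10
  delegate_to: "{{ item }}"
  run_once: true          # loop once over the group instead of once per play host
  with_inventory_hostnames:
    - zabbix_agents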

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
**/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements**, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: **Could not match supplied host pattern, ignoring: bigip**
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip and this seems to provide the output I am looking for.
So things appear to be working; I just don't know whether this is best practice.
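For reference, a rough sketch of that workaround (playbook name as in the question, inventory path assumed):
# addpool.yaml: host pattern widened to "all"
- name: Sample pool playbook
  hosts: all
  connection: local
and on the command line, restrict the run to the bigip group:
ansible-playbook -i hosts addpool.yaml --limit bigip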
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove connection: local.
You have specified hosts: bigip, meaning you want these tasks to run only on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, so none of the tasks in the play will trigger.
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or any other editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use it on some LXC containers.
My problem is that I don't want to install ssh on my containers, so here is what I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that chifflier's connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understand that the plugin never gets the information that the host I am talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? any tips?
Thanks in advance!
Update 1:
Based on Eric's answer, I am now using this connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "192.168.28.12 is not running"
}
This is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local lxc dir?
Update 2:
I removed my group_vars, my ansible.cfg and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over ssh using ansible and run lxc containers on the host, which are also configured using ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through ssh.
At the moment the only way seems to be a direct ssh connection, or an ssh connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
Did you get a working solution?
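For the second layout, one possibility (a sketch with placeholder names; it assumes sshd running inside the container, an OpenSSH client with ProxyJump support, and that host-a can resolve the container's name) is to proxy the SSH connection through host-a in the inventory:
[containers]
container-a ansible_host=container-a ansible_user=root

[containers:vars]
; jump through host-a to reach the containers behind it
ansible_ssh_common_args='-o ProxyJump=user@host-a'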

Pointing to a specific host under a group, Ansible

I have multiple host details under one group in my Ansible hosts file, like below:
[web-server]
10.0.0.1 name=apache ansible_ssh_user=username
10.0.0.2 name=nginx ansible_ssh_user=username
My Ansible playbook:
---
- hosts: web-server[0]
  roles:
    - role: apache
These details get added dynamically to the hosts file and I have no control over the order of lines added under a group, hence I can't use the web-server[0] or web-server[1] logic.
I want to select the host by filtering on the "name" parameter in the playbook, since the name will be unique. Is there a way to do this? Kindly help.
If you can modify the inventory file generation process, make it like this:
[web-server]
apache ansible_host=10.0.0.1 ansible_ssh_user=username
nginx ansible_host=10.0.0.2 ansible_ssh_user=username
And then use:
- hosts: apache
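If the inventory generation cannot be changed, one alternative (a sketch; the dynamic group name name_apache is made up here) is to build a group from the name variable with group_by and target that group:
- hosts: web-server
  gather_facts: false
  tasks:
    - name: group hosts by their "name" variable
      group_by:
        key: "name_{{ name }}"

- hosts: name_apache
  roles:
    - role: apache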
