ansible-pull on remote hosts - ansible

I want to run a playbook, hosted on GitHub, on a remote host. So I followed this blog and forked the repo https://github.com/vincesesto/ansible-pull-example
Inside the repo, I modified the hosts file to point at my server IP. When I run ansible-pull:
veeru@carb0n:~/ansible-example$ ansible-pull -U https://github.com/veerendra2/ansible-pull-example -i hosts
Starting Ansible Pull at 2019-06-26 16:26:30
/usr/local/bin/ansible-pull -U https://github.com/veerendra2/ansible-pull-example -i hosts
[WARNING]: Could not match supplied host pattern, ignoring: carb0n
ERROR! Specified hosts and/or --limit does not match any hosts
I'm not sure why it picks up the current server name carb0n even though I specified the -i hosts argument.
Here is my hosts file:
[hydrogen]
10.250.30.11
And local.yml:
---
- hosts: all
  tasks:
    - name: install example application
      copy:
        src: ansible_test_app
        dest: /tmp/
        owner: root
        group: root
I changed local.yml to hydrogen.yml, but I still get the same error.

I'm not sure why it picks up the current server name carb0n even though I specified the -i hosts argument.
Sure, because ansible-pull is designed to always run against the current host. If you want to run against a remote server, then you should use ansible or ansible-playbook instead, and then your specification of a host list and a connection mechanism makes sense again.
ansible-pull is designed for the cases where it is either impossible or highly undesirable for something to connect to the managed host, whether because of firewalls, security policies, or any number of other reasons. But policies are usually less strict about what a managed host can itself connect to, and that is why pulling configuration onto the host can be easier.
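For completeness, a minimal sketch of making the pull itself work: ansible-pull limits the run to the local machine's hostname by default, so the hosts file in the repo needs an entry matching that name (carb0n here, taken from the warning above; the group name is kept from the original hosts file):

[hydrogen]
carb0n ansible_connection=local

With such an entry, ansible-pull -U https://github.com/veerendra2/ansible-pull-example -i hosts can match the local host and apply local.yml to it.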

Related

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when launched from the command line with ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip, which contains only the two hosts I want to change. This seems to produce the output I am looking for.
For the ansible-playbook command line, I added --limit bigip, and this also produces the output I am looking for.
So things appear to be working; I just don't know whether this is best-practice use.
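In command-line form that looks like the following (a sketch, assuming the inventory file and playbook names shown above):

ansible-playbook -i hosts.yml addpool.yaml --limit bigip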
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add the following to the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
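With that default in place, the playbook can then be run without the -i flag (a sketch, reusing the playbook name from the command above):

ansible-playbook playbook.yml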
I think you need to remove connection: local.
With hosts: bigip you specify that these tasks should only run on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
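A hedged sketch of one common pattern for API-driven modules such as bigip_pool: keep hosts: bigip so the group pattern matches, and move the local execution down to the task level with delegate_to:

- name: Sample pool playbook
  hosts: bigip
  gather_facts: no
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
      delegate_to: localhost   # run the API call from the controller, once per bigip host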
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor instead.
For me, the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups, like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
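With that layout, the parent group can be targeted like any other group, e.g. (a sketch):

ansible devs -i /etc/ansible/hosts -m ping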

Looking for ansible solution to read standalone.xml files on wildfly

I'm looking for a solution to gather and organize standalone.xml files from various WildFly servers, grouped by "staging" or "production" in my hosts file.
I'm looking for something with the same output functionality as:
ansible wildfly -m setup --tree config
which creates a file per host with the requested data.
For example, if I have 4 servers, each having a file with the exact same name in the same path but with different contents, I would like them copied to a local directory and named after the server they came from, e.g.:
standalone.server1.myserver.com
standalone.server2.myserver.com
Use the Ansible fetch module, which has a few examples in its documentation.
A very simple playbook may look like:
- hosts: wildfly
  tasks:
    - name: Store file into /tmp/fetched/{hostname}/tmp/somefile
      fetch:
        src: /tmp/somefile
        dest: /tmp/fetched
Run the playbook:
ansible-playbook playbook.yml
You can use the fetch module, e.g. as an ad hoc command:
ansible wildfly -i myInventory -m fetch -a "src=/myRemotePathname/standalone dest=/myLocalPathName/myDir" -u myUser
You'll get the remote standalone file from the remote directory /myRemotePathname of every host belonging to the wildfly group defined in the myInventory file.
The local files are stored under the local directory /myLocalPathName/myDir, in a subdirectory named after the remote host and, below that, the remote directory path.
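To get exactly the per-host file names asked for in the question, fetch also supports flat: yes, which disables the hostname/path subdirectories and lets you build the destination name yourself. A sketch, assuming a typical WildFly configuration path (adjust src to your installation):

- hosts: wildfly
  tasks:
    - name: Fetch standalone.xml and name the local copy after the host
      fetch:
        src: /opt/wildfly/standalone/configuration/standalone.xml
        dest: ./configs/standalone.{{ inventory_hostname }}   # e.g. standalone.server1.myserver.com
        flat: yes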

Ansible node not able to access a file in my host System

I am trying to copy a file from my host Mac system to CentOS on a VM through Ansible roles.
I have a folder called Ansible Roles, and under it I used the ansible-galaxy command to create a role called tomcatdoccfg. helloworld.war is present in the root Ansible Roles folder.
The role's tasks/main.yml playbook on my Mac is as below:
- name: Copy war file to tmp
  copy:
    src: ⁨helloworld.war
    dest: /tmp/helloworld.war
The helloworld.war file is accessible to user abhilashdk (my default Mac username). The CentOS VM also has a user called abhilashdk. I have configured SSH keys, meaning I generated a key pair with ssh-keygen -t rsa and copied the public key to the CentOS VM using ssh-copy-id, and I am able to ping the VM using the ansible -i hosts node1 -m ping command. I am also able to install Docker on my node1 machine using Ansible.
I have a main.yml file in the root Ansible Roles folder, the contents of which are as below:
---
- hosts: node1
  vars:
    webapp:
      app1:
        PORT: 8090
        NAME: webapp1
      app2:
        PORT: 8091
        NAME: webapp2
  become: true
  roles:
    - docinstall
    - tomcatdoccfg
Now when I run the command ansible-playbook -i hosts main.yml, I get the below error for the Copy war file to tmp task:
TASK [tomcatdoccfg : Copy war file to tmp] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war
fatal: [node1]: FAILED! => {"changed": false, "msg": "Could not find or access '⁨helloworld.war'\nSearched in:\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war"}
I don't understand what permissions I should give to the helloworld.war file so that CentOS on the VM can access it through the Ansible playbook/roles.
Could anybody help me solve this issue?
Thanks in advance.
Adding this as an answer so I can show the non-Latin characters that the log attached to the question includes:
note the invisible character right before helloworld.war.
That could be the reason why Ansible can't find the file on the filesystem.
To be on the safe side, I would delete the whole main.yml and retype it.
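One way to spot such characters (a sketch; the path is the role's tasks file from the question):

LC_ALL=C grep -n '[^ -~]' tomcatdoccfg/tasks/main.yml

With the C locale forced, the bracket expression matches any byte outside the printable ASCII range, so lines containing invisible Unicode characters are printed along with their line numbers.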
ansible-playbook -i hosts main.yml --ask-sudo-pass or ansible-playbook -i hosts main.yml --ask-pass
These parameters make Ansible prompt for the sudo or SSH password used for the playbook operations (on newer Ansible versions, --ask-sudo-pass has been replaced by --ask-become-pass).

Ansible command to trigger registration on another server

I can't find any documentation on how to include a secondary server in a playbook.
If, for instance, I want to install sssd on SERVERA and register it with a FreeIPA server.
On the FreeIPA server (only), I need to:
get a Kerberos ticket (via kinit)
check if SERVERA is already in IPA instance
delete SERVERA from IPA if true
Since this is an installation playbook run against SERVERA, it doesn't seem right to include the IPA server in the host list, but nor can I see any "third-party servers" module?
I presume you are searching for the delegate_to option, which allows you to delegate a task to a host that is not in the host list.
It is often used to run things on localhost (the host running Ansible), but it can also be used to push a task to any other host not in the host list. The host has to be in the inventory file, though.
Example:
- name: Ping the other host
  ping:
  delegate_to: otherhost.com # This is where you set it
More info: http://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#delegation
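Applied to the FreeIPA scenario from the question, a hedged sketch (ipa.example.com, the servera host alias, and the ipa_admin_password variable are placeholders; the kinit/ipa commands run on the IPA server via delegation):

- hosts: servera
  tasks:
    - name: Obtain a Kerberos ticket on the IPA server
      shell: echo '{{ ipa_admin_password }}' | kinit admin
      no_log: true
      delegate_to: ipa.example.com

    - name: Check whether this host already exists in IPA
      command: ipa host-show {{ inventory_hostname }}
      register: ipa_host
      failed_when: false
      changed_when: false
      delegate_to: ipa.example.com

    - name: Delete the stale host entry if it exists
      command: ipa host-del {{ inventory_hostname }}
      when: ipa_host.rc == 0
      delegate_to: ipa.example.com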

Ansible execute command locally and then on remote server

I am trying to power on a server using the Ansible shell module with ipmitool and then change that server's configuration once it is up.
The server with Ansible installed also has ipmitool.
On the server with Ansible, I need to execute ipmitool to start the target server and then execute playbooks on it.
Is there a way to execute local ipmitool commands on the server running Ansible to start the target server, and then execute all playbooks over SSH on the target server?
You can run any command locally by providing the delegate_to parameter.
- shell: ipmitool ...
  delegate_to: localhost
If ansible complains about connecting to localhost via ssh, you need to add an entry in your inventory like this:
localhost ansible_connection=local
or in host_vars/localhost:
ansible_connection: local
See behavioral parameters.
Next, you're going to need to wait until the server is booted and accessible through SSH. Here is an article from Ansible covering this topic, and this is the task it lists:
- name: Wait for Server to Restart
  local_action:
    wait_for
    host={{ inventory_hostname }}
    port=22
    delay=15
    timeout=300
  sudo: false
If that doesn't work (it is an older article, and I think I previously had issues with this solution), you can look into the answers of this SO question.
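A more current equivalent of that task (a sketch; wait_for with delegate_to and become replaces the old local_action/sudo syntax):

- name: Wait for server to restart
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    delay: 15
    timeout: 300
  delegate_to: localhost
  become: false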
