Ansible node not able to access a file on my host system (macOS)

I am trying to copy a file from my host Mac system to CentOS on a VM through Ansible roles.
I have a folder called Ansible Roles, and inside it I used the ansible-galaxy command to create a role called tomcatdoccfg. helloworld.war is present in the root of the Ansible Roles folder.
The folder structure is as below:

Ansible_roles/
  hosts
  main.yml
  helloworld.war
  docinstall/
  tomcatdoccfg/
    tasks/
      main.yml

The role's tasks/main.yml on the Mac is as below:
- name: Copy war file to tmp
  copy:
    src: ⁨helloworld.war
    dest: /tmp/helloworld.war
The helloworld.war file is accessible to user abhilashdk (my default Mac username). The CentOS VM also has a user called abhilashdk. I have configured SSH keys, meaning I generated them with ssh-keygen -t rsa and copied them to the CentOS VM using ssh-copy-id, and I am able to ping the VM using the ansible -i hosts node1 -m ping command. I am also able to install Docker on my node1 machine using Ansible.
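For reference, the key setup described above boils down to something like this (the VM address is a placeholder; the abhilashdk user is from the question):

ssh-keygen -t rsa                      # generate the key pair on the Mac
ssh-copy-id abhilashdk@<centos-vm-ip>  # install the public key on the CentOS VM
ansible -i hosts node1 -m ping         # verify Ansible can reach the VM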
I have a main.yml file in the root Ansible Roles folder, the contents of which are as below:
---
- hosts: node1
  vars:
    webapp:
      app1:
        PORT: 8090
        NAME: webapp1
      app2:
        PORT: 8091
        NAME: webapp2
  become: true
  roles:
    - docinstall
    - tomcatdoccfg
Now when I run the command ansible-playbook -i hosts main.yml I get the below error for Copy war file to tmp:
TASK [tomcatdoccfg : Copy war file to tmp] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war
fatal: [node1]: FAILED! => {"changed": false, "msg": "Could not find or access '⁨helloworld.war'\nSearched in:\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war"}
I don't understand what permissions I should give the helloworld.war file so that CentOS on the VM can access it through the Ansible playbook/roles.
Could anybody help me out with how to solve this issue?
Thanks in advance.

Adding this as an answer so I can show the non-Latin characters that the log you attached to the question includes:
note the invisible character right before helloworld.war.
That could be the reason why Ansible can't find the file on the filesystem.
To be on the safe side, I would delete the whole main.yml and rewrite it.
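If you want to confirm that before rewriting, a quick way to surface invisible non-ASCII characters in the task file (a generic check, not part of the original answer; the path assumes the role layout from the question):

perl -ne 'print "$.: $_" if /[^\x00-\x7F]/' tomcatdoccfg/tasks/main.yml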

ansible-playbook -i hosts main.yml --ask-sudo-pass or ansible-playbook -i hosts main.yml --ask-pass
These parameters will prompt you for the sudo (or SSH) password for the playbook operations.
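Note that on newer Ansible releases --ask-sudo-pass is deprecated in favor of the become equivalent, so the same prompt is usually requested as:

ansible-playbook -i hosts main.yml --ask-become-pass   # short form: -K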

Related

Ansible playbook error while running on - hosts:

I want to write a task in main.yml that stops and starts the "ssh" service using the service module in Ansible.
---
- hosts: localhost
  become: true
  become_method: sudo
  tasks:
    - name: stop service
      service:
        name: ssh
        state: stopped
    - name: start service
      service:
        name: ssh
        state: started
When I run it, it gives the below error:
[WARNING]: Unable to parse /projects/challenge/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'>
The error appears to be in '/projects/challenge/fresco_module/tasks/main.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be
---
- hosts: localhost
^ here
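Note that the traceback points at /projects/challenge/fresco_module/tasks/main.yml: a role's tasks file may contain only a list of tasks, so play-level keys such as hosts: and become: inside it produce exactly this AnsibleSequence error. If the file above is meant to be the role's tasks/main.yml rather than a standalone playbook, it would only contain something like:

---
- name: stop service
  service:
    name: ssh
    state: stopped
- name: start service
  service:
    name: ssh
    state: started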
Firstly, you should be able to SSH to localhost.
You can try:
ssh user@localhost date
You can create an inventory file, name it hosts, and add the following content to it:
[localhost]
localhost
[localhost:vars]
ansible_ssh_user=user
ansible_ssh_pass=pass
ansible_sudo_pass=sudopass
And run the playbook as
ansible-playbook -i hosts main.yml
Using the command module I was able to stop and start the service; sudo service ssh stop and sudo service ssh start served my purpose.
I was not able to do so with the service module, and still don't know why.
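In task form, that workaround would look something like this (a sketch based on the answer above; on systemd hosts the service name may be sshd instead):

- name: stop ssh via the command module
  command: service ssh stop
  become: true

- name: start ssh via the command module
  command: service ssh start
  become: true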
Resolved at my end by using the complete path for the inventory file:
ansible-playbook -i /etc/ansible/hosts myfirstplaybook.yml

ansible-pull on remote hosts

I want to run a playbook, which lives in GitHub, on a remote host. So I followed this blog and forked the repo https://github.com/vincesesto/ansible-pull-example
Inside the repo, I modified the hosts file to point to my server IP. When I run ansible-pull:
veeru#carb0n:~/ansible-example$ ansible-pull -U https://github.com/veerendra2/ansible-pull-example -i hosts
Starting Ansible Pull at 2019-06-26 16:26:30
/usr/local/bin/ansible-pull -U https://github.com/veerendra2/ansible-pull-example -i hosts
[WARNING]: Could not match supplied host pattern, ignoring: carb0n
ERROR! Specified hosts and/or --limit does not match any hosts
Not sure why it is picking up the current server name carb0n even though I specified the -i hosts argument.
here is my hosts file
[hydrogen]
10.250.30.11
local.yml
---
- hosts: all
  tasks:
    - name: install example application
      copy:
        src: ansible_test_app
        dest: /tmp/
        owner: root
        group: root
I renamed local.yml to hydrogen.yml, but I still get the same error.
Not sure why it is picking up the current server name carb0n even though I specified the -i hosts argument.
Sure, because ansible-pull is designed to always run against the current host. If you want to run against a remote server, then you are supposed to use ansible or ansible-playbook, and then your specification of a host list and the connection mechanism starts to make sense again.
ansible-pull is designed for cases where it is either impossible, or highly undesirable, for something to connect to the managed host. That can be due to a firewall, security policies, or any number of reasons. But policies are usually less strict about what a managed host can itself connect to, and that's why pulling configuration onto the host can be easier.
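In other words, to push this playbook from a control machine, the invocation would be the familiar (inventory and playbook names as in the question):

ansible-playbook -i hosts local.yml

whereas ansible-pull runs on the managed host itself, checks out the repo there, and applies the playbook locally. Also note that when no playbook is named on the ansible-pull command line, it looks for one matching the host's FQDN, then its hostname, then falls back to local.yml; that is why renaming it to hydrogen.yml (a group name, not a hostname) doesn't help.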

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip, which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip, and this seems to provide the output I am looking for.
So things appear to be working; I just don't know whether this is best-practice use.
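That is, the command-line form would be something like (playbook and inventory names as in the question):

ansible-playbook -i hosts addpool.yaml --limit bigip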
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to run only on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
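If the intent is to keep iterating over the bigip hosts while executing the module on the control machine (a common pattern for F5 modules, since they talk to the device's REST API rather than over SSH), per-task delegation is an alternative to a play-wide connection: local; a sketch based on the playbook above:

- name: create web servers pool
  bigip_pool:
    name: web-servers2
    lb_method: ratio-member
    user: admin
    password: admin
    server: '{{ inventory_hostname }}'
    validate_certs: no
  delegate_to: localhost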
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
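With that layout the parent group can be targeted like any other group (a usage example, assuming the inventory above is the default one):

ansible devs -m ping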

Looking for ansible solution to read standalone.xml files on wildfly

I'm looking for a solution to gather and organize standalone.xml files from various WildFly servers, grouped by "staging" or "production" in my hosts file.
I'm looking for something with the same output functionality as:
ansible wildfly -m setup --tree config
which creates a file per host with the requested data.
For example, if I have 4 servers, each having a file with the exact same name in the same path but with different contents, I could have them copied to a local directory and named after the server they came from, e.g.:
standalone.server1.myserver.com
standalone.server2.myserver.com
Use the Ansible fetch module, which has a few examples in its documentation.
A very simple playbook may look like:
---
- hosts: wildfly
  tasks:
    - name: Store file into /tmp/fetched/{hostname}/tmp/somefile
      fetch:
        src: /tmp/somefile
        dest: /tmp/fetched
Run the playbook:
ansible-playbook playbook.yml
You can use the fetch module, e.g. as an ad hoc command:
ansible wildfly -i myInventory -m fetch -a "src=/myRemotePathname/standalone dest=/myLocalPathName/myDir" -u myUser
You'll get the remote standalone file from the remote directory /myRemotePathname for every host belonging to the wildfly group defined in the myInventory file.
Local files are stored in a local /myLocalPathName/myDir directory, with a subdirectory named after each remote host and, under that, the remote directory path.
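To get the flat naming asked for in the question (one local file per host, named after the host, without the per-host subdirectories), the fetch module's flat parameter can be combined with inventory_hostname; a sketch, where the standalone.xml path is an assumption about the WildFly install:

- hosts: wildfly
  tasks:
    - name: fetch standalone.xml as config/standalone.<hostname>
      fetch:
        src: /opt/wildfly/standalone/configuration/standalone.xml
        dest: config/standalone.{{ inventory_hostname }}
        flat: yes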

Ansible Roles - not seeing my tasks file

Whenever I run my playbook on my control machine I only see this:
PLAY RECAP *********************************************************************
So I get the feeling ansible is not finding my task file. Here is my directory structure (it's a git project in Eclipse):
ansible/
  ansible/
    dockerhosts.yml
    hosts
    roles/
      dockerhost/
        tasks/
          main.yml
My dockerhosts.yml:
---
- hosts: integration
  roles: [dockerhost]
...
My hosts file:
[integration]
192.168.1.8
192.168.1.9
And my main.yml file:
- name: Install Docker CE from added Docker YUM repo
  remote_user: installer
  become: true
  become_user: root
  become_method: sudo
  command: yum -y install docker-ce
Clearly I don't have any syntax errors, since it runs, but for some reason it doesn't appear to find my main.yml file. I tried to see what user Ansible runs as, in case it's a question of file permissions, but I haven't found anything.
I am running ansible-playbook dockerhosts.yml from the /ansible/ansible directory.
What am I doing wrong?
I have a hosts file, but it's not in the default /etc/ansible/hosts location. As I showed in my question, it's actually at the same level as dockerhosts.yml, since this is a git project.
I used the -vvvv flag, but that didn't tell me much. After running ansible-playbook -h, I tried the -i flag and ran ansible-playbook dockerhosts.yml -i hosts, and that actually did something.
It gave me SSH connection errors, but it did more than the blank PLAY RECAP I got before, which to me means it's actually running the tasks now.
