Ansible Roles - not seeing my tasks file

Whenever I run my playbook on my control machine I only see this:
PLAY RECAP *********************************************************************
So I get the feeling ansible is not finding my task file. Here is my directory structure (it's a git project in Eclipse):
ansible
    ansible
        dockerhosts.yml
        hosts
        roles
            dockerhost
                tasks
                    main.yml
My dockerhosts.yml:
---
- hosts: integration
  roles: [dockerhost]
...
My hosts file:
[integration]
192.168.1.8
192.168.1.9
And my main.yml file:
- name: Install Docker CE from added Docker YUM repo
  remote_user: installer
  become: true
  become_user: root
  become_method: sudo
  command: yum -y install docker-ce
Clearly I don't have any syntax errors, since the playbook runs, but for some reason it doesn't appear to find my main.yml file. I tried to find out what user Ansible runs under, in case it's a question of file permissions, but I haven't found anything.
I am running ansible-playbook dockerhosts.yml from the /ansible/ansible directory.
What am I doing wrong?

I have a hosts file, but it's not in the default /etc/ansible/hosts location. As I showed in my question, it's actually at the same level as dockerhosts.yml, since this is a git project.
I used the -vvvv flag, but that didn't tell me much. After running ansible-playbook -h I tried the -i flag, ran ansible-playbook dockerhosts.yml -i hosts, and that actually did something.
It gave me SSH connection errors, but it did more than the blank PLAY RECAP I got before, which tells me it's actually running the tasks now.
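To avoid passing -i on every run, a project-local ansible.cfg next to dockerhosts.yml can point at the inventory (a minimal sketch, assuming the hosts file sits in the same directory):
[defaults]
inventory = hosts
Ansible looks for ansible.cfg in the current working directory before falling back to /etc/ansible/ansible.cfg, so running from /ansible/ansible would pick this up.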


Ansible playbook error while running on - hosts:

Write a task in main.yml to stop and start the "ssh" service using the service module in Ansible.
---
- hosts: localhost
  become: true
  become_method: sudo
  tasks:
    - name: stop service
      service:
        name: ssh
        state: stopped
    - name: start service
      service:
        name: ssh
        state: started
When I run it, it gives the error below:
[WARNING]: Unable to parse /projects/challenge/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'>
The error appears to be in '/projects/challenge/fresco_module/tasks/main.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be
---
- hosts: localhost
^ here
Firstly, you should be able to SSH to localhost. You can try:
ssh user@localhost date
You can create a hosts file and name it hosts and add the following content to it.
[localhost]
localhost
[localhost:vars]
ansible_ssh_user=user
ansible_ssh_pass=pass
ansible_sudo_pass=sudopass
And run the playbook as
ansible-playbook -i hosts main.yml
Using the command module I was able to stop and start the service; sudo service ssh stop and sudo service ssh start served my purpose.
I was not able to do so with the service module, and I still don't know why.
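As tasks, what that describes would look something like this (a sketch; running sudo inside command worked here, though become: true is the more idiomatic route):
- name: stop ssh with the command module
  command: sudo service ssh stop
- name: start ssh with the command module
  command: sudo service ssh start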
Resolved at my end by using the complete path for the .yml file:
ansible-playbook -i /etc/ansible/hosts myfirstplaybook.yml

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
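For reference, a YAML inventory-plugin equivalent of the hosts file above (a sketch of what hosts.yml might look like, not the poster's actual file):
all:
  hosts:
    192.168.68.253:
    192.168.68.254:
    192.168.1.165:
  children:
    centos:
      hosts:
        dad2:
          ansible_ssh_host: 192.168.1.165
    bigip:
      hosts:
        bigip1:
          ansible_host: 192.168.68.254
        bigip2:
          ansible_host: 192.168.68.253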
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip, which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip, and this also provides the output I am looking for.
So things appear to be working; I just don't know whether this is best-practice use.
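On the command line, that workaround looks something like this (a sketch, using the playbook name from the log above):
ansible-playbook -i hosts addpool.yaml --limit bigip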
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to only run on hosts in the bigip group. You then specify connection: local which causes the task to run on the controller node (i.e. localhost), rather than the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
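In other words, the suggested change is just to drop that one line, leaving the play header as (a sketch):
---
- name: Sample pool playbook
  hosts: bigip
  tasks:
    ...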
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or any other editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
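You can then target the parent group directly, for example:
ansible devs -i /etc/ansible/hosts -m ping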

Ansible playbook does not run tasks in roles

I have a simple Ansible role with one task, but the problem is that when I run it, the tasks are not actually started.
It worked when I tried my task without roles, and I'm not sure why this happens when I use roles.
Ansible version: ansible 2.2.3.0
This is my run.yml:
- name: add user to general purpose
  hosts: localhosts
  roles:
    - adduser
And the contents of adduser/tasks/main.yml:
- name: Create user
  shell: sudo adduser tom
Running
ansible-playbook run.yml -vvv
This is the output
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: run.yml
**************************************************************
1 plays in run.yml
PLAY RECAP
*********************************************************************
It is because you have a typo in your hosts: field; the name is localhost, not localhosts (there is no such thing as a plural of the local host).
Also, while this isn't what you asked: it is bad news to (a) manually use sudo in a module and (b) call adduser unconditionally, as it will bomb the second time you run that playbook. What you want is to tell Ansible that the task needs elevated privileges, and then use the user: module so Ansible can ensure such a user exists by the end of the role:
- name: Create user
  become: yes
  user:
    name: tom
The benefit of being more declarative is (a) that's how Ansible works and (b) it allows Ansible to be idempotent across runs.

Ansible node not able to access a file in my host System

I am trying to copy a file from my host Mac system to CentOS on a VM through Ansible roles.
I have a folder called Ansible Roles, and under it I used the ansible-galaxy command to create a role called tomcatdoccfg. helloworld.war is present in the root Ansible Roles folder.
The folder structure is the default layout generated by ansible-galaxy. The role's tasks/main.yml on the Mac is as below:
- name: Copy war file to tmp
  copy:
    src: ⁨helloworld.war
    dest: /tmp/helloworld.war
The helloworld.war file should be accessible to user abhilashdk (my default Mac username). The CentOS VM also has a user called abhilashdk. I have configured SSH keys: I generated them with ssh-keygen -t rsa and copied them to the CentOS VM using ssh-copy-id, and I am able to reach the VM with the ansible -i hosts node1 -m ping command. I am also able to install Docker on my node1 machine using Ansible.
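For reference, that key setup amounts to something like the following (the VM address is a placeholder here):
ssh-keygen -t rsa
ssh-copy-id abhilashdk@<vm-ip>
ansible -i hosts node1 -m ping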
I have a main.yml file in the root Ansible Roles folder the contents of which is as below:
---
- hosts: node1
vars:
webapp:
app1:
PORT: 8090
NAME: webapp1
app2:
PORT: 8091
NAME: webapp2
become: true
roles:
- docinstall
- tomcatdoccfg
Now when I run the command ansible-playbook -i hosts main.yml I get the below error for Copy war file to tmp:
TASK [tomcatdoccfg : Copy war file to tmp] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war
fatal: [node1]: FAILED! => {"changed": false, "msg": "Could not find or access '⁨helloworld.war'\nSearched in:\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/files/⁨helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/⁨helloworld.war"}
I don't understand what permissions I should give to the helloworld.war file so that CentOS on the VM can access it through the Ansible playbook/roles.
Could anybody help me solve this issue?
Thanks in advance.
Adding this as an answer so I can show the non-Latin character that the log attached to the question includes:
note the invisible character right before helloworld.war.
That could be the reason why Ansible can't find the file on the filesystem.
To be on the safe side, I would delete the whole main.yml and retype it.
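A quick way to spot such characters, assuming a cat that supports -v (both the BSD and GNU versions do), is to dump the file with non-printing characters made visible:
cat -vet tasks/main.yml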
ansible-playbook -i hosts main.yml --ask-sudo-pass or ansible-playbook -i hosts main.yml --ask-pass
These parameters make the playbook prompt you for the sudo or SSH password for its operations.

Ansible to get aws instances software list

I want to get a list of the services installed, and their versions, on Debian EC2 instances.
I can't work out how to get the list of packages that dpkg --list shows, because I want to collect this list through Ansible on my little server farm.
The easiest would be to simply run a shell task:
- shell: dpkg --list
  register: packages
Now you have the result stored in packages.stdout_lines.
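To inspect the result, a debug task works (a sketch):
- debug:
    var: packages.stdout_lines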
If you only want the package names, run something like this:
dpkg --get-selections | grep -v "deinstall" | cut -f1
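Wrapped in a task, that would be (a sketch; package_names is just an illustrative variable name):
- shell: dpkg --get-selections | grep -v "deinstall" | cut -f1
  register: package_names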
To run the task on the Ansible control host you need to delegate the task:
- shell: dpkg --list
  register: packages
  delegate_to: localhost
Now the command is executed on the control host (localhost) and the result is stored in packages.stdout_lines.
---
- hosts: hostblockname
  tasks:
    - name: Get Packages List
      shell: dpkg --list > packageslist
      register: packages
    - fetch: src=/root/packageslist dest=/root/packagesdirectory/
I used the above playbook, which served my purpose. There may be room for optimization, but somehow I got it done.
I wanted to get a list of all installed packages, in a proper format, on all cloud instances, and then collect those lists into files on my Ansible server.
This playbook first generates the list of installed packages on the remote instances and then fetches the resulting files back to the main Ansible host.
The command to run playbook was:
ansible-playbook -i hostslistfile myplaybook.yml
myplaybook.yml is as above.
hostslistfile is a simple inventory file like this:
[hostblockname]
192.168.0.144:22
