Ansible: remote_user in playbook file issues

I've defined the remote_user variable for each host group, but the remote_user value is not taken from the one I defined; instead it uses the value assigned at the top.
Ansible version:
# ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609]
Playbook file: info.yml
---
- hosts: all
  remote_user: demo
  roles:
    - common

- hosts: devlocal
  remote_user: root
  become: yes
  roles:
    - common

- hosts: testlocal
  remote_user: test
  become: yes
  roles:
    - common
When I run the playbook limited to the devlocal hosts, the user name is taken from the first assignment (i.e. "demo"). It should instead use the remote_user "root" in this case.
logs :
# ansible-playbook -i hosts -l devlocal info.yml --ask-pass -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: site.yml ********************************************************************************************************************************
3 plays in site.yml
PLAY [all] ****************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/setup.py
<10.11.12.213> ESTABLISH SSH CONNECTION FOR USER: demo
Could someone please help me understand what the issue is here? Thanks in advance.

Could someone please help me understand what the issue is here?
The issue here is that you specified the first play to run as demo:
- hosts: all
  remote_user: demo
  roles:
    - common
And Ansible runs it as demo, which seems not to be your objective.
That's why Ansible provides the inventory mechanism, so you can specify connection details per host rather than in plays.

I've defined the remote_user variable for each host group
Wrong. You've defined remote_user for each play, not for each host group.
Hosts and groups are defined via the inventory.
So you should define the devlocal and testlocal groups in the inventory with ansible_user assigned.
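For example, an inventory along these lines (the testlocal address is a placeholder; only 10.11.12.213 appears in your logs):
# hosts -- example inventory; the testlocal address is illustrative
[devlocal]
10.11.12.213 ansible_user=root

[testlocal]
10.11.12.214 ansible_user=test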
And have a single play:
- hosts: all
  roles:
    - common

Related

Ansible: Host localhost is unreachable

At my job there is a playbook, developed in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes and that calls the rest of the playbook chain.
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from fusion role:
- name: "hc fusion"
include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from fusion role:
- name: "FUSION"
shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}#{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run. It worked previously, but now it throws the following error:
stdout: PLAY [localhost] \nTASK [Validate] [1;31mfatal: [localhost -> gandalf#10.66.173.14]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf#10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password), \"skip_reason\": \"Host localhost is unreachable\"
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as the user awx, it works.
I also validated the connection from the server where Ansible Tower is running to the host I want to connect to, using the ssh command, and it lets me connect without asking for a password as the user awx.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh type is used. For localhost this approach usually brings a number of related problems (ssh keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost you should set the connection plugin to local, just like in your report.yaml playbook.
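A minimal sketch of that fix, with only the connection line added and everything else kept from the play above:
# fusion.yaml -- declare the local connection explicitly, as report.yaml already does
- hosts: localhost
  connection: local
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    # ... same Validate task as above ...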
Additionally, as Zeitounator mentioned, running one Ansible playbook from another with the shell module is a really bad practice. Please avoid this. Ansible has a number of mechanisms for code re-use (includes, imports, roles, etc.).

How to fix "Could not match supplied host pattern, ignoring: bigip" errors, works in Ansible, NOT Tower

I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
**/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements**, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: **Could not match supplied host pattern, ignoring: bigip**
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
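For reference, the same groups expressed in YAML inventory format (a sketch of what hosts.yml could look like, with the variable names carried over from the INI file):
# hosts.yml -- sketch of the equivalent YAML inventory
all:
  hosts:
    192.168.68.253:
    192.168.68.254:
    192.168.1.165:
  children:
    centos:
      hosts:
        dad2:
          ansible_ssh_host: 192.168.1.165
    bigip:
      hosts:
        bigip1:
          ansible_host: 192.168.68.254
        bigip2:
          ansible_host: 192.168.68.253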
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip and this seems to provide the output I am looking for.
So things appear to be working, I just don't know whether this is best practice use.
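For reference, the command-line invocation with the limit would look something like this (the inventory and playbook names are taken from the question and log above):
ansible-playbook -i hosts addpool.yaml --limit bigip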
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add the following in the ansible.cfg file:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to only run on hosts in the bigip group. You then specify connection: local which causes the task to run on the controller node (i.e. localhost), rather than the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2

Ansible playbook does not run tasks in roles

I have a simple Ansible role with one task, but the problem is that when I run it, the tasks are not actually started.
It worked when I tried the task without roles, and I'm not sure why this happens when I use roles.
Version of ansible: ansible 2.2.3.0
This is my run.yml:
- name: add user to general purpose
  hosts: localhosts
  roles:
    - adduser
This is adduser/tasks/main.yml:
- name: Create user
  shell: sudo adduser tom
Running
ansible-playbook run.yml -vvv
This is the output
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: run.yml
**************************************************************
1 plays in run.yml
PLAY RECAP
*********************************************************************
It is because you have a typo in your hosts: field; the name is localhost not localhosts (as there is no such thing as a plural of the local host)
Also, while this isn't what you asked, it is bad news to (a) manually use sudo in a module, and (b) call adduser unconditionally, as it will bomb the second time you run that playbook. What you want is to tell Ansible that the task needs elevated privileges and then use the user: module, allowing Ansible to ensure such a user exists by the end of that role:
- name: Create user
  become: yes
  user:
    name: tom
The benefit of being more declarative is (a) that's how Ansible works, and (b) it allows Ansible to be idempotent across runs.
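Putting the two fixes together, run.yml would look something like this (a sketch keeping the role layout from the question; the user: task lives in adduser/tasks/main.yml as shown above):
# run.yml -- note the singular "localhost"
- name: add user to general purpose
  hosts: localhost
  roles:
    - adduser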

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. My problem arises from this playbook, as it's limited to hosts: [prod], and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but I don't like that solution.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?

Ansible: ansible_user in inventory vs remote_user in playbook

I am trying to run an Ansible playbook against a server using an account other than the one I am logged in with on the control machine. I tried to specify ansible_user in the inventory file according to the documentation on Inventory:
[srv1]
192.168.1.146 ansible_connection=ssh ansible_user=user1
However Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf
What worked for me was adding the remote_user argument to the playbook:
- hosts: srv1
  remote_user: user1
Now the same Ansible command connects as user1:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: user1
Adding the remote_user variable to ansible.cfg also makes Ansible use the intended user instead of the logged-on one.
Are the ansible_user in inventory file and remote_user in playbook/ansible.cfg for different purposes?
What is the ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
You're likely running into a common issue: the published Ansible docs are for the development version (2.0 right now), and we don't keep the old ones around. It's a big point of contention... Assuming you're using something pre-2.0, the inventory var name you need is ansible_ssh_user. ansible_user works in 2.0 (as does ansible_ssh_user, which gets aliased in).
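So on a pre-2.0 installation, the inventory entry from the question would be written with the older variable name (only the variable name changes):
[srv1]
192.168.1.146 ansible_connection=ssh ansible_ssh_user=user1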
I usually add my remote username in /etc/ansible/ansible.cfg as follows:
remote_user = MY_REMOTE_USERNAME
This way it is not required to configure ansible_user in the inventory file for each host entry.
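For clarity, that setting lives under the [defaults] section of ansible.cfg; a minimal sketch:
# /etc/ansible/ansible.cfg
[defaults]
remote_user = MY_REMOTE_USERNAME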
