MODULE FAILURE - See stdout/stderr for the exact error - ansible

I am trying to create a user account using Ansible on Ubuntu 20.04, but I am getting this error:
msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
The same playbook works fine on Ubuntu 18.04.
Below is my playbook:
- hosts: abc
  remote_user: root
  become: true
  tasks:
    - name: create user account admin with password xyz
      user:
        name: admin
        group: admin
        shell: /bin/bash
        password: $6$pLkiHBvZOf9/zctp1SlLXC2PsTFfwwcwmE73wuwwXb2g8.
        append: yes
    - name: creating .ssh directory for account admin
      file:
        path: /home/admin/.ssh
        state: directory
        group: admin
        owner: admin
        mode: 0755
    - name: copy authorized_keys file from root
      copy:
        src: /root/.ssh/authorized_keys
        dest: /home/admin/.ssh
        remote_src: yes
        group: admin
        owner: admin
    - name: change the ssh port
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        insertafter: '#Port 22'
        line: "Port 811"
        backup: yes
    - name: disable the root login
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin yes'
        line: 'PermitRootLogin no'
    - name: Restart ssh
      service: name=ssh state=restarted
Can you please help me find the cause of this error?
Thank you

You can usually get more information out of Ansible by capturing the error and emitting it:
- name: create user account admin with password xyz
  user:
    name: admin
    group: admin
    shell: /bin/bash
    password: $6$pLkiHBvZOf9/zctp1SlLXC2PsTFfwwcwmE73wuwwXb2g8.
    append: yes
  ignore_errors: yes
  register: kaboom
- debug: var=kaboom
- fail: msg=yup
You will get the most information by also running Ansible with env ANSIBLE_DEBUG=1 ansible-playbook -vvvv, although often the extra verbosity still isn't enough to surface the actual exception text, so try the register: trick first.

Related

Ansible Playbook: remote_user is incorrect, switching failed

I wrote a playbook to download, install, and unarchive a tar file:
- name: Install DB
  remote_user: ldb
  hosts: db
  tasks:
    - name: Create download directory
      file:
        path: /home/ldb/servicebroker
        state: directory
    - name: download DB and service_broker
      get_url:
        url: "http://192.168.1.133:12345/stage/{{ item }}"
        dest: /home/ldb/servicebroker
        mode: 0755
        timeout: 30
      with_items:
        - linkoopdb/4.1.0/zettabase-4.1.0-rc6.x86_64.iso
        - service_broker/4.1.0/servicebroker-4.1.0-rc6.x86_64.tar.gz
    - name: unzip tar file
      unarchive:
        src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
        dest: /home/ldb/servicebroker/
    - name: Start master
      shell: "/home/ldb/servicebroker/brokerServer --master_ip 192.168.14.94 --master_port 7777"
    - name: Start slave
      shell: "/home/ldb/servicebroker/brokerServer brokerServer --master_ip 192.168.14.94 --master_port 7777 --slave_ip {{ item }} --slave_port 7777 join"
      with_items:
        - 192.168.14.95
        - 192.168.14.94
        - 192.168.14.96
        - 192.168.14.97
        - 192.168.14.37
        - 192.168.14.38
        - 192.168.14.39
    - name: Check for servicebroker command
      shell: /home/ldb/servicebroker/bcli show service_broker
      register: command_output
    - name: Start create repository and upload install DB
      shell: "{{ item }}"
      retries: 3
      delay: 10
      register: command_output
      with_items:
        - /home/ldb/servicebroker/bcli create repository db1
        - /home/ldb/servicebroker/bcli show repository all
        - /home/ldb/servicebroker/bcli upload zettabase-4.1.0-rc6.x86_64.iso
    - name: Print result
      debug:
        var: command_output.stdout_lines
But the run fails with this error:
fatal: [192.168.14.96]: FAILED! => {"changed": false, "msg": "Could not find or access '/home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
I found the db file owner is the root user, not the ldb user:
-rwxr-xr-x 1 root root 2162550784 Apr 19 19:36 base-4.1.0-rc6.x86_64.iso
-rwxr-xr-x 1 root root 3918489072 Apr 19 19:47 base-4.1.0-rc6.x86_64.tar.gz
-rwxr-xr-x 1 root root 31523406 Apr 19 20:03 servicebroker-4.1.0-rc6.x86_64.tar.gz
remote_user: ldb doesn't take effect. Please help check! Thanks!
I think you just need to look at the error more closely:
"Could not find or access '/home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz' *on the Ansible Controller*.
You are downloading the file remotely, but trying to access it locally on the controller. In unarchive where you say:
src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
Try:
  src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
  remote_src: yes
Note the documentation for unarchive here:
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/unarchive_module.html
Also note: the error is saying the ldb user doesn't have a shell assigned, so Ansible can't log in as it. Assuming it is a service account used to run a service, you probably don't want to make it an interactive user just for Ansible's sake.
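Putting the answer together, the unarchive task from the question would become (a sketch only; the task name and paths are copied from the playbook above):

```yaml
- name: unzip tar file
  unarchive:
    src: /home/ldb/servicebroker/servicebroker-4.1.0-rc6.x86_64.tar.gz
    dest: /home/ldb/servicebroker/
    remote_src: yes  # the archive was downloaded to the managed host, not the controller
```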

Creating an idempotent playbook that uses root then disables root server access

I'm provisioning a server and hardening it by disabling root access after creating an account with escalated privileges. The tasks before the account creation require root, so once root has been disabled the playbook is no longer idempotent. I've discovered one way to resolve this is to use wait_for_connection with block/rescue...
- name: Secure the server
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Block required to leverage rescue as the only way I can see of avoiding an already disabled root stopping the playbook
      block:
        - wait_for_connection:
            sleep: 2 # avoid too many requests within the timeout
            timeout: 5
        - name: Create the service account first so that the play is idempotent; we can't rely on root being enabled as it is disabled later
          user:
            name: swirb
            password: "{{ password }}"
            shell: /bin/bash
            ssh_key_file: "{{ ssh_key }}"
            groups: sudo
        - name: Add authorized keys for service account
          authorized_key:
            user: swirb
            key: '{{ item }}'
          with_file:
            - "{{ ssh_key }}"
        - name: Disallow password authentication
          lineinfile:
            dest: /etc/ssh/sshd_config
            regexp: "^[\#]PasswordAuthentication"
            line: "PasswordAuthentication no"
            state: present
          notify: Restart ssh
        - name: Disable root login
          replace:
            path: /etc/ssh/sshd_config
            regexp: 'PermitRootLogin yes'
            replace: 'PermitRootLogin no'
            backup: yes
          become: yes
          notify: Restart ssh
      rescue:
        - debug:
            msg: "{{ error }}"
  handlers:
    - name: Restart ssh
      service: name=ssh state=restarted
This is fine until I install fail2ban, as the wait_for_connection causes too many connections to the server, which jails the IP. So I created a task to add the IP address of the Ansible Controller to jail.conf, like so...
- name: Install and configure fail2ban
  hosts: "{{ hostvars.localhost.ipv4 }}"
  gather_facts: no
  tasks:
    - name: Install fail2ban
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - fail2ban
      become: yes
    - name: Add the IP address to the whitelist otherwise wait_for_connection triggers jailing
      lineinfile:
        dest: /etc/fail2ban/jail.conf
        regexp: "^(ignoreip = (?!.*{{ hostvars.localhost.ipv4 }}).*)"
        line: '\1 <IPv4>'
        state: present
        backrefs: yes
      notify: Restart fail2ban
      become: yes
  handlers:
    - name: Restart fail2ban
      service: name=fail2ban state=restarted
      become: yes
This works, but I have to hard-wire the Ansible Controller IPv4. There doesn't seem to be a standard way of obtaining the IP address of the Ansible Controller.
I'm also not that keen on adding the controller to every server white list.
Is there a cleaner way of creating an idempotent provisioning playbook?
Otherwise, how do I get the Ansible Controller IP address?
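One possible approach (a sketch only, not verified against this setup): gather facts on localhost in a first play, then reference the controller's default IPv4 through hostvars in later plays. This assumes the controller's primary interface address is the one the servers actually see, which won't hold behind NAT:

```yaml
- name: Gather controller facts
  hosts: localhost
  gather_facts: yes
  tasks: []

- name: Whitelist the controller in fail2ban
  hosts: all
  become: yes
  tasks:
    - lineinfile:
        dest: /etc/fail2ban/jail.conf
        regexp: '^(ignoreip = (?!.*{{ hostvars.localhost.ansible_default_ipv4.address }}).*)'
        line: '\1 {{ hostvars.localhost.ansible_default_ipv4.address }}'
        backrefs: yes
      notify: Restart fail2ban
  handlers:
    - name: Restart fail2ban
      service: name=fail2ban state=restarted
```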

Ansible: Failed to set permissions on the temporary files

I am using ansible to replace the ssh keys for a user on multiple RHEL6 & RHEL7 servers. The task I am running is:
- name: private key
  copy:
    src: /Users/me/Documents/keys/id_rsa
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
Two of the hosts that I'm trying to update are giving the following error:
fatal: [host1]: FAILED! => {"failed": true, "msg": "Failed to set
permissions on the temporary files Ansible needs to create when
becoming an unprivileged user (rc: 1, err: chown: changing ownership
of '/tmp/ansible-tmp-19/': Operation not permitted\nchown: changing
ownership of '/tmp/ansible-tmp-19/stat.py': Operation not
permitted\n). For information on working around this, see
https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
The thing is that these two that are getting the errors are clones of some that are updating just fine. I've compared the sudoers and sshd settings, as well as permissions and mount options on the /tmp directory. They are all the same between the problem hosts and the working ones. Any ideas on what I could check next?
I am running ansible 2.3.1.0 on Mac OS Sierra, if that helps.
Update:
@techraf
I have no idea why this worked on all hosts except for two. Here is the original playbook:
- name: ssh_keys
  hosts: my_hosts
  remote_user: my_user
  tasks:
    - include: ./roles/common/tasks/keys.yml
      become: yes
      become_method: sudo
and original keys.yml:
- name: public key
  copy:
    src: /Users/me/Documents/keys/id_rsab
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 060
    backup: yes
I changed the playbook to:
- name: ssh_keys
  hosts: my_hosts
  remote_user: my_user
  tasks:
    - include: ./roles/common/tasks/keys.yml
      become: yes
      become_method: sudo
      become_user: root
And keys.yml to:
- name: public key
  copy:
    src: /Users/me/Documents/keys/id_rsab
    dest: /home/unpriv/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
And it worked across all hosts.
Try installing ACL on the remote host, then run the Ansible script again:
sudo apt-get install acl
You could try something like this:
- name: private key
  become: true
  become_user: root
  copy:
    src: /Users/me/Documents/keys/id_rsa
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
Notice the:
become: true
become_user: root
Check the "become" docs for more info
While installing the acl package works, there is an alternative. Add the line below to the defaults section of your ansible.cfg:
allow_world_readable_tmpfiles = True
Or better, just add it to the task that needs it with:
vars:
  allow_world_readable_tmpfiles: true
A similar question with more details is Becoming non root user in ansible fails
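As a sketch, the task-level variant could look like this (the task name and paths are taken from the question above; the become settings are an assumption for the unprivileged-user case):

```yaml
- name: private key
  copy:
    src: /Users/me/Documents/keys/id_rsa
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
  become: yes
  become_user: unpriv
  vars:
    allow_world_readable_tmpfiles: true  # scoped to this task only, not the whole config
```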
I'm using ad-hoc commands, and when I ran into this problem, adding -b --become-user ANSIBLE_USER to my command fixed it.
example:
ansible all -m file -a "path=/etc/s.text state=touch" -b --become-user ansadmin
Of course, before this, I had given sudo access to the user.
If you give sudo access to your user, you can write it like this:
ansible all -m file -a "path=/var/s.text state=touch" -b --become-user root

Ansible authorized_key can't find key file

I am starting to use Ansible to automate the creation of users. The following code creates the user and the /home/test_user_003/.ssh/id_rsa.pub file.
But the authorized_key step gives the error "could not find file in lookup". It's there, I can see it.
---
- hosts: test
  become: true
  tasks:
    - name: create user
      user:
        name: test_user_003
        generate_ssh_key: yes
        group: sudo
        ssh_key_passphrase: xyz
    - name: Set authorized key
      authorized_key:
        user: test_user_003
        state: present
        key: "{{ lookup('file', '/home/test_user_003/.ssh/id_rsa.pub') }}"
(I would be interested to know why "key" uses lookup, but that's for education only.)
You create the user on the remote host but try to look up the generated key on the local host (all lookups in Ansible are executed locally).
You may want to capture (register) the result of the user task and use its fields:
- name: create user
  user:
    name: test_user_003
    generate_ssh_key: yes
    group: sudo
    ssh_key_passphrase: xyz
  register: new_user
- name: Set authorized key
  authorized_key:
    user: test_user_003
    state: present
    key: "{{ new_user.ssh_public_key }}"

Ansible delegate_to: how to set the user that is used to connect to the target?

I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for password of the wrong user:
local-mac-user@my-staging-host's password:
So instead of using the ansible_user defined in the inventory or the remote_user defined in the task to connect to the target (the hosts specified in the play), it uses the user that we connected to the delegate-to box as.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, doesn't work in 2.1.x
The remote_user setting is used at the playbook level to run a particular play as a given user.
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If you only have a certain task that needs to run as a different user, you can use the become and become_user settings.
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a user in a play, you can group them with block.
example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
This is the one which works for me. Note that it is for Windows; Linux does not require become_method: runas and basically does not have it:
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
Try setting become: yes and become_user: stager in your YAML file... that should fix it.
https://docs.ansible.com/ansible/2.5/user_guide/become.html
