Using async in an Ansible task raises privilege errors - ansible

I keep getting puzzled by the following issue.
I am trying to launch a process (here just a silly java -version) using the async feature.
I run the ansible-playbook with my user, which has a remote account with sudo rights on the Docker host. The other account, with which I'd like to start the command, is toto.
So I wrote this:
- name: test escalation
  shell: id ; echo "shell says toto"
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true
- name: java escalation
  shell:
    cmd: "/data/tools/java/jdk8u232-b09/bin/java -version &"
  async: 10
  # Don't wait
  poll: 0
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true
If I run this, I get:
TASK [java escalation] ************************************************************************************************************
fatal: [main]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/': Operation not permitted\nchown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/AnsiballZ_command.py': Operation not permitted\nchown: changing ownership of '/var/tmp/ansible-tmp-1587484730.23-27264-164045960304097/async_wrapper.py': Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
Did anybody have the same issue?
ansible --version
ansible 2.9.7
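The documentation linked in the error message describes workarounds for becoming an unprivileged user; the usual one is to install the acl package on the target so Ansible can setfacl its temporary files instead of chown-ing them. A minimal sketch (the task placement and the use of the generic package module are assumptions):

```yaml
# Run once, as a privileged user, before any become_user: toto tasks.
# With acl installed, Ansible grants the unprivileged user access to its
# temp files via setfacl instead of chown, which avoids the rc: 1 error.
- name: install acl so unprivileged become can read Ansible temp files
  package:
    name: acl
    state: present
  become: true
```

Alternatively, setting allow_world_readable_tmpfiles = True in ansible.cfg also works, at the cost of making the module files readable by other users on the host.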

If I do not use the async feature (I can use any value for poll):
- name: java escalation
  shell:
    cmd: "/data/tools/java/jdk8u232-b09/bin/java -version &"
  # async: 10
  # Don't wait
  poll: 0
  become: true
  become_user: "toto"
  tags:
    - escalation
  vars:
    ansible_ssh_pipelining: true
It works fine:
TASK [java escalation] ************************************************************************************************************
changed: [main] => {"changed": true, "cmd": "/data/tools/java/jdk8u232-b09/bin/java -version &", "delta": "0:00:00.034427", "end": "2020-04-21 15:59:46.402081", "rc": 0, "start": "2020-04-21 15:59:46.367654", "stderr": "openjdk version \"1.8.0_232\"\nOpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09)\nOpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)", "stderr_lines": ["openjdk version \"1.8.0_232\"", "OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09)", "OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode)"], "stdout": "", "stdout_lines": []}
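For what it's worth, once the permission problem is solved, the usual pattern for a fire-and-forget async job that you later want to check on uses async_status (the register names here are illustrative):

```yaml
- name: java escalation (fire and forget)
  shell: "/data/tools/java/jdk8u232-b09/bin/java -version"
  async: 10
  poll: 0
  register: java_job

- name: check on the async job later
  async_status:
    jid: "{{ java_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 5
  delay: 2
```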

Related

Use of privilege escalation in a secure environment with become/ansible

I want to perform administrative tasks with Ansible in a secure environment.
On the server:
root is not activated
we connect through ssh to a non-sudoer account (public/private key; I usually use ssh-agent so I don't have to type the passphrase every time)
we change to a user which belongs to the sudo group
then we perform the administrative tasks
Here is the command I execute:
ansible-playbook install_update.yaml -K
the playbook :
---
- hosts: server
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_user: admin_account
      become_method: su
      apt:
        name: "*"
        state: latest
The hosts file :
[server]
192.168.1.50 ansible_user=ssh_account
But this doesn't let me do the tasks: for this particular playbook, it raises this error:
fatal: [192.168.1.50]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-get upgrade --with-new-pkgs ' failed: E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n", "rc": 100, "stdout": "", "stdout_lines": []}
which suggests there is a privilege issue...
I would be really glad if someone had an idea!
Best regards
PS: I have added a NOPASSWD entry to the sudoers file for this admin account, and if I run this playbook it works:
---
- hosts: pi
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_method: su
      become_user: rasp_admin
      shell: bash -c "sudo apt update"
I guess that when I change user via the su command from ssh_account, I would like to specify that, with admin_account, my commands have to be run with sudo, but I failed to find the right way to do it... any ideas?
PS: a workaround is to download a shell file and execute it with Ansible, but I don't find that satisfying... any other idea?
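Since admin_account already has a NOPASSWD sudo entry, the same two-hop pattern from the working playbook (ssh_account → su to the admin account → sudo) can be applied to the upgrade itself. A sketch, with the caveats that it loses the apt module's idempotence and that the -n flag (fail rather than prompt if a password were required) is an assumption:

```yaml
- name: upgrade all packages through the admin account's passwordless sudo
  become: yes
  become_method: su
  become_user: admin_account
  shell: sudo -n apt-get upgrade --with-new-pkgs -y
```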

Ansible error executing pm2 startup command

When executing the Ansible playbook with the command ansible-playbook 2_installJsReport.yml (CentOS 7.6, Ansible 2.7.10), I get an error saying:
TASK [make jsreport start at system restart] ***************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["pm2", "startup"], "delta": "0:00:00.601130", "end": "2019-04-24 12:59:33.091819", "msg": "non-zero return code", "rc": 1, "start": "2019-04-24 12:59:32.490689", "stderr": "", "stderr_lines": [], "stdout": "[PM2] Init System found: systemd\n[PM2] To setup the Startup Script, copy/paste the following command:\nsudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username", "stdout_lines": ["[PM2] Init System found: systemd", "[PM2] To setup the Startup Script, copy/paste the following command:", "sudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username"]}
Ansible script
---
- hosts: localhost
  tasks:
    - name: make jsreport start at system restart
      command: pm2 startup
The "error" message contains instructions you are supposed to follow to configure the startup:
[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/home/username/.nvm/versions/node/v8.11.3/bin /home/username/.nvm/versions/node/v8.11.3/lib/node_modules/pm2/bin/pm2 startup systemd -u username --hp /home/username
If you follow those instructions, it suggests that you should replace your task with something like:
---
- hosts: localhost
  tasks:
    - name: make jsreport start at system restart
      become: true
      command: pm2 startup systemd -u username --hp /home/username
      environment:
        PATH: "{{ ansible_env.PATH }}"
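Since pm2 startup writes a systemd unit file, the task can also be made idempotent with a creates guard, so re-running the playbook skips it once the unit exists. The unit path below follows pm2's usual pm2-<user>.service naming and is an assumption:

```yaml
- name: make jsreport start at system restart
  become: true
  command: pm2 startup systemd -u username --hp /home/username
  args:
    creates: /etc/systemd/system/pm2-username.service
  environment:
    PATH: "{{ ansible_env.PATH }}"
```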

Run playbook against Openstack with Ansible Tower

I am trying to run a simple playbook against OpenStack in the admin tenant using Ansible Tower, both running on localhost. Here is the script:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have configured the credentials, template, and inventory (screenshots omitted).
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
13:35:48
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what this can be? It looks like a credential problem.
Untick Enable Privilege Escalation - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not to the user running the Ansible task.

Ansible way to stop a service only if it needs to be upgraded

In an Ansible playbook I want to stop MariaDB if an upgrade is needed (a restart from the RPM package does not always work in my situation). I'm quite new to Ansible.
I came up with this:
- name: "Check if MariaDB needs to be upgraded"
  shell: "yum check-update MariaDB-server | grep MariaDB | wc -l"
  register: needs_update
- name: "Stop mysql service"
  service:
    name: mysql
    state: stopped
  when: needs_update.stdout == "1"
Is there a better way to do this than by executing a shell command? When running it I get warnings:
TASK [mariadb_galera : Check if MariaDB needs to be upgraded] ******************
changed: [139.162.220.42] => {"changed": true, "cmd": "yum check-update MariaDB-server|grep MariaDB|wc -l", "delta": "0:00:00.540862", "end": "2017-03-01 13:03:34.415272", "rc": 0, "start": "2017-03-01 13:03:33.874410", "stderr": "", "stdout": "0", "stdout_lines": ["0"], "warnings": ["Consider using yum module rather than running yum"]}
[WARNING]: Consider using yum module rather than running yum
Thank you!
You can hide the warning with:
- name: "Check if MariaDB needs to be upgraded"
  shell: "yum check-update MariaDB-server | grep MariaDB | wc -l"
  args:
    warn: false
  register: needs_update
Or you can trick Ansible into executing the yum task in check mode:
- name: "Check if MariaDB needs to be upgraded (CHECK MODE!)"
  yum:
    name: MariaDB-server
    state: latest
  check_mode: yes
  register: needs_update_check
- name: "Stop mysql service"
  service:
    name: mysql
    state: stopped
  when: needs_update_check is changed
Please, test this code before use.
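Another option that avoids both the warning and the grep pipeline is to rely on yum's documented exit codes: yum check-update returns 100 when updates are available and 0 when there are none. A sketch assuming those exit codes:

```yaml
- name: "Check if MariaDB needs to be upgraded"
  command: yum check-update MariaDB-server
  register: needs_update
  changed_when: false
  # rc 100 = updates available, rc 0 = up to date, anything else is an error
  failed_when: needs_update.rc not in [0, 100]

- name: "Stop mysql service"
  service:
    name: mysql
    state: stopped
  when: needs_update.rc == 100
```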
The best way to handle this is to use a handler, e.g. something along the lines of:
tasks:
  - name: Update db
    yum: name=MariaDB-server state=latest
    notify:
      - stop db
handlers:
  - name: stop db
    service: name=mysql state=stopped
You can specify multiple handlers if you need to do multiple things, but if you just want to restart the db, use restarted instead of stopped.
http://docs.ansible.com/ansible/playbooks_best_practices.html

Ansible Service Restart Failed

I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
