I'd like to make a playbook that shows me the user currently in use.
This is my ansible.cfg:
[defaults]
inventory=inventory
remote_user=adminek
[privilege_escalation]
become=true
[ssh_connection]
allow_world_readable_tmpfiles = True
ssh_args = -o ControlMaster=no -o ControlPath=none -o ControlPersist=no
pipelining = false
and this is my playbook:
---
- name: show current users
  hosts: server_a
  tasks:
    - name: test user - root
      shell: "whoami"
      register: myvar_root

    - name: test user - user2
      become: true
      become_user: user2
      shell: "whoami"
      register: myvar_user2

    - name: print myvar root
      debug:
        var: myvar_root.stdout_lines

    - name: print myvar user2
      debug:
        var: myvar_user2.stdout
taks "test user - root" work fine and give me output
ok: [172.22.0.134] => {
    "myvar_root.stdout_lines": [
        "root"
    ]
}
taks "test user - user2" give me output
fatal: [172.22.0.134]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership `/var/tmp/ansible-tmp-1621340458.2-11599-141854654478770/': Operation permited\nchown: changing ownership `/var/tmp/ansible-tmp-1621340458.2-11599-141854654478770/AnsiballZ_command.py': Operation permited\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
Explanation:
adminek - sudoer user
user2 - non-sudoer user
OS - Scientific Linux release 6.9
Additionally, I had a similar problem on Ubuntu 18.04, but it started working once I installed acl.
Does anyone know what is wrong?
Thanks for the help!
One of the following options should fix your issue:
Ensure sudo is installed on the remote host.
Ensure acl is installed on the remote host (see the sketch after this list).
Uncomment the following lines in /etc/ansible/ansible.cfg:
allow_world_readable_tmpfiles = True
pipelining = True
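For example, an ad-hoc call like this should take care of acl before you re-run the playbook (a sketch; it assumes the server_a group from your inventory and yum as the package manager on Scientific Linux):

ansible server_a -b -m yum -a "name=acl state=present"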
@F1ko thanks for the reply.
I did what you suggested and installed acl on my host, but it still failed.
I then added this via visudo:
Defaults:user2 !requiretty
Defaults:adminek !requiretty
I don't know whether that is OK and secure, but it works.
For me, installing the acl package on the host worked:
- name: Install required packages
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - acl
    - python3-pip
In my case I used CentOS 7; if you use Ubuntu, change yum to apt.
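For reference, the Ubuntu/Debian equivalent might look like this (a sketch; it assumes the same package names exist in apt):

- name: Install required packages
  apt:
    name:
      - acl
      - python3-pip
    state: present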
Related
I want to perform administrative tasks with Ansible in a secure environment:
On the server:
root is not activated
we connect through SSH to a non-sudoer account (public/private key; I usually use ssh-agent so I don't have to type the passphrase every time)
we change to a user which belongs to the sudo group
then we perform the administrative tasks
Here is the command I execute:
ansible-playbook install_update.yaml -K
The playbook:
---
- hosts: server
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest

    - name: update
      become: yes
      become_user: admin_account
      become_method: su
      apt:
        name: "*"
        state: latest
The hosts file:
[server]
192.168.1.50 ansible_user=ssh_account
But this doesn't allow me to do the tasks: for this particular playbook, it raises this error:
fatal: [192.168.1.50]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-get upgrade --with-new-pkgs ' failed: E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n", "rc": 100, "stdout": "", "stdout_lines": []}
which suggests that there is a privilege issue...
I would be really glad if someone had an idea!
Best regards
PS: I have added the NOPASSWD option to the sudoers file for this admin account, and if I run this playbook it works:
---
- hosts: pi
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest

    - name: update
      become: yes
      become_method: su
      become_user: rasp_admin
      shell: bash -c "sudo apt update"
I guess that when I change user via the su command from ssh_account, I would like to specify that, as admin_account, my commands have to be run with sudo, but I failed to find the right way to do it... any ideas?
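Something like this sketch might express it, extending the shell-based trick from the playbook above (untested; it assumes the NOPASSWD sudoers entry mentioned above):

- name: upgrade all packages via su + sudo
  become: yes
  become_method: su
  become_user: admin_account
  shell: bash -c "sudo apt-get -y --with-new-pkgs upgrade"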
PS: a workaround is to download a shell file and execute it with Ansible, but I find that unsatisfying... any other ideas?
I have an Ansible playbook to update my Debian-based servers. For simplicity and security reasons, I don't want to use a vault for the passwords, and I also don't want to store them in a publicly accessible config file. So I ask for the password for every client with
become: yes
become_method: sudo
Now, when the playbook runs, it seems the first thing Ansible does is ask for the sudo password, but I don't know for which server (the passwords are different). Is there a way to get Ansible to print the current host name before it asks for the password?
The update playbook is similar to this:
---
- hosts:
    all
  gather_facts: no
  vars:
    verbose: false
    log_dir: "log/dist-upgrade/{{ inventory_hostname }}"
  pre_tasks:
    - block:
        - setup:
      rescue:
        - name: "Install required python-minimal package"
          raw: "apt-get update && apt-get install -y --force-yes python-apt python-minimal"
        - setup:
  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
      register: output

    - name: Check changes
      set_fact:
        updated: true
      when: not output.stdout | search("0 upgraded, 0 newly installed")

    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined

    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false

        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
(source: http://www.panticz.de/Debian-Ubuntu-mass-dist-upgrade-with-Ansible)
Your above become configuration does not make Ansible ask you for a become password: it just advises it to use become with the sudo method (which will work without any password if you have the correct keys configured, for example).
If you are asked for a become password, it's because (it's a guess, but I'm rather confident...) you used the --ask-become-pass option when running ansible-playbook.
In this case, you are prompted only once at the beginning of the playbook run, and this default become password will be used on all servers you connect to, unless you define another one in your inventory for a specific host/group.
If you have different become passwords depending on your machines, you don't really have another option: you need to declare those passwords in your inventory (and it is strongly advised to use ansible-vault encryption) or use some other mechanism to get them from an external application (HashiCorp Vault, dynamic inventory, CyberArk...).
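For instance, per-host become passwords can be declared in the inventory like this (a minimal sketch; the host names and the vaulted variable names are illustrative):

[servers]
web1.example.com ansible_become_pass='{{ vault_web1_become_pass }}'
web2.example.com ansible_become_pass='{{ vault_web2_become_pass }}'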
A recurring theme in my Ansible playbooks is that I often must execute a command with sudo privileges (sudo: yes) because I'd like to do it for a certain user. Ideally I'd much rather use sudo to switch to that user and execute the commands normally, because then I won't have to do my usual post-command cleanup, such as chowning directories. Here's a snippet from one of my playbooks:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes

- name: change perms
  file: dest={{ dst }} state=directory mode=0755 owner=some_user
  sudo: yes
Ideally I could run commands or sets of commands as a different user even if it requires sudo to su to that user.
With Ansible 1.9 or later
Ansible uses the become, become_user, and become_method directives to achieve privilege escalation. You can apply them to an entire play or playbook, set them in an included playbook, or set them for a particular task.
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  become: yes
  become_user: some_user
You can use become_method to specify how the privilege escalation is achieved, the default being sudo.
The directive is in effect for the scope of the block in which it is used (examples).
See Hosts and Users for some additional examples and Become (Privilege Escalation) for more detailed documentation.
In addition to the task-scoped become and become_user directives, Ansible 1.9 added some new variables and command line options to set these values for the duration of a play in the absence of explicit directives:
Command line options for the equivalent become/become_user directives.
Connection-specific variables which can be set per host or group. Both are sketched below.
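Roughly (a sketch; the playbook name, group, and user are illustrative):

ansible-playbook site.yml --become --become-user=some_user

[webservers]
web1 ansible_become=yes ansible_become_user=some_user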
As of Ansible 2.0.2.0, the older sudo/sudo_user syntax described below still works, but the deprecation notice states, "This feature will be removed in a future release."
Previous syntax, deprecated as of Ansible 1.9 and scheduled for removal:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes
  sudo_user: some_user
In Ansible 2.x, you can use a block for a group of tasks:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"

    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
In Ansible >1.4 you can actually specify a remote user at the task level, which should allow you to log in as that user and execute that command without resorting to sudo. If you can't log in as that user, then the sudo_user solution will work too.
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: yourname
See http://docs.ansible.com/playbooks_intro.html#hosts-and-users
A solution is to use the include statement with the remote_user var (described here: http://docs.ansible.com/playbooks_roles.html), but it has to be done at the play level instead of the task level, as sketched below.
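A rough sketch of that approach (the included file path is illustrative):

---
- hosts: webservers
  remote_user: yourname
  tasks:
    - include: tasks/deploy.yml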
You can specify become_method to override the default method set in ansible.cfg (if any); it can be set to one of sudo, su, pbrun, pfexec, doas, dzdo, or ksu.
- name: I am confused
  command: 'whoami'
  become: true
  become_method: su
  become_user: some_user
  register: myidentity

- name: my secret identity
  debug:
    msg: '{{ myidentity.stdout }}'
Should display
TASK [my-task : my secret identity] ************************************************************
ok: [my_ansible_server] => {
    "msg": "some_user"
}
I am using Ansible to replace the SSH keys for a user on multiple RHEL6 and RHEL7 servers. The task I am running is:
- name: private key
  copy:
    src: /Users/me/Documents/keys/id_rsa
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
Two of the hosts that I'm trying to update are giving the following error:
fatal: [host1]: FAILED! => {"failed": true, "msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership of `/tmp/ansible-tmp-19/': Operation not permitted\nchown: changing ownership of `/tmp/ansible-tmp-19/stat.py': Operation not permitted\n). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
The thing is that these two that are getting the errors are clones of some that are updating just fine. I've compared the sudoers and sshd settings, as well as permissions and mount options on the /tmp directory. They are all the same between the problem hosts and the working ones. Any ideas on what I could check next?
I am running ansible 2.3.1.0 on Mac OS Sierra, if that helps.
Update:
@techraf
I have no idea why this worked on all hosts except for two. Here is the original playbook:
- name: ssh_keys
  hosts: my_hosts
  remote_user: my_user
  tasks:
    - include: ./roles/common/tasks/keys.yml
      become: yes
      become_method: sudo
and original keys.yml:
- name: public key
  copy:
    src: /Users/me/Documents/keys/id_rsab
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
I changed the playbook to:
- name: ssh_keys
  hosts: my_hosts
  remote_user: my_user
  tasks:
    - include: ./roles/common/tasks/keys.yml
      become: yes
      become_method: sudo
      become_user: root
And keys.yml to:
- name: public key
  copy:
    src: /Users/me/Documents/keys/id_rsab
    dest: /home/unpriv/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
And it worked across all hosts.
Try installing ACL on the remote host, then run the Ansible script again:
sudo apt-get install acl
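Since the hosts in this question are RHEL6/RHEL7, the yum equivalent would presumably be:

sudo yum install acl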
You could try something like this:
- name: private key
  become: true
  become_user: root
  copy:
    src: /Users/me/Documents/keys/id_rsa
    dest: ~/.ssh/
    owner: unpriv
    group: unpriv
    mode: 0600
    backup: yes
Notice the:
become: true
become_user: root
Check the "become" docs for more info
While installing the acl package works, there is an alternative.
Add the line below to the defaults section of your ansible.cfg.
allow_world_readable_tmpfiles = True
Or better, just add it to the task that needs it with:
vars:
  allow_world_readable_tmpfiles: true
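For example, scoped to a single task (a sketch; the task itself is illustrative):

- name: run a command as an unprivileged user
  become: true
  become_user: unpriv
  command: whoami
  vars:
    allow_world_readable_tmpfiles: true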
A similar question with more details is Becoming non root user in ansible fails
I'm using ad-hoc commands, and when I ran into this problem, adding -b --become-user ANSIBLE_USER to my command fixed it.
example:
ansible all -m file -a "path=/etc/s.text state=touch" -b --become-user ansadmin
Of course, before this, I had given sudo access to the user.
If you give sudo access to your user, you can write it like this:
ansible all -m file -a "path=/var/s.text state=touch" -b --become-user root
I am trying to run an extremely simple playbook to test a new Ansible setup.
When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file:
[defaults]
host_key_checking=false
log_path=./logs/ansible.log
executable=/bin/bash
#callback_plugins=./lib/callback_plugins
######
[privilege_escalation]
become=True
become_method='sudo'
become_user='tstuser01'
become_ask_pass=False
[ssh_connection]
scp_if_ssh=True
I get the following error:
fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo'
FATAL: all hosts have already failed -- aborting
The playbook is also very simple:
# Checks the hosts provisioned by midrange
---
- name: Test su connecting as current user
  hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      #action: ping
      command: /usr/bin/whoami
I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
The issue is with the configuration; I also took it as an example and got the same problem. After playing with it awhile, I noticed that the following works:
1) deprecated sudo:
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
2) new become:
---
- hosts: all
  become: yes
  become_method: sudo
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
3) using ansible.cfg:
[privilege_escalation]
become = yes
become_method = sudo
and then in a playbook:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
Since you are "becoming" tstuser01 (not root, like me), please play with it a bit; the user name probably should not be quoted either:
become_user = tstuser01
At least this is the way I define remote_user in ansible.cfg, and it works... My issue is resolved; I hope yours is too.
I think you should use the sudo directive in the hosts section so that subsequent tasks can run with sudo privileges, unless you explicitly specify sudo: no in a task.
Here's your playbook, modified to use the sudo directive:
# Checks the hosts provisioned by midrange
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      command: /usr/bin/whoami