Use of privilege escalation in a secure environment with become/ansible - ansible

I want to perform administrative tasks with ansible in a secure environment:
On the server:
root is not activated
we connect through ssh to a non-sudoer account (public/private key; I usually use ssh-agent so I don't have to type the passphrase every time)
we change to a user which belongs to the sudo group
then we perform the administrative tasks
Here is the command I execute:
ansible-playbook install_update.yaml -K
The playbook:
---
- hosts: server
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_user: admin_account
      become_method: su
      apt:
        name: "*"
        state: latest
The hosts file:
[server]
192.168.1.50 ansible_user=ssh_account
But this doesn't allow me to do the tasks: for this particular playbook, it raises this error:
fatal: [192.168.1.50]: FAILED! => {"changed": false, "msg": "'/usr/bin/apt-get upgrade --with-new-pkgs ' failed: E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?\n", "rc": 100, "stdout": "", "stdout_lines": []}
which suggests that there is a privilege issue...
I would be really glad if someone had an idea!
Best regards
PS: I have added a NOPASSWD entry to the sudoers file for this admin account, and if I run this playbook it works:
---
- hosts: pi
  tasks:
    - name: install
      apt:
        name: python-apt
        state: latest
    - name: update
      become: yes
      become_method: su
      become_user: rasp_admin
      shell: bash -c "sudo apt update"
I guess that, when I change user from ssh_account via the su command, I would like to specify that my commands have to be run with sudo as the admin_account, but I failed to find the right way to do it... any ideas?
PS: a workaround is to download a shell file and execute it with ansible, but I don't find that satisfying... any other idea?
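For reference, here is a minimal sketch of the same shell-based workaround applied to the full upgrade task. It assumes the admin account keeps its NOPASSWD sudo entry; since a single Ansible task can only use one become method, su and sudo cannot be chained within the task itself, so sudo is invoked explicitly in the shell command:
- name: update all packages (shell workaround)
  become: yes
  become_method: su
  become_user: admin_account
  shell: sudo apt-get upgrade -y --with-new-pkgs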

Related

Use Ansible playbook to enable and disable root login

I am new to Ansible and I'm trying to write my first Ansible playbook to enable root login via ssh on two remote Ubuntu servers.
By default, ssh to the two remote Ubuntu servers as root is disabled. In order to enable root login via ssh, I normally do this:
#ssh to server01 as an admin user
ssh admin@server01
#set PermitRootLogin yes
sudo vim /etc/ssh/sshd_config
# Restart the SSH server
service sshd restart
Now I'd like to do this via Ansible playbook.
This is my playbook
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      service:
        name: sshd
        state: restarted
I run the playbook as the admin user that was created on these two remote servers:
ansible-playbook enable-root-login.yml -u admin --ask-pass
Unfortunately, the playbook failed due to permission denied.
fatal: [server01]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Could not make backup of /etc/ssh/sshd_config to /etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~: [Errno 13] Permission denied: '/etc/ssh/sshd_config.2569989.2021-07-16@06:33:33~'"}
Can anyone please advise what is wrong with my playbook?
Thanks
When you edit the sshd_config file by hand you use sudo, so you need to tell the task that it must be executed as another user. You have to set the keyword become: yes; by default the become_user will be root and the become_method will be sudo, and you can also specify the become password.
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Enable Root Login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: "PermitRootLogin yes"
        state: present
        backup: yes
      become: yes
      notify:
        - restart ssh
  handlers:
    - name: restart ssh
      become: yes   # restarting sshd also requires root
      systemd:      # use the systemd (or service) module to restart the unit
        name: sshd
        state: restarted
Documentation:
https://docs.ansible.com/ansible/latest/user_guide/become.html#using-become
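If you prefer not to store the become password anywhere, you can have Ansible prompt for it at run time with --ask-become-pass (or -K), for example:
ansible-playbook enable-root-login.yml -u admin --ask-pass --ask-become-pass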

Ansible Failed to set permissions on the temporary files

I'd like to make a playbook that shows me the user currently in use.
This is my ansible.cfg:
[defaults]
inventory=inventory
remote_user=adminek
[privilege_escalation]
become=true
[ssh_connection]
allow_world_readable_tmpfiles = True
ssh_args = -o ControlMaster=no -o ControlPath=none -o ControlPersist=no
pipelining = false
And this is my playbook:
---
- name: show current users
  hosts: server_a
  tasks:
    - name: test user - root
      shell: "whoami"
      register: myvar_root
    - name: test user - user2
      become: true
      become_user: user2
      shell: "whoami"
      register: myvar_user2
    - name: print myvar root
      debug:
        var: myvar_root.stdout_lines
    - name: print myvar user2
      debug:
        var: myvar_user2.stdout
taks "test user - root" work fine and give me output
ok: [172.22.0.134] => {
"myvar_root.stdout_lines": [
"root"
]
}
taks "test user - user2" give me output
fatal: [172.22.0.134]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chown: changing ownership `/var/tmp/ansible-tmp-1621340458.2-11599-141854654478770/': Operation not permitted\nchown: changing ownership `/var/tmp/ansible-tmp-1621340458.2-11599-141854654478770/AnsiballZ_command.py': Operation not permitted\n}). For information on working around this, see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user"}
Explanation:
adminek - sudoer user
user2 - non-sudoer user
OS - Scientific Linux release 6.9
Additionally, I had a similar problem on Ubuntu 18.04, but it started working once I installed acl.
Does someone know what is wrong?
Thanks for the help!
One of the following options should fix your issue:
Ensure sudo is installed on the remote host
Ensure acl is installed on the remote host
Uncomment the following lines in /etc/ansible/ansible.cfg:
allow_world_readable_tmpfiles = True
pipelining = True
@F1ko thanks for the reply.
I did what you suggested and installed acl on my host, but it still didn't work.
I then added this via visudo:
Defaults:user2 !requiretty
Defaults:adminek !requiretty
I don't know whether it's OK and secure, but it works.
For me, it worked after installing the acl package on the host:
- name: Install required packages
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - acl
    - python3-pip
In my case I used CentOS 7; if you use Ubuntu, change yum to apt.
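For example, a rough Debian/Ubuntu equivalent of that task (assuming the package names are the same there) would be:
- name: Install required packages
  apt:
    name:
      - acl
      - python3-pip
    state: present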

Ansible apt module to execute with sudo

I'm trying to get ansible to execute sudo apt-get update. I already have sudoers set up to run it without a password, and it works if I log in as the user ansible is using and execute sudo apt-get update. If I use the following ansible playbook:
---
- name: updates
  hosts: pis
  tasks:
    - name: Update APT
      become: true
      apt:
        update_cache: yes
It will give me the following error
fatal: [127.0.0.1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
but the following playbook works fine
---
- name: updates
  hosts: pis
  tasks:
    - name: Update APT
      command: sudo apt-get update
When using become: yes, the connection will try to spawn a shell as root (since you haven't specified become_user).
It seems you only have access to run sudo apt-get update without a password, i.e.:
/etc/sudoers may contain something similar to this:
ansible ALL=(ALL) NOPASSWD: /bin/apt-get update
To replicate the issue, log in as the user ansible and issue the command sudo -i.
If you're using a dedicated user (ansible?) for all ansible connectivity, you might want to give that user access to all commands via sudo without a password (read here). This is, however, not the best practice; you are always better off giving access to specific commands, for security reasons.
Add the following line to the /etc/sudoers file using visudo:
ansible ALL=(ALL) NOPASSWD:ALL
If you opt for this, I would also advise using key-pair authentication for this privileged account rather than password authentication (more info).
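As a hedged illustration of the more restrictive approach, a sudoers entry limited to the exact commands needed might look like this (binary paths assumed for Debian/Ubuntu):
ansible ALL=(ALL) NOPASSWD: /usr/bin/apt-get update, /usr/bin/apt-get upgrade
Keep in mind that become runs the whole Ansible module (a Python process) through sudo, so a command-specific entry like this works for command/shell tasks that call sudo explicitly, but not for the apt module under become.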

ansible playbook: Cannot launch a service as root

I've been banging my head on this one for most of the day; I've tried everything I could without success, even with the help of my sysadmin. (Note that I am not at all an ansible expert; I only discovered it today.)
Context: I am trying to implement continuous integration of a Java service via GitLab. On a push, a pipeline will run the tests, package the jar, then run an ansible playbook to stop the existing service, replace the jar, and launch the service again. We have that for production in Google Cloud, and it works fine. I'm trying to add an extra step before that, to do the same on localhost.
And I just can't understand why ansible fails to do a "sudo service XXXX stop|start". All I get is:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=nbjplyhtvodoeqooejtlnhxhqubibbjy] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is the gitlab pipeline stage that I call:
indexer-integration:
  stage: deploy integration
  script:
    - ansible-playbook -i ~/git/ansible/inventory deploy_integration.yml --vault-password-file=/home/gitlab-runner/vault.txt
  when: on_success
vault.txt contains the vault encryption password. Here is deploy_integration.yml:
---
- name: deploy integration saleindexer
  hosts: localhost
  gather_facts: no
  user: test-ccc  # this is the user that I created as a test
  connection: local
  vars_files:
    - /home/gitlab-runner/secret.txt  # holds the sudo password
  tasks:
    - name: Stop indexer
      service: name=indexer state=stopped
      become: true
      become_user: root
    - name: Clean JAR
      become: true
      become_user: root
      file:
        state: absent
        path: '/PATH/indexer-latest.jar'
    - name: Copy JAR
      become: true
      become_user: root
      copy:
        src: 'target/indexer-latest.jar'
        dest: '/PATH/indexer-latest.jar'
    - name: Start indexer
      service: name=indexer state=started
      become: true
      become_user: root
The user 'test-ccc' is another user that I created (part of the root group and in the sudoers file) to make sure it was not an issue related to the gitlab-runner user (and because apparently no one here can remember the sudo password of that user xD).
I've tried a lot of things, including:
shell: echo 'password' | sudo -S service indexer stop
That works on the command line, but when executed by ansible, all I get is a prompt asking me to enter the sudo password.
Thanks
Edit per comment request: the secret.txt has:
ansible_become_pass: password
When using that user on the command line (su user / sudo service start ...) and typing that password when prompted, it works fine. The problem, I believe, is that either ansible always prompts for a password, or the password is not properly passed to the task.
The sshd_config has a line 'PermitRootLogin yes'
OK, thanks to a response (now deleted) from techraf, I noticed that the line
user: test-ccc
is actually useless; everything was still run by the 'gitlab-runner' user. So I:
put all my actions in a script, postbuild.sh
added gitlab-runner to the sudoers file and gave it NOPASSWD for that script:
gitlab-runner ALL=(ALL) NOPASSWD:/home/PATH/postbuild.sh
removed everything about passing the password and the secret from the ansible task, and used instead:
shell: sudo -S /home/PATH/postbuild.sh
So that works: the script is executed and the service is stopped/started. I'll mark this as answered, even though using service: name=indexer state=started and giving NOPASSWD:ALL for the user still caused an error (the one in my comment on the question). If anyone can shed light on that in the comments...
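For comparison, here is a minimal sketch of the become-based variant that should work once the connecting user (gitlab-runner here) has a passwordless sudoers entry, or once ansible_become_pass is supplied correctly; this is an assumption about the setup, not what the author ended up using:
- name: Stop indexer
  service:
    name: indexer
    state: stopped
  become: true   # defaults to become_user: root and become_method: sudo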

How can a user with SSH keys authentication have sudo powers in Ansible? [duplicate]

This question already has answers here:
Missing sudo password in Ansible
(14 answers)
Ansible: sudo without password
(3 answers)
Closed 4 years ago.
I create a VM in the Azure cloud with the following ansible script:
---
- name: azure playbook
  hosts: localhost
  vars_files: ['vars.yaml']
  tasks:
    - name: Create VM with defaults
      azure_rm_virtualmachine:
        resource_group: "{{account_prefix}}_rg"
        vm_size: Standard_D1
        name: "{{account_prefix}}-vm1"
        storage_account_name: "{{account_prefix}}store1"
        network_interface_names: "{{account_prefix}}vm1eth0"
        ssh_password_enabled: false
        admin_username: owen
        ssh_public_keys:
          - { path: /home/owen/.ssh/authorized_keys,
              key_data: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH0q4pmdkJcc/JPVJui5uWMV12GsJAsDCosfUSSFZfTIx92bb9FC3hx1zU7tD1+Zw3aQW13m6ZS2T ... YnvieSbdD3v }
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.2'
          version: latest
but when running a further script to add another user:
---
- name: create user
  hosts: my-vm1.westeurope.cloudapp.azure.com
  # vars_files: ['vars.yaml']
  remote_user: owen
  tasks:
    - name: Create User
      user:
        name: andrea
        password: $6$rounds=656000$1AspdTb0lfOSc5yM$bAkPgHkuHwap/j6f0P88WxOdjxq3MCRO7/qgufYB.s/4t4k99wwtu/.../
        group: users
        shell: /bin/bash
      become: true
I get "sudo: a password is required" error:
PLAY [create user] *************************************************************
TASK [setup] *******************************************************************
fatal: [my-vm1.westeurope.cloudapp.azure.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @8-add-admin-user-to-vm-with-userpswd-already.retry
My inventory looks like this:
my-vm1.westeurope.cloudapp.azure.com ansible_ssh_private_key_file=/home/myuser/.ssh/id_rsa ansible_user=owen ansible_become=true
So how can the user have sudo privileges and so use ansible 'become' and the like?
Note that the same result happens when ansible_user and ansible_become are omitted from the inventory file.
EDIT: If I ssh onto the VM as owen (from the box with the ssh private key that created the VM), then I am able to run sudo visudo -f /etc/sudoers and access that file. So does owen have sudo privileges or not? I'm getting confused now! Am I misunderstanding the error from the ansible add-user script?
EDIT2: I think this question is invalid, as the user does have sudo privileges, added manually through the portal. I'm still not sure what's going on, but I don't think this question is coherent or that it really represents the actual problem I'm trying to solve.
You can either change the sudo config for the user owen with this command:
sudo visudo -f /etc/sudoers
and change the line with user owen to this:
owen ALL=(ALL) NOPASSWD:ALL
then sudo won't require Ansible to enter the password. Or you could instruct Ansible to ask you for the password with the parameter --ask-become-pass like this:
ansible-playbook site.yml --ask-become-pass
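A third option for unattended runs is to supply the become password as the ansible_become_pass variable rather than prompting for it, ideally kept encrypted with Ansible Vault. As a rough sketch (the file name and the vault_become_pass variable are assumptions, not part of the original setup):
# group_vars/all.yml, with vault_become_pass defined in a vault-encrypted file
ansible_become_pass: "{{ vault_become_pass }}"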
